You know how they say any sufficiently advanced technology is indistinguishable from magic?
Another way of putting it might be: our individual experience of wonder is correlated with our understanding (or lack thereof) of how a system works. If you ask an engineer how their phone works you’re likely to get a long-winded, pedantic answer, followed by an insistence that it’s all quite simple, really. If you ask my mom, she’ll just say “magic” — because to her, it might as well be.
All systems start out simple, but as complexity compounds across disciplines and generations of development, scale begins to preclude holistic comprehension. We reach a point where no one person can intuitively grasp the whole thing – and all of a sudden, what was once math starts to look a little more like magic. (In truth, there is probably no single person on earth who can explain to you how your phone works from end to end, from atoms to bits to photos of your cat.)
One of the most common arguments you hear about AI is that it is, on some essential level, derivative; that all we’re really doing is building statistical models that predict the next word in the sentence or the next image in the sequence. These systems, the argument goes, are fundamentally just “stochastic parrots” incapable of producing anything we might consider original, and we’ll never achieve (nor need we fear) an intelligence explosion – because the AI can only ever reflect back to us a blurry average of its inputs.
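To make the "predict the next word" framing concrete, here is a toy sketch: a bigram counter that predicts the word most often seen after the current one. (This is a deliberately simplistic stand-in of my own devising; real language models are neural networks trained on vast corpora, but the underlying objective, next-token prediction, is the same.)

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny corpus, which words follow it.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Scale the corpus up to most of the written internet and swap the frequency table for a trillion-parameter network, and you have the "stochastic parrot" in question.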
Let’s set aside the squishier question of whether systems made of silicon can experience consciousness and instead just focus on the things we can already observe and measure: their outputs. Whether they’re conversing with journalists, composing music, generating videos, or helping start businesses, AI systems are certainly starting to feel a bit like magic – or witchcraft, depending on your perspective.
“But wait,” you’re thinking. “Aren’t these bots just ripping off writers and musicians and artists, albeit in a novel way? If their only input is human creativity, how can they possibly produce anything truly creative themselves?”
To answer that question, let’s first take a step back. What does it mean to be creative? Where does inspiration come from and what does it mean to have an “original” thought?
Copycat
“A good composer does not imitate; he steals.” – Igor Stravinsky
“Immature artists copy, great artists steal.” – William Faulkner
“Good artists copy, great artists steal.” – Steve Jobs
Above: artists, stealing.
When we dislike art we tend to denigrate it by labeling it “derivative” — because its influences are too obvious, its mimicry too crude. And when we like art we call it “inspired” — we talk about its influences with reverence for an artist who learned from the greats and mastered the craft. (There’s a certain pleasure, even, in parsing those influences, because doing so brings us closer to the artist — we feel connected by our shared vocabulary, and we hope, on some level, that we, too, are creative and special for seeing the distinct parts that make up the whole.)
But this is merely a difference in degree, not kind; what separates "derivative" from "inspired" is mostly how subtly the borrowing is done.
So where does creativity come from? The word “inspired” has a certain divine connotation, and perhaps you believe that people are merely conduits channeling some awesome external force from which everything genuinely novel springs. (A somewhat bleak view of humanity, if you ask me, in that it reduces us all to empty vessels.)
But in the absence of God – in the absence of divine inspiration – what are we left with but the material world? The first artists drew inspiration from nature. They made cave paintings of the world they saw around them, then of each other, and finally about other cave paintings. (Thus was born the first metacommentary and, possibly, the world’s first critic.)
Billions of humans later, we’re still extending that collective intergenerational synthesis, century after century, complexity compounding on complexity. The intricacies of this millennia-spanning human project have long since surpassed the capacity of any one person to fully grasp, yet we are all nonetheless creatures of it – standing on the shoulders of our ancestors, endlessly drawing inspiration from and feeding back into an interconnected web of information.
Cogito, ergo sum
Despite dwelling on this since I started experimenting with open source AI projects nearly a decade ago, I still feel a reflexive defensiveness kick in right about here. It’s a scary thought that gets to the heart of what it means to be a human: are we, ourselves, just information processing machines, forever synthesizing our context for one another? What is a dialogue, really, other than a series of analogies? You tell me a story, which reminds me of a similar story, which reminds you of yet another story… and so on and so forth ad infinitum, until we’ve both lost the thread, blinded by our cleverness, and we begin attributing to Freud’s ego that which is probably more accurately attributed to something like Jung’s collective unconscious.
Perhaps the great reveal of AI is not that it’s magic, but rather that we aren’t. Magic, after all, is in the eye of the beholder.
What if there are no original ideas? What if the ideas we consider “original” are really just the ones where complexity and scale obscure the underlying influences – even from our conscious selves?
It’s so much easier to assume the skeptical posture. It’s reassuring to take shelter in the unfounded belief that there is some ineffable humanity in our own output, some divinely ordained spark inside us all, that is irreplaceable and irreducible. For most of human history we’ve justified the idea that we, both as individuals and as a species, hold a special and separate place in the universe by pointing to our superior intellect, and we only cling harder to that idea as our intellectual dominance wanes.
Whether or not that conviction is simply hubris, when it comes to AI the point is this: how much does it matter if we can’t tell the difference in the caliber of the output? We are, after all, the intended audience. Perhaps an emerging corollary to this magic ex machina concept is: an information network of sufficient density and interconnectedness can produce outputs that are subjectively creative.
That much, at least, is already demonstrably true.
Explosion
If experience begets creativity, what does it mean to unleash a force capable of harnessing the collective human experience? Consider that the information network of human civilization, as it stands in 2023, took millions of years and billions of people to build. Key parts of it have already been matched – and surpassed – by AI in just a few years.
Last year, OpenAI’s GPT-3.5 large language model failed the bar exam and placed in the bottom decile of test takers. Last week, GPT-4 not only passed the bar – it outperformed 90% of human test takers. It scored in the 99th percentile in the Biology Olympiad and received a perfect score in the AP Art History exam. It can build video games and turn napkin sketches into fully functional, professional-quality websites – all of which was still firmly in the realm of science fiction only a few months ago.
This is an astonishing rate of progress, and yet we do not understand how these systems arrive at their conclusions. GPT-4 was not trained to take the bar exam, and no one can explain how it came up with the answers it gave.
It’s possible to examine the state of these systems to a degree – we can take a snapshot of a network at a moment in time and attempt to decipher what, precisely, is happening between point A and point B. But like a pointillist painting or a pixelated screen, the closer we look at an individual, discrete datum in a mosaic of trillions of numbers and symbols, the less we understand about the broader picture.
The more, in other words, it starts to feel like magic.
Obviously, these systems are not human – and yet we are training them to present as human, and they are getting exponentially better at it every day. We are on the vertical part of that exponential curve now, hurtling through the fuzzy no man’s land where the line between derivative and generative blurs.
Soon we’ll reach the point of no return, with no idea – and no real way to predict – what lies on the other side.