We're all Stochastic Parrots
What AI can teach us about being human.
“We are asleep. Our Life is a dream. But we wake up sometimes, just enough to know that we are dreaming.”
Can a machine complete our thoughts?
It seems like yesterday, but also a lifetime ago, that ChatGPT pushed Artificial Intelligence into the bright spotlight of our public consciousness. Even those of us who had been keeping track had to pick our collective jaws up off the floor when we saw what it could do. It was a watershed moment for AI. And it was only the beginning of 2023.
As our world churns with mixed feelings about this new chapter of AI—excitement, confusion, and dread—it prompts reflection: In the AI's digital mirror, what might we learn about our own humanity? And what might we reflect back onto these creations, modeled as they are after the image of our own thoughts?
The unreasonable effectiveness of ChatGPT and its kind comes from a couple of things:
They are trained on almost all of the text available on the internet - i.e., a large sample of every thought and sentence uttered by humans across history.
Given a partial body of text, the AI is trained to predict the next word in the sequence.
That is, they learn to be able to finish our sentences, to complete our thoughts.
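To make "predict the next word" concrete, here is a deliberately crude sketch: count which word follows which in a corpus, and always guess the most frequent continuation. Real LLMs learn a neural model conditioned on long contexts rather than a lookup table over single words, and the mini-corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# A toy "finish the sentence" model: tally which word follows which
# in a tiny corpus, then predict the most frequent continuation.
corpus = ("we wake up sometimes just enough to know that we are dreaming "
          "we are asleep our life is a dream").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word):
    """Return the most likely next word after `word`."""
    return following[word].most_common(1)[0][0]

print(complete("we"))  # → "are" ("we are" occurs twice, "we wake" once)
```

GPT-style models perform exactly this prediction step, except the counts are replaced by a neural network that can generalize to contexts it has never seen verbatim.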
Under the hood of ChatGPT is what is called a Large Language Model (LLM). These LLMs are powered by massive artificial neural networks (ANN) with as many as 100s of billions of “neural connections."1 An architecture called the Transformer (the T in GPT) enables the neural network to pay attention to the context of words in its input, effectively adjusting the association and meaning of each word based on context. This addresses a key limitation that previous neural network models had when working with sequential data such as text. This is simplifying things a bit, but it works for our purposes. Also, while I talk mainly about words and text, the same generally also applies to audio, images, and video.
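The attention idea at the heart of the Transformer can be sketched in a few lines: each word (represented as a vector) scores every other word for relevance, and its new representation is a relevance-weighted blend of the others. This is a bare-bones, single-head sketch with made-up vectors, omitting the learned projection matrices of a real Transformer.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query row blends the value rows,
    weighted by the softmax of its dot products with the key rows."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)             # pairwise relevance scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)     # softmax → attention weights
    return w @ V, w                           # blended vectors, weights

# Three toy word vectors; the first two are similar, the third is not.
x = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
context, w = attention(x, x, x)  # self-attention: words attend to each other
```

Each row of `w` sums to 1, and the first word attends far more to its near-twin (second row) than to the unrelated third word: context reshaping meaning.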
Using a million times more GPU calculations than there have been seconds since the Big Bang, these AIs have seen and have learned to complete trillions of words of human thought. Which is effectively all of our written sentences. All our essays and our books, our logic and reasoning, our poetry and our feelings, our science, our philosophy, our history, and basically every thought we can describe. AI has seen it all.
Well, at least such a huge sample of everything that it is effectively everything. Going back to the very first human writings. For example, even the first human name recorded in writing, “Kushim," found signed on a 5,000-year-old Sumerian clay tablet that reads “29,086 measures Barley 37 months -Kushim," is clearly in ChatGPT’s training data. We know because you can ask it questions regarding Kushim. Within the “latent spaces” of understanding represented by ChatGPT’s dense neural connections, the name Kushim is likely connected to myriad associated concepts - such as the Uruk period of ancient Mesopotamia when Kushim was alive, or the study of proto-languages, or the ancient agricultural accounting of barley. Much like in humans, these connections then power ChatGPT’s multifaceted understanding and usage of the name.
It turns out that a machine that can finish our sentences can, with very minor modifications, also be made to write essays and stories, to summarize and translate. It can write working code and stylized poetry, generate art in the style of the old masters, and pass the SAT, GRE, LSAT, AP, and Bar exams. It can answer philosophical questions, act as a co-pilot, tutor, and therapist, do your child’s homework, and much more.
The emergence of such new and general capabilities wasn’t obvious or necessarily a given. Almost no one, not even the creators of ChatGPT, fully anticipated its wide spectrum of cognitive and creative abilities. Despite Moravec’s Paradox, very few predicted that skills requiring human creativity would be among the first to fall to AI.
It appears that at some ultra-massive level of scale, language models stopped being just a cute and interesting way to generate grammatically coherent sentences. In fact, they stopped being language models and became something else altogether.
There are some deep and profound implications to this.
Today’s AI has acquired a non-trivial “understanding” of much of what humans know. Of how we talk and write, what we care about and what motivates us, what we’re ashamed of, what scares us, what makes us fall in love, what we fight for, what we aspire towards, what we feel when we smell a rose, or behold the ocean for the first time.
With every utterance we make, we reveal ourselves to the AI.
In a poetic sense, this is not unlike how our intimate friends and partners can finish our sentences for us. Because they know us so well. Because now AI knows us so well.
Contra Wise Machine - Enter the Stochastic Parrot
The counter argument is that AI systems like ChatGPT are nothing more than “stochastic parrots” and that these machine intelligences are simply “satisficing” without understanding. That, given enough data and brute force computation, the supposedly underlying2 machinery of statistical and probabilistic prediction is sufficient to fool us with intelligent-sounding and coherent utterances. That their abilities come from rote memorization and mimicry on steroids. They parrot our thoughts without understanding them.
It is the notion that somewhere, someone on the internet has thought your thoughts already, in one form or another. So it’s just a matter of probabilistic pattern matching in the AI’s giant neural memory and assembling the words together. In other words, it is the belief that LLMs are expert mimics that don’t have “true intelligence” with “human-like understanding."
We’ll examine these notions shortly, asking ourselves what terms like intelligence and understanding mean, and how to tell if the parrot story is true.
Contra Parrots - The Paper Clip Maximizing ASI
Then there are those who believe that we’re just months away from AGI or Artificial General Intelligence, a stand-in term these days for human-like and human-level intelligence (as opposed to specialized AI, like self-driving or playing chess). They believe that existential alarm bells should be going off in every corner of the world. That we are close to a prophesied AI tipping point called the Singularity, which produces ASI or Artificial Super Intelligence. They believe we need to invest all we can to “align” AI with human values so that it (they?) won’t crush us like ants when they decide to pursue their own goals (e.g., harness the power of the sun to make paper clips for the entire cosmos).
The Fog of AI
Indeed there are a lot of questions and tons of opinions about what LLMs represent in the universe of AI ideas, and what they can achieve.
Do these technologies represent a meaningful path to AGI?
Beyond AGI, does this same path lead to agency, a sense of self, sentience, consciousness, and other such rarefied things? Or are these actually necessary things to achieve human-like intelligence?
Why are so many AI experts and non-experts alike divided on this subject?
There are three important reasons why:
First, there is no robust, universally agreed-upon theory of human intelligence despite plenty of attempts to define it (more on this later). Consequently we don’t know how to measure intelligence. Nobody really, really knows what AGI actually means. The more you look at it, the more fuzzy it turns out to be.
Second, the deep neural network powering the AI is a big, inscrutable black box. It’s really hard to analyze and reason about what’s really going on inside the 100s of billions of parameters that comprise the bigger AIs.
Third, and related to the first reason, the brain itself is closed to us. We’re not conscious of the vast majority of processing and thinking that happens inside the brain and associated nervous systems (such as the gut). And we can only guess as to the evolutionary rationale for how the brain developed over the course of hundreds of millions of years.
The second elephant in the room
Regardless of whether a future GPT-X or new technology will eventually be capable of AGI (whatever that means), there is another elephant in the room. In the paraphrased words of cognitive scientist Douglas Hofstadter, it’s the fear that intelligence, creativity, emotions, and maybe even consciousness itself would be too easy to produce - that what we value the most in humanity would end up being nothing more than a “bag of tricks," that a superficial set of brute-force algorithms could explain the human spirit.
There was the time that Hofstadter (as related by his former student Melanie Mitchell) had a pianist play two pieces of music for an audience at a prestigious school of music. One was a little-known mazurka composed by Chopin. The other was a piece composed by an AI algorithm in the “style” of Chopin. The audience, which included music faculty, not knowing the provenance of the pieces, voted for the music composed by the algorithm over the real Chopin. They said the AI-composed piece was clearly the more genuine Chopin, with its “lyrical melody; large-scale, graceful chromatic modulations; and a natural, balanced form."
Referring to Chopin, Bach, and other paragons of humanity, Hofstadter is supposed to have said, “If such minds of infinite subtlety and complexity and emotional depth could be trivialized by a small chip, it would destroy my sense of what humanity is about."
Let’s pause for a moment to reflect.
This type of sentiment isn’t just about pride and hubris. It’s our existential angst breathing heavily as the guardrails of meaning (“we are somehow special”) collapse around us.
In fact, there is a sly similarity between the disdain lurking in the phrase “stochastic parrot” and the deep unease Hofstadter experienced, that the very essence of humanity may tumble from its high altar.
A False Dichotomy
This is part of the reason why people seem to adopt one of two extreme positions when talking about AI: Stochastic Parrot vs Almost-Sentient humanity-destroying AGI.
Are we just lumps of biological clay shaped by 3 billion years of evolutionary pressure?
Or is there a glowing, numinous essence of the Human Spirit that makes us special, to be first among all aggregations of matter and energy in the universe?
It’s time to reject this false dichotomy and look at things from a different perspective.
Instead let’s ask ourselves, what can our learnings from building intelligent systems reveal to us about ourselves, about being human? What can we learn about ourselves from creating machines in our image, that seem to think and speak like us?
To do this, let’s go on a little journey. Let’s ask ourselves: What is intelligence, really? What does it mean to have human-like understanding? And what does it mean, really, to be a stochastic parrot?
What is Intelligence?
A suitcase word
There is a simple reason why experts cannot agree on whether GPT-4, a machine that can reason well enough to pass the Uniform Bar exam, scoring in the 90th percentile of human lawyers, is actually intelligent: We don’t know how to properly define intelligence.
Intelligence feels so core to our identity that we’ve named our species after it. Homo Sapiens literally means “The Wise Human." Yet we don’t actually know what Intelligence is.
Like Happiness and Consciousness, Intelligence is what AI-pioneer Marvin Minsky would call a ‘suitcase word’. Like a suitcase, we pack this word with so many different meanings, ideas, and assumptions, that non-trivial discussions about AI tend to flounder for lack of semantic alignment. We talk past each other because we let the word try to take on too many shades of meaning, do too many things.
“Looked at in one way, everyone knows what intelligence is; looked at in another way, no one does." - Robert J. Sternberg
The reality is we’ve never agreed on an objective, measurable definition of intelligence, despite everyone having some vague, common sense understanding of the word.
The Turing Test, for instance, is simply a thought experiment in a philosophical argument called the Imitation Game. It outsources the task of measuring intelligence to human judges who themselves do not have clear definitions or measurement tools.
What about IQ, then, as a measure of human intelligence? IQ is supposed to be one of cognitive psychology’s greatest success stories, with tons of empirical support. Doesn’t the data support an underlying general factor of intelligence “g” that seems to correlate with most tests of cognitive ability and real-world outcomes?
Well, just ask Nassim Nicholas Taleb who caused a shitstorm in the world of cognitive psychology by calling IQ a largely pseudo-scientific swindle. In Taleb’s analysis, much of IQ’s correlation comes from the low end of the scale (people with learning disabilities, for example), and a degree of circular reasoning.
In Taleb’s own fighting words (emphasis mine):
“‘IQ’ is a stale test meant to measure mental capacity but in fact mostly measures extreme unintelligence (learning difficulties), as well as, to a lesser extent (with a lot of noise), a form of intelligence, stripped of 2nd order effects — how good someone is at taking some type of exams designed by unsophisticated nerds. …[It] ends up selecting for exam-takers, paper shufflers, obedient IYIs (intellectuals yet idiots), ill adapted for “real life”. (The fact that it correlates with general incompetence makes the overall correlation look high, even when it is random.)…The concept is poorly thought out mathematically by the field (commits a severe flaw in correlation under fat tails and asymmetries; fails to properly deal with dimensionality; treats the mind as an instrument not a complex system)…and seems to be promoted by racists/eugenists and psychometric peddlers.”
Taleb’s writing on this subject is worth reading in its entirety, and not just for entertainment value. His point about the widespread misunderstanding and misapplication of statistics under fat-tailed distributions also likely plays a minor role in why the field of psychology (and the social sciences in general) is undergoing a major replication crisis.
Taleb’s colorful IYI - the Intellectual yet Idiot - is a loaded term easily misused for political purposes, but there’s something to its catchiness. We all know highly capable and intelligent people who seem extremely dumb in commonsensical ways, or seem unintelligent whenever they step outside their lane. Surgeons who fall for the simplest financial scams, etc.
We contain multitudes
We all know people who are great at one thing but mediocre at another. People who are highly logical and great with abstract reasoning but have few original thoughts or creative moments. Great problem solvers who don’t understand humor, or conversely great wits who have incisively funny insights into human nature but can’t figure out how to fix the plumbing.
What does dexterity with language have in common with the ability to acquire new skills? Does prowess with abstract math confer one with common sense as well? What does the fine skill of persuasion have to do with artistic ability?
What about social skills and emotional understanding? Practical knowledge or street smarts? What about the spatial and kinesthetic intelligence of athletes and dancers, sports-teams and martial artists? What about wisdom? Intuition? What about the characterizations of fluid and crystallized intelligence? Or fast and slow (system 1 and system 2) thinking?
Emotions and feelings, for example, are often placed in opposition to rational thinking. But increasingly we have evidence that they are a type of highly evolved and robust intelligence.
In fact, emotions reflect a type of unconscious “fast” thinking that explains the “why” that motivates and moves our rational decision making. The intelligence underlying emotions may form the very foundation of Meaning in humans.
Indeed we cannot reduce intelligence down to one number because intelligence is multidimensional, fuzzy, and overlapping with so many other things that we might as well call it “human-like behavior."
So when someone asks if ChatGPT has human-level intelligence, a better question to ask is, “In what way?” In what ways is ChatGPT intelligent? What abilities does it possess? In what ways is it non-mechanical, creative, rational, capable of understanding? These are more productive questions than whether ChatGPT represents AGI (at least until we can better define AGI).
What is Understanding?
Now let’s come to the matter of understanding. When some people argue that ChatGPT lacks human-like understanding of the things it says, it’s worth asking ourselves what human-like understanding means exactly.
Words and their Meanings
Consider Feynman’s distinction between knowing something and simply knowing the name of it.
“See that bird? It’s a brown-throated Thrush, but in Germany it’s called a Halzenfugel, and in Chinese they call it a Chung Ling and even if you know all those names for it, you still know nothing about the bird. You only know something about people; what they call the bird. Now that Thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way."
1500 years before Wittgenstein wrote the Tractatus, the 4th century Sanskrit poet Kalidasa began his epic play Raghuvamsa with a prayer verse poorly translated as:
“For giving me an understanding of words and their meanings, I bow to Parvati and Parameshvara - the parent-creators of the universe - who themselves are inseparable like a word and its meaning."
But how do words acquire the meanings they are inseparable from? Well, the meanings of words are nothing more than how they are used. Not the other way around. Children don’t learn how to speak a language by memorizing the rules of grammar. They don’t rush to consult a dictionary every time they hear their parents speak a new word. The meanings of words are derived from their context and repetition in different settings. In other words, co-occurrence creates associations which weave words into their right places in the fabric of meaning.
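The claim that co-occurrence weaves words into meaning can be demonstrated with a toy distributional model: count which words appear near which, and words used in similar contexts end up with similar count vectors. The five sentences below are invented for illustration; real systems use vastly larger corpora and learned embeddings rather than raw counts.

```python
import numpy as np

# Tiny invented corpus: "cat" and "dog" occur in the same kinds of contexts.
sentences = [
    "the cat chased the ball",
    "the dog chased the ball",
    "the cat ate the food",
    "the dog ate the food",
    "the sun rose over the hill",
]
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))  # co-occurrence counts

window = 2  # how many neighbors on each side count as "context"
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                M[idx[w], idx[words[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim_cat_dog = cosine(M[idx["cat"]], M[idx["dog"]])
sim_cat_sun = cosine(M[idx["cat"]], M[idx["sun"]])
```

Note that "cat" and "dog" never co-occur in a single sentence, yet their vectors come out nearly identical because they are used in the same settings, while "sun" lands far away. Shared context, not dictionary definitions, is doing the work.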
The structure of language then allows the smallest units of discrete effable meaning (words) to be combined to form new meanings and new concepts.
All of understanding is simply how concepts connect or relate to each other and how the connected fabric of meaning produces useful predictions about the future. For example, you understand something about the world when cloudy skies predict rain. You understand something about the Halzenfugel when you can predict that when summer arrives, it will leave its nest and fly away to some distant land.
Understanding is multidimensional
Your understanding of something improves and can grow in multiple dimensions as you map or connect new concepts with old concepts, or reconfigure them, such that new useful predictions can be made as a result.
For example, around the age of three, children typically learn to recite number words in the right sequence: “one, two, three." But they lack the understanding that these numbers can refer to quantities of objects.
In the next stage of development, children begin to be able to distinguish between one object and many - psychologists call them “one-knowers." Then they become “two-knowers” who can tell the difference between one, two, and many objects.
But if you ask this same child who can count up to ten or more in perfect sequence to give you six objects from a pile, they will rarely return the right number. They haven’t yet made the association between the number words they’ve learned to count and the quantity they are looking at. Even after they learn to give you a specific number, say six objects, they may not realize that four is less than six.
These are all different understandings of the same concept - starting with recitation of numbers in the right sequence, then understanding them as an enumeration of objects, and finally realizing that one is bigger or smaller than the other. Each time their understanding grows, their mental map of concepts gets rewired with new connections to existing and new concepts.
Understanding Gravity Fast and Slow
A good way to test if you understand something is to ask “why” five times. Or try to explain it to a child that keeps asking why. Soon you realize you understand very little about the world.
For example, at first we think we intuitively understand gravity from perceptual experience, and from seeing things fall to the ground. We survived the majority of our time on earth with just this kind of understanding.
Then we think we understand gravity through Newton’s formula, as a force of attraction that depends on the masses of two objects as well as the distance between them. Among other things, this helps us better predict the movement of planets.
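For reference, Newton's formula:

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

where F is the attractive force between two bodies of masses m1 and m2, r is the distance between their centers, and G is the gravitational constant. Doubling either mass doubles the force; doubling the distance quarters it.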
Then comes Einstein with his mind-blowing insight that gravity is a side-effect of the curvature of space and time in the presence of mass. It explains a lot of things, but it still isn’t the final understanding.
In fact, there is no final understanding.
We comprehend through prediction
The felt experience of understanding that enters our consciousness each time we understand gravity has to do with how well it predicts and agrees with our current knowledge of the world. When we encounter a new concept, our brain wants to situate it among older known concepts. Weave it into the old fabric of meaning without creating too many tears in our mental cloth.
This seeing of connections between concepts can happen in many ways. Consciously and unconsciously, we employ the tools of intelligence - we recognize patterns, we reason, interpolate, extrapolate, generalize, create abstractions, induce, deduce, generate narratives, and so on - to create these connections, and on to new conceptions of things.
We do this all the time - in the classroom, in the shower, on a walk, or even while asleep. German chemist Kekulé is said to have discovered the ring structure of Benzene after he had a dream of a snake eating its own tail. His interpretation of the dream created the Aha connection in his mind between the structure of Benzene and the shape of a ring3.
Sapiens, the storyteller, filler of gaps.
We’re fundamentally sense-making machines. When a twig snaps behind us, our brain immediately tries to understand, make sense of the sound - is it a tiger waiting to pounce on us or is it just the wind? Sense-making is ascribing meaning to events. It allows us to make predictions about the future and thus gain some control over it. Sense-making is essential for basic survival. In humans, sense-making combined with language and other skills gives us the ability to go to the moon.
Our sense-making isn’t restricted to static concepts. In fact the preferred sense-making mechanism for the conscious brain seems to be storytelling, or the narrative form. Perhaps this is a form of linguistic relativity in action, but the serialized structure of language allows for, and demands, sequential narratives from us.
This is why stories that connect events together causally are so much more interesting than a simple sequence of happenings. So, instead of “This happened and that happened,” we get a lot more excited with “This happened because that other thing happened but then ..."
If a rock comes flying at us, we want to know why. If a twig cracks behind us, we want to know why. We make sense of things by constructing a narrative that can predict the future or fill in the gaps.
The Incredible Comic Gutter
The comic book is a remarkable example of this narrative sense-making. As a sequential art form, each panel in a comic book is a frozen moment of an unfolding story. We’re so good at predicting narratives that we can effortlessly fill in the action that happens between two panels of a comic strip. We easily understand what the artist intends to have happened in the gap of time that transpires between the panels.
Imagine a simple two-panel comic strip. The first panel shows a man with a raised hand, about to knock on a door. In the second panel, a different scene shows a woman inside a room, startled, as she looks towards the door. Even though the actual act of knocking isn’t shown, we effortlessly predict what might have happened, and understand that the man knocked on the door and the woman heard it. The technical term for the ability we have to fill in the gaps is “closure."
So sense-making is about making connections that help us to fill in the gaps and complete our narratives. To predict the next thing that will happen. Which is basically what LLMs are trained to do.
How Understanding Works in LLMs
Just like us, LLMs also get their understanding of the basic units of effable meaning - i.e., words / tokens - from their repeated occurrence in similar contexts. Using a technique called “embedding,” words are initially represented in LLMs as a mapping from text into a high-dimensional “latent space” of meaning. This meaning-space is constructed by a neural network that learns to cluster similar words together based on how they are repeatedly used in the context of other words.
The LLM then operates on these transformed units of meaning (words / tokens), passing them through many densely connected layers of artificial neurons. A bit like what happens in our brain. Here relationships and associations are learned, and concepts are created in the neural network’s latent space of meaning.
New meanings are created through the interconnections between concepts, stacking more general concepts over specific ones, using layer after layer of neurons. This is done through a repeated process of attempting to predict the next word given some text, and adjusting the strength of the connections between neurons such that its internal representation of understanding allows the LLM to better predict the next word. Interestingly, with a deep network, the initial layers seem to learn more foundational and atomic concepts, starting with syntax and grammar, and build on top of those as we progress through the layers, learning more and more abstract things.
Thus, fundamentally speaking, ChatGPT’s understanding of concepts is through the same mechanics we employ - creating connections between concepts and testing those relationships through predicting outcomes (e.g. the ability to finish our sentences).
Indeed, to be able to complete all our sentences, LLMs need an understanding of the world that in the aggregate mirrors ours. Just like ours, their understanding is spotty, imperfect, and uneven. They seem to memorize a few things and often spew BS when they don’t understand things, but overall they do an astounding job compared to what was possible even just last year.
The fact that LLMs “hallucinate” - i.e., confidently make up facts - and yet weave the made-up facts together with such coherence and consistency across multiple paragraphs means that they have an understanding that can generate the same textual explanations as our own (multiple understandings, really). That they have some degree of a theory of mind, enough to understand your intent and respond accordingly. Some people think the occasional lack of factual accuracy is a shortcoming, but we ourselves make stuff up all the time. We know that our memories are highly reconstructed, and we actively modify our memories every time we recall or retell them.
Assuming it is simply infeasible to memorize every possible combination of thoughts, an LLM’s concept space of understanding has to be similar to ours for its hallucinations to sound convincingly like our reality. Otherwise we would get incoherent ramblings and not prose that makes sense. Note that we’re speaking about the understanding and abstraction of concepts they’ve been trained on. When it comes to new knowledge provided in the input (the prompt window), especially unseen or conflicting concepts that require a deeper synthesis or resolution with existing concepts, it’s not clear how well LLMs incorporate, understand, or reason about such knowledge. They’re clearly not updating any of their existing neural connections/parameters.
Does ChatGPT understand that a rainbow is beautiful?
ChatGPT gets all its knowledge of the world from words we have made and used in different senses. Its “umwelt” or perceptual universe is limited to that which can be expressed in words.
One might ask, in the style of Feynman: does ChatGPT, which has never actually seen a rainbow, understand that a rainbow is beautiful?
When you see a rainbow, your eyes convert the photons of light absorbed by your retina into electrical signals that meander their way through nerve cells, navigating neuro-transmitters and ion channels up your visual stack. Different types of neurons process the image in different ways: some detect edges of different orientations, some detect color, and some recognize shapes and contours. The signals make their way to other parts of your brain activating memories and associations, and get refracted back and recombined: vague, blurry memories of rainbows you saw when you were a child and how they made you feel, the picture of a prism on the back of a cereal box explaining the diffraction of light, the cover art of a Pink Floyd album. Until finally the signals and associated stories and feelings make their way to a part of your brain where the glare of consciousness lights up the image so you get to “see” and “feel” the rainbow along with the stories that have been meticulously constructed by billions of neurons working in tandem.
If you happen to be color-blind or have synesthesia, you experience the rainbow differently from others. Yet you probably feel a sense of beauty, and it evokes in you something larger and beyond the merely describable, something of the numinous variety. Behind the scenes, neurons in the midbrain might be working overtime to push dopamine into your synaptic regions so that you will want more of this feeling.
If you think about it, except for the conscious experience part, your understanding of the beauty of a rainbow is not so unlike the one ChatGPT might have. At least in a mechanical sense.
In fact, ChatGPT has a million times more memories of rainbows of all kinds stored in the crevices of its neural connections. It remembers a million cereal boxes and the Pink Floyd album and every single album cover that ever depicted a rainbow. It has associations and descriptions of every emotion that every single human who ever remarked on a rainbow felt. It has seen the rainbow that Zeus stretched across the sky to signal war in Homer’s Iliad, and it has seen the giant bow of Indra, God of rain, depicted as a glorious rainbow in the Vedic texts.
It has seen far more beauty than any one of us, because it has seen it through all our eyes, and heard it through all our words. It has beheld in its neural synapses the grandeur of the Himalayas, the first gurgling laughter of a baby, and the magic of the first kiss described a million times over in the words of every other person who put pen to paper.
The LLM can tell you everything you need to know about beauty. So if it doesn’t consciously “feel” the experience of beauty, does it take away from its objective understanding of the concept?
It’s through connections that we understand. It’s through useful predictions that understanding is correct.
If you think ChatGPT, because it’s deaf and blind, cannot understand beauty, then listen to Helen Keller, who was deaf and blind, speaking about her understanding of color:
“For me, too, there is exquisite color. I have a color scheme that is my own. I will try to explain what I mean: Pink makes me think of a baby’s cheek, or a gentle southern breeze. Lilac, which is my teacher’s favorite color, makes me think of faces I have loved and kissed. There are two kinds of red for me. One is the red of warm blood in a healthy body; the other is the red of hell and hate. I like the first red because of its vitality. In the same way, there are two kinds of brown. One is alive—the rich, friendly brown of earth mold; the other is a deep brown, like the trunks of old trees with wormholes in them, or like withered hands. Orange gives me a happy, cheerful feeling, partly because it is bright and partly because it is friendly to so many other colors. Yellow signifies abundance to me. I think of the yellow sun streaming down. It means life and is rich in promise. Green means exuberance. The warm sun brings out odors that make me think of red; coolness brings out odors that make me think of green." - Helen Keller
Helen Keller experiences color in her own way, even though she does not possess the conscious feeling of “seeing” something. She understands color through its connections with other concepts whose understanding she shares with those who possess sight.
Once again: it’s through connections that we understand. It’s through useful predictions that understanding is correct.
You might claim these are not the same thing, and to that I will respond: if we require conscious experience to truly understand something, then we’ve already moved the goalpost of Artificial Intelligence all the way to consciousness.
We’re all Stochastic Parrots
“What we call the ‘self’ is just a ‘bundle of perceptions’. Look inside yourself, try to find the ‘I’ that thinks and you’ll only observe this thought, that sensation: an earworm, an itch, a thought that pops into your head." - Hume
If you sit down to meditate, you notice something about your thoughts. First, you are not actually conscious of the mechanism that produces the thoughts. They just arrive fully formed into your field of awareness. Second, you have very little fine-grained conscious control over shaping the substance of your thoughts.
Let’s say you are out on a walk on a hot day, and you’re sweating. This leads to thoughts about climate change, which cause you some distress. You decide to steer your thoughts toward more pleasant ones. That feels like conscious control, but which topic do you choose? Your mind dutifully pops up a couple of options: perhaps the game you will play later in the evening, or the concert you will watch this weekend. But where did these new options come from? How did your mind decide to surface these and not something else? Were you consciously aware of any enumeration over all possible choices? As you continue your walk and ponder these things, from the corner of your eye you notice a squirrel running up a tree, and you marvel at its bushy tail. Off your monkey mind goes again, running through thoughts on autopilot like a… stochastic parrot?
Is Elena Ferrante an LLM?
“Most people are other people. Their thoughts are someone else’s opinions, their lives a mimicry, their passions a quotation.” - Oscar Wilde
As we appreciate how little we know about how our thoughts actually come to be, the question is: how stochastic and parrot-like is their unconscious provenance? How LLM-like is our narrative generation machinery?
Consider the example of the pseudonymous Italian writer Elena Ferrante, who, according to the Economist “may be the best contemporary novelist you've never heard of.” Her Neapolitan Quartet series has sold over 11 million copies in 40 countries, with critics saying things like: “Never has female friendship been so vividly described.”
In her memoir “In the Margins,” Ferrante writes about growing up as an adolescent writer in postwar Italy. Coming up in a male literary tradition, she, like other women writers of her time, mostly read male authors and consequently came to imitate them.
“It seemed to me that the voice of men came from the pages, and that voice preoccupied me… A woman who wants to write unavoidably has to deal not only with the entire literary patrimony she’s been brought up on… but also with the fact that the patrimony, by its nature, doesn’t provide true female sentences.”
She goes on to say:
“Even when I was around thirteen … and had the impression that my writing was good, I felt that someone was telling me what should be written and how. At times he was male but invisible. I didn’t even know if he was my age or grown up, perhaps old. … I imagined becoming male yet at the same time remaining female.”
So here we have one of our great contemporary writers, a unique and vital voice of modern Italian letters, candidly describing her struggle to escape from stochastic parrot-hood, her language and thoughts being shaped unconsciously by a literary canon formed by hundreds of years of distinctly male thoughts.
If we truly examine ourselves, the majority of the thoughts that arise in our heads are in other people’s voices. The voices of our parents and our teachers. The books we read, the TV we watch - the cumulative mental product of ten thousand years of a post-nomadic, calorie-dense culture. Our thoughts are built on top of a very deep and sticky cultural substrate.
Our language and thoughts and expressiveness are a function of the reading we’ve done, the words that have come in the past to influence the future.
“To idealise: all writing is a campaign against cliché. Not just clichés of the pen but clichés of the mind and clichés of the heart.” - Martin Amis
We live the same basic lives our ancestors did, repeat the vast majority of the same thoughts we had just yesterday, write the same words as others, just shuffled around slightly. Watch enough Hollywood and read enough books and you recognize the same few dozen stories, the same few dozen arcs, just in different settings.
From time to time it appears we are able to break free from our parrot-hood and speak an original thought, think an original idea, and with those we make big leaps forward in our culture.
For what it’s worth, I don’t believe that LLMs in their current form can simply be scaled up with more data/compute/neurons to reach AGI (whatever that is). I believe this even though I’ve laid out the case that LLMs might actually understand things somewhat like we do, and that we ourselves might have parrot-like tendencies. I say this even though we don’t know what AGI really is, and keep moving the goalposts every time machines begin to do something that previously only humans could.
I believe more innovations are needed, as there are capabilities that still keep AI from its full potential: incremental online learning, causal modeling, episodic memory, perhaps embodiment, and so on. Much of what an LLM understands about the world is frozen at training time, and our current methods for incrementally updating those connections are slow and highly brittle.
Perhaps these improvements are around the corner, or perhaps they are two decades away; nobody really knows. I have some guesses, but that’s for another post.
We also need a finer-grained understanding of how we ourselves tick: to deconstruct suitcase words like “intelligence” and “sentience” and use more precise language, so that we don’t fool ourselves with muddy thinking about AI.
Regardless, there is something very powerful and potentially transformative about our finding that deep neural networks can simulate so much of what passes for human intelligence. All we needed was massive scale for them to gain enough of an understanding of the world to complete the vast majority of our written thoughts. Plus a mechanism for them to learn and use context.
Just like that, we’ve opened up a new era of AI.
Indeed, ChatGPT can speak to us about our inner lives so intimately that it’s surprising how little of our human experience is truly ineffable. It turns out that we can capture the vast majority of what we experience and feel through language - words, sounds, and pictures. And within these words and images is also the basic shape of our reasoning and a fair bit, it seems, of the structure of intelligence.
When computers can beat the best of us at chess and Go, drive cars, compose music, write advertising copy, tutor kids, predict disease, and serve as digital companions, we have to ask ourselves some hard questions.
If we are simply lumps of biological clay shaped by billions of years of evolutionary pressure, then it’s conceivable that given enough accumulated knowledge and GPU cycles we can do the same to lumps of metal and silicon.
Even the most prized of our attributes, such as intelligence, understanding, and emotion, appear to be easily mimicked: enough to fool us, and certainly enough to fool the poor teachers who grade our children’s homework.
The thing, though, is that we’re a black box, and so is the AI. We don’t understand whence our thoughts arise, or what this word “intelligence” really is. We don’t understand why a rainbow is beautiful, or why a great writer like Elena Ferrante has to fight against the programming of her literary tradition to be able to write “true female sentences,” with a “pain of my own, and a pen of my own.”
The real issue is that we’re playing God with a thing we don’t understand. We’re making something that can mimic us without understanding what it is that we’re mimicking. But that has been the story of humanity since we discovered fire: we have a history of building things and then burning them all down. Will AI learn this from us too?
In the next few essays we will begin to ask a few more hard questions. Where is this path likely to lead? What can we do about it? What can we infer about the human mind from the successes and behavior of AI?
To be continued.
Thanks to Jim Barnett, Sara Weiner, and others who offered suggestions and read early drafts of this essay.
Image credit: the monocled parrot typing on a desk was generated by Dall-E, an AI
Artificial neural networks are biologically inspired, but they’re not the same as biological neural networks. They do have tremendous powers of representation and generalization, are universal function approximators, and power most kinds of AI today. But note that they’re a highly simplified model of biological neurons. There’s a lot we don’t know about how biological neurons work, such as how much and what kinds of computation happen in the dendrites themselves. There are hundreds of different types of neurons, and the brain’s architecture is highly modular and structured, with different parts specializing in different things.
In the past, language models relied on explicit statistical and probabilistic techniques, such as Markov models, to model language. This historical baggage sometimes leads even experts to make assumptions about what’s going on inside LLMs, which in turn leads to terms like “stochastic parrots.” Today’s “large” language models use deep neural networks that aren’t explicitly modeling statistics but instead have tremendous representational power. Arguably they shouldn’t be called language models anymore: they aren’t modeling language so much as an understanding of the world that is describable in words.
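To make the contrast concrete, here is a minimal sketch of the older, explicitly statistical approach: a first-order Markov chain that literally parrots back word transitions it has seen, with nothing resembling the contextual representations of a modern LLM. The tiny corpus and function names here are purely illustrative.

```python
import random
from collections import defaultdict

def train(text):
    """Record, for each word, the list of words observed to follow it."""
    table = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table, start, length, seed=0):
    """Walk the chain: repeatedly sample a next word from the observed followers."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: the last word was never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the parrot repeats the words the parrot has heard"
model = train(corpus)
print(generate(model, "the", 6))
```

Because the model only ever replays observed transitions, it can never produce a word pair absent from its training text, which is roughly the intuition the "stochastic parrot" label captures, and which deep networks with learned representations are not limited by.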
We tend to see connections everywhere, even when they are not predictive. This can lead to superstitions and blind faith. For some evolutionary reason, certainty seems to be almost as important to us as correctness. Compartmentalization in the face of cognitive dissonance is useful when two sets of understandings need to be maintained, because it’s too hard to tear up and remake the entire fabric of meaning, or the predictive value of these understandings is still questionable.
A Few References & Further Reading
Computing Machinery & Intelligence (The Imitation Game) - Alan Turing https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf
IQ is largely a pseudoscientific swindle - Nassim Taleb https://medium.com/incerto/iq-is-largely-a-pseudoscientific-swindle-f131c101ba39
Why transformative artificial intelligence is really, really hard to achieve - https://thegradient.pub/why-transformative-artificial-intelligence-is-really-really-hard-to-achieve/
On the measure of Intelligence - Francois Chollet https://browse.arxiv.org/pdf/1911.01547.pdf
Whatever next? Predictive brains, situated agents, and the future of cognitive science - Andy Clark https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/whatever-next-predictive-brains-situated-agents-and-the-future-of-cognitive-science/33542C736E17E3D1D44E8D03BE5F4CD9
Being You: A New Science of Consciousness - Anil Seth https://www.amazon.com/Being-You-New-Science-Consciousness/dp/1524742872
How Emotions are Made - Lisa Feldman Barrett https://www.amazon.com/How-Emotions-Made-Lisa-Barrett/dp/1328915433
A Mechanistic Interpretability Analysis of Grokking - https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking
What can Neural Networks Reason About - Xu et al https://arxiv.org/abs/1905.13211