This is incredible. I love your style. Just when I think you have settled somewhere definite, you take a step back and open up a wider view. You have knowledge across a number of disciplines that are important to me too: AI, philosophy, psychology, education, and literature. What are your specialties? I'd love to hear more about your educational and professional journey!
Thank you for your kind words! I have a few posts planned that will touch on some of these topics.
New Yorker probably read your essay and created their own visualization: https://www.newyorker.com/humor/sketchbook/is-my-toddler-a-stochastic-parrot
Ha ha, thanks for sharing.
This article was great. When we're born, we're just an untrained LLM. Then the training begins. Society tells us this is an apple, this is a bear, this is a house, and we start to form this soup of atoms we're a part of into an organized, rule-based, separated reality. It would be interesting to find the moment when the child stops having to learn what a sunset is and starts to see the beauty in it. That is the moment when a human reaches general intelligence, but it seems impossible to predict that transition. Thanks for sharing this piece, it's a good way to start the day.
very strong
*Learning* is one of those suitcase words.
LLMs are good at learning lots of knowledge - facts in the form of word streams copied from humans. Learning in this context is memorising & parroting, with some substitution based on how closely words appear together in the consumed text.
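The memorise-and-parrot behaviour described above can be illustrated with a toy bigram model - a minimal sketch, not how real LLMs work, with a made-up corpus for illustration:

```python
import random
from collections import defaultdict

# Tiny "stochastic parrot": memorise which words follow which word
# in the consumed text, then parrot by sampling from those pairs.
corpus = "the cat sat on the mat the cat saw the bear".split()

following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def parrot(start, length=5):
    """Generate text by repeatedly substituting a memorised next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # never seen this word -> nothing to parrot
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))
```

Every word it emits is copied from the training text; the only "creativity" is which memorised continuation gets sampled - which is roughly the substitution the comment describes.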
LLMs cannot learn how to learn - meaning, causation, inference, algorithms & much more elude them. This type of learning requires a level of intelligence.
I agree that LLMs will never be able to reproduce AGI. However, saying 'ChatGPT can speak to us about our inner lives so intimately' is really ChatGPT repeating what a human has written somewhere - Google Search could probably find it too if you gave it the right search terms.
Thanks Ben!