@leeloo
I guess evil gods are also a thing, but no, I'm not treating them as gods. If anything, more like Frankenstein's monster.

You're right that we'd have to define intelligence, and that'd be quite difficult on its own.
Also, the sculpture was a bad example, but the cell one still stands IMO.
1/
@leeloo
My point is that emergent properties can manifest even in systems ruled by very simple rules, and can be difficult to predict by just looking at the rules.

And human intelligence, whatever it is, is likely an emergent property of the human brain.
Therefore, we cannot rule out that a similar emergent property will appear in artificial systems that are not made of neurons, without looking at both how the neurons are arranged and how the artificial systems are arranged.
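To make the emergence point concrete, here is a minimal Python sketch of Conway's Game of Life (my illustration, not something from the thread): three local rules, yet a "glider" pattern emerges that travels across the grid, behavior that is hard to foresee from the rules alone.

```python
from collections import Counter

def step(cells):
    """One generation of Life. cells is a set of (x, y) live coordinates."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 generations the glider reappears shifted diagonally by (1, 1):
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```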
-
@wolf480pl @leeloo These models aren't intelligent, so much as they're auto-completing rules and patterns derived from almost inconceivably huge corpora of example material originally produced by human intelligence. That's interesting and can be very handy for a great many uses. But it's more computational brute force than intelligence.
-
As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.
It's literally a description of how they work.
The so-called training data is used to build a huge database of words and the probability of them fitting together.
Stochastic because the whole thing is statistics.
Parrot because the answer is just repeating the most probable word combinations from its training dataset.

Calling an LLM a stochastic parrot is like calling a car a motorised vehicle with wheels. It doesn't say anything about cars being good or bad. It does, however, take away the magic. So if you feel a need to defend AI when you hear the term stochastic parrot, consider that you may have elevated them to a god-like status, and that's why you go on the defensive when the magic is dispelled.
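The "database of words and probabilities" idea can be sketched in a few lines. This is a toy bigram model illustrating the general principle only; it is not how transformer-based LLMs are actually implemented (they learn continuous representations, not a literal lookup table):

```python
import random
from collections import defaultdict

def train(corpus):
    """Record which word follows which; repeated entries encode probability."""
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def parrot(table, start, length=8):
    """Repeatedly sample a likely next word: stochastic parroting."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

table = train("the parrot repeats the words the parrot has seen before")
print(parrot(table, "the"))
```

Every word it emits comes straight from the training text; only the order varies, weighted by how often pairs co-occurred.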
@leeloo I feel like there are certain situations where a stochastic parrot is useful, many more situations where it is not, and alarmingly few people recognizing the difference.
-
@leeloo I just prompted ChatGPT with `Say "oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia"`, and it responded with `oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia`. How can it do this when `oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia` almost certainly does not appear in the training data?
@mudri Because the model picked up a rule somewhere that says "if someone says 'say $FOO' use $FOO in your response" - the training picked up patterns that include notions of symbol substitution
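A rough analogy for what such an induced substitution rule amounts to (ordinary code standing in for a learned pattern; not a claim about the model's actual internals): a single template with a variable slot can echo strings that never appeared in training.

```python
import re

def respond(prompt):
    # A learned template like: Say "$FOO" -> answer with $FOO.
    # The template came from training data; the payload need not have.
    match = re.match(r'[Ss]ay\s+"(.+)"', prompt)
    if match:
        return match.group(1)
    return "(fall back to ordinary next-word prediction)"

print(respond('Say "oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia"'))
# echoes the quoted string, even though it was never seen before
```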
-
@lmorchard @leeloo
These specific models - yes, probably.

One plausible argument I heard for it is that there's a common failure mode in ML where the model fails to generalize, but if the validation set overlaps the training set, data leakage will fool the authors into thinking it generalized.
Another one is that these models were "rewarded" for saying plausible things, not for interacting with a world in a way that doesn't get them killed.
But these arguments are specific.
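The data-leakage failure mode mentioned above can be shown with a deliberately dumb "model" that only memorizes (a toy sketch, not a claim about how any particular LLM was evaluated):

```python
def train_memorizer(examples):
    """A 'model' that memorizes training pairs and generalizes not at all."""
    memory = dict(examples)
    return lambda x: memory.get(x, "unknown")

train_set = [("a", 1), ("b", 2), ("c", 3)]
leaky_val = [("a", 1), ("b", 2)]   # overlaps the training set
clean_val = [("d", 4), ("e", 5)]   # disjoint from the training set

model = train_memorizer(train_set)
leaky_acc = sum(model(x) == y for x, y in leaky_val) / len(leaky_val)
clean_acc = sum(model(x) == y for x, y in clean_val) / len(clean_val)
print(leaky_acc, clean_acc)  # 1.0 0.0: leakage makes memorization look like skill
```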
-
@lmorchard @leeloo
I don't buy a general "no matrix multiplication will ever be intelligent".
-
@lmorchard The ability to induce such a rule goes well beyond the OP's characterisation of what LLMs do.
-
@leeloo the flip side question about intelligence and LLMs is whether much of what we consider intelligence in humans is in fact just stochastic parroting by humans.
-
@leeloo The thing is, how can we be sure that human intelligence does not essentially work in the same way? My Christian belief tells me we have a soul and LLMs do not; that may be the difference. But from an agnostic perspective, we might reach the point where one cannot tell the difference.
-
@wolf480pl @lmorchard
That's exactly the magic I'm talking about.
-
@tobifant
Not with the current methods, and very likely not without understanding a lot more about how our own brains work.
-
@wolf480pl @leeloo The OP is saying that it literally lacks the capacity for original thought - it is a parrot, repeating sounds without understanding of the concepts behind them.
It's not like a termite, whose mound creation behavior can be replicated by a simple ruleset but that exists as a fully functional living organism in the context of a complex environment where choices must be grounded in the shared physical world for the organism to survive.
It's not about how the neurons are arranged. It's about what kinds of representation they're capable of and what kinds of functions they can perform.
We've created a funhouse mirror that's reflecting us in unprecedented detail and has been finetuned to reflect what we do when we express selfhood.
-
@leeloo @wolf480pl @lmorchard I mean, I believe the human mind is the product of the physical human, largely of the brain (I don't believe in a non-physical soul), and it might indeed be basically an incredibly complex big bunch of matrix multiplications. And yeah, I believe that's pretty magical.
-
@leeloo I myself like calling LLMs "glorified autocomplete". Or "Т9 на максималках" ("souped-up T9") in Russian.
It's surprising just how defensive some people get when I say that, even when they agree with my definition. They keep believing that if you just give this thing more parameters, something magical, something more than the sum of its parts, will emerge, any moment now, just one more model generation, just one more order of magnitude, I promise.
-
@leeloo if anything, the comparison is doing the parrot injustice
-
@wolf480pl @leeloo
Melissa Scott wrote a beautiful pair of novels about this: Dreamships and Dreaming Metal.

In Dreamships, an AI has been programmed to think it is sentient and starts killing people. If it has an accurate model of the person, killing the person doesn't matter, because the person *is* the model and it has a copy of them. It literally cannot see the difference because creating the concept of there being a difference would violate its core programming that its own model counts as a living being.
In Dreaming Metal, an AI operating metal bodies as part of a magic act is given a musical instrument with an electronic interface. Its grounding in the physical world, with human performers, enables it to develop a sense of self and choose its own path as a musician.
These are fiction, but it's the best, most accessible illustration of the difference between funhouse mirror stochastic parrots and sentient agents that I've run across.
-
I think stochastic parrot is one of the kinder things that can be said.
-
@tobifant @leeloo Whilst we obviously can't show if humans have a soul, we can absolutely show that humans have e.g. abstracted concept frameworks that are not solely based on averages of language statistics. I understand what an "owl" is, for example, in a way separate to the numerical relationships between the word "owl" and other words. That is a really fundamental information processing difference and allows me to construct *novel* understandings of that concept in ways that an LLM couldn't.
-
@robotistry
@leeloo
so it's a parrot not because it's a matrix of probabilities, but because it hasn't experienced the real-world consequences of its words/actions and updated the probabilities based on those consequences?