Google DeepMind Paper Argues LLMs Will Never Be Conscious
Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.”
404 Media (www.404media.co)
the argument broadly boils down to the point that any AI system is ultimately “mapmaker-dependent,” meaning it “requires an active, experiencing cognitive agent”—a human—to “alphabetize continuous physics into a finite set of meaningful states.” In other words, it needs a person to first organize the world in a way that is useful to the AI system—for example, the way armies of low-paid workers in Africa label images to create training data for AI.
“You have many other motivations as a human being. It's a bit more complicated than that, but all of those spring from the fact that you have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that,” Jäger told me. “An LLM doesn't do that. It's just a bunch of patterns on a hard drive. Then it gets prompted and it runs until the task is finished and then it's done. So it doesn't have any intrinsic meaning. Its meaning comes from the way that some human agent externally has defined a meaning.”
oh hey, it's also the same thing I had concluded on my own
@404mediaco another way to put this could be
to have a consciousness you need a motivation of your own, something that drives you from inside... and for that, in turn, you need something to win or to lose, to stand on your own and be able to act individually

to have a consciousness you need a heart