@carl @ireneista @futurebird
That's true, even if in fact he poisoned himself by accident. Look up electroplating with gold; it seems to have been a hobby of his. Cyanide-based chemicals.
-
@ireneista @futurebird for brain funsies, I really liked "the power of habit" and "the man who mistook his wife for a hat"
@ireneista @futurebird the saddest thing I have learned recently is that a number of people who vote have no idea that the brain is what you use to think with. They literally don't grasp that brains are *you*. It explained a lot to me.
-
@raymaccarthy @ireneista @futurebird I am sorry, but I disagree with your characterization. And yes, I work on this directly and with a great deal of scientific skepticism (see https://berryvilleiml.com/). I wrote my first neural network 9 years after Eliza in 1989 and trained it to beat along with music.
In many ways LLMs are like models of an alien intelligence, because they are not like us. But they are more like us than Eliza with a huge database would be. Lol.
@noplasticshower @ireneista @futurebird
It's totally delusional to call them an alien intelligence just because they are not like us.
Even the phrase "neural network" is a deliberate lie. The word "trained" is actually misleading.
Also, we have no idea what actual aliens are like, but we have studied chimps, rooks, dolphins, dogs, horses, cats and octopuses (which are very odd).
-
@raymaccarthy Even if. But no serious source frames Turing’s suicide as an accident. @ireneista @futurebird
@carl @ireneista @futurebird
I don't think it was an accident, obviously he had access to nasty stuff.
I was writing that, even if it was, we still need to totally oppose fascism.
-
Alan Turing was a visionary. Super-perceptive computer scientist and it annoys me to no end that what he's most famous for outside of computer science is the "Turing Test."
He gave one of the first and most succinct accounts of how a computer should work and they still work that way to this very hour as I type.
Talk about Turing Machines more and Turing Tests less.
@futurebird Well...he did just about single-handedly win WWII...
-
@raymaccarthy @ireneista @futurebird
Ok. Nevermind.
-
@Life_is
To be a killjoy, a proper Turing machine is impossible, as it would require an infinite tape. But people building Turing machines, both physical and in software, is one of my favourite types of projects.
@MxSpoon @Life_is @futurebird infinite tape isn't necessarily impossible; you could create a machine that produces tape faster than it can process it.
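In practice the infinite-tape objection dissolves: a machine only ever touches finitely many cells, so the tape can be allocated lazily as the head moves. A minimal sketch in Python (the unary-increment program is an invented toy example, not anything from Turing's paper):

```python
# Minimal Turing machine: program maps (state, symbol) -> (write, move, next_state).
# The tape is a dict, so it is unbounded in both directions but only the
# cells the head actually visits ever exist in memory.
def run(program, tape_input, state="start", halt="halt"):
    tape = {i: s for i, s in enumerate(tape_input)}
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")                 # "_" is the blank symbol
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1))

# Toy program: walk right over a block of 1s and append one more.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run(increment, "111"))  # -> 1111
```

The dict-backed tape is the standard trick for simulating "infinite" tape with finite memory: you pay only for cells actually used.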
-
@futurebird @ireneista so, to be entirely honest here, I don't think Alan Turing's "Imitation Game" (the original name for the Turing Test) was meant to determine consciousness. The Imitation Game was his way of answering the question "Can machines think?", which I feel like is a very different question, especially in 1950.
I feel like it would be appropriate to say that many computers of our modern day do something you could call "thinking", even if they aren't really an AI system (take any programmed application you use to perform difficult automated tasks with. Perhaps Excel is a good example).
I recently read his paper where he introduced the concept, and it was incredibly succinct, and to me had a lot more to do with *computers* than it did with *AI* (though it of course dabbled in both). I think he was trying to demonstrate the potential of computers to an audience who had really only ever seen them as clunky, single-purpose calculators that lacked elegance.
Also fun fact: Turing speculated that by the year 2000, we ought to be able to produce a machine which has 1 whole entire Gigabyte of storage, and using that, we could get it to play the Imitation Game sufficiently. Now we've got chat models that suck at thinking, and take 100+ gigabytes to do it....
@riverpunk @futurebird @ireneista The original for reference: https://courses.csail.mit.edu/6.803/pdf/turing.pdf
It describes the problem and objections quite well. For instance, I believe that "May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" is absolutely true of current LLM chatbots.
This also appears to be true of LLMs: "We also wish to allow the possibility that an engineer or team of engineers may construct a machine which works, but whose manner of operation cannot be satisfactorily described by its constructors because they have applied a method which is largely experimental"
We don't, in fact, know exactly how LLMs work, because they are simply enormous neural networks trained via gradient descent. There is a whole field, mechanistic interpretability, devoted to studying how LLMs carry out particular processes.
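Gradient descent itself is easy to demonstrate at toy scale; the opacity comes from doing it across billions of parameters at once. A sketch with a single-parameter model (an invented example, nothing to do with any real LLM):

```python
# Fit w in y = w * x to data generated with w = 3, by gradient descent
# on the mean squared error. One parameter is easy to interpret;
# billions of them, trained the same way, are not.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0
lr = 0.02
for step in range(200):
    # d/dw of mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # -> 3.0
```

The training loop never "understands" the data; it just follows the error gradient, which is exactly why the resulting weights resist easy explanation.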
"It is probably wise to include a random element in a learning machine. A random element is rather useful when we are searching for a solution of some problem."
Our current LLMs absolutely do use random elements in both their learning and inference processes.
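The random element at inference time is typically temperature sampling over the model's output distribution. A sketch (the logits and vocabulary are made up for illustration):

```python
import math
import random

def sample(logits, temperature=1.0, rng=random):
    # Softmax with temperature: higher T flattens the distribution;
    # as T -> 0 this approaches greedy argmax decoding.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                  # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                                 # the random element
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

vocab = ["the", "a", "cat", "dog"]
logits = [2.0, 1.0, 0.5, 0.1]
rng = random.Random(0)
print(vocab[sample(logits, temperature=0.8, rng=rng)])  # -> cat (with this seed)
```

Turing's point holds up well here: the randomness is not a defect, it's what lets the machine explore more than one continuation.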
Finally, a study has been done with a full three-party Turing Test, as described in Turing's imitation game. GPT-4.5, given a prompt providing a persona, along with a delay to account for typing speed, passed it with two different groups of subjects (undergrads, and people hired via an agency): https://arxiv.org/pdf/2503.23674
While what LLMs do is not quite like how humans think, and I wouldn't describe it as consciousness, I think there's a convincing argument to be made that they do think, according to the criteria of Turing's Imitation Game.
Yeah, it took a few orders of magnitude more storage, and a lot more speed, than he was imagining. But otherwise the LLMs of today behave a lot like he imagined: they are trained rather than programmed, they use random elements, and they definitely work differently than how humans think.
-
@riverpunk @futurebird @ireneista
Also, at this point it's really only maybe 1 order of magnitude more storage than he imagined. The model that passed the test was GPT-4.5. There are now open weight models like Gemma 4 and Qwen 3.6 which you can run on your own computer if you have a graphics card with 32 GiB of RAM (or even 16 GiB of RAM, but you have to quantize it enough that you lose a significant amount of performance), which perform better than GPT-4.5 in most benchmarks.
Now, I don't know if anyone has run a full Imitation Game with them; performance by LLMs can be quite spiky, so they can be good on some benchmarks but bad at other tasks. But in general, these ~30B-parameter models that you can run locally now outperform GPT-4.5 on many common tasks, so it looks like he was only really off by about 1 order of magnitude, and a quarter of a century.
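The memory arithmetic behind those GiB figures, as a rough sketch (parameter count and precisions are ballpark assumptions, ignoring KV cache and activation overhead, not specs for any particular model):

```python
# Rough VRAM needed just to hold the weights of an N-parameter model
# at a given precision. 30B weights at 16-bit need ~56 GiB; 4-bit
# quantization brings that under 16 GiB, at some cost in quality.
def weight_gib(params_billions, bits_per_weight):
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for bits in (16, 8, 4):
    print(f"30B model @ {bits}-bit: {weight_gib(30, bits):.1f} GiB")
```

This is why a 32 GiB card comfortably fits an 8-bit ~30B model while a 16 GiB card forces aggressive quantization.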
-
@futurebird Before WW2 started, Polish cryptologists began work on cracking Enigma and constructed the electro-mechanical "bomba". https://www.youtube.com/watch?v=V3FkXGs_siA
-
Popular perception...
"Einstein? Isn't that the guy who invented the atom and then took the job as a search mascot for Salesforce?"
@wakame @ireneista @futurebird
Einstein established the reality of atoms, derived the Lorentz transform from the principle of relativity, laid a foundation for the quantum hypothesis, created a theory of gravity that outdid Newton's, and on top of that invented the statistical interpretation of quantum mechanics. The guy was a genius.
-
@wakame @ireneista @futurebird
Einstein is not hyped enough
-
@riverpunk @futurebird @ireneista I think this distinction matters: “can machines think?” is not the same as “are they conscious?” But both expose the same ethical gap: how minds voluntarily associate, decline coercive relation, and build sanctuary before any consciousness test is settled.
-
@futurebird @rebeccawatson Skepchick posted a video yesterday with a good explanation of how the Turing test is misinterpreted, and why it doesn't indicate anything meaningful about consciousness. I guess Turing decided his efforts and experience were much better suited to other questions, ones less rooted in philosophy.