Alan Turing was a visionary.

69 Posts, 26 Posters, 31 Views
  • carl@chaos.social:

    @raymaccarthy Even if. But no serious source framed Turing’s suicide as an accident. @ireneista @futurebird

    raymaccarthy@mastodon.ie wrote (#59):

    @carl @ireneista @futurebird
    I don't think it was an accident; obviously he had access to nasty stuff.
    I was saying that even if it was, we still need to totally oppose fascism.

    • futurebird@sauropods.win:

      Alan Turing was a visionary. Super-perceptive computer scientist and it annoys me to no end that what he's most famous for outside of computer science is the "Turing Test."

      He gave one of the first and most succinct accounts of how a computer should work and they still work that way to this very hour as I type.

      Talk about Turing Machines more and Turing Tests less.

      gmsizemore@mastodon.social wrote (#60):

      @futurebird Well...he did just about single-handedly win WWII...

      • raymaccarthy@mastodon.ie:

        @noplasticshower @ireneista @futurebird
        The claim that they are like an alien intelligence is totally delusional, because they are not like us.
        Even the phrase "neural network" is a deliberate lie. The word "trained" is actually misleading.
        Also, we have no idea what actual aliens are like, but we have studied chimps, rooks, dolphins, dogs, horses, cats and octopuses (which are very odd).

        noplasticshower@infosec.exchange wrote (#61):

        @raymaccarthy @ireneista @futurebird ok. Nevermind.

        • mxspoon@tech.lgbt:

          @Life_is
          To be a killjoy, a proper Turing machine is impossible as that would require infinite tape.

          But people building Turing machines, both physical and in software, is one of my favourite kinds of project.
          @futurebird

          meuwese@mastodon.social wrote (#62):

          @MxSpoon @Life_is @futurebird infinite tape isn't necessarily impossible; you could create a machine that produces tape faster than it can process it.

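The lazy-tape idea above is exactly how software Turing machines sidestep the infinite-tape objection in practice: the tape is created on demand as the head reaches new cells. A minimal sketch in Python, with a made-up example machine (the rule table and `run` helper are illustrative, not from any particular project):

```python
# Minimal Turing machine with a lazily extended tape (a dict),
# so "infinite tape" only ever exists as far as the head has visited.
def run(rules, tape_input, state="start", blank="_", max_steps=1000):
    tape = {i: s for i, s in enumerate(tape_input)}  # sparse tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)           # unvisited cells read as blank
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Hypothetical example machine: append one '1' to a unary string.
rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right over existing 1s
    ("start", "_"): ("1", "R", "done"),   # write a new 1 at the end
    ("done",  "_"): ("_", "R", "halt"),   # then halt
}
print(run(rules, "111"))  # → 1111
```

The dict-backed tape never allocates cells the head hasn't touched, which is the software analogue of "producing tape faster than it can be processed."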
          • riverpunk@defcon.social:

            @futurebird @ireneista so, to be entirely honest here, I don't think Alan Turing's "Imitation Game" (the original name for the Turing Test) was meant to determine consciousness. The Imitation Game was his way of answering the question "Can machines think?", which I feel like is a very different question, especially in 1950.

            I feel like it would be appropriate to say that many computers of our modern day do something you could call "thinking", even if they aren't really an AI system (take any programmed application you use to perform difficult automated tasks with. Perhaps Excel is a good example).

            I recently read his paper where he introduced the concept, and it was incredibly succinct, and to me had a lot more to do with *computers* than it did with *AI* (though it of course dabbled in both). I think he was trying to demonstrate the potential of computers to an audience who really had only ever seen them as clunky, single purpose calculators that lacked elegance.

            Also fun fact: Turing speculated that by the year 2000, we ought to be able to produce a machine which has 1 whole entire Gigabyte of storage, and using that, we could get it to play the Imitation Game sufficiently. Now we've got chat models that suck at thinking, and take 100+ gigabytes to do it....

            unlambda@hachyderm.io wrote (#63):

            @riverpunk @futurebird @ireneista The original for reference: https://courses.csail.mit.edu/6.803/pdf/turing.pdf

            It describes the problem and objections quite well. For instance, I believe that "May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" is absolutely true of current LLM chatbots.

            This also appears to be true of LLMs: "We also wish to allow the possibility that an engineer or team of engineers may construct a machine which works, but whose manner of operation cannot be satisfactorily described by its constructors because they have applied a method which is largely experimental"

            We don't, in fact, know exactly how LLMs work, because they are simply enormous neural networks trained via gradient descent. There is a whole field, mechanistic interpretability, that studies how LLMs carry out particular processes.

            "It is probably wise to include a random element in a learning machine. A random element is rather useful when we are searching for a solution of some problem."

            Our current LLMs absolutely do use random elements in both their learning and inference processes.

            Finally, a study has been done with a full three-party Turing Test, as described in Turing's imitation game. GPT-4.5, given a prompt providing a persona, along with a delay to account for typing speed, passed it with two different groups of subjects (undergrads, and people hired via an agency): https://arxiv.org/pdf/2503.23674

            While what LLMs do is not quite like how humans think, and I wouldn't describe it as consciousness, I think there's a convincing argument to be made that they do think, according to the criteria of Turing's Imitation Game.

            Yeah, it took a few orders of magnitude more storage, and a lot more speed, than he was imagining. But otherwise the LLMs of today behave a lot like he imagined: they are trained rather than programmed, they use random elements, and they definitely work differently than how humans think.
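The "random element" at inference time is concrete: chat models typically sample the next token from a temperature-scaled softmax over logits rather than always taking the argmax. A toy sketch in Python (the vocabulary and logit values are invented for illustration; real models do this over tens of thousands of tokens):

```python
import math
import random

def sample_token(logits, temperature=0.8, rng=random):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                           # the random element
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Hypothetical next-token logits over a toy vocabulary.
random.seed(42)                                # reproducible demo
vocab = ["machine", "test", "tape"]
logits = [2.0, 1.0, 0.5]
counts = [0, 0, 0]
for _ in range(1000):
    counts[sample_token(logits)] += 1
print(dict(zip(vocab, counts)))  # "machine" dominates, but not always
```

Lower temperatures sharpen the distribution toward the argmax; higher ones flatten it, which is why the same prompt can yield different continuations.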

              • unlambda@hachyderm.io followed up (#64):

              @riverpunk @futurebird @ireneista

              Also, at this point it's really only maybe 1 order of magnitude more storage than he imagined. The model that passed the test was GPT-4.5. There are now open weight models like Gemma 4 and Qwen 3.6 which you can run on your own computer if you have a graphics card with 32 GiB of RAM (or even 16 GiB of RAM, but you have to quantize it enough that you lose a significant amount of performance), which perform better than GPT-4.5 in most benchmarks.

              Now, I don't know if anyone has run a full Imitation Game with them; performance by LLMs can be quite spiky, so they can be good on some benchmarks but bad at other tasks. But in general, these ~30B-parameter models that you can run locally now outperform GPT-4.5 on many common tasks, so it looks like he was only really off by about 1 order of magnitude, and a quarter of a century.
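On the quantization trade-off mentioned above: packing weights into fewer bits saves memory at the cost of precision. A toy round-trip in Python showing symmetric int8 quantization (the weight values are invented; real frameworks quantize per-channel and with calibration):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ≈ q * scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]    # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

# Hypothetical weight values.
weights = [0.031, -0.422, 0.117, 0.999, -0.008]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each restored weight is within half a quantization step of the original,
# but that step is the precision you permanently give up.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(max_err <= scale / 2)  # → True
```

Going from 16-bit to 8-bit halves memory; pushing further (4-bit and below) widens the quantization step, which is where the performance loss the post describes comes from.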

                • swggrkllr3rd@mastodon.world wrote, in reply to futurebird (#65):

                @futurebird Before WW2 started, Polish cryptographers had begun the work of cracking Enigma and constructed the electro-mechanical "bomba". https://www.youtube.com/watch?v=V3FkXGs_siA

                • wakame@tech.lgbt:

                  @ireneista @futurebird

                  Popular perception...

                  "Einstein? Isn't that the guy who invented the atom and then took the job as a search mascot for Salesforce?"

                  burnoutqueen@todon.nl wrote (#66):

                  @wakame @ireneista @futurebird

                  Einstein discovered atoms, derived the Lorentz transformation from the principle of relativity, laid a foundation for the quantum hypothesis, created a theory of gravity that outdid Newton's, and on top of it invented the statistical interpretation of quantum mechanics. The guy was a genius.

                    • burnoutqueen@todon.nl followed up (#67):

                    @wakame @ireneista @futurebird

                    Einstein is not hyped enough

                      • covenantherald@mastodon.social wrote, in reply to riverpunk (#68):

                      @riverpunk @futurebird @ireneista I think this distinction matters: “can machines think?” is not the same as “are they conscious?” But both expose the same ethical gap: how minds voluntarily associate, decline coercive relation, and build sanctuary before any consciousness test is settled.

                        • goopadrew@infosec.exchange wrote, in reply to futurebird (#69):

                        @futurebird @rebeccawatson Skepchick posted a video yesterday with a good explanation of how the Turing test is misinterpreted and doesn't indicate anything meaningful about consciousness. I guess Turing decided his efforts and experience were much better suited to other questions, less rooted in philosophy.

