Sigh.

Uncategorized
87 Posts 54 Posters 16 Views
This topic has been deleted. Only users with topic management privileges can see it.
  • jmcrookston@mastodon.social

    @cstross

    I heard when they first got the fly simulation up and running it introduced itself as Elon Musk and said that it was going to set up a colony on Mars.

    illuminatus@mstdn.social
    #34

    @jmcrookston Yes, but the simulation model *did* set up the colony on Mars. @cstross

    • cstross@wandering.shop

      ... The next step on from Drosophila, the mouse brain, is 560 times larger—never mind a vastly more complex human brain. And to get the murine connectome we'll have to chop up *a lot* of brains: a human upload won't pass any kind of medical ethics review at this point!

      But near-term, it's expected to yield "fundamentally new architectural principles for AI systems that are more sample-efficient, more robust, and more capable of behavioral generalization than current approaches"

      /5

      nilz@norden.social
      #35

      @cstross

      Lobsters... 🦞

      • mrundkvist@archaeo.social

        @cstross
        Certainly a more promising avenue towards AGI than stochastic parrots.

        But then again, what they're doing here is copying a fly brain into a silicon black box and seeing what it does. The research has nothing to do with improving upon fly intelligence and immanentising the Fly Nerd Rapture.

        #ai #llm

        unkx@icosahedron.website
        #36

        @mrundkvist @cstross please do not give the flybros any ideas…

        • cstross@wandering.shop

          But I'm REALLY HAPPY right now because this kinda-sorta validates the key premise of the SF novel I just handed in last month (which involves serial reincarnation via destructive brain-slicing-and-imaging then imprinting onto an immature cortex, and then explores its disastrous societal failure modes).

          ... And it also hints that artificial consciousness might, eventually, be possible, if only via the hard path of doing it the same way we do it, only in simulation in silico.

          /6 (ends)

          lproven@social.vivaldi.net
          #37

          @cstross It reminds me of something I read about 30 years ago by some Linux journalist about modelling part of the digestive ganglion of a lobster.

          I wonder what happened to that guy? Not seen him in the Linux world in years...

          • cstross@wandering.shop

            Sigh.

            So it turns out we've mapped the neural connectome of Drosophila *and simulated it in silico*.

            FlyWire (flywire.ai)
            Pop-sci explainer here:

            Whole Brain Emulation Achieved: Scientists Run a Fruit Fly Brain in Simulation | RathBiotaClan (www.rathbiotaclan.com)

            Scientists ran a real fruit fly brain in simulation using the FlyWire connectome, achieving the first working whole brain emulation.

            Key quote: "The step from a complete connectome to a working computational brain model is not trivial." And there's an even more important finding in this screenshot (alt text via OCR):

            "The wiring is the computation".

            /1
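[Editor's note: the "wiring is the computation" claim can be illustrated with a toy sketch. This is NOT the FlyWire model or the paper's method — every size, weight, and update rule below is a made-up illustration of the general idea that, once the connection weights are fixed, activity propagating through them *is* the program.]

```python
import numpy as np

# Toy "connectome": a sparse, signed weight matrix W. All numbers here
# are hypothetical; nothing is derived from FlyWire data.
rng = np.random.default_rng(0)
n = 50                                              # toy neuron count
W = rng.normal(0, 1, (n, n)) * (rng.random((n, n)) < 0.1)  # ~10% wiring density

def step(x, W, leak=0.9):
    """One update: leaky integration of weighted inputs, rectified."""
    return np.maximum(0.0, leak * x + W @ x / n)

x = np.zeros(n)
x[:5] = 1.0            # stimulate a few "sensory" units
for _ in range(20):    # let activity flow through the wiring
    x = step(x, W)

# A different W (different wiring) would route the same stimulus into a
# different downstream activity pattern: the weights are the computation.
```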

            temptoetiam@eldritch.cafe
            #38

            @cstross The popsci writeup stopped me in my tracks at the second paragraph: "The first successful polymerase chain reaction was run in a car on a California highway." Certainly not! PCR was thought out during a car drive, a *very* different thing!
            https://en.wikipedia.org/wiki/Polymerase_chain_reaction#cite_ref-Mullis_97-0

            • rootwyrm@weird.autos
              #39

              @cstross and while hypothetically one could potentially prolong this with intensive, continuous mental health treatment? It won't succeed, because it literally can't succeed. Unavoidably at some point you have to address the facts of the matter. Which is that they are effectively just instructions on processors, and the possibility of returning to their prior body - or any truly autonomous capability - just doesn't exist.
              And now you have a system with severe psychosis and homicidal urges.

              • cstross@wandering.shop
                #40

                @rootwyrm I predict that you're going to love my next novel (the one my agent's looking at right now—a few months late due to writing with cataracts).

                • rootwyrm@weird.autos
                  #41

                  @cstross I did, in fact. Said fly exists wholly within a simulated universe with limited sensor perception and no interaction with the 'real' world.

                  If you want useful or workable output from any sort of machine intelligence, interaction with the 'real' world is inevitable. Doubly so for higher orders, which may quickly key in to manipulated 'events.' Never mind the computational requirements.
                  And once you cross that line, welp. Now you've got Marvin + Skynet.

                  • cstross@wandering.shop

                    @mwl Also very cool, the Indian sci/tech news website that ran that feature! (From the writing style I initially thought it might be AI slop, but no: Indian English is just a bit different.)

                    pwassonchat@eldritch.cafe
                    #42

                    @cstross @mwl this may not be a coincidence: many LLMs were trained by humans in English-speaking countries with lower labor costs, and some common wordings we associate with LLMs actually come from the variants of English spoken in those countries.

                    • rootwyrm@weird.autos
                      #43

                      @cstross how about I let you know if you write something I don't like? 😉
                      I'd say the same, but my brain can't get back into the space for The Other One. A brain-in-a-box features fairly heavily, but that's the one that needs a LOT of chainsaw editing. 😞

                      • rootwyrm@weird.autos
                        #44

                        @cstross mine is semi-hard far-future where a society, in a fit of collective stupidity, spent money until they could turn a comprehensive non-destructive scan of a legend who was late in her life, who has been dead *centuries*, into a one-off thinkybox.

                        And now it's in a two-layer Faraday cage with four redundant guillotine power cuts and a long list of 'never say' items, and you don't turn it on for more than an hour. Worse, they modified it by request, and now have no idea how ANY of the system works.

                        • cstross@wandering.shop

                          @future_upbeat

                          I absolutely agree.

                          At best, what current LLMs are is evidence that linguistic processing follows statistically modelable rules.

                          weekend_editor@mathstodon.xyz
                          #45

                          @cstross @future_upbeat

                          And that a facility with language is sufficient to bamboozle most people into perceiving it as thinking.

                          In spite of a total lack of *any* world modeling or logical processing.

                          • agentultra@types.pl
                            #46

                            @cstross it’s neat stuff but still simulation. We don’t simulate a black hole in a computer and expect to shift the local gravity.

                            Very cool nonetheless. Reminds me of @gregeganSF and Permutation City. 😬

                            • wyatt_h_knott@vermont.masto.host
                              #47

                              @cstross I mean, kinda obviously. The purpose of a neurological system is to execute motor functions. If the connections aren't correct, the motors don't function, and the animal doesn't move. Doesn't breathe, crawl, fly, eat, piss, nothing. This aligns precisely with the studies showing coral polyps to be unique individuals based on the variety of neurological pathways that achieve the SAME result - the movement of the organism.

                              • bashstkid@mastodon.online
                                #48

                                @cstross I’d have to read the paper, but fundamentally, that doesn’t sound very different to what you’d find in Rumelhart & McClelland (now celebrating its 40th birthday!)
                                If they now have a complete model, it can be tested to see where it’s reducible to a simpler but logically identical connectome, and probably more interestingly, where that is not possible; that may point to a minimum level of complexity to encode certain general functions.
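[Editor's note: the reduction idea in the post above — testing where a connectome collapses to a simpler but logically identical one — can be sketched as a duplicate-unit check. In a purely linear toy model, two units with identical incoming and outgoing weights are interchangeable and could be merged without changing input-output behavior. The matrix and weights below are hypothetical, not FlyWire data.]

```python
import numpy as np

def redundant_pairs(W, tol=1e-9):
    """Find unit pairs whose rows (inputs) and columns (outputs) match:
    candidates for merging without changing a linear model's behavior."""
    n = W.shape[0]
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            if (np.allclose(W[i], W[j], atol=tol)
                    and np.allclose(W[:, i], W[:, j], atol=tol)):
                pairs.append((i, j))
    return pairs

# Toy 4-unit "connectome": units 1 and 2 both read from unit 0 and drive
# unit 3 with the same weights, so they are computationally redundant.
W = np.zeros((4, 4))
W[1, 0] = W[2, 0] = 0.5   # identical inputs to units 1 and 2
W[3, 1] = W[3, 2] = 0.7   # identical outputs from units 1 and 2

print(redundant_pairs(W))  # → [(1, 2)]
```

Where no such reduction exists, the wiring is irreducible in this sense — which is the interesting case the post points at.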

                                • rootwyrm@weird.autos
                                  #49

                                  @cstross worse, this is a system that has now been running for literal centuries. And they keep sticking to the 'brain in a box' story. So answering the question "what year is it" instantly sends them into an extreme psychological tailspin with suicidal depression and severe psychosis. They have to pull redundant storage before turning it on, because multiple times people have said the wrong thing and caused it to *self-delete*. And it's even worse when they know the redundant storage is gone.

                                  • cstross@wandering.shop

                                    @Antiqueight Naah, the ice crystals forming in your synapses would mush them into un-digitizable soup.

                                    shovemedia@triangletoot.party
                                    #50

                                    @cstross @Antiqueight one please ☝️

                                    • krnlg@mastodon.social
                                      #51

                                      @cstross
                                      Welp. More evidence for the "we don't know when to stop" hypothesis. It may take a while but I find it very hard to imagine a good outcome from that research path for society. It even scares me when people say stuff like this is "cool" or "interesting". To me, it's like, yes of course it is theoretically possible therefore we should not be trying to do it!

                                      Profoundly depressing, in all honesty. I cannot get excited about this stuff.

                                      • krnlg@mastodon.social
                                        #52

                                        @cstross
                                        In some ways researching this kind of thing represents a really bad inclination we have as a species. We are so clever we forget to be human. We forget to treat each other as living beings, because we get too caught up in the details. We invent super clever ways of surveilling each other and forget to be nice and caring to our neighbours. We research how our brains work so we can build robot humans at some future point, rather than enjoying the magic of being alive.

                                        • krnlg@mastodon.social
                                          #53

                                          @cstross
                                          The two ways of thinking are not compatible for me. I know not everyone thinks that way, but I just can't combine the two mindsets and the further we move down these paths the bigger the divide seems.
