As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.

82 Posts, 32 Posters
  leeloo@chaosfem.tw wrote:

    As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.

    It's literally a description of how they work.

    The so-called training data is used to build a huge database of words and the probability of them fitting together.

    Stochastic because the whole thing is statistics.
    Parrot because the answer is just repeating the most probable word combinations from its training dataset.

    Calling an LLM a stochastic parrot is like calling a car a motorised vehicle with wheels. It doesn't say anything about cars being good or bad. It does, however, take away the magic. So if you feel the need to defend AI when you hear the term stochastic parrot, consider that you may have elevated LLMs to a god-like status, and that's why you go on the defensive when the magic is dispelled.
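[Editor's note] The "huge database of words and the probability of them fitting together" idea in the post above can be sketched as a toy bigram Markov chain. This is an illustration of the *description*, not of transformer internals; real LLMs learn dense neural representations rather than a literal table of counts, and the corpus here is invented for the example.

```python
import random
from collections import defaultdict

# Count how often each word follows each other word ("the probability
# of them fitting together"), then sample continuations from the counts.
corpus = "the parrot repeats the words the parrot has heard".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a continuation in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]  # the "stochastic" part

word = "the"
output = [word]
for _ in range(5):
    if word not in counts:  # dead end: this word never appeared mid-sentence
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Every word the sampler emits is recombined from the training corpus (the "parrot" part); the randomness in `random.choices` is the "stochastic" part.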

    uriel@x.keinpfusch.net wrote (#63):

    @leeloo

    Oh, the good old “I was misunderstood.” I genuinely hope your communication skills improve someday, so you can finally express your ideas clearly.

    uriel@x.keinpfusch.net wrote:

      @leeloo

      nope to the bunch of bullshit you wrote under the assumption that a VLLM is a Hidden Markov Model, aka a "stochastic parrot".

      leeloo@chaosfem.tw wrote (#64):

      @uriel
      What I'm saying is that you are beating a strawman of your own making and putting words in my mouth.

      wolf480pl@mstdn.io wrote:

        @robotistry
        @leeloo
        so it's a parrot not because it's a matrix of probabilities, but because it hasn't experienced the real-world consequences of its words/actions and updated the probabilities based on those consequences?

        robotistry@mstdn.ca wrote (#65):

        @wolf480pl @leeloo No. Maybe this will help.

        0: one action, no choice (clockwork automaton, wind-up toy)
        1: different actions, no choices (RC car)
        2: choice, no plan (reactive robot)
        3a: plan, no on-line or off-line learning (adaptive robot)
        3b: plan, no on-line learning (same number for 3a and 3b because these are effectively the same when operating)
        4: on-line learning - but only what and how it has been told
        5a: ability to spontaneously generate new categories of output without being explicitly asked or told to do so (WBEAT)
        5b: ability to spontaneously identify new categories of the same kinds of input WBEAT
        6: ability to spontaneously identify new kinds of things to learn WBEAT
        7: ability to spontaneously identify new ways to learn WBEAT
        8: ability to choose new things to learn WBEAT

        LLMs that you're not training are category 3b. They are static machines, responding to your input like an elevator responding to a button push.

        LLMs that learn are category 4.

        1/2


          robotistry@mstdn.ca wrote (#66):

          @wolf480pl @leeloo Examples:

          Category 5a: a text-based LLM that spontaneously, without being asked, learns to output musical notation.

          Category 5b: a text-based LLM that spontaneously, unprompted, without being asked, learns that filenames can be used as input.

          Category 6: a text-based LLM that spontaneously, without being asked (directly or indirectly) learns that it can output ascii images or generate sounds instead of sentences.

          Category 7: a text-based LLM spontaneously changes its underlying code so that it can learn how to write novels by memorizing and imitating performances instead of via a matrix of probabilities (fundamental change to its internal capabilities)

          Category 8: a text-based LLM chooses when to interact with the world.

          (The original categories I developed years ago were based on what the system can modify: its weights, how many weights, what kinds of weights, etc. I think this might be clearer?)

          I don't think even Moltbook is showing anything above 4.
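[Editor's note] As a rough sketch, the autonomy scale from the two posts above could be written as a small ordered enum. The level names are paraphrases of the posts' own labels, not standard terminology, and levels 5a through 8 are collapsed into one entry for brevity.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Informal paraphrase of the autonomy scale described above."""
    CLOCKWORK = 0        # one action, no choice (wind-up toy)
    REMOTE_CONTROL = 1   # different actions, no choices (RC car)
    REACTIVE = 2         # choice, no plan (reactive robot)
    PLANNING_STATIC = 3  # plan, no on-line learning (3a/3b collapsed)
    ONLINE_LEARNING = 4  # learns, but only what and how it has been told
    SPONTANEOUS = 5      # levels 5a-8: self-chosen outputs, goals, methods

def classify_llm(being_trained: bool) -> Autonomy:
    # Restates the posts' claim: a frozen LLM is a static machine (3b);
    # one that is actively being trained sits at level 4.
    return Autonomy.ONLINE_LEARNING if being_trained else Autonomy.PLANNING_STATIC

print(classify_llm(False).name)  # PLANNING_STATIC
```

Because `IntEnum` members are ordered, the claim "nothing above 4" becomes a simple comparison against `Autonomy.ONLINE_LEARNING`.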


            troed@swecyb.com wrote (#67):

            @leeloo A much better answer is "So are humans".

            (according to everything we've so far been able to document regarding our own processes)


              leeloo@chaosfem.tw wrote (#68):

              @troed
              The part of how our brain works that we understand is only the part simple enough for us to understand.

              The rest, we have no clue about.

              Replicating the simple parts and pretending that will get us anywhere close to intelligence is the kind of magic I'm talking about.


                troed@swecyb.com wrote (#69):

                @leeloo We don't know that. It's equally likely that we simply believe there must be some kind of "magic" in our brains when there isn't.

                From a physics standpoint there can be no magic - the brain is just a large neural network with various inputs (wind blowing on arm hair, etc.) that result in outputs (mouth moving).


                  leeloo@chaosfem.tw wrote (#70):

                  @troed
                  Be specific. "We don't know that" does not tell me anything about which part of my reply you are referring to.

                  Especially as my comment was a combination of obvious statements and claims that we don't know.


                    troed@swecyb.com wrote (#71):

                    @leeloo We don't know that there are other things happening in the brain than what we have already documented.

                    The belief that there's "magic" happening in the brain is part of the argument between dualists and monists - that there's somehow a "mind" that's separate from the body. So far we've found nothing to support such a claim.

                    (My own studies in neuroscience are a decade old but I do follow the discourse)

                      dragonfrog@mastodon.sdf.org wrote:

                      @lmorchard @leeloo @wolf480pl I guess part of it is maybe that I don't think intelligence is some exclusively human thing. LLMs clearly aren't human-like intelligent. I'm personally confident they're not as intelligent as any primate.

                      But are they as intelligent as a shrimp? I think they've got to be more intelligent than a mosquito.

                      I wouldn't turn to a shrimp for advice but they're not *without* intelligence.

                      wolf480pl@mstdn.io wrote (#72):

                      @dragonfrog
                      I think an ML model trained to speedrun a platformer game is intelligent like a mosquito, but LLMs probably aren't.
                      @lmorchard @leeloo


                        leeloo@chaosfem.tw wrote (#73):

                        @troed
                        If we are nothing but input -> math -> output, then human rights don't matter. Murdering someone is no different from switching a device off.

                        If that's the world view you want to argue, that's on you.

                        It also assumes that there is nothing left to discover, which has been a mistake every time anyone has made that claim in any other field. Are humans really that much simpler than the rest of the universe?

                        To be clear, I did not say that there is any kind of magic involved in human intelligence. I said that the part of "AI" that people get defensive over when we reduce it to math and software is the magic, because unlike humans - where, I must remind you, I said we don't know - we know exactly what those datacenters are doing: math and software.


                          troed@swecyb.com wrote (#74):

                          @leeloo I'm not arguing for a worldview - I'm just describing the current state of science on the topic.


                            leeloo@chaosfem.tw wrote (#75):

                            @troed
                            Yet you couldn't simply let my claim stand that we don't know what lies beyond the current state.


                              troed@swecyb.com wrote (#76):

                              @leeloo My simple input was that "Humans are too!" is an excellent way to answer people bringing up stochastic parrots. Saying that there might be things we don't know about is a bit hand-wavey - it's not an actual argument based in science.


                                leeloo@chaosfem.tw wrote (#77):

                                @troed
                                Now you are back to arguing a world view that would allow murder.


                                  troed@swecyb.com wrote (#78):

                                  @leeloo I like science. I believe that if more discussions were based in facts the world would be a lot better. I apologize if you feel differently.


                                    leeloo@chaosfem.tw wrote (#79):

                                    @troed
                                    Using science to excuse promoting a world view that is fine with murder - now you are starting to sound German...

                                    (1930s/'40s German, for those who need everything spelled out.)


                                      troed@swecyb.com wrote (#80):

                                      @leeloo Or maybe I'm advocating a world that won't allow slavery of digital consciousnesses?

                                      Linked: "The Coming Cognitive Disbelief" - "Both humans and large language models (LLMs) are fundamentally statistical pattern-matching systems with no inherent consciousness or 'magic'" (blog.troed.se)


                                        leeloo@chaosfem.tw wrote (#81):

                                        @troed
                                        Ah, so you are the exact kind of person I was talking about in my first post.


                                          troed@swecyb.com wrote (#82):

                                          @leeloo Probably - and you're the kind of person who believes humans contain magic fairy dust 😉
