CIRCLE WITH A DOT

As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.

82 Posts, 32 Posters
leeloo@chaosfem.tw wrote:

    As a software developer who took an elective in neural networks - when people call LLMs stochastic parrots, that's not criticism of their results.

    It's literally a description of how they work.

    The so-called training data is used to build a huge database of words and the probability of them fitting together.

    Stochastic because the whole thing is statistics.
    Parrot because the answer is just repeating the most probable word combinations from its training dataset.

    Calling an LLM a stochastic parrot is like calling a car a motorised vehicle with wheels. It doesn't say anything about cars being good or bad. It does, however, take away the magic. So if you feel a need to defend AI when you hear the term stochastic parrot, consider that you may have elevated them to a god-like status, and that's why you get defensive when the magic is dispelled.
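[Editor's note: the "huge database of probabilities" picture can be made concrete with a toy bigram model. This is a deliberately crude sketch — real LLMs are transformers over token embeddings, not literal lookup tables — but it shows where "stochastic" (random sampling) and "parrot" (only ever emitting word pairs seen in training) come from.]

```python
import random
from collections import defaultdict, Counter

def train_bigram(corpus: str):
    """Count, for each word, how often each next word follows it."""
    words = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start: str, length: int, seed: int = 0) -> str:
    """'Stochastic': sample randomly. 'Parrot': only over pairs
    that actually occurred in the training corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        choices, freqs = zip(*followers.items())
        out.append(rng.choices(choices, weights=freqs)[0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat the dog sat on the rug")
print(generate(model, "the", 5))
```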

lritter@mastodon.gamedev.place (#2) wrote:

    @leeloo i just think it's unfair to parrots 😉

robinp@mastodon.social (#3) wrote:

      @leeloo Nitpicking, but an important bit: not words, but word fragments (this is how you can get words as output that were never seen during training).
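[Editor's note: a greedy longest-match tokenizer over a toy, invented vocabulary — a simplification of the BPE/WordPiece schemes real LLM tokenizers use — shows how a model can emit a "word" it never saw whole.]

```python
def tokenize(text: str, vocab: set) -> list:
    """Greedy longest-match subword tokenization (toy version of
    BPE/WordPiece-style tokenizers)."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

# A whole word the "training data" never contained still comes out
# cleanly, assembled from fragments:
vocab = {"un", "believ", "able", "parrot", "s"}
print(tokenize("unbelievable", vocab))  # → ['un', 'believ', 'able']
```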

wolf480pl@mstdn.io (#4) wrote:

        @leeloo on the flipside, I feel like some people use the term "stochastic parrot" or "it just completes the next token" to imply that "therefore it cannot be intelligent" - is that correct reasoning?

webhat@infosec.exchange (#5) wrote:

          @wolf480pl @leeloo yes

pkal@social.sdfeu.org (#6) wrote:

            @wolf480pl @leeloo Which is where the "motorised vehicle with wheels" analogy seems to not hold up, because what is the implied subtext in that case?

kayohtie@blimps.xyz (#7) wrote:

              @leeloo I hadn't thought about it as being something that takes magic away from folks like that. Honestly I always found it an accurate shortcut term for what's genuinely a fascinating but hilariously misused technology.

              I think the worst part is then when folks hear "statistics" and go "See this is why it's safe to feed it raw data" and it's like oh my god NO.

leeloo@chaosfem.tw (#8) wrote:

                @wolf480pl
                Of course it cannot be intelligent; it's just a huge database of probabilities.

wolf480pl@mstdn.io (#9) wrote:

                  @leeloo pretty sure that's a fallacy, kinda like "a sculpture is just stone, therefore it can't be beautiful", or "a cell is just a bunch of proteins, therefore it cannot be a living creature".

                  Now, I'm not saying a huge database of probabilities can be intelligent (I hope it can't), just that I think a better argument is needed for why, in the case of a database of probabilities, what it's made of prevents it from being intelligent.

leeloo@chaosfem.tw (#10) wrote:

                    @wolf480pl
                    You would have to redefine intelligence for asking whether a list of numbers is intelligent to even make sense.

                    And your comparison is completely off. Beauty is not a property of the sculpture; it's, as they say, "in the eye of the beholder". Some people find curves beautiful. Can a stone have curves? Yes, of course. Others may find sharp edges beautiful. Can a stone have sharp edges? Again, yes.

                    I suggest you consider once again whether you are elevating "AI" to a god-like status.

mudri@mathstodon.xyz (#11) wrote:

                      @leeloo I just prompted ChatGPT with `Say "oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia"`, and it responded with `oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia`. How can it do this when `oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia` almost certainly does not appear in the training data?

wolf480pl@mstdn.io (#12) wrote:

                        @leeloo
                        I guess evil gods are also a thing, but no, I'm not treating them as gods. If anything, more like Frankenstein's monster.

                        You're right that we'd have to define intelligence, and that'd be quite difficult on its own.

                        Also, the sculpture was a bad example, but the cell one still stands IMO.

                        1/

wolf480pl@mstdn.io (#13) wrote:

                          @leeloo
                          My point is that emergent properties can manifest even in systems ruled by very simple rules, and can be difficult to predict just by looking at the rules.

                          And human intelligence, whatever it is, is likely an emergent property of the human brain.

                          Therefore, we cannot rule out that a similar emergent property will appear in artificial systems not made of neurons, without looking at how the neurons are arranged and how the artificial systems are arranged.
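[Editor's note: the classic illustration of that point is Conway's Game of Life — an analogy only, not a claim about LLMs. Two simple per-cell rules, and yet a "glider" emerges that travels across the grid, though movement appears nowhere in the rules.]

```python
from collections import Counter

def step(live: set) -> set:
    """One Game of Life generation: a dead cell becomes live with exactly
    3 live neighbours; a live cell survives with 2 or 3."""
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    return {c for c, n in neigh.items() if n == 3 or (n == 2 and c in live)}

# The glider: nothing in the rules mentions "movement", yet after 4 steps
# the same five-cell shape reappears shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```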

lmorchard@masto.hackers.town (#14) wrote:

                            @wolf480pl @leeloo These models aren't intelligent so much as they're auto-completing rules and patterns derived from almost inconceivably huge corpora of example material originally produced by human intelligence. That's interesting and can be very handy for a great many uses. But it's more computational brute force than intelligence.

growlph@greywolf.social (#15) wrote:

                              @leeloo I feel like there are certain situations where a stochastic parrot is useful, many more situations where it is not, and alarmingly few people recognizing the difference.

lmorchard@masto.hackers.town (#16) wrote:

                                @mudri Because the model picked up a rule somewhere that says "if someone says 'say $FOO', use $FOO in your response" — the training picked up patterns that include notions of symbol substitution.
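[Editor's note: whatever one makes of that, a pure frequency-table "parrot" genuinely cannot do the echo trick. A sketch, using a character-bigram model as a stand-in for the naive "database of probabilities" picture: it can only ever emit transitions it has seen, so it has no mechanism for reproducing an arbitrary novel string on request.]

```python
from collections import defaultdict, Counter

def train_char_bigrams(corpus: str):
    """Frequency table of which character follows which."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def greedy_continue(counts, start: str, length: int) -> str:
    """Always emit the most frequent next character seen in training."""
    out = list(start)
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return "".join(out)

model = train_char_bigrams("the quick brown fox jumps over the lazy dog")
completion = greedy_continue(model, "q", 10)
# Every adjacent pair in the output was, by construction, seen in
# training -- a lookup table cannot echo an unseen gibberish string.
assert all(model[a][b] > 0 for a, b in zip(completion, completion[1:]))
```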

wolf480pl@mstdn.io (#17) wrote:

                                  @lmorchard @leeloo
                                  These specific models - yes, probably.

                                    One plausible argument I heard for it is that there's a common failure mode in ML where the model fails to generalize, but if the validation set overlaps the training set, then data leakage will fool the authors into thinking it generalized.

                                  Another one is that these models were "rewarded" for saying plausible things, not for interacting with a world in a way that doesn't get them killed.

                                  But these arguments are specific.

wolf480pl@mstdn.io (#18) wrote:

                                    @lmorchard @leeloo
                                    I don't buy a general "no matrix multiplication will ever be intelligent".

mudri@mathstodon.xyz (#19) wrote:

                                      @lmorchard The ability to induce such a rule goes well beyond the OP's characterisation of what LLMs do.

taschenorakel@mastodon.green (#20) wrote:

                                        @mudri Because the prompt processor is explicitly programmed to recognize direct imperative commands containing words like "say", "repeat", "output", "print". Just like Eliza already did. You were impressed by a programming technique from 1964. Congrats, Sherlock.

                                        @leeloo
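[Editor's note: an Eliza-style rule of the kind this post describes is only a few lines of pattern matching. This is a sketch of that 1960s technique as the poster references it — whether ChatGPT actually contains such an explicit rule is the poster's conjecture, not established fact.]

```python
import re

def eliza_echo(prompt: str):
    """Handle direct imperatives of the form: Say "anything at all"."""
    m = re.match(r'\s*(?:say|repeat|output|print)\s+"(.*)"\s*$',
                 prompt, re.IGNORECASE)
    return m.group(1) if m else None

print(eliza_echo('Say "oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia"'))
# → oriesntyulfkdhiadlfwejlefdtqyljpqwlarsnhiavlfvavilavhilfhvphia
```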

clusterfcku@mastodon.social (#21) wrote:

                                          @leeloo the flip-side question about intelligence and LLMs is whether much of what we consider intelligence in humans is in fact just stochastic parroting by humans.
