A fresh problem with #AI is what might be called Artificial Gullibility.

angusm@mastodon.social (#1)

A fresh problem with #AI is what might be called Artificial Gullibility.

According to a Bluesky poster, an academic who was found guilty of plagiarism has waged an extensive astroturfing campaign to rewrite the record. The goal was probably to game conventional search engines, but the texts have now been ingested by Google's AI. Google's "AI Overview" presents her (apparently false) version of events, backed by the supposed authority of Google and "AI".

1/

Link: Lauren Donovan Ginsberg (@laurenginsberg.bsky.social), Bluesky (bsky.app)

"The return of ReceptioGate to the news is a useful moment to think about the role AI is having in creating truth for a lot of internet users. I posted this update - the clear plagiarism verdict against Rossi - on another platform… /1"

angusm@mastodon.social (#2)

Whatever the hypesters may tell you, LLMs do NOT reason. Given two conflicting versions of a story, they’ll go for the one that is repeated more often. The sequence of tokens representing a false narrative is – if the astroturfers have done their job right – statistically more probable than the sequence representing a factual account, so it's the false narrative that will get coded into the model and trotted out on demand.

2/
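
A minimal sketch of that mechanism, with an invented toy corpus and a frequency table standing in for real next-token prediction (no production LLM is remotely this simple, but the failure mode is the same: repetition, not truth, decides the winner):

```python
from collections import Counter

# Invented toy corpus: the astroturfed (false) narrative is simply
# repeated more often than the factual account.
corpus = (
    ["the verdict was overturned on appeal"] * 7   # astroturfed version
    + ["the verdict found clear plagiarism"] * 3   # factual account
)

def continuations(prefix: str, docs: list[str]) -> Counter:
    """Count how often each continuation follows `prefix` in the corpus."""
    counts: Counter = Counter()
    for doc in docs:
        if doc.startswith(prefix):
            counts[doc[len(prefix):]] += 1
    return counts

# Greedy "decoding": emit the statistically most probable continuation.
prefix = "the verdict "
completion, freq = continuations(prefix, corpus).most_common(1)[0]
print(prefix + completion)  # -> the verdict was overturned on appeal
print(f"chosen because it accounts for {freq}/{len(corpus)} of the corpus")
```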


angusm@mastodon.social (#3)

Imagine the opportunities for people pushing pseudoscience like Creationism or vaccine denial, or political propaganda, or corporate FUD.

In some ways, it's an extension of conventional SEO, which has always aimed to "put your story first", but now the untruths are delivered with the authority of "AI" (argumentum ab roboto), not just on search results pages, but in any other context where a naive user interacts with an LLM, e.g. with a chatbot.

3/


angusm@mastodon.social (#4)

Model training often weights certain sources as more authoritative than others, so volume isn't the only thing that counts, and that weighting is reflected in the model. But what happens when "authoritative" sources are themselves biased?

4/
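
A sketch of how that weighting can flip the outcome, extending the toy counter above; the domains and trust weights are invented (real pipelines express "authority" more indirectly, e.g. by upsampling curated sources):

```python
from collections import Counter

# Hypothetical trust weights per source domain.
SOURCE_WEIGHT = {"stats.example.gov": 5.0, "contentfarm.example": 1.0}

corpus = (
    [("the study was retracted", "contentfarm.example")] * 8   # high volume
    + [("the study was replicated", "stats.example.gov")] * 2  # "authoritative"
)

def weighted_continuations(prefix: str, docs) -> Counter:
    """Like plain counting, but each document counts SOURCE_WEIGHT times."""
    counts: Counter = Counter()
    for text, domain in docs:
        if text.startswith(prefix):
            counts[text[len(prefix):]] += SOURCE_WEIGHT[domain]
    return counts

print(weighted_continuations("the study was ", corpus).most_common())
# [('replicated', 10.0), ('retracted', 8.0)]: the weighted source outvotes
# sheer volume, and if that "authoritative" source is biased, so is the model.
```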


angusm@mastodon.social (#5)

For instance, US government websites have presumably long been regarded as reliable and given additional weight. That's emphatically no longer the case now that government sites are publishing propaganda, promoting pseudoscience, and suppressing or rewriting history.

Our deference to the presumed authority and impartiality of government communiqués or 'serious' news media is itself a problem, of course, but it's one that is multiplied a hundredfold by LLM regurgitation.

5/


angusm@mastodon.social (#6)

LLMs are essentially gullible. And many people, even otherwise smart people, are gullible enough to believe that "AI" distillations of facts are trustworthy. It's a problem of gullibility compounded. But there's also an entire industry devoted to convincing us NOT to be skeptical of AI, not to see it for what it is: an often-naive statistical model that can and will increasingly be gamed by bad actors.

6/


angusm@mastodon.social (#7)

I once described the US as a complex distributed system with an attack surface of 300 million people. Gullible LLMs are a new vector for attacking that system, one that targets the weakest links in the chain: the people who don't know enough to distrust those handy-dandy "AI Overview" boxes in their favorite search engine.

7/


angusm@mastodon.social (#8)

It's also the case that the more untrustworthy LLM output becomes, the harder the people who have invested hundreds of billions in the tech will try to convince us that we must Trust the Superintelligent Machine That Knows Everything, and, indeed, will try to cut us off from competing knowledge sources. So we have that to look forward to.

Anyway, TL;DR: artificial gullibility is a problem that's only going to get worse, so brace yourselves.

/END


arafel@mas.to (#9, in reply to #8)

@angusm Time to go get an "internet in a box" ... box.

(Yeah, couldn't figure out a good way to end that.)

Link: Internet-in-a-Box - Mandela's Library of Alexandria (internet-in-a-box.org)

"Internet-in-a-Box is a tiny, powerful 'Digital Library of Alexandria' that can be set up by any school, medical clinic or community worldwide."


chemicaleyeguy@mstdn.science (#10, in reply to #2)

@angusm #AI is #clankers 🤖 all the way down.

#Resist #AIslop.


suedioh@mastodon.social (#11, in reply to #2)

@angusm Many people seem to have forgotten the meaning of the word "model" and I think that's where they go wrong.


steve@social.coop (#12, in reply to #8)

@angusm What I hear you saying is... garbage in, garbage out. No amount of rehashing or reprocessing will overcome this limitation.

At best, LLMs can average out their source material, and if most of it is garbage, then, well, the results are predictable.

Great thread, BTW!
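
The averaging point made concrete with a toy sampler and an invented 70/30 mixture: drawing outputs in proportion to corpus frequency (roughly what temperature-1 decoding does over a memorized distribution) reproduces the garbage at the rate it went in:

```python
import random
from collections import Counter

random.seed(42)  # reproducible illustration

# Invented mixture: 70% of the training text repeats the garbage claim.
corpus = ["garbage claim"] * 70 + ["accurate claim"] * 30

# Sample outputs in proportion to corpus frequency.
samples = [random.choice(corpus) for _ in range(10_000)]
print(Counter(samples))
# Counter({'garbage claim': ~7000, 'accurate claim': ~3000}): garbage in,
# garbage out, at the rate the corpus supplies it.
```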


agremon@mastodon.gal (#13, in reply to #1)

@angusm A 'new' truth!


tuban_muzuru@beige.party (#14, in reply to #8)

@angusm

.... I have put up with human liars for long enough to know this entire argument is Special Pleading.


adingbatponder@fosstodon.org (#15, in reply to #2)

@angusm Does this mean that the training is based on stats? So can an AI be trained on a training set with only one example of each target case?


hamishb@mstdn.ca (#16, in reply to #6)

There's an entire industry devoted to moving fast and breaking things. Its ideology is built into the LLMs, too.

@angusm
