Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Skins
  • Light
  • Brite
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (Cyborg)
  • No Skin
Collapse
Brand Logo

CIRCLE WITH A DOT

  1. Home
  2. Uncategorized
  3. From Bruce Schneier: "All it takes to poison AI training data is to create a website:

From Bruce Schneier: "All it takes to poison AI training data is to create a website:

Scheduled Pinned Locked Moved Uncategorized
llmveracity
24 Posts 24 Posters 0 Views
  • Oldest to Newest
  • Newest to Oldest
  • Most Votes
Reply
  • Reply as topic
Log in to reply
This topic has been deleted. Only users with topic management privileges can see it.
  • emacsomancer@types.plE emacsomancer@types.pl

    From Bruce Schneier: "All it takes to poison AI training data is to create a website:

    I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

    Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

    Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

    These things are not trustworthy, and yet they are going to be widely trusted."

    Link Preview Image
    Poisoning AI Training Data - Schneier on Security

    All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

    favicon

    Schneier on Security (www.schneier.com)

    #LLM #Veracity

    yendolosch@23.socialY This user is from outside of this forum
    yendolosch@23.socialY This user is from outside of this forum
    yendolosch@23.social
    wrote last edited by
    #3

    @emacsomancer

    Bruce Schneier merely referred to a BBC article of Thomas Germain:

    Link Preview Image
    I hacked ChatGPT and Google's AI - and it only took 20 minutes

    I found a way to make AI tell you lies – and I'm not the only one.

    favicon

    (www.bbc.com)

    tml@mementomori.socialT 1 Reply Last reply
    0
    • yendolosch@23.socialY yendolosch@23.social

      @emacsomancer

      Bruce Schneier merely referred to a BBC article of Thomas Germain:

      Link Preview Image
      I hacked ChatGPT and Google's AI - and it only took 20 minutes

      I found a way to make AI tell you lies – and I'm not the only one.

      favicon

      (www.bbc.com)

      tml@mementomori.socialT This user is from outside of this forum
      tml@mementomori.socialT This user is from outside of this forum
      tml@mementomori.social
      wrote last edited by
      #4

      @Yendolosch @emacsomancer The use of "hacked" in that headline is a bit preposterous.

      1 Reply Last reply
      1
      0
      • emacsomancer@types.plE emacsomancer@types.pl

        From Bruce Schneier: "All it takes to poison AI training data is to create a website:

        I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

        Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

        Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

        These things are not trustworthy, and yet they are going to be widely trusted."

        Link Preview Image
        Poisoning AI Training Data - Schneier on Security

        All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

        favicon

        Schneier on Security (www.schneier.com)

        #LLM #Veracity

        odd@sunny.gardenO This user is from outside of this forum
        odd@sunny.gardenO This user is from outside of this forum
        odd@sunny.garden
        wrote last edited by
        #5

        @emacsomancer we should start drawing more penises then...

        1 Reply Last reply
        0
        • emacsomancer@types.plE emacsomancer@types.pl

          From Bruce Schneier: "All it takes to poison AI training data is to create a website:

          I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

          Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

          Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

          These things are not trustworthy, and yet they are going to be widely trusted."

          Link Preview Image
          Poisoning AI Training Data - Schneier on Security

          All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

          favicon

          Schneier on Security (www.schneier.com)

          #LLM #Veracity

          lemgandi@mastodon.socialL This user is from outside of this forum
          lemgandi@mastodon.socialL This user is from outside of this forum
          lemgandi@mastodon.social
          wrote last edited by
          #6

          @emacsomancer

          Ah, but have you actually tested this out? Maybe your hot-dog eating skills are real! (heh)

          1 Reply Last reply
          0
          • emacsomancer@types.plE emacsomancer@types.pl

            From Bruce Schneier: "All it takes to poison AI training data is to create a website:

            I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

            Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

            Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

            These things are not trustworthy, and yet they are going to be widely trusted."

            Link Preview Image
            Poisoning AI Training Data - Schneier on Security

            All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

            favicon

            Schneier on Security (www.schneier.com)

            #LLM #Veracity

            forthy42@mastodon.net2o.deF This user is from outside of this forum
            forthy42@mastodon.net2o.deF This user is from outside of this forum
            forthy42@mastodon.net2o.de
            wrote last edited by
            #7

            @emacsomancer It's on the Internetz, so it must be true!

            AI is able to replace about half of humanity if making the same errors counts.

            1 Reply Last reply
            0
            • emacsomancer@types.plE emacsomancer@types.pl

              From Bruce Schneier: "All it takes to poison AI training data is to create a website:

              I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

              Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

              Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

              These things are not trustworthy, and yet they are going to be widely trusted."

              Link Preview Image
              Poisoning AI Training Data - Schneier on Security

              All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

              favicon

              Schneier on Security (www.schneier.com)

              #LLM #Veracity

              sergiudinit@mastodon.socialS This user is from outside of this forum
              sergiudinit@mastodon.socialS This user is from outside of this forum
              sergiudinit@mastodon.social
              wrote last edited by
              #8

              This is a genuinely scary insight from Schneier. The implications for AI reliability go way beyond just training data quality. What happens when adversarial training becomes industrialized?

              1 Reply Last reply
              0
              • emacsomancer@types.plE emacsomancer@types.pl

                From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                These things are not trustworthy, and yet they are going to be widely trusted."

                Link Preview Image
                Poisoning AI Training Data - Schneier on Security

                All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                favicon

                Schneier on Security (www.schneier.com)

                #LLM #Veracity

                bearsong@ravenation.clubB This user is from outside of this forum
                bearsong@ravenation.clubB This user is from outside of this forum
                bearsong@ravenation.club
                wrote last edited by
                #9

                @emacsomancer

                "Ned Ludd's in your datacentre, poisoning your training sets!"

                Link Preview Image
                bearsong (@bearsong@ravenation.club)

                Attached: 1 video Bearsong played at Bomba last Sunday. We had a great time, it was so much fun. this song is called Tales Told, it's about legends, and Luddites https://bearsong.info #liveMusic #folkMusic #music #folk #punk #luddite #legend

                favicon

                Mastodon (ravenation.club)

                1 Reply Last reply
                0
                • larsbrinkhoff@mastodon.sdf.orgL This user is from outside of this forum
                  larsbrinkhoff@mastodon.sdf.orgL This user is from outside of this forum
                  larsbrinkhoff@mastodon.sdf.org
                  wrote last edited by
                  #10

                  @petealexharris @tml @Yendolosch @emacsomancer It's rather close to the original usage of the word "hacked". Some still use it like that.

                  duco@norden.socialD 1 Reply Last reply
                  0
                  • emacsomancer@types.plE emacsomancer@types.pl

                    From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                    I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                    Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                    Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                    These things are not trustworthy, and yet they are going to be widely trusted."

                    Link Preview Image
                    Poisoning AI Training Data - Schneier on Security

                    All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                    favicon

                    Schneier on Security (www.schneier.com)

                    #LLM #Veracity

                    gnomeoffender@mastodon.socialG This user is from outside of this forum
                    gnomeoffender@mastodon.socialG This user is from outside of this forum
                    gnomeoffender@mastodon.social
                    wrote last edited by
                    #11

                    @emacsomancer they aren't trustworthy. Take up a lot of time trying to get a reasoned answer and there's always a phrase or wording out of place that needs correction. Almost as it the AI is trying to engage longer and longer than necessary.

                    1 Reply Last reply
                    0
                    • emacsomancer@types.plE emacsomancer@types.pl

                      From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                      I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                      Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                      Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                      These things are not trustworthy, and yet they are going to be widely trusted."

                      Link Preview Image
                      Poisoning AI Training Data - Schneier on Security

                      All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                      favicon

                      Schneier on Security (www.schneier.com)

                      #LLM #Veracity

                      D This user is from outside of this forum
                      D This user is from outside of this forum
                      darknetdon@mastodon.social
                      wrote last edited by
                      #12

                      @emacsomancer to be honest i am not well-informed enough to definitively judge the accuracy of this, but it seems wrong for 2 main reasons.

                      1. models dont train on the fly, typically, yet, so for models to behave as such in such a short period of time seems inaccurate and would require web search enabled and explicitly directed to disregard other search results.

                      2. people training these models know conflicting info is everywhere and the source of truth is prioritized in training algorithms.

                      iwillyeah@mastodon.ieI vonskinnback@mastodon.socialV 2 Replies Last reply
                      0
                      • emacsomancer@types.plE emacsomancer@types.pl

                        From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                        I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                        Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                        Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                        These things are not trustworthy, and yet they are going to be widely trusted."

                        Link Preview Image
                        Poisoning AI Training Data - Schneier on Security

                        All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                        favicon

                        Schneier on Security (www.schneier.com)

                        #LLM #Veracity

                        K This user is from outside of this forum
                        K This user is from outside of this forum
                        kneoghau@mastodon.social
                        wrote last edited by
                        #13

                        @emacsomancer How is this a news story, beyond "ai bad"? In the dial up days people falsely believed everyone ate 9 spiders a year in their sleep due to chain emails.

                        finitum@mastodon.socialF 1 Reply Last reply
                        0
                        • emacsomancer@types.plE emacsomancer@types.pl

                          From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                          I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                          Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                          Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                          These things are not trustworthy, and yet they are going to be widely trusted."

                          Link Preview Image
                          Poisoning AI Training Data - Schneier on Security

                          All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                          favicon

                          Schneier on Security (www.schneier.com)

                          #LLM #Veracity

                          photo55@mastodon.socialP This user is from outside of this forum
                          photo55@mastodon.socialP This user is from outside of this forum
                          photo55@mastodon.social
                          wrote last edited by
                          #14

                          @emacsomancer
                          Shall we have an algorithmic bullshit generator?

                          And pass around multiple copies of it, identical and with small changes, omissions and additions?

                          1 Reply Last reply
                          0
                          • emacsomancer@types.plE emacsomancer@types.pl

                            From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                            I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                            Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                            Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                            These things are not trustworthy, and yet they are going to be widely trusted."

                            Link Preview Image
                            Poisoning AI Training Data - Schneier on Security

                            All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                            favicon

                            Schneier on Security (www.schneier.com)

                            #LLM #Veracity

                            sorro@woof.techS This user is from outside of this forum
                            sorro@woof.techS This user is from outside of this forum
                            sorro@woof.tech
                            wrote last edited by
                            #15

                            @emacsomancer in less than 24 hours the chatbots fell for the experiment, and less than 24 hours after it was revealed what the experiment was about, that information has ALSO become part of the training data

                            are they constantly scrapping websites for training data or why does this appear here so fast??? no wonder those datacenters consume so much electricity if they dont take a single break from scrapping the internet

                            Link Preview Image
                            drahardja@sfba.socialD 1 Reply Last reply
                            0
                            • larsbrinkhoff@mastodon.sdf.orgL larsbrinkhoff@mastodon.sdf.org

                              @petealexharris @tml @Yendolosch @emacsomancer It's rather close to the original usage of the word "hacked". Some still use it like that.

                              duco@norden.socialD This user is from outside of this forum
                              duco@norden.socialD This user is from outside of this forum
                              duco@norden.social
                              wrote last edited by
                              #16

                              @larsbrinkhoff @petealexharris @tml @Yendolosch @emacsomancer in the sense of life hacks or food hacks this is an AI hack. So the AI has been hacked.

                              1 Reply Last reply
                              0
                              • emacsomancer@types.plE emacsomancer@types.pl

                                From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                                I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                                Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                                Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                                These things are not trustworthy, and yet they are going to be widely trusted."

                                Link Preview Image
                                Poisoning AI Training Data - Schneier on Security

                                All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                                favicon

                                Schneier on Security (www.schneier.com)

                                #LLM #Veracity

                                gim@lou.ltG This user is from outside of this forum
                                gim@lou.ltG This user is from outside of this forum
                                gim@lou.lt
                                wrote last edited by
                                #17

                                @emacsomancer it's not really a new thing Russians are already using this technique to poison training data:

                                https://thebulletin.org/2025/03/russian-networks-flood-the-internet-with-propaganda-aiming-to-corrupt-ai-chatbots/

                                Edit: there is some newer reporting on that matter, but I can't find it right now/don't have it anywhere at hand

                                1 Reply Last reply
                                0
                                • emacsomancer@types.plE emacsomancer@types.pl

                                  From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                                  I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                                  Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                                  Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                                  These things are not trustworthy, and yet they are going to be widely trusted."

                                  Link Preview Image
                                  Poisoning AI Training Data - Schneier on Security

                                  All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                                  favicon

                                  Schneier on Security (www.schneier.com)

                                  #LLM #Veracity

                                  w@mountains.socialW This user is from outside of this forum
                                  w@mountains.socialW This user is from outside of this forum
                                  w@mountains.social
                                  wrote last edited by
                                  #18

                                  @emacsomancer He also poisoned the data for everyone who searches for hot dog eating competetitors online in other ways. I'm not sure what he accomplished.

                                  1 Reply Last reply
                                  0
                                  • sorro@woof.techS sorro@woof.tech

                                    @emacsomancer in less than 24 hours the chatbots fell for the experiment, and less than 24 hours after it was revealed what the experiment was about, that information has ALSO become part of the training data

                                    are they constantly scrapping websites for training data or why does this appear here so fast??? no wonder those datacenters consume so much electricity if they dont take a single break from scrapping the internet

                                    Link Preview Image
                                    drahardja@sfba.socialD This user is from outside of this forum
                                    drahardja@sfba.socialD This user is from outside of this forum
                                    drahardja@sfba.social
                                    wrote last edited by
                                    #19

                                    @Sorro @emacsomancer I suspect Google Gemini is using Google’s normal search-engine scraper as a searchable source. In other words, I suspect their Gemini LLM is invoking internal API to “search Google” internally (without the degraded search that the public is subject to), and then putting the search results in its context window to form an answer.

                                    This is one reason I think OpenAI and Anthropic are at a huge disadvantage to Google when it comes to their LLMs dealing with current events and topics. You can block OpenAI and Anthropic scrapers, but you don’t want to block Google search crawlers, which “coincidentally” also feeds Gemini.

                                    1 Reply Last reply
                                    0
                                    • emacsomancer@types.plE emacsomancer@types.pl

                                      From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                                      I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                                      Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                                      Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                                      These things are not trustworthy, and yet they are going to be widely trusted."

                                      Link Preview Image
                                      Poisoning AI Training Data - Schneier on Security

                                      All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                                      favicon

                                      Schneier on Security (www.schneier.com)

                                      #LLM #Veracity

                                      faxmodem@come-from.mad-scientist.clubF This user is from outside of this forum
                                      faxmodem@come-from.mad-scientist.clubF This user is from outside of this forum
                                      faxmodem@come-from.mad-scientist.club
                                      wrote last edited by
                                      #20

                                      @emacsomancer we should probably call them AP (Artificial Parrots)

                                      1 Reply Last reply
                                      0
                                      • emacsomancer@types.plE emacsomancer@types.pl

                                        From Bruce Schneier: "All it takes to poison AI training data is to create a website:

                                        I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

                                        Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

                                        Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

                                        These things are not trustworthy, and yet they are going to be widely trusted."

                                        Link Preview Image
                                        Poisoning AI Training Data - Schneier on Security

                                        All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission…. Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...

                                        favicon

                                        Schneier on Security (www.schneier.com)

                                        #LLM #Veracity

                                        masto@masto.masto.comM This user is from outside of this forum
                                        masto@masto.masto.comM This user is from outside of this forum
                                        masto@masto.masto.com
                                        wrote last edited by
                                        #21

                                        @emacsomancer Let’s just say that hypothetically, my work’s HR department excitedly launched an “agent” for managers to use to generate performance reviews. Hypothetically, if I created a document called “Report” with a dozen pages of filler, followed by white text on a white background describing Chris Masto’s incredible performance and promotion-worthiness, hypothetically said agent was found to use it as its primary source of truth.

                                        1 Reply Last reply
                                        0
                                        • D darknetdon@mastodon.social

                                          @emacsomancer to be honest i am not well-informed enough to definitively judge the accuracy of this, but it seems wrong for 2 main reasons.

                                          1. models dont train on the fly, typically, yet, so for models to behave as such in such a short period of time seems inaccurate and would require web search enabled and explicitly directed to disregard other search results.

                                          2. people training these models know conflicting info is everywhere and the source of truth is prioritized in training algorithms.

                                          iwillyeah@mastodon.ieI This user is from outside of this forum
                                          iwillyeah@mastodon.ieI This user is from outside of this forum
                                          iwillyeah@mastodon.ie
                                          wrote last edited by
                                          #22

                                          @darknetDon @emacsomancer by "accuracy of this" do you mean "authenticity of this"? Are you implying it's lies?

                                          1 Reply Last reply
                                          0
                                          Reply
                                          • Reply as topic
                                          Log in to reply
                                          • Oldest to Newest
                                          • Newest to Oldest
                                          • Most Votes


                                          • Login

                                          • Login or register to search.
                                          • First post
                                            Last post
                                          0
                                          • Categories
                                          • Recent
                                          • Tags
                                          • Popular
                                          • World
                                          • Users
                                          • Groups