I was only made aware of this (frankly awesome) case of LLM poisoning today: https://www.nature.com/articles/d41586-026-01100-y.

Uncategorized · 37 Posts · 32 Posters
#1 · elenlefoll@fediscience.org

I was only made aware of this (frankly awesome) case of LLM poisoning today: https://www.nature.com/articles/d41586-026-01100-y. A researcher made up a disease and published two evidently fake preprints about it (including sentences such as “this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”), which were almost immediately picked up by LLMs and documented in their output. Worse, actual – supposedly serious – medical papers also started citing the preprints, demonstrating that academics relying on LLMs to do their work is a genuine problem! Not that I had any doubts but, if anyone did, this seems like the perfect demonstration of the problem. Article immediately added to the syllabus of the class I am co-teaching with Iris Ferrazzo on LLMs for Romance Studies/Humanities!

#LLM #GenAI #academia #research #ResearchIntegrity #humanities

#2 · bms48@mastodon.social

@ElenLeFoll Yeah, that's totally getting ingested into my Zotero along with all the other corroborating evidence for "The only way is down". But, to falsify that, I am looking at what working code-generation studies there actually are.

#3 · serfdeweb@mastodon.world

@ElenLeFoll
Lawyers already invented "affluenza" years ago.

#4 · kunev@blewsky.social

@ElenLeFoll@fediscience.org the "researcher"'s name, Lazljiv Izgubljenovic, would read to a speaker of most Slavic languages sort of like "Lying Loser" 😆😆😆

#5 · elenlefoll@fediscience.org

@kunev Brilliant! I have to say I really do think that the real author did an excellent job all round!

#6 · grimblob@mastodon.social

@ElenLeFoll as a lifelong sufferer of Bixonimania (undiagnosed by any medical professional) I find this article offensive. ChatGPT told me that I asked a really excellent question when I asked if I had Bixonimania, so there.

#7 · pineywoozle@masto.ai

@ElenLeFoll I love this quote about the original article: “Acknowledgments also thanked ‘Professor Sideshow Bob’ and a professor from the Starfleet Academy for access to a lab aboard the USS Enterprise” 🤣 🤣 🤣 #AI #AiSlop

#8 · djm62@beige.party

@ElenLeFoll Chris Morris would be proud:

(link: publications.parliament.uk)

#9 · meneerdebruin@mastodon.nl

@ElenLeFoll Good. Poison as much as possible, let it eat its own crap and dance while it burns to the ground.

#10 · pa27@mastodon.social

@ElenLeFoll Yes, saw this at the time - but it's good to remind everyone of what comes out of artificial idiocy LLMs!

#11 · seharinsights@mastodon.social

@ElenLeFoll incredible post

#12 · czarbucks@vmst.io

@Pineywoozle @ElenLeFoll

Wow. It couldn't be more obvious, but this dreck is infiltrating *everything.*

#13 · mkljczk@pl.fediverse.pl

@ElenLeFoll@fediscience.org

a Google spokesperson said such results reflected the performance of an earlier model. They added, “We have always been transparent about the limitations of generative AI and provide in-app prompts to encourage users to double-check information. For sensitive matters such as medical advice, Gemini recommends users consult with qualified professionals.”

That would mostly work if they released 'AI overview' as an opt-in feature instead of forcing it on users who have built up some trust in Google over the past decades and don't expect it to suddenly start making stuff up lol


#14 · kolya@social.cologne

@ElenLeFoll that iMac is approaching his 30s. he's earned the right to be a hypochondriac sometimes.

#15 · oranger@piaille.fr

@ElenLeFoll @kunev
😂
🎯


#16 · savvyhomestead@mastodon.social

@ElenLeFoll

Some do it DELIBERATELY with a lot of other subjects too. AI is amoral, especially if the material it is fed is amoral, and we know a lot of those tech guys and politicians ARE amoral.


#17 · urban_hermit@mstdn.social

@mkljczk @ElenLeFoll
Good point.

Google, if Gemini is as useful as you hope it will be, it is inevitable that it will simply come to be known as "Google", and AI answers to direct questions will just be a feature of Google search.

Google, if Gemini is unreliable and cannot be made reliable, why are you letting it tarnish your brand?


#18 · mycotropic@beige.party

@bms48 @ElenLeFoll

Jonathan Swift described The Machine (for writing) in 1726: https://en.wikipedia.org/wiki/The_Engine

#19 · mycotropic@beige.party

@bencourtice @FediThing @ElenLeFoll

We have a whole module on how to cite and also on how AI works (ethics, errors, foundational knowledge, all that) for our freshman public health students. We STILL had over 10% academic affairs referrals for using hidden prompts and hallucinated citations, and that's with a requirement that they give us annotated PDF copies of every cited paper.


#20 · j12i@weirder.earth

boost with CN: "AI"
