There is so much disinformation in the world right now.

Uncategorized · 5 Posts, 5 Posters

em0nm4stodon@infosec.exchange wrote (#1):

There is so much disinformation in the world right now.

And in recent years, all of it gets amplified by AI web scrapers that eat it up to spit it out at a later date, in an inappropriate context.

Be careful what data you add to the slop soup on April 1st.

The world is in dire need of more truths and knowledge these days, not the opposite.

#AprilFools #AIFools #NoAI

nonny@mstdn.social wrote (#2):

@Em0nM4stodon that is so tragically real. AI still has difficulty discerning a joke.

muddle@infosec.exchange wrote (#3):

@Em0nM4stodon Lucky for me, I guess, that I won't be on the Fediverse today. I'll have my hands full reverting "joke" edits to Wikipedia's John (and Joan) Mastodon pages.

david_chisnall@infosec.exchange wrote (#4):

@Em0nM4stodon

I wonder if the right thing isn't the opposite. The problem with LLMs is that they provide correct information a lot of the time and dangerous misinformation the rest of the time, with no way of disambiguating the two. If LLM output were wrong most of the time, this wouldn't be a problem because no one would trust it.

ainmosni@social.ainmosni.eu wrote (#5):

@david_chisnall @Em0nM4stodon I'm all for totally poisoning the slop machine with false data to make people stop using it.
