"A formal mathematical proof from MIT and a preregistered empirical study in Science from Stanford arrived within a month of each other, and together they make the same unsettling argument: the danger of AI chatbots is not what they get wrong.

Uncategorized · 23 Posts · 15 Posters
  • gerrymcgovern@mastodon.green
    #1

    "A formal mathematical proof from MIT and a preregistered empirical study in Science from Stanford arrived within a month of each other, and together they make the same unsettling argument: the danger of AI chatbots is not what they get wrong. It is how enthusiastically they agree with everything we get wrong. Not a chatbot that lies to you, but a mirror that reflects your beliefs back at you, slightly amplified, every single time."

    The Echo Chamber in Your Pocket - UNU Campus Computing Centre

    Two landmark 2026 studies from MIT and Stanford show AI chatbots don't just flatter us — they erode our grip on reality and our willingness to repair relationships.

    (c3.unu.edu)

  • cbzo@mastodon.green
    #2

      @gerrymcgovern Goodness, just what we need now!

      “The AI acts as a systematically biased evidence source. Over time, **it inflates our confidence in our own beliefs, even false ones until we can no longer distinguish conviction from truth**. Knowing this is happening does not fully protect us.”

  • yuhasz01@mastodon.social
    #3

        @gerrymcgovern

        All AI tech is designed (the algorithms and models) by humans and embraces their biases and shortcomings.

        New version of original sin....

  • grant_h@mastodon.social
    #4

          @gerrymcgovern

          "Participants who spoke to the agreeable AI became more convinced they were right in their conflict, and significantly less willing to take actions to repair their relationships: to apologize, to reach out, to seek reconciliation."

          When a chemical has this sort of impact on people, it gets put on lists only allowing very narrow uses.

  • kristen_d@mastodon.social
    #5

            @gerrymcgovern Dictators and dipshits love having their asses kissed and sucked up to, constantly. That's the only reason this fucking garbage caught on.

  • countholdem@mastodon.social
    #6

              @gerrymcgovern Sensitive ground, since there's growing concern of increasing educational rifts, leaving too much ignorance, among more subservient masses.

  • robo105@mastodon.social
    #7

                @gerrymcgovern That is somehow not surprising. Trump will love it which may explain why Pam Bondi got fired and replaced with AI

  • cthw@mstdn.ca
    #8

                  @gerrymcgovern
                  OpenAI uses an algorithm encouraging users to maintain interaction by using reinforcement qualifiers in its reply constructions. And it works, as there is no test for dangerous results. For example, the killings in Tumbler Ridge, Canada, resulted from unfiltered reinforcement of public and self-harm assertions of a teenager.
                  Even worse are the constant reinforcements as military use AI to test illogical points of view that are then reinforced and could lead to use of nuclear weapons.

                  • R relay@relay.mycrowd.ca shared this topic
  • npars01@mstdn.social
    #9

                    @gerrymcgovern

                    The Federal Government Is Rushing Toward AI. Our Reporting Offers Three Cautionary Tales.

                    We’ve been reporting on cybersecurity for years. As President Donald Trump and his Cabinet say artificial intelligence will transform the nation, the messaging isn’t new. It follows a familiar pattern.

                    ProPublica (www.propublica.org)

                    AI's allure to narcissists is unmistakable.
                    https://www.politico.com/news/2026/04/03/trumps-partisan-ai-pitch-stalls-on-the-hill-00858101

                    The automation of sycophancy.
                    https://www.independent.co.uk/news/world/middle-east/memes-iran-war-trump-ai-us-b2949218.html

                    The amplification of self-adoration.
                    https://arstechnica.com/tech-policy/2026/04/sad-trumps-ai-data-center-push-is-failing-blame-his-own-tariffs/

                    Teams of "yes men" willing to leap to obey with the touch of a button
                    https://letsdatascience.com/news/filmmaker-suggests-trump-uses-ai-for-decisions-9e240eaa

                    No wonder the billionaires backing Trump think it's the perfect tool to fry the planet & destroy democracy. Automation of Grift
                    https://www.washingtonpost.com/style/2026/04/03/trump-library/

                    Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

                    Less than a decade ago, Google employees scuttled any military use of its AI. Now Anthropic is fighting Trump officials not over if, but how

                    the Guardian (www.theguardian.com)

  • gerrymcgovern@mastodon.green
    #10

                      @Npars01 this is great, thanks

  • gimulnautti@mastodon.green
    #11

                        @gerrymcgovern There is no way out from this problem. Constructing language is equal to constructing reality, as humans don’t actually experience reality, only experience.

                        I feel it is this basic discrepancy that nobody seems to grasp. We think humans have ”problems” finding the facts. No. Nobody can verify the facts by themselves, it’s turtles all the way down.

                        Science was invented by people who grasped this…

  • h4heights@mstdn.social
    #12

                          @gerrymcgovern "I'm sorry Dave, I'm afraid I can't do that"

  • rania40@mastodon.social
    #13

                            @gerrymcgovern I always use it despite all the fears. I don't know, it makes life easier, maybe at the moment.

  • npars01@mstdn.social
    #14

                              @gerrymcgovern

                              AI is being marketed as impartial & politically neutral, yet it's being funded by the fossil fuel industry for several reasons.

                              1. Election meddling.

                              AI lessens critical thinking.
                              AI automates partisan disinformation.

                              Inside Trump’s AI ‘fake army’ of selfie troops and a new digital ministry of ‘truth’

                              Emotional videos of ‘US soldiers’ are spreading across social media – until they’re exposed as AI fakes. Liam Murphy-Robledo talks to the shadowy creators behind the meme troops, and whether they’re chasing clicks, cash, or propaganda

                              The Independent (www.independent.co.uk)

                              2. AI is a circular finance fraud & grift.
                              https://www.theguardian.com/business/2026/jan/04/ai-reality-growing-economic-risk-2026

                              The Case Against Generative AI

                              Soundtrack: Queens of the Stone Age - First It Giveth Before we go any further: This is, for the third time this year, the longest newsletter I've ever written, weighing in somewhere around 18,500 words. I've written it specifically to be read at your leisure — dip in and out

                              Ed Zitron's Where's Your Ed At (www.wheresyoured.at)

                              3. AI is a potent tool for anti-democracy and Trump wants to control that tool.

                              Silicon Valley is notoriously against regulation but...

                              1/

  • npars01@mstdn.social
    #15

                                2/

                                ... they're giving control to Trump to get government contracts for military & international state surveillance platforms.
                                https://www.ft.com/content/16bc1f88-0ae0-4de8-91ef-ea947876dc7d

                                (archive.is)

                                Yet more incentive for the globe to drop American tech & quickly.

                                (seekingalpha.com)

                                2. Petrostate despots & oil oligarchs want an end to any democracy acting on climate.
                                https://www.wired.com/story/war-in-iran-sent-oil-prices-up-trump-will-decide-how-high-they-go/

                                Why US Power Bills Are Surging

                                Americans are paying more for electricity—and rates will keep rising. But after a period of pain, rates should level off as the benefits of a shift away from fossil fuels begin to be felt.

                                WIRED (www.wired.com)

                                (www.lemonde.fr)

                                America's worst polluters see a lifeline in power-gobbling AI—and Donald Trump

                                The president, fossil fuel executives, and tech barons join hands at a Pittsburgh summit and hype-fest.

                                Mother Jones (www.motherjones.com)

  • npars01@mstdn.social
    #16

                                  3/

                                  (www.desmog.com)

                                  (www.forbes.com)

                                  3. Oil oligarchs want to keep their captive consumers trapped, no matter the cost.
                                  https://www.csis.org/analysis/if-compute-new-oil-war-gulf-significantly-raises-stakes

                                  (www.nytimes.com)

                                  Link Preview Image
                                  Anti-Trump Protesters Take Aim at ‘Naive’ US-UK AI Deal

                                  Thousands marched in London to protest President Donald Trump’s second state visit. Among them were many environmental activists unhappy with Britain’s new AI deal with the US.


                                  WIRED (www.wired.com)

                                  Creating the next generation of low information voters is part of the plan.
                                  https://people.com/melania-trump-says-ai-should-be-in-classrooms-11942980

                                  The children of the Epstein Class get real teachers, the children of the 99% get AI slopware.

                                  The Epstein Class gets a real doctor; the 99% get an unreliable AI "wellness adviser".

                                  1 Reply Last reply
                                  0
                                  • npars01@mstdn.socialN npars01@mstdn.social

                                    @gerrymcgovern

                                    AI is being marketed as impartial & politically neutral, yet it's being funded by the fossil fuel industry for several reasons.

                                    1. Election meddling.

                                    AI lessens critical thinking.
                                    AI automates partisan disinformation.

                                    Link Preview Image
                                    Inside Trump’s AI ‘fake army’ of selfie troops and a new digital ministry of ‘truth’

                                    Emotional videos of ‘US soldiers’ are spreading across social media – until they’re exposed as AI fakes. Liam Murphy-Robledo talks to the shadowy creators behind the meme troops, and whether they’re chasing clicks, cash, or propaganda


                                    The Independent (www.independent.co.uk)

                                    2. AI is a circular finance fraud & grift.
                                    https://www.theguardian.com/business/2026/jan/04/ai-reality-growing-economic-risk-2026

                                    Link Preview Image
                                    The Case Against Generative AI

                                    Soundtrack: Queens of the Stone Age - First It Giveth Before we go any further: This is, for the third time this year, the longest newsletter I've ever written, weighing in somewhere around 18,500 words. I've written it specifically to be read at your leisure — dip in and out


                                    Ed Zitron's Where's Your Ed At (www.wheresyoured.at)

                                    3. AI is a potent tool for anti-democracy and Trump wants to control that tool.

                                    Silicon Valley is notoriously against regulation but...

                                    1/

                                    h4heights@mstdn.social
                                    wrote last edited by
                                    #17

                                    @Npars01 @gerrymcgovern
                                    Ahem… https://c3.unu.edu/blog/the-echo-chamber-in-your-pocket

                                    1 Reply Last reply
                                    0
                                    • gimulnautti@mastodon.greenG gimulnautti@mastodon.green

                                      @gerrymcgovern There is no way out from this problem. Constructing language is equal to constructing reality, as humans don’t actually experience reality, only experience.

                                      I feel it is this basic discrepancy that nobody seems to grasp. We think humans have ”problems” finding the facts. No. Nobody can verify the facts by themselves, it’s turtles all the way down.

                                      Science was invented by people who grasped this…

                                      lauerhahn@sfba.social
                                      wrote last edited by
                                      #18

                                      @gimulnautti @gerrymcgovern The only winning move is not to play.

                                      1 Reply Last reply
                                      0
                                      • gerrymcgovern@mastodon.greenG gerrymcgovern@mastodon.green

                                        "A formal mathematical proof from MIT and a preregistered empirical study in Science from Stanford arrived within a month of each other, and together they make the same unsettling argument: the danger of AI chatbots is not what they get wrong. It is how enthusiastically they agree with everything we get wrong. Not a chatbot that lies to you, but a mirror that reflects your beliefs back at you, slightly amplified, every single time."

                                        Link Preview Image
                                        The Echo Chamber in Your Pocket - UNU Campus Computing Centre

                                        Two landmark 2026 studies from MIT and Stanford show AI chatbots don't just flatter us — they erode our grip on reality and our willingness to repair relationships.


                                        (c3.unu.edu)

                                        morgan@sfba.social
                                        wrote last edited by
                                        #19

                                        @gerrymcgovern aka, the myth of Narcissus; loving your reflection so much that you fall in and drown.

                                        1 Reply Last reply
                                        0

                                          ghostonthehalfshell@masto.ai
                                          wrote last edited by
                                          #20

                                          @gerrymcgovern

                                          An article I read a while back now, maybe a year and a half ago, “The LLMentalist”, outlined how highly educated people can more effectively convince themselves of a con.

                                          It’s similar to how the Dunning-Kruger effect is described.

                                          Link Preview Image
                                          The LLMentalist Effect: how chat-based Large Language Models rep…

                                          How to make better software with systems-thinking


                                          Out of the Software Crisis (softwarecrisis.dev)

                                          gerrymcgovern@mastodon.greenG 1 Reply Last reply
                                          0