
Beyond "Appropriate Use" of a Chatbot: The AI Literacy No One is Teaching https://stefanbauschard.substack.com/p/beyond-appropriate-use-of-a-chatbot #AI #literacy #students

Uncategorized · Tags: literacy, students
4 Posts, 2 Posters, 2 Views
  • nic221@techhub.social
    #1

    Beyond "Appropriate Use" of a Chatbot: The AI Literacy No One is Teaching https://stefanbauschard.substack.com/p/beyond-appropriate-use-of-a-chatbot #AI #literacy #students

      • ramin_hal9001@fe.disroot.org
        #2

      @nic221@techhub.social I take issue with this claim in the article:

      "AI is also one of the most powerful tools ever created for evaluating other people’s claims ... A student can paste a viral social media post into an AI and ask: Is this claim supported by the evidence? What’s the original study? What are the methodological limitations? Who funded this research? They can feed in a news article and ask the AI to identify unsupported assertions, logical fallacies, or missing context. They can take a politician’s speech and have it fact-checked against available data in minutes."

      The AI models we use now are trained on the content of the Internet, and the Internet is not an unbiased source of truth. We are already hearing news of people like Elon Musk trying to bias the training data in his "Grok" AI.

      Billionaires with a bit more cunning, Jeff Bezos for example, could give higher training weight to articles from the news outlets they own, even if those articles are completely fake, even fabricated for the purpose of skewing the opinions expressed by chatbots.

      AI is not, and never will be, good at validating truth claims as long as these models are produced by privately owned tech companies. I can't even think of a way to assemble reliable training data that would produce an unbiased AI.


        • nic221@techhub.social
          #3

        @ramin_hal9001 Here’s an example of using AI for fact checking using methods from Mike Caulfield. https://open.substack.com/pub/wfryer/p/fact-checking-a-misleading-iran-war

          • ramin_hal9001@fe.disroot.org
            #4

          @nic221@techhub.social I would trust an LLM only enough to do similarity search through real documents written by humans, to act as a kind of glorified search engine, which is what Mike Caulfield seems to be doing.
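          To make concrete what "glorified search engine" means here, this is a minimal sketch of similarity search: rank a corpus of human-written documents by cosine similarity to a query, over plain bag-of-words vectors. The corpus and query below are made-up examples, and a real retrieval system would use learned embeddings rather than raw word counts, but the retrieval principle is the same.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical corpus of human-written source documents.
docs = [
    "The senate passed the budget bill on Tuesday",
    "A new study links coffee to lower heart disease risk",
    "The war escalated after strikes on oil facilities",
]

def search(query, docs):
    """Return the documents ranked most-similar-first to the query."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)

print(search("did the senate pass the budget bill", docs)[0])
# → The senate passed the budget bill on Tuesday
```

          The point of the restriction: the system only surfaces existing human-written text for the reader to evaluate; it never generates a verdict of its own.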

          I would not fully trust an LLM summary of an article unless I already had enough expertise on the topic to check the truth claims myself, because Elon Musk has been known to tamper with his AI apps. As I said, much subtler tampering is possible, and is likely being done right now by the likes of Amazon and Microsoft.

          I wonder what will happen if Mike Caulfield tries to fact-check that exact same meme a year from now. It would be interesting to see how much, and in what ways, these LLMs change their analysis of memes as they are retrained on new, possibly more biased, sources of information.

  • relay@relay.mycrowd.ca shared this topic