Beyond "Appropriate Use" of a Chatbot: The AI Literacy No One is Teaching https://stefanbauschard.substack.com/p/beyond-appropriate-use-of-a-chatbot #AI #literacy #students
@nic221@techhub.social I take issue with this claim in the article:
"AI is also one of the most powerful tools ever created for evaluating other people’s claims ... A student can paste a viral social media post into an AI and ask: Is this claim supported by the evidence? What’s the original study? What are the methodological limitations? Who funded this research? They can feed in a news article and ask the AI to identify unsupported assertions, logical fallacies, or missing context. They can take a politician’s speech and have it fact-checked against available data in minutes."
The AI models we use now are trained on the content of the Internet, and the Internet is not an unbiased source of truth. We are already hearing news of figures like Elon Musk trying to bias the training data behind his "Grok" AI.
A billionaire with slightly more cunning, Jeff Bezos for example, could give extra training weight to news articles from outlets he owns, even if those articles are completely fake, even if they were fabricated specifically to skew the opinions a chatbot expresses.
AI is not, and will never be, good at validating truth claims as long as these models are produced by privately owned tech companies. I can't even think of a way to assemble reliable training data that would produce an unbiased AI.
@ramin_hal9001 Here’s an example of using AI for fact checking using methods from Mike Caulfield. https://open.substack.com/pub/wfryer/p/fact-checking-a-misleading-iran-war
@nic221@techhub.social I would trust an LLM only enough to do similarity search through real documents written by humans, to act as a kind of glorified search engine, which is what Mike Caulfield seems to be doing.
I would not fully trust an LLM's summary of an article unless I already had enough expertise on the topic to check the truth claims myself, because Elon Musk has been known to tamper with his AI apps. Like I said, much subtler tampering is possible, and is likely being done right now by the likes of Amazon and Microsoft.
I wonder what would happen if Mike Caulfield tried to fact-check that exact same meme a year from now. It would be interesting to see how much, and in what ways, these LLMs change their analysis of memes as they are retrained on new, possibly more biased, sources of information.
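The "glorified search engine" use described above, retrieving human-written documents by similarity to a query rather than trusting the model's own generated summary, can be sketched in miniature. This is a toy illustration only: it uses plain bag-of-words cosine similarity as a stand-in for the learned embeddings real retrieval systems use, and the corpus and query are invented for the example:

```python
import math
import re
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity over simple bag-of-words token counts."""
    va = Counter(re.findall(r"\w+", a.lower()))
    vb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented toy corpus standing in for "real documents written by humans"
docs = [
    "The study was funded by an industry group and used a small sample.",
    "Officials confirmed the strike targeted a military installation.",
    "The recipe calls for two cups of flour and a pinch of salt.",
]
query = "who funded the study and how large was the sample"

# Rank documents by similarity to the query, most relevant first;
# the reader then checks the retrieved sources themselves.
ranked = sorted(docs, key=lambda d: cosine_sim(query, d), reverse=True)
```

The point of the sketch is the division of labor: the machine only ranks and retrieves; the human still reads the underlying documents and judges the truth claims, which is the posture argued for here.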