@nic221@techhub.social I would trust an LLM only enough to do similarity search over real documents written by humans, acting as a kind of glorified search engine, which is what Mike Caulfield seems to be doing.
I would not fully trust an LLM's summary of an article unless I already had enough expertise on the topic to check the truth claims myself, especially since Elon Musk has been known to tamper with his AI apps. Like I said, much subtler tampering is possible, and is likely being done right now by the likes of Amazon and Microsoft.
I wonder what will happen if Mike Caulfield tries to fact-check that exact same meme a year from now. It would be interesting to see how much, and in what ways, these LLMs change their analysis of memes as they are retrained on new, possibly more biased, sources of information.