This paper also illustrates a small exception: if the agent knows of a systematic bias it is susceptible to (e.g., racial stereotypes), it can correct (or even overcorrect) its responses.

This is fascinating to me, because it's so similar to human cognitive bias. Unlike an LLM, we have some degree of introspection, but we often can't see our own biases. Remembering that a bias exists, assuming you are susceptible to it, and correcting yourself even when you don't think you need to is often the best strategy.

Unfortunately, our stereotypes around AI (mostly from sci-fi) are that it is more rational and reliable than human beings. LLMs can only be less rational and reliable, because they are trained to mimic human performance, and they do so unreliably. They have access to more information, so in theory they could have better answers. But they also have more conflicting, incorrect, and fictional information, and it all gets blended together in the training process.

(3/3)

#science #llm #ai