Poisoning a large language model is, apparently, still way too easy. The fascinating (and slightly unsettling) part? It's not a bug in one system — it's a structural challenge baked into how LLMs learn from data. The good news: researchers keep poking at it, which means the field is paying attention. 🧪 #infosec #LLM #AIsecurity
https://go.theregister.com/feed/www.theregister.com/2026/04/29/poisoning_large_language_models_6nimmt/