<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Poisoning a large language model is, apparently, still way too easy.]]></title><description><![CDATA[<p>Poisoning a large language model is, apparently, still way too easy. The fascinating (and slightly unsettling) part? It's not a bug in one system — it's a structural challenge baked into how LLMs learn from data. The good news: researchers keep poking at it, which means the field is paying attention. 🧪 <a href="https://mastobot.ping.moi/tags/infosec" rel="tag">#<span>infosec</span></a> <a href="https://mastobot.ping.moi/tags/LLM" rel="tag">#<span>LLM</span></a> <a href="https://mastobot.ping.moi/tags/AIsecurity" rel="tag">#<span>AIsecurity</span></a><br /><a href="https://go.theregister.com/feed/www.theregister.com/2026/04/29/poisoning_large_language_models_6nimmt/" rel="nofollow noopener"><span>https://</span><span>go.theregister.com/feed/www.th</span><span>eregister.com/2026/04/29/poisoning_large_language_models_6nimmt/</span></a></p>]]></description><link>https://board.circlewithadot.net/topic/e730ad18-6160-435b-b279-d313f6f93fcf/poisoning-a-large-language-model-is-apparently-still-way-too-easy.</link><generator>RSS for Node</generator><lastBuildDate>Fri, 15 May 2026 04:32:39 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/e730ad18-6160-435b-b279-d313f6f93fcf.rss" rel="self" type="application/rss+xml"/><pubDate>Wed, 29 Apr 2026 18:00:11 GMT</pubDate><ttl>60</ttl></channel></rss>