<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs.]]></title><description><![CDATA[<p>I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs.</p><p>I get it that when folks have concluded that LLMs are harmful, they want to believe that LLMs fail at everything. But a list of correctly-identified bad things about LLMs does not logically imply that LLMs can’t find security bugs.</p>]]></description><link>https://board.circlewithadot.net/topic/c85bb548-bbcf-4f10-b5e7-3c14af530e2d/i-m-seeing-a-lot-of-denial-and-logical-fallacies-on-mastodon-about-llm-capability-to-find-security-bugs.</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 11:58:22 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/c85bb548-bbcf-4f10-b5e7-3c14af530e2d.rss" rel="self" type="application/rss+xml"/><pubDate>Mon, 13 Apr 2026 17:24:10 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. 
on Tue, 14 Apr 2026 01:51:57 GMT]]></title><description><![CDATA[<p><span><a href="/user/hsivonen%40mastodon.social">@<span>hsivonen</span></a></span> ”Quick, get the torches and pitchforks!<br />Someone suggested that LLMs could in some way be useful.”</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/marshray/statuses/116400555102639704</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/marshray/statuses/116400555102639704</guid><dc:creator><![CDATA[marshray@infosec.exchange]]></dc:creator><pubDate>Tue, 14 Apr 2026 01:51:57 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. on Mon, 13 Apr 2026 20:31:53 GMT]]></title><description><![CDATA[<p><span><a href="/user/gabrielesvelto%40mas.to">@<span>gabrielesvelto</span></a></span> <span><a href="/user/hsivonen%40mastodon.social">@<span>hsivonen</span></a></span> yep this is still largely subsidized by cheap inference and essentially free training (for the consumer). I don’t bet on it staying this cheap.</p>]]></description><link>https://board.circlewithadot.net/post/https://social.security.plumbing/users/freddy/statuses/116399296579412564</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.security.plumbing/users/freddy/statuses/116399296579412564</guid><dc:creator><![CDATA[freddy@social.security.plumbing]]></dc:creator><pubDate>Mon, 13 Apr 2026 20:31:53 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. on Mon, 13 Apr 2026 20:29:21 GMT]]></title><description><![CDATA[<p><span><a href="/user/hsivonen%40mastodon.social">@<span>hsivonen</span></a></span> <span><a href="/user/freddy%40social.security.plumbing">@<span>freddy</span></a></span> yeah, but we're talking resources here. 
How much fuzzing and analysis would a few billion $ buy? A few tens of billions? Remember that the total capex behind these technologies over the past three years is now in the 13-digit range. Spend that money on anything and it will fly.</p>]]></description><link>https://board.circlewithadot.net/post/https://mas.to/users/gabrielesvelto/statuses/116399286637272636</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mas.to/users/gabrielesvelto/statuses/116399286637272636</guid><dc:creator><![CDATA[gabrielesvelto@mas.to]]></dc:creator><pubDate>Mon, 13 Apr 2026 20:29:21 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. on Mon, 13 Apr 2026 20:08:24 GMT]]></title><description><![CDATA[<p><span><a href="/user/freddy%40social.security.plumbing">@<span>freddy</span></a></span> <span><a href="/user/gabrielesvelto%40mas.to">@<span>gabrielesvelto</span></a></span> Also, it looks to me like fuzzing requires more human setup of which parts of the code to fuzz and how to deal with stuff like checksums, whereas reportedly LLMs can deal with less specific harnesses and figure out how to fill in checksums.</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/users/hsivonen/statuses/116399204224719806</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/users/hsivonen/statuses/116399204224719806</guid><dc:creator><![CDATA[hsivonen@mastodon.social]]></dc:creator><pubDate>Mon, 13 Apr 2026 20:08:24 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. 
on Mon, 13 Apr 2026 19:55:32 GMT]]></title><description><![CDATA[<p><span><a href="/user/gabrielesvelto%40mas.to">@<span>gabrielesvelto</span></a></span> <span><a href="/user/hsivonen%40mastodon.social">@<span>hsivonen</span></a></span> not really. Some bugs are truly hard to find with fuzzing and are more easily identified by seeing code smell and trying to trace it back to the user. Reading and remembering code is limited by brain power / will power. As sad as it is: LLMs scale better here.</p>]]></description><link>https://board.circlewithadot.net/post/https://social.security.plumbing/users/freddy/statuses/116399153630586854</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.security.plumbing/users/freddy/statuses/116399153630586854</guid><dc:creator><![CDATA[freddy@social.security.plumbing]]></dc:creator><pubDate>Mon, 13 Apr 2026 19:55:32 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. on Mon, 13 Apr 2026 19:26:31 GMT]]></title><description><![CDATA[<p><span><a href="/user/hsivonen%40mastodon.social">@<span>hsivonen</span></a></span> isn't fuzzing a numbers game though? LLMs are fuzzers backed by billions, they'll absolutely find something, but so would everything else given the same resources and no restraint on how to spend them, no matter how wasteful.</p>]]></description><link>https://board.circlewithadot.net/post/https://mas.to/users/gabrielesvelto/statuses/116399039548474018</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mas.to/users/gabrielesvelto/statuses/116399039548474018</guid><dc:creator><![CDATA[gabrielesvelto@mas.to]]></dc:creator><pubDate>Mon, 13 Apr 2026 19:26:31 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. 
on Mon, 13 Apr 2026 17:51:35 GMT]]></title><description><![CDATA[<p><span><a href="/user/hsivonen%40mastodon.social">@<span>hsivonen</span></a></span> if you haven't run it on your own code, you're missing out. once you do that, it's hard to argue about it.</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/users/sayrer/statuses/116398666247434486</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/users/sayrer/statuses/116398666247434486</guid><dc:creator><![CDATA[sayrer@mastodon.social]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:51:35 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. on Mon, 13 Apr 2026 17:44:27 GMT]]></title><description><![CDATA[<p><span><a href="/user/hsivonen%40mastodon.social">@<span>hsivonen</span></a></span> Well. When those companies have touted and pushed their AI thingies at a thousand things they're unsuited for, that kinda sets the expectations.</p><p>Most of us are just so bloody fucken tired of hearing AI AI AI AI everywhere. You tune it out or go crazy. And so even the one thing it might be actually good at goes missed because folks are no longer listening. It's all so fantastically stupid.</p>]]></description><link>https://board.circlewithadot.net/post/https://mementomori.social/users/Turre/statuses/116398638216481804</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mementomori.social/users/Turre/statuses/116398638216481804</guid><dc:creator><![CDATA[turre@mementomori.social]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:44:27 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. on Mon, 13 Apr 2026 17:25:24 GMT]]></title><description><![CDATA[<p>Or folks go LOL at security incidents or code quality at an LLM company. 
Irrelevant to whether their model can find security bugs. The way this works is that you have a non-LLM oracle like ASAN. If the model found a way to trigger the oracle, then it’s not really productive to argue that it didn’t.</p><p>Why even post this considering the predictable hate? Because denial about the situation does not make users safer from attacks.</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/users/hsivonen/statuses/116398563306898972</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/users/hsivonen/statuses/116398563306898972</guid><dc:creator><![CDATA[hsivonen@mastodon.social]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:25:24 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. on Mon, 13 Apr 2026 17:24:56 GMT]]></title><description><![CDATA[<p>Then there’s the dismissal that, yes, LLMs now find security bugs, but the bugs could have been found by other methods. But evidently defenders hadn’t actually found them by other methods. (Unknown what attackers had already found.)</p><p>Or folks find it objectionable that the new capability has been made available to attackers and the proposed cure is to pay for access to the same LLM. But that does not make the existence of the capability untrue.</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/users/hsivonen/statuses/116398561464199032</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/users/hsivonen/statuses/116398561464199032</guid><dc:creator><![CDATA[hsivonen@mastodon.social]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:24:56 GMT</pubDate></item><item><title><![CDATA[Reply to I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs. 
on Mon, 13 Apr 2026 17:24:28 GMT]]></title><description><![CDATA[<p>And, yes, the Anthropic Mythos post fits a previously-seen pattern of “AI” companies marketing by danger, but saying that it’s marketing does not refute what the models that are already generally offered can do.</p><p>And people act like their own conjecture is more informative than what people from multiple projects that deal with security bug reports say. See e.g. <a href="https://mastodon.social/@bagder/116363034479757682" rel="nofollow noopener"><span>https://</span><span>mastodon.social/@bagder/116363</span><span>034479757682</span></a> .</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/users/hsivonen/statuses/116398559653380415</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/users/hsivonen/statuses/116398559653380415</guid><dc:creator><![CDATA[hsivonen@mastodon.social]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:24:28 GMT</pubDate></item></channel></rss>