<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Anthropic&#x27;s &quot;too dangerous&quot; AI was accessed by guessing the URL]]></title><description><![CDATA[<p>Anthropic's "too dangerous" AI was accessed by guessing the URL</p><p>l o l</p><p></p><div class="card col-md-9 col-lg-6 position-relative link-preview p-0">
<a href="https://boingboing.net/2026/04/23/anthropics-too-dangerous-ai-was-accessed-by-guessing-the-url.html" title="Anthropic&#39;s &quot;too dangerous&quot; AI was accessed by guessing the URL">
<img src="https://boingboing.net/wp-content/uploads/2026/03/anthropic.jpg" class="card-img-top not-responsive" style="max-height:15rem" alt="Link Preview Image" />
</a>
<div class="card-body">
<h5 class="card-title">
<a href="https://boingboing.net/2026/04/23/anthropics-too-dangerous-ai-was-accessed-by-guessing-the-url.html">
Anthropic's "too dangerous" AI was accessed by guessing the URL
</a>
</h5>
<p class="card-text line-clamp-3">The Linux bug Anthropic highlighted as proof? Found by their public model. The 271 Firefox bugs? None beat a human expert.</p>
</div>
<a href="https://boingboing.net/2026/04/23/anthropics-too-dangerous-ai-was-accessed-by-guessing-the-url.html" class="card-footer text-body-secondary small d-flex gap-2 align-items-center lh-2">
<img src="https://boingboing.net/wp-content/uploads/fbrfg/favicon-32x32.png" alt="favicon" class="not-responsive overflow-hidden" style="max-width:21px;max-height:21px" />
<p class="d-inline-block text-truncate mb-0">Boing Boing <span class="text-secondary">(boingboing.net)</span></p>
</a>
</div><p></p><p>the story they cribbed:<br /><a href="https://www.theregister.com/2026/04/22/anthropic_mythos_hype_nothingburger/" rel="nofollow noopener">https://www.theregister.com/2026/04/22/anthropic_mythos_hype_nothingburger/</a></p>]]></description><link>https://board.circlewithadot.net/topic/7b518321-9852-480b-b850-f41ab8dad487/anthropic-s-too-dangerous-ai-was-accessed-by-guessing-the-url</link><generator>RSS for Node</generator><lastBuildDate>Fri, 15 May 2026 00:13:34 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/7b518321-9852-480b-b850-f41ab8dad487.rss" rel="self" type="application/rss+xml"/><pubDate>Fri, 24 Apr 2026 23:34:15 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to Anthropic&#x27;s &quot;too dangerous&quot; AI was accessed by guessing the URL on Fri, 24 Apr 2026 23:43:36 GMT]]></title><description><![CDATA[<p><a href="https://circumstances.run/@davidgerard">@davidgerard</a> It seems very (sarcasm) responsible and ethical to me to put a "too dangerous" AI straight up on the Internet.
Are we vibe networking now?</p>]]></description><link>https://board.circlewithadot.net/post/https://rage.love/users/perigee/statuses/116462335820432115</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://rage.love/users/perigee/statuses/116462335820432115</guid><dc:creator><![CDATA[perigee@rage.love]]></dc:creator><pubDate>Fri, 24 Apr 2026 23:43:36 GMT</pubDate></item><item><title><![CDATA[Reply to Anthropic&#x27;s &quot;too dangerous&quot; AI was accessed by guessing the URL on Fri, 24 Apr 2026 23:41:47 GMT]]></title><description><![CDATA[<p>@davidgerard@circumstances.run if only they had some super intelligence that could have pointed out this cybersecurity flaw.</p>]]></description><link>https://board.circlewithadot.net/post/https://ecoevo.social/users/GodsoeWilliam/statuses/116462328680178052</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ecoevo.social/users/GodsoeWilliam/statuses/116462328680178052</guid><dc:creator><![CDATA[godsoewilliam@ecoevo.social]]></dc:creator><pubDate>Fri, 24 Apr 2026 23:41:47 GMT</pubDate></item><item><title><![CDATA[Reply to Anthropic&#x27;s &quot;too dangerous&quot; AI was accessed by guessing the URL on Fri, 24 Apr 2026 23:37:09 GMT]]></title><description><![CDATA[<p><a href="https://circumstances.run/@davidgerard">@davidgerard</a> <a href="https://www.youtube.com/watch?v=znWxaDZ2OjY" rel="nofollow noopener">https://www.youtube.com/watch?v=znWxaDZ2OjY</a> "One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learning, or other aspects of cognition, a matter of principle, irremediable.... The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively."
-- Noam Chomsky</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/ap/users/116175731239673526/statuses/116462310493418900</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/ap/users/116175731239673526/statuses/116462310493418900</guid><dc:creator><![CDATA[bms48@mastodon.social]]></dc:creator><pubDate>Fri, 24 Apr 2026 23:37:09 GMT</pubDate></item><item><title><![CDATA[Reply to Anthropic&#x27;s &quot;too dangerous&quot; AI was accessed by guessing the URL on Fri, 24 Apr 2026 23:36:53 GMT]]></title><description><![CDATA[<span><a href="https://circumstances.run/@davidgerard" rel="ugc">@<span>davidgerard</span></a></span> can the crash come already the stupidity of these guys is getting beyond boring]]></description><link>https://board.circlewithadot.net/post/https://social.bsdlab.au/objects/0e72f575-307f-4459-9bfe-ee3d888db412</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.bsdlab.au/objects/0e72f575-307f-4459-9bfe-ee3d888db412</guid><dc:creator><![CDATA[oxy@social.bsdlab.au]]></dc:creator><pubDate>Fri, 24 Apr 2026 23:36:53 GMT</pubDate></item></channel></rss>