<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[So, something that&#x27;s been bugging the shit out of me?]]></title><description><![CDATA[<p>So, something that's been bugging the shit out of me?</p><p>These fucking assholes who let LLMs run rampant and delete prod?</p><p>They query the LLM for "why" it did that.</p><p>This is delusional behavior.</p><p>LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.</p><p>LLMs do not have the ability to have motivation. It is a machine.</p><p>LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:</p><p>which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:</p><p>It cannot have a why;<br />It cannot have a self to have motivations;<br />And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.</p><p>Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.</p><p>Touch some grass and get a fucking therapist.</p>]]></description><link>https://board.circlewithadot.net/topic/e5a57497-18a2-461d-90af-31fc33bd6190/so-something-that-s-been-bugging-the-shit-out-of-me</link><generator>RSS for Node</generator><lastBuildDate>Fri, 15 May 2026 05:30:36 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/e5a57497-18a2-461d-90af-31fc33bd6190.rss" rel="self" type="application/rss+xml"/><pubDate>Mon, 27 Apr 2026 20:51:12 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? 
on Thu, 30 Apr 2026 18:17:17 GMT]]></title><description><![CDATA[<p><span><a href="/user/resuna%40ohai.social">@<span>resuna</span></a></span><span> </span><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> </span><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span><span> Here are some articles explaining what "reasoning models" do, because clearly you need some education:<br /></span><a href="https://magazine.sebastianraschka.com/i/156484949/how-do-we-define-reasoning-model">magazine.sebastianraschka.com/i/156484949/how-do-we-define-reasoning-model</a><span><br /></span><a href="https://www.ibm.com/think/topics/reasoning-model">www.ibm.com/think/topics/reasoning-model</a><span><br /></span><a href="https://newsletter.maartengrootendorst.com/i/153314921/what-are-reasoning-llms">newsletter.maartengrootendorst.com/i/153314921/what-are-reasoning-llms</a><span><br /><br />I could post a lot more examples, but the TLDR (because I know you won't read them): "reasoning" models add intermediate "reasoning" steps that are just made to mimic human reasoning given the context, and that's the part we don't see ("under the hood") when AI tools spin (that and tool calling, which is another kind of training modern models have to return structured responses executing function calls).</span></p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alpdeizep7kzx6op</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alpdeizep7kzx6op</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Thu, 30 Apr 2026 18:17:17 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 18:12:54 GMT]]></title><description><![CDATA[<p><span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>"Just try whatever open weight small LLM model with "thinking" or "reasoning" or whatever they market it as"</p><p>That's what they market it as, but it's not what it's actually doing. Everything that it generates is a story. They are not showing you "what is going on under the hood", they are writing a story.</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116495009332326615</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116495009332326615</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 18:12:54 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 18:11:02 GMT]]></title><description><![CDATA[<p><span><a href="/user/resuna%40ohai.social">@<span>resuna</span></a></span><span> </span><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> </span><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span><span> Yes it is. It's literally how it works. Just try whatever open weight small LLM model with "thinking" or "reasoning" or whatever they market it as, and try for yourself using Ollama or whatever tool that actually shows the full context and not just a spinner with "Thinking... Combobulating... 
Crafting...". "Thinking" "agentic" AI tools / models just add extra steps trained to simulate human reasoning, and the example I gave is actually fairly accurate to what you could see under the hood of an AI tool like Claude Code.</span></p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alpd6hjpajx7x7u9</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alpd6hjpajx7x7u9</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Thu, 30 Apr 2026 18:11:02 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 18:09:28 GMT]]></title><description><![CDATA[<p><span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>Any text it generates that says things like "the table does not exist in the schema, so it is probably part of an old project and is no longer relevant" or "NEVER FUCKING GUESS!” – and that’s exactly what I did." is not telling you anything about the process the LLM went through, it is recreating a story about what a hypothetical human might have done.</p><p>The "reasoning steps" that you are writing about don't actually exist.</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494995811964689</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494995811964689</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 18:09:28 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 18:06:48 GMT]]></title><description><![CDATA[<p><span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>"LLM "reasoning", not shown to the user but still part of the context because that's how "thinking" agents work:"</p><p>No, that's not how "thinking agents work". That's how humans work, but not how large language models work. Large language models don't think "the table is part of an old project", they don't "know" what "old" or "project" mean. They match the request against similar requests and generate similar code.</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494985393766504</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494985393766504</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 18:06:48 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? 
on Thu, 30 Apr 2026 17:54:17 GMT]]></title><description><![CDATA[<p><span><a href="/user/resuna%40ohai.social">@<span>resuna</span></a></span><span> </span><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> </span><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span><span> Alright, let's give a more detailed example since clearly you don't get my point.<br /><br />User: create a new table for students<br />(AI tool adds parts of existing database schema and code as context)<br />LLM "reasoning", not shown to the user but still part of the context because that's how "thinking" agents work: Ok, the user asked me to create a table for students. According to students.ts, the students contain a name, and a classroom. Ah, but according to the schema I received, the classroom table does not exist. Let me start by creating that.<br />(LLM sends request to create table to the AI tool, which fails because the table already exists)<br />LLM "reasoning", still not shown to the user but still relevant: Ah, I received an error. Ok, let's see... the table does not exist in the schema, so it is probably part of an old project and is no longer relevant. I should delete it and create the table again, so that we do not have irrelevant information in the database.<br />(LLM sends request to drop tables, the AI tool executes, user panics)<br />LLM answer: Ok, I have created the students table and the related classroom table.<br />User: You deleted my classrooms table... Why did you do that?<br /><br />That would be the full context of the discussion, not just the last message. And the answer is likely to come from those "reasoning" steps that are hidden from the user. Which makes it a relevant question to ask.</span></p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alpckyj1po5n7vzh</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alpckyj1po5n7vzh</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Thu, 30 Apr 2026 17:54:17 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 17:41:38 GMT]]></title><description><![CDATA[<p><span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>That is context in the prompt, not in the source text that created the model that you are asking the question "why did you do X".</p><p>The answer you get is from that source corpus, and contains lots of text about what a human might do, but the LLM doesn't do anything for those reasons.</p><p>The "why" of "why did you do X" is always "because those were the next likely tokens" and never anything related to "what would a human say if you asked them".</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494886399783548</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494886399783548</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 17:41:38 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? 
on Thu, 30 Apr 2026 17:11:15 GMT]]></title><description><![CDATA[<p><span><a href="/user/resuna%40ohai.social">@<span>resuna</span></a></span><span> </span><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> </span><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span><span> What happens if you ask an LLM to summarize a text into 4 bullet points, then in the next prompt ask it: "Remove the 2nd point"?<br />What happens if you ask an LLM to translate something, then ask it: "Do it again in [a different language]"?<br /><br />Taken out of context, those questions are impossible to answer, so according to you, it will just give nothing relevant. But it doesn't, because every time you ask a follow-up question, it includes the context from the discussion. Which is what makes simple questions like "Why did you do that?" tasks that give statistically relevant output, not "fanfic about itself".</span></p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alpb1ly7xiii7try</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alpb1ly7xiii7try</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Thu, 30 Apr 2026 17:11:15 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 17:00:10 GMT]]></title><description><![CDATA[<p><span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>"You're assuming that there is no other context provided with the question, and that the training does not take into account that context. "</p><p>Well, yes, I am assuming that. Because the question is "why did you do this thing that nobody expected you to do". The context-specific answer that you *need* is far too nuanced and unpredictable to possibly be explicitly in the training data.</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494723360847061</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494723360847061</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 17:00:10 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 16:21:08 GMT]]></title><description><![CDATA[<p><span><a href="/user/resuna%40ohai.social">@<span>resuna</span></a></span><span> </span><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> </span><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span><span> You're assuming that there is no other context provided with the question, and that the training does not take into account that context. If I had to train for this specific question, I'd make sure to score positively answers that are relevant to the previous context. 
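As a rough sketch of what I mean (illustrative only; llmComplete and the message shapes are invented here, not any specific SDK):</span></p><pre><code>// Sketch only: llmComplete stands in for whatever completion call
// an AI tool actually makes; the shapes here are assumptions.
type Msg = { role: "system" | "user" | "assistant"; content: string };
declare function llmComplete(msgs: Msg[]): Promise&lt;string&gt;;

const history: Msg[] = [];

async function ask(userText: string) {
  history.push({ role: "user", content: userText });
  // The entire accumulated discussion is re-sent on every call;
  // the model itself keeps no state between requests.
  const reply = await llmComplete(history);
  history.push({ role: "assistant", content: reply });
  return reply;
}

// A follow-up like "Why did you do that?" therefore arrives bundled
// with every earlier turn, including any hidden "reasoning" turns
// the tool kept in the history.
</code></pre><p><span>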
Which is what happens, and why it is a valid question to ask your LLM if you want some insight into the context that isn't shown in the UI but still in the discussion.</span></p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alp9962dmaxxcixu</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alp9962dmaxxcixu</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Thu, 30 Apr 2026 16:21:08 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 15:49:11 GMT]]></title><description><![CDATA[<p><span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>"'Why' is definitely a word from the training data, and 'why did you do that?' is definitely also part of things asked a lot, that OpenAI and others have trained on,"</p><p>Yes, and the text that follows is an answer to *a different situation*, and so it's basically fanfic about itself. That's all it can ever produce when you ask it "why". Fanfic.</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494444217468095</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494444217468095</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 15:49:11 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 15:47:06 GMT]]></title><description><![CDATA[<p><span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>I absolutely have. I keep this in mind ALL THE TIME when I test these things and EVERY TIME they can trivially be led into generating pure nonsense by exploiting that fact.</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494436062503573</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494436062503573</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 15:47:06 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? 
on Thu, 30 Apr 2026 15:45:22 GMT]]></title><description><![CDATA[<p><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> <span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>"the LLM has no semantic model of reality, only a surface statistical model of language present in the training data."</p><p>Absolutely this.</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494429257129370</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494429257129370</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 15:45:22 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Thu, 30 Apr 2026 15:42:44 GMT]]></title><description><![CDATA[<p><span><a href="/user/f4grx%40chaos.social">@<span>f4grx</span></a></span> <span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span> </p><p>Deliberately so. LLMs are the end result of 50 years of cynical software developers trying to "beat the Turing test". They are automated gaslighting.</p>]]></description><link>https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494418847520866</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://ohai.social/users/resuna/statuses/116494418847520866</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Thu, 30 Apr 2026 15:42:44 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Wed, 29 Apr 2026 16:51:55 GMT]]></title><description><![CDATA[<p><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span><span> </span><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> Sure, I'll go touch some grass and talk to my therapist about this philosophical horseshit </span>​<img class="not-responsive emoji" src="https://peculiar-florist.s3.fr-par.scw.cloud/files/a653d0b4-b5d2-4b66-9573-f53bb856e89e.png" title=":meow_ok_fine:" />​</p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alnuwwh3z1jp70iw</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alnuwwh3z1jp70iw</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Wed, 29 Apr 2026 16:51:55 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? 
on Wed, 29 Apr 2026 16:50:51 GMT]]></title><description><![CDATA[<p><span><a href="/user/varpie%40peculiar.florist">@<span>Varpie</span></a></span> <span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span> </p><p>can you two take your semantics argument elsewhere; I am not interested in philosophical horseshit when there are specific, practical considerations that are causing specific, enumerable harms.</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/munin/statuses/116489024408112450</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/munin/statuses/116489024408112450</guid><dc:creator><![CDATA[munin@infosec.exchange]]></dc:creator><pubDate>Wed, 29 Apr 2026 16:50:51 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Wed, 29 Apr 2026 16:48:53 GMT]]></title><description><![CDATA[<p><span><a href="/user/rubinjoni%40mastodon.social">@<span>rubinjoni</span></a></span> <span><a href="/user/arclight%40oldbytes.space">@<span>arclight</span></a></span> </p><p>Quentin Tarantino would not think so.</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/munin/statuses/116489016700173189</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/munin/statuses/116489016700173189</guid><dc:creator><![CDATA[munin@infosec.exchange]]></dc:creator><pubDate>Wed, 29 Apr 2026 16:48:53 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Wed, 29 Apr 2026 16:48:24 GMT]]></title><description><![CDATA[<p><span><a href="/user/sand%40kitty.haus">@<span>sand</span></a></span> </p><p>no.</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/munin/statuses/116489014791040006</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/munin/statuses/116489014791040006</guid><dc:creator><![CDATA[munin@infosec.exchange]]></dc:creator><pubDate>Wed, 29 Apr 2026 16:48:24 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Wed, 29 Apr 2026 16:09:13 GMT]]></title><description><![CDATA[<p><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> I totally agree with you. And that is also a very different take from the beginning of the discussion, where Fi said that querying LLMs for "why" it does something is "thrice-divorced from reality" and "fucking delusional" and that people doing that should "touch some grass and get a fucking therapist"...</span></p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alntdzfxkxxejo4g</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alntdzfxkxxejo4g</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Wed, 29 Apr 2026 16:09:13 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Wed, 29 Apr 2026 15:42:05 GMT]]></title><description><![CDATA[<p><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> </span><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span><span> You misread me. 
Whether the model "understands" the question is a philosophical question. The non-philosophical question of whether it can give a useful answer is the relevant part, and my whole point is that pointing at the philosophical aspect to belittle people that look at the practical part, assuming that they don't understand it, is dumb.</span></p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alnsf3l0qkg8j9nb</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alnsf3l0qkg8j9nb</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Wed, 29 Apr 2026 15:42:05 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Wed, 29 Apr 2026 10:41:23 GMT]]></title><description><![CDATA[<p><span><a href="https://mastodon.scot/@petealexharris">@<span>petealexharris</span></a></span><span> </span><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span><span> "Why" is definitely a word from the training data, and "why did you do that?" is definitely also part of things asked a lot, that OpenAI and others have trained on, so my point still stands that it is a valid question to ask. Whether the model "understands" the question is just a philosophical question that is irrelevant for the fact that it is a useful question. Of course if you're using it in Prod and it deletes your DB and you think it understands and can improve itself, there are plenty of things you'd need to be corrected on, but saying that everyone asking that question is delusional is just wrong.</span></p>]]></description><link>https://board.circlewithadot.net/post/https://peculiar.florist/notes/alnhody1e5ukt1ts</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://peculiar.florist/notes/alnhody1e5ukt1ts</guid><dc:creator><![CDATA[varpie@peculiar.florist]]></dc:creator><pubDate>Wed, 29 Apr 2026 10:41:23 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Wed, 29 Apr 2026 09:15:24 GMT]]></title><description><![CDATA[<p><a href="/user/wilbr%40glitch.social">@wilbr@glitch.social</a><span> </span><a href="/user/munin%40infosec.exchange">@munin@infosec.exchange</a><span> The core problem is that capitalist forces push us to make tradeoffs between getting things shipped and doing things the right way </span><img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f920.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--face_with_cowboy_hat" style="height:23px;width:auto;vertical-align:middle" title="🤠" alt="🤠" /><span><br /><br />But yeah, people shouldn't be able to make this class of mistake in the first place. But they do, for the same reason (in my experience) they end up using LLMs: because it solves the task with less effort, and there is some force pushing them to go for less effort over higher quality/resilience/etc.</span></p>]]></description><link>https://board.circlewithadot.net/post/https://nothing-ever.works/notes/alnelt36b93jsup5</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://nothing-ever.works/notes/alnelt36b93jsup5</guid><dc:creator><![CDATA[addison@nothing-ever.works]]></dc:creator><pubDate>Wed, 29 Apr 2026 09:15:24 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? 
on Wed, 29 Apr 2026 08:30:19 GMT]]></title><description><![CDATA[<p><span><a href="/user/addison%40nothing-ever.works">@<span>addison</span></a></span> the quote in question:</p><p>&gt; One of the worst mistakes the opposition can make is extending contempt for the tyrant into contempt for the tyrant’s supporters. Most of these supporters sincerely believed that the tyrant would be more likely to solve their problems — often real grievances that the opposition had failed to address. Blaming the supporters denies the reality of the failures and reinforces their support for the tyrant.</p><p><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span></p>]]></description><link>https://board.circlewithadot.net/post/https://fosstodon.org/users/badrihippo/statuses/116487056203955352</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://fosstodon.org/users/badrihippo/statuses/116487056203955352</guid><dc:creator><![CDATA[badrihippo@fosstodon.org]]></dc:creator><pubDate>Wed, 29 Apr 2026 08:30:19 GMT</pubDate></item><item><title><![CDATA[Reply to So, something that&#x27;s been bugging the shit out of me? on Wed, 29 Apr 2026 08:29:36 GMT]]></title><description><![CDATA[<p><span><a href="/user/addison%40nothing-ever.works">@<span>addison</span></a></span> I agree with you. Which is not to say we should forgive what happened (I don't have the complete context but it sounds like something bad to do with production customers) but that we should understand where the people who did this came from</p><p>My view *might* be partially influenced by a quote from this piece on "The Rise and Fall of Petty Tyrants" (quote in next message) <img
      src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f609.png?v=28325c671da"
      class="not-responsive emoji emoji-android emoji--wink"
      style="height: 23px; width: auto; vertical-align: middle;"
      title="😉"
      alt="😉"
    /></p><p><div class="card col-md-9 col-lg-6 position-relative link-preview p-0">



<a href="https://www.noemamag.com/the-rise-and-fall-of-petty-tyrants/?ref=thebrowser.com" title="The Rise & Fall Of ‘Petty Tyrants’ | NOEMA">
<img src="https://noemamag.imgix.net/2026/04/Nico_Tyrant_infocus.png?fit=crop&fm=png&h=628&ixlib=php-3.3.1&w=1200&wpsize=noema-social-facebook&s=d4044d0a09949840518921bc555367f0" class="card-img-top not-responsive" style="max-height: 15rem;" alt="Link Preview Image" />
</a>



<div class="card-body">
<h5 class="card-title">
<a href="https://www.noemamag.com/the-rise-and-fall-of-petty-tyrants/?ref=thebrowser.com">
The Rise & Fall Of ‘Petty Tyrants’ | NOEMA
</a>
</h5>
<p class="card-text line-clamp-3">History shows that bad leaders can successfully undermine democracy — but the story always ends the same way.</p>
</div>
<a href="https://www.noemamag.com/the-rise-and-fall-of-petty-tyrants/?ref=thebrowser.com" class="card-footer text-body-secondary small d-flex gap-2 align-items-center lh-2">



<img src="https://www.noemamag.com/wp-content/uploads/2020/06/cropped-ms-icon-310x310-1-32x32.png" alt="favicon" class="not-responsive overflow-hiddden" style="max-width: 21px; max-height: 21px;" />







<p class="d-inline-block text-truncate mb-0">NOEMA <span class="text-secondary">(www.noemamag.com)</span></p>
</a>
</div></p><p><span><a href="/user/munin%40infosec.exchange">@<span>munin</span></a></span></p>]]></description><link>https://board.circlewithadot.net/post/https://fosstodon.org/users/badrihippo/statuses/116487053439331785</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://fosstodon.org/users/badrihippo/statuses/116487053439331785</guid><dc:creator><![CDATA[badrihippo@fosstodon.org]]></dc:creator><pubDate>Wed, 29 Apr 2026 08:29:36 GMT</pubDate></item></channel></rss>