When we use words like "introspection", "hallucination", "understand", "discover", and so on when we're talking about LLMs, we make a dangerous mistake.

Topic: https://board.circlewithadot.net/topic/406c8c01-0602-4ba0-8e70-bc5cd038c1be/when-we-use-words-like-introspection-hallucination-understand-discover-and-so-on-when-we-re-talking-about-llms-we-make-a-dangerous-mistake.

Posted by jitterted@sfba.social on Tue, 07 Apr 2026 05:02:36 GMT:

When we use words like "introspection", "hallucination", "understand", "discover", and so on when we're talking about LLMs, we make a dangerous mistake. LLMs have no consciousness, agency, or self-awareness, and using such terms can make it seem like they do.

(Even "writing code" hits different than "generates code".)

This isn't a pro- or anti-AI comment; it's a truth vs. lying (perhaps to oneself) comment. How we (especially the sellers of trained models) talk about these statistical token generators affects how/when/if we use them and what we expect of them.

Reply from thirstybear@agilodon.social, Tue, 07 Apr 2026 06:28:26 GMT:

@jitterted Agreed. I try to use terms like “generates code”, “statistically likely output”, and of course “stochastic parroting” (a remarkably accurate term).

Still struggling to find a phrase that hits home hard enough for the mistakes, though; currently I usually say it has generated bad output.

Reply from elduvelle@neuromatch.social, Tue, 07 Apr 2026 18:24:47 GMT:

@jitterted Thank you! This language is so annoying.