<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed.]]></title><description><![CDATA[<p>At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed.</p><p>When I push back, I get two reactions. Authors say that it just helps them express themselves. AI promoters say "get used to it".</p><p>I don't think we should: it boils down to asymmetry. Our time here is limited. Social interaction on the internet breaks down if it takes ~0 effort to publish, but readers are still expected to use their own eyeballs and brains to engage.</p><p>So, I feel that we have three choices:</p><p>1) Refuse to engage with LLM writing *no matter if the article makes a good point or not*.</p><p>2) Embrace it and have my agent argue with your agent forever, for internet points.</p><p>3) Call it quits and move to an off-the-grid cabin in the woods.</p>]]></description><link>https://board.circlewithadot.net/topic/1f7eb213-c6d0-4541-a2a3-a93f4f888700/at-this-point-llm-written-think-pieces-make-up-about-half-of-all-long-form-writing-in-my-social-media-feed.</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 20:06:42 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/1f7eb213-c6d0-4541-a2a3-a93f4f888700.rss" rel="self" type="application/rss+xml"/><pubDate>Sun, 12 Apr 2026 15:35:35 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. 
on Sun, 12 Apr 2026 18:18:52 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> option 3 is much more rewarding at least</p>

<div class="row mt-3"><div class="col-12 mt-3"><img class="img-thumbnail" src="https://cdn.masto.host/snabelenno/media_attachments/files/116/393/105/380/139/999/original/55ec03627ddb3405.jpg" alt="Link Preview Image" /></div></div>]]></description><link>https://board.circlewithadot.net/post/https://snabelen.no/users/atlefren/statuses/116393111227265139</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://snabelen.no/users/atlefren/statuses/116393111227265139</guid><dc:creator><![CDATA[atlefren@snabelen.no]]></dc:creator><pubDate>Sun, 12 Apr 2026 18:18:52 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:56:29 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> </p><p>"LLM-written think pieces make up about half of all long-form writing in my social media feed"</p><p>fourth choice-- get tf off whatever <img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f480.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--skull" style="height:23px;width:auto;vertical-align:middle" title="💀" alt="💀" /> hellscape <img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f480.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--skull" style="height:23px;width:auto;vertical-align:middle" title="💀" alt="💀" /> masquerading as "social" media you're seeing this on!</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/ap/users/115736792646413589/statuses/116393023220245220</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/ap/users/115736792646413589/statuses/116393023220245220</guid><dc:creator><![CDATA[kitkat_blue@mastodon.social]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:56:29 
GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:56:10 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange" rel="nofollow noopener">@<span>lcamtuf</span></a></span> I don't understand the "LLM helped the poor sod whose first language is English express himself" point because every time I read an LLMism like "it is not x, its y" I feel like a part of my soul has been devoured. Bad human-written prose is better than copy-pasting LLM generated text. At that point, the friction of constructing prose which makes your thoughts coherent has been eliminated. No one should waste time reading it.</p>]]></description><link>https://board.circlewithadot.net/post/https://toots.matapacos.dog/users/loathsome_dongeater/statuses/116393021963891048</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://toots.matapacos.dog/users/loathsome_dongeater/statuses/116393021963891048</guid><dc:creator><![CDATA[loathsome_dongeater@toots.matapacos.dog]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:56:10 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:56:08 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> When someone needs genAI to express themselves, they aren't. They do not - by their own unconscious admission - have anything to add. They do not have an original thought, nor created something beyond a vague concept. Their input is, in its current form, useless.</p><p>Until now, those people just wouldn't express themselves at length. We could smile, shrug, and remain friends. Pretend they have valuable thoughts.</p><p>We may have to just stop pretending. But it's rude. 
Now what?</p>]]></description><link>https://board.circlewithadot.net/post/https://mas.to/users/helge/statuses/116393021833376540</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mas.to/users/helge/statuses/116393021833376540</guid><dc:creator><![CDATA[helge@mas.to]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:56:08 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:55:16 GMT]]></title><description><![CDATA[<p><span><a href="/user/dalias%40hachyderm.io">@<span>dalias</span></a></span> <span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> it's making me read a lot fewer think pieces, that's for sure</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/users/regehr/statuses/116393018412188804</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/users/regehr/statuses/116393018412188804</guid><dc:creator><![CDATA[regehr@mastodon.social]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:55:16 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:53:29 GMT]]></title><description><![CDATA[<p><span><a href="/user/jztusk%40mastodon.social">@<span>jztusk</span></a></span> <span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> This. If they're not saying anything that *couldn't have been interpolated from existing Orange Site drivel*, I don't much care if a human spent time slopping it together manually or used an LLM for it. 
Either way it's not reflecting any genuine thought and not worth reading.</p>]]></description><link>https://board.circlewithadot.net/post/https://hachyderm.io/users/dalias/statuses/116393011403012075</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://hachyderm.io/users/dalias/statuses/116393011403012075</guid><dc:creator><![CDATA[dalias@hachyderm.io]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:53:29 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:51:02 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> I don't engage with that shit even when humans write it. I'm sure as hell not engaging when they didn't even bother.</p>]]></description><link>https://board.circlewithadot.net/post/https://hachyderm.io/users/dalias/statuses/116393001765594530</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://hachyderm.io/users/dalias/statuses/116393001765594530</guid><dc:creator><![CDATA[dalias@hachyderm.io]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:51:02 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. 
on Sun, 12 Apr 2026 17:45:29 GMT]]></title><description><![CDATA[<p><span><a href="/user/fritzadalis%40infosec.exchange">@<span>FritzAdalis</span></a></span> I have Starlink on the roof, but I guess it wouldn't be hard to shoot it off...</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/lcamtuf/statuses/116392979979582559</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/lcamtuf/statuses/116392979979582559</guid><dc:creator><![CDATA[lcamtuf@infosec.exchange]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:45:29 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:37:35 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> You chose the combination of 3) AND ... ??? <img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f601.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--grin" style="height:23px;width:auto;vertical-align:middle" title="😁" alt="😁" /></p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/ruslan/statuses/116392948859425308</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/ruslan/statuses/116392948859425308</guid><dc:creator><![CDATA[ruslan@infosec.exchange]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:37:35 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. 
on Sun, 12 Apr 2026 17:34:54 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> 3</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/Sikorsky78/statuses/116392938345286568</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/Sikorsky78/statuses/116392938345286568</guid><dc:creator><![CDATA[sikorsky78@infosec.exchange]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:34:54 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:32:01 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> 4) Reply "That's a good post, but I think a more valid point would be if you could go ahead and calculate this double SHA256 hash with a bunch of leading zeros" ?</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/mikesiegel/statuses/116392927018321996</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/mikesiegel/statuses/116392927018321996</guid><dc:creator><![CDATA[mikesiegel@infosec.exchange]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:32:01 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:22:34 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> personally I think humans have a critical vulnerability in the interaction of being handed a completely plausible thought, whether encoded as speech/electrical signals/vision that once holding it will invent reasons why it is correct. 
That, or we're just lazy; haven't decided.</p>]]></description><link>https://board.circlewithadot.net/post/https://social.seattle.wa.us/users/OwOday/statuses/116392889873508411</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.seattle.wa.us/users/OwOday/statuses/116392889873508411</guid><dc:creator><![CDATA[owoday@social.seattle.wa.us]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:22:34 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:20:27 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> honestly it reminds me of this study <a href="https://people.psych.ucsb.edu/gazzaniga/PDF/Language%20after%20section%20of%20the%20cerebral%20commissueres%20(1967).pdf" rel="nofollow noopener"><span>https://</span><span>people.psych.ucsb.edu/gazzanig</span><span>a/PDF/Language%20after%20section%20of%20the%20cerebral%20commissueres%20(1967).pdf</span></a></p><p>They separate the sides of the brain and try to communicate with them individually.</p><p>&gt; when an object was placed in the left hand (right hemisphere sensing it), the speaking left hemisphere fabricated a verbal explanation for why the patient was holding it</p><p>Later studies (60s so could be horseshit) worked with a theory of one side being more of an interpreter.</p>]]></description><link>https://board.circlewithadot.net/post/https://social.seattle.wa.us/users/OwOday/statuses/116392881493933293</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.seattle.wa.us/users/OwOday/statuses/116392881493933293</guid><dc:creator><![CDATA[owoday@social.seattle.wa.us]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:20:27 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media 
feed. on Sun, 12 Apr 2026 17:13:23 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange" rel="nofollow noopener">@<span>lcamtuf</span></a></span> when i notice something is untagged LLM output posing as human authorship, i back out and issue all the negative feedback signals i have access to</p>]]></description><link>https://board.circlewithadot.net/post/https://tiny.tilde.website/users/astrid/statuses/116392853728951863</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://tiny.tilde.website/users/astrid/statuses/116392853728951863</guid><dc:creator><![CDATA[astrid@tiny.tilde.website]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:13:23 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:10:33 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> <br />You're probably a chainsaw vs. telephone pole away from #3.</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/FritzAdalis/statuses/116392842569587585</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/FritzAdalis/statuses/116392842569587585</guid><dc:creator><![CDATA[fritzadalis@infosec.exchange]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:10:33 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. 
on Sun, 12 Apr 2026 17:07:55 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> </p><p>I don't have to *prove* something is LLM-produced to conclude "this writer didn't bother to make sure that their writing clearly isn't LLM", and then yeet them permanently into the "don't bother" list.</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/users/jztusk/statuses/116392832237275811</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/users/jztusk/statuses/116392832237275811</guid><dc:creator><![CDATA[jztusk@mastodon.social]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:07:55 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 17:07:53 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> If you can afford an off-the-grid cabin, why wouldn’t you already be there</p>]]></description><link>https://board.circlewithadot.net/post/https://mementomori.social/ap/users/115999354548045560/statuses/116392832132695541</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mementomori.social/ap/users/115999354548045560/statuses/116392832132695541</guid><dc:creator><![CDATA[mojala@mementomori.social]]></dc:creator><pubDate>Sun, 12 Apr 2026 17:07:53 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 16:54:39 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> Only half? Seriously, I'd tweak your #1 to make it less dependent on detecting LLM writing [1] and alter the condition to include quality [2]. 
If the writing is <em>well written AND makes a good point</em> I'd say it's worthwhile.<br />I doubt there's much of this at all today, but why would it be so bad <em>if</em> it became a thing? <br />NOTES: [1] this isn't easy to detect accurately by software (and will get harder), and it's time-consuming to do manually, plus false positives would be a loss.<br />[2] Low quality writing (LLM or human) is best avoided and can be detected quickly and accurately.</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/lmk/statuses/116392780065309007</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/lmk/statuses/116392780065309007</guid><dc:creator><![CDATA[lmk@infosec.exchange]]></dc:creator><pubDate>Sun, 12 Apr 2026 16:54:39 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 16:31:48 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> I think 3) sounds the most appealing <img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f609.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--wink" style="height:23px;width:auto;vertical-align:middle" title="😉" alt="😉" /></p>]]></description><link>https://board.circlewithadot.net/post/https://social.security.plumbing/users/asante/statuses/116392690208667177</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://social.security.plumbing/users/asante/statuses/116392690208667177</guid><dc:creator><![CDATA[asante@social.security.plumbing]]></dc:creator><pubDate>Sun, 12 Apr 2026 16:31:48 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. 
on Sun, 12 Apr 2026 16:10:39 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> I had a similar discussion with a family member last night who still uses FB. </p><p>Since, globally, capitalism is destroying not just the environment but culture, and all forms of media, at this stage, are ingrained in capitalism, I break it down like this:</p><p>You are born with a single resource that has value: Your Minutes. Everything you will ever do is based, simply, on how you use/spend your minutes. </p><p>There are just over 1500 billionaires in the US. They did not get there via hard work. They became billionaires from taking minutes. All forms of media are designed to exchange your minutes for an interaction. Some are far more destructive and consume the most (FB, X, AMZ, et al) and some are high value (Fediverse, wikipedia, archive, a phone). The ones who barely survive are the most beneficial to me. The others, pure cancer. </p><p>The easiest, least life disrupting, highest ROI action anyone can take to stop the billionaires, save minutes to spend elsewhere, and recoup personal and societal social/mental health is to simply stop wasting minutes on any of their shit. If even 30% of the subscribers to any monthly fee thing stopped, right now, their "wealth" would crash in months. They remain in that slot because they are betting on people staying glued and giving them minutes. Reading a book or wiki article, meeting a friend for lunch, learning literally anything new by reading a book, taking a class, practicing, etc. is better for your life and a wiser spend of the minutes.</p><p>Could you get your information from something like wikipedia, any of these sources: <a href="https://www.trustworthymedia.org/list-of-independent-media/" rel="nofollow noopener"><span>https://www.</span><span>trustworthymedia.org/list-of-i</span><span>ndependent-media/</span></a> and walk away from the AI shit show? 
</p><p>I have zero subscriptions, haven't for years, have no other social media than this acct, make art while listening to music, audiobooks, or watching movies or TV (btw, archive.org has a litany of old tv and film and an inconceivable amount of music and reading material). If I need to learn something new, I generally call a friend or ask around until I can find someone willing to teach me enough to get me started. Or I just keep at it until I sort it out. And, quite honestly, my only regret is not walking away back in 2000 when I bitched about this very thing happening. </p><p>The greatest boondoggle capitalism ever pulled, and the root of all of its evil and issues, is convincing people that if they were not working for pay for someone else their time had no value. Baked into societal mindset is the brainwash of "I'm not doing anything right now. So the time is useless." Social media is born of that. Lack of motivation thrives in that. What better way to get people to give up their minutes than convince them they have no value at all. They'd not be billionaires and not be actively trying to keep it, if they could not convince you to give them your minutes. </p><p>You can stay here without engaging with the shit and still have value and advance yourself. Or the cabin, to me, is a beautiful way to go. 
Everyone dreams of longer vacations to "get away"; convincing you that living like that daily is useless is just their fear of losing access to your time.</p>]]></description><link>https://board.circlewithadot.net/post/https://defcon.social/users/retech/statuses/116392607080850767</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://defcon.social/users/retech/statuses/116392607080850767</guid><dc:creator><![CDATA[retech@defcon.social]]></dc:creator><pubDate>Sun, 12 Apr 2026 16:10:39 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 16:07:36 GMT]]></title><description><![CDATA[@lcamtuf Not to brag, but my RSS reader is quite clean, which makes #1 a lot easier. It often takes quite a while to notice if an article is fake, by which point you've spent 10 minutes reading an LLM-generated post.]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/nothacking/statuses/116392595051736661</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/nothacking/statuses/116392595051736661</guid><dc:creator><![CDATA[nothacking@infosec.exchange]]></dc:creator><pubDate>Sun, 12 Apr 2026 16:07:36 GMT</pubDate></item><item><title><![CDATA[Reply to At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed. on Sun, 12 Apr 2026 15:37:13 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> I heavily curate my feed / intake. 
There’s only so many high quality contents I can take in per day anyway.</p>]]></description><link>https://board.circlewithadot.net/post/https://hachyderm.io/users/mnl/statuses/116392475594539602</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://hachyderm.io/users/mnl/statuses/116392475594539602</guid><dc:creator><![CDATA[mnl@hachyderm.io]]></dc:creator><pubDate>Sun, 12 Apr 2026 15:37:13 GMT</pubDate></item></channel></rss>