<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The reason I try not to write opinion pieces about AI is that there are two possible outcomes for punditry in general:]]></title><description><![CDATA[<p>The reason I try not to write opinion pieces about AI is that there are two possible outcomes for punditry in general:</p><p>1) You're wrong and the evidence is preserved for all eternity,</p><p>2) You're right, things happen, and a year later, your take doesn't sound insightful at all.</p><p>That said, on the topic of AI, I will say three things. First, I think that a lot of white-collar low-agency professions are in trouble. What's new is that this includes professions that required considerable creativity / skill.</p><p>Second, on the flip side, I think that high-agency jobs are relatively safe, because "agency" translates to "it's someone else's problem to supervise this thing day-to-day" / "someone else goes to prison". But many of them - including infosec - will experience downward price pressure.</p><p>Third, I don't necessarily buy that the winning move for everyone is to indiscriminately embrace AI tools. It gives you an edge, but to mix metaphors, your edge has no moat. 
You gotta find ways to make yourself useful that go beyond being able to write a prompt.</p>]]></description><link>https://board.circlewithadot.net/topic/ea03556d-8a75-4d45-ba9a-fd0a57057120/the-reason-i-try-not-to-write-opinion-pieces-about-ai-is-that-there-are-two-possible-outcomes-for-punditry-in-general</link><generator>RSS for Node</generator><lastBuildDate>Mon, 06 Apr 2026 19:34:36 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/ea03556d-8a75-4d45-ba9a-fd0a57057120.rss" rel="self" type="application/rss+xml"/><pubDate>Wed, 01 Apr 2026 15:33:39 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to The reason I try not to write opinion pieces about AI is that there are two possible outcomes for punditry in general: on Wed, 01 Apr 2026 16:02:50 GMT]]></title><description><![CDATA[<p><span><a href="/user/x_cli%40infosec.exchange">@<span>x_cli</span></a></span> I have thoughts on that matter</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/lcamtuf/statuses/116330290932709605</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/lcamtuf/statuses/116330290932709605</guid><dc:creator><![CDATA[lcamtuf@infosec.exchange]]></dc:creator><pubDate>Wed, 01 Apr 2026 16:02:50 GMT</pubDate></item><item><title><![CDATA[Reply to The reason I try not to write opinion pieces about AI is that there are two possible outcomes for punditry in general: on Wed, 01 Apr 2026 15:57:43 GMT]]></title><description><![CDATA[<p><span><a href="/user/lcamtuf%40infosec.exchange">@<span>lcamtuf</span></a></span> One is never wrong when they say that training AI is deeply unethical.<br />I'll spare you the list. 
You know.<br />That much ain't gonna be wrong any time soon.</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/x_cli/statuses/116330270811477306</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/x_cli/statuses/116330270811477306</guid><dc:creator><![CDATA[x_cli@infosec.exchange]]></dc:creator><pubDate>Wed, 01 Apr 2026 15:57:43 GMT</pubDate></item></channel></rss>