<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[I think I&#x27;m just going to say it.]]></title><description><![CDATA[<p>I think I'm just going to say it.</p><p>The Fediverse has a strong negative echo chamber about AI. And I get it, there are many issues I think the industry needs to address.</p><p>But there are also many, many people out there, big and small, on all sorts of projects, open and closed, that are successfully using it to do good work.</p><p>I think it's time to let go of the "all or nothing" bandwagon stances on AI. To exert control we need to condemn the actual issues; <a href="https://mastodon.derg.nz/@anthropy/114975700920581718" rel="nofollow noopener"><span>https://</span><span>mastodon.derg.nz/@anthropy/114</span><span>975700920581718</span></a></p>]]></description><link>https://board.circlewithadot.net/topic/030bad5f-0ca7-457a-a5c7-2158d171ad2e/i-think-i-m-just-going-to-say-it.</link><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 04:55:52 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/030bad5f-0ca7-457a-a5c7-2158d171ad2e.rss" rel="self" type="application/rss+xml"/><pubDate>Fri, 27 Mar 2026 09:24:53 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. 
on Mon, 13 Apr 2026 18:18:24 GMT]]></title><description><![CDATA[<p><span><a href="/user/avincentinspace%40furry.engineer">@<span>AVincentInSpace</span></a></span> the growth of datacenters seems to be slowing, and all of AI combined is absolutely nothing compared to the simple good old internet usage of datacenters, which still grows regardless of AI.</p><p>If you want regulation on where they're put, tell them that, and the lawmakers who keep approving that shit.</p><p>xAI is indeed a shit company; Google, by contrast, runs on mostly green energy, but I'm sure that's also bad somehow</p><p>Molotovs won't make them fix the chatbot <img src="https://board.circlewithadot.net/assets/plugins/nodebb-plugin-emoji/emoji/android/1f937.png?v=28325c671da" class="not-responsive emoji emoji-android emoji--shrug" style="height:23px;width:auto;vertical-align:middle" title="🤷" alt="🤷" /></p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116398771691537602</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116398771691537602</guid><dc:creator><![CDATA[anthropy@mastodon.derg.nz]]></dc:creator><pubDate>Mon, 13 Apr 2026 18:18:24 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Mon, 13 Apr 2026 18:13:27 GMT]]></title><description><![CDATA[<p><span><a href="/user/anthropy%40mastodon.derg.nz">@<span>anthropy</span></a></span> </p><p>"actually just the clothes dryers in the USA combined use more" oh good.  in that case i'm sure glad that every AI company isn't planning to build more datacenters to meet the obviously growing demand with no regard for the people who will live near them (don't worry, they only build them in rural backwaters where only poor people live).  
it was really infuriating when bitcoin miners started buying and reactivating decommissioned coal power plants to power their bitcoin farms, and it's reassuring to know AI is less bad than that.  wait, what was that about xAI and natural gas turbines?</p><p>"why not hold PEOPLE accountable?" okay. sam altman, take down your chatbot, it's making people kill themselves. i hope the third molotov thrown through your window does the job</p>]]></description><link>https://board.circlewithadot.net/post/https://furry.engineer/users/AVincentInSpace/statuses/116398752203262152</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://furry.engineer/users/AVincentInSpace/statuses/116398752203262152</guid><dc:creator><![CDATA[avincentinspace@furry.engineer]]></dc:creator><pubDate>Mon, 13 Apr 2026 18:13:27 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Mon, 13 Apr 2026 17:53:04 GMT]]></title><description><![CDATA[<p>and please, again, I'm not saying "AI good", I'm specifically asking for nuance, because neither 'AI good' nor 'AI bad' is going to tell legislators what to ban exactly, and it's also exactly why those techbros and shareholdery folk are so confused, they literally don't understand why people are upset.</p><p>And I get it, you didn't ask for AI and now you have to deal with it, but it's not going away, so it really really pays to actually articulate what you dislike about 'AI'</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116398672085614077</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116398672085614077</guid><dc:creator><![CDATA[anthropy@mastodon.derg.nz]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:53:04 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. 
on Mon, 13 Apr 2026 17:48:50 GMT]]></title><description><![CDATA[<p>"AI uses infinite power" actually just the tumble dryers in the USA combined use more, and that's truly wasted power because clothes racks use nothing. Bitcoin also uses like double.</p><p>"AI does bad things" AI isn't a person. Why not hold people accountable?</p><p>"AI uses stolen training data" Some do, yes! PLEASE say that instead of 'AI bad' because it doesn't land when you do that!</p><p>"Hardware-" <a href="https://en.wikipedia.org/wiki/DRAM_price_fixing_scandal" rel="nofollow noopener"><span>https://</span><span>en.wikipedia.org/wiki/DRAM_pri</span><span>ce_fixing_scandal</span></a></p><p>Mass hysteria removes the nuance we need to have impact.</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116398655449269822</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116398655449269822</guid><dc:creator><![CDATA[anthropy@mastodon.derg.nz]]></dc:creator><pubDate>Mon, 13 Apr 2026 17:48:50 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Fri, 10 Apr 2026 09:25:26 GMT]]></title><description><![CDATA[<p></p><div class="card col-md-9 col-lg-6 position-relative link-preview p-0">

<div class="card-body">
<h5 class="card-title">
<a href="https://hackaday.social/@hackaday/116369524820436454">
hackaday (@hackaday@hackaday.social)
</a>
</h5>
<p class="card-text line-clamp-3">AI For The Skeptics: Pick Your Reasons To Be Excited

https://hackaday.com/2026/04/08/ai-for-the-skeptics-pick-your-reasons-to-be-excited/</p>
</div>
<a href="https://hackaday.social/@hackaday/116369524820436454" class="card-footer text-body-secondary small d-flex gap-2 align-items-center lh-2">
<img src="https://hackaday.social/packs/media/icons/favicon-16x16-c58fdef40ced38d582d5b8eed9d15c5a.png" alt="favicon" class="not-responsive overflow-hiddden" style="max-width:21px;max-height:21px" />
<p class="d-inline-block text-truncate mb-0">hackaday.social <span class="text-secondary">(hackaday.social)</span></p>
</a>
</div><p></p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116379689043522493</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116379689043522493</guid><dc:creator><![CDATA[anthropy@mastodon.derg.nz]]></dc:creator><pubDate>Fri, 10 Apr 2026 09:25:26 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Fri, 27 Mar 2026 18:49:37 GMT]]></title><description><![CDATA[<p><span><a href="/user/anthropy%40mastodon.derg.nz">@<span>anthropy</span></a></span> I'm sure AI will follow the .com crash path. This means it will not go away, but the hype will drop and it will settle into the places where it works.</p>]]></description><link>https://board.circlewithadot.net/post/https://infosec.exchange/users/RueNahcMohr/statuses/116302635142620512</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://infosec.exchange/users/RueNahcMohr/statuses/116302635142620512</guid><dc:creator><![CDATA[ruenahcmohr@infosec.exchange]]></dc:creator><pubDate>Fri, 27 Mar 2026 18:49:37 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Fri, 27 Mar 2026 13:55:10 GMT]]></title><description><![CDATA[<p><span><a href="/user/anthropy%40mastodon.derg.nz">@<span>anthropy</span></a></span> Respectfully, no. I have deep ethical, moral, and environmental objections to generative AI, and even if some of those can be resolved through continued development and stricter guardrails, not all of my objections can be addressed and fixed - they're inherent to the use of the technology. 
I fully reject generative AI and see it as being of no net benefit to society, akin to NFTs and cryptocurrency - the documented harms it causes heavily outweigh the few places where it might have a novel, legitimately positive use.</p><p>Do note that I specifically said "generative AI" because unlike some, I am actually perfectly okay with traditional machine learning, which I believe does offer a net benefit for society. I can't say the same for generative slop, however, and I want absolutely nothing to do with that.</p>]]></description><link>https://board.circlewithadot.net/post/https://dragonchat.org/users/baralheia/statuses/116301477325058162</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://dragonchat.org/users/baralheia/statuses/116301477325058162</guid><dc:creator><![CDATA[baralheia@dragonchat.org]]></dc:creator><pubDate>Fri, 27 Mar 2026 13:55:10 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Fri, 27 Mar 2026 13:22:12 GMT]]></title><description><![CDATA[<p><span><a href="/user/anthropy%40mastodon.derg.nz">@<span>anthropy</span></a></span> This would be a valid stance but for the fact that the commercial models are all still built on a complete and total lack of consent. Unless this is rectified by *burning them to the ground and starting over*, I have no desire to give anyone willingly using these tools the benefit of the doubt.</p>]]></description><link>https://board.circlewithadot.net/post/https://furry.engineer/users/bersl2/statuses/116301347724810600</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://furry.engineer/users/bersl2/statuses/116301347724810600</guid><dc:creator><![CDATA[bersl2@furry.engineer]]></dc:creator><pubDate>Fri, 27 Mar 2026 13:22:12 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. 
on Fri, 27 Mar 2026 12:55:28 GMT]]></title><description><![CDATA[<p><span><a href="/user/anthropy%40mastodon.derg.nz">@<span>anthropy</span></a></span> they're doing the Brexit special: opposing AI on AI's terms</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.cysioland.pl/users/me/statuses/116301242569311822</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.cysioland.pl/users/me/statuses/116301242569311822</guid><dc:creator><![CDATA[me@mastodon.cysioland.pl]]></dc:creator><pubDate>Fri, 27 Mar 2026 12:55:28 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Fri, 27 Mar 2026 11:18:19 GMT]]></title><description><![CDATA[<p><span><a href="/user/anthropy%40mastodon.derg.nz">@<span>anthropy</span></a></span> I think it really depends; AI (not LLMs) has been used for a while and we didn't really notice, such as Speech-To-Text.</p><p>I believe the big problem is the overuse of LLMs and giving them the ability to make a final decision. LLMs are quantity over quality, and as such anything they play a larger role in is likely a product of lesser quality; this is why their being used to write vital components in important projects is worrying.</p><p>AI is inefficient by design, which is why complicated models aren't fit for this much mass-consumption or general use; it is in a lot of cases used as a (generally less reliable) replacement for efficiently coded functions.</p><p>AI at its core is just a program that creates the illusion of intelligence, but it is not intelligent, and unfortunately some seem to not understand that. 
</p><p>It's used by people who don't know or don't care what it actually is and try to make promises that don't match reality.</p>]]></description><link>https://board.circlewithadot.net/post/https://tech.lgbt/users/webbop/statuses/116300860612091196</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://tech.lgbt/users/webbop/statuses/116300860612091196</guid><dc:creator><![CDATA[webbop@tech.lgbt]]></dc:creator><pubDate>Fri, 27 Mar 2026 11:18:19 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Fri, 27 Mar 2026 10:10:30 GMT]]></title><description><![CDATA[<p><span><a href="/user/anthropy%40mastodon.derg.nz">@<span>anthropy</span></a></span> I think comparing it to hating crime might be a little bit of a false equivalence. Hating crime has some pretty nasty real-world consequences where we treat prisons like retribution instead of rehabilitation, and where certain things that are deemed crimes aren't actually criminal and were only made illegal to disenfranchise people. On the other hand, hating generative AI and LLMs is pretty victimless outside of maybe having a disproportionate negative response when someone uses them.</p><p>I do see some practical value in AI and LLMs but I also think the harms and negative impact of it massively outweigh the value they provide for the average person. People's power bills are being sharply driven upwards, its inclusion in more and more corners of our operating systems is actively making the experience worse, people are being pushed deeper into mental illnesses by interacting with it, and social media is impossible to use without being duped into thinking something generated by AI is real. It's honestly maddening.</p><p>I know your point is that AI should be more regulated so it doesn't have these problems, but I also think it's fair that people continue to hate on AI until these regulations are put in place. 
There are enough experts pushing for these regulations that I don't think the bulk of people putting their energy into hating AI instead of pushing for regulations is really detracting from the movement that much.</p>]]></description><link>https://board.circlewithadot.net/post/https://cubhub.social/users/Rusty/statuses/116300593912040435</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://cubhub.social/users/Rusty/statuses/116300593912040435</guid><dc:creator><![CDATA[rusty@cubhub.social]]></dc:creator><pubDate>Fri, 27 Mar 2026 10:10:30 GMT</pubDate></item><item><title><![CDATA[Reply to I think I&#x27;m just going to say it. on Fri, 27 Mar 2026 09:30:06 GMT]]></title><description><![CDATA[<p>and just to be clear, everyone is entitled to their own opinions on this, I'm not here to change your mind.</p><p>But if you hate and push off the subject as a whole, you also step out of the constructive debate to form and improve this tech away from a direction you don't want and towards something you do.</p><p>I've said it before and I'll say it again, hating AI is like hating crime. you're not going to lower the prevalence of crime that way. we need tangible and real solutions</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116300435079256743</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.derg.nz/users/anthropy/statuses/116300435079256743</guid><dc:creator><![CDATA[anthropy@mastodon.derg.nz]]></dc:creator><pubDate>Fri, 27 Mar 2026 09:30:06 GMT</pubDate></item></channel></rss>