<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[People aren&#x27;t building &quot;AI&quot; *tools* for Wikipedians, they&#x27;re building &quot;AI&quot; weapons to deliberately injure the maintenance of human knowledge.]]></title><description><![CDATA[<p class="quote-inline">RE: <a href="https://social.coop/@luis_in_brief/116388921310943162" rel="nofollow noopener"><span>https://</span><span>social.coop/@luis_in_brief/116</span><span>388921310943162</span></a></p><p>People aren't building "AI" *tools* for Wikipedians, they're building "AI" weapons to deliberately injure the maintenance of human knowledge.</p><p>That nuance is extremely important because something that's genuinely a tool must be genuinely useful in some way. Knives, lighters, and cars get to be tools, because for all the immense harm that can be done with them if they're misused... There are actual, genuine use cases for them.</p><p>That absolutely doesn't apply to "Gen AI," and I will die on that hill.</p>]]></description><link>https://board.circlewithadot.net/topic/2b2a8682-3418-43d2-8d5c-7e897599f286/people-aren-t-building-ai-tools-for-wikipedians-they-re-building-ai-weapons-to-deliberately-injure-the-maintenance-of-human-knowledge.</link><generator>RSS for Node</generator><lastBuildDate>Thu, 30 Apr 2026 17:40:36 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/2b2a8682-3418-43d2-8d5c-7e897599f286.rss" rel="self" type="application/rss+xml"/><pubDate>Sun, 12 Apr 2026 07:48:13 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to People aren&#x27;t building &quot;AI&quot; *tools* for Wikipedians, they&#x27;re building &quot;AI&quot; weapons to deliberately injure the maintenance of human knowledge. 
on Sun, 12 Apr 2026 14:51:03 GMT]]></title><description><![CDATA[<p><span><a href="/user/kimcrawley%40zeroes.ca">@<span>kimcrawley</span></a></span> Agree with this. GenAI is incapable of critical evaluation; it is capable only of statistical correlation. It can't tell if a translation preserves meaning; it has no concept of meaning. It can't tell if a statement is true; it has no concept of truth.</p><p>GenAI can do one thing: output. Wikipedia faces many challenges, and "not enough text" has never been one of them. The vision of generating text, and therefore inaccuracies, at scale for editors to fix is utterly demented.</p>]]></description><link>https://board.circlewithadot.net/post/https://mastodon.social/ap/users/115884962248073052/statuses/116392294080371491</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://mastodon.social/ap/users/115884962248073052/statuses/116392294080371491</guid><dc:creator><![CDATA[eriksonarias@mastodon.social]]></dc:creator><pubDate>Sun, 12 Apr 2026 14:51:03 GMT</pubDate></item><item><title><![CDATA[Reply to People aren&#x27;t building &quot;AI&quot; *tools* for Wikipedians, they&#x27;re building &quot;AI&quot; weapons to deliberately injure the maintenance of human knowledge. 
on Sun, 12 Apr 2026 13:54:29 GMT]]></title><description><![CDATA[<p><span><a href="/user/evoscale%40c.im">@<span>EvoScale</span></a></span> <br />Please check out <a href="https://stopgenai.com" rel="nofollow noopener"><span>https://</span><span>stopgenai.com</span><span></span></a></p>]]></description><link>https://board.circlewithadot.net/post/https://zeroes.ca/users/kimcrawley/statuses/116392071659194576</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://zeroes.ca/users/kimcrawley/statuses/116392071659194576</guid><dc:creator><![CDATA[kimcrawley@zeroes.ca]]></dc:creator><pubDate>Sun, 12 Apr 2026 13:54:29 GMT</pubDate></item><item><title><![CDATA[Reply to People aren&#x27;t building &quot;AI&quot; *tools* for Wikipedians, they&#x27;re building &quot;AI&quot; weapons to deliberately injure the maintenance of human knowledge. on Sun, 12 Apr 2026 12:23:24 GMT]]></title><description><![CDATA[<p><span><a href="/user/kimcrawley%40zeroes.ca">@<span>kimcrawley</span></a></span> 'Taking candy from a baby'...</p><p>It's no mystery that abuse of power often relies upon the masses being effectively 'hypnotized' by awe-inspiring unknowns, resulting in obedient subservience. Proffering AI, as a prioritized global investment, as a miraculous cure for inadequately exercised emotions and for proficiencies that diminishing educational systems have failed to develop, calls for further investigation - especially where <a href="https://c.im/tags/TechBros" rel="tag">#<span>TechBros</span></a> are working to corner the vital resource of fuel (too much of which comes from burning fossil fuels) in the form of <a href="https://c.im/tags/DataCenters" rel="tag">#<span>DataCenters</span></a>. 
</p><p>It's continued <a href="https://c.im/tags/AbuseOfPower" rel="tag">#<span>AbuseOfPower</span></a> for the wealthiest 1% (more like .01% now) to further enforce subservience, in the name of <a href="https://c.im/tags/Capitalism" rel="tag">#<span>Capitalism</span></a>.</p>]]></description><link>https://board.circlewithadot.net/post/https://c.im/ap/users/115691859295411580/statuses/116391713488487148</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://c.im/ap/users/115691859295411580/statuses/116391713488487148</guid><dc:creator><![CDATA[evoscale@c.im]]></dc:creator><pubDate>Sun, 12 Apr 2026 12:23:24 GMT</pubDate></item></channel></rss>