<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[#luddites : &quot;WAAH LLMs eat the planet with huge energy hungry Datacentres !!!!&quot;]]></title><description><![CDATA[<p><a href="https://infosec.exchange/tags/luddites" rel="tag">#<span>luddites</span></a> : "WAAH LLMs eat the planet with huge energy hungry Datacentres !!!!"</p><p><a href="https://infosec.exchange/tags/Google" rel="tag">#<span>Google</span></a> : Here is one that runs on 4 watts of power on your prayer tablet.</p><p><a href="https://infosec.exchange/tags/luddites" rel="tag">#<span>luddites</span></a> : (Waaah ambulancing intensifies)</p><p>Time to admit it was never about planet-destroying <a href="https://infosec.exchange/tags/Ai" rel="tag">#<span>Ai</span></a>, but about your reluctance to learn new shit and about hanging out with all the Kool kids, dancing around the bonfires in the woods?</p><p><a href="https://infosec.exchange/tags/gemininano" rel="tag">#<span>gemininano</span></a></p>]]></description><link>https://board.circlewithadot.net/topic/32bba244-d2a3-4a4d-9744-36dedf1c2acd/luddites-waah-llms-eat-the-planet-with-huge-energy-hungry-datacentres</link><generator>RSS for Node</generator><lastBuildDate>Fri, 15 May 2026 00:23:21 GMT</lastBuildDate><atom:link href="https://board.circlewithadot.net/topic/32bba244-d2a3-4a4d-9744-36dedf1c2acd.rss" rel="self" type="application/rss+xml"/><pubDate>Tue, 05 May 2026 17:35:41 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to #luddites : &quot;WAAH LLMs eat the planet with huge energy hungry Datacentres !!!!&quot; on Thu, 07 May 2026 08:14:50 GMT]]></title><description><![CDATA[<p><span><a href="https://troet.cafe/@bhg">@<span>bhg</span></a></span> <span><a href="/user/n_dimension%40infosec.exchange">@<span>n_dimension</span></a></span> </p><p>Once we dissect the truly colossal power 
consumption involved in training these beasts, and the staggering inefficiencies of the MAC (multiply-and-accumulate) operation, the Really Big Story is how the indexers are addressing that power consumption:</p><p>Algorithm efficiency is a big story right now: DeepSeek's V3 model reportedly cost just $5.576 million to train and used only around 2,000 chips, while competitors were using 16,000+.</p><p>As one Rhodium Group analyst put it, DeepSeek "demonstrates that training high-performance models can take far less electricity than previously thought." The catch, as some researchers note, is that cheaper training may just unleash more demand overall: Jevons paradox.</p><p></p><div class="card col-md-9 col-lg-6 position-relative link-preview p-0">

<div class="card-body">
<h5 class="card-title">
<a href="https://www.axios.com/2025/01/28/deepseek-ai-model-energy-power-demand">
DeepSeek AI model energy power demand
</a>
</h5>
<p class="card-text line-clamp-3"></p>
</div>
<a href="https://www.axios.com/2025/01/28/deepseek-ai-model-energy-power-demand" class="card-footer text-body-secondary small d-flex gap-2 align-items-center lh-2">



<img src="https://www.axios.com/favicon.ico" alt="favicon" class="not-responsive overflow-hiddden" style="max-width:21px;max-height:21px" />



<p class="d-inline-block text-truncate mb-0"> <span class="text-secondary">(www.axios.com)</span></p>
</a>
</div><p></p>]]></description><link>https://board.circlewithadot.net/post/https://beige.party/users/tuban_muzuru/statuses/116532293860616590</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://beige.party/users/tuban_muzuru/statuses/116532293860616590</guid><dc:creator><![CDATA[tuban_muzuru@beige.party]]></dc:creator><pubDate>Thu, 07 May 2026 08:14:50 GMT</pubDate></item><item><title><![CDATA[Reply to #luddites : &quot;WAAH LLMs eat the planet with huge energy hungry Datacentres !!!!&quot; on Tue, 05 May 2026 20:35:04 GMT]]></title><description><![CDATA[<p><span><a href="/user/n_dimension%40infosec.exchange">@<span>n_dimension</span></a></span> I don't get what you mean. Running the LLMs is not the issue, especially if they're lightweight, as you and I know. Of course you can run a lightweight model at a low resource cost. The training of LLMs is resource-hungry as hell. This is what a large part of the resources of datacentres is being used for.</p>]]></description><link>https://board.circlewithadot.net/post/https://troet.cafe/users/bhg/statuses/116523879932662908</link><guid isPermaLink="true">https://board.circlewithadot.net/post/https://troet.cafe/users/bhg/statuses/116523879932662908</guid><dc:creator><![CDATA[bhg@troet.cafe]]></dc:creator><pubDate>Tue, 05 May 2026 20:35:04 GMT</pubDate></item></channel></rss>