@bhg @n_dimension Once we dissect the truly colossal power consumption involved in training these beasts, and the staggering inefficiency of the MAC (multiply-accumulate) operation at the heart of it, the really big story is how the industry is addressing that power consumption.

Algorithmic efficiency is a big story right now: DeepSeek's V3 model reportedly cost just $5.576 million to train and used only around 2,000 chips, where competitors were using 16,000+. As one Rhodium Group analyst put it, DeepSeek "demonstrates that training high-performance models can take far less electricity than previously thought." The catch, as some researchers note, is that cheaper training may just unleash more demand overall (Jevons paradox).

https://www.axios.com/2025/01/28/deepseek-ai-model-energy-power-demand
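For anyone unfamiliar with the term: a MAC is just a multiply followed by an addition into a running total, and model training and inference is dominated by billions of them. A minimal sketch (illustrative only, not any particular framework's implementation):

```python
# A dot product is nothing but N multiply-accumulate (MAC) operations.
def dot(a, b):
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y  # one MAC: multiply, then accumulate
    return acc

# Example: a hypothetical dense layer mapping 4096 inputs to 4096
# outputs performs 4096 * 4096 = 16,777,216 MACs per token, per layer,
# which is why MAC energy cost dominates training power draw.
macs_per_layer = 4096 * 4096

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
print(macs_per_layer)  # 16777216
```

The 4096-wide layer is an assumed size for illustration; real model dimensions vary, but the quadratic MAC count is the point.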