Weeknotes 2026-W09
+ A week at home...
+ Experiments with agentic coding
+ Reading corner: Elder Race
-
@jonmsterling Elder Race was great! It also reminded me of Rocannon's World and Planet of Exile. I'm reading Children of Time now.
-
I would really like to have help for things that are routine when programming or proving.
But do we really need an expensive farm of GPUs, with an even more expensive amount of energy, doing something blindly based on probabilities gathered from training data, to get this done?
For example, one thing I would like is a tool that can read my TypeTopology codebase and answer some more or less trivial questions that are very time-consuming for me to answer myself.
Surely, a system such as the one you describe can do it. But do we really need such a brute-force approach to that?
I thought the purpose of computer science was to get things done cleverly, and not by brute force.
Ironically, brute force is now called AI.
But it seems we are going back to brute force, if only because there is enough money now to get this done that way (and not for solving other more fundamental problems humanity has).
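One non-AI sketch of the kind of routine query Martin describes — e.g. "where is this identifier given a type signature in the codebase?" — could be a short script rather than a model. This is a hypothetical illustration, not an existing TypeTopology tool; the definition heuristic (identifier at column 0 followed by `:`) is an assumption about Agda layout conventions:

```python
import re
from pathlib import Path

def find_definitions(root: str, name: str) -> list[tuple[str, int, str]]:
    """Scan .agda files under `root` for top-level type signatures of `name`.

    A line counts as a definition site if it starts (at column 0) with the
    identifier followed by ':'. Returns (relative path, 1-based line number,
    line text) triples.
    """
    pattern = re.compile(rf"^{re.escape(name)}\s*:")
    hits = []
    for path in sorted(Path(root).rglob("*.agda")):
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
            if pattern.match(line):
                hits.append((str(path.relative_to(root)), lineno, line.rstrip()))
    return hits
```

A grep-class script like this answers "where is X?" in milliseconds on a laptop, which is Martin's point: some routine questions need no inference at all.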
-
@MartinEscardo @jonmsterling I think this might have an implicit bias due to novelty.
The same arguments apply to programming languages, web browsing, operating systems. We could always spend more effort to remove generic platforms and write everything by hand. But this is quite expensive.
The point of a large swathe of computer science is not just algorithms, but how we build platforms that can automate repetitive work.

I think the impression of the environmental costs is greatly skewed by the shortages of hardware. You can run models on a home GPU or a modern CPU, and it does not take long at all for these tasks. So compared to a few minutes of any other application, or streaming a second or two of video, it's really quite minimal.
-
@jac @MartinEscardo I think it is different; I agree with Martin. The actual energy cost of inference is not high, but the cost of training is astronomical, and the models get thrown away every several months.
-
@jonmsterling Time to blame the so-called invisible hand.

-
@jonmsterling But, even if you disregard the energy costs, this seems to me like the most inefficient way of getting things done.
Can I just get things done on my own computer, rather than on a farm of who knows how many computers?
I am absolutely sure this is possible.
-
@MartinEscardo @jonmsterling Indeed you can! I know there are minor caveats, but a lot of modern CPUs and devices have NPUs that are optimized for running neural nets on very little energy. And it does not take much intelligence at all to generate small scripts and searches, although we sometimes need better harnesses for small language models.
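The "harness" jac mentions can be as thin as a validate-and-retry loop around the model call. A minimal sketch, where `ask_model` is a hypothetical stand-in for whatever local runtime you invoke (llama.cpp, an NPU runtime, etc.):

```python
import json
from typing import Callable

def json_harness(ask_model: Callable[[str], str], prompt: str, retries: int = 3) -> dict:
    """Ask a (small, local) model for JSON and retry until the reply parses.

    `ask_model` is a hypothetical stand-in: any function mapping a prompt
    string to the model's text reply. Small models often emit almost-valid
    output, so cheap validation plus a corrective re-prompt goes a long way.
    """
    request = prompt + "\nReply with a single JSON object only."
    for _ in range(retries):
        reply = ask_model(request)
        try:
            result = json.loads(reply)
            if isinstance(result, dict):
                return result
        except json.JSONDecodeError:
            pass
        # Feed the bad reply back so the model can correct itself next round.
        request = (f"{prompt}\nYour previous reply was not valid JSON:\n"
                   f"{reply}\nReply with a single JSON object only.")
    raise ValueError("model never produced valid JSON")
```

The validation here (does it parse as a JSON object?) is deliberately cheap and deterministic, which is what makes small, occasionally sloppy models usable for script- and search-generation tasks.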
-
@jac @MartinEscardo Also, we can print the models onto silicon. The problem is, I think, that the models are basically disposable. Once the money runs out, that may change. I can only hope.
-
@MartinEscardo @jonmsterling @jac
Regarding energy cost, the best new approach to addressing that issue is probably thermodynamic computing. I think if the claims by the startup extropic.ai are true, then that's the future.

I'm thinking perhaps in the future there will be changes to hardware that incorporate these new architectures specifically for AI, so that alongside the CPU and GPU we hopefully get a "TSU" (Thermodynamic Sampling Unit) for probabilistic computing.
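For context on what such a sampling unit would accelerate: the thermodynamic-computing pitch is about drawing samples from energy-based distributions natively in hardware, which in software corresponds to something like Gibbs sampling. A minimal illustrative sketch on a tiny 1D Ising chain — purely an analogy in software, not Extropic's actual hardware or API:

```python
import math
import random

def gibbs_ising_chain(n: int, coupling: float, beta: float, steps: int, seed: int = 0) -> list[int]:
    """Gibbs-sample a 1D Ising chain of n spins (each +1 or -1).

    Energy: E(s) = -coupling * sum_i s[i] * s[i+1].
    Each sweep resamples every spin from its conditional distribution
    given its neighbours -- the per-unit sampling primitive a
    "thermodynamic sampling unit" would perform physically.
    """
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            # Local field from the (open-boundary) neighbours of spin i.
            field = coupling * ((spins[i - 1] if i > 0 else 0) +
                                (spins[i + 1] if i < n - 1 else 0))
            # P(s_i = +1 | neighbours) for the Boltzmann distribution at
            # inverse temperature beta.
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
            spins[i] = 1 if rng.random() < p_up else -1
    return spins
```

At low temperature (large `beta`) with ferromagnetic coupling, the chain settles into mostly aligned spins; the inner loop is exactly the repetitive, embarrassingly local work one would want to push off the CPU.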