In the last week, I’ve seen an uptick in ‘AI is good for boilerplate’ posts. It is 2026. Metaprogramming is over 50 years old. Why are we writing boilerplate at all, much less creating expensive tools that let us write more of it faster?
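To make the point concrete, here is a minimal Python sketch, not taken from the thread: the standard-library @dataclass decorator is ordinary metaprogramming that generates, at class-definition time, the methods people otherwise write (or have an LLM generate) by hand.

```python
# Metaprogramming in the small: @dataclass generates __init__, __repr__,
# and __eq__ when the class is defined, so nobody (human or LLM) has to
# type them out for every record-like class.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str

u = User(1, "Ada", "ada@example.org")
print(u)                                        # User(id=1, name='Ada', email='ada@example.org')
print(u == User(1, "Ada", "ada@example.org"))   # True
```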
@david_chisnall @simon because, you see, if we had people with expertise in metaprogramming, they would demand to be paid like experts, and it's much easier to pay some opex to a third party than to have expensive humans do the work. Bonus if labour costs lessen as a result.
-
@david_chisnall because quite often the source of the boilerplate isn't mine, e.g. Terraform or fitting into an existing project. I wish my life was just the interesting parts!
-
@david_chisnall I got this argument two years ago from various colleagues, as "look, LLMs are very good at generating boilerplate for front-end tests, back-end API functions...".
my immediate reply was *why do we need that boilerplate? why don't we create an abstraction layer that does all of it and call that every time we need it?*
they usually just answered with dismissal, vaguely explaining it would be too much work - so instead of reducing complexity, we added more via "AI" agents 🫠
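A minimal sketch of the abstraction layer being suggested here, assuming an in-memory store; make_crud_routes and the resource names are invented for illustration, not from the thread:

```python
# One factory replaces N copies of near-identical CRUD handlers: write the
# pattern once, instantiate it per resource.
from typing import Any, Callable

def make_crud_routes(resource: str) -> dict[str, Callable]:
    store: dict[str, Any] = {}

    def create(obj_id: str, obj: Any) -> Any:
        store[obj_id] = obj
        return obj

    def read(obj_id: str) -> Any:
        return store.get(obj_id)

    def delete(obj_id: str) -> Any:
        return store.pop(obj_id, None)

    return {f"{op}_{resource}": fn
            for op, fn in (("create", create), ("read", read), ("delete", delete))}

# One call per resource instead of hand-written (or LLM-generated) handlers:
users = make_crud_routes("user")
users["create_user"]("u1", {"name": "Ada"})
print(users["read_user"]("u1"))  # {'name': 'Ada'}
```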
-
@pcdevil @david_chisnall In any working group there will be more stupid than sense. This only matters when stupid doesn't recognise sense, but unfortunately GenAI somehow evokes that failure in stupid. I've given up trying to explain and could spend days writing up all the reasons why using GenAI based on LLMs to create software for actual use is almost always a bad thing. But it wouldn't make any difference.
-
@markhughes @david_chisnall I agree with you, but to be fair it was around the time when people had just started using LLMs and code agents (or at least those around me had), so I was under the impression that - as with other technologies - my peers would weigh the pros & cons and my opinion would matter on the subject.
since then I've learned that's not the case, so I just quietly do my work without "AI" and smile snarkily when a stupid implementation turns out to be LLM output the author never reviewed 🫠
-
The thing is, most of these projects are using some framework, and so the number of people who need that metaprogramming expertise is quite small: it's the folks who write the framework. But very few popular frameworks seem to put much effort into this.
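A minimal sketch of what framework-side metaprogramming could look like, assuming only the Python standard library; the routes table and Model base class are illustrative inventions, not any real framework's API:

```python
# The framework author writes the wiring once: __init_subclass__ runs when
# an application class is *defined*, so endpoints are registered with zero
# per-resource boilerplate on the application side.
routes: dict[str, type] = {}

class Model:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        name = cls.__name__.lower()
        routes[f"GET /{name}s"] = cls
        routes[f"POST /{name}s"] = cls

# Application code: declaring the class is the entire integration.
class Invoice(Model):
    pass

print(routes)
# {'GET /invoices': <class '__main__.Invoice'>, 'POST /invoices': <class '__main__.Invoice'>}
```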