Armin was once one of the most prolific programmers in Python. Says he never writes code anymore. Seeing more and more people like him write stuff like this on what are supposedly computer programming forums. https://lobste.rs/s/qmjejh/ai_is_slowly_munching_away_my_passion#c_jcgdju
Notably, once a person crosses this threshold, I see them still hang out on programming forums, but they never talk about any of the puzzles of programming anymore. Only about running agents. Which feels strange and sad. Why hang out on the forums at all then?
@cwebber yeah, even without my and many others’ objections to LLMs, it’s depressing to read about someone essentially giving up a skill.
-
Very nice! I have watched experienced devs have to work at this too. They often lean towards overcomplicating things because they want to avoid hardcoding the patterns. But this then leads to a nice little discussion.
-
@cwebber Also, don't use it for "summarize" because it literally can't do that.
When ChatGPT summarises, it actually does nothing of the kind.
One of the use cases I thought it was reasonable to expect from ChatGPT and friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn't summarising at all; it only looks like it. What it does is something else, and that something else only becomes summarising in very specific circumstances.
R&A IT Strategy & Architecture (ea.rna.nl)
@jwcph @cwebber Also see “ChatGPT trust is risky, as a recent study by the European Broadcasting Union (EBU) shows. The association of 68 public broadcasters from 56 countries systematically tested the reliability of the most popular AI systems. The alarming result: ChatGPT, Gemini, and other chatbots invent up to 40 percent of their answers and present them as facts.”
EBU – European Broadcasting Union (2025) News Integrity in AI Assistants. An international PSM study, https://www.ebu.ch/Report/MIS-BBC/NI_AI_2025.pdf
-
@wordshaper @cwebber I don't think you appreciate just how many man-years go into writing production-level code. My productivity has tripled, but it takes weeks to get a prototype in front of 100k+ users. It's not like we're going to release clawd and watch the world burn
-
I have my fifth graders write a program that will convert decimal numbers to Roman numerals. They know that there are already webpages that do this, with small, smart programs that always give the right answer. They know they could ask an LLM and probably get the right answers most of the time.
They still want to solve the puzzle.
"It works! It works!"
I love hearing that when I'm teaching.
I feel sad that some people don't get that "it works! it works!" feeling anymore. That's depressing. Honestly, what's even the point of going on?
-
Feeling FOMO about AI? Well here's my advice!
Stay on top of what's happening. Which doesn't really require *using* the tools. Just see what people are doing.
Whether or not you do use it, stay a practitioner. And don't fall for the FOMO.
Your career won't end because you're not making the choice to use AI. (If your employer makes you use it, that's another thing.)
If you use AI, use it for "summarize and explore" tasks. DO NOT use it for *generate* tasks. That's a different thing.
If you want to differentiate yourself, *learning skills* is the differentiation space right now.
These things are easy to pick up. You can do it whenever. But keep learning.
If you see generated examples, don't paste or accept them. Type them in by hand! The hands-on imperative: actually trying things is what makes core ideas congeal.
And if it doesn't help your career... well, your consolation prize is: you'll stay interesting.
@cwebber so, just do the thing, using AI as a tutor/supporter, not as a slave that does everything for you. This way we learn other things much faster.
This is my experience.
-
@cwebber @jalefkowit in Armin's case specifically, a not-insubstantial part of the answer seems to be sneering at people who don't use "AI" (including here on Mastodon)
That's not a very charitable read, but I have run out of charity for the way he has performed his enthusiasm to the community
I'm seeing the same thing in some of the Python spaces I inhabit. The users who go all-in on it stop talking about programming.
@SnoopJ @cwebber @jalefkowit I wouldn't mind so much if they just stopped talking about programming, it is the downstream detritus of their activities that makes ME stop talking about programming that pisses me off
-
@wordshaper @cwebber every line of code is a liability. it's funny that suddenly "lines of code generated" is a metric and they're all smiling, proud.
meanwhile... some AWS agent decided to rewrite half the code base on its own and deploy it to production which took down some important AWS services.
we'll just keep generating more, faster. tech debt creation at scale.
@agentultra @wordshaper @cwebber in my experience there has always been a faction of software engineers who think LOC is a valid metric
A peer of mine once said "You're going there? Sure, then. Give me a couple days and I'll unroll all my for loops," but nobody considers that.
-
Steve Klabnik also had an interview on lobste.rs. There's a lot in it! It's a cool read! https://alexalejandre.com/programming/steve-klabnik-interview/
And then it gets to the AI part and he's just like "oh I don't write code anymore".
And notably Steve Klabnik has a lot to say about code, but it's *all in the past*.
Lots of brilliant people are becoming non-practitioners.
@cwebber@social.coop Eh. At least he's honest?
A lot of people would present code generated by AI as "I made it".
-
Also, I think using hosted models is strictly unethical for surveillance and energy usage reasons.
It *is* true that there are models you can run locally that are much, much more efficient, and I suspect the energy costs on training them can be dramatically reduced.
I don't use either presently, but using a local model to help you navigate a codebase (as opposed to generating code) is a very different thing, I think. But it's also not what most people are doing!
And hosted AI models, as I said, I think are fully objectionable from an ethics perspective.
Datacenters are an antipattern in the general case. AI datacenters, triply so.
@cwebber my impression was that small datacenters are probably better for the environment than local computing because they’re more efficient, although i could see there being economic and political downsides. but i haven’t researched the topic deeply.
perhaps this is a theory and practice thing—in our political and economic reality i could see there being a lot of issues that real-world data centers have that a hypothetical one designed for environmental and human welfare wouldn’t.
-
@cwebber I have used llms for generation when it's something I should remember how to do but don't. Like I don't remember the exact name of the method I want or the order of the arguments.
It produced code that I was able to understand, since I knew in general what I wanted to do, and fixing the parts it got wrong was faster than writing the whole thing from scratch.
-
@cwebber Some of the big names in the rust community are very bullish on AI. This has actually changed how I feel about rust.
-
@cwebber It is sad and I wonder why it is not more widely recognized in free software circles as going fundamentally against what brought us here: hacking the good hack and sharing knowledge.
-
@cwebber i don't know that i'd trust these models for summarization or navigation. even when the outputs are technically correct, they can leave out certain information or frame the information in a misleading way, papering over whatever makes the code unique and materially suited for the task at hand
-
I don't really get how one could use an LLM to help with coding without reading the code?
That's baffling. But I don't make apps; I teach young people to think and solve problems. So maybe that's why I don't get it.
@futurebird @cwebber Interesting thought actually. Why bother having an LLM generate source code or a script you cannot read? Instruct the LLM to generate the app in low level machine instructions, produce an executable, and skip all the overhead.
-
A solution that 5th graders can complete
elegant? eh
print("Roman Numerals")
ones = ["", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX"]
tens = ["", "X", "XX", "XXX", "XL", "L", "LX", "LXX", "LXXX", "XC"]
hundreds = ["", "C", "CC", "CCC", "CD", "D", "DC", "DCC", "DCCC", "CM"]
thousands = ["", "M", "MM", "MMM"]
n = input("enter a number 1 to 3999")
n = int(n)
m = n // 1000
n = n - m * 1000
h = n // 100
n = n - h * 100
t = n // 10
n = n - t * 10
print(thousands[m] + hundreds[h] + tens[t] + ones[n])
@futurebird @faassen @cwebber Hmm, I see how this has some interesting advantages over using to/from Hex as the challenge. Cool!
-
@cwebber I think I'm comfortable waiting til the economics sorts itself out (and fortunate to work a software engineering job where at the moment they don't really care which tools I use). Like, if it turns out Anthropic is making a profit off of their $20/mo plan and it is genuinely making developers 50% more productive then I get it. But, at the same time, it could absolutely turn out that I'd have to pay $500/mo to be 10% more effective and at that point I won't really care to jump on that.
Similarly, last week I was in a meeting for an hour to discuss the impacts of changing one line of code, so while there are parts of my job that are coding-heavy maybe my "software engineering" role as a whole isn't limited by how fast I can read/write code and I doubt an LLM would help me out in that situation.
@tom @cwebber > so while there are parts of my job that are coding-heavy maybe my "software engineering" role as a whole isn't limited by how fast I can read/write code and I doubt an LLM would help me out in that situation.
Cosign. On some days, a good part of my day job is stuff that looks like yak shaving to developers in general, and ultra-trivial shit to techbros, but is actually quite meaningful to the rest of the business:
"Hey, we noticed that the `fimbledonker` field in the data stream for Project Gribbleblot is showing `Team PDQ` as the primary fimbledonker about 5% of the time. We need to make this be `Team WTF` instead. Could you please make sure the business logic is correct?"
and this is a one-liner (or an update to a template buried somewhere last seen 3 projects, 4 project managers, and 5 company reorgs ago).
Fixing this sort of thing is not something generative "AI" speeds up meaningfully. What speeds things up meaningfully is better process to define requirements and expectations, and better communication between interested parties.
-
@cwebber what worries me too is how many of these people, who are on their own brilliant programmers, spend most of their time writing programs for companies that could more or less be copy-pasted from elsewhere - thus being perfect for AI generation.
In other words - it worries me that people's skillsets are dwindling just because of the job they work at. Something they did as a way to support themselves now jeopardizes the truly brilliant work they may have actually been doing, or the times at their job where something truly challenging did come up.
AI still can't really do anything of use for me; the actual cost of correctness is so high for what I do that I don't see a future where it could ever work for me. I'm so infinitely glad for this: I can just point to fact and say "this tool can't work for me". I feel terrible for people who can't do that, especially when their skill sets are so much more than what their jobs enable them to do.
-
@cwebber (this is actually my main concern about llms. i think people really underestimate how much llms reproduce the values and expectations in their corpus, their reinforcement learning tasks, their explicit engineering, and their product design. and they underestimate the effects that this will have on their understanding of code and the horizon of what's possible to do with code)
-
It took me a long time to find a programming puzzle at the right level for 5th grade. Many things that might seem simple are too complex.
Making the Roman numeral converter they learn about indexes and lists, place value, and modular division.
It's really math, and logic. Working out how to present the question made *me* smarter since I had to think about the problem in a new way that avoided aspects of coding that were ... technical without really teaching much.
Related problem: translating from digits to spelled-out words -- e.g. "sixty-five thousand, five hundred and thirty-five" -- not sure if that's easier or harder than Roman numerals, but it's definitely both related and different ^.^
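For what it's worth, the digits-to-words problem can use the same list-indexing trick as the Roman numeral converter, with one extra wrinkle: the teens don't follow the tens pattern, so you need a helper for two-digit chunks. A minimal sketch (the function names, range limit of 99,999, and comma-joined output style are my own choices, not from the thread):

```python
# Sketch: number -> English words, using list indexing like the Roman numeral converter.
ones = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
        "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
        "seventeen", "eighteen", "nineteen"]
tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

def two_digits(n):
    # Handles 0..99; the teens (10..19) are irregular, which Roman numerals avoid.
    if n < 20:
        return ones[n]
    word = tens[n // 10]
    if n % 10:
        word += "-" + ones[n % 10]
    return word

def spell(n):
    # Handles 1..99999: peel off thousands, then hundreds, then the remainder,
    # exactly like the place-value splitting in the Roman numeral program.
    parts = []
    if n >= 1000:
        parts.append(two_digits(n // 1000) + " thousand")
        n = n % 1000
    if n >= 100:
        parts.append(ones[n // 100] + " hundred")
        n = n % 100
    if n:
        parts.append(two_digits(n))
    return ", ".join(parts)

print(spell(65535))  # -> sixty-five thousand, five hundred, thirty-five
```

The irregular teens are probably the pedagogically interesting difference: the Roman numeral version is pure table lookup per place value, while this one needs a special case, which might make it the harder of the two for 5th graders.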