If you replace a junior with #LLM and make the senior review output, the reviewer is now scanning for rare but catastrophic errors scattered across a much larger output surface due to LLM "productivity."
That's a cognitively brutal task.
Humans are terrible at sustained vigilance for rare events in high-volume streams. Aviation, nuclear, radiology all have extensive literature on exactly this failure mode.
I propose any productivity gains will be consumed by false negative review failures.
-
@pseudonym We are using AI in exactly the worst ways possible.
Caveat: I am a never AI-er, due to the ethical issues surrounding how training data is gathered, the severe ecological and economic impacts, and the fact that deepfakes are objectively making the world a shittier place.
But pretend for a second that none of those are a problem anymore. We are still using AI wrong. You don't have it produce a mountain of code and then have a human review it. You still use humans to produce the code, and have AI help other humans review it. AI isn't terribly good at writing code, but it has been shown to be effective at finding a few classes of bugs humans are typically very bad at finding.
But that won't allow you to fire people and replace them with monkeys on typewriters, so it'll never happen.
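A minimal, invented illustration of the kind of bug class meant above: code that looks idiomatic, so human reviewers skim past it, while an automated reviewer that has seen the pattern thousands of times flags it instantly. (All names here are hypothetical.)

```python
# Hypothetical illustration: a bug class reviewers routinely skim past
# because the code *looks* idiomatic.

def add_tag(tag, tags=[]):          # BUG: mutable default argument
    """Append a tag; the default list is shared across *all* calls."""
    tags.append(tag)
    return tags

first = add_tag("alpha")
second = add_tag("beta")            # silently reuses the same list
print(second)                       # ['alpha', 'beta'] -- not ['beta']

# The fix a reviewer (human or machine) should suggest:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag_fixed("beta"))        # ['beta']
```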
@nuintari what is AI?
Reason I ask is that for everything containing the least bit of software I can find a techbro willing to confabulate an 'ai' themed pitch deck. I'm not even kidding.
I surely hope to keep my dishwasher, if I promise not to call it 'ai' (but I'm sure someone else will)

-
@iwein Sorry, I've taken to just using the term AI when I mean LLM, even though I actually mean "Almost Incompetent," in my own head.
-
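The claim that gains get consumed by review failures can be sketched as back-of-envelope arithmetic. Every number below is invented for illustration, not a measurement; the vigilance literature only motivates the assumption that miss rates climb as the reviewed stream grows.

```python
# Back-of-envelope sketch of the review-burden argument above.
# All numbers are invented for illustration.

def expected_missed_bugs(loc_reviewed, bugs_per_kloc, miss_rate):
    """Expected count of defects that slip through review."""
    return loc_reviewed * (bugs_per_kloc / 1000) * miss_rate

# Baseline: a junior writes 2k lines; the senior, reviewing a familiar
# volume, misses 20% of the defects in it.
baseline = expected_missed_bugs(2_000, bugs_per_kloc=5, miss_rate=0.20)

# LLM "productivity": 5x the output surface, and sustained-vigilance
# research suggests the miss rate drifts up as the stream lengthens.
llm_assisted = expected_missed_bugs(10_000, bugs_per_kloc=5, miss_rate=0.35)

print(baseline)      # 2.0 missed bugs
print(llm_assisted)  # 17.5 missed bugs
```

Under these made-up assumptions, missed defects rise almost ninefold even though per-bug detection only degraded modestly; the volume term dominates.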
@pseudonym@mastodon.online
Yesterday, I was working on some PowerShell-based automation. I'm a UNIX/Linux guy. I'm used to Bash. I'm used to Python and pythonic DSLs. I'm… You get the drift. I'm not a Windows guy and I'm not a PowerShell guy.
A few days ago, I got an email from Google telling me that, because I have a storage plan (mostly for photo storage), use of Gemini was now included. So, I opted to try to use Gemini to bridge my PowerShell knowledge gaps. I came to a couple of conclusions:
• If you're a truly junior "coder" (haven't mastered at least one "language" and regularly applied that mastery to "the real world"), relying on LLMs is likely to lead you to creating smoking holes
• Those "smoking holes" are the results of the LLM sometimes providing partially or wholly incorrect answers: I've had to correct Gemini several times
• Even where "smoking holes" aren't a risk, LLMs are not adequately speculative. To illustrate, I was trying to solve a problem. Gemini suggested a given path to take. The suggested path looked more generalizable, so I asked, "I feel like there's a good chance I can do similar within this other, very analogous component. I'm going to run a test to validate." Gemini's response was effectively, "don't bother: the documentation doesn't indicate that that will work." With a couple decades' experience under my belt, I know that documentation is sometimes incomplete or wrong (out of date). So, I proceeded to test my suspicion and, lo and behold, it worked. If you're lacking "feel" for things, you'd likely take the LLM's "don't bother" guidance and go down a different path, a path that might be a lot more byzantine.
-
@pseudonym Yes. Very well put. I’m gonna use this …
-
@nuintari thanks for that

-
@pseudonym
Looks like Harvard Business Review agrees with you
AI Doesn’t Reduce Work—It Intensifies It
One of the promises of AI is that it can reduce workloads so employees can focus more on higher-value and more engaging tasks. But according to new research, AI tools don’t reduce work, they consistently intensify it: In the study, employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. That may sound like a win, but it’s not quite so simple. These changes can be unsustainable, leading to workload creep, cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems. To correct for this, companies need to adopt an “AI practice,” or a set of norms and standards around AI use that can include intentional pauses, sequencing work, and adding more human grounding.
Harvard Business Review (hbr.org)
I did not read the whole thing, but the summary says: "... AI tools don’t reduce work, they consistently intensify it ..."
-
@JizzelEtBass
Thanks
-
Yeah. Pretty sure I read that earlier and it influenced my thinking about this, leading to my post.
Thanks for the reference.
-
Same background (Unix greybeard) with a current focus on security, and your experience matched my own.
I was soaking in a lot more AI tools at my last job, and experience and insight are key.
Recently I had a system suggest multiple times to do it "the easy way" which emphatically was not how I wanted it to work. I was able to gently guide it back to what I wanted.
Letting a senior dev do the work of a senior guiding a junior is about right. But it still can't replace either.
-
@pseudonym I have posed this conundrum before and the answer I received is that there is also an opportunity cost to not moving faster and the risk of a catastrophic bug may not outweigh the risk of being overtaken by competitors, especially since that was already happening before LLMs anyway.
Also, it *seems* models are improving at detecting these bugs, so they are being used to review changes, which, for the reasons you point out, they might be better at than people.
The models may indeed get better at finding and fixing their own mistakes, and would not be subject to human fatigue, that's true. But it is never perfect, so you still need a human in the loop. You've just pushed back the time a bit before you miss a harder-to-detect error. Which is inevitable, because hallucinations / confabulations are a feature, not a bug, of how LLMs fundamentally operate.
So you make more, faster, harder to spot errors. Better LLM checkers increase the risk.
-
@pseudonym @mayintoronto … and: there will be no juniors to grow into seniors.

Yup. This is my biggest structural concern, really. But I only had 500 characters to consider the previous post, and wanted to focus on the review cost of any "gains" one might have.
There are more related topics to discuss, but the breaking of the funnel to train the next generation of skilled people is huge.
-
@pseudonym This, 100%. The Glass Cage by Nicholas Carr dives into this in depth with examples from aviation, and how full automation of flight makes it harder for pilots to recover from a disaster situation.
Thanks for the reference. Didn't know that one.
-
@xrisk @malstrom @pseudonym just for clarity, LLMs don't learn concepts
Correct. They don't learn concepts. That's the key confusion in so much of the discussion and use around them.
They have no world model, and don't reason at all. But they perform a very good facsimile of reasoning, because reasoning is embedded in and has shaped the patterns of speech, text, and code.
They pattern match. That's all. Full stop. But they do it so well it looks like speech, or code, or understanding.
-
@pseudonym This.
I do a lot of "computer science labs", where students learn to write code, and they wave me down when they have questions. When their code doesn't do what they expect, it's often easy to figure out what went wrong because you can spot a bit of code that looks funky. And usually, the problem is in those few lines.
LLM code is meant to look like good code, so you don't get these little shortcuts.
Good example I hadn't thought of.
Yes, human novice code mistakes have a "shape" to them a teacher can recognize quickly, or suspect because of how the error manifests.
These are different classes of "good looking" failures.
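A pair of invented examples of the two failure "shapes" described above: the novice bug is visibly funky on the page, while the polished-looking bug gives the reviewer's eye nothing to snag on. (Both functions are hypothetical.)

```python
# Invented examples of the two failure "shapes" discussed above.

# 1) Classic novice mistake: the code itself looks "funky", so a teacher
#    scanning the lab can spot it in seconds.
def is_weekend_novice(day):
    return day == "sat" or "sun"    # BUG: `or "sun"` is always truthy

# 2) The LLM-style failure: clean, idiomatic-looking code hiding a
#    subtle logic error at the boundary.
def in_business_hours(hour):
    """Intended: open from 9:00 up to, but not including, 17:00."""
    return 9 <= hour <= 17          # BUG: should be `< 17`

print(is_weekend_novice("mon"))     # prints 'sun' -- visibly wrong on the first test
print(in_business_hours(17))        # True -- wrong at the boundary, but nothing looks off
```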
-
@pseudonym i think it depends on the domain. like, code review is not seriously expected to catch all bugs; it's merely a step in a process. if you need absolute correctness (most don't!) then formal methods, a shockingly rare practice in the most critical industries, might be the right choice.
a stronger argument would be "the bugs are less obvious" though i think that too can be fought with observability. but that strategy only works well in application code, i.e. code which "makes money" (a notion which should be challenged, but that's another issue), rather than infra layer stuff with higher correctness needs and worse observability. and you know how the old saying goes: "if the code is good it's probably not making money". idk, people write slop where they already wrote slop due to the same pressures as before.
-
@pseudonym This was my experience from the start, and is what made me give up on LLM-assisted coding. Of course, that was before I was aware of the abhorrent externalities that came with using the slop machine...
Yup.
My thoughts aren't new.
Just felt the need to pack them up into something bite-sized.
To explain where I see one of the fundamental design failures, even granting any potential "good stuff" that may arise.
-
@adrianmorales @pseudonym Stop that, I love dark star!
-
@pseudonym and because the high volume consists of what I’ve dubbed “plausible bullshit”, reviewers will have to battle a plethora of their own biases as well.
There are fields (I’ve heard stories about protein and material design, and vulnerability discovery) where filtering the BS for real discoveries can be worth it. I’m guessing it works because there is a reality to test against.
But for the love of humanity, don’t use it for anything descriptive or abstract.
I like to say that LLMs are a great way to reduce junior development time at the cost of senior review time.