I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.
-
The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. Which is deeply ironic because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/
@glyph You're so extremely full of yourself that I didn't even finish reading your comment, and I no longer care about anything you have to say. Go touch grass.
-
2. The filter of oppression *itself* means we only hear the accounts of people who are not only probably telling the truth in the first place but had to push through aggressive filtering to even get heard. If you hear one complaint of police violence or SA, there are probably hundreds more where that came from. That also doesn't apply. The archetypical LLM user is not silenced by oppression; they're being massively amplified by the largest propaganda apparatus on earth.
@jacob Consider another type of "lived experience" — the racist who says "DEI took my job". It would be a mistake to think that this person is *lying* about their experience — they are clearly motivated to their racism by genuine animus, and maybe they did lose their job — but their indirect, abstract experience of the nebulous entity of "DEI" is not reliable, particularly not in terms of employment statistics. So we are more skeptical in that case, and we look at the numbers.
-
@ddelemeny @glyph Yup, I read that and smirked ...
Again, "investing" in an open source tooling that will speed up your CI/CD is almost a no-brainer for an organization. They spend zero dollars and reduce costs/risks associated to the problem that the tool is designed to solve. But even then, there are security risks based on supply chain/dependencies that are often scrutinized to no end.
Investing in LLM tooling is supposedly "cheap" (due to subsidies), but the risks include vendor lock in, security vulnerabilities, and weakening worker autonomy (among others). But there seems to be zero scrutiny in spite of that.
@ddelemeny @glyph Early on at my current job, I built a tool that I thought was very useful and mentioned that I would like to open source it...
I was ultimately shut down. In the interest of "intellectual property" and other sorts of red tape... And I didn't really feel like fighting it.
So, I couldn't share my tool with the commons, but there are absolutely no qualms about feeding my code to a company that WE PAY, so they can ingest it and charge others for benefitting off of it? ...
Sigh...
-
@jacob Perhaps "dismiss" wasn't the best word choice there, but that's why I included "even if the LLM user is yourself". I dismiss _my own_ experience of LLMs, _as evidence of their quantitative efficacy_. As evidence of their subjective experience, of course it is valid. If it didn't produce the intense subjective experience then there wouldn't be a problem!
There are two reasons that activism teaches us to believe people's lived experience, and neither apply here: …
@glyph You’ve reasoned yourself into a position where anyone who says anything contrary to you is either delusional or lying. You might be right — I don’t think you are but who knows maybe — but even so, that’s just not a position I’m willing to take about anything ever.
-
@jacob Consider another type of "lived experience" — the racist who says "DEI took my job". It would be a mistake to think that this person is *lying* about their experience — they are clearly motivated to their racism by genuine animus, and maybe they did lose their job — but their indirect, abstract experience of the nebulous entity of "DEI" is not reliable, particularly not in terms of employment statistics. So we are more skeptical in that case, and we look at the numbers.
@glyph Honestly? The left would be in a better place if we didn’t instantly dismiss that person but actually explored that feeling and engaged with him. “You’re wrong” may be true, and feels good to say, but “what makes you feel that way?” is a much better opening if you want to win people over to your side.
-
@glyph You’ve reasoned yourself into a position where anyone who says anything contrary to you is either delusional or lying. You might be right — I don’t think you are but who knows maybe — but even so, that’s just not a position I’m willing to take about anything ever.
@jacob No. I do not think it is a "delusion" to have an inaccurate quantitative understanding of a subjective experience. As the thread explains, I personally have that experience every single day. And I certainly do not believe that anyone saying "anything contrary to me" is doing that. What I am saying is that *one specific type of experience* — the feeling of LLMs positively impacting productivity — is poor evidence of *one specific type of claim*.
-
@jacob No. I do not think it is a "delusion" to have an inaccurate quantitative understanding of a subjective experience. As the thread explains, I personally have that experience every single day. And I certainly do not believe that anyone saying "anything contrary to me" is doing that. What I am saying is that *one specific type of experience* — the feeling of LLMs positively impacting productivity — is poor evidence of *one specific type of claim*.
@glyph Ok but that’s a much more narrow version of what you said. You said you must dismiss ALL experiences. If you want to argue specifically about the productivity claims, fine, whatever; I still think you’re wrong, but not in a way that matters to me. It’s a narrow thing and I’m only going off vibes anyway. It’s specifically the “all” I’m objecting to.
-
@glyph Honestly? The left would be in a better place if we didn’t instantly dismiss that person but actually explored that feeling and engaged with him. “You’re wrong” may be true, and feels good to say, but “what makes you feel that way?” is a much better opening if you want to win people over to your side.
@jacob @glyph I have to say that's only true for folks who are capable of reason. I mean, it's possible that everyone is, but I have some uncles who will say something of the form:
"Because all X are Y, because Z said so"
And regardless of any further lines of inquiry, it's always, always dismissed with "... Z said so!"
-
@glyph Ok but that’s a much more narrow version of what you said. You said you must dismiss ALL experiences. If you want to argue specifically about the productivity claims, fine, whatever; I still think you’re wrong, but not in a way that matters to me. It’s a narrow thing and I’m only going off vibes anyway. It’s specifically the “all” I’m objecting to.
@jacob I am frustrated that you read it that way, but perhaps it's my fault. I thought the meaning of "all" was obvious in context but it's the reader who gets to decide the meaning. I guess I will see if I can edit this to remove that ambiguity.
And I guess to be fair even this qualification is maybe a *little* narrower than what I meant, because I also mean things like the subjective impression of LLM factual accuracy or output quality, not *just and only* productivity.
-
@jacob @glyph I have to say that's only true for folks who are capable of reason. I mean, it's possible that everyone is, but I have some uncles who will say something of the form:
"Because all X are Y, because Z said so"
And regardless of any further lines of inquiry, it's always, always dismissed with "... Z said so!"
-
I'm open to a future where we do some research and figure out the limits of how AI influence works, and where the safety valves are, and also the extent to which it's *fine* that AI can influence our views because honestly many different kinds of stimuli can influence our views, not least of which is each other. But it sure looks right now like it has a bunch of very dangerous feedback loops built-in, and it's not clear how to know if you're touching one.
@glyph this is my constant. AI is not inherently bad.
We, as a society, have simply started with the Evil dialed up to 11 and are furiously cranking with all our might for a 12.
-
@jacob I am frustrated that you read it that way, but perhaps it's my fault. I thought the meaning of "all" was obvious in context but it's the reader who gets to decide the meaning. I guess I will see if I can edit this to remove that ambiguity.
And I guess to be fair even this qualification is maybe a *little* narrower than what I meant, because I also mean things like the subjective impression of LLM factual accuracy or output quality, not *just and only* productivity.
@jacob I've changed it as best I can, to really focus in on "LLM use" rather than "LLM users" and on the subjective experience / objective phenomena distinction.
-
@jacob @glyph I'll agree with that.
But I'm also *tired* of 30 year old repetitions of the same bigotry from people who ostensibly should know better. People who have proven the ability to gain skills and knowledge and move successfully throughout life... And yet still choose to ignore their bias that has been put on display like pearls before swine.
-
@jacob @ketmorco yeah one of the reasons I eventually took your note and made the edit was that I don't want to be classifying a person as an "LLM user" and then casting them as transcendentally incapable of reason as a result. Classifying people as capable/incapable of reason by the type of person that they are is probably the most dangerous kind of cognitive habit.
-
@froztbyte @glyph maybe “AI mediated cognitive change”, subtypes “AI mediated cognitive enhancement”, “AI mediated cognitive decline”, and “AI mediated cognitive distortion”?
-
@jacob @glyph I'll agree with that.
But I'm also *tired* of 30 year old repetitions of the same bigotry from people who ostensibly should know better. People who have proven the ability to gain skills and knowledge and move successfully throughout life... And yet still choose to ignore their bias that has been put on display like pearls before swine.
-
@glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not quite right tone, it's very hard to dismiss it. The cognitive load is consequently enormous.
@janeishly @glyph I have found this exact thing in code reviews - my company enabled automatic AI code reviews, and the cognitive load of the automated comments was *enormous*. It often correctly flagged something to pay attention to, but the suggested solution was always incorrect - and ignoring / discarding it was hugely expensive mentally.
I finally managed to get it changed to "opt in" rather than automatic, but wow the whole experience felt like a tarpit for thinking.
-
@jacob@social.jacobian.org @glyph@mastodon.social I think I'm currently at a point in my journey where I try very hard to believe people when they talk about what they have experienced internally, and have become increasingly sceptical of people's ability to judge accurately what actually happened and the results (in both cases for pretty much the same reasons as Glyph, since I've noticed the difference between my #adhd internal experience and what actually happened in the real world).
So "using an LLM made me feel a god-like developer!" I'll completely take as your experience. "My productivity went up by 15 times after I started using agents" (actual claim I have seen) will leave me asking for hard evidence and possibly a scientific study.
It's awkward that we use 'experience' to cover both, and I had the same reaction you're expressing when I read that section, but assuming (from the context) that Glyph means the second type of experience, I think he has a strong argument, if not the clearest wording.
-
Now, for rhetorical effect, I'm obviously putting this fairly dramatically. Cory points out that people have been doing this *to each other* mediated by technology, in emergent and scary ways, with no need for AI. He shows that people prone to specific types of delusions (Morgellons, Gang Stalking Disorder) have found each other via the Internet and the simple availability of global distributed communication has harmed them. But obviously that has benefits, too.
@glyph Comparing how people influence each other and how LLM usage influences people is a point I find interesting.
A bunch of people get influenced in a bunch of different directions by a bunch of different people. Everybody gets influenced in mostly the same direction by the tool in the hands of ghoulish billionaires.
Sure, influencing is something we do to each other all the time. But is it really the same?
-
@janeishly @glyph I have found this exact thing in code reviews - my company enabled automatic AI code reviews, and the cognitive load of the automated comments was *enormous*. It often correctly flagged something to pay attention to, but the suggested solution was always incorrect - and ignoring / discarding it was hugely expensive mentally.
I finally managed to get it changed to "opt in" rather than automatic, but wow the whole experience felt like a tarpit for thinking.
@bluewinds @janeishly I don't know that I trust that subjective feeling of disgust either, even though it's definitely how I feel — a kind of aesthetic revulsion, which might be indicative of something real or might be another weird side-effect of these tools that interacts with a certain neurotype in a certain way. Definitely worth the precaution of turning it off though, and it does seem more aligned with the evidence we have at the moment.