@glyph Ok, but that's a much narrower version of what you said. You said you must dismiss ALL experiences. If you want to argue specifically about the productivity claims, fine, whatever; I still think you're wrong, but not in a way that matters to me, since it's a narrow thing and I'm only going off vibes anyway. It's specifically the "all" I'm objecting to.
@jacob I am frustrated that you read it that way, but perhaps it's my fault. I thought the meaning of "all" was obvious in context but it's the reader who gets to decide the meaning. I guess I will see if I can edit this to remove that ambiguity.
And I guess to be fair even this qualification is maybe a *little* narrower than what I meant, because I also mean things like the subjective impression of LLM factual accuracy or output quality, not *just and only* productivity.
-
@jacob @glyph I have to say that's only true for folks who are capable of reason. I mean, it's possible that everyone is, but I have some uncles who will do something of the form:
"Because all X are Y, because Z said so"
And regardless of any further lines of inquiry, it's always, always dismissed with "... Z said so!"
-
I'm open to a future where we do some research and figure out the limits of how AI influence works, and where the safety valves are, and also the extent to which it's *fine* that AI can influence our views because honestly many different kinds of stimuli can influence our views, not least of which is each other. But it sure looks right now like it has a bunch of very dangerous feedback loops built-in, and it's not clear how to know if you're touching one.
@glyph this is my constant refrain. AI is not inherently bad.
We, as society, have simply started with the Evil dialed up to 11 and are furiously cranking with all our might for a 12.
-
@jacob I've changed it as best I can, to really focus in on "LLM use" rather than "LLM users" and subjective experience / objective phenomena distinction.
-
@jacob @glyph I'll agree with that.
But I'm also *tired* of 30-year-old repetitions of the same bigotry from people who ostensibly should know better. People who have proven the ability to gain skills and knowledge and move successfully throughout life... and yet still choose to ignore their bias that has been put on display like pearls before swine.
-
@jacob @ketmorco yeah one of the reasons I eventually took your note and made the edit was that I don't want to be classifying a person as an "LLM user" and then casting them as transcendentally incapable of reason as a result. Classifying people as capable/incapable of reason by the type of person that they are is probably the most dangerous kind of cognitive habit.
-
@froztbyte @glyph maybe “AI mediated cognitive change”, subtypes “AI mediated cognitive enhancement”, “AI mediated cognitive decline”, and “AI mediated cognitive distortion”?
-
-
@glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not-quite-right tone, it's very hard to dismiss it. The cognitive load is consequently enormous.
@janeishly @glyph I have found this exact thing in code reviews - my company enabled automatic AI code reviews, and the cognitive load of the automated comments was *enormous*. It often correctly flagged something to pay attention to, but the suggested solution was always incorrect - and ignoring / discarding it was hugely expensive mentally.
I finally managed to get it changed to "opt in" rather than automatic, but wow, the whole experience felt like a tarpit for thinking.
-
@jacob@social.jacobian.org @glyph@mastodon.social I think I'm currently at a point in my journey where I try very hard to believe people when they talk about what they have experienced internally, but have become increasingly sceptical of people's ability to judge accurately what actually happened and the results (in both cases for pretty much the same reasons as Glyph, since I've noticed the difference between my #adhd internal experience and what actually happened in the real world).
So "using an LLM made me feel like a god-like developer!" I'll completely take as your experience. "My productivity went up by 15 times after I started using agents" (an actual claim I have seen) will leave me asking for hard evidence and possibly a scientific study.
It's awkward that we use 'experience' to cover both, and I had the same reaction you're expressing when I read that section but assuming (from the context) that Glyph means the second type of experience I think he has a strong argument, if not the clearest wording.
-
Now, for rhetorical effect, I'm obviously putting this fairly dramatically. Cory points out that people have been doing this *to each other* mediated by technology, in emergent and scary ways, with no need for AI. He shows that people prone to specific types of delusions (Morgellons, Gang Stalking Disorder) have found each other via the Internet and the simple availability of global distributed communication has harmed them. But obviously that has benefits, too.
@glyph Comparing how people influence each other and how LLM usage influences people is a point I find interesting.
A bunch of people get influenced in a bunch of different directions by a bunch of different people. Everybody gets influenced in mostly the same direction by the tool in the hands of ghoulish billionaires.
Sure, influencing is something we do to each other all the time. But is it really the same?
-
@bluewinds @janeishly I don't know that I trust that subjective feeling of disgust either, even though it's definitely how I feel — a kind of aesthetic revulsion, which might be indicative of something real or might be another weird side-effect of these tools that interacts with a certain neurotype in a certain way. Definitely worth the precaution of turning it off though, and it does seem more aligned with the evidence we have at the moment.
-
@Moutmout oh absolutely not, for a whole host of reasons. But being influenced by a highly concentrated online community of the most extreme delusions that internet technology allows you to distill to peak concentration, to the exclusion of all other voices in your life, is also not "the same thing" as just sitting around with a diverse group of friends you know from school.
-
@glyph@mastodon.social @jacob@social.jacobian.org It certainly reads more clearly to me now.
-
RE: https://mamot.fr/@pluralistic/116219642373307943
I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me and especially not in my mouth
@glyph this thread feels important
-
@glyph i don't know if it's the best analogy at the end of the day, but my brain keeps going to lead pipes and asbestos. if we're not sure it's safe, should we be in such a hurry to put it in everything?
-
@glyph @janeishly Oh, I'm a "certain neurotype," for sure.
I can report with objective certainty, though, that it was a net drain for my company - because I'm the most senior developer at the company, and making me unhappy with my job cost them several days' worth of lost productivity.
Was it "because the technology sucks" or was it "because BlueWinds hates it irrationally"? Either way, it cost the company thousands in wages (me not doing anything, out of demotivation and revulsion at the thought of reviewing PRs).
-
@mason because medical practitioners were hard to convince of the impact. And they still don't do it as much as you think.
Science vs human belief: the belief usually wins.
-
@jacob I've changed it as best I can, to really focus in on "LLM use" rather than "LLM users" and subjective experience / objective phenomena distinction.
@glyph @jacob FWIW, in close relationships, I often end up in difficult situations because I communicate my opinions with such assuredness that the listener/receiver gets the sense that I am mocking or devaluing their opposing point of view.
(This dynamic has existed in my marriage for over 10 years, and it still creates friction, even though we are both aware of it!)
I don't like having to include qualifiers or disclaimers in things I say, as I think it is implicit that if I believe a certain thing--there is a reason I believe it--and if I am in discussion with you, and we disagree, I want to understand what evidence there exists to prove my understanding wrong. Is it a subjective experience? Is it evidence based on how something is recollected, or on some other 3rd party authority? Etc...
1/2