I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.
-
@glyph “You must dismiss all experiences of LLM users”
This is where you lose me. There’s no universe in which I’m comfortable dismissing the lived experiences of people that categorically. The most important lesson I’ve learned from decades of activism is “believe people when they tell you about their experiences” — and I see no reason to change now. I’m not willing to give up my curiosity and empathy and I hope you aren’t either.
@jacob Perhaps "dismiss" wasn't the best word choice there, but that's why I included "even if the LLM user is yourself". I dismiss _my own_ experience of LLMs, _as evidence of their quantitative efficacy_. As evidence of their subjective experience, of course it is valid. If it didn't produce the intense subjective experience then there wouldn't be a problem!
There are two reasons that activism teaches us to believe people's lived experience, and neither apply here: …
-
1. The distance between an account of an experience of oppression and the actual event of the oppression is very short. "That man assaulted me" / "that cop beat me". The only way to think that people saying these things are not relaying true information is to believe that they are intentionally lying for personal gain, which just isn't true. (And that's not what I believe about LLM users.)
-
2. The filter of oppression *itself* means we only hear the accounts of people who are not only probably telling the truth in the first place but had to push through aggressive filtering to even get heard. If you hear one complaint of police violence or SA there's probably hundreds more where that came from. That also doesn't apply. The archetypical LLM user is not silenced by oppression, they're being massively amplified by the largest propaganda apparatus on earth.
-
@pythonbynight Cory's bit on the byzantine premium echoed your thread in some way. "All this money can't be for nothing, all these people can't be so irrational, there has to be something under that pile of crap."
@ddelemeny @glyph Yup, I read that and smirked ...
Again, "investing" in open source tooling that will speed up your CI/CD is almost a no-brainer for an organization. They spend zero dollars and reduce the costs/risks associated with the problem the tool is designed to solve. But even then, there are security risks based on supply chain/dependencies that are often scrutinized to no end.
Investing in LLM tooling is supposedly "cheap" (due to subsidies), but the risks include vendor lock-in, security vulnerabilities, and weakening worker autonomy (among others). But there seems to be zero scrutiny in spite of that.
-
The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong: a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. Which is deeply ironic, because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI" tools influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/
@glyph You're so extremely full of yourself that I didn't even finish reading your comment, and I no longer care about anything you have to say. Go touch grass.
-
@jacob Consider another type of "lived experience" — the racist who says "DEI took my job". It would be a mistake to think that this person is *lying* about their experience — they are clearly motivated to their racism by genuine animus, and maybe they did lose their job — but their indirect, abstract experience of the nebulous entity of "DEI" is not reliable, particularly not in terms of employment statistics. So we are more skeptical in that case, and we look at the numbers.
-
@ddelemeny @glyph Early on at my current job, I built a tool that I thought was very useful and mentioned that I would like to open source it...
I was ultimately shut down. In the interest of "intellectual property" and other sorts of red tape... And I didn't really feel like fighting it.
So, I couldn't share my tool with the commons, but there are absolutely no qualms about feeding my code to a company that WE PAY, so they can ingest it and charge others for benefitting off of it? ...
Sigh...
-
@glyph You’ve reasoned yourself into a position where anyone who says anything contrary to you is either delusional or lying. You might be right — I don’t think you are but who knows maybe — but even so, that’s just not a position I’m willing to take about anything ever.
-
@glyph Honestly? The left would be in a better place if we didn’t instantly dismiss that person but actually explored that feeling and engaged with him. “You’re wrong” may be true, and feels good to say, but “what makes you feel that way?” is a much better opening if you want to win people over to your side.
-
@jacob No. I do not think it is a "delusion" to have an inaccurate quantitative understanding of a subjective experience. As the thread explains, I personally have that experience every single day. And I certainly do not believe that anyone saying "anything contrary to me" is doing that. What I am saying is that *one specific type of experience* — the feeling of LLMs positively impacting productivity — is poor evidence of *one specific type of claim*.
-
@glyph Ok but that’s a much more narrow version of what you said. You said you must dismiss ALL experiences. If you want to argue specifically about the productivity claims fine whatever I still think you’re wrong but not in a way that matters to me, it’s a narrow thing and I’m only going off vibes anyway. It’s specifically the “all” I’m objecting to.
-
@jacob @glyph I have to say that's only true for folks who are capable of reason. I mean, it's possible that everyone is, but I have some uncles who will do something of the form:
"Because all X are Y, because Z said so"
And regardless of any further lines of inquiry, it's always, always dismissed with "... Z said so!"
-
@jacob I am frustrated that you read it that way, but perhaps it's my fault. I thought the meaning of "all" was obvious in context but it's the reader who gets to decide the meaning. I guess I will see if I can edit this to remove that ambiguity.
And I guess to be fair even this qualification is maybe a *little* narrower than what I meant, because I also mean things like the subjective impression of LLM factual accuracy or output quality, not *just and only* productivity.
-
I'm open to a future where we do some research and figure out the limits of how AI influence works, and where the safety valves are, and also the extent to which it's *fine* that AI can influence our views because honestly many different kinds of stimuli can influence our views, not least of which is each other. But it sure looks right now like it has a bunch of very dangerous feedback loops built-in, and it's not clear how to know if you're touching one.
@glyph this is my constant. AI is not inherently bad.
We, as society, have simply started with the Evil dialed up to 11 and are furiously cranking with all our might for a 12.
-
@jacob I've changed it as best I can, to really focus in on "LLM use" rather than "LLM users", and on the subjective experience / objective phenomena distinction.
-
@jacob @glyph I'll agree with that.
But I'm also *tired* of 30 year old repetitions of the same bigotry from people who ostensibly should know better. People who have proven the ability to gain skills and knowledge and move successfully throughout life... And yet still choose to ignore their bias that has been put on display like pearls before swine.
-
@jacob @ketmorco yeah one of the reasons I eventually took your note and made the edit was that I don't want to be classifying a person as an "LLM user" and then casting them as transcendentally incapable of reason as a result. Classifying people as capable/incapable of reason by the type of person that they are is probably the most dangerous kind of cognitive habit.
-
@froztbyte @glyph maybe “AI mediated cognitive change”, subtypes “AI mediated cognitive enhancement”, “AI mediated cognitive decline”, and “AI mediated cognitive distortion”?