No, opposing LLMs isn't "purity culture."
-
@codinghorror @xgranade See? You’re not a data point against the people pushing LLM inevitability, you’re taking a more measured approach than they are.
Those people are doing absolutely *insane* things like tracking LLM usage metrics and saying that'll be factored into performance reviews. (As is happening at Microsoft with Copilot.)
-
No, opposing LLMs isn't "purity culture." I've seen this now from quite a few different people, and I disagree vehemently. It is good, actually, to have moral principles and hold to them, even when people with more money than you find said principles annoying.
-
@xgranade Moral principles are a good thing, but they have nothing to do with LLMs or purity culture.
-
@codinghorror @eschaton Hey, don't put words in my mouth, I'm not part of that "y'all." I do not agree that doing propaganda work for some of the worst people on the planet, whether intentionally or not, counts as "measured."
But that's what you're doing right now by arguing in favor of LLMs.
-
@codinghorror @eschaton Given how messy this exchange has gotten, let me pull back slightly. I made a claim: that opposition to LLMs is not an example of "purity culture."
You, despite my explicit request not to, came into my replies to make a separate but related claim: namely, that LLMs are sometimes useful, and implicitly that that utility is great enough to justify their ethical problems.
-
@codinghorror @eschaton While I explicitly said I wasn't getting into that second point, the Discourse has gotten *incredibly* tedious by now, so fine. You seem to insist on having that discussion in my replies anyway. To that end, I laid out several reasons that I find the claim that LLMs are "just a tool" odious: their eugenicist origins, the fascist way they're funded and developed, that they attack and undermine labor, that they impose extreme environmental costs, and that they don't work.
-
@codinghorror @eschaton You've been very clear that you disagree with that last point, and also that you expect I will find your disagreement compelling. I don't. It's an extraordinary claim that spicy autocomplete would produce the results ascribed to it, and that claim requires correspondingly extraordinary evidence. Anecdotes are a form of evidence, but without understanding the selection bias that goes into collecting them, they are not, on their own, sufficient to support extraordinary claims.
-
@codinghorror @eschaton But fine, you disagree; I believe you, as you've said earlier. I think you are very wrong on that, but I don't think either of us is budging right now.
Do you refute or disagree with the other points? Do you believe there is some degree to which LLMs, if they worked well enough, could justify their use despite those problems?
-
@codinghorror @eschaton To be clear, I don't think you owe me any answers. I'm just one woman who's been doing this shit for decades, and who knows what the fuck she's talking about, but whatever.
It's that you made the claim *to me*, and have used that claim to argue that opposition to LLMs is pseudoreligious "zealotry." But you haven't addressed any of the substance of the opposition beyond putting forward one anecdote whose veracity I can't personally evaluate.
-
@codinghorror @eschaton So far, the justification you've given for the "zealotry" comment has been almost entirely about the *shape* of the claims I made, with almost no reference to their *substance*.
This strikes me as a very strange way to approach other human beings and moral decisions in general.
Is there any strong claim that you would consider not to be "zealotry," or any degree of evidence for a claim that would make it not "zealotry" to you?
-
@codinghorror @eschaton Perhaps that's the root of our impasse, then. I fairly firmly believe that if something does that much harm to the environment, to labor movements, and to victims of fascism, it cannot be justified by appeals to its efficacy alone.
I suspect that if we cannot agree on basic moral precepts like "don't help fascists get rich" and "don't be a scab," there's probably not much hope for a favorable resolution.
