No, opposing LLMs isn't "purity culture."
-
@xgranade people should absolutely be taught all the pros and cons, but I really dislike absolutism and zealotry. It's not useful, it's not practical, it accomplishes nothing (except in the very narrow cases of civil rights or human dignity). If I wanted more ones and zeroes, I'd own more computers.
@codinghorror You seem to think that a strong position is necessarily one reached without reason or rationality? If I'm incorrect in understanding your position, please let me know, but it seems like you're conflating my having a strong view — one that I have repeatedly explained and justified on my feed — with a pseudoreligious "zealotry."
-
@codinghorror But that conflation doesn't hold in other cases. To do the physicist-coded thing of looking at the extremes to understand the bulk (I'm not that kind of doctor, but I do have a PhD in physics, it comes up in my thinking sometimes), would you similarly say that a position like "no one should ever be a Nazi or do Nazi-like things" is one of zealotry?
The truth isn't always in the middle, and assuming that it is gives bad-faith actors immense power to unduly shift narratives.
-
@codinghorror Regardless, though, I think you've badly missed the point of my thread. I'm not looking to convince you on LLMs; you've convinced me you have enough of a vested interest in their success that I recognize that's a fruitless endeavor.
But you jumped in my replies, on a thread that didn't mention or refer to you, a thread about what goes wrong with "purity culture" rhetoric, to make the only marginally related argument that a strong opposition to LLMs is necessarily one of zealotry.
-
@codinghorror To get to my original point, then, if you believe as I do that it is bad to use tools developed under eugenicist philosophies, that predominantly profit and fund fascists, that carry inordinate environmental costs, that are based on stolen labor, that act as automated scabs, and that don't work, then an opposition to those same tools is a moral position and not one of "purity culture."
-
@codinghorror I've made my arguments for each of those many times; they're beside the point here. But critically, none of the above requires me to be correct in my beliefs — only that I reached them rationally, if perhaps from incomplete or flawed data. In which case, make that argument (not to me, as noted above)! But it's intellectually dishonest to say that that opposition is "purity culture."
-
@codinghorror @xgranade See? You’re not a data point against the people pushing LLM inevitability, you’re taking a more measured approach than they are.
Those people are doing absolutely *insane* things like tracking LLM usage metrics and saying that'll be factored into performance reviews. (Like what's happening at Microsoft with Copilot.)
-
No, opposing LLMs isn't "purity culture." I've seen this now from quite a few different people, and I disagree vehemently. It is good, actually, to have moral principles and hold to them, even when people with more money than you find said principles annoying.
@xgranade moral principles are a good thing, but they have nothing to do with LLMs or purity culture.
-
@codinghorror @eschaton Hey, don't put words in my mouth, I'm not part of that "y'all." I do not agree that doing propaganda work for some of the worst people on the planet, whether intentionally or not, counts as "measured."
But that's what you're doing right now by arguing in favor of LLMs.
-
@codinghorror @eschaton Given how messy this exchange has gotten, let me pull back slightly. I made a claim: that opposition to LLMs is not an example of "purity culture."
You, despite my explicit ask not to, came into my replies to make a separate but related claim: namely, that LLMs are sometimes useful, and implicitly that that utility is sufficiently great as to justify their ethical problems.
-
@codinghorror @eschaton While I explicitly said I didn't want to get into the second point, as the Discourse has gotten *incredibly* tedious by now: fine. You seem to insist on having that discussion out in my replies anyway. To that end, I laid out several reasons I find the claim that LLMs are "just a tool" odious: their eugenicist origins, the fascist way they're funded and developed, that they attack and undermine labor, that they impose extreme environmental costs, and that they don't work.
-
@codinghorror @eschaton You've been very clear that you disagree with that latter point, and also that you expect I will find your disagreement compelling. I don't. It's an extraordinary claim that spicy autocomplete would produce the results ascribed to it, and that claim requires correspondingly extraordinary evidence. Anecdotes are a form of evidence, but without understanding the selection bias that goes into their collection, they are not on their own sufficient to support extraordinary claims.
-
@codinghorror @eschaton But fine, I believe that you disagree, as you've said earlier. I think you are very wrong on that, but I don't think either of us is budging on it right now.
Do you dispute the other points? Do you believe that there is some degree to which LLMs could, if they worked well enough, justify their usage despite those problems?
-
@codinghorror @eschaton To be clear, I don't think you owe me any answers. I'm just one woman who's been doing this shit for decades, and who knows what the fuck she's talking about, but whatever.
It's that you made the claim *to me*, and have used that claim to justify that opposition to LLMs is pseudoreligious "zealotry." But you haven't addressed any of the substance of the opposition beyond putting forward one anecdote whose veracity I can't personally evaluate.
-
@codinghorror @eschaton So far, the justification you've given for the "zealotry" comment has been almost entirely about the *shape* of the claims I made, almost without any reference to the *substance*.
This strikes me as a very strange way to approach other human beings and moral decisions in general.
Is there any strong claim that you would consider not to be "zealotry," or any degree to which a claim could be evidenced such that it would not be "zealotry" to you?
-
@codinghorror @eschaton Perhaps that's the root of our impasse, then. I fairly firmly believe that if something does that much harm to the environment, to labor movements, and to victims of fascism, it cannot be justified by appeals to its efficacy alone.
I suspect that if we cannot agree on basic moral precepts like "don't help fascists get rich" and "don't be a scab," there's probably not much hope for a favorable resolution.
-