@zersiax multiple things can be unethical at the same time. I'm tired of takes where people point out one unethical activity to suggest that we should also accept another unethical activity. Seems like whataboutism to me: https://en.wikipedia.org/wiki/Whataboutism
I've seen a lot of whataboutism lately, especially around LLMs. "Oh, the environmental impact of LLMs is bad, but people still fly and airplanes also have an environmental impact. Therefore we should all just give up on climate change."
@zersiax Or to put it another way, what if we worked on solidarity between different movements instead of discounting other people's issues? Do you think you can solve accessibility without involving people whose primary concern is not accessibility?
-
@zersiax So how many nasty DMs do you expect to receive for this from people who don't grasp sarcasm?
@dhamlinmusic at least 20
-
@skyfaller I don't necessarily agree; this seems to be more insidious. E.g., not writing alt text because it trains LLMs, or not allowing accessibility contributions into open-source codebases, no matter their quality, because an LLM touched them. Which is particularly funny, because a lot of people ASKING for accessibility generally get sent off with a "just send a PR lol" more often than not. There are quite a number of moving parts here
-
@zersiax Sounds about right
-
@zersiax I 100% agree on the "not writing alt texts because it trains LLMs" bullshit, that's extremely ableist. Anything we publish can be scraped to train LLMs, alt text is not special in that regard.
But not allowing LLM contributions is in fact the ethical thing to do for a very large number of reasons that won't fit in a toot. And this is exactly the "fuck other people's issues" attitude that is toxic to solidarity and will lead to other movements saying, "well, fuck your movement too."
-
@anantagd "So shut up." Is this the response you want from other people when you raise your issue? If not, then why would you deliver this response to other people when they raise their issues? Isn't this belittling me just a little bit?
I actually have spent a lot of time arguing for / working on alt text and other accessibility measures, I'm not an enemy of accessibility, or your personal enemy unless you're determined that I must be.
Also I'm not gonna shut up just because you told me to.
-
@skyfaller ok, but do you at least see how ridiculously unfair that is for the common layperson?
For years, decades maybe even, a system has been denied to you. You may even have been fired because of it. Now there's this unethical system you can use to make the system work for you. Getting it to work might mean you're able to stop being unemployed; it gives you back the independence you feel you were robbed of by people not caring about your situation; it empowers you to give a cheery "alright then, here you go" when you, a non-coder, are told to contribute code to said system because nobody else can be fucked to do so. And then, when you finally do, you get told "nope, sorry, only valid when you wrote it by hand yourself."
I'm not disagreeing with you, I'm embodying a standpoint that many in the communities I have access to are currently living. From this angle, you could argue that calling LLMs unethical is incredibly ableist

-
@skyfaller Essentially, from that viewpoint, the fact you have the choice to not use XYZ is a privilege the person you're debating against does not have, which transcends LLM use and goes into things like boycotting stores, switching away from big tech, and all sorts of other adjacent topics
-
@zersiax It feels like you're saying that having accessibility challenges gives you a free pass on other ethical concerns. I agree that there are many situations where lack of accessibility makes it difficult to make ethical choices, but that is an extenuating circumstance or an excuse; it does not make the unethical ethical.
If your disability gives you a pass on e.g. racism from LLMs, does the color of someone else's skin give them a pass on accessibility?
-
@skyfaller I wouldn't say so, but I think for a lot of people this becomes exceedingly gnarly. Ethics, particularly ethics that don't touch the person in question, are often dismissed in the moment. That is definitely not a good thing, and it is actually at the foundation of why so many accessibility issues exist and persist, but I think being in that position makes it extraordinarily difficult to see things any other way.
For the average person, they've been wronged, they can set it straight, and they are told off for doing so. I don't really think there is a right position in this situation; many would rather not kill the planet over an accessibility barrier, but then, they're really being made to feel like they don't exactly have another choice. I'll freely admit I use AI when I'm left with no other option simply because, well ... I have no other option, and that's not a standpoint to be proud of, more a really sad state of affairs
-
@zersiax I largely agree.
I just want to draw a line between knowingly making ethical compromises for practical reasons (I still heat my house with methane gas despite my climate activism) and saying that other ethical concerns aren't valid. I can commiserate with you on the former, nobody's perfect, everyone does unethical things sometimes. I don't have patience for the latter, I'm not letting denialism slide.
Being forced to choose between bad options doesn't make the option you choose good.
-
Using any kind of #AI is unethical. Denying huge groups of people the use of applications, operating systems, websites, physical venues and events is, however, perfectly fine, because #accessibility is hard, doesn't make money, or doesn't feel fun/sexy/productive. Having those people, sick of being excluded, figure out AI workarounds because humans have failed them for decades is, again, unethical, and how dare they even consider such a thing.
I'll see myself out

@zersiax Not all AI is the same in terms of ethics or practicality.
One issue I am seeing is people using AI glasses without explicitly disclosing that to others in the space (some of whom can't see the glasses or don't know they're being used).
That's an issue because it can put people like immigrants or trans people at risk by placing their personal data (video and audio of them, and their location) in the hands of organisations that could do them harm, such as reporting them to ICE or outing them.
-
@zersiax I am a sighted disabled person, but several blind friends have reported a trend of others in their communities recording them on the sly, without asking for consent. That, I think, is concerning and suggests clear disclosure rules need to be set.
I worry about the use of my personal work by AI tools that are given it by people using them for access or otherwise. I don't want my writing used to train AI; I didn't consent to that. And I don't know whether the AI has hallucinated my writing or not.
-
@zersiax I also don't want to read AI slop; it's not nice to read and it wastes my limited time and spoons. I think if people are producing writing with AI they should at least tag it as AI-written, so I can completely ignore, skip or block it.
So for me it's about honesty and disclosure of AI use, and about not putting people's private information into AI tools inappropriately. Accessibility for any of us doesn't justify non-disclosure of AI use or breaching people's privacy.
-
@NatalyaD 100% agree. There are unfortunately always people who stop at their own convenience and forget there are other people in the world, which could almost be called ironic, given that's exactly why accessibility issues tend to crop up to begin with

-
@NatalyaD Again, 100% agree. If AI is being used for something, it should always be clearly marked, and if other people or other people's work are being acted on by AI (yes, I realize the slippery slope in that phrasing), it should always be consensual where possible.
-
@zersiax Indeed. I think one concern is AI access being provided instead of inclusive-by-design or human access, e.g. AI alt text or audio description which could be non-contextualised or outright hallucinated in parts. That's what scares me for my students: what they might be missing if something is AI-provided accessibility with no clear accountability or standards, or if they use tools, e.g. summarisers, which cause errors in their academic work that they're liable for.