Using any kind of #AI is unethical.

Uncategorized · accessibility · 20 Posts · 4 Posters

This topic has been deleted. Only users with topic management privileges can see it.
  • zersiax@cupoftea.social

    @skyfaller ok, but do you at least see how ridiculously unfair that is for the common layperson?
    For years, decades maybe even, a system has been denied to you. You may even have been fired because of it. Now, there's this unethical system you can use to make the system work for you. Getting this to work might mean you're able to stop being unemployed, it gives you back the independence you feel you were robbed of by people not caring about your situation, it empowers you to give a cheery "alright then here you go" when you, a non-coder, are told to contribute code to said system because nobody else can be fucked to do so. And then, when you finally do, you get told that "nope, sorry, only valid when you yourself wrote it by hand".

    I'm not disagreeing with you, I'm embodying a standpoint that many in the communities I have access to are currently living. From this angle, you could argue that calling LLMs unethical is incredibly ableist 🙂

    zersiax@cupoftea.social
    #11

    @skyfaller Essentially, from that viewpoint, the fact you have the choice to not use XYZ is a privilege the person you're debating against does not have, which transcends LLM use and goes into things like boycotting stores, switching away from big tech, and all sorts of other adjacent topics


      • skyfaller@jawns.club
      #12

      @zersiax It feels like you're saying that having accessibility challenges gives you a free pass on other ethical concerns. I agree that there are many situations where lack of accessibility makes it difficult to make ethical choices, but that is an extenuating circumstance or an excuse, it does not make the unethical ethical.

      If your disability gives you a pass on e.g. racism from LLMs, does the color of someone else's skin give them a pass on accessibility?


        • zersiax@cupoftea.social
        #13

        @skyfaller I wouldn't say so, but I think for a lot of people this becomes exceedingly gnarly. Ethics, particularly ethics that don't touch the person in question, are often dismissed in the moment. This is definitely not a good thing, and is actually at the foundation of why so many accessibility issues exist and persist, but I think being in that position makes it extraordinarily difficult to see beyond it.
        For the average person: they've been wronged, they can set it straight, and they're told off for doing so. I don't really think there is a right position in this situation; many would rather not kill the planet over an accessibility barrier, but then, they're really being made to feel like they don't exactly have another choice. I'll freely admit I use AI when I'm left with no other option simply because, well ... I have no other option, and that's not a standpoint to be proud of, more a really sad state of affairs


          • skyfaller@jawns.club
          #14

          @zersiax I largely agree.

          I just want to draw a line between knowingly making ethical compromises for practical reasons (I still heat my house with methane gas despite my climate activism) and saying that other ethical concerns aren't valid. I can commiserate with you on the former; nobody's perfect, everyone does unethical things sometimes. I don't have patience for the latter; I'm not letting denialism slide.

          Being forced to choose between bad options doesn't make the option you choose good.

            • zersiax@cupoftea.social

            Using any kind of #AI is unethical. Denying huge groups of people the use of applications, operating systems, websites, physical venues and events is, however, perfectly fine, because #accessibility is hard, doesn't make money, or doesn't feel fun/sexy/productive. Having those people, sick of being excluded, figure out AI workarounds because humans have failed them for decades is, again, unethical, and how dare they consider such a thing.

            I'll see myself out 🙂

            natalyad@disabled.social
            #15

            @zersiax Not all AI is the same in terms of ethics or practicality.

            One issue I am seeing is people using AI-glasses without explicitly disclosing that to others in the space (some who can't see or don't know they're being used).

            That's an issue because it can put people like immigrants or trans people at risk by placing their personal data (video and audio of them, and their location) in the hands of organisations that could do them harm, such as reporting them to ICE or outing them.


              • natalyad@disabled.social
              #16

              @zersiax I am a sighted disabled person but several blind friends have reported a trend of others in their communities recording them on the sly and without asking for consent. That I think is concerning and suggests clear disclosure rules need to be set.

              I worry about AI tools making use of my personal work when people feed it to them, for access or otherwise. I don't want my writing used to train AI; I didn't consent to that. I don't know if the AI hallucinated my writing or not.


                • natalyad@disabled.social
                #17

                @zersiax I also don't want to read AI slop; it's not nice to read and it wastes my limited time and spoons. I think if people are producing writing with AI they should at least tag it as AI-written, so I can completely ignore, skip or block it.

                So for me it's about honesty and disclosure of AI use and not putting people's private information into AI tools inappropriately. Accessibility for any of us doesn't justify non-disclosure of AI use or breaching people's privacy.


                  • zersiax@cupoftea.social
                  #18

                  @NatalyaD 100% agree. There are unfortunately always people who stop at their own convenience and forget there are other people in the world, which could almost be called ironic given that's exactly why accessibility issues tend to crop up to begin with 🙂


                    • zersiax@cupoftea.social
                    #19

                    @NatalyaD Again, 100% agree. If AI is being used for something, it should always be clearly marked, and if other people or other people's work are being acted on by AI (yes, I realize the slippery slope in that phrasing), it should always be consensual where possible.


                      • natalyad@disabled.social
                      #20

                      @zersiax Indeed. I think one concern is AI access being provided instead of inclusive-by-design or human-made access, e.g. AI alt text or audio description that could be poorly contextualised or outright hallucinated in parts. That's what scares me for my students: what they might be missing if something is AI-generated accessibility with no clear accountability or standards, or if they use tools (e.g. summarisers) which introduce errors into academic work they're liable for.

                      1 Reply Last reply
                      0
                      • pixelate@tweesecake.social shared this topic