One thing that continues to grate on my conscience about #AI is how artists and writers consistently feel that the technology has STOLEN from them.

22 Posts, 12 Posters
This topic has been deleted. Only users with topic management privileges can see it.
  • brib@bribstodon.xyz

    @drahardja @peter My take (as a creator who generally values free culture) is that it's theft in the context of plagiarism. I have very mixed feelings about copyright as an institution and am generally happy for other humans to use and remix my work -- I think this kind of remixing is a big part of culture. But when AI remixes stuff, nobody knows that I wrote the piece, and if AI wrote something important because of me, it's not going to tell the user to go talk to me if they like it, it's going to take the credit for itself, and users are going to credit AI for the awesome work. That's why it rubs against me so hard: I lose any visibility I get as a creator, which not only translates to lost economic opportunities, but a loss of a big part of the social value of creating something (that is: to connect with other humans).

    There's also an element of consent (or lack of it). I never consented for my work to get scraped and regurgitated, it just happened because techbros felt entitled to my shit. So it feels incredibly icky on that front too.

    Of course, this is my own personal take, I cannot speak for all artists and creators on this matter.

    • brib@bribstodon.xyz (#7)

    @drahardja @peter It is of note that nearly every free-culture license requires attribution, even the permissive licenses. AI doesn't honour this at all. At best, it cites some webpages if there's a search engine bolted on, but for images, code and music that simply does not happen.

  • drahardja@sfba.social

      @peter I think everyone in the industry understands that what AI companies are doing FEELS LIKE STEALING, so much so that Cloudflare makes a product that allows site owners to PREVENT scraping by AI-training endpoints.

      Yet I think we all struggle with defining WHY it feels like stealing, and what set of rules or social contract we can put in place to DEFINE why it is theft.

      • peter@toot.cafe (#8)

      @drahardja I think the key difference vs search engines is there's some mutual benefit there - they give you traffic in exchange for serving ads around links to your content.

      With AI there's no mutual benefit for the creators. In fact, it's worse than that - it's also threatening the creators' livelihoods.

      Obligatory "I'm not a lawyer though" - so I'm not sure how best to translate this into law. Consent feels like a good starting point though. For a start, robots.txt rules must be enforced.
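For context on the mechanism referenced above: robots.txt is purely advisory today, which is why "must be enforced" is the operative phrase. A sketch of the kind of opt-out a site can already publish, using the user-agent tokens the major AI vendors document (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl) — nothing currently compels a crawler to honour it:

```
# robots.txt — a *request* that AI-training crawlers skip this site
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Ordinary search indexing remains allowed
User-agent: *
Allow: /
```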

    • drahardja@sfba.social

        One thing that continues to grate on my conscience about #AI is how artists and writers consistently feel that the technology has STOLEN from them. We all know that web scraping is (and should be) a perfectly legal and acceptable use, because preventing it also prevents all sorts of beneficial behaviors—the Internet Archive wouldn’t be able to exist, for one thing.

        But yet, the very nature of AI takes scraped content and regurgitates it as a pink-slime extrusion that it feeds back into the web. And to creators, that just FEELS WRONG; it feels like stolen valor, it feels like exploitation.

        And it’s something I can’t (and shouldn’t) shake from my mind each time I see something made by AI. Just because something is LEGAL doesn’t mean it isn’t ABUSIVE and UNETHICAL. Scolding people who complain about AI by telling them that web scraping is good, actually, doesn’t address the main complaint: that somehow, these AI assholes have EXPLOITED A COMMON GOOD and we can’t quite figure out how to stop it.

        • wyatt_h_knott@mstdn.social (#9)

        @drahardja Keep in mind that one of the reasons "scraping" by archive.org is ok, is because not only is a record of all media important, but it's important that it be, somehow, a publicly accessible and RELIABLE archive *without it being a for-profit venture* BECAUSE not only did we MAKE it, but all of us grew up on it. Think of the ridiculousness of copyrighting "Happy Birthday" and you'll be on the right track. Human culture IS the record, and we all deserve access to it.

          • jrdepriest@infosec.exchange (#10)

          @brib @drahardja @peter

          Due to how LLMs process the slurped information, they can't provide accurate attribution.

            • drahardja@sfba.social (#11)

            @wyatt_h_knott Thought experiment: Would you be ethically OK with ChatGPT if OpenAI had remained a nonprofit, and didn’t take huge amounts of VC money? Or is there something more fundamental about how the models were created and deployed that would still not make it OK?

              • drahardja@sfba.social

              Many people I know have taken to large-scale AI-assisted coding with no qualms, because the tech can be useful in many cases, especially when the users are already software experts. But just because a technology is USEFUL doesn’t mean it is ETHICAL to use, and it’s impossible to see AI output without also seeing the masses of creators whose works have been scraped, many of whom feel like they’ve been used and exploited.

              If ever there was a time to resist this technology, it’s now. We are at an inflection point, and we can either jump in headlong and profit from AI as it stands today, or we can help put the brakes on, slow things down, and take the time to work out how (or IF it’s possible) to use this technology ethically.

              Listen to the creators. They almost universally feel exploited by AI. Try to figure out why that is, and why our norms don’t account for that.

              • drahardja@sfba.social (#12)

              One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are now at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What are some strategies that employees have to make ethical problems more salient to the discussion?

              In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.


                • mook@possum.city (#13)

                @drahardja@sfba.social speaking of the ethical issues: Claude, for example, has been used to commit war crimes in Gaza and Iran. These tools are deeply embedded in the American state and military-industrial complex; that massacre of school girls was an AI decision

                  • drahardja@sfba.social (#14)

                  @mook Yes and it’s impossible to use one part of the product without also supporting the rest.


                    • johnzajac@dice.camp (#15)

                    @drahardja

                    Many in the West suffer from "everything is like everything else" syndrome, where the Internet Archive's obviously benevolent and beneficial use of scraping is *precisely the same* as planet-destroying garbage-peddling LLM grifters' scraping of the internet.

                    This is exactly the same phenomenon that leads people to say that extremist Leftists and extremist Fascists are "the same"; it doesn't matter that one wants people to have free healthcare and housing and food and the other wants...


                      • johnzajac@dice.camp (#16)

                      @drahardja

                      ...to murder all trans people, Black people, immigrants, queer people and leftists while implementing a tyrannical government while stealing literally everything from everyone else.

                      Under this bizarre Duality fallacy we've all been acculturated to, the "two sides" have to be "the same", because symmetry amirite? And that's the extent of the argument lol lmao

                      Like, as I have to say almost every day: not everything is the same as everything else. FFS.


                        • johnzajac@dice.camp (#17)

                        @drahardja

                        Another great example is when artists create something inspired by other artists or that quotes or references other artists.

                        This is *not the same* as a voracious, unethical, and monstrous technology regurgitating an artist's work slightly changed, without discourse or critique.

                        Like, transparently not the same.

                        And yet - it's LLM-pilled fuckfaces' fav argument.


                          • ryanboswell@sfba.social (#18)

                          @drahardja I’ve been living this particular scenario for a while now. I’m a manager of a team with varied seniority, and with a 9 year tenure at my employer, a fairly visible and recognized colleague.

                          I focus on being a visible critic of the tools and an example that you can still do your job, and do it well, without them. I've told my team that I won't require them, and while I won't stop them from using LLMs, I obviously won't actively assist them either. And I seek out others in similar positions and collaborate with them on how to challenge the adoption.

                          And I take every chance I have to ask critical but real questions of leadership about their intentions and strategy. I honestly have found very little success with an ethics-focused approach; the pushback I get is "ethics are not a factor in our fiduciary responsibility to stakeholders, earning more profit is, and LLMs help that." That saddens me, but it is likely the standard for any company leadership that is already adopting LLMs at any level.


                            • krans@mastodon.me.uk (#19)

                            @drahardja I have simply been pointing out that it doesn't help with anything that's actually a bottleneck. Our problem is never that we can't produce code fast enough -- we need to do more *engineering*, not programming, to solve customer issues, and engineering is an intrinsically social activity.

                              • drahardja@sfba.social (#20)

                              @tiotasram While I’m a supporter of unions, I don’t think it should be a prerequisite. And neither should being anti-AI be a prerequisite for joining a union.

                                • toddthomas@mastodon.social (#21)

                                @brib @drahardja @peter this is my rebuttal to the people who tell me I’ve “always been a prompt engineer because web search, so just embrace the new better search!” Not only is that a massive overstatement in my case, as I prefer learning by finding and reading good books, but when I need search I still prefer web indexes for the essential context provided by attribution which LLMs are incapable of retaining. Take me to the original source and *its* references!


                                  • tony@toot.hoyle.me.uk (#22)

                                  @drahardja Pointing out that it's f..ing awful at coding seems to work.

                                  At our place the legal dept have rung alarm bells at the possibility of our source code being used to train an LLM and be spat out almost unchanged to a competitor, so they've declared a moratorium on AI use except certain 'approved' ones (which will almost certainly end up being only Copilot, because we're an MS shop).

                                  • relay@relay.publicsquare.global shared this topic