One thing that continues to grate on my conscience about #AI is how artists and writers consistently feel that the technology has STOLEN from them.
-
@drahardja Yep and also - it's not just creatives who use the word "steal" in relation to AI training on their content. Eric Schmidt said the same in a behind-closed-doors lecture:
"Schmidt told the students to go ahead and download whatever they need... If the product takes off, “then you hire a whole bunch of lawyers to go clean the mess up,” he said. “If nobody uses your product, then it doesn’t matter that you stole all the content.”"
The Hypocrisy at the Heart of the AI Industry
Tech companies believe in intellectual property, but not yours.
The Atlantic (www.theatlantic.com)
@peter I think everyone in the industry understands that what AI companies are doing FEELS LIKE STEALING, so much so that Cloudflare makes a product that allows site owners to PREVENT scraping by AI-training crawlers.
Yet I think we all struggle with defining WHY it feels like stealing, and what set of rules or social contract we can put in place to DEFINE why it is theft.
-
One thing that continues to grate on my conscience about #AI is how artists and writers consistently feel that the technology has STOLEN from them. We all know that web scraping is (and should be) a perfectly legal and acceptable use, because preventing it also prevents all sorts of beneficial behaviors—the Internet Archive wouldn’t be able to exist, for one thing.
And yet, the very nature of AI takes scraped content and regurgitates it as a pink-slime extrusion that it feeds back into the web. And to creators, that just FEELS WRONG; it feels like stolen valor, it feels like exploitation.
And it’s something I can’t (and shouldn’t) shake from my mind each time I see something made by AI. Just because something is LEGAL doesn’t mean it isn’t ABUSIVE and UNETHICAL. Scolding people who complain about AI by telling them that web scraping is good, actually, doesn’t address the main complaint: that somehow, these AI assholes have EXPLOITED A COMMON GOOD and we can’t quite figure out how to stop it.
Please check out https://stopgenai.com
-
@peter I think everyone in the industry understands that what AI companies are doing FEELS LIKE STEALING, so much so that Cloudflare makes a product that allows site owners to PREVENT scraping by AI-training crawlers.
Yet I think we all struggle with defining WHY it feels like stealing, and what set of rules or social contract we can put in place to DEFINE why it is theft.
@drahardja @peter My take (as a creator who generally values free culture) is that it's theft in the sense of plagiarism. I have very mixed feelings about copyright as an institution and am generally happy for other humans to use and remix my work -- I think this kind of remixing is a big part of culture. But when AI remixes stuff, nobody knows that I wrote the piece, and if AI wrote something important because of me, it's not going to tell the user to go talk to me if they like it; it's going to take the credit for itself, and users are going to credit AI for the awesome work. That's why it grates on me so much: I lose any visibility I get as a creator, which translates not only to lost economic opportunities but also to the loss of a big part of the social value of creating something (that is: to connect with other humans).
There's also an element of consent (or lack of it). I never consented for my work to get scraped and regurgitated; it just happened because techbros felt entitled to my shit. So it feels incredibly icky on that front too.
Of course, this is my own personal take, I cannot speak for all artists and creators on this matter.
-
@drahardja @peter My take (as a creator who generally values free culture) is that it's theft in the sense of plagiarism. I have very mixed feelings about copyright as an institution and am generally happy for other humans to use and remix my work -- I think this kind of remixing is a big part of culture. But when AI remixes stuff, nobody knows that I wrote the piece, and if AI wrote something important because of me, it's not going to tell the user to go talk to me if they like it; it's going to take the credit for itself, and users are going to credit AI for the awesome work. That's why it grates on me so much: I lose any visibility I get as a creator, which translates not only to lost economic opportunities but also to the loss of a big part of the social value of creating something (that is: to connect with other humans).
There's also an element of consent (or lack of it). I never consented for my work to get scraped and regurgitated; it just happened because techbros felt entitled to my shit. So it feels incredibly icky on that front too.
Of course, this is my own personal take, I cannot speak for all artists and creators on this matter.
@drahardja @peter It is of note that nearly every free-culture license requires attribution, even the permissive licenses. AI doesn't honour this at all. At best, it cites some webpages if there's a search engine bolted on, but for images, code and music that simply does not happen.
-
@peter I think everyone in the industry understands that what AI companies are doing FEELS LIKE STEALING, so much so that Cloudflare makes a product that allows site owners to PREVENT scraping by AI-training crawlers.
Yet I think we all struggle with defining WHY it feels like stealing, and what set of rules or social contract we can put in place to DEFINE why it is theft.
@drahardja I think the key difference vs search engines is there's some mutual benefit there - they give you traffic in exchange for serving ads around links to your content.
With AI there's no mutual benefit for the creators. In fact, it's worse than that - it's also threatening the creators' livelihoods.
Obligatory "I'm not a lawyer though" - so I'm not sure how best to translate this into law. Consent feels like a good starting point though. For a start, robots.txt rules must be enforced.
-
One thing that continues to grate on my conscience about #AI is how artists and writers consistently feel that the technology has STOLEN from them. We all know that web scraping is (and should be) a perfectly legal and acceptable use, because preventing it also prevents all sorts of beneficial behaviors—the Internet Archive wouldn’t be able to exist, for one thing.
And yet, the very nature of AI takes scraped content and regurgitates it as a pink-slime extrusion that it feeds back into the web. And to creators, that just FEELS WRONG; it feels like stolen valor, it feels like exploitation.
And it’s something I can’t (and shouldn’t) shake from my mind each time I see something made by AI. Just because something is LEGAL doesn’t mean it isn’t ABUSIVE and UNETHICAL. Scolding people who complain about AI by telling them that web scraping is good, actually, doesn’t address the main complaint: that somehow, these AI assholes have EXPLOITED A COMMON GOOD and we can’t quite figure out how to stop it.
@drahardja Keep in mind that one of the reasons "scraping" by archive.org is ok is that not only is a record of all media important, but it's important that it be, somehow, a publicly accessible and RELIABLE archive *without it being a for-profit venture* BECAUSE not only did we MAKE it, but all of us grew up on it. Think of the ridiculousness of copyrighting "Happy Birthday" and you'll be on the right track. Human culture IS the record, and we all deserve access to it.
-
@drahardja @peter It is of note that nearly every free-culture license requires attribution, even the permissive licenses. AI doesn't honour this at all. At best, it cites some webpages if there's a search engine bolted on, but for images, code and music that simply does not happen.
Due to how LLMs process the slurped information, they can't provide accurate attribution.
-
@drahardja Keep in mind that one of the reasons "scraping" by archive.org is ok is that not only is a record of all media important, but it's important that it be, somehow, a publicly accessible and RELIABLE archive *without it being a for-profit venture* BECAUSE not only did we MAKE it, but all of us grew up on it. Think of the ridiculousness of copyrighting "Happy Birthday" and you'll be on the right track. Human culture IS the record, and we all deserve access to it.
@wyatt_h_knott Thought experiment: Would you be ethically OK with ChatGPT if OpenAI had remained a nonprofit, and didn’t take huge amounts of VC money? Or is there something more fundamental about how the models were created and deployed that would still not make it OK?
-
Many people I know have taken to large-scale AI-assisted coding with no qualms, because the tech can be useful in many cases, especially when the users are already software experts. But just because a technology is USEFUL doesn’t mean it is ETHICAL to use, and it’s impossible to see AI output without also seeing the masses of creators whose works have been scraped, many of whom feel like they’ve been used and exploited.
If ever there was a time to resist this technology, it’s now. We are at an inflection point, and we can either jump in headlong and profit from AI as it stands today, or we can help put the brakes on, slow things down, and take the time to work out how (or IF it’s possible) to use this technology ethically.
Listen to the creators. They almost universally feel exploited by AI. Try to figure out why that is, and why our norms don’t account for that.
One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are now at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What are some strategies that employees have to make ethical problems more salient to the discussion?
In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.
-
One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are now at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What are some strategies that employees have to make ethical problems more salient to the discussion?
In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.
@drahardja@sfba.social speaking of the ethical issues, Claude, for example, is used to commit war crimes in Gaza and Iran. These tools are deeply embedded in the American state and military-industrial complex; that massacre of schoolgirls was an AI decision
-
@drahardja@sfba.social speaking of the ethical issues, Claude, for example, is used to commit war crimes in Gaza and Iran. These tools are deeply embedded in the American state and military-industrial complex; that massacre of schoolgirls was an AI decision
@mook Yes and it’s impossible to use one part of the product without also supporting the rest.
-
One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are now at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What are some strategies that employees have to make ethical problems more salient to the discussion?
In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.
Many in the West suffer from "everything is like everything else" syndrome, where the Internet Archive's obviously benevolent and beneficial use of scraping is *precisely the same* as planet-destroying garbage-peddling LLM grifters' scraping of the internet.
This is exactly the same phenomenon that leads people to say that extremist Leftists and extremist Fascists are "the same"; it doesn't matter that one wants people to have free healthcare and housing and food and the other wants...
-
Many in the West suffer from "everything is like everything else" syndrome, where the Internet Archive's obviously benevolent and beneficial use of scraping is *precisely the same* as planet-destroying garbage-peddling LLM grifters' scraping of the internet.
This is exactly the same phenomenon that leads people to say that extremist Leftists and extremist Fascists are "the same"; it doesn't matter that one wants people to have free healthcare and housing and food and the other wants...
...to murder all trans people, Black people, immigrants, queer people and leftists while implementing a tyrannical government and stealing literally everything from everyone else.
Under this bizarre Duality fallacy we've all been acculturated to, the "two sides" have to be "the same", because symmetry amirite? And that's the extent of the argument lol lmao
Like, as I have to say almost every day: not everything is the same as everything else. FFS.
-
...to murder all trans people, Black people, immigrants, queer people and leftists while implementing a tyrannical government and stealing literally everything from everyone else.
Under this bizarre Duality fallacy we've all been acculturated to, the "two sides" have to be "the same", because symmetry amirite? And that's the extent of the argument lol lmao
Like, as I have to say almost every day: not everything is the same as everything else. FFS.
Another great example is when artists create something inspired by other artists or that quotes or references other artists.
This is *not the same* as a voracious, unethical, and monstrous technology regurgitating an artist's work slightly changed, without discourse or critique.
Like, transparently not the same.
And yet - it's LLM-pilled fuckfaces' fav argument.
-
One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are now at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What are some strategies that employees have to make ethical problems more salient to the discussion?
In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.
@drahardja I’ve been living this particular scenario for a while now. I’m a manager of a team with varied seniority, and with a 9 year tenure at my employer, a fairly visible and recognized colleague.
I focus on being a visible critic of the tools and an example that you can still do your job, and do it well, without them. I’ve told my team that I won’t require them, and while I won’t stop them from using LLMs, I obviously won’t assist them actively. And I seek out others in similar positions and collaborate with them on how to challenge the adoption.
And I take every chance I have to ask critical but real questions to leadership about their intentions and strategy. I honestly have found very little success in an ethics-focused approach; the pushback I get is “ethics are not a factor in our fiduciary responsibility to stakeholders, earning more profit is, and LLMs help that.” That saddens me, but it is likely the standard for any company leadership that is already adopting LLMs at any level.
-
One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are now at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What are some strategies that employees have to make ethical problems more salient to the discussion?
In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.
@drahardja I have simply been pointing out that it doesn't help with anything that's actually a bottleneck. Our problem is never that we can't produce code fast enough -- we need to do more *engineering*, not programming, to solve customer issues, and engineering is an intrinsically social activity.
-
@tiotasram While I’m a supporter of unions, I don’t think it should be a prerequisite. And neither should being anti-AI be a prerequisite for joining a union.
-
@drahardja @peter It is of note that nearly every free-culture license requires attribution, even the permissive licenses. AI doesn't honour this at all. At best, it cites some webpages if there's a search engine bolted on, but for images, code and music that simply does not happen.
@brib @drahardja @peter this is my rebuttal to the people who tell me I’ve “always been a prompt engineer because web search, so just embrace the new better search!” Not only is that a massive overstatement in my case, as I prefer learning by finding and reading good books, but when I need search I still prefer web indexes for the essential context provided by attribution, which LLMs are incapable of retaining. Take me to the original source and *its* references!
-
One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are now at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What are some strategies that employees have to make ethical problems more salient to the discussion?
In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.
@drahardja Pointing out that it's f..ing awful at coding seems to work.
At our place the legal dept have rung alarm bells at the possibility of our source code being used to train an LLM and spat out almost unchanged to a competitor, so they've declared a moratorium on AI use except certain 'approved' ones (which will almost certainly end up being only Copilot, because we're an MS shop).