One thing that continues to grate on my conscience about #AI is how artists and writers consistently feel that the technology has STOLEN from them.
-
Many people I know have taken to large-scale AI-assisted coding with no qualms, because the tech can be useful in many cases, especially when the users are already software experts. But just because a technology is USEFUL doesn’t mean it is ETHICAL to use, and it’s impossible to see AI output without also seeing the masses of creators whose works have been scraped, many of whom feel like they’ve been used and exploited.
If ever there was a time to resist this technology, it’s now. We are at an inflection point, and we can either jump in headlong and profit from AI as it stands today, or we can help put the brakes on, slow things down, and take the time to work out how (or IF it’s possible) to use this technology ethically.
Listen to the creators. They almost universally feel exploited by AI. Try to figure out why that is, and why our norms don’t account for that.
One discussion I’d like to have is how ethically-motivated software engineers and managers, both junior and senior, can put the brakes on at their workplace. Many corporations are implementing top-down mandates to use coding assistant models during development. However, many are now at some “pilot” stage and are therefore somewhat receptive (vulnerable) to pushback from the ranks. What are some strategies that employees have to make ethical problems more salient to the discussion?
In many cases, “refuse to use it” is not an option—or at least it’s likely perceived as a career-limiting option—because of said top-down mandate. Senior staff can choose this path, but junior ones will find it very risky unless there is community support.
-
@drahardja@sfba.social Speaking of the ethical issues: Claude, for example, is used to commit war crimes in Gaza and Iran. These tools are deeply embedded in the American state and military-industrial complex; that massacre of schoolgirls was an AI decision.
-
@mook Yes and it’s impossible to use one part of the product without also supporting the rest.
-
Many in the West suffer from "everything is like everything else" syndrome, where the Internet Archive's obviously benevolent and beneficial use of scraping is *precisely the same* as planet-destroying garbage-peddling LLM grifters' scraping of the internet.
This is exactly the same phenomenon that leads people to say that extremist Leftists and extremist Fascists are "the same"; it doesn't matter that one wants everyone to have free healthcare and housing and food and the other wants...
-
...to murder all trans people, Black people, immigrants, queer people and leftists while implementing a tyrannical government while stealing literally everything from everyone else.
Under this bizarre Duality fallacy we've all been acculturated to, the "two sides" have to be "the same", because symmetry amirite? And that's the extent of the argument lol lmao
Like, as I have to say almost every day: not everything is the same as everything else. FFS.
-
Another great example is when artists create something inspired by other artists or that quotes or references other artists.
This is *not the same* as a voracious, unethical, and monstrous technology regurgitating an artist's work slightly changed, without discourse or critique.
Like, transparently not the same.
And yet - it's LLM-pilled fuckfaces fav argument.
-
@drahardja I’ve been living this particular scenario for a while now. I’m a manager of a team with varied seniority, and with a 9-year tenure at my employer, a fairly visible and recognized colleague.
I focus on being a visible critic of the tools and an example that you can still do your job, and do it well, without them. I’ve told my team that I won’t require them, and while I won’t stop them from using LLMs, I obviously won’t assist them actively. And I seek out others in similar positions and collaborate with them on how to challenge the adoption.
And I take every chance I get to ask critical but real questions of leadership about their intentions and strategy. I honestly have found very little success with an ethics-focused approach; the pushback I get is “ethics are not a factor in our fiduciary responsibility to stakeholders; earning more profit is, and LLMs help that.” That saddens me, but it is likely the standard for any company leadership that is already adopting LLMs at any level.
-
@drahardja I have simply been pointing out that it doesn't help with anything that's actually a bottleneck. Our problem is never that we can't produce code fast enough -- we need to do more *engineering*, not programming, to solve customer issues, and engineering is an intrinsically social activity.
-
@tiotasram While I’m a supporter of unions, I don’t think it should be a prerequisite. And neither should being anti-AI be a prerequisite for joining a union.
-
@drahardja @peter It is of note that nearly every free-culture license requires attribution, even the permissive licenses. AI doesn't honour this at all. At best, it cites some webpages if there's a search engine bolted on, but for images, code and music that simply does not happen.
@brib @drahardja @peter this is my rebuttal to the people who tell me I’ve “always been a prompt engineer because web search, so just embrace the new better search!” Not only is that a massive overstatement in my case, as I prefer learning by finding and reading good books, but when I need search I still prefer web indexes for the essential context provided by attribution which LLMs are incapable of retaining. Take me to the original source and *its* references!
-
@drahardja Pointing out that it's f..ing awful at coding seems to work.
At our place, the legal dept have rung alarm bells at the possibility of our source code being used to train an LLM and being spat out almost unchanged to a competitor, so they've declared a moratorium on AI use except for certain 'approved' tools (which will almost certainly end up being only Copilot, because we're an MS shop).
-