I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling. here is my list of reasons for the policy, but my board is asking me to cite sources. Can you help me with any good references you would cite for any of these? (Or an edit or restatement where I’ve gotten it wrong or inaccurate?)
*if you want to argue about why I shouldn’t have this policy kindly crawl into a hole in the ground and cover yourself with soil
-
@seachanger I would look to the work of @emilymbender and her colleagues
-
@sarae i have followed them for a while but now I am trying to just get some clear sources pasted in that people might know of
-
@seachanger this DAIR page has several issues
https://www.dair-institute.org/categories/the-real-harms-of-ai-systems/
@seachanger and you may also find this one useful, including its citations
-
@seachanger@alaskan.social Not sure about the methodology behind this one, but I've heard about it at least (re: #10): https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
-
@seachanger@alaskan.social Regarding item #5: https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai
It's important to note, though, that the ruling walks a fine line: training of Claude was considered to be "fair use" (not a ruling I personally agree with but hey), however, the fact that Anthropic pirated all the materials was not. Anthropic settled on this claim rather than take it to trial, it seems.
-
@seachanger I probably do here but would need to do some cross referencing I can’t do at the moment
-
@sarae yes, also @skinnylatte comes to mind for AI & nonprofits
-
MIT recently released a study on the long-term cognitive effects of AI use. (Spoiler: they're not good effects.)
-
Oh, and not necessarily something you can "cite" - but on the prohibition on AI in comms: The people you're communicating with deserve your time and energy in creating those messages.
(I'm still salty about one of our executives sending out an intro email to us where he gleefully announced he used ChatGPT for it. How little does he think of us if he can't even be arsed to write his own email?)
-
@seachanger contact a librarian ... not sure if you are connected to a university. I wasn't, but university librarians were always very happy to help me, and they're fast.
-
@aud @seachanger That's about the only actual study we have and it has a fairly low sample size, unfortunately. There are some other articles going around about the high cost and failure rates of AI projects though.
Methodology-wise, it's okay and at least tries to control for perception vs reality.
-
@seachanger@alaskan.social speaking to maybe 6 and 7: not all that is sold as “AI” is actually AI, which isn’t quite what I had in mind while looking for privacy and safety concerns but it’s certainly related
https://data-workers.org/france/
-
@seachanger@alaskan.social speaking to #3 a little: https://www.theguardian.com/technology/2026/jan/15/elon-musk-xai-datacenter-memphis
The other companies aren’t quite as blatant as Musk. Not sure I have any good definitive links on that; they definitely like to hide and fudge the numbers (“watt per inference!”), so I was trying to find something about the data center strain on grid capacity, but a lot of it is paywalled…
-
The endnotes in our book are full of sources:
https://thecon.ai
-
Also, not sure what you mean by sources people might know of, but ... our book is a source!
-
@seachanger this is a great resource, I think you will find some sources here: https://libguides.amherst.edu/genAI/ethics
-
@seachanger #6. Which links to the Stanford report it discusses.
Anecdotally, even though Kagi Translate has instructions not to divulge its prompt to anyone, people are easily able to get it to do so by asking it to create or show the output of programs that do exactly that.
I can dig up those examples if you want.
-
@emilymbender
Thank you! I just thought people might reference recent stories or reports that back the specific points I was making. I am also adding your book and a few others from https://monetdiaz.com/books-critical-AI.html
-
@arod oh wow yes that is what I was looking for
-
@seachanger here are a couple of links on ai's role in digital colonialism in africa and south america in case that's helpful!
https://www.ictworks.org/african-digital-colonialism/ (a synopsis of https://www.ictworks.org/wp-content/uploads/2025/01/African-Digital-Colonialism.pdf)
https://peopledaily.digital/insights/the-hidden-cost-of-ai-africas-invisible-workforce-and-digital-servitude (ironically uses an ai generated stock image as the article header)
https://www.technologyreview.com/supertopic/ai-colonialism-supertopic/ (keeps trying to sell me ai books lol)