I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling. Here is my list of reasons for the policy, but my board is asking me to cite sources. Can you help me with any good references you would cite for any of these? (Or an edit or restatement where I’ve gotten something wrong or inaccurate?)
*if you want to argue about why I shouldn’t have this policy kindly crawl into a hole in the ground and cover yourself with soil
@seachanger@alaskan.social Here's one potential reason: a recent meta-analysis concluded that the general public is terrified of AI and has near-zero trust in AI products https://onlinelibrary.wiley.com/doi/10.1002/cb.70144?af=R
-
@seachanger I probably do have some, but I’d need to do some cross-referencing I can’t do at the moment
@darby3 thank you! nice work!
-
MIT recently released a study on the long-term cognitive effects of AI use. (Spoiler: they're not good effects.)
@cafechatnoir @seachanger
This report from our collective @tunubesecamirio provides plenty of reference material for point 3 -
@seachanger don't they have an "AI IS GOING GREAT" website?
Web3 is Going Just Great
A timeline recording only some of the many disasters happening in crypto, decentralized finance, NFTs, and other blockchain-based projects.
(www.web3isgoinggreat.com)
like they had for crypto shit.
-
@seachanger this is a great resource, I think you will find some sources here: https://libguides.amherst.edu/genAI/ethics
@arod @seachanger great list of reasons not to use AI.
-
My policy for AI in programming tends to be applicable in other areas as well.
• AI should be used to assist and enhance existing workflows, not replace them.
• When using AI, be sure to split your workflow into smaller, more manageable chunks.
• Proofread, then validate the output against other sources before incorporating it into your work.
This ensures that anything contributed by an AI meets the same expectations as a human performing the same task. If you implement these guidelines, or some variant of them, into your workflow, you'll find that many of the common pitfalls of AI can easily be avoided. While the efficacy of AI can never be guaranteed, I find that sticking to these guidelines helps steer the output toward something less liable to be derivative.
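The "proofread then validate" step above can be sketched as a tiny review gate in code: run any AI-suggested function against known-good test cases before adopting it, exactly as you would with a human contribution. This is only a minimal illustration; the function names and the `slugify` example are hypothetical, not from any particular tool.

```python
# Minimal sketch of a "validate before adopting" gate for AI-assisted code.
# All names here are hypothetical; adapt to your own test setup.

def validate_ai_suggestion(candidate_fn, test_cases):
    """Run an AI-suggested function against known-good test cases.

    Returns True only if every case passes; otherwise the suggestion
    is rejected and a human rewrites or discards it.
    """
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return False
        except Exception:
            return False
    return True

# Example: a hypothetical AI-suggested slugify helper, checked before use.
def ai_suggested_slugify(title):
    return "-".join(title.lower().split())

cases = [
    (("Hello World",), "hello-world"),
    (("  AI   Policy  ",), "ai-policy"),
]
print(validate_ai_suggestion(ai_suggested_slugify, cases))  # True
```

The same pattern works for prose: keep a checklist of claims the output must satisfy, and reject anything that fails rather than patching it blindly.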
-
@cafechatnoir @seachanger pinging @WeirdWriter, who put in beautiful, powerful words how that experience of “semantic ablation” affected his writer friend. At least, it seems to be recoverable, but at what cost…
-
@juandesant @cafechatnoir @seachanger Yay, thank you for tagging! My narrative is at the end. I’ve seen it have drastically negative psychological consequences for everybody who uses it. Writers, readers, anybody really. I recently had a scenario where a trans friend of mine quit writing altogether because everybody was praising her for doing such a fantastic job of prompting the thing, when she had never used an LLM at all. The truly horrifying part was that the positive comments were the more disturbing ones, because they credited an LLM with creating work she made herself, when she has never touched an LLM in her life. I’m going to write about it, but right now the emotions are swirling around and I need to calm down after these incidents. Anywho, if you have not read it yet, the first story is https://sightlessscribbles.com/the-colonization-of-confidence/
-
@seachanger been collecting news articles here: https://kdwarn.net/programming/links#AI%20Sucks
-
@seachanger @janeishly I really like what you're doing here. You may want to add that there is little transparency around the training data. Many models are trained on data that contains harmful biases and prejudices against BIPOC, LGBT+ people, etc. Training may also involve exploitation of labor in developing countries. Good luck with getting a strong policy approved

-
@seachanger you're missing the bias involved in training these models
-
@seachanger lol, tbh, I think the way to go here is to have ChatGPT hallucinate sources, provide those, and let the board figure out that you just gave them another reason...