I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling.

• seachanger@alaskan.social (original post)
I’m working on an AI policy for my org that allows us to opt out of AI note-taking and prohibits AI in our comms/storytelling. Here is my list of reasons for the policy, but my board is asking me to cite sources. Can you help me with any good references you would cite for any of these? (Or an edit or restatement where I’ve gotten it wrong or inaccurate?)

*If you want to argue about why I shouldn’t have this policy, kindly crawl into a hole in the ground and cover yourself with soil.

• cafechatnoir@mastodon.social (#12)

    @seachanger

MIT recently released a study on the long-term cognitive effects of AI use. (Spoiler: they're not good effects.)

    https://publichealthpolicyjournal.com/mit-study-finds-artificial-intelligence-use-reprograms-the-brain-leading-to-cognitive-decline/

• cafechatnoir@mastodon.social (#13)

      @seachanger

      Oh, and not necessarily something you can "cite" - but on the prohibition on AI in comms: The people you're communicating with deserve your time and energy in creating those messages.

(I'm still salty about one of our executives sending out an intro email to us where he gleefully announced he used ChatGPT for it. How little does he think of us if he can't even be arsed to write his own email?)

• mtechman@mastodon.ie (#14)

@seachanger Contact a librarian. Not sure if you are connected to a university; I wasn't, but university librarians were always very happy to help me, and they're fast.

• aud@fire.asta.lgbt

          @seachanger@alaskan.social Regarding item #5: https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai

It's important to note, though, that the ruling walks a fine line: training of Claude was considered to be "fair use" (not a ruling I personally agree with, but hey); however, the fact that Anthropic pirated all the materials was not. Anthropic settled on this claim rather than take it to trial, it seems.

• aud@fire.asta.lgbt (#15)

          @seachanger@alaskan.social speaking to maybe 6 and 7: not all that is sold as “AI” is actually AI, which isn’t quite what I had in mind while looking for privacy and safety concerns but it’s certainly related

          https://data-workers.org/france/

• aud@fire.asta.lgbt

            @seachanger@alaskan.social Not sure about the methodology behind this one, but I've heard about it at least (re: #10): https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

• cthos@mastodon.cthos.dev (#16)

            @aud @seachanger That's about the only actual study we have and it has a fairly low sample size, unfortunately. There are some other articles going around about the high cost and failure rates of AI projects though.

            Methodology-wise, it's okay and at least tries to control for perception vs reality.

• aud@fire.asta.lgbt (#17)

              @seachanger@alaskan.social speaking to #3 a little: https://www.theguardian.com/technology/2026/jan/15/elon-musk-xai-datacenter-memphis

The other companies aren’t quite as blatant as Musk. Not sure I have any good definitive links on that; they definitely like to hide and fudge the numbers (“watt per inference!”), so I was trying to find something about the data center strain on grid capacity, but a lot of it is paywalled…

• seachanger@alaskan.social
                @sarae i have followed them for a while but now I am trying to just get some clear sources pasted in that people might know of

• emilymbender@dair-community.social (#18)

                @seachanger @sarae

                The endnotes in our book are full of sources:
                https://thecon.ai

• emilymbender@dair-community.social (#19)

                  @seachanger @sarae

                  Also, not sure what you mean by sources people might know of, but ... our book is a source!

• edcates@mastodon.social (#20)

                    @seachanger

                    #10. https://vcresearch.berkeley.edu/news/does-ai-actually-free-workers-time

• arod@social.coop (#21)

                      @seachanger this is a great resource, I think you will find some sources here: https://libguides.amherst.edu/genAI/ethics

• edcates@mastodon.social (#22)

@seachanger #6: this article, which links to the Stanford report it discusses.

                        https://www.kiteworks.com/cybersecurity-risk-management/ai-data-privacy-risks-stanford-index-report-2025/

Anecdotally, even though Kagi Translate has instructions not to divulge its prompt to anyone, people are easily able to get it to do so by asking it to create or show the output of programs that do exactly that.

                        I can dig up those examples if you want.

• seachanger@alaskan.social (#23)

@emilymbender @sarae
Thank you! I just thought people might reference recent stories or reports that back the specific points I was making. I am also adding your book and a few others from https://monetdiaz.com/books-critical-AI.html

• seachanger@alaskan.social (#24)

                            @arod oh wow yes that is what I was looking for

• imbl@social.treehouse.systems (#25)

@seachanger Here are a couple of links on AI's role in digital colonialism in Africa and South America, in case that's helpful!

                              https://www.ictworks.org/african-digital-colonialism/ (a synopsis of https://www.ictworks.org/wp-content/uploads/2025/01/African-Digital-Colonialism.pdf)
                              https://peopledaily.digital/insights/the-hidden-cost-of-ai-africas-invisible-workforce-and-digital-servitude (ironically uses an ai generated stock image as the article header)
                              https://www.technologyreview.com/supertopic/ai-colonialism-supertopic/ (keeps trying to sell me ai books lol)

• johannab@cosocial.ca (#26)

                                @seachanger @emilymbender @sarae

Not quite at my fingertips right now and I'll go have a look, but the consulting firm Deloitte is a "case study as a dire warning", as is Air Canada - both were held liable and had to reimburse clients for letting AI fuckups into their official products or communications.

• johannab@cosocial.ca (#27)

                                  @seachanger @emilymbender @sarae

Boards are usually much more receptive to "well, this is a risk that could get your own ass handed to you in court, minus any cash you had in your back pocket" than to "this is a highly problematic tool that is deceptively easy to misuse badly", because everyone thinks everyone else who got in trouble was just not as smart as they are.

• johannab@cosocial.ca (#28)

                                    @seachanger

                                    not too tough to find, even:

                                    https://www.theguardian.com/australia-news/2025/oct/06/deloitte-to-pay-money-back-to-albanese-government-after-using-ai-in-440000-report

                                    https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416

• integerdisarray@lethallava.land (#29)

                                      @seachanger@alaskan.social Here's one potential reason: a recent meta-analysis concluded that the general public is terrified of AI and has near-zero trust in AI products https://onlinelibrary.wiley.com/doi/10.1002/cb.70144?af=R

• darby3@zirk.us
@seachanger I probably do here, but I would need to do some cross-referencing I can’t do at the moment:

AI Sucks, Actually: "that's it, that's the thesis" (ai-sucks-actually.fyi)

• seachanger@alaskan.social (#30)

                                        @darby3 thank you! nice work!

• coriopsicologia@mastodon.social (#31)

@cafechatnoir @seachanger
This report from our collective @tunubesecamirio provides plenty of references for point 3:

https://tunubesecamirio.com/2026/02/05/informe-sobre-los-centros-de-datos-de-aragon-el-precio-de-las-nubes/
