I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling.

42 Posts, 24 Posters
  • seachanger@alaskan.social (OP)

    I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling. Here is my list of reasons for the policy, but my board is asking me to cite sources. Can you help me with any good references you would cite for any of these? (Or an edit or restatement where I’ve gotten it wrong or inaccurate?)

    *If you want to argue about why I shouldn’t have this policy, kindly crawl into a hole in the ground and cover yourself with soil.

  • kellyromanych@mastodon.social (#3)

    @seachanger I will check around for cites. Having dealt with boards, the first thing that came to mind was that “AI can’t donate.”
  • aud@fire.asta.lgbt

    @seachanger@alaskan.social Here's a recent Guardian article that speaks to item #2: https://www.theguardian.com/technology/ng-interactive/2026/feb/28/chatgpt-ai-chatbot-mental-health

    EDIT: This one needs a content warning for suicide, to be clear.

  • aud@fire.asta.lgbt (#4)

    @seachanger@alaskan.social Not sure about the methodology behind this one, but I've heard about it at least (re: #10): https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
  • kellyromanych@mastodon.social (#5)

    @seachanger This DAIR page has several issues:

    DAIR (Distributed AI Research Institute): “DAIR is a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence.” (www.dair-institute.org)
  • sarae@ecoevo.social (#6)

    @seachanger I would look to the work of @emilymbender and her colleagues.
  • seachanger@alaskan.social (#7)

    @sarae I have followed them for a while, but now I am trying to just get some clear sources pasted in that people might know of.
  • kellyromanych@mastodon.social (#8)

    @seachanger And you may also find this one useful, including its citations: (pmc.ncbi.nlm.nih.gov)
  • aud@fire.asta.lgbt (#9)

    @seachanger@alaskan.social Regarding item #5: https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai

    It's important to note, though, that the ruling walks a fine line: training Claude was held to be "fair use" (not a ruling I personally agree with, but hey); however, the fact that Anthropic pirated all the materials was not. Anthropic settled on this claim rather than take it to trial, it seems.
  • darby3@zirk.us (#10)

    @seachanger I probably do here, but I would need to do some cross-referencing I can’t do at the moment:

    AI Sucks, Actually: “that's it, that's the thesis” (ai-sucks-actually.fyi)
  • kellyromanych@mastodon.social (#11)

    @sarae Yes; @skinnylatte also comes to mind for AI & nonprofits.

    @seachanger @emilymbender
  • cafechatnoir@mastodon.social (#12)

    @seachanger MIT recently released a study on the long-term cognitive effects of AI use. (Spoiler: they're not good effects.)

    MIT Study Finds Artificial Intelligence Use Reprograms the Brain, Leading to Cognitive Decline, by Nicolas Hulscher, MPH. Science, Public Health Policy and the Law (publichealthpolicyjournal.com)
  • cafechatnoir@mastodon.social (#13)

    @seachanger Oh, and not necessarily something you can "cite," but on the prohibition on AI in comms: the people you're communicating with deserve your time and energy in creating those messages.

    (I'm still salty about one of our executives sending out an intro email to us where he gleefully announced he used ChatGPT for it. How little does he think of us if he can't even be arsed to write his own email?)
  • mtechman@mastodon.ie (#14)

    @seachanger Contact a librarian. I'm not sure if you are connected to a university; I wasn't, but university librarians were always very happy to help me, and they're fast.
  • aud@fire.asta.lgbt (#15)

    @seachanger@alaskan.social Speaking to maybe #6 and #7: not all that is sold as “AI” is actually AI. That isn't quite what I had in mind while looking for privacy and safety concerns, but it's certainly related:

    https://data-workers.org/france/
  • cthos@mastodon.cthos.dev (#16)

    @aud @seachanger That's about the only actual study we have, and it has a fairly low sample size, unfortunately. There are some other articles going around about the high cost and failure rates of AI projects, though.

    Methodology-wise, it's okay and at least tries to control for perception vs. reality.
  • aud@fire.asta.lgbt (#17)

    @seachanger@alaskan.social Speaking to #3 a little: https://www.theguardian.com/technology/2026/jan/15/elon-musk-xai-datacenter-memphis

    The other companies aren’t quite as blatant as Musk. Not sure I have any good definitive links on that; they definitely like to hide and fudge the numbers (“watts per inference!”), so I was trying to find something about data center strain on grid capacity, but a lot of it is paywalled…
  • emilymbender@dair-community.social (#18)

    @seachanger @sarae The endnotes in our book are full of sources: https://thecon.ai
  • emilymbender@dair-community.social (#19)

    @seachanger @sarae Also, not sure what you mean by sources people might know of, but ... our book is a source!
  • edcates@mastodon.social (#20)

    @seachanger Re #10: https://vcresearch.berkeley.edu/news/does-ai-actually-free-workers-time
  • arod@social.coop (#21)

    @seachanger This is a great resource; I think you will find some sources here: https://libguides.amherst.edu/genAI/ethics
  • edcates@mastodon.social (#22)

    @seachanger Re #6: this links to the Stanford report it discusses. (www.kiteworks.com)

    Anecdotally, even though Kagi Translate has instructions not to divulge its prompt to anyone, people are easily able to get it to do so by asking it to create or show the output of programs that do exactly that.

    I can dig up those examples if you want.