I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling.

Uncategorized · 42 Posts · 24 Posters
seachanger@alaskan.social wrote:

    I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling. Here is my list of reasons for the policy, but my board is asking me to cite sources. Can you help me with any good references you would cite for any of these? (Or an edit or restatement where I’ve gotten it wrong or inaccurate?)

    *If you want to argue about why I shouldn’t have this policy, kindly crawl into a hole in the ground and cover yourself with soil.

edcates@mastodon.social (#20) wrote:

    @seachanger

    #10. https://vcresearch.berkeley.edu/news/does-ai-actually-free-workers-time
arod@social.coop (#21) wrote:

    @seachanger this is a great resource, I think you will find some sources here: https://libguides.amherst.edu/genAI/ethics
edcates@mastodon.social (#22) wrote:

    @seachanger #6, which links to the Stanford report it discusses. (www.kiteworks.com)

    Anecdotally, even though Kagi Translate has instructions not to divulge its prompt to anyone, people are easily able to get it to do so by asking it to create or show the output of programs that do exactly that.

    I can dig up those examples if you want.
emilymbender@dair-community.social wrote:

    @seachanger @sarae

    Also, not sure what you mean by sources people might know of, but ... our book is a source!

seachanger@alaskan.social (#23) wrote:

    @emilymbender Thank you! I just thought people might reference recent stories or reports that back the specific points I was making. I am also adding your book and a few others from https://monetdiaz.com/books-critical-AI.html

    @sarae
seachanger@alaskan.social (#24) wrote:

    @arod oh wow yes that is what I was looking for
imbl@social.treehouse.systems (#25) wrote:

    @seachanger here are a couple of links on AI's role in digital colonialism in Africa and South America in case that's helpful!

    https://www.ictworks.org/african-digital-colonialism/ (a synopsis of https://www.ictworks.org/wp-content/uploads/2025/01/African-Digital-Colonialism.pdf)
    https://peopledaily.digital/insights/the-hidden-cost-of-ai-africas-invisible-workforce-and-digital-servitude (ironically uses an AI-generated stock image as the article header)
    https://www.technologyreview.com/supertopic/ai-colonialism-supertopic/ (keeps trying to sell me AI books lol)
johannab@cosocial.ca (#26) wrote:

    @seachanger @emilymbender @sarae

    Not quite at my fingertips right now and I'll go have a look, but the consulting firm Deloitte is a "case study as a dire warning", as is Air Canada - both were held liable and had to reimburse clients for letting AI fuckups into their official products or communications.
johannab@cosocial.ca (#27) wrote:

    @seachanger @emilymbender @sarae

    Boards are usually much more receptive to "well, this is a risk that could get your own ass handed to you in court, minus any cash you had in your back pocket" than they are to "this is a highly problematic tool that is deceptively easy to misuse badly", because everyone thinks everyone else who got in trouble was just not as smart as they are.
johannab@cosocial.ca (#28) wrote:

    @seachanger

    not too tough to find, even:

    "Deloitte to pay money back to Albanese government after using AI in $440,000 report" - the Guardian (www.theguardian.com). Partial refund to be issued after several errors were found in a report into a department's compliance framework.

    "How can I mislead you? Air Canada found liable for chatbot's bad advice on bereavement rates" - CBC News (www.cbc.ca). Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot.
integerdisarray@lethallava.land (#29) wrote:

    @seachanger@alaskan.social Here's one potential reason: a recent meta-analysis concluded that the general public is terrified of AI and has near-zero trust in AI products: https://onlinelibrary.wiley.com/doi/10.1002/cb.70144?af=R
darby3@zirk.us wrote:

    @seachanger I probably do here but would need to do some cross referencing I can't do at the moment

    "AI Sucks, Actually" - that's it, that's the thesis (ai-sucks-actually.fyi)

seachanger@alaskan.social (#30) wrote:

    @darby3 thank you! nice work!
cafechatnoir@mastodon.social wrote:

    @seachanger

    MIT recently released a study on the long-term cognitive effects of AI use. (Spoiler: they're not good effects.)

    "MIT Study Finds Artificial Intelligence Use Reprograms the Brain, Leading to Cognitive Decline" by Nicolas Hulscher, MPH - Science, Public Health Policy and the Law (publichealthpolicyjournal.com)

coriopsicologia@mastodon.social (#31) wrote:

    @cafechatnoir @seachanger This report from our collective @tunubesecamirio provides plenty of reference for point 3:

    "Informe sobre los centros de datos de Aragón - El precio de las nubes" ("Report on the data centers of Aragón - the price of the clouds") - Tu Nube Seca Mi Río (tunubesecamirio.com)
tootbrute@fedi.arkadi.one (#32) wrote:

    @seachanger don't they have an "AI IS GOING GREAT" website, like they had for crypto shit?

    "Web3 is Going Just Great" - a timeline recording only some of the many disasters happening in crypto, decentralized finance, NFTs, and other blockchain-based projects. (www.web3isgoinggreat.com)
tootbrute@fedi.arkadi.one (#33) wrote:

    @arod @seachanger great list of reasons not to use AI.
saorsa@neondystopia.world (#34) wrote:

    My policy for AI in programming tends to be applicable in other areas as well.

    • AI should be used to assist and enhance existing workflows, not replace them.
    • When using AI, be sure to split your workflow into smaller, more manageable chunks.
    • Proofread, then validate the output against other sources before incorporating it into your work.

    This ensures that anything contributed by an AI meets the same expectations as a human performing the same task. If you implement these guidelines, or some variant of them, into your workflow, you'll find that a lot of the common pitfalls with AI can easily be avoided. While the efficacy of AI can never be guaranteed, I find that sticking to those guidelines helps direct the output into something less liable to be derivative.

    @seachanger@alaskan.social
juandesant@mathstodon.xyz (#35) wrote:

    @cafechatnoir @seachanger pinging @WeirdWriter, who put in beautiful, powerful words how that experience of "semantic ablation" affected his writer friend. At least it seems to be recoverable, but at what cost…
weirdwriter@caneandable.social (#36) wrote:

    @juandesant @cafechatnoir @seachanger Yay, thank you for tagging! My narrative is at the end. I've seen it have drastically negative psychological consequences for everybody that uses it - writers, readers, anybody really. I recently had a scenario where a trans friend of mine quit writing altogether because everybody was praising her for doing such a fantastic job of prompting the thing when she never used an LLM at all. The truly horrifying part was that the positive comments were the more disturbing ones, because they praised an LLM for creating work she had written herself when she has never touched an LLM in her life. I'm going to write about it, but right now the emotions are swirling around and I need to calm down after these incidents. Anyhow, if you have not read it yet, the first story is https://sightlessscribbles.com/the-colonization-of-confidence/
thankfulmachine@oldbytes.space (#37) wrote:

    @seachanger https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/
thankfulmachine@oldbytes.space (#38) wrote:

    @seachanger https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
kdwarn@social.coop (#39) wrote:

    @seachanger been collecting news articles here: https://kdwarn.net/programming/links#AI%20Sucks