"On the acceptance of GenAI"
https://smallsheds.garden/blog/2026/on-the-acceptance-of-genai/

Uncategorized · 36 Posts · 25 Posters
This topic has been deleted. Only users with topic management privileges can see it.
  • buckfiftyseven@mastodon.social wrote:

    @ai6yr @tante it seems pretty similar, doesn't it? Taking what you want from a website, regardless of the host's intentions?

    gbsills@social.vivaldi.net wrote (#16):

    @buckfiftyseven @ai6yr @tante Actually, sites that don't want you to view them with an ad blocker can easily block you from doing so.

  • In reply to the original post by tante@tldr.nettime.org:

    netraven@hear-me.social wrote (#17):

      @tante I don't use GenAI, I just try to find new and creative ways to break it.

  • In reply to the original post by tante@tldr.nettime.org:

    crankylinuxuser@infosec.exchange wrote (#18):

        @tante

        None of these are true if you run your own LLMs on your own hardware, using FLOSS models.

        But the #MastodonHOA has deemed all AI to be abhorrent as a blanket decision.

        And frankly, if you exist in a capitalist society and you're not an owner, there is a 100% chance you are exploited. The capitalist system requires it.

  • In reply to crankylinuxuser@infosec.exchange (#18):

    tante@tldr.nettime.org wrote (#19):

    @crankylinuxuser FLOSS models (which are really only freeware) tick most of those boxes: trained on stolen data, massaged by people in global-majority countries, trained in environmentally harmful data centers, outsourcing my skills to a freeware product a company dumped on me, using a tool that is imbued with and trained for how big tech wants to see the world, and effort that could have gone to something meaningful. So yeah, nope.

  • In reply to the original post by tante@tldr.nettime.org:

    kfort@social.sciences.re wrote (#20):

            @tante I looove this! thanks!

  • In reply to tante@tldr.nettime.org (#19):

    crankylinuxuser@infosec.exchange wrote (#21):

    @tante

    "Trained on stolen data": it's at best a copyright violation. And I view things like Anna's Archive and Libgen as internationally renowned public libraries.

    "Massaged by people in global majority countries": yes, people work under capitalism. And guess what... you're exploited too.

    "Trained in environmentally harmful data centers": this assumes that training is always needed, and it's not. You can train once and run X times. Again, you're stretching to make local LLMs look horrible.

    And really, the rest of these are poor excuses. I won't use poop smear (Anthropic), or OpenAI, or other SaaS token companies. I run locally, and my setup does not have those things you claim.

    Except for the copyright issue. But again, I don't have that much respect for current US copyright.

  • crazyeddie@mastodon.social wrote:

                @tante Bad framing.

                There's no such thing as GenAI.

                That's some lofty goal they're supposedly going to reach by investing the entire world economy into it.

    orange_lux@eldritch.cafe wrote (#22):

                @crazyeddie @tante GenAI as in Generative AI, not Artificial General Intelligence (AGI).

  • In reply to the original post by tante@tldr.nettime.org:

    lemgandi@mastodon.social wrote (#23):

                  @tante

    [x] I accept that using this tool will make me measurably stupider

  • In reply to the original post by tante@tldr.nettime.org:

    mitsosimo@mastodon.social wrote (#24):

    @tante There should be an "I accept that all of my data will be used against me at some point" option.

  • In reply to the original post by tante@tldr.nettime.org:

    feld@friedcheese.us wrote (#25):

                      @tante even Claude would have added a Select All option
  • In reply to crankylinuxuser@infosec.exchange (#21):

    epic_null@infosec.exchange wrote (#26):

    @crankylinuxuser @tante

    "It's at best a copyright violation"

    This may be true for published and public data... but that's not the only data that goes into these things. Any data that comes from breaches, users' private cameras, and anything else stored with an expectation of privacy is much worse than a copyright violation.

  • In reply to the original post by tante@tldr.nettime.org:

    synthyx@social.vivaldi.net wrote (#27):

    @tante

    AI in the modern age is not going away. You shouldn't be shamed for using it, and at this point you should expect it.

    Even when the bubble goes pop we are still going to have AI in some form. AI is a useful tool for many people, and it's great when you self-host it.

    Also, most of what AI "steals" isn't really stolen if it's free and public on the internet.

    The only thing I can really agree with is the environmental impact. At this point, though, we muck up the environment so much with plastics, overuse of gas, mass deforestation, etc. that I don't know how big an impact it really has. Ideally we would use green forms of energy for everything, and new tech innovation would reduce the absurd amounts of power required to run these supercomputers. Hopefully the ARM architecture is that light in the dark.

  • In reply to tante@tldr.nettime.org (#19):

    qgustavor@urusai.social wrote (#28):

    @tante @crankylinuxuser I guess some people have zero idea of how AI model training works. They have the impression that "if I run this HuggingFace model on my hardware, it's ethical" but seem to think those models got uploaded there out of thin air, without any implications.

  • In reply to the original post by tante@tldr.nettime.org:

    komali_2@mastodon.social wrote (#29):

                              @tante a lot of this applies to basically all participation in capitalism.

  • In reply to epic_null@infosec.exchange (#26):

    crankylinuxuser@infosec.exchange wrote (#30):

    @Epic_Null @tante

    And yes, that is a big issue with the SaaS token vendors. Claude, OpenAI, MS, and the rest do use whatever user data they can get. I am not defending their horrific behavior.

    I'm talking about locally running Qwen, or Deepseek, or other FLOSS models.

    The local LLM running on my machine only sees and uses data I provide. And a Ctrl-C in the relevant console window kills the LLM.

    What folks do not realize is that this is #Leibniz's ultimate dream: being able to do #calculus with words, sentences, and more. He tried to do single word-vectors, but even that had to wait for Word2Vec in 2013.
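    [Editor's note: the word-vector "calculus" mentioned above can be sketched with a toy example. The vectors and dimensions below are hand-made for illustration only, not trained word2vec embeddings.]

    ```python
    # Word2vec-style "calculus with words": analogies as vector arithmetic,
    # e.g. king - man + woman should land near queen.
    # These 3-dimensional vectors are invented for the demo; real embeddings
    # are learned from text and have hundreds of dimensions.
    import math

    TOY_VECTORS = {
        # invented dimensions: (royalty, maleness, femaleness)
        "king":  [0.9, 0.8, 0.1],
        "queen": [0.9, 0.1, 0.8],
        "man":   [0.1, 0.9, 0.1],
        "woman": [0.1, 0.1, 0.9],
        "apple": [0.05, 0.1, 0.1],
        "road":  [0.1, 0.2, 0.2],
    }

    def cosine(a, b):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    def analogy(a, b, c):
        """Return the word whose vector is closest to a - b + c."""
        target = [x - y + z for x, y, z in
                  zip(TOY_VECTORS[a], TOY_VECTORS[b], TOY_VECTORS[c])]
        candidates = set(TOY_VECTORS) - {a, b, c}
        return max(candidates, key=lambda w: cosine(target, TOY_VECTORS[w]))

    print(analogy("king", "man", "woman"))  # queen
    ```

    The same nearest-neighbor analogy query is what libraries like Gensim expose for real trained embeddings.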

  • In reply to the original post by tante@tldr.nettime.org:

    lumi@snug.moe wrote (#31):

    @tante I would also add: "I accept that the goal is stripping humanity from everything."

  • In reply to epic_null@infosec.exchange (#26):

    komali_2@mastodon.social wrote (#32):

                                    @Epic_Null @crankylinuxuser @tante

    Data wants to be free. This argument simply doesn't work for those of us who have always been open-data, anti-copyright.

  • In reply to qgustavor@urusai.social (#28):

    qgustavor@urusai.social wrote (#33):

    @tante @crankylinuxuser Example: I use Whisper for audio transcription (mostly for accessibility reasons; it's harder for me to understand audio messages than text messages), so I know that using it, even self-hosted, ticks most of the boxes.

    I'm sure it was trained on stolen data (it constantly returns things like "subtitles by example.com"), I'm sure training it hurt the environment, and I'm sure the company behind it (OpenAI) does not have a viable business model (though, to be fair, I don't care about that; governments also don't have a viable business model, and they don't have to).

    But since I'm using it for accessibility and there are no alternatives, we need to consider the trade-offs and promote research that reduces those issues ethically. Saying "bUt I Am RuNNinG iT LoCAllY so ITs eThICAl" is dumb.

  • In reply to the original post by tante@tldr.nettime.org:

    heroicthehobbyist@mastodon.gamedev.place wrote (#34):

    @tante Of the many ills of generative AI, the thing I find most despicable is that it makes me poorer. Yes, it makes me poorer financially and materially, of course, but it also makes me poorer mentally and spiritually. It robs you of your sense of time too, in that you think you don't have "enough" time to do something, so you make the AI do it, robbing yourself of an opportunity to learn something (even if it's super minute).

  • In reply to qgustavor@urusai.social (#33):

    qgustavor@urusai.social wrote (#35):

    @tante @crankylinuxuser So, my point here: sure, current AI is truly unethical, and sadly lots of people want to be blind to its issues, but not all of it is bad.

    I can't just say to an illiterate person "can you write for me instead of speaking?" because they just can't do that. I talk with lots of illiterate people; I'm in the construction business, and lots of workers only know numbers and how to write their own name. So Whisper, despite not being ethical, is what I use.

    But are there ethical alternatives? So far I haven't found anything as reliable as Whisper, but there's the Common Voice dataset, which is free and could be used to solve the issue of being trained on stolen data (though not the environmental issues).
