No, opposing LLMs isn't "purity culture."

148 Posts 51 Posters 233 Views
• matt@toot.cafe (earlier post):

    @dalias Doctorow seems to feel that this is what he would be doing; he finds the LLM useful. And some programmers I follow and respect feel that way about their LLM-based coding agents (using the big rented models, not a local one like Doctorow), that they'd be denying themselves something useful and putting themselves at a disadvantage for moral reasons.

matt@toot.cafe (#88):

    @dalias To be clear, I'm not convinced by the proponents of LLM-based coding agents. I find the idea of having a statistical text generator pump out volumes of code from ambiguous natural language distasteful. And I sure wouldn't want that approach to be used for something like musl, where you clearly work on it deliberately, carefully, with no line of code wasted.

• craignicol@glasgow.social (earlier post):

      @xgranade @onepict *especially* when people with more money than you find said principles annoying.

craignicol@glasgow.social (#89):

      @xgranade @onepict see also https://wandering.shop/@susankayequinn/116104755934120567

• In reply to cthos@mastodon.cthos.dev:

        @xgranade My dude is torching his own credibility to use an LLM to check for typos.

        TYPOS.

mikalai@privacysafe.social (#90):

@cthos @xgranade
1. When hands type on autopilot, you will get typos.
2. Have you seen the thickness of Cory's glasses? Can you imagine how his field of vision is bent? Shouldn't such a person use some help from computers?

• In reply to komali_2@mastodon.social:

          @cthos I think that's less an indictment of Doctorow and more one of the never-LLM crowd, who have clearly become dogmatic Puritans

mikalai@privacysafe.social (#91):

@komali_2 @cthos
Is it possible that this pattern of "puritanism" is what's counterproductive, here and in other places?

• In reply to ada@zoner.work:

            @xgranade@wandering.shop opposing LLMs is an integrity culture, not purity.

mikalai@privacysafe.social (#92):

@ada @xgranade
Questioning your own beliefs, and correcting them based on evidence, is integrity.

Dying for Coca-Cola vs. Pepsi is being a ... fan, not integrity in ideas.

• xgranade@wandering.shop (original post):

              No, opposing LLMs isn't "purity culture." I've seen this now from quite a few different people, and I disagree vehemently. It is good, actually, to have moral principles and hold to them, even when people with more money than you find said principles annoying.

mikalai@privacysafe.social (#93):

@xgranade
What if, instead of "opposing use of LLMs," we said what we mean: "opposing use of tech you don't control," or something like that?
Can you guys find a better way to focus attention on the bad power dynamic at hand?

• In reply to mikalai@privacysafe.social (#93):

jeffgrigg@mastodon.social (#94):

@mikalai @xgranade

"But I don't control it!" is not a very compelling issue.

And it's not the most important issue for those who oppose Generative AI.

There are a number of compelling issues with Generative AI. And many of them, on their own, may rationally be enough to swear off of it, or even to ban it.

Insisting that everyone limit the argument to one relatively weak point is a logical fallacy.

• In reply to srazkvt@tech.lgbt:

@komali_2 @xgranade the important part here is that by using an LLM you depend on fascists who are working hard to make your work less valuable

komali_2@mastodon.social (#95):

@SRAZKVT @xgranade I'm not quite sure I understand what you mean by "my work" or "valuable," and that's not me trolling; I often have trouble understanding things that are obvious to others.

But what you say makes me think of the means of production, which are quite fully seized by capitalists. My thinking is that it's quite funny to blow up their investments by e.g. disseminating distilled models (DeepSeek) or FOSS versions of software they try to sell.

• In reply to li@tech.lgbt:

@pip @subterfugue @xgranade y'know... I don't think OP saying that they're using LLMs to harm people and scamming the public is a pro-AI stance, but that's just a guess

pip@infosec.exchange (#96):

                    @Li @subterfugue @xgranade OP is literally insisting that it doesn't matter if you use AI, as long as you're not using it to generate code. Yep, I would call that pro-AI.

• In reply to jeffgrigg@mastodon.social (#94):

mikalai@privacysafe.social (#97):

@JeffGrigg @xgranade
Well, we collectively took our eyes off the ball. Not controlling tech in a technological world is the root of the problem.
Without the already existing reliance on "tech you don't control" (+ some policy = big tech), there would be no giants forcing whatever-current-nonsense on us.
Let us focus on the power play. Without underlying control, those players wouldn't be in a position to tell the whole world what to do.

• In reply to jeffgrigg@mastodon.social (#94):

mikalai@privacysafe.social (#98):

@JeffGrigg @xgranade
If you controlled where the datacenter is built, you wouldn't do the harm that people are against.
If you controlled ...
Without control, we'll be playing an infinite game of whack-a-mole.

• weirdwriter@caneandable.social (#99):

@violetmadder @Mimesatwork @xgranade What really annoyed me, apart from his justification, was him using the term "NeoLiberal," because he knew that would raise some hackles.

• In reply to xgranade@wandering.shop (original post):

omnipotens@linuxrocks.online (#100):

@xgranade The issue I have is being dictated to on what is right and moral by the large corporations who make the LLMs, when most of those companies are not moral themselves.

• In reply to pip@infosec.exchange (#96):

subterfugue@sfba.social (#101):

                              @pip @Li @xgranade No one but you wrote that in this exchange.

• In reply to davey_cakes@mastodon.ie:

@Sickosocial @xgranade "Large Language Models": ChatGPT and stuff like that.

                                People (including me) like to differentiate these from the broader category of AI, because people do good stuff with AI tools without the externalities of LLMs.

sickosocial@mastodon.social (#102):

                                @davey_cakes Oh, thank you so much for your answer!

• In reply to mikalai@privacysafe.social (#90):

cthos@mastodon.cthos.dev (#103):

@mikalai @xgranade you do know that spelling and grammar checkers without these terrible externalities exist, right?

• In reply to xgranade@wandering.shop (original post):

gekitsu@toot.cat (#104):

@xgranade this entire line of trying to discredit principledness reeks of that one study that concluded people on the autism spectrum stick to their principles too much (when the test was actually about how much subjects stuck to a principle when nobody was there to witness them going against it).

• In reply to subterfugue@sfba.social (#101):

li@tech.lgbt (#105):

@subterfugue @pip @xgranade it reads more like they're saying that "not using AI" doesn't do much to actually stop the proliferation of AI, and AI companies don't need you to use it to push it everywhere...

• In reply to li@tech.lgbt (#105):

pip@infosec.exchange (#106):

                                        @Li @subterfugue @xgranade Agreed, but that is really problematic. It discourages people from taking action to break this horrendous system, and puts more people at risk of things like AI psychosis.

                                        Aaron doesn't understand the danger we're in.

• In reply to pip@infosec.exchange (#106):

li@tech.lgbt (#107):

@pip @subterfugue @xgranade even if no one buys AI, surveillance states will want to ask a chatbot to "cross-reference these pictures from this protest with social media (or, fuck, "age verification" records)" to get an answer (which they don't care if it's wrong; they just want an excuse to hurt people).

and tbh the way to fight that is more involved than *just* not using AI... I don't really know what you would do to stop that. I'm not dumb enough to think any "legislation" will do anything (its only purpose is for the same people pushing AI to legitimize violence and control toward other people, much like the AI surveillance, and it's written and "enforced" by those creating that in the first place), so idk, revolution I guess? Hacking the system and tearing it apart? bleh
