Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".

163 Posts 63 Posters 50 Views
  • lrhodes@merveilles.town
    #115

    Quoting:
      "Artifacts and technologies have certain logics built into their structure that do require certain arrangements around them or that bring forward certain arrangements… Understanding this you cannot take any technology and 'make it good.'"

    I'd actually take this a step further and say that technologies ARE social arrangements.
  • simonzerafa@infosec.exchange
    #116

    raymaccarthy@mastodon.ie wrote:
      @tante @simonzerafa
      A brilliant person isn't right about everything.
      It's only a criticism of one view/idea.

    @raymaccarthy @tante@tldr.nettime.org

    Well, you would think that should be obvious. Another example of the lack of critical thinking, or is this just "common sense" being less than common?

    If anyone else has any objections to my earlier well-reasoned postings about LLMs, please do shout so you can also be blocked.
  • dhd6@jasette.facil.services
    #117

    pluralistic@mamot.fr wrote:
      @dhd6 @tante @simonzerafa
      No. It's like killing a mosquito with a bug zapper whose history includes thousands of years of metallurgy, hundreds of years of electrical engineering, and decades of plastics manufacture.
      There is literally no contemporary manufactured good that doesn't sit atop a vast mountain of extraneous (to that purpose) labor, energy expenditure and capital.

    @pluralistic @tante @simonzerafa As always, yes and no. A bug zapper is designed to zap bugs; it is a simple mechanism that does that one thing, and does it well. An LLM is designed to read text and generate more text.

    That we have decided that the best way to do NLP is to use massively overparameterized word predictors that we have trained using RL to respond to prompts, rather than just, like, doing NLP, is just crazy from an engineering standpoint.

    Rube Goldberg is spinning in his grave!
  • prinlu@0x.trans.fail
    #118

    @FediThing @pluralistic @tante i feel in a similar way: as big tech has taken the notion of AI and LLMs as a cue/excuse to mount a global campaign of public manipulation and massive investment in a speculative project, pumps gazillions of $ into it and convinces everyone it's inevitable tech to be put in a bag of potato chips, the backlash is then that anything that bears the name of AI and LLM is a poisonous plague, and people are unfollowing anyone who's touched it in any way or talks about it in any other way than "it's fascist tech, i'm putting a filter in my feed!" (while it IS fascist tech because it's in the hands of fascists).

    in my view the problem seems not what LLMs are (what kind of tech), but how they are used and what they extract from the planet when they are used by big tech in this monstrous, harmful way. of course there's a big blurred line and tech can't be separated from the political, but... AI is not intelligent (Big Tech wants you to believe that), and LLMs are not capable of intelligence and learning (Big Tech wants you to believe that).

    so i feel like a big chunk of anger and hate should really be directed at techno-oligarchs and only partially, and much more critically, at the actual algorithms in play. it's not LLMs that are harming the planet, but rather the extraction, these companies who are absolutely evil and are doing whatever the hell they want, unchecked, unregulated.

    or as varoufakis said to tim nguyen: "we don't want to get rid of your tech or company (google). we want to socialize your company in order to use it more productively" and, if i may add, safely and beneficially for everyone, not just a few.
  • hopeless@mas.to
    #119

    jeffgrigg@mastodon.social wrote:
      @hopeless @tante
      Don't mistake a hugely popular fad or bubble for "reality." And if you don't believe that "[nearly] everybody believes" can be quite detached from punishingly harsh reality, then you need to read about the "Tulip Mania" craze and bubble:
      Tulip mania - Wikipedia (en.wikipedia.org)

    @JeffGrigg @tante

    I see. Well, thanks for wagging your finger at me, and mansplaining about tulip mania as if it's not common knowledge. I hope it has brightened your day.

    Now I must get back to see if Antigravity / Gemini 3.1 has finished the stuff I asked it to do, that I definitely could not and would not be able to do myself.
  • prinlu@0x.trans.fail
    #120

    shiri@foggyminds.com wrote:
      @FediThing I think the problem in the discourse is the overwhelming number of people experiencing anti-AI rage.

      On the topic of LLMs, the two loudest groups by a wide margin are:
      1. People who refuse to see any nuance or detail in the topic, who cannot be appeased by anything other than the complete and total end of all machine learning technologies
      2. AI tech bros who think they're only moments away from awakening their own personal machine god

      I like to think I'm in the same camp as @pluralistic, that there's plenty of valid use for the technology and the problems aren't intrinsic to the technology but purely in how it's abused.

      But when those two groups dominate the discussions, it means that people can't even conceive that we might be talking about something slightly different than what they're thinking.

      Cory in the beginning explicitly said they were using a local offline LLM to check their punctuation... and all of this hate you see right here erupted. If you read through the other comment threads, people are barely even reading his responses before lumping more hate on him.

      And if someone as great with language as Cory can't put it in a way that won't get this response... I think that says a lot.

      @tante

    @shiri fully agree!

    @pluralistic @tante @FediThing
  • pluralistic@mamot.fr
    #121

    In reply to dhd6@jasette.facil.services (#117):

    @dhd6 @tante @simonzerafa

    Remember when Usenet's backbone cabal worried about someone in Congress discovering that the giant, packet-switched research network that had been constructed at enormous public expense was being used for idle chit chat?

    The nature of general purpose technologies is that they will be used for lots of purposes.
  • reflex@retrogaming.social
    #122

    @kel @pluralistic @simonzerafa @tante Not only that, but popularizing LLMs while running them all locally is less efficient than running them in the cloud. It's false that it minimizes harm: you are still consuming power, and more of it, since the chip in your computer isn't nearly as efficient as the ones the providers use.

    Plus it's all stolen and biased fashware.
  • clintruin@mastodon.social
    #123

    pluralistic@mamot.fr wrote:
      @simonzerafa @tante
      What is the incremental environmental damage created by running an existing LLM locally on your own laptop?
      As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.

    @pluralistic @simonzerafa @tante
    "What is the incremental environmental damage created by running an existing LLM locally on your own laptop?"

    I dunno. But how about a couple of million people?

    The person who coined the term 'enshittification' defends LLMs. Just...wow. We truly are fucked.

    Let's all do what Cory does!
    ☠️
    Meanwhile:
    https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
    #doomed #ClimateChange
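The local setup Cory describes (an Ollama-served Llama-family model checking punctuation offline) can be sketched roughly as follows. This is a hedged illustration, not his actual script: the model name, prompt wording, and helper names are assumptions; only the endpoint (`http://localhost:11434/api/generate`) and JSON fields (`model`, `prompt`, `stream`) follow Ollama's documented local HTTP API.

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(text: str, model: str = "llama2") -> dict:
    """Build the JSON payload for a one-shot, non-streaming proofreading query."""
    prompt = (
        "List any punctuation errors or typos in the following text. "
        "If there are none, reply 'OK'.\n\n" + text
    )
    return {"model": model, "prompt": prompt, "stream": False}


def check_punctuation(text: str, model: str = "llama2") -> str:
    """Send the text to the locally running model and return its reply."""
    payload = json.dumps(build_request(text, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Running `check_punctuation("Its a fine day, isnt it?")` requires `ollama serve` with the model pulled; per the post above, expect roughly 50% false positives, so the output is a list of suggestions to review, not fixes to apply blindly.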
  • pkw@snac.d34d.net
    #124

    tante@tldr.nettime.org wrote:
      Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions and delegitimizes important political actions we need to take in order to build a better cyberphysical world.

      Acting ethically in an imperfect world (Smashing Frames, tante.cc):
      "Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]"

    Oh boo! Boo, CD!
    It's a good thing "no gods, no masters" is my mantra.

    Also, yes!

    The problem isn't the use of them as much as the apologetics.
  • dhd6@jasette.facil.services
    #125

    In reply to pluralistic@mamot.fr (#121):

    @pluralistic @tante @simonzerafa indeed, I guess the question is whether the scale of the *ahem* waste, fraud and abuse *ahem* of resources that LLMs seem to imply, even in benign use cases like yours, is out of line with historical precedent or not.

    Am I an old man yelling at a cloud?

    No, it's the children who are wrong!
  • pluralistic@mamot.fr
    #126

    In reply to clintruin@mastodon.social (#123):

    @clintruin @simonzerafa @tante

    Which "couple million people" suffer harm when I run a model on my laptop?
  • pluralistic@mamot.fr
    #127

    In reply to dhd6@jasette.facil.services (#125):

    @dhd6 @tante @simonzerafa

    Rockets were literally perfected in Nazi slave labor camps.
  • clintruin@mastodon.social
    #128

    In reply to pluralistic@mamot.fr (#126):

    @pluralistic @simonzerafa @tante
    Missed the point, sir.

    When one person does it...no big deal.

    When a couple of million people do it...well, see the MIT article above.
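The disagreement here is ultimately scaling arithmetic: a per-query cost that rounds to nothing for one laptop multiplied by millions of users. A back-of-envelope sketch, in which every number (extra laptop draw, seconds per check, checks per day, user count) is a made-up assumption for illustration, not a measurement:

```python
# Back-of-envelope: one user's local-inference energy vs. the same habit at scale.
# All constants below are hypothetical assumptions chosen for illustration only.

EXTRA_WATTS = 40.0        # assumed extra laptop power draw while the model runs
SECONDS_PER_CHECK = 30.0  # assumed wall-clock time per punctuation check
CHECKS_PER_DAY = 20       # assumed daily checks by one user
USERS = 2_000_000         # the thread's "couple of million people"


def kwh_per_user_per_day() -> float:
    """Energy for one user's daily checks, converted from joules to kWh."""
    joules = EXTRA_WATTS * SECONDS_PER_CHECK * CHECKS_PER_DAY
    return joules / 3_600_000  # 1 kWh = 3.6e6 J


def kwh_at_scale() -> float:
    """The same habit multiplied across all assumed users."""
    return kwh_per_user_per_day() * USERS


print(f"per user: {kwh_per_user_per_day():.4f} kWh/day")  # ~0.0067 kWh/day
print(f"at scale: {kwh_at_scale():,.0f} kWh/day")         # ~13,333 kWh/day
```

Under these invented numbers, one user's daily cost is negligible while the aggregate is a nontrivial but bounded figure; both sides of the thread are effectively arguing over which end of this multiplication is the relevant one.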
  • jorismeys@mstdn.social
    #129

    pluralistic@mamot.fr wrote:
      @tante Dunno where you got the idea that I have a "libertarian" background. I was raised by Trotskyists, am a member of the DSA, am advising and have endorsed Avi Lewis, and joined the UK Greens to back Polanski.

    @pluralistic
    Fair enough, but that's not the core of the argument @tante made. He had the same complaint for starters (your argument was heavily drenched in "you ppl are purists"), but he also makes the valid argument that technology isn't neutral in itself. Open weights based on intellectual theft and forced labor are still a problem. Until we have a discussion on how the weights come to fruition, LLMs are objectively problematic from an ethical view. That has nothing to do with purism.
  • reflex@retrogaming.social
    #130

    @mastodonmigration @shiri @pluralistic @tante The only ethical use of an LLM would be one where the training dataset was ethically acquired, the power use was minimized to the level of other methods of providing the same benefits, and the "benefits" were actually measurable and accurate.

    None of those is true today, and so far as I know there is little to no path to them.
  • clintruin@mastodon.social
    #131

    @pluralistic @simonzerafa @tante
    Subhead quote from the article:
    "The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next."
  • clintruin@mastodon.social
    #132

    @pluralistic @simonzerafa @tante
    But hey, you do you, Cory.
    I'm nobody...you're Cory Doctorow.
    Let's all do what Cory does...
  • pluralistic@mamot.fr
    #133

    In reply to clintruin@mastodon.social (#131):

    @clintruin @simonzerafa @tante

    You are laboring under a misapprehension.

    I will reiterate my question, with all caps for emphasis.

    Which "couple million people" suffer harm when I run a model ON MY LAPTOP?
  • pluralistic@mamot.fr
    #134

    In reply to clintruin@mastodon.social (#132):

    @clintruin @simonzerafa @tante

    Well, you could "do what Cory does" by familiarizing yourself with the conduct that you are criticizing before engaging in ad hominem.

    To be fair, that's not unique to me, but people who fail to rise to that standard are doing themselves and others no good.