
No, opposing LLMs isn't "purity culture."

Uncategorized
148 Posts 51 Posters 233 Views
> subterfugue@sfba.social: @pip @Li @xgranade No one but you wrote that in this exchange.

#105 li@tech.lgbt:
@subterfugue @pip @xgranade It reads more like they're saying that "not using AI" doesn't do much to actually stop the proliferation of AI, and AI companies don't need you to use it to push it everywhere.
#106 pip@infosec.exchange:
@Li @subterfugue @xgranade Agreed, but that is really problematic. It discourages people from taking action to break this horrendous system, and puts more people at risk of things like AI psychosis.

Aaron doesn't understand the danger we're in.
#107 li@tech.lgbt:
@pip @subterfugue @xgranade Even if no one buys AI, surveillance states will want to ask a chatbot to "cross reference these pictures from this protest with social media (or, fuck, 'age verification' records)" to get an answer (which they don't care if it's wrong; they just want an excuse to hurt people).

And tbh the way to fight that is more involved than *just* not using AI. I don't really know what you would do to stop that. I'm not dumb enough to think any "legislation" will do anything (its only purpose is for the same people pushing AI to legitimize violence and control towards other people, much like the AI surveillance, and it's written and "enforced" by those creating that in the first place). So idk, revolution I guess? Hacking the system and tearing it apart? Bleh.
#108 pip@infosec.exchange:
@Li @subterfugue @xgranade It's a hard problem to solve but we have to fucking do it.

Yes, major political changes are needed. But this kind of thing has been done before, so we have no reason to think it's impossible. We closed the hole in the ozone layer, after all. And the knowledge to create mustard gas exists, yet it is not some daily horror that threatens us.

In the meantime, the ethical thing to do is to avoid and reject AI everywhere, while pushing for those political changes.
#109 li@tech.lgbt:
@pip @subterfugue @xgranade I mean, I don't disagree that you shouldn't use AI, mainly because using it is giving information straight to fascists, and also they're already manipulating the models to give information they approve of (e.g. Grok told to be overly transphobic and to hate immigrants). Avoid AI because it gives corporations an extreme amount of power over you and the information you receive; sure. They're also going to call the cops on you if you say the wrong thing; best to avoid using it. All pretty good reasons, I'd say. But I don't think any of us disagree on that point, rather just on "not using it won't fix anything, that's capitalist bullshit."
#110 xgranade@wandering.shop:
@brianowen Thank goodness there's a man around to explain my own position to me.
> mikalai@privacysafe.social: @xgranade What if instead of "opposing use of LLMs" we say what we mean: "opposing use of tech you don't control", or something like this. Can you guys find a better way to focus attention on the bad power dynamic at hand?

#111 xgranade@wandering.shop:
@mikalai I said what I meant to say. I guarantee that I actually intend to oppose LLMs *specifically* and not just because I don't control them.
> mikalai@privacysafe.social: @ada @xgranade Questioning your own beliefs, and correcting them based on evidence, is integrity.
>
> Dying for Coca-Cola vs Pepsi is being a ... fan, not integrity in ideas.

#112 xgranade@wandering.shop:
@mikalai @ada Guy who has memorized the logical fallacies page on Wikipedia has entered the chat.

Opposition to AI isn't a Coke v. Pepsi thing, for fuck's sake.
> xgranade@wandering.shop: I wouldn't be saying all this if it was just Doctorow; I'm even fine disagreeing with people I deeply respect. But he's not the only one saying shit like this, and I think it's worth calling out the broader rhetorical point.

#113 xgranade@wandering.shop:
Addendum: since this has now rather dramatically escaped containment, I want to quickly note that if you reply to this thread in a completely embarrassing way, I reserve the right to be at least a bit rude in my responses.
> xgranade@wandering.shop (original post): No, opposing LLMs isn't "purity culture." I've seen this now from quite a few different people, and I disagree vehemently. It is good, actually, to have moral principles and hold to them, even when people with more money than you find said principles annoying.

#114 jcolag@mastodon.social:
@xgranade You have to admit, though, that it's pretty impressive that "no thanks" is purity culture, and not "we need to keep sacrificing transistors and coal to manifest the libertarian god, and everybody who disagrees won't and shouldn't survive."
> xgranade@wandering.shop: @codinghorror Anyway, this isn't the first time you've replied to me to make the argument that LLMs are just another kind of tool. I suspect we won't see eye to eye on that, especially as my work has been abused to make LLM products.
>
> I hope we can agree, though, that my objection *even though you disagree with it* is principled and neither knee-jerk nor purity culture.

#115 codinghorror@infosec.exchange:
@xgranade LLMs told me something critical about my health that no healthcare professional -- and I have a whole team working on me, because I'm bonkers -- ever did. If you want to ask, ask; I can provide very detailed citations and proof.
> xgranade@wandering.shop: @codinghorror Sure, but we're not talking about "which tool is best for driving a nail that I own into a wall that I own"; we're talking about "is it ethical to use a technology built on fascist ideology and stolen work, that carries unconscionable environmental costs, and that's used to disrupt labor movements, to perform a task that that technology is fundamentally unsuited to?"
>
> It's quite fair to have a very firm "no" by way of answer to the second question.

#116 codinghorror@infosec.exchange:
@xgranade Fair; I want to be alive, see earlier response.
#117 xgranade@wandering.shop:
@codinghorror I'm not a doctor (well, not that *kind* of doctor, anyway), so I'll absolutely admit that I'm not the right person to evaluate those citations. I'll say that, from a pretty damned nontrivial degree of expertise with machine learning, I would find it extremely surprising if *on average* text recombination without any underlying semantic model yielded useful advice more commonly than outright dangerous advice.
#118 xgranade@wandering.shop:
@codinghorror Like, nothing about LLMs and the theory behind them prevents anyone from getting lucky — and I'm glad that you got lucky instead of the much more common and probable case. But that doesn't mean that they're anything other than outright terrifyingly dangerous in a medical context more generally.
#119 codinghorror@infosec.exchange:
@xgranade Email me if you want to know. I have a rare set of DNA in some cases, as it turns out.
> theorangetheme@en.osm.town: @xgranade "You don't want to use the lie machine powered by mulching puppies? What are you, some kind of purist?"

#120 elithebearded@fed.qaz.red:
@theorangetheme @xgranade Do you eat chicken? Do you know how the chicken industry mulches all the rooster chicks?

Not to defend LLM use, but I am starting to get tired of the PETA-esque rhetoric. Do these really mulch animals? No. Do they have negative impacts in other ways? Yes.

Is it that hard to focus on real impacts?
#121 codinghorror@infosec.exchange:
@xgranade People should absolutely be taught all the pros and cons, but I really dislike absolutism and zealotry. It's not useful, it's not practical, it accomplishes nothing (except in the very narrow cases of civil rights or human dignity). If I wanted more ones and zeroes, I'd own more computers.
#122 codinghorror@infosec.exchange:
@xgranade And as I've said before, if you want to be angry, be angry at cryptocurrency, which is gambling, grifters, and human trafficking to the bone. It's horrendous.
#123 eschaton@mastodon.social:
@codinghorror @xgranade The push for LLM inevitability is all the same people as cryptocurrency. That should tell you something about LLMs. It certainly tells me something.
#124 codinghorror@infosec.exchange:
@eschaton @xgranade Not true, as I (for one, and I'm not alone) am a data point disproving this.