CIRCLE WITH A DOT

I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

Uncategorized · 190 Posts · 72 Posters · 243 Views

This topic has been deleted. Only users with topic management privileges can see it.
  • glyph@mastodon.social

    2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good

  • flowrider@toot.io · #157

    @glyph I heard nobody ever got fired for buying IBM.

  • glyph@mastodon.social

      The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. Which is deeply ironic because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/

  • cy@fedicy.us.to · #158
      I think you're misinterpreting what @pluralistic@mamot.fr means by "normal." He says:
      "Its uses and abuses are normal. That doesn't make it good, but it does make it unexceptional."
      Radium paint was normal. It was also terrible. Poisoning workers and covering it up is not unprecedented, even if you do it with radiation. It's not some new weapon we have no ways of dealing with, just old, tired abuses not getting repossessed and shut down as they must be.

      What he means by "critic psychosis" is that every time you shout "AI is an incredibly powerful technology that can control people's brains and is more powerful than any brain control ever before!" it really starts to sound like you're promoting AI. Hyperfocusing on the dangers makes AI sound more badass than pathetic.

      You're talking to these people as if they're not trying to ruin you in every way, as if they have a shred of human decency and don't actually want to cause as much profitable chaos and mayhem as possible. It's like warning the Boogaloo Boys that their actions might cause civil war, as if that wasn't already what they're trying to do.

      Also the difference with Radium paint is it only maims and kills people, so rich fucks aren't interested. It reduces the amount and the utility of available slaves for their pleasure. Calling forth the danger of the mythical brain blasting AI on the other hand is music to their ears.
  • glyph@mastodon.social

        Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.

        But I'm still waiting.

  • johannab@cosocial.ca · #159

        @glyph this thread needs to be an essay, and then a research hypothesis.

        I very much feel like I’m watching the last 35 years of my ever-enshittifying social network exposure, sped up 10x and replayed.

        In 1991 I remember having the flash of insight - without the life experience to really go into it deeply then - that the way nascent social network tech constrained and shaped interaction was going to force a mass cognitive adaptation for which we were not ready.

  • johannab@cosocial.ca · #160

          @glyph

          In 2021, we were still suffering the consequences of that, and still not sufficiently adapted to have avoided whatever the fuck is now driving our geopolitical dystopia engine.

          And then suddenly our devolved capacity for social cognition had to deal with the fact that dealing with any humans at any distance far enough away that you couldn’t *lick* them came with no assurance that there even was a human there.

  • bluewinds@tech.lgbt

            @delta_vee @kirakira @glyph Leaded gasoline.

  • jackeric@beige.party · #161

            @bluewinds @delta_vee @kirakira @glyph I don't think the analogies are good because asbestos is a fantastic insulator, lead is a really helpful additive for petrol and makes fantastic pigments and is really convenient for piping... and the hidden side-effects are the problem. Whereas LLMs _don't_ deliver that primary benefit

            LLMs are more like... cheap laminate flooring, produced with wood pulp harvested unsustainably from old-growth forests and made by grossly exploited factory workers overseas... superficially convenient when remodelling your kitchen and rapidly ubiquitous but also quite unsatisfying and a right faff to work around once it's established

  • jackeric@beige.party · #162

              @bluewinds @delta_vee @kirakira @glyph this post is brought to you by our kitchen floor

  • delta_vee@mstdn.ca · #163

    @jackeric @bluewinds @kirakira @glyph Cheap laminate floors aren't a cognitohazard though (unless you're in interior design 😉)

  • glyph@mastodon.social · #164

                  @jackeric @bluewinds @delta_vee @kirakira heh. I am not sure I 100% agree with your framing but all the analogies fall short (after all I do not think we have GOOD evidence that LLMs do any of these things, just hints) and this is an interesting contribution to the pile. but I definitely was thinking "wow it sounds like jack is thinking about laminate flooring really hard" the whole time I was reading it

  • glyph@mastodon.social

                    If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.

  • dec23k@mastodon.ie · #165

                    @glyph
                    Here's an industrial accident that's easy to miss:

                    A hydraulic fluid line bursts while you're working on a machine, injecting toxic and/or hot liquid under your skin at high pressure.

                    https://en.wikipedia.org/wiki/High_pressure_injection_injury
                    "Although the initial wound often seems minor, the unseen, internal damage can be severe. With hydraulic fluids, paint, and detergents, these injuries are extremely serious as most hydraulic fluids and organic solvents are highly toxic."

  • glyph@mastodon.social

                      1. YES THEY ARE.

                      They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.

                      With LLMs, we can look at the mission-critical AWS modules and ask after the fact, were they vibe-coded? AWS says yes https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

  • johannab@cosocial.ca · #166

                      @glyph

    Having read over Doctorow's rant-du-jour twice now, I do think when he said "they" were not vibe coding mission-critical AWS modules, he was referring to the "they" in the previous paragraph, being developers he's spoken to, some of whom were friends he knows well.

    So... could be very differently skilled people from "some hack in a code assembly shop driving at a reckless pace because Amazon stock needs a bump".

                      It's all back to, though, defining "AI".

  • glyph@mastodon.social · #167

                        @dec23k okay definitely not clicking on that link, yeesh

  • glyph@mastodon.social · #168

                          @johannab yeah, I get that; what I am suggesting is that Cory is not auditing their work, he is depending on self-reports of their efficacy in using these tools. And those self-reports are highly dubious, and I've watched people be wrong over and over again as they attempted to assess their own LLM-augmented performance.

  • glyph@mastodon.social · #169

                            @johannab So yes, maybe his contacts are transcendentally better programmers than mine, and they've ascended to a plane of subjective self-assessment beyond mere mortals, but if they're anything like the (extremely skilled, extremely experienced) people I've watched fall into this trap, I'm highly skeptical

  • glyph@mastodon.social · #170

                              @johannab the AWS link was to showcase that even AWS itself can't prevent vibe-coding their mission-critical modules, and presumably a few skilled practitioners work there.

  • johannab@cosocial.ca · #171

                                @glyph Fair, for sure.

    I just realized, reading it over, that that was a spot where there could be a disconnect over which "they" was being referred to in the essay narrative as written.

                                I feel like my immediate, 1-degree friends, acquaintances and colleagues include amongst them all the theoretical levels of self-awareness we could speak to, and indeed, *I* can't tell one from the other without more examination of context.

  • johannab@cosocial.ca · #172

                                  @glyph

                                  I should go blather on my own blog to brain-dump a little better and get the hell back to my own work. 🤣 This all has me thinking out loud at the keys too much. Too many threads of thought that are a little unwoven right now, but I really appreciate this branching thread you kicked off.

  • glyph@mastodon.social · #173

                                    @johannab Very kind of you to say so. Remember to like and subscribe 🙃

  • glyph@mastodon.social · #174

                                      @johannab I guess I should concede that there are at least 2 people I know who actually use LLMs all the time and seem completely unaffected. They seem to be slightly more productive and produce normal-looking code with it. But they do not seem to possess any special insight; I have no idea what they're doing that's different.

  • bluewinds@tech.lgbt

                                        @janeishly @glyph I have found this exact thing in code reviews - my company enabled automatic AI code reviews ( 🤢 ) and the cognitive load of the automated comments was *enormous*.

                                        It often correctly flagged something to pay attention to, but the suggested solution was always incorrect - and ignoring / discarding it was hugely expensive mentally.

                                        I finally managed to get it changed to "opt in" rather than automatic, but wow the whole experience felt like a tarpit for thinking.

  • agreeable_landfall@mastodon.social · #175

                                        @bluewinds @janeishly @glyph I'd rather have it simply tell me what's wrong. (Or what it "thinks" is wrong.) Having to wade through AI code is like reviewing someone else's work, when you can't count on that person being at all competent. Best to just leave the coding to humans.

                                        I'm all for AI finding faults; these can easily be checked for correctness. Infinitely harder for a human to check AI code for correctness. Which is all lost time against the schedule.

  • agreeable_landfall@mastodon.social · #176

                                          @bluewinds @janeishly @glyph I have a friend who insists his AI partner writes great comments. I doubt that, and he's never provided an example. Since AI doesn't _understand_ the code, how can it write comments better than "We're going to loop through <thingies> and delete values out of range." Which the code already tells me. I want to know what you were _trying_ to do. The code may or may not do that, and comments which are based on the code can't help.
