I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

190 Posts 72 Posters 243 Views

  • johannab@cosocial.ca wrote:

    @glyph

    I should go blather on my own blog to brain-dump a little better and get the hell back to my own work. 🤣 This all has me thinking out loud at the keys too much. Too many threads of thought that are a little unwoven right now, but I really appreciate this branching thread you kicked off.

    glyph@mastodon.social wrote (#173):

    @johannab Very kind of you to say so. Remember to like and subscribe 🙃

  • glyph@mastodon.social wrote:

    @johannab the AWS link was to showcase that even AWS itself can't prevent vibe-coding their mission-critical modules, and presumably a few skilled practitioners work there.

    glyph@mastodon.social wrote (#174):

    @johannab I guess I should concede that there are at least 2 people I know who actually use LLMs all the time and seem completely unaffected. They seem to be slightly more productive and produce normal-looking code with it. But they do not seem to possess any special insight; I have no idea what they're doing that's different.

  • bluewinds@tech.lgbt wrote:

    @janeishly @glyph I have found this exact thing in code reviews - my company enabled automatic AI code reviews ( 🤢 ) and the cognitive load of the automated comments was *enormous*.

    It often correctly flagged something to pay attention to, but the suggested solution was always incorrect - and ignoring / discarding it was hugely expensive mentally.

    I finally managed to get it changed to "opt in" rather than automatic, but wow the whole experience felt like a tarpit for thinking.

    agreeable_landfall@mastodon.social wrote (#175):

    @bluewinds @janeishly @glyph I'd rather have it simply tell me what's wrong. (Or what it "thinks" is wrong.) Having to wade through AI code is like reviewing someone else's work, when you can't count on that person being at all competent. Best to just leave the coding to humans.

    I'm all for AI finding faults; these can easily be checked for correctness. Infinitely harder for a human to check AI code for correctness. Which is all lost time against the schedule.

  • agreeable_landfall@mastodon.social wrote (#176):

    @bluewinds @janeishly @glyph I have a friend who insists his AI partner writes great comments. I doubt that, and he's never provided an example. Since AI doesn't _understand_ the code, how can it write comments better than "We're going to loop through <thingies> and delete values out of range." Which the code already tells me. I want to know what you were _trying_ to do. The code may or may not do that, and comments which are based on the code can't help.

  • glyph@mastodon.social wrote (#177):

    @agreeable_landfall @bluewinds @janeishly there's an alert fatigue problem there with LLM code review, but if I had to rank the harm it would definitely be lower down

  • nicuveo@tech.lgbt wrote:

    @glyph my hypothesis on that is that, by virtue of literally being encodings of lexical fields and semantic proximity, and by virtue of their response being the logical continuation of the user's input, LLMs statistically pick up on and amplify subtle tendencies / biases in the user: if you feed it input that uses vocabulary and idioms semantically linked to low self-esteem, the model will more likely compute a reply with similar undertones, feeding said emotion. they amplify whatever emotion you put in, even accidentally.
    (thread here: https://tech.lgbt/@nicuveo/116210599322080105 )

    glyph@mastodon.social wrote (#178):

    @nicuveo seems plausible. I had a much vaguer hypothesis along these lines too. can’t dig up the toot right now but I definitely posted one a few weeks ago

  • bbacc@mastodon.bida.im wrote:

    @glyph @nils_berger
    this study argues that it encourages cognitive outsourcing on a new level, which in the long term could result in getting used to less cognitive activity, at least for certain tasks.

    link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

    nils_berger@sw-development-is.social wrote (#179):

    @bbacc thank you! 😊

  • samstart@mastodon.social wrote (#180):

    @bluewinds @janeishly @glyph The "tarpit for thinking" framing is perfect. AI code review that flags things but suggests wrong fixes is worse than no review at all — it steals your attention for nothing.

    That's why we went a different direction with our scanner. Instead of reviewing individual code changes, we check structural signals: does CI exist? Are there tests? Are secrets exposed? Binary yes/no checks that don't require you to evaluate AI-generated suggestions. repofortify.com

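    Purely as an illustration of the kind of "binary yes/no" repo-health checks described above, here is a minimal sketch; the file names, directory layout, and secret patterns are assumptions for illustration, not repofortify.com's actual implementation.

    ```python
    # Hypothetical sketch of binary repo-health checks (assumed check names and patterns).
    import re
    from pathlib import Path

    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
        re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    ]

    def has_ci_config(repo: Path) -> bool:
        """Does CI exist? Look for common CI configuration paths."""
        return any((repo / p).exists() for p in (
            ".github/workflows", ".gitlab-ci.yml", ".circleci/config.yml", "Jenkinsfile",
        ))

    def has_tests(repo: Path) -> bool:
        """Are there tests? Look for a test directory or conventional test files."""
        if any((repo / d).is_dir() for d in ("tests", "test", "spec")):
            return True
        return any(repo.rglob("test_*.py")) or any(repo.rglob("*_test.go"))

    def has_exposed_secrets(repo: Path) -> bool:
        """Are secrets exposed? Scan tracked files for obvious credential patterns."""
        for path in repo.rglob("*"):
            if not path.is_file() or ".git" in path.parts:
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            if any(p.search(text) for p in SECRET_PATTERNS):
                return True
        return False

    def scan(repo_path: str) -> dict:
        """Return binary yes/no structural signals for a repository checkout."""
        repo = Path(repo_path)
        return {
            "ci_configured": has_ci_config(repo),
            "tests_present": has_tests(repo),
            "secrets_exposed": has_exposed_secrets(repo),
        }

    if __name__ == "__main__":
        import json, sys
        print(json.dumps(scan(sys.argv[1] if len(sys.argv) > 1 else "."), indent=2))
    ```
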
  • glyph@mastodon.social wrote:

    Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see.

    Because the interesting, novel thing about LLMs, the thing that makes them dangerous and interesting, is that they are, by design, epistemic disruptors.

    They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.

    thetacola@mas.to wrote (#181):

    @glyph "They can produce symboloids more rapidly than a thinking mind" maybe if someone thinks really slowly? either that or there's some much faster llm i've never heard of

    i find the output on these infuriating because they generate slower than i read, so when i have to test them for whatever reason (usually to show how comically poorly it does at a given application as an example of why not to use it) i have to scroll up until i think it's done generating before reading 😕

  • glyph@mastodon.social wrote:

    More to the point, though, in this metaphor where you're getting a potentially-infected scrape at work, we are living in the pre-germ-theory age of AI. We are aware that it might be dangerous sometimes, but we don't know to whom or why. We are attempting to combat miasma with bloodletting right now, and putting the miasma-generator in every home before we know what it's actually doing.

    kevingranade@mastodon.gamedev.place wrote (#182):

    @glyph potentially an even better metaphor is RSI, though that does lead to the "you're holding it wrong" argument, which isn't applicable, but incidental injuries are in the same bucket and it's just less obvious.

  • johannab@cosocial.ca wrote (#183):

    @glyph I think there are a lot of individual and small-scale social factors that make a huge difference here.

    Prior domain expertise, personal self-image, ability to separate work and not-work life, other social anchors in the non-digital world ... I feel like these all have an interaction.

    I'm really concerned at what I see of students, even grad students around me, who have basically not *learned* a thing about life without these.

  • johannab@cosocial.ca wrote (#184):

    @glyph Less concerned about, say, my spouse, who had 28 years of sysadmin experience behind him when his hype-chasing CEO declared that All Shalt Use the AI Or Suffer The Performance Review Consequences.

    He basically dictated what he otherwise would have scripted and let the clanker write the scripts. I'm not sure it saved much time, but he's found a couple of spots where it extracted something he hadn't thought of and got past a sticking point.

  • glyph@mastodon.social wrote (#185):

    @johannab I have not done a comprehensive survey, but I simultaneously believe that A) you're directionally correct and the relevant factors are *something* like this, and B) there are some counterexamples where very well-adjusted, experienced, emotionally regulated people suddenly and unpredictably lurch off into the deep end, so there's something non-obvious going on too.

  • glyph@mastodon.social wrote (#186):

    @atax1a looping back to some of Cory’s good points here (from another essay): it’s a picture-perfect example of reverse-centaur accountability-sink logic. their jobs are about to become *profoundly* miserable 😞

  • glyph@mastodon.social wrote (#187):

    @violetmadder have you read https://blog.glyph.im/2025/08/futzing-fraction.html ? I use this metaphor a lot.

  • johannab@cosocial.ca wrote (#188):

    @glyph oh, there certainly is!

    Human brains tangle many aspects of identity into our relational constructs, including in the working world. Those interact with our cognitive and neurological processes, and create minds that are as unique as fingerprints.

    It's as complex, if not more so, than something like addiction. How does one person regularly use THC to be free from chronic pain with no apparent side effects, but another smokes up once and suffers a psychotic break?

  • jackeric@beige.party wrote:

    @bluewinds @delta_vee @kirakira @glyph I don't think the analogies are good because asbestos is a fantastic insulator, lead is a really helpful additive for petrol and makes fantastic pigments and is really convenient for piping... and the hidden side-effects are the problem. Whereas LLMs _don't_ deliver that primary benefit

    LLMs are more like... cheap laminate flooring, produced with wood pulp harvested unsustainably from old-growth forests and made by grossly exploited factory workers overseas... superficially convenient when remodelling your kitchen and rapidly ubiquitous but also quite unsatisfying and a right faff to work around once it's established

    dpnash@c.im wrote (#189):

    @jackeric @bluewinds @delta_vee @kirakira @glyph Only half seriously, but therefore also not totally *unseriously*:

    Zip fuel.

    If you've never heard of it, there's good reason. But it did attempt to address a legitimate concern at the time: getting more power out of a given volume of jet fuel. Just put highly reactive boron compounds in it. Specifically, *pyrophoric* boron compounds, which don't even need high heat to ignite.

    The fuel did indeed produce more power, but it was very toxic (both in raw form and after combustion), and it seriously corroded jet engine parts, leading to an enormous maintenance headache for any aircraft that tried to use the fuel.

    (Maintenance headaches ... sound familiar?)

    A very good example of what can happen when you decide "speed of one specific component", whether airplane flights or writing code, overwhelmingly dominates a thought process.

    Zip fuel - Wikipedia (en.wikipedia.org)

  • johannab@cosocial.ca wrote (#190):

    @glyph No human psyche is as obvious and superficial as its audible or textual outputs. Nor as robust.

    There are so many systems interconnecting when we interact with other brains. When we can manipulate or corrupt a simulated one which has already connected with organic ones, there's no single, clear pathway of cause-and-effect.

    We're trying to unscramble the eggs.