I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

Uncategorized · 190 Posts · 72 Posters · 243 Views

• glyph@mastodon.social

  For me, this is the body horror money quote from that Scientific American article:

  "participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

  So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

  If you can see it, the basilisk has already won.

• froztbyte@mastodon.social (#19)

  @glyph oh man, that’s dangerously close to giving an abstinence-based push some fuel and whew does my rather hedonistic ass have some thoughts on that

• miss_rodent@girlcock.club (#20)

  @glyph "Interestingly, the people in the study didn’t tend to think the AI autocomplete suggestions were biased or to notice that they had changed their own thinking on an issue in the course of the study. Warning the participants that they might be exposed to misinformation by the AI didn’t temper the persuasive effect either."

  Also, being aware that it can fuck with your head does not make you less susceptible to it fucking with your head. So you can't really judge if you could use it safely.

• mcc@mastodon.social

  @glyph Wait. Does Doctorow try to soften the edge of the fact that LLMs do a bad thing to people by pointing out sometimes people do bad things to people? Isn't that just two bad things?

  If his point is it's still bad but it isn't *novel*, isn't the fact of a Fortune 500 company doing it still novel?

• glyph@mastodon.social (#21)

  @mcc Let me just give you the pull quote:

  """
  For many programmers – including several of my acquaintance, whom I know to be both thoughtful and skilled – AI is another plugin, one they find useful enough to be modestly enthusiastic about.

  It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules…
  """

• glyph@mastodon.social (#22)

  @mcc He thinks the technology is capable of many horrors but it can also be useful for pedestrian things.

• miss_rodent@girlcock.club (#23)

  @glyph Nor can you reliably judge if it has or has not already fucked with your head.

• mcc@mastodon.social (#24)

  @glyph That sounds to me like a way to get horrors, but you're probably not the person to convince.

• glyph@mastodon.social

  I'm open to a future where we do some research and figure out the limits of how AI influence works, and where the safety valves are, and also the extent to which it's *fine* that AI can influence our views, because honestly many different kinds of stimuli can influence our views, not least of which is each other. But it sure looks right now like it has a bunch of very dangerous feedback loops built in, and it's not clear how to know if you're touching one.

• glyph@mastodon.social (#25)

  But, as Cory puts it:

  """
  It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale.
  """

  I had a very visceral emotional reaction to this particular paragraph, and I find it very important to refute. Here are two points to consider:

• glyph@mastodon.social (#26)

  1. YES THEY ARE.

  They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet we turn around, and there all the bugs are.

  With LLMs, we can look at the mission-critical AWS modules and ask, after the fact: were they vibe-coded? AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

• hailey@hails.org (#27)

  @glyph but they are, at scale, generating tech debt

• glyph@mastodon.social (#28)

  2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high-profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good.

• glyph@mastodon.social (#29)

  The very fact that things like OpenClaw and Moltbook even *exist* is an indication, to me, that people are *not* making sober, considered judgements about how and where to use LLMs. The fact that they are popular at *all*, let alone popular enough to be featured in mainstream media, shows that whatever this cognitive distortion is, it's widespread.

• aud@fire.asta.lgbt (#30)

  @glyph@mastodon.social I wonder if this is why I find the whole genAI thing to be so very antithetical to creative pursuits; once you've been exposed to it, it's in there, and I feel like that just isn't broadly compatible with creativity?

  Like we're all influenced, for sure, and those influences can become part of our own creative output. But I think there's a difference between "I read a particular author" vs. "that author is standing over my shoulder telling me what to write". It doesn't help that the output is literally the most average output, either. It's like if the world's most generic author was hovering over your shoulder, telling you what to write.

  That seems like creative death, not like a helper. And for programming, which is creative (and I'm glad we're all saying it), I feel that same element very much at play.

• dpnash@c.im (#31)

  @glyph

  Two statements I believe are consistently correct:

  (1) Generative “AI” produces code significantly faster than humans do only when nobody takes sufficient time to understand it (not just in a narrow syntactic sense; also in the context of organizational needs, longer-term plans, interaction with other applications, etc.)

  (2) Code nobody understands well is “technical debt” *by definition*, because it takes a disproportionate amount of time and brain power to change or improve.

  Conclusion: unless software developers are incredibly disciplined, and have a level of time and autonomy they generally do not have in a major tech company, generative “AI” usage will *consistently* create large amounts of “tech debt”.

• glyph@mastodon.social (#32)

  Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see through.

  Because the interesting, novel thing about LLMs, the thing that makes them dangerous and interesting, is that they are, by design, epistemic disruptors.

  They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.

• glyph@mastodon.social (#33)

  I have ADHD. Which means I am experienced in this process of self-denial. I have time blindness. I run an app that tells me how long I've been looking at other apps, because if I trust my subjective perception, I will think I've been looking at YouTube for 10 minutes instead of 4 hours. Every day I need to deny my subjective feelings about how using software is going, in order to function in society.

• glyph@mastodon.social (#34)

  This disability gives me a superpower. I'm Geordi with the visor, able to see what everybody else's regular eyes are missing. This is basically where the idea for https://blog.glyph.im/2025/08/futzing-fraction.html originally came from: I already monitor my time use, and I noticed that my time in LLM apps was WAY out of whack, consistently at "hyperfocus" levels of time-use, without any of the subjective impression of engagement or pleasure. Just dull frustration and surprising amounts of wasted time.

• glyph@mastodon.social (#35)

  The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!
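
  A minimal sketch of what that kind of passive monitoring could look like, in Python. This is a hypothetical illustration of the "futzing fraction" idea from the linked post, not the actual tooling it describes; the log location, the "llm"/"other" labels, and both helpers are invented for illustration:

      # Hypothetical illustration only: log timed work sessions to a JSONL file,
      # then report what fraction of total logged time went to the LLM loop.
      import json
      import time
      from contextlib import contextmanager
      from pathlib import Path

      LOG = Path.home() / ".futzing_log.jsonl"  # assumed location, not a real tool's path

      @contextmanager
      def timed(kind: str):
          """Time one work session; kind is "llm" (prompting, re-prompting,
          reviewing model output) or "other" (everything else on the project)."""
          start = time.monotonic()
          try:
              yield
          finally:
              record = {"kind": kind, "seconds": time.monotonic() - start}
              with LOG.open("a") as f:
                  f.write(json.dumps(record) + "\n")

      def futzing_fraction() -> float:
          """Fraction of all logged time spent in LLM sessions."""
          if not LOG.exists():
              return 0.0
          llm = total = 0.0
          for line in LOG.read_text().splitlines():
              entry = json.loads(line)
              total += entry["seconds"]
              if entry["kind"] == "llm":
                  llm += entry["seconds"]
          return llm / total if total else 0.0

      # Usage:
      #     with timed("llm"):
      #         ...  # prompt, wait, re-prompt, clean up the output
      #     print(f"futzing fraction so far: {futzing_fraction():.0%}")

  The point of the design is that the measurement is passive and cumulative: you record what actually happened rather than trusting your subjective impression of how the session went.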

• dpnash@c.im (#36)

  @glyph I should add: I am being careful to say “produces”, not “writes”. It is becoming clear that even if we grant that the pre-LLM bottleneck was developer code-authoring speed, in LLM-heavy workflows, the bottleneck is now “verify that this code is ready to deploy”. This is partly because there is so much more code coming in, but even more because far fewer people have any depth of understanding of the code being PR’ed. *All* the incentives lead to people saying “LGTM, it passes tests, ship it.”

• glyph@mastodon.social (#37)

  @dpnash I am, as always, open to seeing real evidence that this is not the case. However, everything I've seen and heard thus far tells me that it is.

  Your point (1) could be factually disputed, although I think it would be hard to prove, but your point (2) is just… logically necessary, I think. I cannot imagine ramming the code through a human brain thoroughly enough to actually understand it.

• glyph@mastodon.social (#38)

  @dpnash I mean, heck, the whole concept of the very popular problem of "NIH" is that code *already exists* and we *could* use it, but we don't use it *because writing it is an easier way to understand it*!