
I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

  • glyph@mastodon.social wrote:

    For me, this is the body horror money quote from that Scientific American article:

    "participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

    So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

    If you can see it, the basilisk has already won.

    aeva@mastodon.gamedev.place replied (#78):

    @glyph when teams autocorrect rewrites something it decides i misspelled, i am filled with hatred and disgust and usually delete the entire sentence and try again regardless of if it had suggested the word i meant to write. i don't want it anymore


  • aeva@mastodon.gamedev.place replied (#79):

      @glyph this is how i avoid getting early onset dementia from being exposed to involuntary slop

    • raphael@mastodon.sdf.org wrote:

        @glyph I like your breakdown in those articles.

        I think that some of the more valuable stuff has been not when juniors prompt and don’t get value, but when seniors prompt, go do something else for a bit while the machine churns for a couple of minutes, and then come back to something that is pretty close to a good solution.

        Think about a thing that might take you 15 minutes to kinda menially do (add some CLI bool flag that then needs to get passed down 3 layers in some spot, for example)

        zimzat@mastodon.social replied (#80):

        @raphael @glyph The thing that the LLM is getting you to not think about is that it shouldn't require passing things down three layers (let alone more, which is more common). This is the boilerplate that everyone hates, and the goal should be to remove the need for it at all, not produce more of it faster.

        "The least worst way to use an LLM is to do something you already know how to do", now with the addendum that we don't know what we don't know.

      • davidtheeviloverlord@mastodon.social wrote:

          @MrBerard @kirakira @glyph

          Stochastic Errorism.

          n_dimension@infosec.exchange replied (#81):

          @davidtheeviloverlord @MrBerard @kirakira @glyph

          What a fantastic thread.
          Not black or white, but flavoursome.
          Makes you think huh?

          Humans as programmable entities.
          Does a keyboard feel the fingertips?
          Or does it think it's a content creator?

          #Ai is a #Cognitivehazard and we don't have a firewall.

        • glyph@mastodon.social wrote:

            Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical-establishment scicomm types soon, because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have great analogs in existing medical literature.

            mamalake@beige.party replied (#82):

            @glyph if this were a tablet/pill that promised to help you write better emails, and it accidentally caused psychological disorders, it would be put through rigorous testing before being loaded onto a dishwasher or pushed into every system available. This is the Sacklers of tech, grifting every last cent out of people who are already struggling, promising all the while that it's non-addictive.

          • glyph@mastodon.social wrote:

              Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.

              But I'm still waiting.

              onepict@chaos.social replied (#83):

              @glyph I'm honestly wondering just how much undiagnosed long COVID is playing into this.

              I'm slowly recovering now, well as much as I can, but at the time I was painfully aware weird stuff was happening to my brain because I got caught in the first wave in March 2020.

              So I am wondering if the addictive effects of using these LLMs, along with existing cognitive damage, are a partial cause.

            • glyph@mastodon.social wrote:

                2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good

                mortonrobd@mas.to replied (#84):

                @glyph Many years back I read something about how sometimes smarter people are easier to fool as they think they're too smart to be fooled. I've observed a few instances in the martial arts world where people see one "body magic" trick and next thing they're down a rabbit hole.

              • glyph@mastodon.social wrote:

                  If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.

                  hllizi@hespere.de replied (#85):

                  @glyph All the possible harm is just mental, and the mental - this seems to be an unspoken tenet held by many - isn't really real. Mental health in general doesn't really seem to be taken that seriously before its lack manifests physically as chainsaw wielding or some other eccentricity. Nothing to see here, just move on.

                • glyph@mastodon.social wrote:

                    1. YES THEY ARE.

                    They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.

                    With LLMs, we can look at the mission-critical AWS modules and ask after the fact, were they vibe-coded? AWS says yes https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

                    jaypeach53@calckeymusic.social replied (#86):

                    @glyph@mastodon.social the only problem with your analysis is that you refer to vibe-coding. Slop-coding is the proper term.

                  • glyph@mastodon.social wrote:

                      I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.

                      nicuveo@tech.lgbt replied (#87):

                      @glyph my hypothesis on that is that, by virtue of literally being encodings of lexical fields and semantic proximity, and by virtue of their response being the logical continuation of the user's input, LLMs statistically pick up on and amplify subtle tendencies / biases in the user: if you feed it input that uses vocabulary and idioms semantically linked to low self-esteem, the model will more likely compute a reply with similar undertones, feeding said emotion. they amplify whatever emotion you put in, even accidentally.
                      (thread here: https://tech.lgbt/@nicuveo/116210599322080105 )

                    • glyph@mastodon.social wrote:

                        The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. Which is deeply ironic because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/

                        jaypeach53@calckeymusic.social replied (#88):

                        @glyph@mastodon.social Cory has outsized influence considering his role as AI ambassador. His writings for the past few years reek of AI Slop. Book after book of rehashes of the same topic. I stopped buying his books.

                      • alys@selfy.army wrote:

                          @glyph i don't know if it's the best analogy at the end of the day, but my brain keeps going to lead pipes and asbestos. if we're not sure it's safe, should we be in such a hurry to put it in everything?

                          happyborg@fosstodon.org replied (#89):

                          @alys FYI the first health concerns with asbestos were being raised in 1907 and yet it was still legal to use it in UK buildings in, wait for it... 1999.

                          So the lesson with #LLMs is...?

                        • glyph@mastodon.social wrote (quoted above): "If you can see it, the basilisk has already won."

                            janeishly@beige.party replied (#90):

                            @glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not-quite-right tone, it's very hard to dismiss it. The cognitive load is consequently enormous.

                          • onepict@chaos.social wrote (quoted above): "I'm honestly wondering just how much undiagnosed long COVID is playing into this."

                              crazyjaneway@open-ground.org replied (#91):

                              @onepict @glyph I suspect yes, because my non-tech friends who use it more are using it as assistive tech to keep them working through health things…

                            • glyph@mastodon.social wrote:

                                @nils_berger have you got a link for that report?

                                bbacc@mastodon.bida.im replied (#92):

                                @glyph @nils_berger
                                this study argues that it encourages cognitive outsourcing on a new level, which in the long term could result in getting used to less cognitive activity, at least for certain tasks.

                                link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

                              • glyph@mastodon.social wrote:

                                  RE: https://mamot.fr/@pluralistic/116219642373307943

                                  I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me and especially not in my mouth

                                  ansuz@gts.cryptography.dog replied (#93):

                                  @glyph it's difficult to understand why anyone with Cory's reputation would decide to die on such a ridiculous hill 🙄

                                • glyph@mastodon.social wrote (quoted above): "@nils_berger have you got a link for that report?"

                                    hmperson1@furry.engineer replied (#94):

                                    @glyph @nils_berger
                                    i think most people are just referring to these blog posts:

                                    https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report#:~:text=A%2025,mechanisms

                                    https://dora.dev/insights/balancing-ai-tensions/#the-hidden-taxes-of-ai-adoption-navigating-the-tradeoffs

                                  • crazyjaneway@open-ground.org wrote (quoted above): "I suspect yes, because my non-tech friends who use it more are using it as assistive tech to keep them working through health things…"

                                      onepict@chaos.social replied (#95):

                                      @crazyjaneway @glyph We had a client use it to give them permission to spam out their new thing, after we'd explained (and their local IT guy also explained) that if they did that on our servers we'd lock their account.

                                      Which we then did. The client said, "ChatGPT said I could do it". The sycophancy combined with overconfidence is utterly frightening.

                                      I don't particularly like it when my friends use it in their communication with me either.

                                      Link: "AI and that Guy at the bar", cobbles (dotart.blog)

                                    • glyph@mastodon.social wrote (quoted above): "@nils_berger have you got a link for that report?"

                                        gbargoud@masto.nyc replied (#96):

                                        @glyph @nils_berger

                                        This is the link to download it:

                                        https://dora.dev/research/2025/dora-report/

                                        Not sure if there's a mirror

                                      • glyph@mastodon.social wrote (quoted above): "If you can see it, the basilisk has already won."

                                          mmu_man@m.g3l.org replied (#97):

                                          @glyph don't look at it!

                                          https://en.wikipedia.org/wiki/Medusa

                                          Or even better, the Doctor Who version:

                                          https://en.wikipedia.org/wiki/Weeping_Angel
