CIRCLE WITH A DOT


I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

Uncategorized · 190 Posts · 72 Posters · 243 Views
  • glyph@mastodon.social wrote:

    Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical establishment scicomm types soon because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have great existing analogs in existing medical literature.

    mamalake@beige.party wrote (#82):

    @glyph if this were a tablet or pill that promised to help you write better emails, and it accidentally caused psychological disorders, it would be put through rigorous testing before being loaded onto a dishwasher or pushed into every system available. This is the Sacklers of tech, grifting every last cent out of people who are already struggling, promising all the while that it’s non-addictive.

    • glyph@mastodon.social wrote:

      Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.

      But I'm still waiting.

      onepict@chaos.social wrote (#83):

      @glyph I'm honestly wondering just how much undiagnosed long COVID is playing into this.

      I'm slowly recovering now, well as much as I can, but at the time I was painfully aware weird stuff was happening to my brain because I got caught in the first wave in March 2020.

      So I am wondering if the addictive effects of using these LLMs along with existing cognitive damage is a partial cause.

      • glyph@mastodon.social wrote:

        2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good

        mortonrobd@mas.to wrote (#84):

        @glyph Many years back I read something about how sometimes smarter people are easier to fool as they think they're too smart to be fooled. I've observed a few instances in the martial arts world where people see one "body magic" trick and next thing they're down a rabbit hole.

        • glyph@mastodon.social wrote:

          If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.

          hllizi@hespere.de wrote (#85):

          @glyph All the possible harm is just mental, and the mental - this seems to be an unspoken tenet held by many - isn't really real. Mental health in general doesn't really seem to be taken that seriously before its lack manifests physically as chainsaw wielding or some other eccentricity. Nothing to see here, just move on.

          • glyph@mastodon.social wrote:

            1. YES THEY ARE.

            They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.

            With LLMs, we can look at the mission-critical AWS modules and ask after the fact, were they vibe-coded? AWS says yes https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

            jaypeach53@calckeymusic.social wrote (#86):

            @glyph@mastodon.social the only problem with your analysis is that you refer to vibe-coding. Slop-coding is the proper term.

            • glyph@mastodon.social wrote:

              I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.

              nicuveo@tech.lgbt wrote (#87):

              @glyph my hypothesis on that is that, by virtue of literally being encodings of lexical fields and semantic proximity, and by virtue of their response being the logical continuation of the user's input, LLMs statistically pick up on and amplify subtle tendencies / biases in the user: if you feed it input that uses vocabulary and idioms semantically linked to low self-esteem, the model will more likely compute a reply with similar undertones, feeding said emotion. they amplify whatever emotion you put in, even accidentally.
              (thread here: https://tech.lgbt/@nicuveo/116210599322080105 )
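The amplification hypothesis above can be sketched with a toy bigram model. This is a deliberately crude stand-in for an LLM, built on a made-up two-sentence corpus (all names and data here are illustrative, not from any real model), but it shows the basic mechanism: continuations are sampled from words that co-occur with the prompt's vocabulary, so the prompt's emotional register is echoed back.

```python
import random
from collections import defaultdict

# Toy corpus: one "low self-esteem" register and one "confident" register.
# An LLM's training corpus is vastly larger, but the co-occurrence logic is similar.
corpus = (
    "i always fail and i always mess things up "
    "i often succeed and i often get things done"
).split()

# Next-word frequency table: which words follow which.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def continue_from(word, n=3, seed=0):
    """Sample up to n continuation words starting from `word`."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        choices = bigrams.get(word)
        if not choices:
            break
        word = rng.choice(choices)
        out.append(word)
    return out

# A prompt word from the negative register draws its continuation
# from the negative neighborhood; a positive one, from the positive.
print(continue_from("always"))  # first word is "fail" or "mess"
print(continue_from("often"))   # first word is "succeed" or "get"
```

The point of the sketch is only that sampling from co-occurrence statistics reflects the input's register back at the speaker, which is the "amplifier" behavior nicuveo describes.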

              • glyph@mastodon.social wrote:

                The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. Which is deeply ironic because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/

                jaypeach53@calckeymusic.social wrote (#88):

                @glyph@mastodon.social Cory has outsized influence considering his role as AI ambassador. His writings for the past few years reek of AI Slop. Book after book of rehashes of the same topic. I stopped buying his books.

                • alys@selfy.army wrote:

                  @glyph i don't know if it's the best analogy at the end of the day, but my brain keeps going to lead pipes and asbestos. if we're not sure it's safe, should we be in such a hurry to put it in everything?

                  happyborg@fosstodon.org wrote (#89):

                  @alys FYI the first health concerns with asbestos were being raised in 1907 and yet it was still legal to use it in UK buildings in, wait for it... 1999.

                  So the lesson with #LLMs is...?

                  • glyph@mastodon.social wrote:

                    For me, this is the body horror money quote from that Scientific American article:

                    "participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

                    So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

                    If you can see it, the basilisk has already won.

                    janeishly@beige.party wrote (#90):

                    @glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not-quite-right tone, it's very hard to dismiss it. The cognitive load is consequently enormous.

                    • onepict@chaos.social wrote:

                      @glyph I'm honestly wondering just how much undiagnosed long COVID is playing into this.

                      I'm slowly recovering now, well as much as I can, but at the time I was painfully aware weird stuff was happening to my brain because I got caught in the first wave in March 2020.

                      So I am wondering if the addictive effects of using these LLMs along with existing cognitive damage is a partial cause.

                      crazyjaneway@open-ground.org wrote (#91):

                      @onepict @glyph I suspect yes, because my non-tech friends who use it more are using it as assistive tech to keep them working through health things…

                      • glyph@mastodon.social wrote:

                        @nils_berger have you got a link for that report?

                        bbacc@mastodon.bida.im wrote (#92):

                        @glyph @nils_berger
                        this study argues that it encourages cognitive outsourcing on a new level, which over the long term could result in getting used to less cognitive activity, at least for certain tasks.

                        link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

                        • glyph@mastodon.social wrote:

                          RE: https://mamot.fr/@pluralistic/116219642373307943

                          I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me and especially not in my mouth

                          ansuz@gts.cryptography.dog wrote (#93):

                          @glyph it's difficult to understand why anyone with Cory's reputation would decide to die on such a ridiculous hill 🙄

                          • glyph@mastodon.social wrote:

                            @nils_berger have you got a link for that report?

                            hmperson1@furry.engineer wrote (#94):

                            @glyph @nils_berger
                            i think most people are just referring to these blog posts:

                            https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report#:~:text=A%2025,mechanisms

                            DORA | Balancing AI tensions: Moving from AI adoption to effective SDLC use (dora.dev)

                            • crazyjaneway@open-ground.org wrote:

                              @onepict @glyph I suspect yes, because my non-tech friends who use it more are using it as assistive tech to keep them working through health things…

                              onepict@chaos.social wrote (#95):

                              @crazyjaneway @glyph We had a client use it to give them permission to spam out their new thing, after we'd explained (and their local IT guy also explained) that if they did that on our servers we'd lock their account.

                              Which we then did. The client said, "ChatGPT said I could do it". The sycophancy combined with overconfidence is utterly frightening.

                              I don't particularly like it when my friends use it in their communication with me either.

                              AI and that Guy at the bar (cobbles, dotart.blog)

                              • glyph@mastodon.social wrote:

                                @nils_berger have you got a link for that report?

                                gbargoud@masto.nyc wrote (#96):

                                @glyph @nils_berger

                                This is the link to download it:

                                DORA | State of AI-assisted Software Development 2025 (dora.dev)

                                Not sure if there's a mirror

                                • glyph@mastodon.social wrote:

                                  For me, this is the body horror money quote from that Scientific American article:

                                  "participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

                                  So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

                                  If you can see it, the basilisk has already won.

                                  mmu_man@m.g3l.org wrote (#97):

                                  @glyph don't look at it!

                                  Medusa - Wikipedia (en.wikipedia.org)

                                  Or even better, the Doctor Who version:

                                  Weeping Angel - Wikipedia (en.wikipedia.org)

                                  • glyph@mastodon.social wrote:

                                    RE: https://mamot.fr/@pluralistic/116219642373307943

                                    I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me and especially not in my mouth

                                    sabrina@fedi01.unicornsparkle.club wrote (#98):

                                    @glyph Why doesn’t he just use the word Luddite? Maybe because the Luddites were right and that would undermine his argument?

                                    Phie Lux (@sabrina@fedi01.unicornsparkle.club):

                                    Imagine if, at the start of the Industrial Revolution, we as a species had paused and asked ourselves what the ethical implications are and what the possible and present harms could be. Maybe we could have avoided the worst excesses of modern society like pollution, increasing inequality, overconsumption, climate change, fascism, and social atomization. If we are truly at the start of another such technological revolution, maybe we should learn from history and not dive head first into it. Especially when we know a lot of the ethical issues and real harms already. It seems plainly foolish to look at the harm we’ve done to ourselves with the last technological revolution and decide to just double down on it.


                                    • glyph@mastodon.social wrote:

                                      The very fact that things like OpenClaw and Moltbook even *exist* is an indication, to me, that people are *not* making sober, considered judgements about how and where to use LLMs. The fact that they are popular at *all*, let alone popular enough to be featured in mainstream media shows that whatever this cognitive distortion is, it's widespread.

                                      gittaca@chaos.social wrote (#99):

                                      @glyph The "distortion" is from COVID: https://www.panaccindex.info/p/answered-does-covid-19-harm-the-brain

                                      A facsimile/helper for _thinking_ seems pretty interesting if one suffers from brain fog, cognitive decline, neuro-inflammation, etc.

                                      • dpnash@c.im wrote:

                                        @glyph

                                        Two statements I believe are consistently correct:

                                        (1) Generative “AI” produces code significantly faster than humans do only when nobody takes sufficient time to understand it (not just in a narrow syntactic sense; also in the context of organizational needs, longer-term plans, interaction with other applications, etc.)

                                        (2) Code nobody understands well is “technical debt” *by definition*, because it takes a disproportionate amount of time and brain power to change or improve.

                                        Conclusion: unless software developers are incredibly disciplined, and have a level of time and autonomy they generally do not have in a major tech company, generative “AI” usage will *consistently* create large amounts of “tech debt”.

                                        ohir@social.vivaldi.net wrote (#100):

                                        @dpnash @glyph
                                        > “AI” usage will *consistently* create large amounts of “tech debt”

                                        Um, no. There will be no technical debt in such products. Maintenance is too costly, and the shop owners would be tied to some protein techie. They will soon pivot to #disposable #software

                                        If some user files a bug, the whole thing will be generated anew with its prompt amended like "; make bug-description disappear". Possibly with a new UI/UX. For the better, because users will be trained not to report bugs but to make workarounds, as a bug report might make the protein serfs endure a UX change...

                                        • glyph@mastodon.social wrote:

                                          Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see.

                                          Because the interesting, novel thing about LLMs, the thing that makes them dangerous and interesting, is that they are, by design, epistemic disruptors.

                                          They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.

                                          lritter@mastodon.gamedev.place wrote (#101):

                                          @glyph it is nuts to dismiss the experience of a paint huffer
