I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

• glyph@mastodon.social:

  2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high-profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good

  elseweather@mastodon.social (#72):

  @glyph Something that has gotten under my skin for the past year or so is seeing code changes - large refactors, porting a legacy tool to Rust, even minor bugfixes, things that would be a struggle to push through the inertia of code review - get fast-tracked when "the AI did it." These are the exact PRs I've written, tried to advocate for, and eventually gave up on. The changes and their risks are the same, so I can only conclude that the bar is lower for accepting "AI" contributions.
• miss_rodent@girlcock.club:

  @MrBerard @glyph (poverty of speech, flat affect, disorganized speech/thought, delusions, reduced attention, brain fog, disorientation, confusion, etc. all being pretty common psychosis features - and all coming in various degrees, many of which LLM folks seem to exhibit pretty commonly.)

  mrberard@mastodon.acm.org (#73):

  @miss_rodent @glyph

  Agreed. But it's the subtle influence on users' views I'm referring to, which was a social media problem before it was an AI issue.

  Sure, we can categorise this as "delusions", but I don't know that bundling everything as 'psychosis' helps the debate, in that it flattens the nuances between subtle and overt cases.

  Ultimately, we're trying to apply a medical model designed before mass media, DSM updates notwithstanding. Not surprising it reaches the limits of its utility.
• glyph@mastodon.social:

  @mcc He thinks the technology is capable of many horrors, but it can also be useful for pedestrian things.

  cliftonr@wandering.shop (#74):

  @glyph @mcc

  What I've observed very recently is that even intelligent people, experienced developers - who know perfectly well that LLMs are just generators of text from statistical models of what someone is likely to write - will still pull up AI-written search results and proceed on the automatic assumption that whatever they say is correct.

  That is not a general observation. That was this morning, with some senior programmers trying to solve a problem that's prolonging a code freeze.
• cliftonr@wandering.shop (#75), continuing from #74:

  @glyph @mcc

  They *know* it, and yet they react and behave as if they don't know it.

  The similarities to other deeply rooted problems in our society are left as an exercise for the reader.
• glyph@mastodon.social:

  For me, this is the body-horror money quote from that Scientific American article:

  "participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

  So maybe you can't use it "responsibly" or "safely". You can't even ignore it and choose not to use it once you've seen it.

  If you can see it, the basilisk has already won.

  lritter@mastodon.gamedev.place (#76):

  @glyph i can absolutely use it responsibly because i'm not new to NLP, but unfortunately it is liquefied shite.
• lritter@mastodon.gamedev.place (#77), continuing:

  @glyph oh btw, i coded stuff with Twisted a long time ago - it was in fact my introduction to async, callback-oriented programming - so i'm using this opportunity to say thank you for teaching me the reactor pattern!
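For readers who haven't met the term: below is a minimal sketch of the reactor pattern lritter mentions, using only the Python standard library. This is the core idea only, not Twisted's actual implementation: one loop waits for I/O readiness events and dispatches each to the callback registered for it.

```python
# A minimal, self-contained reactor sketch (not Twisted's real code):
# a single loop multiplexes readiness events and dispatches callbacks.
import selectors
import socket

sel = selectors.DefaultSelector()

def on_readable(conn):
    """Callback fired by the loop when `conn` becomes readable."""
    data = conn.recv(1024)
    print("reactor dispatched:", data)
    sel.unregister(conn)
    conn.close()

# A local socketpair gives the loop something to react to.
a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ, data=on_readable)
a.sendall(b"hello")

# The "reactor" loop: block until an event fires, invoke its callback.
while sel.get_map():
    for key, _events in sel.select(timeout=1):
        callback = key.data
        callback(key.fileobj)
a.close()
```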
• glyph@mastodon.social: [quotes the "basilisk" post above]

  aeva@mastodon.gamedev.place (#78):

  @glyph when Teams autocorrect rewrites something it decides i misspelled, i am filled with hatred and disgust, and usually delete the entire sentence and try again regardless of whether it had suggested the word i meant to write. i don't want it anymore
• aeva@mastodon.gamedev.place (#79), continuing:

  @glyph this is how i avoid getting early-onset dementia from being exposed to involuntary slop
• raphael@mastodon.sdf.org:

  @glyph I like your breakdown in those articles.

  I think that some of the more valuable stuff has been not when juniors prompt and don't get value, but when seniors prompt, go do something else for a bit while the machine churns for a couple of minutes, and then come back to something that is pretty close to a good solution.

  Think about a thing that might take you 15 minutes to kinda menially do (add some CLI bool flag that then needs to get passed down 3 layers in some spot, for example).

  zimzat@mastodon.social (#80):

  @raphael @glyph The thing that the LLM is getting you to not think about is that it shouldn't take passing things down three layers (much less more, which is more common). This is the boilerplate that everyone hates, and the goal should be to remove the need for it at all, not produce more of it faster.

  "The least worst way to use an LLM is to do something you already know how to do", now with the addendum that we don't know what we don't know.
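To make the contrast between these two posts concrete, here is a small hypothetical Python sketch (all names invented) of the flag-threading raphael describes, next to the kind of restructuring zimzat is arguing for:

```python
from dataclasses import dataclass

# Boilerplate version: each new CLI bool flag must be threaded through
# every intermediate layer just to reach the one place that reads it.
def main(verbose: bool) -> None:
    run_pipeline(verbose)          # layer 1: only forwards the flag

def run_pipeline(verbose: bool) -> None:
    load_data(verbose)             # layer 2: only forwards the flag

def load_data(verbose: bool) -> None:
    if verbose:                    # layer 3: the only place it matters
        print("loading data...")

# Restructured version: one config object travels through the layers,
# so adding the *next* flag touches only its definition and its reader.
@dataclass
class Config:
    verbose: bool = False          # new flags are added here...

def main_v2(cfg: Config) -> None:
    run_pipeline_v2(cfg)           # ...and these signatures never change

def run_pipeline_v2(cfg: Config) -> None:
    load_data_v2(cfg)

def load_data_v2(cfg: Config) -> None:
    if cfg.verbose:                # ...and read here
        print("loading data...")

main(verbose=True)
main_v2(Config(verbose=True))
```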
• davidtheeviloverlord@mastodon.social:

  @MrBerard @kirakira @glyph

  Stochastic Errorism.

  n_dimension@infosec.exchange (#81):

  @davidtheeviloverlord @MrBerard @kirakira @glyph

  What a fantastic thread.
  Not black or white, but flavoursome.
  Makes you think huh?

  Humans as programmable entities.
  Does a keyboard feel the fingertips?
  Or does it think it's a content creator?

  #Ai is a #Cognitivehazard and we don't have a firewall.
• glyph@mastodon.social:

  Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical-establishment scicomm types soon, because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have great analogs in existing medical literature.

  mamalake@beige.party (#82):

  @glyph if this were a tablet/pill that promised to help you write better emails, and it accidentally caused psychological disorders, it would be put through rigorous testing before being loaded onto a dishwasher or pushed into every system available. This is the Sacklers of tech, grifting every last cent out of people who are already struggling, promising all the while that it's non-addictive.
• glyph@mastodon.social:

  Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.

  But I'm still waiting.

  onepict@chaos.social (#83):

  @glyph I'm honestly wondering just how much undiagnosed long COVID is playing into this.

  I'm slowly recovering now, well, as much as I can, but at the time I was painfully aware weird stuff was happening to my brain, because I got caught in the first wave in March 2020.

  So I am wondering if the addictive effects of using these LLMs, along with existing cognitive damage, are a partial cause.
• glyph@mastodon.social: [quotes the "absolutely unhinged risks" post above]

  mortonrobd@mas.to (#84):

  @glyph Many years back I read something about how smarter people are sometimes easier to fool because they think they're too smart to be fooled. I've observed a few instances in the martial arts world where people see one "body magic" trick and, next thing, they're down a rabbit hole.
• glyph@mastodon.social:

  If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.

  hllizi@hespere.de (#85):

  @glyph All the possible harm is just mental, and the mental - this seems to be an unspoken tenet held by many - isn't really real. Mental health in general doesn't really seem to be taken that seriously before its lack manifests physically as chainsaw wielding or some other eccentricity. Nothing to see here, just move on.
• glyph@mastodon.social:

  1. YES THEY ARE.

  They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet we turn around, and there all the bugs are.

  With LLMs, we can look at the mission-critical AWS modules and ask, after the fact: were they vibe-coded? AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

  jaypeach53@calckeymusic.social (#86):

  @glyph@mastodon.social the only problem with your analysis is that you refer to vibe-coding. Slop-coding is the proper term.
• glyph@mastodon.social:

  I don't want to be a catastrophist, but every day I am politely asking: "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.

  nicuveo@tech.lgbt (#87):

  @glyph my hypothesis on that is that, by virtue of literally being encodings of lexical fields and semantic proximity, and by virtue of their response being the logical continuation of the user's input, LLMs statistically pick up on and amplify subtle tendencies and biases in the user: if you feed one input that uses vocabulary and idioms semantically linked to low self-esteem, the model will more likely compute a reply with similar undertones, feeding said emotion. they amplify whatever emotion you put in, even accidentally.
  (thread here: https://tech.lgbt/@nicuveo/116210599322080105 )
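A toy way to see the dynamic nicuveo is hypothesizing (this is an illustration of the statistical argument only, with an invented corpus, and is not how any real LLM is implemented): a next-word sampler whose continuations are biased toward vocabulary already present in the prompt will, by construction, echo the prompt's register back at the user.

```python
import random

# Invented toy corpus: each word maps to candidate next words and weights.
CHAIN = {
    "i": {"failed": 2, "managed": 2, "tried": 2},
    "failed": {"again": 3, "once": 1},
    "tried": {"again": 2, "hard": 2},
    "managed": {"fine": 3, "somehow": 1},
}

def continuation(prompt: str, start: str = "i", steps: int = 4) -> str:
    """Sample a continuation, doubling the weight of any candidate word
    that already appeared in the user's prompt (the 'amplification')."""
    seen = set(prompt.lower().split())
    out, word = [start], start
    for _ in range(steps):
        options = CHAIN.get(word)
        if not options:
            break
        candidates = list(options)
        weights = [w * (2 if c in seen else 1) for c, w in options.items()]
        word = random.choices(candidates, weights=weights)[0]
        out.append(word)
    return " ".join(out)

# A prompt already containing "failed" makes "i failed again..." likelier.
print(continuation("today i failed at everything"))
```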
• glyph@mastodon.social:

  The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong: a transparent and honestly somewhat pathetic attempt at self-justification for his own use of AI tools for writing assistance. Which is deeply ironic, because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/

  jaypeach53@calckeymusic.social (#88):

  @glyph@mastodon.social Cory has outsized influence considering his role as AI ambassador. His writings for the past few years reek of AI slop: book after book of rehashes of the same topic. I stopped buying his books.
• alys@selfy.army:

  @glyph i don't know if it's the best analogy at the end of the day, but my brain keeps going to lead pipes and asbestos. if we're not sure it's safe, should we be in such a hurry to put it in everything?

  happyborg@fosstodon.org (#89):

  @alys FYI, the first health concerns about asbestos were raised in 1907, and yet it was still legal to use it in UK buildings in, wait for it... 1999.

  So the lesson with #LLMs is...?
• glyph@mastodon.social: [quotes the "basilisk" post above]

  janeishly@beige.party (#90):

  @glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not-quite-right tone, it's very hard to dismiss it. The cognitive load is consequently enormous.
• onepict@chaos.social: [quotes the long COVID post, #83 above]

  crazyjaneway@open-ground.org (#91):

  @onepict @glyph I suspect yes, because my non-tech friends who use it more are using it as assistive tech to keep them working through health things…