I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

glyph@mastodon.social:

RE: https://mamot.fr/@pluralistic/116219642373307943

I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me, and especially not in my mouth.

#9 miss_rodent@girlcock.club (in reply to the post above):

    @glyph Asbestos - to use a comparison he has used himself - was also a 'normal technology'.
    But then people came to their senses and decided to rip it out of the walls because of the effects of exposure to it.
    See also: Lead paint

glyph@mastodon.social:

If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.

#10 glyph@mastodon.social (in reply to the post above):

More to the point, though: in this metaphor where you're getting a potentially infected scrape at work, we are living in the pre-germ-theory age of AI. We are aware that it might be dangerous sometimes, but we don't know to whom or why. We are attempting to combat miasma with bloodletting right now, and putting the miasma-generator in every home before we know what it's actually doing.

#11 glyph@mastodon.social (in reply to #10):

        For me, this is the body horror money quote from that Scientific American article:

        "participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

        So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

        If you can see it, the basilisk has already won.

glyph@mastodon.social:

Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical establishment scicomm types soon, because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have great analogs in existing medical literature.

#12 miss_rodent@girlcock.club (in reply to the post above):

          @glyph Honestly - speaking as someone with a psychotic disorder, but who is not a medical professional - "AI psychosis" seems pretty appropriate, from the behaviours I've seen it result in? Even in more mild cases of people babbling inane bullshit, but not like, so far off reality that they're at risk of physical harm (to themself or others)

aud@fire.asta.lgbt:

@froztbyte@mastodon.social @glyph@mastodon.social "yes bots", as opposed to "yes men"?

#13 froztbyte@mastodon.social (in reply to the post above):

@aud @glyph Hmm, sycophaintic? Has the extra possibility of the word tail being modifiable to fit: -ist, -istry, etc.

Downside is it requires decent phonetic use, and that might not survive in dialects outside text.

#14 glyph@mastodon.social (in reply to #11):

              Now, for rhetorical effect, I'm obviously putting this fairly dramatically. Cory points out that people have been doing this *to each other* mediated by technology, in emergent and scary ways, with no need for AI. He shows that people prone to specific types of delusions (Morgellons, Gang Stalking Disorder) have found each other via the Internet and the simple availability of global distributed communication has harmed them. But obviously that has benefits, too.

#15 aud@fire.asta.lgbt (in reply to #13):

                @froztbyte@mastodon.social @glyph@mastodon.social oooh, I kinda like it, even though it's subtle and sort of easy to miss

                sycophAInt

                sycophaint

                plus it rhymes with "taint", which is appropriate.

#16 glyph@mastodon.social (in reply to #14):

                  I'm open to a future where we do some research and figure out the limits of how AI influence works, and where the safety valves are, and also the extent to which it's *fine* that AI can influence our views because honestly many different kinds of stimuli can influence our views, not least of which is each other. But it sure looks right now like it has a bunch of very dangerous feedback loops built-in, and it's not clear how to know if you're touching one.

#17 miss_rodent@girlcock.club (in reply to #12):

                    @glyph I'm sure the mechanism - how they got there - has more in common with emotional abuse and brainwashing/indoctrination techniques.
                    But the end result - that detachment from reality - is kind of the core of the psychosis experience, and trying to find ways to keep tethered and avoid drifting off into wonderland like that is a persistent part of my day-to-day life.
                    Which is part of *why* I avoid the chatbots like they're carrying the plague.

#18 mcc@mastodon.social (in reply to #14):

                      @glyph Wait. Does Doctorow try to soften the edge of the fact that LLMs do a bad thing to people by pointing out sometimes people do bad things to people? Isn't that just two bad things?

                      If his point is it's still bad but it isn't *novel*, isn't the fact of a Fortune 500 company doing it still novel?

#19 froztbyte@mastodon.social (in reply to #11):

                        @glyph oh man, that’s dangerously close to giving an abstinence-based push some fuel and whew does my rather hedonistic ass have some thoughts on that

#20 miss_rodent@girlcock.club (in reply to #11):

                          @glyph "Interestingly, the people in the study didn’t tend to think the AI autocomplete suggestions were biased or to notice that they had changed their own thinking on an issue in the course of the study. Warning the participants that they might be exposed to misinformation by the AI didn’t temper the persuasive effect either."

                          Also, being aware that it can fuck with your head does not make you less susceptible to it fucking with your head. So you can't really judge if you could use it safely.

#21 glyph@mastodon.social (in reply to #18):

                            @mcc Let me just give you the pull quote:

                            """
                            For many programmers – including several of my acquaintance, whom I know to be both thoughtful and skilled – AI is another plugin, one they find useful enough to be modestly enthusiastic about.

                            It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules…
                            """

#22 glyph@mastodon.social (in reply to #21):

                              @mcc He thinks the technology is capable of many horrors but it can also be useful for pedestrian things.

#23 miss_rodent@girlcock.club (in reply to #20):

                                @glyph Nor can you reliably judge if it has or has not already fucked with your head.

#24 mcc@mastodon.social (in reply to #22):

                                  @glyph That sounds to me like a way to get horrors but you're probably not the person to convince

#25 glyph@mastodon.social (in reply to #16):

                                    But, as Cory puts it:

                                    """
                                    It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale.
                                    """

                                    I had a very visceral emotional reaction to this particular paragraph, and I find it very important to refute. Here are two points to consider:

#26 glyph@mastodon.social (in reply to #25):

                                      1. YES THEY ARE.

                                      They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.

With LLMs, we can look at the mission-critical AWS modules and ask, after the fact: were they vibe-coded? AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

#27 hailey@hails.org (in reply to #25):

                                        @glyph but they are, at scale, generating tech debt

#28 glyph@mastodon.social (in reply to #26):

                                          2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good
