CIRCLE WITH A DOT

I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.

190 Posts 72 Posters 243 Views
  • miss_rodent@girlcock.club wrote:

    @MrBerard @glyph Psychosis is a broad range? It covers a range of severities - most days, I read to those who don't know me as "kinda weird", most don't think "schizo" - but on my worse days, I definitely read as psychotic.
    But - from *my* side of that, the difference is not 'psychotic' or 'not psychotic', it's just a question of how high the volume & intensity are set. The voices haven't *stopped* - ever - since I was 13, for example.

    mrberard@mastodon.acm.org replied (#113):

    @miss_rodent @glyph

    That's an interesting example, because my understanding is that hearing voices is more common than people think, and often not accompanied by the symptom cluster that would lead to a psychosis diagnosis.

    I think the problem is the underlying model for diagnostic criteria, which was already defective IMO even before AI complicated the picture.

    Lexically, a single term blurs the nuances. For a broader, umbrella term, 'AI brainrot' seems more appropriate IMO.

  • miss_rodent@girlcock.club wrote:

      @MrBerard @glyph my point being: a lot of the more minor oddities - changes to speech and writing patterns, being swayed more easily by nonsense, groundless beliefs defended disproportionately strongly in a manner resembling delusions being challenged, the cognitive backflips involved in preserving those beliefs against mounting contrary evidence, etc.
      All read as potentially 'psychotic' to me - even in the tame case of 'It's bad, except this one little niche exception that I'll defend fiercely!'

      mrberard@mastodon.acm.org replied (#114):

      @miss_rodent @glyph

      Again, I am not disagreeing with this point, just with the practical utility of choosing to use the term based on it.

  • glyph@mastodon.social wrote:

        Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.

        But I'm still waiting.

        nielsa@mas.to replied (#115):

        @glyph Very good analysis, thank you, I'll be passing this around 😁

  • happyborg@fosstodon.org wrote:

          @alys FYI the first health concerns with asbestos were being raised in 1907 and yet it was still legal to use it in UK buildings in, wait for it... 1999.

          So the lesson with #LLMs is...?

          nielsa@mas.to replied (#116):

          @happyborg oh no

  • nielsa@mas.to wrote:

            @glyph I've been using "AI delusion" for these milder cases. As I understood it, "AI psychosis" pertains only to those cases where people fully lose their grasp of reality...

            I've seen it used colloquially as "being wrong because of or about AI", but that always hit me like people calling someone "crazy" for doing something odd or impulsive—and that word use isn't really a good look imo.

            nielsa@mas.to replied (#117):

            @glyph Finished Doctorow's thread and... he spends so long arguing that he should be allowed to use an edgy analogy if it works well... but then it kinda really just doesn't work well in context?? He describes (granted, delusional, poorly analyzed) things that capitalism has been making people do forever, but now it's done with AI flavor, and he really wants to call that... psychosis? Like what.

  • glyph@mastodon.social wrote:

              Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see.

              Because the interesting, novel thing about LLMs, the thing that makes them dangerous and interesting, is that they are, by design, epistemic disruptors.

              They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.

              jacob@social.jacobian.org replied (#118):

              @glyph “You must dismiss all experiences of LLM users”

              This is where you lose me. There’s no universe in which I’m comfortable dismissing the lived experiences of people that categorically. The most important lesson I’ve learned from decades of activism is “believe people when they tell you about their experiences” — and I see no reason to change now. I’m not willing to give up my curiosity and empathy and I hope you aren’t either.

  • kirakira@furry.engineer wrote:

                @glyph i've used the term "neural asbestos" before and it feels a lot like that may be the type of thing we're dealing with

                delta_vee@mstdn.ca replied (#119):

                @kirakira @glyph "metacognitive sandblaster" is mine

  • glyph@mastodon.social wrote:

                  I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.

                  delta_vee@mstdn.ca replied (#120):

                  @glyph From everything I've seen, there's some kind of metacognitive subversion and/or corrosion going on - it's the throughline I see from the METR dev study through the LSAT confidence one to the recent "cognitive surrender" paper. Any kind of sustained exposure just obliterates the normal self-regulation and self-evaluation

  • pythonbynight@hachyderm.io wrote:

                    @glyph While this is purely anecdotal, it's darkly comical that just yesterday, at work, a "chief architect" explained and described their claude code setup as ... "giving a monkey a machine gun" ... with no irony or shame.

                    His point was very clearly that he wasn't sure he could trust his setup, but it was still certainly worth it for the perceived gains.

                    While I've not made many arguments pro/against LLM usage in general (based on how useful they are or aren't), this admission seemed really odd to me.

                    We're being asked to implement these tools in our workflows, but we're not given guidance on how to do so safely.

                    And I'm not against experimentation and learning new things--but I think that has its place within a certain context.

                    You want to give a monkey a machine gun? Well, find someplace safe to do so, and hope nobody gets hurt... but, like, why should I do the same thing?

                    ddelemeny@mastodon.xyz replied (#121):

                    @pythonbynight Cory's bit on the byzantine premium echoed your thread in some way. "All this money can't be for nothing, all these people can't be so irrational, there has to be something under that pile of crap."
                    @glyph

  • jacob@social.jacobian.org wrote (#118, above):

                      mavnn@bonfire.mavnn.eu replied (#122):

                      @jacob@social.jacobian.org @glyph@mastodon.social I think I'm currently at a point in my journey where I try very hard to believe people when they talk about what they have experienced internally, but have become increasingly sceptical of people's ability to judge accurately what actually happened and the results (in both cases for pretty much the same reasons as Glyph: I've noticed the gap between my #adhd internal experience and what actually happened in the real world).

                      So "using an LLM made me feel like a god-like developer!" I'll completely take as your experience. "My productivity went up by 15 times after I started using agents" (an actual claim I have seen) will leave me asking for hard evidence and possibly a scientific study.

                      It's awkward that we use 'experience' to cover both, and I had the same reaction you're expressing when I read that section. But assuming (from the context) that Glyph means the second type of experience, I think he has a strong argument, if not the clearest wording.

  • delta_vee@mstdn.ca wrote (#119, above):

                        bluewinds@tech.lgbt replied (#123):

                        @delta_vee @kirakira @glyph Leaded gasoline.

  • jacob@social.jacobian.org wrote (#118, above):

                          glyph@mastodon.social replied (#124):

                          @jacob Perhaps "dismiss" wasn't the best word choice there, but that's why I included "even if the LLM user is yourself". I dismiss _my own_ experience of LLMs, _as evidence of their quantitative efficacy_. As evidence of their subjective experience, of course it is valid. If it didn't produce the intense subjective experience then there wouldn't be a problem!

                          There are two reasons that activism teaches us to believe people's lived experience, and neither apply here: …

  • glyph@mastodon.social wrote (#124, above):

                            glyph@mastodon.social replied (#125):

                            @jacob

                            1. The distance between an account of an experience of oppression and the actual event of the oppression is very short. "That man assaulted me" / "that cop beat me". The only way to think that people saying these things are not relaying true information is to believe that they are intentionally lying for personal gain, which just isn't true. (And that's not what I believe about LLM users.)

  • glyph@mastodon.social wrote (#125, above):

                              glyph@mastodon.social replied (#126):

                              @jacob

                              2. The filter of oppression *itself* means we only hear the accounts of people who are not only probably telling the truth in the first place but had to push through aggressive filtering to even get heard. If you hear one complaint of police violence or SA there's probably hundreds more where that came from. That also doesn't apply. The archetypical LLM user is not silenced by oppression, they're being massively amplified by the largest propaganda apparatus on earth.

  • ddelemeny@mastodon.xyz wrote (#121, above):

                                pythonbynight@hachyderm.io replied (#127):

                                @ddelemeny @glyph Yup, I read that and smirked ...

                                Again, "investing" in open source tooling that will speed up your CI/CD is almost a no-brainer for an organization. They spend zero dollars and reduce the costs/risks associated with the problem the tool is designed to solve. But even then, there are security risks from supply chains/dependencies that are often scrutinized to no end.

                                Investing in LLM tooling is supposedly "cheap" (due to subsidies), but the risks include vendor lock-in, security vulnerabilities, and weakened worker autonomy (among others). Yet there seems to be zero scrutiny in spite of that.

  • glyph@mastodon.social wrote:

                                  The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong: a transparent and honestly somewhat pathetic attempt at self-justification for his own use of AI tools for writing assistance. Which is deeply ironic, because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/

                                  wesdym@mastodon.social replied (#128):

                                  @glyph You're so extremely full of yourself that I didn't even finish reading your comment, and I no longer care about anything you have to say. Go touch grass.

  • glyph@mastodon.social wrote (#126, above):

                                    glyph@mastodon.social replied (#129):

                                    @jacob Consider another type of "lived experience" — the racist who says "DEI took my job". It would be a mistake to think that this person is *lying* about their experience — they are clearly motivated to their racism by genuine animus, and maybe they did lose their job — but their indirect, abstract experience of the nebulous entity of "DEI" is not reliable, particularly not in terms of employment statistics. So we are more skeptical in that case, and we look at the numbers.

  • pythonbynight@hachyderm.io wrote (#127, above):

                                      pythonbynight@hachyderm.io replied (#130):

                                      @ddelemeny @glyph Early on at my current job, I built a tool that I thought was very useful and mentioned that I would like to open source it...

                                      I was ultimately shut down. In the interest of "intellectual property" and other sorts of red tape... And I didn't really feel like fighting it.

                                      So, I couldn't share my tool with the commons, but there are absolutely no qualms about feeding my code to a company that WE PAY, so they can ingest it and charge others for benefitting off of it? ...

                                      Sigh...

  • glyph@mastodon.social wrote (#124, above):

                                        jacob@social.jacobian.org replied (#131):

                                        @glyph You’ve reasoned yourself into a position where anyone who says anything contrary to you is either delusional or lying. You might be right — I don’t think you are but who knows maybe — but even so, that’s just not a position I’m willing to take about anything ever.

  • glyph@mastodon.social wrote (#129, above):

                                          jacob@social.jacobian.org replied (#132):

                                          @glyph Honestly? The left would be in a better place if we didn't instantly dismiss that person but actually explored that feeling and engaged with him. "You're wrong" may be true, and it feels good to say, but "what makes you feel that way?" is a much better opening if you want to win people over to your side.
