
There seem to be two distinct kinds of “chatbot psychosis” happening right now:

Uncategorized · llmslop · 26 posts, 17 posters
  • arroz@mastodon.social

    @osma @colincornaby @eschaton I think you’re mixing tools and content. A painting is not done “by hand”. Painters use tools, like brushes and many other objects. That’s one thing. The other thing is asking a machine “create an image of a sunset over the ocean seen from a cliff, with a beach in the frame, in cubist style” and simply accepting what it spits out as art, and worse, as *their* art. They didn’t create it, they ordered a machine to create it (by plagiarism, usually).

    osma@mas.to
    #16

    I think you misinterpret me. But thanks for the explanation; never could have imagined that myself.
    @arroz

    • eschaton@mastodon.social

      There seem to be two distinct kinds of “chatbot psychosis” happening right now:

      1. Becoming delusional about themselves and the world as a result of being glazed nonstop by the friend in their computer, thinking they’re inventing new physics, discovering mystical secrets, etc. and becoming manic.

      2. Becoming delusional about what LLMs are capable of and how effective they are, as a result of developing a reliance upon them, and becoming fanatical in their promotion and defense.

      #ai #llm #slop

      bitsavers@oldbytes.space
      #17

      @eschaton

      I boosted a post because this all can be explained as "the psychic's con"

      • michaelgemar@mstdn.ca

        @eschaton Does #2 include CEOs, or is firing huge swathes of your staff and replacing them with AI a different type of psychosis?

        simonzerafa@infosec.exchange
        #18

        @michaelgemar @eschaton

        That's almost a combination of Type 1 and Type 2, in that together they can lead to unrealistic and delusional beliefs about how effective LLM output can be 🙂

        Type 12 (combined psychosis) or Type 3? 🙂🤷‍♂️

        • eschaton@mastodon.social

          Type 2 can be summed up as “How dare you presume to tell me whether I’m allowed to use an LLM if I want to?!” Just an absolutely incredible degree of entitlement.

          #ai #llm #slop

          janl@narrativ.es
          #19

          @eschaton amen. Relatedly: https://narrativ.es/@janl/114566975034056419

          • eschaton@mastodon.social

            ruenahcmohr@infosec.exchange
            #20

            @eschaton Which category does "I have nobody to talk to but the AI" fit into?

            • paul@tapbots.social

              @eschaton I’m curious whether you think it’s all plagiarism, or if some uses of LLMs are not. I asked it today to go look through some classes and add a define everywhere I was hardcoding a specific constant. I find it hard to accept that as plagiarism under any definition that makes sense to me. Whereas "write a web browser", I’d imagine, is going to just spew out a ton of other people’s code.

              __d@mastodon.social
              #21

              @paul @eschaton I like to imagine that instead of the LLM behind the prompt, there’s a person. Instead of paying Anthropic/whoever, I’m paying a human. All the generated code is written by the hidden person. All those constant values replaced by defines were written by the person behind the interface.

              Now, do I consider the result to be 100% my own work? I find that I cannot.

              • eschaton@mastodon.social

                As an example, see the incredible escalation in response to me saying that the output of an LLM does not represent a developer’s own work: https://news.ycombinator.com/item?id=47344155

                The slopmonger refuses to accept that what they’re doing meets the academic definition of plagiarism. Instead they insist that I must not understand LLMs and that I need to get out of the way and out of the industry because what they’re doing is the way of the future.

                #ai #llm #slop

                europlus@social.europlus.zone
                #22

                @eschaton “You’re a stupid poo-poo head… Poo-poo Head” 😝

                • __d@mastodon.social

                  ahltorp@mastodon.nu
                  #23

                  @__d @paul @eschaton I also often use this "LLM as a person" way of looking at it, especially in academic settings when I try to explain plagiarism. As long as it is only used as one tool for explanation, and not the only one, I find that it works quite well.

                  Some people don't even seem to understand that having someone else write it for you is plagiarism, though.

                  • eschaton@mastodon.social

                    @michaelgemar It absolutely includes CEOs, CTOs, pundits, and the like. However it also includes the people who get extremely angry when an Open Source project says “no, we will not take your contribution to our project if you used an LLM to create it, because it’s not your work.” They can go to Dennis Reynolds levels of unbound rage almost instantly and it’s really something to see.

                    abucci@buc.ci
                    #24
                    @eschaton@mastodon.social @michaelgemar@mstdn.ca I think the anger response is at least partly explainable by this: https://buc.ci/abucci/p/1773412163.748396

                    The CEO response may be totally explained by that...
                      • michaelgemar@mstdn.ca

                      abucci@buc.ci
                      #25
                      @michaelgemar@mstdn.ca For what it's worth, the majority of layoffs have been done for conventional economic reasons, or because companies (esp. tech companies) overhired near the beginning of the COVID pandemic. They are using AI as an excuse, hoping AI psychosis will distract from the otherwise-obvious conclusion that they made poor management decisions. @eschaton@mastodon.social
                        • eschaton@mastodon.social

                        nielsa@mas.to
                        #26

                        @eschaton Yeah, but I don't really think the "psychosis" analogy works for the latter. Delusion, sure.
