There seem to be two distinct kinds of “chatbot psychosis” happening right now:
-
@eschaton I’m not sure exactly how to put it - but I just don’t have as much interest in something if I learn the code was generated.
Maybe the best metaphor I have is in art. I have art hanging on my wall that I admire because it’s nice but also because it was made by hand. I can see the craft and work that went into it.
Now suppose someone decided to generate AI art instead. That doesn’t mean I’m going to feel the same about it, or think that they’re as much of an artist.
@colincornaby @eschaton Can it be art if it's not made by hand? Lots of examples come to mind. Jewelry may be reproduced by casting. Prints are, well, prints. Architecture is manufactured (by machines as well as people other than the designer). Even in music there are loop machines, synthesizers, etc. But an author or artist is at the core.
-
@osma @eschaton I think it depends. I don't have the same relationship with prints - but I also own some because they're reproductions of the original artwork. I would assume the same is true of jewelry.
I think even if you wanted to call AI art "art" it doesn't require the same emotional connection or recognition. In the same way that someone who brings home McDonalds for dinner doesn't need to be treated as if they cooked the meal.
-
@osma @colincornaby @eschaton I think you’re mixing tools and content. A painting is not done “by hand”. Painters use tools, like brushes and many other objects. That’s one thing. The other thing is asking a machine “create an image of a sunset over the ocean seen from a cliff, with a beach in the frame, in cubist style” and simply accepting what it spits out as art, and worse, as *their* art. They didn’t create it, they ordered a machine to create it (by plagiarism, usually).
-
@colincornaby @eschaton Some prints and some jewelry are reproductions. Others have been designed to be reproduced - the medium being part of the piece.
It's easy to agree that slop at the scale of McDonald's isn't equivalent to a lovingly crafted original; it's not so easy to set a bright line behind which everything is different.
-
@arroz I think you misinterpret me. But thanks for the explanation, never could have imagined that myself.
-
There seem to be two distinct kinds of “chatbot psychosis” happening right now:
1. Becoming delusional about themselves and the world as a result of being glazed nonstop by the friend in their computer, thinking they’re inventing new physics, discovering mystical secrets, etc. and becoming manic.
2. Becoming delusional about what LLMs are capable of and how effective they are, as a result of developing a reliance upon them, and becoming fanatical in their promotion and defense.
I boosted a post because all of this can be explained as "the psychic's con."
-
@eschaton Does #2 include CEOs, or is firing huge swathes of your staff and replacing them with AI a different type of psychosis?
That's almost a combination of Type 1 and Type 2, in that both together can lead to unrealistic and delusional levels of belief in how effective LLM output can be.

Type 12 (combined psychosis) or Type 3?
-
@eschaton amen. Relatedly: https://narrativ.es/@janl/114566975034056419
-
@eschaton which does "I have nobody to talk to but the ai" fit into?
-
@eschaton I’m curious if you think it’s all plagiarism, or if some uses of LLMs are not? I asked it today to go look through some classes and add a define everywhere I was hardcoding a specific constant. I find it hard to accept that as plagiarism under any definition of it that makes sense to me. Whereas doing “write a web browser” I’d imagine is going to just spew out a ton of other people’s code.
@paul @eschaton I like to imagine that instead of the LLM behind the prompt, there’s a person. Instead of paying Anthropic/whoever, I’m paying a human. All the generated code is written by the hidden person. All those constant values replaced by defines were written by the person behind the interface.
Now, do I consider the result to be 100% my own work? I find that I cannot.
-
As an example, see the incredible escalation in response to me saying that the output of an LLM does not represent a developer’s own work: https://news.ycombinator.com/item?id=47344155
The slopmonger refuses to accept that what they’re doing meets the academic definition of plagiarism. Instead they insist that I must not understand LLMs and that I need to get out of the way and out of the industry because what they’re doing is the way of the future.
@eschaton “you’re a stupid poo-poo head…”
-
@__d @paul @eschaton I also often use this "LLM as a person" way of looking at it, especially in academic settings when I try to explain plagiarism. As long as it is only used as one tool for explanation, and not the only one, I find that it works quite well.
Some people don't even seem to understand that having someone else write it for you is plagiarism, though.
-
@michaelgemar It absolutely includes CEOs, CTOs, pundits, and the like. However it also includes the people who get extremely angry when an Open Source project says “no, we will not take your contribution to our project if you used an LLM to create it, because it’s not your work.” They can go to Dennis Reynolds levels of unbound rage almost instantly and it’s really something to see.
@eschaton@mastodon.social @michaelgemar@mstdn.ca I think the anger response is at least partly explainable by this: https://buc.ci/abucci/p/1773412163.748396
The CEO response may be totally explained by that...
-
@michaelgemar@mstdn.ca For what it's worth, the majority of layoffs have been done for conventional economic reasons, or because companies (esp. tech companies) overhired near the beginning of the COVID pandemic. They are using AI as an excuse, hoping AI psychosis will distract from the otherwise-obvious conclusion that they made poor management decisions. @eschaton@mastodon.social
-
@eschaton Yeah, but I don't really think the analogy of "psychosis" works for the latter. Delusion, sure.
-