Everyone knows (or should know) that as fascinating as your dreams are to *you*, they're eye-glazingly dull to others.
1/
-
The same is true of your conversations with chatbots. Even if you find these conversations interesting, you should never assume that anyone else will be entertained by them. In the absence of an explicit reassurance to the contrary, you should presume that recounting your AI chatbot sessions to your friends is an imposition on the friendship, and forwarding the transcripts of those sessions doubly so (perhaps triply so, given the verbosity of chatbot responses).
2/
I will stipulate that there might be friend groups out there where pastebombs of AI chat transcripts are welcome, but even if you work in such a milieu, you should *never, ever* assume that a stranger wants to see or hear about your AI "conversations." Tagging a chatbot into a social media conversation with a stranger and typing, "Hey Grok‡, what do you think of that?" is like masturbating in front of a stranger.
‡ Ugh
It's rude. It's an imposition. It's gross.
3/
-
There's an even *worse* circle of hell than the one you create when you nonconsensually add a chatbot to a dialog: the hell that comes from reading something a stranger wrote, and then asking a chatbot to generate "commentary" on it and emailing it to that stranger.
4/
-
Even the AI companies pitching their products claim that they need human oversight because they are prone to errors (including the errors that the companies dress up by calling them "hallucinations"). If you've read something you disagree with but don't understand well enough to rebut, and you ask an AI to generate a rebuttal for you, *you still don't understand it well enough to rebut it*.
5/
-
You haven't generated a rebuttal: you have generated a blob of plausible sentences that may or may not constitute a valid critique of the work you're upset with - but until a human being *who understands the issue* goes through the AI output line by line and verifies it, it's just stochastic word-salad.
6/
-
Once again: the act of prompting a sentence generator to create a rebuttal-shaped series of sentences *does not impart understanding to the prompter.* In the dialog between someone who's written something and someone who disagrees with it, but doesn't understand it well enough to rebut it, *the only person* qualified to evaluate the chatbot's output is the original author - that is, the stranger you've just emailed a chat transcript to.
7/
-
Emailing a stranger a blob of unverified AI output is not a form of dialogue - it's an attempt to coerce a stranger into unpaid labor on your behalf. Strangers are not your "human in the loop" whose expensive time is on offer to painstakingly work through the plausible sentences a chatbot made for you for free.
8/
-
Remember: even the AI companies will tell you that the work of overseeing an AI's output is valuable labor. The fact that you can costlessly (to you) generate infinite volumes of verbose, plausible-seeming topical sentences in no way implies that the people who actually think about things and then write them down have the time to mark your chatbot's homework.
9/
-
That is a fatal flaw in the idea that we will increase our productivity by asking chatbots to summarize things we don't understand: by definition, if we don't understand a subject, then we won't be qualified to evaluate the summary, either.
There simply is no substitute for learning about a subject and coming to understand it well enough to advance the subject, whether by contributing your own additions or by critiquing its flaws.
10/
-
That's not to say that we shouldn't aspire to participate in discourse about areas that seem interesting or momentous - but asking a chatbot to contribute on your behalf does not impart insight to you, and it is a gross imposition on people who *have* taken the time to understand and participate using their own minds and experience.
11/
-
Image:
Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
-
@pluralistic Well said.
-
@pluralistic
...This leaves me curious as to whether someone did this to you. >_>;
-
-
@pteryx Daily.
@pluralistic
Oh geez. Awful that you have to not only go through that, but so *much* of that.
-
It's the latest iteration of "Watch this two-hour video, which will counter all your arguments!"
Flooding the zone with time-wasting bullshit, rather than actual engagement with the discussion.
-
@juergen_hubert This is exactly right.
-
@pluralistic education and art are processes, not merely products. You get educated by engaging in the process, not by having outputs to be measured (which you can buy or have a chatbot make).
If you read the solutions at the end of a maths book and copy them into an answer sheet, you haven't learned anything.
-
@pluralistic … and I am seeing it more and more in professional circles. “I’m starting a conversation!”, I’ve been told. No, you’re imposing on me, an actual expert on certain topics, the obligation of correcting the word salad you had generated and then vomited into the world. You’re then adding the pus-filled cherry to this shit sundae by suggesting that you deserve credit, rather than opprobrium, for this selfish act.
-
@pluralistic @pteryx I'm just dumbfounded people have the nerve to do this to you. Just gobsmacked.
-
@pluralistic Sometimes I'll have an LLM argue me out of making a post I know won't help.
There's a disconnect in how different people use them that chatbot vendors need to contend with.