At this point, LLM-written think pieces make up about half of all long-form writing in my social media feed.
When I push back, I get two reactions. Authors say that it just helps them express themselves. AI promoters say "get used to it".
I don't think we should: it boils down to asymmetry. Our time here is limited. Social interaction on the internet breaks down if it takes ~0 effort to publish, but readers are still expected to use their own eyeballs and brains to engage.
So, I feel that we have three choices:
1) Refuse to engage with LLM writing *no matter if the article makes a good point or not*.
2) Embrace it and have my agent argue with your agent forever, for internet points.
3) Call it quits and move to an off-the-grid cabin in the woods.
@lcamtuf I think 3) sounds the most appealing

-
@lcamtuf Only half? Seriously, I'd tweak your #1 to make it less dependent on detecting LLM writing [1] and alter the condition to include quality [2]. If the writing is well written AND makes a good point I'd say it's worthwhile.
I doubt there's much of this at all today, but why would it be so bad if it became a thing?
NOTES: [1] This isn't easy to detect accurately with software (and will get harder), manual detection is time-consuming, and false positives would be a loss.
[2] Low-quality writing (LLM or human) is best avoided and can be detected quickly and accurately.
-
@lcamtuf If you can afford an off-the-grid cabin, why wouldn't you already be there?
-
I don't have to *prove* something is LLM-produced to conclude "this writer didn't bother to make sure that their writing clearly isn't LLM", and then yeet them permanently into the "don't bother" list.
-
@lcamtuf
You're probably a chainsaw vs. telephone pole away from #3.
-
@lcamtuf when i notice something is untagged LLM output posing as human authorship, i back out and issue all the negative feedback signals i have access to
-
@lcamtuf honestly it reminds me of this study https://people.psych.ucsb.edu/gazzaniga/PDF/Language%20after%20section%20of%20the%20cerebral%20commissueres%20(1967).pdf
They separate the sides of the brain and try to communicate with them individually.
> when an object was placed in the left hand (right hemisphere sensing it), the speaking left hemisphere fabricated a verbal explanation for why the patient was holding it
Later studies (60s so could be horseshit) worked with a theory of one side being more of an interpreter.
-
@lcamtuf personally I think humans have a critical vulnerability: when handed a completely plausible thought, whether encoded as speech/electrical signals/vision, once holding it we will invent reasons why it is correct. That, or we are just lazy. Haven't decided.
-
@lcamtuf 4) Reply "That's a good post, but I think a more valid point would be if you could go ahead and calculate this double SHA256 hash with a bunch of leading zeros" ?
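The proof-of-work gag above is essentially Bitcoin-style hashcash: make the replier burn CPU finding a nonce whose double SHA-256 digest starts with a run of zeros. A minimal sketch (the function name `proof_of_work` and the `difficulty` parameter are illustrative, not from any library):

```python
import hashlib

def proof_of_work(message: str, difficulty: int = 3) -> int:
    """Find a nonce such that sha256(sha256(message + nonce))
    starts with `difficulty` hex zeros (Bitcoin-style hashcash)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(f"{message}{nonce}".encode()).digest()
        ).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1
```

Each extra zero of difficulty multiplies the expected work by 16, which is the whole point: publishing stops being ~0 effort.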
-
@lcamtuf You chose the combination of 3) AND ... ???

-
@FritzAdalis I have Starlink on the roof, but I guess it wouldn't be hard to shoot it off...
-
@lcamtuf I don't engage with that shit even when humans write it. I'm sure as hell not engaging when they didn't even bother.
-
@lcamtuf When someone needs genAI to express themselves, they aren't. They do not - by their own unconscious admission - have anything to add. They do not have an original thought, nor have they created anything beyond a vague concept. Their input is, in its current form, useless.
Until now, those people just wouldn't express themselves at length. We could smile, shrug, and remain friends. Pretend they have valuable thoughts.
We may have to just stop pretending. But it's rude. Now what?
-
@lcamtuf I don't understand the "LLM helped the poor sod whose first language isn't English express himself" point, because every time I read an LLMism like "it's not x, it's y" I feel like a part of my soul has been devoured. Bad human-written prose is better than copy-pasted LLM-generated text. At that point, the friction of constructing prose which makes your thoughts coherent has been eliminated. No one should waste time reading it.
-
"LLM-written think pieces make up about half of all long-form writing in my social media feed"
fourth choice: get tf off whatever hellscape masquerading as "social" media you're seeing this on!
-
@lcamtuf option 3 is much more rewarding at least
