There are lots of ways that AI is eroding the intellectual commons, but a subtle one is that now the discussion around every single essay and blog post is immediately dominated by a debate over whether or not it was written with AI
-
@jalefkowit the exact inverse problem is that i saw this piece posted on metafilter a couple of days ago and i felt crazy because nobody else was talking about the fact that the illustrations are almost definitely ai, which feels egregious considering the topic https://bachmanrachel.substack.com/p/what-children-actually-want-from
@hannah Yeah, I would say it's all of a piece; you can't engage with the substance of a work anymore without first establishing how much of it is from the author's own hand and how much is AI, and there's no independent way to do that, so you end up squinting at every line, every illustration, every chart, asking yourself, can I trust this? Is this real?
It's exhausting, which is why it makes me fear for the future of thought. I find myself turning away from things just because I don't want to have to be the Em Dash Police
-
There are lots of ways that AI is eroding the intellectual commons, but a subtle one is that now the discussion around every single essay and blog post is immediately dominated by a debate over whether or not it was written with AI
@jalefkowit I would disagree here
We're really good at picking up on AI generated writing, and if a post sparks that debate, it is almost definitely AI.
Good writing doesn't beg this question.
-
There are lots of ways that AI is eroding the intellectual commons, but a subtle one is that now the discussion around every single essay and blog post is immediately dominated by a debate over whether or not it was written with AI
@jalefkowit Was that toot written by AI?
-
@jalefkowit Was that toot written by AI?
@mschfr Beep boop no beep boop
-
@jalefkowit I would disagree here
We're really good at picking up on AI generated writing, and if a post sparks that debate, it is almost definitely AI.
Good writing doesn't beg this question.
How do you define 'good' writing, though?
Non-native and autistic writers (I'm both) get accused of using AI disproportionately more often than native speakers:
The People Getting Falsely Accused of Using AI to Write
As AI-generated text floods the internet, people are getting falsely accused of using LLMs to write. Clean and precise prose has become a liability, and non-native English speakers and autistic writers are often paying the price.
Intelligencer (nymag.com)
I ran my university essays from the early 2000s through AI detectors, and each one was flagged as AI-generated with almost 100% certainty.
We've created a system where excellence is penalised and mediocre writing becomes the expectation.
The below is also worth reading:
We’re Training Students To Write Worse To Prove They’re Not Robots, And It’s Pushing Them To Use More AI
About a year and a half ago, I wrote about my kid's experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut's Harrison Bergeron—a story about a dystopian society that enforces "equality" by handicapping anyone who excels—and the AI detection tool…
Techdirt (www.techdirt.com)
-
How do you define 'good' writing, though?
Non-native and autistic writers (I'm both) get accused of using AI disproportionately more often than native speakers:
The People Getting Falsely Accused of Using AI to Write
As AI-generated text floods the internet, people are getting falsely accused of using LLMs to write. Clean and precise prose has become a liability, and non-native English speakers and autistic writers are often paying the price.
Intelligencer (nymag.com)
I ran my university essays from the early 2000s through AI detectors, and each one was flagged as AI-generated with almost 100% certainty.
We've created a system where excellence is penalised and mediocre writing becomes the expectation.
The below is also worth reading:
We’re Training Students To Write Worse To Prove They’re Not Robots, And It’s Pushing Them To Use More AI
About a year and a half ago, I wrote about my kid's experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut's Harrison Bergeron—a story about a dystopian society that enforces "equality" by handicapping anyone who excels—and the AI detection tool…
Techdirt (www.techdirt.com)
@RaffKarva @jalefkowit It's less the AI-detectors... those are bad.
People have a certain cadence of writing, even academically, that AI does not respect at all.
As a teacher, I see this all the time. Unless the student has rewritten the whole essay in their voice, individual sentences can stand out to me as AI generated.
Trust your gut, read more content from the author, and it's a bit easier to filter out the noise that way.
-
How do you define 'good' writing, though?
Non-native and autistic writers (I'm both) get accused of using AI disproportionately more often than native speakers:
The People Getting Falsely Accused of Using AI to Write
As AI-generated text floods the internet, people are getting falsely accused of using LLMs to write. Clean and precise prose has become a liability, and non-native English speakers and autistic writers are often paying the price.
Intelligencer (nymag.com)
I ran my university essays from the early 2000s through AI detectors, and each one was flagged as AI-generated with almost 100% certainty.
We've created a system where excellence is penalised and mediocre writing becomes the expectation.
The below is also worth reading:
We’re Training Students To Write Worse To Prove They’re Not Robots, And It’s Pushing Them To Use More AI
About a year and a half ago, I wrote about my kid's experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut's Harrison Bergeron—a story about a dystopian society that enforces "equality" by handicapping anyone who excels—and the AI detection tool…
Techdirt (www.techdirt.com)
Here's a good article on some research that was done, albeit with older models.
-
There are lots of ways that AI is eroding the intellectual commons, but a subtle one is that now the discussion around every single essay and blog post is immediately dominated by a debate over whether or not it was written with AI
@jalefkowit It has entirely destroyed my ability to enjoy memes, because now before sharing them I have to research a book report on each one first.
-
@RaffKarva @jalefkowit It's less the AI-detectors... those are bad.
People have a certain cadence of writing, even academically, that AI does not respect at all.
As a teacher, I see this all the time. Unless the student has rewritten the whole essay in their voice, individual sentences can stand out to me as AI generated.
Trust your gut, read more content from the author, and it's a bit easier to filter out the noise that way.
Based on your answer I am going to assume you didn't read the two links I shared.
-
@hannah Yeah, I would say it's all of a piece; you can't engage with the substance of a work anymore without first establishing how much of it is from the author's own hand and how much is AI, and there's no independent way to do that, so you end up squinting at every line, every illustration, every chart, asking yourself, can I trust this? Is this real?
It's exhausting, which is why it makes me fear for the future of thought. I find myself turning away from things just because I don't want to have to be the Em Dash Police
@jalefkowit @hannah "Em Dash Police" <— another frustrating bit, because I fucking love em dashes, and now I feel like I need to edit them out of my writing entirely.
-
@jalefkowit @hannah "Em Dash Police" <— another frustrating bit, because I fucking love em dashes, and now I feel like I need to edit them out of my writing entirely.
@jalefkowit @hannah I used to love semicolons

-
Based on your answer I am going to assume you didn't read the two links I shared.
@RaffKarva @jalefkowit I don't have a nymag sub, but I read the techdirt piece.
This responsibility falls on educators to not rely on this tool. While the "18%" may be scary, it's also going to be ignored in a lot of cases. It's the same as when Turnitin flags an essay as plagiarism because you're citing something from the source.
I'm not saying this isn't an issue, I'm saying that we've been trained our whole lives to detect this. The same thought you get when you see an AI generated image (less and less, I understand that) is the same feeling you get when you read an AI generated piece.
The difference is, humans are linguistic creatures first. We are social creatures and we are trained to tell when someone sounds like they're lying or being coy or sarcastic. It may take a bit longer, and some practice, but we can tell when AI wrote something. An algorithm can't.
-
Based on your answer I am going to assume you didn't read the two links I shared.
@RaffKarva lol, clicked on your profile and realized I'm arguing with a linguist about... linguistics.
-
@hannah Yeah, I would say it's all of a piece; you can't engage with the substance of a work anymore without first establishing how much of it is from the author's own hand and how much is AI, and there's no independent way to do that, so you end up squinting at every line, every illustration, every chart, asking yourself, can I trust this? Is this real?
It's exhausting, which is why it makes me fear for the future of thought. I find myself turning away from things just because I don't want to have to be the Em Dash Police
@jalefkowit yeah it feels like a Gresham's law thing where in a few years the open internet will just be 99.99% llm spam like what happened to usenet, and we'll all have to go back to small trusted sites or private group chats. oh well
-
@RaffKarva @jalefkowit I don't have a nymag sub, but I read the techdirt piece.
This responsibility falls on educators to not rely on this tool. While the "18%" may be scary, it's also going to be ignored in a lot of cases. It's the same as when Turnitin flags an essay as plagiarism because you're citing something from the source.
I'm not saying this isn't an issue, I'm saying that we've been trained our whole lives to detect this. The same thought you get when you see an AI generated image (less and less, I understand that) is the same feeling you get when you read an AI generated piece.
The difference is, humans are linguistic creatures first. We are social creatures and we are trained to tell when someone sounds like they're lying or being coy or sarcastic. It may take a bit longer, and some practice, but we can tell when AI wrote something. An algorithm can't.
@sethhonda @RaffKarva You are focusing on educators evaluating the work of students, but that is not what I was talking about.
I'm just a layperson. A link circulates and I read it. Odds are I have no familiarity with the style of its author. I don't have the advantage you have of knowing your students. I have to evaluate each piece that crosses my desk de novo.
When that happens, the only options are reviewing their entire body of previous work (if there is one), or shoddy heuristics like "check out all those em dashes." Neither of which is great. I don't have time for the former, and the latter is reading chicken entrails.
-
@hannah Yeah, I would say it's all of a piece; you can't engage with the substance of a work anymore without first establishing how much of it is from the author's own hand and how much is AI, and there's no independent way to do that, so you end up squinting at every line, every illustration, every chart, asking yourself, can I trust this? Is this real?
It's exhausting, which is why it makes me fear for the future of thought. I find myself turning away from things just because I don't want to have to be the Em Dash Police
I read this described as breaking a social contract. Pre-AI, the writer always put more time into writing a piece than the reader would spend reading it. In effect, they were giving you X hours (or days, or months) of their work, hoping to earn Y minutes of your attention.
AI has inverted this. The writer now demands Y minutes of our attention in exchange for X seconds of their 'effort'. It's anti-social narcissism: my half-baked idea is worth your careful consideration.
And of course there's the knock-on effect you describe, in that we now have to interrogate every piece of writing we encounter to determine if it's a good faith expression of someone's thoughts, or just some fleeting thought inflated to a grotesque imitation of human communication.
-
@jalefkowit It has entirely destroyed my ability to enjoy memes, because now before sharing them I have to research a book report on each one first.
@jwz @jalefkowit Yes, everything unbelievable is possibly synthetic, now.
-
@jalefkowit It has entirely destroyed my ability to enjoy memes, because now before sharing them I have to research a book report on each one first.
@jwz And even if you do that and are reasonably confident it's authentic, one of the first replies you will get when you post it is "ew, AI."
Sigh
