Found myself wincing while reading this story about how Ars Technica fired a reporter over fabricated quotations generated by an AI tool. What a mess. And a tough one to bounce back from. I get asked all the time how I use AI in my work, and my answer is always the same: I don't, for all the reasons I also don't delegate important research to others, plus a whole bunch of other good reasons. But I really am interested in the answer from other journalists, because I suspect I'm in the minority here.
From Futurism.com:
In the post, Edwards said that he was sick at the time, and “while working from bed with a fever and very little sleep,” he “unintentionally made a serious journalistic error” as he attempted to use an “experimental Claude Code-based AI tool” to help him “extract relevant verbatim source material.” He said the tool wasn’t being used to generate the article, but was instead designed to “help list structured references” to put in an outline. When the tool failed to work, said Edwards, he decided to try and use ChatGPT to help him understand why.
“I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words,” Edwards continued. He emphasized that the “text of the article was human-written by us, and this incident was isolated and is not representative of Ars’ editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that.”
Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes
Ars Technica has fired senior AI reporter Benj Edwards following an outrage-sparking controversy involving AI-fabricated quotes.
Futurism (futurism.com)
-
@briankrebs
I do not use AI when I write tech articles for business IT magazines (it’s my main job at the moment). The reason: I cannot trust LLMs in any way, as the Ars story proves, and therefore I’d lose more time checking their output than doing things myself.
I tried to use ChatGPT to summarize long research papers or industry reports, but it’s mostly useless. I have to read the papers and reports anyway - to learn what’s happening - and LLMs’ summaries are unreliable.
In one of my first attempts with LLMs, I asked ChatGPT to summarize some report’s data about a specific county, knowing that it wasn’t even mentioned in the study. Nonetheless, ChatGPT created a summary - with tables - based on nonexistent data.
For me, now, the inability to grasp context is a definite big no. -
@nirak @screwturn Spot on. If you have to redo someone else's work all the time because you're not sure if it's right, why not just do that work yourself from the get-go?
@briankrebs @nirak @screwturn That's the AI Trap. When 80% of the work feels like it was done by magic, but fixing the remaining 20% - plus the prep to get the AI there - ends up taking longer than doing it yourself.
-
Two-fold failure here. This guy should have taken a sick day (and possibly was incentivized not to do so? We don't know), and under no circumstances is "using AI to mine sources" an error you get to bounce back from as a journalist. Unforgivable - you understood the risks!
Ars Technica's credibility is forever marred by this event, however fair you think that is. And it's this dude's fault!
-
"We always write things by hand and never use AI, except for this one small case where you caught us. And the next time you catch us. But there's no general tendency. You're just very good at catching exactly the cases where we use AI."
Also we only use it when we're sick - we'd definitely never do this when we're feeling fine, no sirree.
-
@screwturn @briankrebs What if the summaries are wrong? How do you know? If you read through everything to find the errors, does it actually save time?
Wrong in what way?
Yes, in most cases I'm reading the entire text, but sometimes the AI catches something I missed, and other times it confirms what I already found. Time savings do factor in, but the bigger benefit is that using it improves validity, by catching the topics I missed.
-
In qualitative research we routinely redo each other's work and our own.
Having an AI do that too increases construct validity and reliability. -
@briankrebs
AI in journalism is Farse Technica -
@chicob Arse. It was right there, dude.
-
@briankrebs well, at least he only did it once.
er, correction ...
he only got *caught* doing it once -
@screwturn @nirak @briankrebs Even a blind pig will find the occasional acorn.
-
@briankrebs He's not at fault here. Even after the explanations, he shouldn't have been fired. He did not intentionally take credit for others' work. He should be reinstated and have his job back.
-
@briankrebs I learned not to trust Ars reporting after the Hacker X story, which they have still declined to retract.
-
@briankrebs you 100% cannot trust it. Like Google search results and Wikipedia. But it still might give you some idea or thought or resource you hadn’t seen yet that you can go research further. It can help you think of new questions and point you in new directions (which can be good or bad). I use it to explore ideas, and if I do copy something written by AI I write “from Google AI:” or whatever, so people can take it with a grain of salt, and I back that up with links to other sources. It’s usually something I know is right, but I like the way it wrote it and it saves me some time. Sometimes I call out when it is wrong, to demonstrate why you can’t always trust it. But I’m researching and writing about AI, not the kind of things you write about, so it’s a bit different. I generally just cite sources if I’m writing something about a data breach like you do (and it’s nowhere near the deep dive you do!)
-
@screwturn @nirak There are some pretty decent and recent studies showing AI substantially misses or misrepresents the point or summary of a story about 40-50 percent of the time.
-
Even if the success rate were 95%, as a journalist, consistently using a stochastic method to pull quotes from sources guarantees you eventually fuck up and let a fabricated quote into print.
-
@ct
um... sure, but who is going to be asking the free version of ChatGPT for sources?
That is going to be a very poor use case. If I am using the AI that is inside my CAQDAS, I am not going to see hallucination, and it internally cites each fact it produces. Reliability and validity are going to vary greatly depending on the environment you use the AI in and what you are trying to do.
-
@briankrebs
I'm not sure what that means.
Like 50% of the time I use it, it will miss at least one point? Sure, but those odds are fine by me if it is also spotting things I missed and has reasonable inter-rater reliability with what I saw. If you mean it gets 50% of the points wrong, then that is probably true of the free versions, but it's not what I am seeing in practice when I use the AI inside my CAQDAS.
-
@StumpyTheMutt
If it finds an acorn that I missed, then it found something of value. Keep in mind, I'm not using the free version in its wide-open configuration, but rather a tightly configured version inside a research workbench. In three years of use, I have not seen a single case of hallucination by the LLM.