
Found myself wincing while reading this story about how Ars Technica fired a reporter over fabricated quotations generated by an AI tool.

39 Posts 20 Posters 8 Views
  • briankrebs@infosec.exchangeB briankrebs@infosec.exchange

    Found myself wincing while reading this story about how Ars Technica fired a reporter over fabricated quotations generated by an AI tool. What a mess. And a tough one to bounce back from. I get asked all the time how I use AI in my work, and my answer is always the same: I don't, for all the reasons I also don't delegate important research to others, plus a whole bunch of other good reasons. But I really am interested in the answer from other journalists, because I suspect I'm in the minority here.

    From Futurism.com:

    "In the post, Edwards said that he was sick at the time, and “while working from bed with a fever and very little sleep,” he “unintentionally made a serious journalistic error” as he attempted to use an “experimental Claude Code-based AI tool” to help him “extract relevant verbatim source material.” He said the tool wasn’t being used to generate the article, but was instead designed to “help list structured references” to put in an outline. When the tool failed to work, said Edwards, he decided to try and use ChatGPT to help him understand why.

    “I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words,” Edwards continued. He emphasized that the “text of the article was human-written by us, and this incident was isolated and is not representative of Ars’ editorial standards. None of our articles are AI-generated, it is against company policy and we have always respected that.”

    Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes

    Ars Technica has fired senior AI reporter Benj Edwards following an outrage-sparking controversy involving AI-fabricated quotes.

    Futurism (futurism.com)

    nirak@carhenge.club
    #3

    @briankrebs The other thing that strikes me about this is the fact that he CAN take sick days but did not. Why? Why continue to work? What is it that makes people think that this article is SO IMPORTANT that they can’t sleep when sick? (Unless that was all a made up excuse but I kinda doubt it)

  • briankrebs@infosec.exchange

      nev@status.nevillepark.ca
      #4

      @briankrebs nah, that's a firing. Like what?? It's a short blog post??? You can just copy the link???? There's like 4 links to keep track of?

    • nirak@carhenge.club

        briankrebs@infosec.exchange
        #5

        @nirak Because Conde Nast runs Ars and other properties like content treadmills? Reporters are expected to churn out a lot of content and clicks.

    • screwturn@mastodon.social

          @briankrebs
          Researcher rather than a journalist, but I use the AI embedded in my qualitative data analysis system to summarize, identify topics, and suggest coding, as a means of checking my own understanding of texts.
          It results in higher construct validity and a better analysis.

          nirak@carhenge.club
          #6

          @screwturn @briankrebs What if the summaries are wrong? How do you know? If you read through everything to find the errors, does it actually save time?

      • briankrebs@infosec.exchange

            theodoraward@mastodon.social
            #7

            RE: https://infosec.exchange/@briankrebs/116165825763346603

            this is a bummer too bc i always liked benj's non-AI-related work a lot -- his retronauts episodes are always great, and he's a treat to read/listen to on Old Computer Stuff. i sometimes wondered why he was even on the AI beat

        • nirak@carhenge.club

              briankrebs@infosec.exchange
              #8

              @nirak @screwturn Spot on. If you have to redo someone else's work all the time because you're not sure if it's right, why not just do that work yourself from the get-go?

          • briankrebs@infosec.exchange

                nirak@carhenge.club
                #9

                @briankrebs So why is he saying “I should have taken a sick day”? If he’s not allowed to take sick days, why wouldn’t he say “I can’t take sick days, so…”

                It’s a rhetorical question, because I know the answer is we’re trained to simp for companies no matter what, but I just don’t get it

            • briankrebs@infosec.exchange

                  rootwyrm@weird.autos
                  #10

                  @briankrebs I would be VERY deeply concerned about anybody claiming to be a journalist and using the incorrect plagiarism machine for literally anything.
                  If you are in the minority and not a 99%+ majority, the entirety of journalism is irreparably broken.
                  Doing it right is hard work. It absolutely should be better paid. And it is vitally important work, even when it seems not to be.

              • briankrebs@infosec.exchange

                    michael@westergaard.social
                    #11
                    "We always write things by hand and never use AI, except for this one small case where you caught us. And the next time you catch us. But there's no general tendency. You're just very good at catching exactly the cases where we use AI."
                • briankrebs@infosec.exchange

                      bitzero@corteximplant.net
                      #12
                      @briankrebs
                      I do not use AI when I write tech articles for business IT magazines (it’s my main job at the moment). The reason: I cannot trust LLMs in any way, as the Ars story proves, and therefore I’d lose more time checking their output than doing things myself.

                      I tried to use ChatGPT to summarize long research papers or industry reports, but it’s mostly useless. I have to read the papers and reports anyway, to learn what’s happening, and LLMs’ summaries are unreliable.

                      In one of my first attempts with LLMs, I asked ChatGPT to summarize a report’s data about a specific county, knowing that the county wasn’t even mentioned in the study. Nonetheless, ChatGPT produced a summary, complete with tables, based on nonexistent data.

                      For me, for now, that inability to grasp context is a definite no.
                  • briankrebs@infosec.exchange

                        scotty86@mastodon.social
                        #13

                        @briankrebs @nirak @screwturn That's the AI trap: 80% of the work feels like it was done by magic, but fixing the remaining 20%, plus the prep to get the AI that far, ends up taking longer than doing it yourself.

                    • briankrebs@infosec.exchange

                          ct@app.wafrn.net
                          #14

                          Two-fold failure here. This guy should have taken a sick day (and possibly was incentivized not to do so? We don't know), and under no circumstances is "using AI to mine sources" an error you get to bounce back from as a journalist. Unforgivable - you understood the risks!

                      • ct@app.wafrn.net

                            ct@app.wafrn.net
                            #15

                            Ars Technica's credibility is forever marred by this event, however fair you think that is. And it's this dude's fault!

                        • michael@westergaard.social
                              alessandro@mstdn.ca
                              #16

                              @michael

                              Also we only use it when we're sick - we'd definitely never do this when we're feeling fine, no sirree.

                              @briankrebs

                          • nirak@carhenge.club

                                screwturn@mastodon.social
                                #17

                                @nirak

                                Wrong in what way?
                                Yes, in most cases I'm reading the entire text, but sometimes the AI captures something I missed, and other times it confirms what I already got.

                                Time savings do factor in, but the bigger benefit is improved validity, because it catches the topics I missed.

                                @briankrebs

                            • briankrebs@infosec.exchange

                                  screwturn@mastodon.social
                                  #18

                                  @briankrebs

                                  In qualitative research we routinely redo each other's work and our own.
                                  Having an AI do that too increases construct validity and reliability.

                                  @nirak

                              • briankrebs@infosec.exchange

                                    chicob@mstdn.social
                                    #19

                                    @briankrebs
                                    AI in journalism is Farse Technica

                                • chicob@mstdn.social

                                      briankrebs@infosec.exchange
                                      #20

                                      @chicob Arse. It was right there, dude.

                                      1 Reply Last reply
                                      0

colo_lee@mstdn.social
                                        #21

                                        @briankrebs well, at least he only did it once.
                                        er, correction ...
                                        he only got *caught* doing it once

screwturn@mastodon.social

                                          @nirak

                                          Wrong in what way?
Yes, in most cases I'm reading the entire text, but sometimes the AI catches something I missed, and other times it confirms what I already found.

Time saving is part of it, but the bigger benefit is validity: it catches the topics I missed.

                                          @briankrebs

stumpythemutt@social.linux.pizza
                                          #22

                                          @screwturn @nirak @briankrebs Even a blind pig will find the occasional acorn.
