I don't understand how or why any person who knows how to read can claim AI systems are good at summarising.

Uncategorized · 9 Posts · 4 Posters · 10 Views
  • olivia@scholar.social (#1)

    I don't understand how or why any person who knows how to read can claim AI systems are good at summarising. But then I realise what they are claiming is different:

    1️⃣ they don't know what summary means

    2️⃣ they don't care about EVIDENCE AGAINST their views like...

    Major Update in our NEH Lawsuit - AHA

    On March 6, 2026, the American Historical Association and our co-plaintiffs filed a motion for summary judgment. Depositions and records obtained through the discovery process detail the role of DOGE staff in cancelling humanities grants, and how both the Federal Equal Protection Clause of the 5th Amendment and the Federal…


    AHA (www.historians.org)

  • olivia@scholar.social (#2)

In other words: I understand, I see your motivated reasoning, and I raise you: I don't have a conflict of interest, so I know AI cannot do that.

      More context https://flipboard.com/@404media/404-media-qvt3vv94z/-/a-Scki3aliRTqz_5qqo3-DBQ%3Aa%3A4082434389-%2F0

  • urlyman@mastodon.social (#3)

        @olivia me neither

        Jonathan Schofield (@urlyman@mastodon.social)

        I have a client who is a lovely person, but they are all in on LLM ‘help’. Yesterday, a bit frazzled after getting nowhere with a tech challenge for another client, I asked them to repeat and clarify some requests they had made of me which I needed to action. They did, which is great. But they also sent me an AI summary of a meeting we had had, pointing out that the information I needed was in that summary. Except it wasn’t, and the bit that was in there was wrong 🤷‍♂️


        Mastodon (mastodon.social)


  • aoanla@hachyderm.io (#4)

          @olivia I was just writing a reply about motivated reasoning when you extended this - I think a lot of this isn't more than "we wish there was a way to obviate the need to read lots of documents, so we're going to assume this tool does that".
          (That is: they *know* what a summary is, but they want to not have to read stuff sufficiently strongly that this overrides any other consideration.)


  • olivia@scholar.social (#5)

            @aoanla indeed, but they are also implicitly claiming they don't know what a summary is (even if they do know) to trap us into that (waste of time) cycle of explaining, if that makes sense?


  • aoanla@hachyderm.io (#6)

              @olivia I think that's also possibly a cycle of self-justification? No-one wants to *admit* that they're using a tool just because they don't *want* to do a thing (possibly even to themselves).

There was an article from Bloomberg going around today about how a majority of hiring managers admit they now say layoffs are due to "AI" because it "sounds good", rather than because it's true (versus "we needed to cut costs"). I think much of the discourse about "why we use AI" is riven by the same lack of transparency about motivation among the adopters.


  • olivia@scholar.social (#7)

@aoanla and let's not forget the money pushing this, top-down


  • olivia@scholar.social (#8)

                  @urlyman it was a rhetorical hook, see next post below


  • royalrex@mastodon.online (#9)

                    @aoanla @olivia the Bloomberg article 👇
                    https://mas.to/@carnage4life/116232921533312412
