Category error!

Uncategorized · 17 Posts · 8 Posters · 6 Views

This topic has been deleted. Only users with topic management privileges can see it.
#1 · olivia@scholar.social

    Category error! I'm sick to the back teeth of wrongheaded comparisons of inanimate objects to humans. It's so rife even colleagues do it. What's next?

    > I compared a rock and a person, and challenged them to stay still the longest and the rock won! Wow!

    Things thought up by the unhinged & those who wish to dehumanise for profit.

    Gift Articles (@GiftArticles@tomkahe.com)

    Who’s a Better Writer: A.I. or Humans? Take Our Quiz. (Gift Article) https://www.nytimes.com/interactive/2026/03/09/business/ai-writing-quiz.html?unlocked_article_code=1.R1A.VoOi.CqmTPKAuPwGv&smid=bs-share

#2 · xameer@mathstodon.xyz

@olivia @wim_v12e Yes, why compete with bots that don't have life?

#3 · abucci@buc.ci
      I was so exasperated by the Donald Knuth thing the other day that I wrote this on a post about it:
There is a rhetorical move here, supporting a metaphysical claim, that conflates a human activity with the activity of a machine. If someone rides a bicycle down the road, nobody says that the bicycle walked down the road. If someone flies a simulated plane from Boston to Chicago in a flight simulator, nobody says the person traveled to Chicago. Yet somehow, when people think with the aid of a certain kind of AI machine, we're meant to refer to that as the machine doing the thing humans do (thinking, solving a problem, inventing, or what have you). We're meant to believe that what the machine is doing is not meaningfully different from what humans do, despite the obvious layers of metaphor involved. This conflation is not scientific; it's metaphysical. It demands an explanation and justification that goes beyond merely presenting evidence, because it makes a claim about how the world works or is structured.
#4 · olivia@scholar.social

        @abucci you're so patient and yes, that was unsettling

#5 · abucci@buc.ci
          @olivia@scholar.social It really tries my patience when people say AI has "solved" math or some nonsense like that.

          Speaking of patient, though, you're really fighting the good fight 💪
#6 · olivia@scholar.social

@abucci It's honestly unbelievable that people say that; it's not only false but destructive to maths. cc @Iris

#7 · anwagnerdreas@hcommons.social

              @olivia @abucci @Iris How do I put this? I find the "category error" criticism accurate. And it rightly prepares and leads into the socioeconomic criticism of dehumanisation of work and into the sociopsychological deskilling criticism.

What I wonder is: is there any "progress" or "benefit", say, of/for the discipline of mathematics once a certain proof exists (assuming the rest of the discipline manages to continue evolving without deskilling) that we risk blinding ourselves to by simply insisting that AI isn't itself "proving" things? The focus on the category error makes it sound as if, were we to avoid anthropomorphisation and use different vocabulary for the function AI plays, the criticism would lose its point.

              1/2

#8 · anwagnerdreas@hcommons.social

                @olivia @abucci @Iris

While writing this, it occurs to me that it's naïve to assume what is in parentheses above: that the discipline can evolve in an "untainted" way while regularly accepting proofs that have not been conceived by human scholars. But what kind of taint is that? I have a hunch it's none of the problems mentioned before. Or is it?

                I'll re-read your nice paper on "human-centered AI" and think about how its analyses apply to maths as a discipline.

                2/2

#9 · naturemc@mastodon.online

@olivia It makes me especially sick that it happens at a time when we should be discussing the legal #personhood of #nature: https://en.wikipedia.org/wiki/Environmental_personhood and listening much more to the ideas and thoughts of indigenous people about nature (including pebbles and rocks).
This is becoming increasingly important for survival. Instead, we grant empty algorithms more #life than living #ecosystems!

                  #climateCrisis #biodiversityLoss

#10 · abucci@buc.ci
@anwagnerdreas@hcommons.social Hi Andreas, there are lots of ways to consider this question:

> is there any "progress" or "benefit"...that we risk blinding ourselves to by just insisting that AI isn't itself "proving" things?

but the first one that springs to my mind is this. Isn't the more interesting, and more pertinent, question "is there any progress or benefit that we risk blinding ourselves to by NOT insisting AI isn't proving things?" Your version of the question takes a default optimistic stance that the use of AI is not harmful or obfuscating to human mathematical thought and practice, when we cannot know one way or the other at this stage. I note that this stance is heavily pushed by the US tech sector, and is therefore already worthy of skepticism.

Besides that, mathematics has been around for thousands of years; what justifies enthusiasm for such a radical change to our way of practicing it? Aren't we meant to be conservative about our knowledge production systems? I find discussion of these sorts of questions largely absent from the discourse about AI, at least the mainstream discourse, but shouldn't they be central, given what's at stake? We risk the equivalent of throwing away our financial security betting on a slot machine because we won once or twice and the guy next to us claims he made a fortune that way.

@olivia@scholar.social @Iris@scholar.social
#11 · anwagnerdreas@hcommons.social

                      @abucci

                      Hi Anthony, thanks for your response.

The scenario I had in mind was mathematicians of the 2070s still being pretty much like our mathematicians today and those of the past, looking at the corpus of problems, theorems and proofs established until then, and not caring much about when and in which way a specific proof was introduced, as long as the proof itself is correct as evaluated by those mathematicians themselves. Proving and correctness may lie in the eye of the human observer, not in the neural network that output the proof; but that does not detract from said correctness at all. I feel uneasy if we focus mainly on what we call this "outputting", or deny that there is a new proof there.

I have already acknowledged in the other toot that the scenario is naïve insofar as it assumes the only thing to have changed would be a handful of additional proofs with a different genesis. I'd like to understand the other changes we should expect for the discipline.

                      @olivia @Iris

#12 · xgranade@wandering.shop

                        @anwagnerdreas @abucci @olivia @Iris

                        I see it as twofold: a burden of proof argument, and a question about where energies are best spent. For the former, whenever proposing a new tool, the onus is on the person advancing said new proposal to show that it works, or at least works well enough to be worth consideration.

                        For the second, cranks *could* be right about their wild mathematical claims, but we rightly often reject them out of hand as a timesaving heuristic.

#13 · xgranade@wandering.shop

                          @anwagnerdreas @abucci @olivia @Iris It's impractical to individually evaluate the claims of every crank theorem, and so we largely don't do it.

                          When it comes to LLM-generated "proofs," I think it's worth comparing to Lean and other formalized proof systems. We rationally have enough confidence in how Lean builds proofs from lower-level theorems and axioms that it's worth approaching Lean-based proofs in good faith. LLMs, by contrast, do not offer any such structure we can use.
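That structural difference can be made concrete. As a minimal illustrative sketch (my own, not from the thread): in Lean, every proof, however it was found, is replayed by a small trusted kernel down to the axioms, so accepting it does not require trusting the process that produced it:

```lean
-- A machine-checked proof: the kernel verifies every inference
-- step against lower-level theorems and axioms, so no one needs
-- to trust (or survey) the search that produced it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Even a proof found automatically (here by the `simp` tactic)
-- is certified by the same kernel before it is accepted.
example (n : Nat) : n + 0 = n := by simp
```

An LLM's natural-language "proof", by contrast, comes with no such certificate; checking it falls entirely on human readers.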

#14 · abucci@buc.ci
                            @xgranade@wandering.shop You make several great points.

                            The non-surveyability issue is a big one: https://en.wikipedia.org/wiki/Non-surveyable_proof . A bunch of people rejected the computer-assisted proof of the four-color theorem until it was significantly simplified. Imagine a math LLM spitting out considerably more complicated proofs at a breakneck pace. I argue that eventually such a thing would be indistinguishable from a random string generator. It'd also waste the time and energy of a whole lot of mathematicians in the process, as you pointed out.

We are already seeing code review (human beings checking pull requests and the like) being overwhelmed by LLM code generators, and some organizations are abandoning this step as a result. What purpose is served by introducing this kind of dynamic into mathematics, of all things? It's quite strange to me, this bias toward accelerating everything whenever it's possible to do so, regardless of systemic or other risks.

                            Proofs written in Lean and similar systems have the very big benefit of surveyability, and there's probably a world in which ethically made and constituted LLMs could add beneficial features to such tools.

                            The analogy to cranks is interesting. I guess in my head it's similar to why we don't throw a handful of leaves up in the air and try to read a proof out of the pattern they make when they fall to the ground (usually!). It's the folly of approaching the problem of finding a needle in a haystack by making the haystack bigger. People love making the haystack bigger for some reason.

                            @anwagnerdreas@hcommons.social @olivia@scholar.social @Iris@scholar.social

#15 · shanecelis@mastodon.gamedev.place

                              @olivia "I found these three books in the Library of Babel, and they're not only as good as the originals but they also do not have any typos the originals have. So there we have it, not only is the Library of Babel better, it makes one wonder if perhaps _it_ was plagiarized rather than the other way around."

#16 · fazalmajid@social.vivaldi.net

                                @abucci @olivia nothing new under the Sun, see: Tibetan prayer wheels

Prayer wheel - Wikipedia (en.wikipedia.org)

#17 · abucci@buc.ci
                                  @fazalmajid@vivaldi.net I don't understand the connection. Can you please elaborate a bit?
                                  @olivia@scholar.social