I don't want to formalize any of my work on mathematics.
-
@johncarlosbaez What I find interesting about this is that during my mathematical training I was taught there is but one way to do mathematics: ZFC. When I got to meet people working in proof verification I learnt that there is a whole world of formalisms out there.
@dpiponi @johncarlosbaez I managed to get as far as a DPhil without any formal (sorry) education in the foundations of maths. The only times I recall it rearing its head were the passing comments in topology that Tychonoff's theorem needed the axiom of choice, and likewise in measure theory for non-measurable sets.
-
@andrejbauer - I agree: I want a diversity of approaches. Quotes like "It forces you to think about mathematics in the right way" are what scare me - together with the money that's getting poured into Lean right now, which could fool a young mathematician into thinking this is "the" right way to do math.
I have no fear that *you* will be pressured into doing anything you don't want to, just because it's the hot new trend.
@johncarlosbaez @andrejbauer You don't have to think in the formally correct way, but it helps to know what the formally correct way is.
-
@MartinEscardo Re: trusting C code, I heard horror stories about compilers for aeronautics, where one of the criteria for acceptance is that the "compiler" is very transparent and that the assembly code is auditable on its own. Apparently, getting people of this sort to accept an optimising compiler, even CompCert, was a hard battle… (But it was won!)
@mevenlennonbertrand @MartinEscardo I have developed stuff in the C language for many years. The thing is, it requires a good understanding of C to program things properly. C is notorious for its many UBs (undefined behaviors), its lax typing, etc., and it requires more coding and debugging to achieve things. This is one reason why many other languages, such as Python, are used. I myself use Python instead of C for quick programming (zero worry about memory leaks or corruption). Just FYI.
-
@dpiponi @johncarlosbaez I managed to get as far as a DPhil without any formal (sorry) education in the foundations of maths. The only times I recall it rearing its head were the passing comments in topology that Tychonoff's theorem needed the axiom of choice, and likewise in measure theory for non-measurable sets.
@RobJLow @dpiponi - I spent a huge amount of time learning logic, computability theory, and set theory as a kid, then got sick of that because it seemed the rest involved large cardinals and forcing, which didn't appeal to me. So around 1982 I quit thinking about this stuff. By the time I heard about the exciting newer approaches to logic, like topos theory and type theory, I was much less engrossed and picked it up slowly except for the categorical approach to quantum logic. Then homotopy type theory was born... right around when I'd lost interest in homotopy theory.
-
@johncarlosbaez @dougmerritt @MartinEscardo @JacquesC2 @pigworker Somewhat unexpectedly, I find myself on the same side as @xenaproject on this one, I suppose because I read "the right way" differently from @johncarlosbaez
Formalized mathematics makes us think "the right way" in the sense that it requires mental hygiene, it encourages better organization, it invites abstraction, and it demands honesty.
Formalized mathematics does not at all impose "One and Only Truth", nor does it "nail things down with rigidity" or "impose consensus". Those are impressions that an outsider might get by observing how, for the first time, some mathematicians have banded together to produce the largest library of formalized mathematics in history. But let's be honest, it's minuscule.
Even within a single proof assistant, there is a great deal of freedom of exploration of foundations, and there are many different ways to formalize any given topic. Not to mention that having several proof assistants, each peddling its own foundation, has only contributed to plurality of mathematical thought.
Current tools are relatively immature and do indeed steal time from creative thought to some degree, although people who are proficient in their use regularly explore mathematics with proof assistants (for example @MartinEscardo and myself), testifying to their creative potential.
Finally, any fear that Mathlib and Lean will dominate mathematical thought, or even just formalized mathematics, is a hollow one. Mathlib will soon be left in the dust of history, but it will always be remembered as the project that brought formalized mathematics from the fringes of computer science to the mainstream of mathematics.
@pigworker @andrejbauer @xenaproject @JacquesC2 @MartinEscardo @johncarlosbaez @dougmerritt btw, in light of this discussion, you folks might be interested in this work: https://hci.social/@chrisamaphone/116324984760765758
-
@johncarlosbaez @andrejbauer You don't have to think in the formally correct way, but it helps to know what the formally correct way is.
@sjb I think the point that both @johncarlosbaez and @andrejbauer are making is that there is no single correct way to think, whether you are working formally or traditionally.
-
@MartinEscardo - how long would it take to "try it once"? Then I can weigh that against other things I could do with that time.
I have absolutely nothing I want to formalize, and I'd be weighing time spent on this against things like learning how to make music on a digital audio workstation, which I put off because it doesn't provide the instant gratification of playing a piano, but which I know I'd like in the end. In either case I know I'll start by suffering for hours, learning someone else's syntax and conventions instead of thinking my own thoughts. But for music software I know I'll eventually get a product I enjoy that I couldn't make any other way.
Have a look at these Lean games:
It's good to work a few examples to see how they spell out in a proof assistant. It's not difficult! It's fun!
I see proof assistants as a sort of calculator for assumptions. They really shine when you throw lots of stuff at them, like verifying computer programs. Which involves lots of boring stuff, and a proof assistant can help with that! If it's a burden then don't use a proof assistant.
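To give a flavor of what these games teach, here is a tiny proof of the sort they build up to (a sketch in Lean 4 syntax; lemma names may vary between versions):

```lean
-- Commutativity of addition on the naturals, proved by induction:
-- the kind of exercise the Lean games walk through step by step.
theorem add_comm' (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp
  | succ n ih => rw [Nat.succ_add, Nat.add_succ, ih]
```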
-
Have a look at these Lean games:
It's good to work a few examples to see how they spell out in a proof assistant. It's not difficult! It's fun!
I see proof assistants as a sort of calculator for assumptions. They really shine when you throw lots of stuff at them, like verifying computer programs. Which involves lots of boring stuff, and a proof assistant can help with that! If it's a burden then don't use a proof assistant.
@johncarlosbaez asked:
> how long would it take?

I just noticed I didn't answer your question. An hour is fine. More is more.
-
@MartinEscardo - how long would it take to "try it once"? Then I can weigh that against other things I could do with that time.
I have absolutely nothing I want to formalize, and I'd be weighing time spent on this against things like learning how to make music on a digital audio workstation, which I put off because it doesn't provide the instant gratification of playing a piano, but which I know I'd like in the end. In either case I know I'll start by suffering for hours, learning someone else's syntax and conventions instead of thinking my own thoughts. But for music software I know I'll eventually get a product I enjoy that I couldn't make any other way.
@johncarlosbaez It is absolutely fine not wanting to try it.
But then please stop saying things such as "I don't think formalizing mathematics at all makes us think the right way" or other claims about formalization.
If you want to speak with authority about formalization, then learn it first. And, yes, it is not easy or fast to learn it.
-
@JacquesC2 @johncarlosbaez @dougmerritt @MartinEscardo @andrejbauer @pigworker Lean is worse, but, infamously, https://en.wikipedia.org/wiki/Worse_is_better
@markusde @JacquesC2 @johncarlosbaez @dougmerritt @MartinEscardo @andrejbauer @pigworker Ok, so what is better (ie worse) than Lean?
-
@mc These areas might be more painful, but they do contain mistakes. I found a few measure-theoretic mistakes in some recent papers and theses over the last year or two.
@JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject
@ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject yeah but that's not my point. is formal-first, agda-as-a-blackboard, a good fit for those disciplines with the current tools? one can definitely argue for formalization of the results *at some point* as a means to check their correctness (I'm a bit skeptical of that too, but it's not a hill I want to die on)
-
@mc @JacquesC2 @cbaberle @johncarlosbaez @dougmerritt @MartinEscardo @pigworker @xenaproject I'd go further and "blame" the philosophers of language, too.
@andrejbauer @JacquesC2 @cbaberle @johncarlosbaez @dougmerritt @MartinEscardo @pigworker @xenaproject what do you mean? (for the record, *I* am not blaming anyone for anything)
-
@ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject yeah but that's not my point. is formal-first, agda-as-a-blackboard, a good fit for those disciplines with the current tools? one can definitely argue for formalization of the results *at some point* as a means to check their correctness (I'm a bit skeptical of that too, but it's not a hill I want to die on)
@mc
As a rule of thumb, I'd say if a domain has had 5+ formalizations already, then yes, formal-first works. Two or fewer, likely not. My 5 might be too low.
@ohad @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject
-
@johncarlosbaez @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject Well, it should be obvious, but let me say it anyway.
We need mathematicians of every kind: the thinkers, the dreamers, the formalizers, and even physicists.
@andrejbauer @johncarlosbaez @dougmerritt @MartinEscardo @JacquesC2 @pigworker @xenaproject And as well as those working at the cutting edge, we want mathematicians working on more mundane matters. Playing with simple (especially by cutting-edge standards) probability and statistics, I see and use some things that didn't exist when I was an undergraduate. For example, methods for estimating the confidence interval of a binomial probability have got better.
-
@mc
As a rule of thumb, I'd say if a domain has had 5+ formalizations already, then yes, formal-first works. Two or fewer, likely not. My 5 might be too low.
@ohad @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject
@JacquesC2 I haven't got to the point where I can use a tool like Agda as a whiteboard for my own research, the way Martín uses it.
I have used it as a whiteboard to explain concepts to other people, but that's probably because we insist on clipping our students' wings by not teaching them the relevant mathematics.
I say 'students' here with a very broad interpretation, we are all students indefinitely.
@mc @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject
-
@ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject yeah but that's not my point. is formal-first, agda-as-a-blackboard, a good fit for those disciplines with the current tools? one can definitely argue for formalization of the results *at some point* as a means to check their correctness (I'm a bit skeptical of that too, but it's not a hill I want to die on)
You don't need to formalize all prior art to be able to do what you want in blackboard mode.
For example, @de_Jong_Tom and I used a result by Mike Shulman, which was published and also formalized in Rocq/HoTT, in TypeTopology.
We didn't want to formalize it (although I have been entertaining the possibility of formalizing it, to understand it better, in my own terms - formalizing for the sake of thinking about it).
So we just used it as a hypothesis to our desired result. That is, we proved "Shulman's result implies our result".
This doesn't establish the ultimate truth of our result in TypeTopology.
But it does allow us to keep using TypeTopology as a blackboard, without worrying about formalizing everything that every mathematician on this planet has done in order to carry on.
And without any lack of honesty. All assumptions are explicitly given. The adopted `--safe` flag guarantees that. There are no postulates. Just explicit assumptions.
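The pattern described here can be sketched in a few lines (a hypothetical Lean rendering; the actual work is in Agda/TypeTopology, and `external` is a trivial stand-in for the unformalized result so that the example is complete):

```lean
-- Using an unformalized result as an explicit hypothesis:
-- no axioms, no postulates. The dependency is visible in the
-- statement of the theorem itself, just as with `--safe` in Agda.
theorem ours (n : Nat) (external : ∀ m : Nat, m + 0 = m) :
    n + 0 + 0 = n := by
  rw [external, external]
```

Anyone who later formalizes the external result can then discharge the hypothesis by simple application.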
@ohad @JacquesC2 @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @pigworker @xenaproject
-
@JacquesC2 I haven't got to the point where I can use a tool like Agda as a whiteboard for my own research, the way Martín uses it.
I have used it as a whiteboard to explain concepts to other people, but that's probably because we insist on clipping our students' wings by not teaching them the relevant mathematics.
I say 'students' here with a very broad interpretation, we are all students indefinitely.
@mc @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject
@JacquesC2 that said, I typically don't use tools like Agda to check my maths anyway. Rather, I do some maths (traditionally, on the whiteboard, then on paper, then in LaTeX) in order to work out how to explain enough of it to Agda or Idris so it will let me write a program I cannot write in other kinds of languages.
@mc @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject
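A minimal example of the kind of program such dependently typed languages admit (hypothetical, and in Lean 4 rather than Agda or Idris): a length-indexed vector whose `head` is total, because the empty case is ruled out by the type.

```lean
-- A vector indexed by its length.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- `head` only accepts non-empty vectors, so no runtime check
-- (and no possibility of failure) is needed.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```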
-
@andrejbauer @johncarlosbaez @dougmerritt @JacquesC2 @pigworker @xenaproject
It is not inconceivable that one day in the not-too-distant future, proof assistants will be able to "understand" proofs written in sufficiently careful, informal, mathematical vernacular and translate them to a suitable formal language.
And this formal language doesn't need to be fixed. The mathematician just chooses a foundation, or, in case they don't care, they let the proof assistant choose a suitable one for the informal (but hopefully rigorous) mathematics at hand.
I don't mean AI, but people are certainly trying this with so-called AI nowadays (personally, I think this is the wrong approach, but **I don't want** this to become the subject of discussion here).
In any case, a person will need to check that the definitions and the statements of the theorems and constructions are correctly translated (*). Then the formal proofs obtained from informal proofs don't need to be checked by people.
(*) At least at the beginning. For example, we now trust that C compilers produce correct machine code and don't check it ourselves.
In any case, all of the above can happen only step by step, and currently we are at an important step, I think; the first steps were taken in the 1960s by de Bruijn.
As I said before, I use proof assistants as smart blackboards. If I could get interactive help while I write in mathematical vernacular, I would immediately adopt this incredible new proof assistant.
And, I repeat, I don't mean the kind of non-help I get from ChatGPT, Gemini, Claude, DeepSeek, or what have you - I feel I help them rather than the other way round.
I mean the kind of help I already get in non-AI-based proof assistants.
@MartinEscardo @andrejbauer @johncarlosbaez @dougmerritt @JacquesC2 @pigworker @xenaproject
It seems to me that anything that translates careful but informal mathematical vernacular into a formal language, must ipso facto be AI. But that doesn't mean that it would be the kind of AI that you describe as unhelpful at the end of your post. Rather, it means that AI encompasses much more than those things.
-
@JacquesC2 I haven't got to the point where I can use a tool like Agda as a whiteboard for my own research, the way Martín uses it.
I have used it as a whiteboard to explain concepts to other people, but that's probably because we insist on clipping our students' wings by not teaching them the relevant mathematics.
I say 'students' here with a very broad interpretation, we are all students indefinitely.
@mc @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @MartinEscardo @pigworker @xenaproject
@ohad writes "we are all students indefinitely."
+∞
I certainly am. Of mathematics and everything.
@JacquesC2 @mc @cbaberle @johncarlosbaez @andrejbauer @dougmerritt @pigworker @xenaproject
-
@MartinEscardo @andrejbauer @johncarlosbaez @dougmerritt @JacquesC2 @pigworker @xenaproject
It seems to me that anything that translates careful but informal mathematical vernacular into a formal language, must ipso facto be AI. But that doesn't mean that it would be the kind of AI that you describe as unhelpful at the end of your post. Rather, it means that AI encompasses much more than those things.
@TobyBartels It is interesting you say this.
Some things that were considered AI in the past are now deterministic algorithms.
For example, checking formal proofs with incomplete information (to be reconstructed by the checker) for correctness.
@andrejbauer @johncarlosbaez @dougmerritt @JacquesC2 @pigworker @xenaproject
-