Over the last decade, it's been my privilege to learn at the feet of some really exceptional people, whose erudition in all sorts of fields has served to ensure that I keep Socrates' famous aphorism concerning knowledge at the forefront of my thinking. It's long been my view that the most important attribute of a true intellectual is the preparedness to be wrong. Indeed, if you're not prepared to be wrong, you're doing it wrong, thus you're always wrong in some measure. I've been wrong many, many times and, where I've been right, it's almost invariably been because of the intervention of one or other of my extremely knowledgeable friends.

This offering has kindly been presented by one of the bright lights in this constellation, my dear friend Phil Scott, a.k.a. @inhabitingvoid. He generally describes himself, quite humbly, as a computer scientist. That description may or may not be apt but, whenever I've said something stupid about mathematics or logic - a rather more regular occurrence than I'd care to admit too vociferously - he's been ready to intervene and spare my blushes. This post is just such an intervention, and I'm grateful to him for it.

I'll shut up now and give the floor to Phil.

_________________________________________________________________________

Definitions and Axioms

In another post on this blog, my friend Hackenslash talks about the relationship between mathematics and science, responding to Eugene Wigner’s paper The Unreasonable Effectiveness of Mathematics in the Natural Sciences. He volunteers some ideas on the definitions and axioms of mathematics:

The beauty of mathematics is that it’s axiomatically complete. We can build from the simplest of axioms, defining our terms as we go, and be sure that what we’re building on has good, solid foundations. Specifically, the axioms of mathematics aren’t accepted because they seem to be true, but because they are definitionally true. In other words, we define 1 as the singular integer, and 2 is 1+1. You could say that the definition of ‘1’ is the first axiom of mathematics, upon which all other axioms are built. Thus we can define 2 as the sum of the integers 1 and 1. And from there we can build another axiom, namely, the addition of two integers gives the sum (we’ll make this explicit shortly). Much of the rest is about the relationships between operators, so that we can build up to 16 being 1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1, and that being the same as 8+8, which is the same as 8x2, from which we can build another axiom, namely that the multiplication of two integers gives the product. Then we can build a whole set of relationships around the equivalence of these operations on specific sets of integers to build an axiomatically complete system, and it is this way because of the fact that the core axioms are necessarily true by definition.

This isn’t right to my modern eyes, but looking back over the history of maths, the account puts Hackenslash in fine enough company. In this post, I’ll try to look through some of that history, and draw on the veritable revolution in our understanding of axiomatics over the last 150 years, which lets us clarify what is going on with axioms and definitions with laser precision. While I don’t really have the space to get into the technical details (and I fear doing so would make this post very dry), Hackenslash has kindly allowed me to express my broader thoughts on the matter.

When we’re discussing the *axioms* of mathematics, we’re talking about something with a long history. Maths is really *old*. Even the useless abstract stuff is old. I mean, there’s pretty good evidence that about three and a half thousand years ago, in present day Iraq, the Babylonians had got so bored of using numbers for useful stuff like accountancy that they had worked out an algorithm to calculate integer solutions to Pythagoras’ Theorem and were going around chiselling the output on stone tablets. This was a thousand years before a Greek latecomer took credit for the theorem.
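As an aside, we can get a feel for what such an algorithm produces with a short sketch in Python. How the Babylonians actually computed their triples is still debated by historians, so this uses the parameterisation later recorded in Euclid purely as an illustration; the function name `triples` is my own.

```python
# Generate Pythagorean triples (a, b, c) with a^2 + b^2 = c^2, using
# the classical parameterisation a = m^2 - n^2, b = 2mn, c = m^2 + n^2.

def triples(limit):
    result = []
    for m in range(2, limit):
        for n in range(1, m):
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            result.append((a, b, c))
    return result

# The first few: (3, 4, 5), (8, 6, 10), (5, 12, 13)
for a, b, c in triples(4):
    assert a * a + b * b == c * c
    print(a, b, c)
```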

Don’t get me wrong. I don’t like to put down those canny Greeks. The Greeks were prolific, not only discovering techniques of extraordinary subtlety, but also hitting on the importance of mathematical *proof*. They also hit upon something wonderful that we would nowadays recognise as *axiomatics*, or, as they might have called it, the *Elements* of mathematics.

Axiomatics may not appear to have a clear function. It isn’t a way to generate *new* mathematics. And this singular uselessness is mostly still the case today. So why bother? I suspect one reason the Greeks pursued it was because they had accumulated so much mathematical knowledge that they needed a way to properly organise it. Their exposition of axioms and fundamental definitions was not a means to figure out the nature of mathematics, but only a means to curate and catalogue what they already knew. Their insight here was that they could use the notions of proof and logical consequence in order to build a mathematical taxonomy. Just as a modern biologist orders life by genetic history, the writer of an elements orders mathematics by logical derivation. And just like in biological taxonomy, mathematical taxonomy could show that huge diversity can originate from just a few ultimate ancestors, *the axioms*.

The only elements that has survived the last two millennia is the one written by a Greek mathematician called Euclid. The guy may not have come up with a single original mathematical theorem himself, but he did such a great job curating what was around him that he did perhaps more than anyone else to preserve Greek mathematical genius through the darker ages. He produced a multivolume book which would become the most influential textbook of all time, and whose broader influence in the West is perhaps only surpassed by the Bible.

Despite this, I often find that my hotel rooms are missing a complimentary copy of Euclid’s Elements, so I have to go here. Browsing through, I am always reminded how utterly profound I find this book. Oh, it may not represent the height of ancient Greek mathematical sophistication. If you want that, you should probably start with Archimedes, and then if you’ve still got an appetite, read Apollonius’ *On Conics*, which might be one of the most complicated mathematical treatises ever written, and not in a good way.

Nevertheless, if you want to see how axioms, assumptions and definitions work in mathematics, you get a good taste with Euclid, in a way which still characterises how mathematics is practised today. The book goes to great effort to find the first principles of mathematics, and attempts what Hackenslash attempts above, to define the most elementary notions. Here’s Euclid’s definition of 1 and of number:

A unit is that by virtue of which each of the things that exist is called one.

A number is a multitude composed of units.

Okay, I’ll be frank. While I like these words, they don’t exactly resonate with my sense of mathematical respectability. Still, it’s curious that Euclid thought that “number” was a term that needed a definition at all. It’s even more curious that this definition appears in the *seventh* volume of Euclid’s *Elements*, a *long* way from where you’d expect to see the elementary definitions of mathematics.

This is because Euclid doesn’t start with numbers. He starts with geometry: points, lines, triangles, circles, that sort of thing. His first two definitions attempt to define just the “points” and “lines”:

- “A point is that which has no part.”

- “A line is a breadthless length.”

He then lays down his axioms, or *postulates*. Here are the first four:

- To draw a straight line from any point to any point.
- To produce a finite straight line continuously in a straight line.
- To describe a circle with any centre and radius.
- That all right angles equal one another.

So for Euclid, numbers are not axiomatic at all. Instead, numbers needed to be built out of bits of geometry. Specifically, a number would be made up of units, and a unit would be whatever geometrical object you used for such a purpose. You will usually see Euclid *draw* his units as simple line segments, and then draw bigger numbers by connecting the unit line segments end-to-end.

It’s like we have a bunch of unmarked rulers of various lengths on the floor, and we pick one up and say to the world “this is to be the unit; I will thenceforth measure everything in relation to it.” Humans have done this throughout history, variously declaring and then standardising their units of measurement as cubits, hands, feet, inches, leagues and metres. The choice of a unit is arbitrary. We just need to agree on it, and be able to use units interchangeably for whatever we’re trying to do (construction or land surveying, say).

But this is all too tangible, and I say tangibility is a barrier to the mathematician’s imagination. The barrier explains why the Greeks did not have the foresight to invent the number 0 or to invent the negative numbers: they thought numbers were line segments, things you could join end-to-end. How could you have a negative number of those? How could you put units together to get 0? How do you use a ruler to measure nothing?

There are fair reasons for the Greeks’ self-imposed handicap. In antiquity, numbers were impoverished. Counting numbers were well understood: one potato, two potato, three potato, four. Ordinal numbers were similarly easy: first place, second place, third place, fourth. And fractions aren’t too hard either: divide the cake into six bits. The six bits make the whole.

But just what the hell is the square root of 2? I cannot ignore such a number, because if I draw a square, and declare its side to be of unit length, I would be flummoxed when asked how to measure the diagonal.

We moderns, familiar with Pythagoras’ Theorem, will say that the diagonal would be measured as \(\sqrt 2\), but the Greeks and their predecessors had no idea how to notate this. So their numbers were not up to the task of describing even very basic geometry such as the diagonals of unit squares.

And so if numbers had a geometrical counterpart, but not every piece of geometry had a numerical counterpart, it must be the case that the world of numbers is more impoverished than the world of geometry. Choosing between the two, it is clear that geometry would have to be the foundation, and numbers would have to be built over the top.

We shouldn’t feel any superiority here. It’s something of a cheat to say that the diagonal of the unit square has length \(\sqrt 2\). When you unpack this claim, you end up going around in a circle: “the diagonal of the unit square is whatever is that number which measures the diagonal of the unit square”. Saying just what this number is without running in circles takes a surprising amount of mathematical sophistication that wasn’t available until the 19th century, and the solution is still sufficiently sophisticated that we don’t bother exposing students to it, even ones studying mathematical sciences at university.

Students nowadays are mostly taught that numbers come first, and that geometry is something you do with numerical coordinates. Geometry is based on numbers, rather than numbers being based on geometry as the Greeks had it. It took almost two thousand years after Euclid to get to this conception. In the 17th century, the mathematician and occasional philosopher René Descartes sowed the seeds by showing how geometry could be based on equations involving coordinates on a graph, and thus showed how geometrical problems could be reduced to high-school algebra. The techniques of algebra had themselves been well-developed over the millennia and they were a much more powerful tool for solving geometrical problems than what was available to the Greeks.

But it is important to realise that Descartes never proposed to replace geometry with coordinates. Descartes still took Euclid as his starting point, and laboriously derived coordinate geometry from Euclid’s axioms. The result was a tiered construction: Euclidean geometry forms the base. The base supports coordinate geometry and algebra. And algebra would be the force-multiplier to support the rest of geometry. What a cunning trick!

However, a few centuries later the game had changed. Mathematicians were feeling ever more confident about numbers and their ability to serve as foundational in mathematics. Numbers had evolved in strange ways, in part because of big money.

No, I’m not talking about banking or other financial services. I’m talking about the fact that back in the Renaissance, maths was a spectator sport, and you could earn a living by competing in tournaments where you had to solve algebraic equations against a clock.

Cash-strapped mathematicians invented some truly clever tricks to win in these tournaments, ultimately inventing both negative numbers and numbers that acted as the square roots of negatives, which we now call “imaginary numbers”. These bizarre objects were viewed as mere conceptual artefacts in secret methods to win big in maths competitions. Adjectives such as “imaginary” reflected a justified suspicion concerning these strange objects, a suspicion born of the fact that no-one could give them the sort of definition that geometers had given for ordinary numbers. But their profound utility could not go ignored for long. The existence of these extremely useful new numbers would upset the previous dominance of geometry: numbers turned out to have a richness all of their own.

An exodus from geometry occurred in the 18th century, and was completed when the geometric foundation itself fell into total crisis. Remember the four axioms from Euclid I gave above? I missed out the fifth, not simply because it is an overly complex axiom, but because its status as an axiom was in question from the earliest commentaries on Euclid. It is an axiom governing parallel lines, but perhaps the most familiar of its consequences is that the angles of a triangle add up to 180 degrees.

With the axiom’s status in doubt, several mathematicians tried to show its necessity by showing that its denial would be a commitment to absurdity. In the late 18th century, they began exploring strange and speculative worlds in which the angles of a triangle do not sum to 180 degrees. They encountered plenty of departures from Euclidean mathematics, but an outright absurdity that would guarantee the truth of Euclid’s controversial fifth axiom was not discovered.

There was good reason for this: *there was no absurdity*! Using the force multiplier of algebra, one mathematician was able to show that the bizarre world implied by denying Euclid’s fifth axiom was realisable from within Euclidean geometry itself! It seemed that Euclidean geometry admitted that its own axioms were not absolute, and thus confessed that the nature of geometry was forever a moving target. The solid foundations of geometry were replaced with a fluid space of infinite geometries, a shifting sand that was no place to build the rest of mathematics.

And so the arithmetic tier, that had once been built over geometry, was to become the new foundation of mathematics. But there was a problem: arithmetic wasn’t axiomatic, and the loss of an axiomatic foundation left a vacuum that was abhorred by mathematicians. But they were ready. By this time, they had a whole slew of new insights into the axiomatic game.

One of the main insights was a symbolic conception of logic and axiomatics. It may come as some surprise, but the use of symbols to do algebra, our \(x\)s and \(y\)s and \(z\)s and equations, is a Renaissance invention. Prior to this, algebra was done without symbols, using only prose. It seems that the invention of a symbolic algebra opened another window in the mathematician’s imagination, and in his Laws of Thought, George Boole leapt through it when he experimented with using the symbols of algebra as symbols for logic. He saw profound analogies in the laws that would be seen again and again in disparate areas of mathematics, ushering in the abstract turn that modern mathematics has famously taken in the last century.

And so when Dedekind and Peano rebuilt the foundations of arithmetic, they were able to do so in new symbol systems designed for the purpose. Peano was particularly enamoured with Boole’s symbolic approach to logic, seeing mathematics as essentially the explicit assignment of meaning to symbols. His Italian school would be of great influence in the further development of mathematical foundations, and much of our modern logical notation is due to them.

Here is how Peano begins:

- The sign \(N\) means *number* (positive integer).
- The sign 1 means *unity*.
- The sign \(a + 1\) means the *successor of* \(a\), or \(a\) *plus* 1.
- The sign \(=\) means *is equal to*.

So far, I’m not sure we’re doing much better than Euclid, who said things like “a point is that which has no part.” But something very different is going on here with Peano: *he doesn’t call these definitions*. Instead, he calls them *explanations*, and reserves “definition” for a different class of declaration. This distinction, by whatever name you call it, is crucial, and has survived into our modern and ultimate standards of mathematical rigour.

For now, I shall attempt to explain Peano’s axiomatics in prose:

**A1**

If \(n\) is a number, so is its successor, written \(S(n)\).

**A2**

If two numbers share a successor, they are the same number.

**A3**

1 is no number’s successor.

**A4**

If a set of things contains 1, and contains all its numbers’ successors, then the set includes all the numbers.

Peano also makes some definitions:

**D1**

I define 2 as the successor of 1.

**D2**

I define 3 as the successor of 2.

**D3**

I define 4 as the successor of 3.

**D4**

I define \(m + S(n)\) as \(S(m + n)\).

And now let’s do our first theorem:

Theorem: \(2 + 2 = 4\)

Proof: We just need to unfold definitions:

\begin{align*} 2 + 2 &= 2 + S(1)&\text{(by D1)}\\ &= S(2 + 1)&\text{(by D4)}\\ &= S(3) &\text{(by D2)}\\ &= 4 &\text{(by D3)} \end{align*}

And so, according to Peano, it seems that \(2 + 2 = 4\) is just a matter of definition! Poor Baldrick. He lived too many centuries before our modern advanced mathematics.
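In fact, Peano’s definitions are so mechanical that we can transcribe them into a few lines of Python and let the machine do the unfolding. This is a sketch of my own, representing a number as either the symbol `"1"` or a nested successor:

```python
# Peano numerals: a number is "1" or the successor S(n) of a number n.
def S(n):
    return ("S", n)

ONE = "1"
TWO = S(ONE)      # D1: 2 is the successor of 1
THREE = S(TWO)    # D2: 3 is the successor of 2
FOUR = S(THREE)   # D3: 4 is the successor of 3

def add(m, n):
    if n == ONE:              # Peano's explanation: a + 1 means S(a)
        return S(m)
    return S(add(m, n[1]))    # D4: m + S(k) is defined as S(m + k)

assert add(TWO, TWO) == FOUR  # our first theorem: 2 + 2 = 4
```

Each step the recursion takes corresponds to one line of the derivation above: the definitions really do all the work.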

What about the axioms? Our definitions are all well and good for computing \(2 + 2\), but this hardly exhausts mathematics. Here’s a trivial thing that we cannot prove by definition: a number is never equal to its successor. I’ll take a little diversion to go through the proof.

We use A4. We need to think of a set to use this axiom, and the one we will consider is the set of numbers which are not equal to their successors. We expect all numbers to be in this set, and that’s what we’ll use A4 to prove.

We start by confirming that 1 is in the set. That is, we will confirm that 1 is not its own successor. That’s A3: 1 is no number’s successor, and so it is not its own successor.

Next, we confirm that if a number is in the set, so is the number’s successor. That is, we make a supposition for some arbitrary number \(n\):

**H**: the number \(n\) is in the set, meaning that \(n\) is not its own successor

We now confirm that, on this supposition, \(S(n)\) is in the set. That is, we must confirm that the successor of \(S(n)\) is not the same as the successor of \(n\). This follows by A2. If \(S(n)\) and \(n\) share a successor, then \(S(n) = n\), which we know is false by our supposition H. Hence, \(S(n)\) is not equal to its successor, and must belong to the set.

Thus, by A4, every number is in the set, which is to say that no number \(n\) is equal to \(S(n)\).
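Incidentally, proofs of exactly this sort are what modern proof assistants mechanise. Here is a sketch of the argument in Lean 4, my own illustration using Lean’s built-in natural numbers (which begin at 0 rather than 1, so the base case concerns 0); the `induction` tactic plays the role of A4.

```lean
-- No number is its own successor. The base case uses the analogue of
-- A3 (a successor is never zero); the step uses the analogue of A2
-- (the successor operation is injective).
theorem ne_succ (n : Nat) : n ≠ n.succ := by
  induction n with
  | zero => exact (Nat.succ_ne_zero 0).symm
  | succ k ih => exact fun h => ih (Nat.succ.inj h)
```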

Let us get back to our story. In the time when Peano was laying down foundations for arithmetic, we find that axiomatic geometry had not been abandoned. It had just been relativised. There were now multiple geometries, and each could be given its own axioms. And the axiomatics of these geometries marks a spectacular success of the axiomatic method. The mathematician David Hilbert arrived at an insight that would carry the day. Where Peano had deigned to *explain* that “the sign 1 means unity”, Hilbert would suggest that such signs should be left *meaningless*. In his hugely influential *Foundations of Geometry*, Hilbert begins by saying that there are just things that are to be called *points*, *lines* and *planes*, without further explanation, insisting indeed that you might as well read *mug*, *table* and *chair* throughout.

If Peano had recourse to this idea, he might have opened his axiomatics with:

- There are things called “numbers”.
- 1 is such a number.
- Every number has something called a “successor”.

and then let the axioms speak for themselves. It is *this* which defines the most modern of the axiomatic methods, which now combines the symbolics of Peano such that all the axioms, definitions, theorems and proofs of mathematics can be reduced to code that a dumb computer can understand, precisely because when you get down to bedrock, there is *nothing to understand*. The final symbols are meaningless, and all that is left to decide what counts as a theorem are the rules of a truly frigid logic.
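To make this concrete, here is a sketch of my own in the modern proof assistant Lean, in which Peano’s “numbers” become a declaration of meaningless symbols:

```lean
-- "There are things called numbers; 1 is one of them; every number
-- has a successor." Nothing more is said about what these things are.
inductive Num : Type where
  | one  : Num
  | succ : Num → Num

-- The analogues of A2 (successors are injective) and A3 (1 is no
-- successor) need no further axioms: Lean derives them from the
-- declaration itself, along with the induction principle A4.
example (a b : Num) (h : a.succ = b.succ) : a = b := Num.succ.inj h
example (a : Num) : Num.one ≠ a.succ := fun h => Num.noConfusion h
```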

So to go back to Hackenslash’s opening account:

We define 1 as the singular integer, and 2 is 1+1. You could say that the definition of ‘1’ is the first axiom of mathematics, upon which all other axioms are built.

I say again that he’s in fine company, looking back over the history of mathematics. But modern axiomatics has reached a state of absolute pedantry, and so the correct account now runs as follows. Rendered in my meagre prose, I can only assure you that the underlying logic is so rigorous that it can be typed straight into a computer:

- We do not define ‘1.’
- We do not define “numbers.”
- We do not define “successor.”
- There is something we shall call “numbers”, in which there is something we shall call ‘1’ such that:

- Every number has a number we call its “successor.”
- No successor is 1.
- To have the same successors is to be the same.
- A number is 1 or a successor. If a set contains 1, and contains the successor of each number it contains, then it contains all the numbers.

These are now our axioms. They do not say what numbers are, only assert that there are things which behave as we expect numbers to behave. Numbers, whatever they are, begin with something called ‘1’, and then arise by taking successors, with any number eventually being revealed in this process.

Everything anyone would ever want to know about numbers has now been given by these axioms. All theorems about numbers are a consequence of these axioms. So whence definitions?

Definitions are, to a plausible extent, merely conveniences. No-one wants to have to write out “the successor of the successor of the successor of ‘1’” when a single symbol such as ‘4’ would do just as well. And so we define ‘4’ to be the successor of the successor of the successor of 1.

All definitions have this character. They are *abbreviations* that, as you unfold them, eventually take you back to basic undefined notions such as “number”, ‘1’ and “successor.” If we had space to write them out in full (both on paper and in our heads), we might not have bothered with them at all.

Some definitions are somewhat more advanced. Peano defined \(m + S(n)\) as \(S(m + n)\), but this is quite a funky definition, and must be treated with some logical care. It would do no good to drop the \(S\) from the left-hand side of the equation and define

\(m + n = S(m + n)\)

because we proved earlier that this situation is impossible.

So why is Peano’s definition allowed? Peano himself *defined* definition in terms of his assignment of meaning, but he didn’t give any method to check that meaning had been assigned correctly.

Dedekind took a more rigorous approach that addressed this issue. His treatise on the nature and meaning of numbers takes, as its foundation, a set of *things*, a thing being any object of thought. This highly abstract starting point characterises Dedekind’s general approach, and permitted him, as he saw it, far more avenues for mathematical creativity than would otherwise be possible.

Dedekind did not claim that the nature of numbers was given by axioms, but instead that numbers were already somehow given in the abstract concept of sets. In order to make headway here, Dedekind had to start by finding some infinite set, and here he does wander quite dangerously into metaphysics:

Theorem. There exist infinite systems. Proof. My own realm of thoughts; i.e. the totality \(S\) of all things, which can be objects of my thought, is infinite. For if \(s\) signifies an element of \(S\), then is the thought \(s'\), that \(s\) can be object of my thought, itself an element of \(S\) […and…] there are elements of \(S\) (e.g. my own ego) which are different from such thought \(s'\).

This is a wonderful paragraph, in my opinion, especially when one has the hindsight to know where Dedekind is going with it. He is suggesting that one could analogise a thought such as “my own ego” with the number 1, and then take the thought “my own ego is a thought” as the number 2, and the thought “to think that my own ego is a thought is yet another thought” as the number 3, and in this manner, obtain an infinite tower of thoughts that resemble numbers.

Dedekind then proves that any infinite set, such as his infinite set of thoughts, contains within it exactly such a tower: a tower which satisfies the axioms given by Peano. For Dedekind, this was all number needed to be. If some set resembled the numbers, inasmuch as it satisfied the axioms of Peano, then for all intents and purposes, it *was* the numbers. Numbers, to Dedekind, were a kind of structure, not a specific thing. There was no ultimate thing which was the number 1, only number structures which contained something that could be treated as 1.

Moreover, Dedekind could *prove*, and not merely assert, that the definitions for addition and multiplication given by Peano were sound. Thus, in this respect, he was on much more secure logical footing than Peano. In another respect, he was in quite controversial territory. He was perhaps walking too far into the metaphysics department, treating the infinite as an object of mathematical thought.

It turns out that there are critical pitfalls when one starts doing this sort of thing. A year after Dedekind, the mathematician and occasional philosopher¹ Gottlob Frege independently published an important treatise on logic and the foundations of number, but this time with such precision that his rules for logic could be reduced to almost pure mechanism on symbols.

Dedekind, as I mentioned, had no determinate concept of number, and viewed them instead as being any set of things which have a certain structure. Frege sought out something more solid, and his idea was elegant and ingenious.

Frege’s logic was not a logic of sets like Dedekind’s, but a logic of abstract properties and relations. Properties are things like the property of being an odd number, or the relation of one number being greater than another, or the property of being bald, or being the colour blue, or whatever. For Frege, even properties have properties. And Frege used these “higher-order” properties to identify the numbers: the number 1 would be the property of properties that are satisfied uniquely. The number 2 would be the property of properties that are satisfied dually. The number 3 would be the property of properties that are satisfied trially, and so on.

The problem is that, on this account, the number 1 is *big*. Really, massively big. There are not just infinitely many properties which are satisfied singularly. There’s even more than that! And such massively big things in logics such as Frege’s and Dedekind’s contain fatal traps. Russell, a mathematician and occasional philosopher, who was hugely inspired by Frege’s treatise, first noticed that Frege’s beautiful computation rules, when applied to really big properties, could be shown inconsistent. The paradox he identified bears his name.
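A toy analogue of the trap can even be run in Python (my illustration, not Russell’s formulation). Model a property as a predicate on predicates, and write down the property of not holding of oneself:

```python
# The "Russell predicate": it holds of a predicate p exactly when
# p does not hold of itself.
def russell(p):
    return not p(p)

# Asking whether russell holds of itself demands the opposite of its
# own answer, so the question never settles: Python recurses until it
# gives up.
try:
    russell(russell)
except RecursionError:
    print("no consistent answer")
```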

Another mathematician, Georg Cantor, a close friend and correspondent of Dedekind who did more pioneering work on the theory of mathematical sets than anyone in the 19th century, had long recognised that some sets were just too big to be contained in thought. He was not as perturbed as Frege by this outcome, and happily attributed divine significance to these conceptions which he referred to as “actual infinities.”

Cantor’s stamina in the face of logical complications is admirable, but mathematicians could hardly be expected to follow him in his metaphysical musings. Russell, an analytical atheist, continued his own investigations into the foundations of mathematics after Frege had abandoned the project, inventing the important concept of type in order to prevent the paradoxes. Meanwhile, theorists following Cantor and Dedekind would take a wholly different approach, and exploit the axiomatic method. They found a set of axioms to describe logical sets, axioms which were carefully chosen so as to avoid the obvious paradoxes. Today, most mathematicians regard these axiomatic set theories as circumscribing the whole of mathematics, delineating the basis for further mathematical definitions and the scope of mathematical proof.

In these axiomatic theories, Dedekind’s metaphysical claim that an infinite set exists is made axiomatic, and metaphysical arguments for its truth are now left to philosophy. Frege’s original idea, refined by Russell, that numbers have a determinate meaning in the theory of abstract properties and relations (a position known as logicism), has meanwhile fallen out of favour. Reigning instead is Dedekind’s idea that numbers are merely a kind of abstract structure. However, Frege and Russell’s idea of specifying mechanistic rules for logic and mathematical reasoning at least took off in a big way with the arrival of the computer.

So where do we stand now? Personally, I am happy to follow Dedekind and say that numbers are things that satisfy the axioms identified by Peano, wherever you happen to find them, be it by reflecting on your own thoughts as Dedekind did, or by finding them as objects postulated to exist by an axiomatic set theory. Peano’s axioms thus also count as a definition for the structures we call numbers. The axioms just require that we can point to our first number 1, and to an operation called successor which allows us to obtain any other number. It is then a matter of logic that there are unique operations on numbers \(+\) and \(\times\) which satisfy the equations that Peano took to be definitions:

\begin{align*} m + 1 &= S(m)\\ m + S(n) &= S(m + n)\\ \\ m \times 1 &= m\\ m \times S(n) &= m \times n + m \end{align*}

And we can thus define addition and multiplication to be just these operations. From these and other definitions, all of our theory of numbers emerges.
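As before, these two recursions are mechanical enough to transcribe into Python. This is a sketch of my own, with a number represented as the symbol `"1"` or a nested successor:

```python
# Peano numerals again: "1" is the first number, S(n) wraps a number.
def S(n):
    return ("S", n)

ONE = "1"
TWO, THREE = S(ONE), S(S(ONE))

def add(m, n):
    if n == ONE:
        return S(m)               # m + 1 = S(m)
    return S(add(m, n[1]))        # m + S(k) = S(m + k)

def mul(m, n):
    if n == ONE:
        return m                  # m * 1 = m
    return add(mul(m, n[1]), m)   # m * S(k) = m * k + m

assert mul(TWO, THREE) == add(TWO, add(TWO, TWO))  # 2 * 3 = 2 + 2 + 2
```

The theorems that these operations exist and are unique are a matter of logic; the code merely runs the equations as rewrite rules.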

But I do not believe this is the last word. As I look back over the history of mathematics, I note that this view of the nature of numbers is less than 150 years old, and comes at the end of numerous upheavals in our conceptions of what mathematics is about.

I hope there will be future mathematicians who, centuries from now, will enjoy new revolutions in our understanding. Perhaps one day, geometry will conquer both set theory and number and regain its throne as the ultimate foundation of mathematics. Or perhaps something else entirely will be conceived. The grand edifice of mathematics has survived these last few thousand years on shifting sands, and I think it is too early to claim that matters are finally settled. Who knows? Maybe we’ll scrap foundations entirely.

What I will say is that in the last 150 years, we have made huge strides in understanding axiomatics itself, and how one can axiomatically approach numbers. In whatever way the foundations of mathematics develops in the future, I expect this modern understanding to always be part of the conversation.

Footnotes:

¹ I'm kidding with all such remarks. However, Frege did once suggest that every philosopher was at least half a mathematician and every mathematician at least half a philosopher. Admittedly, this probably says more about Frege's outlook than it does about either mathematics or philosophy.

_______________________________________

So there you have it.

Hope this was as enjoyable and informative for all of you as it was for me.

I'll have a new offering in the next couple of days.
