### Atheist Professor Destroys Evolution...

Well, that's a nice clickbait title, isn't it? Is there any substance to it?

Of course not, but we should go through the motions anyway, not least because we haven't had an evolution post for a bit, and this sort of thing, when presented as an argument, really makes me despair for the state of education.

This post was inspired by a certain Twitter user, one Bob Martin, a science-denier whose favourite fallacy seems to be our old friend the argumentum ad verecundiam, which you may remember from such posts as Argumentum ad Verecundiam and the Genetic Fallacy. In this instance, he wishes us to offer our reverence to Dr Richard Lumsden, Harvard alumnus and professor of parasitology and cell biology. Lumsden isn't the only 'academic' he offers, of course. He also likes arch-cretin David Berlinski, but that's for another post, maybe another offering on why philosophers who offer conclusions are doing it wrong.

Here, I want to focus on a video offering from Lumsden. Here it is.

I'm going to ignore the preamble and get straight to the questions. Here's the first:
Last month you taught that mutations were genetic disasters. How, by natural selection, can they produce new and better structures?
This question is incoherent, and if the good doctor didn't pick up on it, he's an imbecile. Mutations aren't 'genetic disasters'. Any competent professor, or even a competent layman - like me - who genuinely understood evolutionary theory, would have interjected at the end of the first sentence. Not having seen the lecture to which the putative student is referring, we can only assume that Lumsden did indeed teach that mutations are genetic disasters. How else to explain the fact that he didn't stop and correct the student there and then? Taking this episode at face value, it would appear that Dr Lumsden has managed to get through cell biology and parasitology to obtain a doctorate without even a rudimentary understanding of evolutionary theory.

In a previous outing, Has Evolution Been Proven, far and away my most popular post to date (which makes me wonder if I shouldn't just write a book about evolution for the sales), we talked at length about what evolution is, what it isn't, and what some of the vast swathes of evidence are. Anybody who's read that post and absorbed it is equipped to spot the problems in the question presented here, which raises the question of why a professor teaching evolution-oriented subjects struggled with it so much.

Let's be clear here, even though this will involve covering deeply rutted ground. The vast majority of mutations in the genome are neutral or nearly neutral. In that earlier post, we did a comparison between the precursor alleles for insulin in humans and lowland gorillas. This wasn't made explicit in that post, but the entire gene is 333 nucleobases long (meaning that if you put the two together, it's 666 nucleobases... woooOOOOOooo!). Of those 333 nucleobases, there were four that were different. That doesn't sound like much but, set against a per-generation rate of approximately 350 mutations across the $3.2 \times 10^9$ base pairs of the genome, four differences in a single 333-base gene represents substantial divergence. Even with such a massive difference in the nucleobases in the alleles, the insulin precursor protein is identical. Some disaster!
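To put the quoted figures side by side, here's a back-of-the-envelope sketch using only the numbers from the text; it ignores selection, back-mutation and population genetics entirely, so treat it as an illustration of scale, nothing more:

```python
# Back-of-the-envelope sketch using only the figures quoted above:
# ~350 new mutations per generation across a ~3.2e9 bp genome, and a
# 333-base insulin precursor gene with 4 differences between the species.
# It ignores selection, back-mutation and population genetics entirely.

genome_size = 3.2e9        # base pairs
mutations_per_gen = 350    # new mutations per generation (approximate)
gene_length = 333          # nucleobases in the insulin precursor gene

rate_per_base = mutations_per_gen / genome_size
print(f"per-base, per-generation rate: {rate_per_base:.2e}")  # ~1.1e-7

per_gene = rate_per_base * gene_length
print(f"expected hits on this gene per generation: {per_gene:.2e}")

# Naive count of generations needed to accumulate 4 differences in this gene
generations = 4 / per_gene
print(f"naive generations for 4 differences: {generations:,.0f}")
```

Even on this crude reckoning, a handful of fixed differences in one small gene corresponds to a great many generations of divergence, which is the point being made above.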

For more information on how new structures can be constructed, see both the linked post and Irreducible Complexity and Evolution. We're going to move on to question 2.
Aren't the odds of the random assembly of genes mathematically impossible?
This is a stupendous question! Not because it's problematic for evolutionary theory, but because it exposes a deep failure of understanding of not only evolutionary theory, but the nature of probabilities. There's also a terminological problem that I'll come back to later, but let's first look at the probabilities, because there's an extremely common logical fallacy lurking in there, so it will be instructive.

I've said before that I'm no mathematician. Indeed, most of the time, when I stray tentatively into the realm of mathematics, I get into such trouble that I require the assistance of one of my wonderful friends to throw me a lifeline, as regular readers will already be aware. Yet even I grok probability better than this, while somebody teaching undergraduates apparently couldn't field this question without it challenging his entire education. Frankly, I find this fishy in the extreme, and suspect some skullduggery in the presentation. No matter; we'll treat it at face value.

Probabilities fall firmly between two values: zero and one. An event with zero probability is one that is not going to happen (with certain caveats). An event with a probability of one is a certainty (again, with some caveats).

Ultimately, any event with a non-zero probability, given sufficient time and/or a sufficiently large sample set, becomes statistically inevitable. Way back at the beginning of this venture, in In the Beginning, we looked at some fallacies of intuition, and specifically talked about an event whose probability is so close to zero that it's unlikely to happen any time in the past or future life of the universe, namely me (or somebody of my size) walking through a wall. The probability of this occurring is so ridiculously small as to appear as an atom in the shade of Jupiter when compared to the kind of numbers that creationists like to throw around yet, if this were not actually possible, then neither would our existence, because stellar fusion would be a vanishingly rare event. And forget computers reliant on microchips, as discussed in The Certainty of Uncertainty, because they rely on the self-same process.
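The arithmetic behind 'statistically inevitable' is simple compounding: the chance of at least one occurrence across $n$ independent trials is $1-(1-p)^n$, which creeps towards one no matter how tiny $p$ is. A quick sketch, using an invented one-in-a-billion event for illustration:

```python
# The chance of at least one occurrence of an event with per-trial
# probability p, over n independent trials, is 1 - (1 - p)**n.
# p here is an invented one-in-a-billion event, purely for illustration.

p = 1e-9
results = {}
for n in (10**6, 10**9, 10**11):
    results[n] = 1 - (1 - p) ** n
    print(f"n = {n:.0e}:  P(at least once) = {results[n]:.6f}")
# Over a billion trials the 'one in a billion' event is more likely
# than not; over a hundred billion it is a practical certainty.
```

This is exactly why creationist 'impossibility' calculations fail: they quote the per-trial probability while ignoring the staggering number of trials.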

ETA: Probably the Worst Argument in the World addresses spurious probability calculations in considerably more detail.

What about the term I glossed over? Well, it's one that crops up all the time in apologetic excrement such as this video, and it's one that we should be challenging vigorously and vociferously: Random.

This word is a dirty word in some circles, but it needn't be. It's ubiquitous in creationist apologetics, and is employed as a blunt tool to bludgeon opponents into frustration but, once properly understood, it's a source of great enlightenment.

Random, in the way it's employed in rigorous fields, means 'statistically independent': no outcome has any bearing on any other. It doesn't mean, as some suggest, 'uncaused', which is the way the alleged student was probably using it, and certainly the way Lumsden was using it. In the special case of a uniform distribution, it also means that any single outcome is exactly as probable as any other. In Has Evolution Been Proven, we talked about what a proper treatment of randomness in evolution looks like, and we introduced a new term, stochastic, to describe how evolution really works.
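The distinction can be made concrete with a toy sketch (the variant names and fitness numbers below are invented purely for illustration): mutation is 'random' in the strict sense, with every site equally likely and each draw independent, while selection weights the outcomes, making the process as a whole stochastic rather than uniform:

```python
import random

random.seed(42)

# 'Random' in the strict sense: every outcome equally probable, and each
# draw statistically independent of the last. Mutation is modelled this way.
sites = list(range(10))
mutated_site = random.choice(sites)   # uniform: every site equally likely

# The process as a whole is stochastic, not uniform: selection weights
# outcomes by fitness, so some are far more probable than others.
variants = ["A", "B", "C"]
fitness  = [0.1, 0.3, 0.6]            # invented values, purely illustrative
survivor = random.choices(variants, weights=fitness, k=1)[0]

print(f"mutated site: {mutated_site}, surviving variant: {survivor}")
```

Run the weighted draw many times and the fitter variant dominates, which is the whole point: random inputs, decidedly non-random outputs.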

Let's move on. Question 3.
Where exactly, in the fossil record, is the evidence for progressive evolution, the transitional forms between the major groups?

Here, I'm going to include Lumsden's response, because it's entirely the sort of bollocks we've come to expect from creationists.
You know, most of them, come to think of it, are fully-formed kinds in their own right...
Now Lumsden cuts back in with narration, but this snippet is sufficient to expose the lie. Either this man lost his marbles, or he taught evolutionary theory without understanding it for years, or this entire story is fabricated from whole cloth.

If you don't understand that all fossils are transitional, that all organisms are transitional, that evolution is itself nothing other than transition, then you aren't qualified to venture an opinion.

Also, nobody who remotely understands evolutionary theory would expect any organism to be partly formed. All organisms are fully formed, or they'd quickly go the way of the dodo. As discussed in the prior article, all organisms are the same species as their parents. This is really elementary stuff.

This man might have passed exams showing that he grasped the core mechanics of parasitology and cell biology, but he doesn't have even a basic understanding of evolutionary biology as can be found in any number of books for the lay audience.

In the portion that follows, he drops the bomb: the realisation, in the face of these questions he couldn't answer, that god exists. Wow! That was all it took? I can't answer these few terrible questions, therefore evolution is false, therefore the only explanation is god!

This, ladles and jellyspoons, is precisely why the argumentum ad verecundiam is a fallacy. Here's a man, qualified in a relevant field, yet unable to answer the most basic and idiotic questions pertaining to that field.

It's a rule that we're not supposed to speak ill of the dead. Well bollocks to that. This man is an imbecile and had no business teaching dogs to shit in the sand, let alone being left in charge of the education of impressionable young minds. Science is hard enough as it is, without its future being left in the hands of people whose grasp of logic and reasoning is barely up to the task of tying shoelaces unaided.

I'm going to link this post again, just to be sure. This is what evolution is, what it isn't, and how we know this is a fact.

Let me say that loud and clear: Evolution is a fact. It's been observed occurring at every level predicted by theory.

Deal with it.

Nits and crits welcome, as always. If anybody spots anything in the video that I didn't completely eviscerate, let me know and I'll revisit.

### I Had No Need Of That Hypothesis

There's a common argument that science must exclude God and the supernatural from its enquiries. This argument fundamentally misunderstands what science is, what it does, and what its remit is.

I've been plugging away at an offering on the inanity that the Earth is expanding, but my eyes keep glazing over, so I thought I'd set it aside for a short while and delve into the more light-hearted arena of scientific epistemology and address this and some related arguments having to do with the burden of proof and absence of evidence. These arguments are somewhat ubiquitous crutches of the religious apologist, and I intend to shred them, but first, some housekeeping.

The place for us to begin is to talk about what it is to be a sceptic. Scepticism is often presented by the apologist as the position that everything should be doubted. This is a massive simplification of what scepticism really is. Scepticism is a process by which we assess truth-claims. It involves only ascertaining whether the available evidence is sufficient to warrant tentative acceptance of any given assertion, and whether there's anything standing in opposition to it. It's a heuristic tool, formulated to improve our understanding of the world and our confidence in the results of our enquiries. Properly applied, scepticism places limits on what can be deemed knowledge, and teaches us to take great care in our thinking. This self-same heuristic, when applied to phenomena, goes by another name: Science.

Science is a branch of philosophy. Many treat philosophy and science as having different remits, and even go so far as to say that philosophy deals with the questions that science can't answer. In some respects, I've already addressed these claims in earlier posts, particularly in Who, What, When, Where, How, Why? and Deduction, Induction, Abduction and Fallacy, which dealt with what philosophy really is and how logic is employed in the sciences respectively, but here I want to be a little more explicit, particularly to address some popular misconceptions.

In the first of those posts, I talked at length about the disconnect between what people perceive philosophy to be and what it actually is. Specifically, I argued that philosophy is about ensuring that we're asking the right kinds of question. Different branches of philosophy deal with different kinds of question, largely differentiated by the means by which we can test proposed answers to said questions.

In mathematics, we test our proposals by constructing proofs which, in practice, entails starting with statements that are taken to be axiomatic, and progressing from them to consequences that must also be true, as long as we haven't made any logical mis-steps. In science, we test our ideas by means of the predictions they generate and their correlation with observation. In other areas, there's not a huge amount we can do beyond testing our ideas for logical consistency. Ultimately, unless we can make contact with an observation at some point, or otherwise demonstrate that our starting assumptions are true, then our ideas have little epistemological value (there's an important caveat here; thought experiments have had great value in science in helping us to formulate better questions, but they must still make testable predictions).

There are some important ideas that we need to take into account when we're evaluating hypotheses, and among the most important is the idea of parsimony. Parsimony is simply economy, and it enters scientific thought in the form of a principle popularly known as Occam's Razor. This principle is often horribly misunderstood, not just by the laity, but even academic sources such as the Stanford Encyclopaedia of Philosophy and the Internet Encyclopaedia of Philosophy. In those and other sources, it's formulated as something along the lines of 'the simplest hypothesis is the best'. This is not a million miles away but, as we discussed in Irreducible Complexity and Evolution, simplicity and economy don't reside on the same spectrum.

Properly, the opposite of simplicity is complicatedness, while the opposite of economy is complexity. Complexity is what Occam's Razor actually deals with. Where two competing hypotheses are compared, we should select the one with the fewest entities or assumptions. That's not to say that the more economical or parsimonious hypothesis is necessarily correct, only that we should rule out the more parsimonious hypothesis before moving on to more complex hypotheses. The motivation for doing this should be reasonably clear. Obviously, the more economical ideas are going to be the easiest to rule out, in a process that underpins all scientific progress. This is where we'll go next.

Another critical idea in science is the notion of falsifiability. This was famously formalised by Karl Popper as a solution to the demarcation problem, the problem of how to separate scientific ideas from non-scientific. It tells us in a nutshell that, if there is no way in principle to show that an idea is incorrect, it isn't possible to test it, thus it isn't a scientific idea, and ultimately has no epistemological utility. Any such idea can and should be discarded on that basis alone.

A really important feature of any scientific test is the null hypothesis: the default assumption that there is no effect or relationship, which our observations must be strong enough to reject. Framing tests this way forces us to state in advance what would count against our hypothesis, and it's this discipline that gives falsifiability its practical teeth, providing a means by which we can robustly test our ideas.

Is God a scientific idea? This is a thorny problem, and it's kept philosophers up at night for centuries. Part of the problem is that there's very little information in that word for us to be able to reasonably assess it for scientific merit. There's actually an entire school of atheists whose position can be summed up as 'no deity has ever been sufficiently coherently defined'. These are the theological non-cognitivists, and they have a point. Many apologists will direct you to the 'God of classical theism', as if that answers the definition question but, insofar as this entity has been defined at all, the definition is highly problematic. I address some of the issues in All Kinds of Everything, in which I treat the famous 'omnis' from a logical standpoint, but the problems run far deeper than that because, aside from these logically absurd attributes, the entity still hasn't been defined in any robust sense. That said, this particular entity can be considered a scientific idea of sorts, because it's at least possible to falsify it, and this has been done.

Those uninitiated in public debate will define it as 'the God of the bible' but this is even more vague, and runs into issues as soon as you begin to address the claims, not least because the apologist will often insist that the counters offered don't apply to their conception for one reason or another. It's a kind of bait-and-switch, in which the switch is never completely made, only denial of the specific attribute at the root of the objection.

A corollary problem with this is that God seems to be a different entity for each believer, and it's difficult to reach agreement between one believer and another. This is the reason that there are so many denominations of the large religions. The problem goes even deeper than that, and there's an old joke that deals with it. It concerns the First Baptist Church, and how the Second Baptist Church can tell you everything that's wrong with the first. Matt Dillahunty quipped that it's even worse, and that those in the second row of pews in the First Baptist Church can tell you everything that's wrong with those in the first row.

The point is that there seem to be as many conceptions of deity as there are believers, and it's changeable. It simply isn't possible to debunk several billion conceptions of deity at once, and that makes the idea of god functionally, if not in principle, unfalsifiable, and therefore unscientific.

Individual conceptions are very much falsifiable, though. Any entity that is proposed to manifest in any way in the world should be falsifiable, dependent on the details. Indeed, it isn't stretching the point too far to suggest that any entity that can circumvent the laws of the universe on a whim is comprehensively falsified by the existence of science, as any universe containing such arbitrary processes would be one in which science would almost certainly not be possible.

There's a marvellous book by physicist Victor Stenger, God: The Failed Hypothesis, which fairly concretely eviscerates the idea of a deity as a scientific hypothesis, and I highly recommend it.

Anyhoo, there's a famous anecdote about Pierre-Simon Laplace presenting his masterwork Mécanique Céleste to Napoleon. Napoleon, apparently fond of asking awkward questions, remarked:

"M. Laplace, they tell me you have written this large book on the system of the universe, and have never even mentioned its Creator."
To which Laplace replied:
"Je n’avais pas besoin de cette hypothèse-là."
No need of that hypothesis indeed.

In short, science doesn't remotely need to exclude god, it just needs not to include it until there's need of the hypothesis, and until said entity is sufficiently well-defined to bring it into the realm of testability. If you can present evidence for such an entity, bring it on so we can test it. The same goes for any other asserted entities and phenomena.

### Guest Post: Definitions and Axioms

This is a singular treat for me and, I hope, for you, dear reader. Over the last decade, it's been my privilege to learn at the feet of some really exceptional people, whose erudition in all sorts of fields has served to ensure that I keep Socrates' famous aphorism concerning knowledge at the forefront of my thinking. It's long been my view that the most important attribute of a true intellectual is the preparedness for being wrong. Indeed, if you're not prepared to be wrong, you're doing it wrong, thus you're always wrong in some measure. I've been wrong many, many times and, where I've been right, it's almost invariably been because of the intervention of one or other of my extremely knowledgeable friends.

This offering has kindly been presented by one of the bright lights in this constellation, my dear friend Phil Scott, a.k.a. @inhabitingvoid. He generally describes himself, quite humbly, as a computer scientist. That description may or may not be apt but, whenever I've said something stupid about mathematics or logic - a rather more regular occurrence than I'd care to admit too vociferously - he's been ready to intervene and spare my blushes. This post is just such an intervention, and I'm grateful to him for it.

I'll shut up now and give the floor to Phil.
_________________________________________________________________________

Definitions and Axioms

In another post on this blog, my friend Hackenslash talks about the relationship between mathematics and science, responding to Eugene Wigner’s paper The Unreasonable Effectiveness of Mathematics in the Natural Sciences. He volunteers some ideas on the definitions and axioms of mathematics:

The beauty of mathematics is that it’s axiomatically complete. We can build from the simplest of axioms, defining our terms as we go, and be sure that what we’re building on has good, solid foundations. Specifically, the axioms of mathematics aren’t accepted because they seem to be true, but because they are definitionally true. In other words, we define 1 as the singular integer, and 2 is 1+1. You could say that the definition of ‘1’ is the first axiom of mathematics, upon which all other axioms are built. Thus we can define 2 as the sum of the integers 1 and 1. And from there we can build another axiom, namely, the addition of two integers gives the sum (we’ll make this explicit shortly). Much of the rest is about the relationships between operators, so that we can build up to 16 being 1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1, and that being the same as 8+8, which is the same as 8x2, from which we can build another axiom, namely that the multiplication of two integers gives the product. Then we can build a whole set of relationships around the equivalence of these operations on specific sets of integers to build an axiomatically complete system, and it is this way because of the fact that the core axioms are necessarily true by definition.

This isn’t right to my modern eyes, but looking back over the history of maths, the account puts Hackenslash in fine enough company. In this post, I’ll try to look through some of that history, and draw on the veritable revolution in our understanding of axiomatics in the last 150 years, with which we can clarify what is going on with axioms and definitions with laser precision. While I don’t really have the space to get into the technical details (and I fear doing so would make this post very dry), Hackenslash has kindly allowed me to express my broader thoughts on the matter.
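For contrast with the quoted account, here's a toy sketch of the modern, Peano-flavoured treatment, in which numbers are generated from zero by a successor operation, and addition and multiplication are defined recursively rather than read off a list of sums. The names and encoding here are mine, purely for illustration of the axiomatic style; this is in no way a serious implementation of arithmetic:

```python
# Toy Peano-style construction: numbers as iterated successors of zero.

ZERO = ()

def succ(n):
    """The successor operation: S(n)."""
    return (n,)

def add(m, n):
    """Defined recursively: m + 0 = m;  m + S(n) = S(m + n)."""
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

def mul(m, n):
    """Defined recursively: m * 0 = 0;  m * S(n) = (m * n) + m."""
    if n == ZERO:
        return ZERO
    return add(mul(m, n[0]), m)

def to_int(n):
    """Convert back to an ordinary integer for display."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

one = succ(ZERO)
two = succ(one)
eight = mul(two, mul(two, two))
print(to_int(add(eight, eight)), to_int(mul(eight, two)))  # 16 16
```

Notice that facts like 8+8 = 8×2 = 16 are not axioms here at all; they fall out as consequences of two definitions and the successor operation.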

When we’re discussing the axioms of mathematics, we’re talking about something with a long history. Maths is really old. Even the useless abstract stuff is old. I mean, there’s pretty good evidence that about three and a half thousand years ago, in present day Iraq, the Babylonians had got so bored of using numbers for useful stuff like accountancy that they had worked out an algorithm to calculate integer solutions to Pythagoras’ Theorem and were going around chiselling the output on clay tablets. This was a thousand years before a Greek latecomer took credit for the theorem.
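Quite how the Babylonians generated their triples is still a matter of scholarly debate; the standard modern recipe, Euclid's parametrisation, at least shows how mechanically such integer solutions can be churned out:

```python
# Generating Pythagorean triples with Euclid's parametrisation:
# for integers m > n > 0:  a = m^2 - n^2,  b = 2mn,  c = m^2 + n^2.
# (The Babylonians' actual method is debated; this is just the standard
# modern way to produce such triples.)

def triples(limit):
    """All triples generated by parameters with m < limit."""
    out = []
    for m in range(2, limit):
        for n in range(1, m):
            a, b, c = m*m - n*n, 2*m*n, m*m + n*n
            out.append((a, b, c))
    return out

for a, b, c in triples(4):
    print(a, b, c)   # 3 4 5, then 8 6 10, then 5 12 13
```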

Don’t get me wrong. I don’t like to put down those canny Greeks. The Greeks were prolific, not only discovering techniques of extraordinary subtlety, but also hitting on the importance of mathematical proof. They also hit upon something wonderful that we would nowadays recognise as axiomatics, or, as they might have called it, the Elements of mathematics.

Axiomatics may not appear to have a clear function. It isn’t a way to generate new mathematics. And this singular uselessness is mostly still the case today. So why bother? I suspect one reason the Greeks pursued it was because they had accumulated so much mathematical knowledge that they needed a way to properly organise it. Their exposition of axioms and fundamental definitions were not a means to figure out the nature of mathematics, but only a means to curate and catalogue what they already knew. Their insight here was that they could use the notions of proof and logical consequence in order to build a mathematical taxonomy. Just as a modern biologist orders life by genetic history, the writer of an elements orders mathematics by logical derivation. And just like in biological taxonomy, mathematical taxonomy could show that huge diversity can originate from just a few ultimate ancestors, the axioms.

The only elements that has survived the last two millennia is the one written by a Greek mathematician called Euclid. The guy may not have come up with a single original mathematical theorem himself, but he did such a great job curating what was around him that he did perhaps more than anyone else to preserve Greek mathematical genius through the darker ages. He produced a multivolume book which would become the most influential textbook of all time, and whose broader influence in the West is perhaps only surpassed by the Bible.

Despite this, I often find that my hotel rooms are missing a complimentary copy of Euclid’s Elements, so I have to go here. Browsing through, I am always reminded how utterly profound I find this book. Oh, it may not represent the height of ancient Greek mathematical sophistication. If you want that, you should probably start with Archimedes, and then if you’ve still got an appetite, read Apollonius’ On Conics, which might be one of the most complicated mathematical treatises ever written, and not in a good way.

Nevertheless, if you want to see how axioms, assumptions and definitions work in mathematics, you get a good taste with Euclid, in a way which still characterises how mathematics is practised today. The book goes to great effort to find the first principles of mathematics, and attempts what Hackenslash attempts above, to define the most elementary notions. Here’s Euclid’s definition of 1 and of number:

A unit is that by virtue of which each of the things that exist is called one.

A number is a multitude composed of units.

Okay, I’ll be frank. While I like these words, they don’t exactly resonate with my sense of mathematical respectability. Still, it’s curious that Euclid thought that “number” was a term that needed a definition at all, and even more curious that this definition appears in the seventh volume of Euclid’s Elements, a long way from where you’d expect to see the elementary definitions of mathematics.

This is because Euclid doesn’t start with numbers. He starts with geometry: points, lines, triangles, circles, that sort of thing. His first two definitions attempt to pin down “points” and “lines”:

• “A point is that which has no part.”
• “A line is a breadthless length.”
These definitions really set alarm bells ringing for me. I fear that if I tried to explore them further, I’d risk walking into a metaphysics department and forgetting to do any maths. His five axioms, or postulates, are a good deal better:
1. To draw a straight line from any point to any point.
2. To produce a finite straight line continuously in a straight line.
3. To describe a circle with any centre and radius.
4. That all right angles equal one another.
I’ve elided the fifth axiom. It’s pretty wordy, and no-one’s ever been happy with it. I’ll talk about it later.

So for Euclid, numbers are not axiomatic at all. Instead, numbers needed to be built out of bits of geometry. Specifically, a number would be made up of units, and a unit would be whatever geometrical object you used for such a purpose. You will usually see Euclid draw his units as simple line segments, and then draw bigger numbers by connecting the unit line segments end-to-end.

It’s like we have a bunch of unmarked rulers of various lengths on the floor, and we pick one up and say to the world “this is to be the unit; I will thenceforth measure everything in relation to it.” Humans have done this throughout history, variously declaring and then standardising their units of measurement as cubits, hands, feet, inches, leagues and metres. The choice of a unit is arbitrary. We just need to agree on it, and be able to use units interchangeably for whatever we’re trying to do (construction or land surveying, say).

But this is all too tangible, and I say tangibility is a barrier to the mathematician’s imagination. The barrier explains why the Greeks did not have the foresight to invent the number 0 or to invent the negative numbers: they thought numbers were line segments, things you could join end-to-end. How could you have a negative number of those? How could you put units together to get 0? How do you use a ruler to measure nothing?

There are fair reasons for the Greeks’ self-imposed handicap. In antiquity, numbers were impoverished. Counting numbers were well understood: one potato, two potato, three potato, four. Ordinal numbers were similarly easy: first place, second place, third place, fourth. And fractions aren’t too hard either: divide the cake into six bits. The six bits make the whole.

But just what the hell is the square root of 2? I cannot ignore such a number, because if I draw a square, and declare its side to be of unit length, I would be flummoxed when asked how to measure the diagonal.

We moderns, familiar with Pythagoras’ Theorem, will say that the diagonal would be measured as $\sqrt 2$, but the Greeks and their predecessors had no idea how to notate this. So their numbers were not up to the task of describing even very basic geometry such as the diagonals of unit squares.
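The discovery that so troubled the Greeks can be stated in a few lines; here is a sketch of the classic argument that no ratio of whole numbers squares to 2:

```latex
% Sketch: the diagonal of the unit square is no ratio of whole numbers.
\textbf{Claim.} There are no integers $p, q$ with $(p/q)^2 = 2$.

\textbf{Proof sketch.} Suppose $\sqrt{2} = p/q$ with $p/q$ in lowest terms.
Then $p^2 = 2q^2$, so $p^2$ is even, and hence $p$ is even; write $p = 2k$.
Substituting gives $4k^2 = 2q^2$, i.e.\ $q^2 = 2k^2$, so $q$ is even too.
But then $p$ and $q$ share the factor 2, contradicting lowest terms. $\square$
```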

And so if numbers had a geometrical counterpart, but not every piece of geometry had a numerical counterpart, it must be the case that the world of numbers is more impoverished than the world of geometry. Choosing between the two, it is clear that geometry would have to be the foundation, and numbers would have to be built over the top.

We shouldn’t feel any superiority here. It’s something of a cheat to say that the diagonal of the unit square has length $\sqrt 2$. When you unpack this claim, you end up going around in a circle: “the diagonal of the unit square is whatever is that number which measures the diagonal of the unit square”. Saying just what this number is without running in circles takes a surprising amount of mathematical sophistication that wasn’t available until the 19th century, and the solution is still sufficiently sophisticated that we don’t bother exposing students to it, even ones studying mathematical sciences at university.
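The flavour of the 19th-century solution is to pin the number down from outside, as the limit of a shrinking nest of rational intervals, rather than by pointing at the diagonal; the idea behind Dedekind cuts and Cauchy sequences. A bisection sketch gives the gist:

```python
# Trap sqrt(2) in a shrinking nest of intervals, rather than defining it
# by pointing at the diagonal. A sketch of the 19th-century idea only.

lo, hi = 1.0, 2.0            # 1^2 < 2 < 2^2, so sqrt(2) lies in [1, 2]
for _ in range(50):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid             # sqrt(2) is in the upper half
    else:
        hi = mid             # sqrt(2) is in the lower half

print(lo)  # approximately 1.4142135623730951
```

Each step halves the interval, and the number is defined by the nest itself; no circular appeal to the diagonal required.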

Students nowadays are mostly taught that numbers come first, and that geometry is something you do with numerical coordinates. Geometry is based on numbers, rather than numbers being based on geometry as the Greeks had it. It took almost two thousand years after Euclid to get to this conception. In the 17th century, the mathematician and occasional philosopher, René Descartes, sowed the seeds by showing how geometry could be based on equations involving coordinates on a graph, and thus showed how geometrical problems could be reduced to high-school algebra. The techniques of algebra had themselves been well-developed over the millennia and they were a much more powerful tool for solving geometrical problems than what was available to the Greeks.
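A small example of the reduction: asking where the line $y = x$ meets the unit circle $x^2 + y^2 = 1$ becomes the algebra $2x^2 = 1$, solvable without drawing anything:

```python
import math

# Where does the line y = x meet the circle x^2 + y^2 = 1?
# Substitute y = x:  2x^2 = 1,  so  x = ±1/sqrt(2).
x = 1 / math.sqrt(2)

# Check that the algebraic answer really lies on the circle
assert abs(x*x + x*x - 1) < 1e-12
print((x, x), (-x, -x))   # the two intersection points
```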

But it is important to realise that Descartes never proposed to replace geometry with coordinates. Descartes still took Euclid as his starting point, and laboriously derived coordinate geometry from Euclid’s axioms. The result was a tiered construction: Euclidean geometry forms the base. The base supports coordinate geometry and algebra. And algebra would be the force-multiplier to support the rest of geometry. What a cunning trick!

However, a few centuries later the game had changed. Mathematicians were feeling ever more confident about numbers and their ability to serve as foundational in mathematics. Numbers had evolved in strange ways, in part because of big money.

No, I’m not talking about banking or other financial services. I’m talking about the fact that back in the Renaissance, maths was a spectator sport, and you could earn a living by competing in tournaments where you had to solve algebraic equations against a clock.

Cash-strapped mathematicians invented some truly clever tricks to win in these tournaments, ultimately inventing both negative numbers and numbers that acted as the square roots of negatives, which we now call “imaginary numbers”. These bizarre objects were viewed as mere conceptual artifacts in secret methods to win big in maths competitions. Adjectives such as “imaginary” reflected a justified suspicion concerning these strange objects, a suspicion born of the fact that no-one could give them the sort of definition that geometers had given for ordinary numbers. But their profound utility could not go ignored for long. The existence of these extremely useful new numbers would upset the previous dominance of geometry: numbers turned out to have a richness all of their own.

An exodus from geometry occurred in the 18th century, and was completed when the geometric foundation itself fell into total crisis. Remember the four axioms from Euclid I gave above? I missed out the fifth, not simply because it is an overly complex axiom, but because its status as an axiom had been in question since the earliest commentaries on Euclid. It is an axiom governing parallel lines, but perhaps the most familiar of its consequences is that the angles of a triangle add up to 180 degrees.

With the axiom’s status in doubt, several mathematicians tried to establish its necessity by showing that its denial would lead to absurdity. In the late 18th century, they began exploring strange and speculative worlds in which the angles of a triangle do not sum to 180 degrees. They encountered plenty of departures from Euclidean mathematics, but an outright absurdity that would guarantee the truth of Euclid’s controversial fifth axiom was never discovered.

There was good reason for this: there was no absurdity! Using the force multiplier of algebra, one mathematician was able to show that the bizarre world implied by denying Euclid’s fifth axiom was realisable, from within Euclidean geometry itself! It seemed that Euclidean geometry admitted that its own axioms were not absolute, and thus confessed that the nature of geometry was forever a moving target. The solid foundations of geometry were replaced with a fluid space of infinite geometries, a shifting sand that was no place to build the rest of mathematics.

And so the arithmetic tier, that had once been built over geometry, was to become the new foundation of mathematics. But there was a problem: arithmetic wasn’t axiomatic, and the loss of an axiomatic foundation left a vacuum that was abhorred by mathematicians. But they were ready. By this time, they had a whole slew of new insights into the axiomatic game.

One of the main insights was a symbolic conception of logic and axiomatics. It may come as some surprise, but the use of symbols to do algebra, our $x$s and $y$s and $z$s and equations, is a Renaissance invention. Prior to this, algebra was done without symbols, using only prose. It seems that the invention of a symbolic algebra opened another window in the mathematician’s imagination, and in his Laws of Thought, George Boole leapt through it when he experimented with using the symbols of algebra as symbols for logic. He saw profound analogies in the laws, analogies that would be seen again and again in disparate areas of mathematics, ushering in the abstract turn that modern mathematics has famously taken in the last century.

And so when Dedekind and Peano rebuilt the foundations of arithmetic, they were able to do so in new symbol systems designed for the purpose. Peano was particularly enamoured with Boole’s symbolic approach to logic, seeing mathematics as essentially the explicit assignment of meaning to symbols. His Italian school would be of great influence in the further development of mathematical foundations, and much of our modern logical notation is due to them.

Here is how Peano begins:

• The sign $N$ means number (positive integer).
• The sign 1 means unity.
• The sign $a + 1$ means the successor of $a$, or $a$ plus 1.
• The sign $=$ means is equal to.

So far, I’m not sure we’re doing much better than Euclid, whose definitions said things like “a point is that which has no part.” But something very different is going on here with Peano: he doesn’t call these definitions. Instead, he calls them explanations, and reserves definition for a different class of declaration. This distinction, by whatever name you call it, is crucial, and has survived into our modern and ultimate standards of mathematical rigour.

For now, I shall attempt to explain Peano’s axiomatics in prose:

• A1: If $n$ is a number, so is its successor, written $S(n)$.
• A2: If two numbers share a successor, they are the same number.
• A3: 1 is no number’s successor.
• A4: If a set of things contains 1, and contains the successor of each number it contains, then the set includes all the numbers.

Peano also makes some definitions:

• D1: I define 2 as the successor of 1.
• D2: I define 3 as the successor of 2.
• D3: I define 4 as the successor of 3.
• D4: I define $m + S(n)$ as $S(m + n)$.

And now let’s do our first theorem:

Theorem: $2 + 2 = 4$

Proof: We just need to unfold definitions:

\begin{align*}
2 + 2 &= 2 + S(1) & \text{(by D1)}\\
&= S(2 + 1) & \text{(by D4)}\\
&= S(3) & \text{(by D2)}\\
&= 4 & \text{(by D3)}
\end{align*}
And so, according to Peano, it seems that $2 + 2 = 4$ is just a matter of definition! Poor Baldrick. He lived too many centuries before our modern advanced mathematics.
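The unfolding in the proof above can be mimicked in a few lines of code. This is a sketch of my own, not anything Peano wrote: numbers are built purely from a unity object and a successor wrapper, and addition does nothing but apply the explanation “$a + 1$ means the successor of $a$” together with D4.

```python
# A sketch (my own encoding, not Peano's): numbers built purely from
# unity and successor, with addition computed by unfolding definitions.
from dataclasses import dataclass

@dataclass(frozen=True)
class One:
    pass

@dataclass(frozen=True)
class S:
    pred: object  # the number this one succeeds

ONE = One()
TWO = S(ONE)     # D1: 2 is the successor of 1
THREE = S(TWO)   # D2: 3 is the successor of 2
FOUR = S(THREE)  # D3: 4 is the successor of 3

def add(m, n):
    if n == ONE:
        return S(m)           # "a + 1 means the successor of a"
    return S(add(m, n.pred))  # D4: m + S(n) = S(m + n)

print(add(TWO, TWO) == FOUR)  # prints True: 2 + 2 = 4 by unfolding
```

Note that `add` never does any arithmetic in the usual sense; it only rewrites expressions according to the definitions, which is exactly what the proof does.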

What about the axioms? Our definitions are all well and good for computing $2 + 2$, but this hardly exhausts mathematics. Here’s a trivial thing that we cannot prove by definition: a number is never equal to its successor. I’ll take a little diversion to go through the proof.

We use A4. We need to think of a set to use this axiom, and the one we will consider is the set of numbers which are not equal to their successors. We expect all numbers to be in this set, and that’s what we’ll use A4 to prove.

We start by confirming that 1 is in the set. That is, we will confirm that 1 is not its own successor. That’s A3: 1 is no number’s successor, and so it is not its own successor.

Next, we confirm that if a number is in the set, so is the number’s successor. That is, we make a supposition for some arbitrary number $n$:

H: the number $n$ is in the set, meaning that $n$ is not its own successor

We now confirm that, on this supposition, $S(n)$ is in the set. That is, we must confirm that $S(n)$ is not equal to its own successor, $S(S(n))$. This follows by A2: if $S(n)$ and $n$ shared a successor, that is, if $S(S(n)) = S(n)$, then $S(n) = n$, which we know is false by our supposition H. Hence, $S(n)$ is not equal to its successor, and must belong to the set.

Thus, by A4, the set contains all the numbers: no number $n$ is equal to its successor $S(n)$.
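Of course, no amount of computation substitutes for the induction above, but we can at least spot-check the theorem mechanically. A throwaway sketch of my own, in which nesting depth stands in for number:

```python
# Not a proof, merely a finite sanity check of "no number equals its
# successor". A number is represented as nested one-element tuples.
def succ(n):
    return (n,)

n = ()  # stands in for 1
for _ in range(100):
    assert n != succ(n)  # the theorem, checked for this particular n
    n = succ(n)
print("no counterexample among the first 100 numbers")
```

The check can only ever visit finitely many numbers; it is A4, the induction axiom, that carries the claim to all of them at once.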

Let us get back to our story. At the time when Peano was laying down foundations for arithmetic, axiomatic geometry had not been abandoned. It had just been relativised. There were now multiple geometries, and each could be given its own axioms. And the axiomatics of these geometries marks a spectacular success of the axiomatic method. The mathematician David Hilbert arrived at an insight that would carry the day. Where Peano had deigned to explain that “the sign 1 means unity”, Hilbert would suggest that such signs should be left meaningless. In his hugely influential Foundations of Geometry, Hilbert begins by saying that there are just things to be called points, lines and planes, without further explanation, insisting indeed that you might as well read mug, table and chair throughout.

If Peano had recourse to this idea, he might have opened his axiomatics with:

• There are things called “numbers”.
• 1 is such a number.
• Every number has something called a “successor”.

and then let the axioms speak for themselves. It is this idea, combined with the symbolics of Peano, which defines the most modern of the axiomatic methods: all the axioms, definitions, theorems and proofs of mathematics can be reduced to code that a dumb computer can understand, precisely because when you get down to bedrock, there is nothing to understand. The final symbols are meaningless, and all that is left to decide what counts as a theorem are the rules of a truly frigid logic.

So to go back to Hackenslash’s opening account:

We define 1 as the singular integer, and 2 is 1+1. You could say that the definition of ‘1’ is the first axiom of mathematics, upon which all other axioms are built.

I say again that he’s in fine company, looking back over the history of mathematics. But we’ve hit absolute pedantry in modern axiomatics, and so the correct account is now this. Rendered in my meagre prose, I can only assure you that the underlying logic is so rigorous that it can be typed straight into a computer:

• We do not define ‘1.’
• We do not define “numbers.”
• We do not define “successor.”
• There is something we shall call “numbers”, in which there is something we shall call ‘1’ such that:
• Every number has a number we call its “successor.”
• No successor is 1.
• To have the same successors is to be the same.
• A number is 1 or a successor. If a set contains 1 and contains the successor of each of its members, then it contains all the numbers.

These are now our axioms. They do not say what numbers are, only assert that there are things which behave as we expect numbers to behave. Numbers, whatever they are, begin with something called ‘1’, and then arise by taking successors, with any number eventually being revealed in this process.
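The list above is close enough to a formal specification that it can be typed, near verbatim, into a proof assistant. Here is a sketch in Lean 4 (the names are mine, and any type whatsoever may play the role of the numbers, exactly as Hilbert would have it):

```lean
-- A sketch: the modern "meaningless symbols" reading of the axioms.
-- Any inhabitant of this structure counts as "the numbers".
structure PeanoSystem where
  Num : Type                  -- something we shall call "numbers"
  one : Num                   -- something we shall call 1
  succ : Num → Num            -- every number has a "successor"
  succ_ne_one : ∀ n, succ n ≠ one          -- no successor is 1
  succ_inj : ∀ m n, succ m = succ n → m = n -- same successors, same number
  induct : ∀ (P : Num → Prop),              -- the induction axiom
    P one → (∀ n, P n → P (succ n)) → ∀ n, P n
```

Nothing here says what `Num` is; the structure only demands that whatever you supply behaves as we expect numbers to behave.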

Everything anyone would ever want to know about numbers has now been given by these axioms. All theorems about numbers are a consequence of these axioms. So whence definitions?

Definitions are, to a plausible extent, merely conveniences. No-one wants to have to write out “the successor of the successor of the successor of ‘1’” when a single symbol such as ‘4’ would do just as well. And so we define ‘4’ to be the successor of the successor of the successor of 1.

All definitions have this character. They are abbreviations that, as you unfold them, eventually take you back to basic undefined notions such as “number”, ‘1’ and “successor.” If we had space to write them out in full (both on paper and in our heads), we might not have bothered with them at all.

Some definitions are somewhat more advanced. Peano defined $m + S(n)$ as $S(m + n)$, but this is quite a funky definition, and must be treated with some logical care. It would do no good to drop the $S$ from the left hand side of the equation and define

$m + n = S(m + n)$

because we proved earlier that this situation is impossible.
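We can even watch this failure happen. Transcribed as code (my own throwaway encoding again), the bad equation never produces a value, because its right-hand side immediately re-asks the question being answered:

```python
# The bad "definition" m + n = S(m + n), transcribed as a recursion.
# It never bottoms out: unfolding it just produces more unfolding.
def S(n):
    return (n,)

def bad_add(m, n):
    return S(bad_add(m, n))  # m + n = S(m + n): infinite regress

try:
    bad_add((), ())
except RecursionError:
    print("no value is ever produced")
```

Peano’s actual D4 avoids this because each unfolding strips one successor off the right-hand argument, so the recursion must terminate.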

So why is Peano’s definition allowed? Peano himself defined definition in terms of his assignment of meaning, but he didn’t give any method to check that meaning had been assigned correctly.

Dedekind took a more rigorous approach that addressed this issue. His treatise on the nature and meaning of numbers takes, as its foundation, sets of things, a thing being any object of thought. This highly abstract starting point characterises Dedekind’s general approach, and permitted him, as he saw it, far more avenues for mathematical creativity than would otherwise be possible.

Dedekind did not claim that the nature of numbers was given by axioms, but instead that numbers were already somehow given in the abstract concept of sets. In order to make headway here, Dedekind had to start by finding some infinite set, and here he does wander quite dangerously into metaphysics:

Theorem. There exist infinite systems. Proof. My own realm of thoughts; i.e. the totality $S$ of all things, which can be objects of my thought, is infinite. For if $s$ signifies an element of $S$, then is the thought $s'$, that $s$ can be object of my thought, itself an element of $S$ […and…] there are elements of $S$ (e.g. my own ego) which are different from such thought $s'$.

This is a wonderful paragraph, in my opinion, especially when one has the hindsight to know where Dedekind is going with it. He is suggesting that one could analogise a thought such as “my own ego” with the number 1, and then take the thought “my own ego is a thought” as the number 2, and the thought “to think that my own ego is a thought is yet another thought” as the number 3, and in this manner, obtain an infinite tower of thoughts that resemble numbers.

Dedekind then proves that any infinite set, such as his infinite set of thoughts, contains within it exactly such a tower, a tower which satisfies the axioms given by Peano. For Dedekind, this was all number needed to be. If some set resembled the numbers, in as much as it satisfied the axioms of Peano, then for all intents and purposes, it was the numbers. Numbers, to Dedekind, were a kind of structure, not a specific thing. There was no ultimate thing which was the number 1, only number structures which contained something that could be treated as 1.

Moreover, Dedekind could prove, and not merely assert, that the definitions for addition and multiplication given by Peano were sound. Thus, in this respect, he was on much more secure logical footing than Peano. In another respect, he was in quite controversial territory. He was perhaps walking too far into the metaphysics department, treating the infinite as an object of mathematical thought.

It turns out that there are critical pitfalls when one starts doing this sort of thing. A year after Dedekind, the mathematician and occasional philosopher1 Gottlob Frege independently published an important treatise on logic and the foundations of number, but this time with such precision that his rules for logic could be reduced to almost pure mechanism on symbols.

Dedekind, as I mentioned, had no determinate concept of number, and viewed them instead as being any set of things which have a certain structure. Frege sought out something more solid, and his idea was elegant and ingenious.

Frege’s logic was not a logic of sets like Dedekind’s, but a logic of abstract properties and relations. Properties are things like the property of being an odd number, or the relation of one number being greater than another, or the property of being bald, or being the colour blue, or whatever. For Frege, even properties have properties. And Frege used these “higher-order” properties to identify the numbers: the number 1 would be the property of properties that are satisfied singularly, the number 2 the property of properties that are satisfied dually, the number 3 the property of properties that are satisfied trially, and so on.

The problem is that, on this account, the number 1 is big. Really, massively big. There are not just infinitely many properties which are satisfied singularly; there are even more than that! And such massively big things, in logics such as Frege’s and Dedekind’s, contain fatal traps. Russell, a mathematician and occasional philosopher who was hugely inspired by Frege’s treatise, first noticed that Frege’s beautiful computation rules, when applied to really big properties, could be shown to be inconsistent. The paradox he identified bears his name.

Another mathematician, Georg Cantor, a close friend and correspondent of Dedekind who did more pioneering work on the theory of mathematical sets than anyone in the 19th century, had long recognised that some sets were just too big to be contained in thought. He was not as perturbed as Frege by this outcome, and happily attributed divine significance to these conceptions which he referred to as “actual infinities.”

Cantor’s stamina in the face of logical complications is admirable, but mathematicians could hardly be expected to follow him in his metaphysical musings. Russell, an analytical atheist, continued his own investigations into the foundations of mathematics after Frege had abandoned the project, inventing the important concept of type in order to prevent the paradoxes. Meanwhile, theorists following Cantor and Dedekind would take a wholly different approach, and exploit the axiomatic method. They found a set of axioms describing logical sets, carefully chosen so as to avoid the obvious paradoxes. Today, most mathematicians regard these axiomatic set theories as circumscribing the whole of mathematics, delineating the basis for further mathematical definitions and the scope of mathematical proof.

In these axiomatic theories, Dedekind’s metaphysical claim that an infinite set exists is made axiomatic, and metaphysical arguments for its truth are now left to philosophy. Meanwhile, Frege’s original idea, refined by Russell, that numbers have a determinate meaning in the theory of abstract properties and relations (a position known as logicism), has fallen out of favour. Reigning instead is Dedekind’s idea that numbers are merely a kind of abstract structure. However, Frege and Russell’s idea of specifying mechanistic rules for logic and mathematical reasoning did at least take off in a big way with the arrival of the computer.

So where do we stand now? Personally, I am happy to follow Dedekind and say that numbers are things that satisfy the axioms identified by Peano, wherever you happen to find them, be it by reflecting on your own thoughts as Dedekind did, or by finding them as objects postulated to exist by an axiomatic set theory. Peano’s axioms thus also count as a definition for the structures we call numbers. The axioms just require that we can point to our first number 1, and to an operation called successor which allows us to obtain any other number. It is then a matter of logic that there are unique operations on numbers $+$ and $\times$ which satisfy the equations that Peano took to be definitions:

\begin{align*}
m + 1 &= S(m)\\
m + S(n) &= S(m + n)\\
\\
m \times 1 &= m\\
m \times S(n) &= m \times n + m
\end{align*}
And we can thus define addition and multiplication to be just these operations. From these and other definitions, all of our theory of numbers emerges.
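Those recursion equations can be transcribed directly into code. Once more a sketch of my own, with numbers encoded as nested one-element tuples; each clause below is one of the equations above:

```python
# The recursion equations for + and ×, transcribed clause for clause.
# A number is nested one-element tuples: ONE, S(ONE), S(S(ONE)), ...
ONE = ()

def S(n):
    return (n,)

def add(m, n):
    if n == ONE:
        return S(m)              # m + 1 = S(m)
    return S(add(m, n[0]))       # m + S(n) = S(m + n)

def mul(m, n):
    if n == ONE:
        return m                 # m × 1 = m
    return add(mul(m, n[0]), m)  # m × S(n) = (m × n) + m

TWO = S(ONE)
THREE = S(TWO)
print(mul(TWO, THREE) == mul(THREE, TWO))  # prints True: 2 × 3 = 3 × 2
```

Notice that nothing beyond the equations is needed: the familiar facts of arithmetic, such as commutativity checked here for one instance, emerge from the recursion.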

But I do not believe this is the last word. As I look back over the history of mathematics, I note that this view of the nature of numbers is less than 150 years old, and comes at the end of numerous upheavals in our conceptions of what mathematics is about.

I hope there will be future mathematicians who, centuries from now, will enjoy new revolutions in our understanding. Perhaps one day, geometry will conquer both set theory and number and regain its throne as the ultimate foundation of mathematics. Or perhaps something else entirely will be conceived. The grand edifice of mathematics has survived these last few thousand years on shifting sands, and I think it is too early to claim that matters are finally settled. Who knows? Maybe we’ll scrap foundations entirely.

What I will say is that in the last 150 years, we have made huge strides in understanding axiomatics itself, and how one can axiomatically approach numbers. In whatever way the foundations of mathematics develops in the future, I expect this modern understanding to always be part of the conversation.

Footnotes:

1. I’m kidding with all such remarks. However, Frege did once suggest that every philosopher was at least half a mathematician and every mathematician at least half a philosopher. Admittedly, this probably says more about Frege’s outlook than it does about either mathematics or philosophy.

_______________________________________

So there you have it.

Hope this was as enjoyable and informative for all of you as it was for me.

I'll have a new offering in the next couple of days.