### Careful With That Dial!

[Image from steelguitarforum.com]
You can tune a piano but you can't tuna fish. Or a universe.

Among the arguments that get trotted out with cyclical regularity is that the universe is fine-tuned for life. On the face of it, this is actually one of the best arguments in the apologist's arsenal for several reasons. It's certainly not an easy argument to debunk with any rigour. That said, defeating it is fairly straightforward once you grasp the issues.

The problem for the counter-apologist is that this argument, unlike most of the arguments we see erected, actually seems to play by all the rules. It looks at the data, draws conclusions that actually relate to the data - or seem to, at any rate - and has what looks an awful lot like solid support for its premises. But does it really stack up?

The place to begin is to deal with what fine-tuning actually is. In the lexicon of the apologist, it's an indication that somebody had to set the parameters of the universe to specific values in order to allow life - and ultimately humans - to exist. There are deep issues with this, but I'm going to defer them for the knockout blow at the end.

In the lexicon of the physicist, it simply means that certain parameters must fall within a narrow range of values if the model under scrutiny is correct.

In some cases, fine-tuning can be seen as a problem for a model if there's no explanation. We met one of them in Before the Big Bang Part I, where we discussed some problems with the classic big bang. Among them was the flatness problem, an issue with the energy density of the cosmos having to fall within a very narrow range of values in order for the cosmos to be flat (Euclidean) on large scales. There is a kind of resolution offered by Alan Guth's inflationary theory, although there are those in the physics community who see this as something of a fudge, because inflation requires new physics that hasn't yet been observed (it also suffers some fine-tuning issues itself, not least the fine-tuning of the Lambda term in GR, also known as the cosmological constant). That said, there are at least candidate explanations that have observational evidence. Dark energy, for example, the name we give to whatever is driving the accelerating expansion of the cosmos.

Ultimately all the fine-tuning 'problems' in physics are of this nature. They're parameters whose values require an explanation. Of course, what the apologist wants to do is to insert the default explanation that - because it's unexplained itself and isn't falsifiable in any way - doesn't, in fact, explain anything. This is even more of a fudge than inflation. We looked closely at this sort of thinking in Mind the Gap!

Let's look at a few common examples. The following text is from an exchange on Facebook, though a quick google tells me that this is nothing more than copypasta from somewhere else, which raises its own problems. I'll come back to those shortly, but for now, here's the text.

1. If the initial explosion of the big bang had differed in strength by as little as 1 part in 1060, the universe would have either quickly collapsed back on itself, or expanded too rapidly for stars to form. In either case, life would be impossible. [See Davies, 1982, pp. 90-91. (As John Jefferson Davis points out (p. 140), an accuracy of one part in 10^60 can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light years away, and hitting the target.)

The first and most obvious problem here, and what alerted me to the copypasta nature of it (beyond just experience, which tells me that such arguments are almost always copied from somewhere), is that first number. 1 part in 1060 isn't even a small number. Of course, what's actually happened (and it's been corrected in the second citation) is that the 60 should be an exponent, so it should read $1$ part in $10^{60}$, which is one with sixty zeroes after it.

As it turns out, the source of this was almost certainly the Discovery Institute, not least because their citation in an article from 1998 included the same text with the same missing exponent and the same later correction. At the risk of committing the genetic fallacy, any citation whose source is the Duplicity Institute should raise red flags in abundance at the very least, given their penchant for using what the new Drumpf administration has termed 'alternative facts'.

Another issue is that the citation of Paul Davies is from about the same time that inflationary theory was being developed, and well before the DI even erected this argument.

That said, we should, as always, be wary of accepting a claim at face value, even when the source is a reputable physicist, so let's have a look at what Davies actually said and see if the apologetic actually reflects it.
It follows from (4.13) that if p > p_crit then k > 0, the universe is spatially closed, and will eventually contract. The additional gravity of the extra-dense matter will drag the galaxies back on themselves. For p < p_crit, the gravity of the cosmic matter is weaker and the universe ‘escapes’, expanding unchecked in much the same way as a rapidly receding projectile. The geometry of the universe, and its ultimate fate, thus depends on the density of matter or, equivalently, on the total number of particles in the universe, N. We are now able to grasp the full significance of the coincidence (4.12). It states precisely that nature has chosen N to have a value very close to that required to yield a spatially flat universe, with k = 0 and p = p_crit.
Well, this is exactly what the discussion in Before the Big Bang Part I was dealing with, namely the flatness problem. This is concerned with the energy density of the cosmos p and the critical density p_crit to attain a spatially flat (Euclidean) cosmos and, apart from the fact that Davies has used slightly different notation, he's expressing precisely what that post dealt with. In other words, there's a resolution on the table for this, so it's not so much a problem as an open question. Moreover, as Davies goes on to say:

At the Planck time – the earliest epoch at which we can have any confidence in the theory – the ratio was at most an almost infinitesimal $10^{-60}$. If one regards the Planck time as the initial moment when the subsequent cosmic dynamics were determined, it is necessary to suppose that nature chose p to differ from p_crit by no more than one part in $10^{60}$.
In short, he's dealing with an instance of what we were looking at right at the head of this post. The apologist wants it to say that the universe is fine-tuned, and that therefore somebody had to twiddle some dials, but what he's actually saying is that this particular parameter, the energy density of the cosmos, must fall within a narrow range of values if the theory under scrutiny is correct. No measurement has taken place here; it's simply that, if space is flat, the energy density must be very close to the critical value. If the density in the early cosmos were too high, expansion would be reined in by gravity and the cosmos would recollapse; too low, and it would expand too quickly for structure to form. Moreover, as discussed in Scale Invariance and the Cosmological Constant, it was discovered in the '90s, more than a decade after Davies wrote this, that the expansion of the cosmos began to accelerate some four billion years or so ago. What this means is that, wait for it... the expansion rate of the cosmos is a variable! Far from being restricted to a particular value, it can change over time. More importantly for our purposes here, and going back to that definition of fine-tuning employed by physicists, the model upon which Davies' calculations are predicated, the classic big bang, is wrong! Thus, the fine-tuning he cites vanishes in a puff of future research.
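For the mathematically inclined, the relationship Davies is describing can be sketched in modern notation (using $\rho$ where Davies writes p). It drops out of the Friedmann equation:

$$H^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2}$$

where $H$ is the Hubble parameter, $a$ the scale factor and $k$ the curvature. Setting $k = 0$ defines the critical density, and the ratio of actual to critical density is conventionally written $\Omega$:

$$\rho_{crit} = \frac{3H^2}{8\pi G}, \qquad \Omega \equiv \frac{\rho}{\rho_{crit}}$$

Rearranging the Friedmann equation then gives

$$\Omega - 1 = \frac{kc^2}{a^2 H^2}$$

For $\Omega$ to be anywhere near 1 today, it must have been fantastically close to 1 at the Planck time, within about one part in $10^{60}$, which is exactly the 'coincidence' Davies is quantifying. Note that this is a statement about the model, not a measurement of anybody's dial.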

The simple fact is that there is no instance of fine-tuning mentioned by physicists that doesn't fall under this rubric, because this is what fine-tuning means to a physicist.

Let's move on and look at some other instances for completeness before we deliver the fatal blow.
2. Calculations indicate that if the strong nuclear force, the force that binds protons and neutrons together in an atom, had been stronger or weaker by as little as 5%, life would be impossible. (Leslie, 1989, pp. 4, 35; Barrow and Tipler, p. 322.)
This is a nice example, mostly because it furnishes me with an opportunity to talk about some really interesting physics. This is going to get a bit quantum.

In The Certainty of Uncertainty, we looked at some of the implications of Heisenberg's Uncertainty Principle. One of those implications, later demonstrated experimentally, is that, because the value and rate of change of any field form a pair of conjugate variables, they're subject to the same uncertainty relationship as the position and momentum of a particle. The corollary is that field values must fluctuate. This is the now-famous 'zero-point energy', which manifests as virtual particle pairs that arise as a differential in their field, move apart, and then come back together in annihilation to equalise the differential. These have an interesting effect when it comes to the strength and range of forces, so it's apposite to mention them here.

Let's start with the electromagnetic force. Take two charged particles a little way apart, and bring them slowly together. When they're some distance apart (we're talking about extremely short distance scales here, in the Angstrom range), they're somewhat attracted, but as they come closer together, the attraction ramps up dramatically. Why? Well, the process of pair-production acts as an insulator for the electromagnetic force, so that the further you get away, the more insulation there is, but when you get really close, this is reduced to pretty much nothing and it ramps up dramatically. So the electromagnetic force is attenuated by virtual particles.

Now, in the case of the strong and weak nuclear forces, the opposite occurs, and the pair production acts as a conductor, so that when you are further away (still on extremely short distance scales, as these are very short range forces) they are actually more strongly attractive. As you get close enough to penetrate the barrier of pair production, they fall away in strength dramatically.
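For the quantitatively inclined, this screening and anti-screening shows up in the standard one-loop formulae for the 'running' of the couplings with energy scale $Q$ (schematically; here $\mu$ is a reference scale, $n_f$ the number of quark flavours and $\Lambda$ the QCD scale):

$$\alpha(Q^2) = \frac{\alpha(\mu^2)}{1 - \dfrac{\alpha(\mu^2)}{3\pi}\ln\dfrac{Q^2}{\mu^2}}$$

$$\alpha_s(Q^2) = \frac{12\pi}{(33 - 2n_f)\,\ln(Q^2/\Lambda^2)}$$

The first increases with $Q$: the electromagnetic coupling strengthens as you punch through the screen of virtual pairs. The second decreases with $Q$: this is asymptotic freedom, the strong coupling falling away dramatically at short distances, which is exactly the behaviour described above.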

So how is that a problem for this apologetic? Simple; this is an entirely random process. Thus, we're being asked to accept that a completely random process is fine-tuned. Since this process plays a part in the strengths of the forces, we have to accept that the fine-tuning applies there as well, which is patently nonsense.
3. Calculations by Brandon Carter show that if gravity had been stronger or weaker by 1 part in 10 to the 40th power, then life-sustaining stars like the sun could not exist. This would most likely make life impossible. (Davies, 1984, p. 242.)

4. If the neutron were not about 1.001 times the mass of the proton, all protons would have decayed into neutrons or all neutrons would have decayed into protons, and thus life would not be possible. (Leslie, 1989, pp. 39-40.)

5. If the electromagnetic force were slightly stronger or weaker, life would be impossible, for a variety of different reasons. (Leslie, 1988, p. 299.)
I'm going to lump these three together, because the resolution to them is the same.

First, we should note that, in isolation, these claims are nonsense, and I don't need to inconvenience any physicists to demonstrate it.

Mass, according to our best current models, is a function of interaction with the Higgs field. Gravity is the curvature of spacetime in the presence of mass. In other words, there's a deep relationship between mass and gravity, which is why treating these things in isolation is nonsense. If we increased the strength of gravity on its own, stars would tend to be denser, which can cause problems. However, if you simultaneously reduce the strength of interaction with the Higgs field, the result is... no change! In short, these are not independent variables, so treating them as independent is silly.

Secondly, the last of those claims, concerning the electromagnetic force, fails for the reasons identified with pair production above.

I said that the source of the claims was an issue, especially the obvious fact that the apologist who presented this had copied the text from somewhere else. The biggest problem is that he didn't actually understand it. That missing exponent is a big clue. Anybody who truly understands the arguments they're making wouldn't overlook something as critical as an exponent, especially where they're trying to blind you with big numbers. This is exactly the same approach to argumentation we met before in Probably the Worst Argument in the World, wherein big numbers were used to try to show that something was impossible. The only difference here is that the numbers are being used in a slightly different context, but the goal is exactly the same, namely to show that such numbers can only be achieved by magical intervention.

When arguments are copied wholesale from elsewhere, the problem the counter-apologist faces is that the person copying the arguments rarely understands them well enough to recognise a valid objection. Indeed, I went through a period at one time of simply responding 'one-eyed trouser-trout' to such instances, because the apologist hasn't a hope of getting whether this rebuts his argument or not.

In this case, the apologist simply got shrill and insisted that I refute the arguments as presented, and that they must be true because 'physicists accept fine-tuning', entirely overlooking the fact that I'd already refuted them. In short, the problem is that the apologist can erect these arguments and think that they have the effect of making them look clever but, because they can't actually own the arguments, the arguments become self-defeating in the hands of the apologist.

Here's the thing: We have, as yet, no good scientific reasons for supposing that the constants and values we experience in the universe could be any different. We certainly engage in thought experiments in which they are because, often, looking at things from an alternative perspective leads us to ask questions that might not have occurred to us had we just taken them at face value. Rather than asking why water finds a level, we ask what would happen if water didn't find a level, but stacked up. This leads to some really interesting questions about the nature of water that otherwise may not have arisen.

We've also looked at the idea of a 'multiverse' (a term I have some distaste for), because the mathematics physicists use to deal with the evolution of the cosmos seem to point in that direction but, beyond that, they're devices for generating questions. Maybe those values can vary, and give rise to incredibly different kinds of cosmos. It's fruitful from a heuristic perspective to think about such things.

So, are there any other problems? Once again conscious that this is becoming a quite voluminous entry, I'm going to very briefly touch on some glaring issues with any argument that the universe is fine-tuned for intelligent life and, ultimately, humans.

The first station on our whistle-stop tour will be the weak nuclear force. This is one of the four fundamental forces of the universe (we should probably say three, since it's far from clear that gravity is actually a force, but that's an unnecessary complication for now). It's responsible for some forms of radioactive decay and nothing else, as near as we can tell. There was a marvellous study conducted in 2006 by Harnik et al of the high-energy physics group at Cornell University suggesting that this force could be removed entirely from the universe without appreciably affecting the evolution of the cosmos. How does this help? Well, without the weak interaction driving the decay of some isotopes, there would be more stable isotopes capable of forming long complex chains, something restricted to carbon and silicon when the weak force is included. Thus, any designer fine-tuning the universe for intelligent life would not, given a choice, include this force.

Another issue arises when we consider two areas of research that have really found their feet in the last century or so: chaos theory and quantum mechanics. I won't treat chaos theory in any detail here, because it's the topic of a future post, wherein it will be treated comprehensively. However, it's apposite to understand just what chaos is, because it's another one of those terms that gets lobbed around in arguments without being properly understood.

Chaos is, in the simplest terms possible, sensitivity to initial conditions. What this means is that if the initial conditions of a complex system are changed in a tiny way, the long-term evolution of the system can show dramatic differences as a result.

The classic analogy is known as the 'butterfly effect'. In this thought-experiment, we're asked to imagine a butterfly flapping its wings in China and the result being a hurricane in the Caribbean. This is a bit of a simplification and doesn't capture the true essence of chaos. The thing we really need to focus on is that the flap of wings is a change in the initial conditions of the system, and that the long-term evolution of the system is affected by it.
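The sensitivity involved is easy to demonstrate. Here's a minimal sketch in Python using the logistic map, a textbook chaotic system (my own illustration, not from any of the sources discussed): perturb the starting value by one part in a billion and watch the trajectories part company.

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), fully chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    """Iterate the map from x0, recording every value along the way."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3, 60)           # original initial condition
b = trajectory(0.3 + 1e-9, 60)    # the 'flap of the wings'
diffs = [abs(x - y) for x, y in zip(a, b)]

# Early on the two runs are indistinguishable...
print(diffs[5])
# ...but the tiny difference is amplified at every step, and before long
# the two systems bear no resemblance to one another.
print(max(diffs))
```

The two runs agree to many decimal places for the first handful of steps, then diverge completely. That divergence, not the hurricane, is the essence of chaos.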

So, let's wind the universe back to the beginning so that we can run it again. One might think that starting with exactly the same conditions will mean that the outcome will be the same, but then we have to consider quantum effects, which are truly random. As discussed in Has Evolution Been Proven?, evolution is stochastic. The same is true of the evolution of the universe. This means that the future evolution of the system depends upon initial conditions plus one or more random variables. Since QM tells us that there are many, many random events involved in the evolution of the universe, not least quantum fluctuations during inflation being stretched to macroscopic scales, defining the inhomogeneities that eventually give the cosmos the structure we see, the chances of getting Earth are vanishingly small, let alone humans. Thus, fine-tuning of initial conditions with any goal in mind is utter nonsense, and not to be given credence by anybody with more than three or four functioning neurons.

It's also worth noting briefly that our idea of what constitutes the possibility of life is eternally subject to revision. The last several decades have revealed organisms that thrive in conditions we'd previously thought impossible. I won't list all the different environments that extremophiles have been discovered in, I'll simply link to the Wikipedia classification section. Ultimately, all assertions regarding what conditions are necessary for life are extremely (see what I did there?) anthropocentric, and should be treated with some scepticism.

In the end, though, there's one reason above all others that fine-tuning arguments for the existence of god all fail, and it's this: They suggest that god had no choice but to set those values where they are. In other words, god was constrained, and those values were imposed, which about wraps it up for omnipotence. In reality, fine-tuning can serve neither as an argument for nor against god. If god is omnipotent, then whatever values he deems fit will seem to us to be fine tuned, meaning we can take no meaning from the values whatsoever.

So, while this is quite probably the best argument the apologist has available to him, it fails to withstand any sort of rigorous scrutiny.

Here's the late, great, hugely-missed Douglas Adams, from The Salmon of Doubt:
This is rather as if you imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, frantically hanging on to the notion that everything's going to be alright, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
Not to be believed by a thinking person.

Nits, crits and typos always welcome.

### There's This Book...

I love books. They're probably the single greatest love of my life. I've talked before about having several books on the go at any given time, and how I flit between them like a monarch butterfly in a field of thistles. It is, for me, the greatest form of escapism. Those who are familiar with my output may find this odd, not least because I've previously said that I haven't read any fiction (apart from Terry Pratchett) for well over twenty years.

It isn't really odd, though. There's something - dare I say it - spiritual about contemplating the deepest workings of the universe. As the Hitch once put it:
One page, one paragraph, of Hawking is more awe-inspiring, to say nothing of being more instructive, than the whole of Genesis and the whole of Ezekiel.
Hitchens was very fond of pointing out that, contrary to the views of those who thought that they had all the windows on beauty and the numinous in their possession, atheists were not immune to such feelings. Look away from the breathtakingly spectacular images from Hubble, if you feel you can, he enjoined, and say you're still impressed by a burning bush.

Richard Feynman expressed a similar sentiment in discussion with an artist friend of his, who insisted that he could appreciate the beauty of a flower with his artistic sensibility far more deeply than Feynman could with his reductionist view. Feynman, of course, retorted that, with his understanding of the inner workings of the flower, and his ability to appreciate the beauty of the cell and its mechanisms, he couldn't help but have a deeper appreciation; understanding something can only add to the beauty, not detract from it.

Anyhoo, I digress.

Books are fantastic. They're a wonderful way of communicating ideas and, better still, of stimulating discourse. With books, we can delve into the minds of the long-dead, giving them voice and allowing them to speak to the present and the future. They're a tangible link to the past, reminding us of where we've come from, and often illuminating where we're going.

That's not to say, of course, that books are without their pitfalls, and this is the real purpose of today's musing.

Some books can be problematic not because they're read, but because they aren't. An obvious example is the book that first introduced me to evolutionary biology, Dawkins' seminal work, The Selfish Gene. This book, now a mainstay of evolutionary thought, has had many critics, and it's far from perfect, but the vast majority of criticisms I've come across seem to come from people who have read only the title. Many are rooted in the idea that the book is saying that selfishness is a winning evolutionary trait when, in fact, it tells us quite the opposite, advocating what Dawkins terms 'reciprocal altruism'. Indeed, the tenth chapter of the book carries the title You Scratch My Back, I'll Ride On Yours, which should be enough to dispel this view on its own.

The view the book actually presents is quite simple, namely that the thing that's doing all the surviving in 'survival of the fittest'* is, rather than the organism carrying the genes, the genes themselves.

In The Map Is Not The Terrain, we looked at another example from this book, the concept of a meme. A meme, sensu Dawkins, is essentially a 'living' idea. He coined the term as an illustrative device for talking about how genes survive, comparing them to ideas that take on a life of their own by having some characteristic that makes them attractive to people, by which mechanism they gain popular traction. In that earlier post, I talked about how the meme 'meme' has quite literally taken on a life of its own, now popularly used to mean an image conveying some message. I recall a recent interview in which Dawkins was chatting to somebody and the concept of a meme was raised in this popular sense. Dawkins clearly didn't grasp this newer definition, and thought the term was being used in the sense of his original coinage.

These examples should serve to illustrate a deep point regarding some of the issues with cursory appraisals and how misunderstandings can arise, and this is while the author is still alive and in the public eye!

And this isn't the only problem, of course. Those of us who've spent any considerable time in counter-apologetics have encountered a problem known as 'quote-mining'. This is a phenomenon in which a quote, often by a famous scientist or other academic, is taken so horribly out of context as to make it appear that the author is saying exactly the opposite of what was intended.

Possibly the most commonly erected example of this is from Darwin's most famous work, On the Origin of Species. In Irreducible Complexity and Evolution, we touched briefly on the following passage.
To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree.
It really does look an awful lot like Darwin was saying that this was something his theory couldn't account for. This is probably the most regularly quoted passage from all of Darwin's voluminous output, yet the passage that follows it is probably quoted only half as often, if that. In what follows, of course, Darwin knocks the objection down fairly comprehensively, yet it rarely gets quoted except to rebut the erection of the former passage. Here it is:
Reason tells me, that if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case; and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, should not be considered as subversive of the theory.
Here the lie is exposed for what it is, and it is undoubtedly a lie. As previously discussed, the reason for Darwin's erecting the former passage was that he wanted to ensure that he'd addressed every objection to his theory that he could think of, in a process described in detail in Onus Probandi, Assertionism and Peer-Review.

Shakespeare is a source of almost limitless inspiration for many, while inspiring only yawns in others. Among the most famous passages in Shakespeare is Hamlet's soliloquy, quite possibly the most famous dramatic passage in literary history. It's often taken as a pondering on death, as Hamlet's contemplation not just of mortality but also of the prospect of dealing with his problems by simply removing himself from the picture. The text, read at face value, seems to be his internal struggle over whether he should fight the usurpation of his father's crown by his uncle, of which he learned from his father's ghost, or whether he should take a permanent vacation, as it were. However, there are some who read into it a subtext, one in which 'not to be' means 'to pretend to be'. In other words, to be or merely to pretend. This is a fairly common theme in Shakespeare, and we can see it manifest in one of his favoured plot devices, the play within a play, an instance of which appears in this very play.

These interpretational issues are very difficult to do anything about. All writers will encounter this sort of problem at some point. Indeed, I encountered just such a problem in a discussion with one of my excellent friends only a few hours before beginning this disquisition, somebody I've been communicating with for over a decade, and with whom I've had very few - and invariably trivial - disagreements. I'd written something about rights with which he disagreed. In this instance, it turned out to be purely a semantic issue which, once elucidated, evaporated. That this can happen between writers who know each other and each other's intentions fairly intimately based on millions of words of discourse shows just how easy it can be for misunderstanding to arise.

Academics of all stripes, including philosophers, mathematicians and scientists, are keenly aware of this issue. I've written a fair bit in the past about semantics and how important it can be, most notably in the aforementioned The Map is Not the Terrain and Are Babies Atheist? Philosophers especially devote a huge amount of their writings to defining terms. Mathematicians and scientists rely on well-defined conventions precisely to circumvent such problems. Even so, as we've already seen, misunderstandings can arise, especially where terms carefully defined for rigour have less careful, broader treatments in the vernacular.

At least the titular issues can be circumvented. It's very difficult to run away with a misleading impression if the title has no information in it, isn't it? So, we can solve that problem by keeping the title really simple. Let's call it The Teaching, or maybe The Recitation. Better yet, let's go for the simplest title possible, and just call it The Book.

Unfortunately, these titles have been tried, although when we cast them in their original languages, which we'll do in a moment, we'll see that even worse problems can manifest.

What all of the above should be leading to is that, although books are brilliant, and can be incredibly useful for conveying information, they can still pose problems, especially when we have to try to glean some meaning without being able to quiz the author on what he meant.

So, suppose I'm the ruler of the universe, and I want to convey to my creation - specifically to my chosen species, the most important aspect of the entire enterprise, and for which the universe was created - my message, telling them all of the important things they need to know - how I want them to behave, what it's all about, where they can put their penises and what position I wish them to adopt while engaging in this practice, how they're to treat each other - the rules of the game. What, given all of the above, and given that I'm supposed to have perfect, infallible knowledge of the entirety of the universe including all of space and time and the thoughts of every entity within it, is the best plan I can come up with?

I know, I'll use a book: ambiguous, open to interpretation, riddled with vague metaphor and straight-up factual errors that are demonstrably not in accord with experience. Let my infallible knowledge result in a failure to correctly count the legs on an insect, or in the assertion that the genomes of organisms can be changed wholesale by having their parents bump uglies alongside coloured sticks. Let's give it a nice, simple title, like The Holy Teaching (Torah) or The Holy Recitation (Qur'an) or maybe simply The Holy Book (Bible).

Let's overlook all of the issues arising in the foregoing discussion, knowing about all of them. Let's ignore the fact that some people will use these books as weapons to beat, figuratively and literally, others who don't accept it. Let's make it so that no group of believers can agree with any other group about what I was trying to say. Let's ignore the fact that people will literally kill and die over the nonsensical contents of this book, that it will be the cause of millions upon millions of deaths and depredations, denial of basic rights, justification for slavery and subjugation, and treating followers of other iterations of the same book as second-class, or subhuman, all allegedly representing what I, author and inspiration for all of these books, and architect of the universe, purportedly want to see.

Nope, I can't think of a better plan than that. What do you think?

It's interesting to me that, when I ask for evidence that the purportedly omnipotent, omniscient creator of the universe has any basis in reality, I often get a response along the lines of 'well, the Bible says...'

This is the thing: The Bible, the Torah, the Qur'an, these aren't evidence. If you want to present these as evidence for your deity, what you're actually presenting is evidence that the deity you believe in is an incompetent moron. These books are the claim, not the evidence. On what basis should I be tempted to accept the claim of these books at face value, when I know at least some of them to be flat-out wrong?

On none. These books are, quite simply, not to be believed by a thinking person.

*This phrase is horribly misunderstood, largely because, like 'theory', the term 'fitness' has a very specific meaning in evolutionary biology. In the jargon, fitness is a measure of performance against a specific metric, namely 'expected number of offspring'. It doesn't mean that the fastest, biggest, strongest, etc., will survive, only that the organism with the greater fitness will, on average, be better represented in future generations. See Has Evolution Been Proven? for more on this. A vernacular phrasing that more accurately conveys the proper treatment would be 'survival of the sufficiently fit, on average'.

### Mind the Gap!

If it takes five men four hours to dig one hole, how long does it take one apologist to fill it?

This is a post about gaps.

Gaps are really beloved by apologists, and not just gaps in knowledge, but gaps of all kinds. Indeed, there's a famous fallacy known as the 'god of the gaps' fallacy, in which some gap in our knowledge is automatically filled with a deity. Nothing is more likely to get a religious apologist excited than a gap. In reality, it isn't even god they're worshipping in these circumstances, it's the gaps themselves, so perhaps we should rename this fallacy 'god IS the gaps'.

Here, I want to pick apart this fallacy and show some popular instances of its commission. Along the way, we're going to learn some interesting things about gaps that are often not well appreciated among the uninitiated and in apologetics circles.

Properly, the 'god of the gaps' fallacy is a subset of the argumentum ad ignorantiam, or argument from ignorance. It's committed when we identify something that we don't know about and insert our preferred solution in its place. It actually commits several other fallacies by default, including the bare assertion and the false dichotomy (wherein the dichotomy is implied).

Many of the gaps leapt upon with relish are genuine gaps in knowledge, while others are not. Specifically, there's a single type of gap that is treated as a gap in knowledge when it really isn't or, at least, not in the sense that the apologist needs it to be.

I often trawl YouTube for interesting topics to talk about, and a pass through a very old episode of The Atheist Experience brought this particular instance to mind. In it, Martin Wagner was dealing with somebody who asserted that evolution is faith-based. I won't waste time padding out the specifics, not least because I've covered them at some significant length elsewhere, but Martin gave an analogy that was wonderfully elegant and dealt with just this kind of gap: a gap in information. I will, as usual, add the video at the bottom of the page.

Now, one might think that a gap in information constitutes a gap in knowledge, but it needn't be the case, and Martin's analogy highlights this quite nicely.

Picture a jigsaw puzzle. As you get toward the end of the puzzle, it becomes clear that quite a lot of pieces are missing. Nevertheless, once all the pieces you do have are in place, it becomes fairly clear that the puzzle is a picture of the Eiffel Tower. Those missing pieces, while certainly constituting a substantial lack of information, don't actually affect the fact that the overall picture is clear.

In this instance, the caller said that, when he visited Paris during the Millennium celebrations, the tower had a countdown clock on it, and that, because the puzzle might not show this countdown clock, our picture was somehow wrong. This is silly, of course, because while there are things not included in our puzzle, and we may be missing details that would be apparent if those pieces were in place, there's absolutely no uncertainty that what we're looking at is a picture of the Eiffel Tower.

A fair bit of science is like this. In fact, this is the main reason that scientific conclusions are somewhat tentative. To stretch this analogy a bit further, let's reverse the proportions, so that most of the pieces are missing and we have even less information.

Now, our certainty is massively reduced, yet we can still be justified in positing that this is a tower of some description, or a pylon, or some such. We could even infer that it's probably the Eiffel Tower, though our justification would be quite weak on the information available.

Sometimes, we can even think we have all the pieces and that the picture is complete. Just such an instance is Newtonian gravity, which stood unchallenged for more than two centuries. We had what looked like a perfect picture of a tower. It withstood every assault, with predictions validated up the wazoo for all that time.

Then something happened that changed the game. We realised, via a few critical observations - such as a long-known disagreement between Newton's predictions and observations of the precession of Mercury's perihelion - and the work of one man, that there was something not quite right with the picture; that we were missing not just a smattering of individual pieces, but entire sections of the puzzle.

It turned out that Newton was wrong. When Einstein published the general theory of relativity in 1915, it became clear that what Newton thought was the whole puzzle was just a tiny piece of it. The general theory showed us not only the most accurate picture of the Eiffel Tower we'd ever seen, it revealed a vast cityscape dominated by Montmartre. All right, mea culpa. As is my wont, I've stretched this analogy to breaking point (which is not to say that I won't revisit it).

It's worth noting here that, although Newton was wrong, Einstein showed that the more accurate relativistic picture reduces approximately to Newton's picture under certain circumstances - namely, the speeds and distances that Newton had access to. This is why all the world's space agencies still use Newtonian mechanics for their space missions: it's close enough for government work - the errors being so small as to be insignificant - as well as being considerably easier to work with. For example, the Cassini-Huygens mission, after executing four gravitational slingshot manoeuvres (two around Venus, one around Earth and one around Jupiter) and a journey through space of some 3.5 billion kilometres, inserted itself into orbit around Saturn within twenty metres of its intended target. That's a fairly spectacular demonstration of just how piddling the errors are at those speeds and distances.

Anybody who's good at pool and has ever tried to play snooker on a full-size table will have a visceral grasp of how this works. Increasing distance magnifies errors. What's a fairly straightforward shot on a six-foot table can miss the pocket by some margin on a twelve-foot table. The same is true of speed. Once we get past about ten percent of the speed of light, inaccuracies are commensurately magnified.
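To put a rough number on that, here's a quick back-of-the-envelope sketch of my own (not from any mission documentation): the fractional error from ignoring special relativity is roughly the deviation of the Lorentz factor from one.

```python
import math

# Fractional error from ignoring special relativity is roughly
# gamma - 1, where gamma = 1 / sqrt(1 - v^2/c^2).
c = 299_792_458.0  # speed of light, m/s

for label, v in (("probe at ~20 km/s", 20_000.0),
                 ("ten percent of c", 0.1 * c)):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(label, gamma - 1.0)
```

At typical probe speeds the correction is a couple of parts per billion; at a tenth of light speed it's about half a percent, which is why that's roughly where the Newtonian approximation starts to let us down.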

So, moving on from that minor digression, I want to look at another gap, and this one is truly colossal, yet is still subject to the same reasoning as above. It's the 'missing link' or 'no transitional forms' gap (these are not the same thing, but they are closely related, so I'm going to treat them as the same for now).

Back in 2006, a wonderful discovery was announced. This was, of course, the first the wider community had heard of it, and it still isn't fully appreciated what this discovery signified, so let's get the backstory out of the way.

It's often thought that fossils are found in a somewhat arbitrary manner and, in many cases, they can be, but there are certain finds that buck this. For a long time, we had fossil evidence for later lobe-finned fish, in the form of, for example, Panderichthys, which was discovered in 1930, and early tetrapods, such as Ichthyostega, discovered in 1932. It's worth noting here that both these organisms show transitional features between fish and amphibians, a point we'll return to shortly. These two organisms are separated in time by approximately twenty million years, Panderichthys having lived about 385 million years ago, and Ichthyostega having lived about 365 million years ago.

On the basis of these, and taking into account plate movements in plate tectonics (the original habitat of this organism was equatorial streams), a team from the University of Chicago, led by Neil Shubin, predicted that an organism showing transitional features between these two groups would be found in the region of Ellesmere Island in Northern Canada. Several expeditions and six years later, in 2004, the team hit the jackpot, finding a fossil of an organism now known to the world as Tiktaalik roseae, an intermediate between fish and amphibians, living almost exactly in between those other organisms at 375 million years ago.
Image: graphic by dave souza, incorporating images by others - via Wikipedia
This is a wonderful example of something that apologists insist evolutionary theory is not, namely a fully predictive science.

I'd like to add that, when something of this nature is pointed out to the apologist, an extremely common counter is 'now you have two gaps where before you only had one!' This shows quite beautifully how debilitating cognitive inertia can be. It takes a special kind of mental barrier to think this is a valid counter when presented with evidence.

I should note for completeness that the placement of Tiktaalik as a transition between fish and tetrapods is currently a matter of some discussion, based on research conducted in the interim, not least by the wonderful Per Ahlberg, one-time denizen of the talkrational forum and creationism counter-apologist. This uncertainty doesn't materially affect any of the points being made here. What's meant in the relevant fields by 'transitional' doesn't rely on an organism being directly ancestral to later organisms, only that features can be shown to appear in the fossil record with some degree of temporal and taxonomic ordering or, as it's known in the jargon, nested hierarchy.

As we can see, this example is a perfect reflection of what we were discussing earlier, namely that, while this discovery fills in a large gap in informational terms, it doesn't affect the overall picture at all. In other words, we're still looking at the Eiffel Tower.

Tiktaalik isn't the only example we have of this, of course. There are many of these fossils between major groups, not least the stunning Archaeopteryx, which was discovered in Darwin's time, with the first feather being discovered only a year after the publication of On the Origin of Species, and a full skeleton being found a year later still. Indeed, the primary literature is replete with examples of just such findings, and that's setting aside the fact that this idea of a transitional form is, if not treated carefully and with a little understanding, problematic in and of itself.

What all of these fossils do, in terms of the overall picture, is simply add detail. In the grand scheme of things, they have very little impact on evolutionary theory other than to raise interesting questions and new lines of research. As far as the veracity of evolution is concerned, they have no impact whatsoever, not least because evolution has been observed occurring at every level predicted by the theory. It certainly isn't the case that all of evolutionary theory is based only on a few fossils. We have millions of them, and all reflect the broader picture of how evolution occurs.

It's also important to note that, even were we entirely bereft of fossils, the evidence that we have for evolution - notably the molecular evidence - makes it a slam-dunk for the best supported theory in science.

Much of what I've said here also applies to the 'missing link' version of the god-of-the-gaps fallacy, but the missing link in particular carries with it what I'm sure the Hitch would refer to as a 'curious species of solipsism', and reminds us that we have to be careful in our thinking. It's a term that harks back to Lamarckian evolution, the idea that 'lower animals' were recent creations, and that all life was striving to 'more evolved' status. The term itself generally refers to any transitional form, but most often more narrowly to human ancestry, and more specifically to the last common ancestor of chimpanzees and humans.

The fact that we still think in terms of missing links is largely a result of shoddy science journalism since, in light of Darwin's work, it's mostly nonsense. In a previous post, Fuzzy Logic, Classification and the Fallacy of the Excluded Middle, we looked at some of the problems inherent in our need to put things in little boxes, while nature itself is rarely so conveniently digital. Still, this notion is promulgated in the popular science press, so that every new discovery of a species that could plausibly be a distant ancestor is hailed as the long-lost missing link. This entire notion is anthropocentric in the extreme, and we have to be wary of such targeted thinking. Evolution has no targets, no goals, and no purpose. It's simply a process.

Here's the problem, and it's where I'm going to finish. What evolutionary theory teaches us without ambiguity or equivocation is that all species are transitional. Indeed, all organisms are transitional. Evolution is itself nothing more nor less than one big transition. It also teaches us that there are no such things as 'lower animals', no such thing as 'more evolved'. I am exactly as evolved as a modern paramecium, and so are you.

Ultimately, and this is the important thing to take away from this waffle, there is no gap in our knowledge that requires a deity to fill it.

I can't recall where I got this quote from, but it's worth sharing anyway:
Blaise Pascal stated that there's a god-shaped hole in all of us. However, he wasn't noted as an anatomist.
Pascal almost certainly didn't say that, but where's the comedy in that? Either way, if you're of a certain disposition, all holes are god-shaped, which answers the question at the head of the post nicely. The answer is, of course, no time at all, because the apologist has instantly filled it with god.

Thanks for reading. Typos, nits and crits welcome, as always.

### Probably the Worst Argument in the World

It's a well-known fact that 97.84392% of all probabilities cited in arguments claim a precision not justified by the methods employed.

In this outing, I want to spend a bit of time looking at something that comes up an awful lot in apologetics, sometimes made up on the spot, sometimes quote-mined from reputable sources, always used in a manner that betrays a failure to understand the underlying logic. That something is, of course, probability. Here's a recent example.

Looking at that, it's easy to see why this is leapt upon by apologists. All other considerations aside, these are huge numbers, which they think lends weight to their arguments. With such huge numbers, surely the things they're talking about are impossible, right?

Before we get to that, let's look a little bit at what probabilities are, how they're derived, and what's required to generate them.

Probability mathematics is, in the mind of a mathematician or a scientist, a robust framework for quantifying uncertainty and predicting the behaviour of random variables. In the mind of an apologist, however, it takes on a whole new complexion, being little more than a branch of apologetics.

The first thing to note is that all probabilities must fall between zero and one. An event with a zero probability is not going to happen - with some caveats, which we'll circle back to later - and an event with a probability of one is statistically inevitable - again, with some caveats. Probabilities can be expressed as percentages or, more often, ratios, but they always represent some number between zero and one.

Further, and this is the bit that can make it tricky when dealing with large numbers of variables, the probabilities of all possible outcomes of an event must sum to one.
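For the pedants among us, both rules are trivial to check with a fair die. Here's a minimal Python sketch of my own, purely illustrative:

```python
from fractions import Fraction

# A fair die: six mutually exclusive outcomes, each with probability 1/6.
outcomes = [Fraction(1, 6)] * 6

# Every probability falls between zero and one...
assert all(0 <= p <= 1 for p in outcomes)

# ...and the probabilities of all possible outcomes sum to exactly one.
print(sum(outcomes))  # 1
```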

Among the major reasons that apologists like to cite these numbers, aside from the fact that they're impressively large, is that very few of the people they're trying to impress will be in a position to gainsay them, and that's the motivation for this article.

The first thing to do is to check the numbers, and this is no trivial matter. I've said before that I'm no mathematician, but I can at least fumble my way through these calculations to show how probabilities work.

Starting with the first number above, we should begin by calculating the probability of a single royal flush. To do this, we need the probability for each card, which is given by the number of cards that can contribute divided by the number of cards available. There's a simple equation that deals with this:

$P(E)={r\over n}$, where $P(E)$ is the probability of our event, $r$ is the number of outcomes satisfying our conditions and $n$ is the number of possible outcomes. Thus, to get any qualifying card, the probability is the number of qualifying cards divided by the number of all available cards. The first card is one of twenty cards from an available fifty-two, because any card in any suit from ten to ace will satisfy the conditions. The second card is more restricted because, once the first card has been selected, the other three suits are excluded from our satisfying conditions, and our deck is one card short, so the card is one of four from fifty-one. The third is one of three from fifty, the fourth is one of two from forty-nine, and the final card is one from forty-eight.

Now we multiply all those together to get the probability of drawing a royal flush.

$20/52 \times 4/51 \times 3/50 \times 2/49 \times 1/48 = 1.539 \times 10^{-6}$, which equates to odds of 1 in 649,740.
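If you'd like to check that without reaching for the chalk, the arithmetic is easy to reproduce. Here's a quick Python sketch of my own, not part of the original argument:

```python
from fractions import Fraction

# Per-card probabilities for a royal flush, as derived above:
# 20 qualifying cards from 52, then 4/51, 3/50, 2/49 and 1/48.
cards = [(20, 52), (4, 51), (3, 50), (2, 49), (1, 48)]

p_flush = Fraction(1, 1)
for r, n in cards:
    p_flush *= Fraction(r, n)  # P(E) = r/n for each successive card

print(p_flush)         # 1/649740
print(float(p_flush))  # ≈ 1.539e-06
```

Using exact fractions rather than floats means the 1 in 649,740 figure drops out with no rounding error at all.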

Now, to get the probability of five royal flushes in a row, we need to raise those odds to the power of five. A bit more calcium carbonate in the air, and we get:

$649740^5 = 115,797,189,947,256,250,531,862,400,000$. To convert that to a percentage, we divide 1 by that number to get a ratio, then multiply it by 100.

$1/115,797,189,947,256,250,531,862,400,000 \times 100 = 8.6357881435247595067029089939491 \times 10^{-28}\%$.

Remove the exponent by shifting the decimal point 28 places to the left, and we can see that the calculation is indeed correct, and that the percentage is $0.00000000000000000000000000086357881435247595067029089939491 \%$ although, of course, as highlighted in the funny at the top of the page, there's really no need to cite this to so many significant figures. We could round it off pretty much anywhere. The only reason to cite all those digits is to make the number look as humongous as possible, to confound the reader and make countering it more difficult. We could just as easily say that the probability is $8.64 \times 10^{-28}\%$ and leave it at that. It's still a huge number (actually a tiny one), but now it doesn't look quite as impressive on the page, and that's really the point of the whole exercise.
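The exponentiation and the percentage conversion are just as easy to verify. Again, this sketch is mine, for illustration:

```python
from fractions import Fraction

p_single = Fraction(1, 649740)  # one royal flush, from above
p_five = p_single ** 5          # five in a row: raise to the fifth power

# The denominator is the 30-digit monster quoted above.
print(p_five.denominator)

# As a percentage: divide one by that number, then multiply by 100.
pct = float(p_five) * 100
print(pct)  # ≈ 8.64e-28
```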

So, even though there are some numbers in there that look really huge, even to the point of being intimidating to a non-mathematician, the derivation of them is really quite straightforward once you know how.

I'm not going to derive all the numbers in the graphic, not least because I don't want to make this post top-heavy with mathematics, and indeed I haven't checked them. I'd need to know where the numbers were obtained and how they were derived to mount any sort of serious attack on them, so I'm going to proceed as if they're correct because, as we'll see, they're really not going to do what the creationist who erected the argument needs them to do. The reason for this is rooted in how probabilities work and, once the underlying fallacy is exposed, we should be well-equipped to counter any such numbers thrown at us without having any need to be able to check whether they're correct.

As we've already discussed, probabilities must fall between zero and one. Generally speaking, any event with a zero probability can be thought of as impossible, but there are a few caveats to be noted here. For example, if you were to choose a number at random from the real number line, the probability of choosing any individual number is zero. That sounds counter-intuitive, and it is, and it highlights one of the many problems one can run into when treating infinity as a number. The problem arises because the reals are infinite, and any finite number divided by infinity is zero. Because of this, any choice of number from the reals constitutes such a division, meaning that choosing, say, the number one has a probability of zero, yet it's perfectly possible to choose that number at random from the reals.
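A computer can't sample the genuine reals, but the flavour of this survives even with floating-point numbers, which I'll use for a quick illustrative sketch of my own:

```python
import random

# Pick a target in [0, 1), then draw a million random values.
# Every draw produces *some* number, yet the chance of hitting any
# pre-specified value exactly is vanishingly small - each number
# drawn was itself an 'essentially zero probability' event that
# nevertheless happened.
random.seed(0)
target = 0.5
draws = 1_000_000
hits = sum(random.random() == target for _ in range(draws))
print(hits)  # 0
```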

This should be highlighting a serious point and, if there's one thing to take away from this post, it's this: events with a non-zero probability are perfectly possible. This is no mathematical trick; it's simply how the logic of probability works.

I want to move now to a subtle point that's often lost on the apologist, and it's important for us to bear it in mind when we encounter these spurious probability calculations. It's something that my good friend the Blue Flutterby, the inimitable Calilasseia, highlighted in one of the great forum posts of all time, in my opinion. He quite rightly termed it the 'serial trials' fallacy. It's an extremely common feature of such arguments and, quelle surprise, the above commits it beautifully. A part of me would have liked to have simply reproduced Cali's post here as a guest post, but I'll link to it at the bottom while we look at some much simpler examples of the reasoning in action.

Let's take a fair die*. We're going to work out the probability of rolling all six numbers in a row. We'll keep it simple and allow them to come up in random order, so they don't have to come up in sequence, because that would make the numbers considerably larger, which would defeat our purpose here. Note that I'm flying somewhat by the seat of my pants so, if it doesn't work out in a simple manner, you'll never know that I used this example, because I'll have to start again, as simplicity is our benchmark for this exercise. Thankfully, we know we're only dealing with numbers from one to six, so we should be OK.

Let's work out what the probabilities are. As above, we need only work with $P(E)={r\over n}$, except that, in this case, the number of available options doesn't reduce from throw to throw because, unlike the cards, all the numbers are available for every throw. This means that our probability calculation works as follows:

$6/6 \times 5/6 \times 4/6 \times 3/6 \times 2/6 \times 1/6=0.015$, or $1.5 \%$. Given this percentage, if we run a trial of this nature with 67 people, we would expect, on average, at least one of them to get this result on the first attempt!
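Happily, this one can be checked both exactly and empirically. Here's a little Python sketch of my own, pairing the closed-form figure with a Monte Carlo sanity check:

```python
import random
from math import factorial

# Exact probability of all six faces appearing in six rolls, in any
# order: 6!/6^6, the same product as in the text.
p_exact = factorial(6) / 6 ** 6
print(round(p_exact, 4))  # 0.0154

# Monte Carlo sanity check over 100,000 six-roll trials: a trial
# succeeds when the set of faces rolled has all six members.
random.seed(1)
trials = 100_000
hits = sum(
    len({random.randint(1, 6) for _ in range(6)}) == 6
    for _ in range(trials)
)
print(hits / trials)  # hovers around 0.015
```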

This is a simple example of a much broader principle: it doesn't matter how small the probability is, as long as it isn't zero because, if the sample set is large enough, the probability of at least one occurrence tends to one.
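That principle has a tidy closed form: the probability of at least one success in $n$ independent trials is $1-(1-p)^n$. Here's a short sketch of my own showing it crawl towards one, using the royal flush odds from earlier:

```python
# Probability of at least one success in n independent trials:
# 1 - (1 - p)^n, which tends to one as n grows.
p = 1 / 649740  # a single royal flush, from earlier

for n in (10_000, 1_000_000, 100_000_000):
    print(n, 1 - (1 - p) ** n)
```

By a hundred million deals, an event with odds of 1 in 649,740 is all but guaranteed to have turned up at least once.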

Now, looking at those numbers in the graphic, we can begin to see why they're complete nonsense, even if they happen to be correct. I'm going to crib a bit from Cali's work here, and simply point out that, working only with the top 100 m of seawater, we can expect the number of interacting particles to be approximately $3.07 \times 10^{43}$, which starts to make that second number look a bit smaller.

Another faux pas is to consider these probabilities in isolation. A really useful way to look at this is to consider a lottery. The odds of any particular person winning the UK lottery are approximately 1 in 14 million. That's a pretty low probability in the scheme of things. However - and this is the bit about how probabilities work that apologists seem to miss, and it's critical to defeating the above argument - the probability that somebody, somewhere will win the lottery at some point is exactly one. It's inevitable. There is literally no chance that the lottery will continue to go unclaimed. The way the UK lottery is set up, if there are four consecutive rollovers, the fourth jackpot is divided between those matching five numbers. However, even if this were not the case, the jackpot would still eventually be won with a probability of precisely one.

There's one more thing we need to consider, and that's the idea of 'chance', as erected in the graphic. This is a canard that comes up with a regularity almost comparable to those caesium clocks we discussed in several past posts. Science doesn't posit 'chance'; it posits well-defined natural mechanisms that can be quantified and predicted. In Has Evolution Been Proven? we looked at the distinction between a random system and one that is stochastic. A stochastic system is one whose future states depend on initial conditions plus one or more random variables, where random means 'statistically independent'. We looked at a system involving coins and variation, and we saw there that evolution, properly described, is a stochastic system. Here, we're talking about exactly the same kind of system. We're not relying on a functional sequence of amino acids arising from scratch via chance, but via interaction upon interaction, with each new reaction being dependent on the compounds already in existence plus one or more random variables. In short, each successive stage is built upon previous stages, and each new state of the system shortens those odds considerably.

Finally, there's that last number, which is most definitely extracted directly from somebody's rectal sphincter. There is no way to derive a probability calculation for such an event, for the simple reason that a designer is a variable in the calculations, and we have no probabilities for a designer to work from. For that, we'd need some designers. Thus, this isn't a valid variable.

ETA: One thing I'd meant to include and overlooked during writing is a peculiarity of how probabilities work that isn't immediately obvious. We've seen that, given enough time and/or a sufficiently large sample set, we can reasonably expect any event with a non-zero probability to manifest. What's not so clear is that even an event so improbable that a vast swathe of time and/or a huge number of interacting agents would be required for it to happen with any reasonable degree of expectation can still happen immediately, with a small number of interacting agents. We've looked in previous posts at, for example, radioisotope decay, and learned that this is a truly random process, with decay for a single atom happening at any time from straight away until the end of the universe. This same principle applies to all events that have non-zero probability. If the probability of an event is 1 per 1,000 years, what this means is not that it will take 1,000 years to happen, but that it's statistically unlikely to happen more often than that, and even that probability has to be averaged out. Something with such a probability could happen twice in rapid succession and then not occur again for 2,500 years, which yields exactly the same average rate.
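This clustering is easy to see by simulating a memoryless process. The sketch below is mine and assumes exponentially distributed waiting times, which is the standard model for this sort of random, rate-based event:

```python
import random

# A memoryless process averaging one event per 1,000 years: waiting
# times between events are exponentially distributed, so events can
# arrive back-to-back or leave multi-millennium droughts while the
# long-run average stays at one per millennium.
random.seed(42)
rate = 1 / 1000  # expected events per year
waits = [random.expovariate(rate) for _ in range(10_000)]

print(sum(waits) / len(waits))  # close to 1,000 years, as expected
print(min(waits))               # some gaps are nearly instantaneous...
print(max(waits))               # ...while others span many millennia
```

Same long-run rate, wildly different gaps, which is exactly the point: a '1 per 1,000 years' event owes you nothing about when it actually turns up.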

In summary, if you encounter assertions containing probability calculations like the one above, step on them, because they're almost certainly bollocks.