### Paradox! A Game For All the Family!

This sentence is a lie.

That might seem a strange way to open a post, but I want to have a little bit of fun and dedicate some space to a logical concept that I know causes a fair bit of confusion, sometimes even to those who've been formally educated in philosophy: Paradox. There were some things I wanted to cover in my last post, but I was keenly aware that it was a lengthy post dealing with some tricky stuff to internalise, and treating this subject will allow me to tie up a few loose ends from that and earlier posts.

At its most basic, a paradox is the obtaining of two or more mutually exclusive circumstances. The opening line of the post is a good example. It says it's a lie, so it must be the truth, in which case it's a lie, and around and around and aro... You see where it's going; it's a contradiction. Most of us accept that contradictions are a good indicator that something's gone wrong.

I want to look at some apparent paradoxes, some of which are still being pondered today, despite their solutions having been found centuries ago, and some of which have simply found their way into the public consciousness.

I'm going to begin by laying down a challenge:

As far as I'm aware, there are no genuine paradoxes in science, and there's a good reason for that. There may be unanswered questions, but no genuine paradoxes. Indeed, in an earlier post on logic and how it's used in the sciences, we discussed the law of non-contradiction, which tells us that no statement can be simultaneously true and false. This tells us that the statement that opened the post is nonsense.

There are many things that might seem paradoxical to a mind unaccustomed to doggedly seeking resolutions to unanswered questions, as opposed to accepting the first potential explanation that comes along, but I'm fairly confident that no genuine paradoxes inhabit the same universe we do. If you think you've identified a genuine paradox in science, I want to hear from you. If you really have, there are some extremely serious peeps in Stockholm who will throw you a party and give you a nice, shiny medal, with your name engraved beneath a relief of the man who invented dynamite.

Time travel. That would be fantastic, wouldn't it? It does have some problems, though, the clearest of which involves what would happen if we were to travel back in time and do something that changed the future. This is most succinctly expressed in the first of our family-related paradoxes, known as the grandfather paradox.

What if I were to travel back in time to a time before my father was conceived and accidentally kill my grandfather? Well, I wouldn't be born, for a start, which would mean that I couldn't exist to go back in time, which would mean that I couldn't kill my grandfather, which would mean that I would be born, and so on. It's a thorny problem, to say the least, and a naïve appraisal of time travel would suggest that it's impossible, but it isn't without proposed resolutions.

One proposed solution comes to us from an interpretation of quantum theory, Everett's 'many worlds' interpretation (MWI). In the last post, Give Us A Wave, we talked about the collapse of the wavefunction, which is how we treat the information that can be extracted from a system of interest. We discussed wave-particle duality, and how observation can impact what we observe. There are several interpretations of that result, and Everett's is one of them. In Everett's formulation, we inhabit a multiverse: every possible path a photon can take from source to detector when unobserved is manifest in a separate universe and, indeed, every possible outcome of every event is represented in some universe. The details are largely unimportant, but this opens up a possible resolution to the grandfather paradox by the simple expedient of entering one of the other universes when I travel back in time. In the universe in which I originated, my grandfather survives my bungle and I can still be born to go back and kill him, because only the grandfather in the universe I entered is killed.

In reality, this is highly speculative, but MWI is quite well-regarded among physicists. It potentially resolves quite a few issues in physics and while, on the face of it, it seems horribly unparsimonious, it's actually quite elegant.

Our next apparent paradox, Russell's paradox, arises from set theory. Set theory was first formulated by Georg Cantor in 1874 as part of his study of infinity. Cantor's formulation was informal, which is to say that it was constructed in natural language, as opposed to formal logic. One potential issue with this is that many terms in natural language ('and', 'or', 'if...then', etc.) are not rigorously defined. Such a set theory is described as 'naïve'.

Ultimately, the major problem with this kind of set theory is that it's rooted in an assumption that items can be freely collected into sets without restriction based on some qualifying property. This is where the wheels start to come off. Discovered independently by Ernst Zermelo and Bertrand Russell, such an assumption leads to the ability to construct sets that cannot exist, such as the set R of all sets that do not contain themselves as a member. If R is not a member of R, then it must be a member of R. Contradiction!
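The contradiction even bites in code. Here's a purely illustrative sketch (nothing here is formal set theory; the names are mine): model a 'set' as a membership predicate, then define Russell's set R as the predicate that holds for exactly those 'sets' that don't contain themselves.

```python
# A computational analogue of Russell's paradox (an illustrative sketch,
# not formal set theory). Model a "set" as a predicate: a function that
# answers "is this thing a member of me?"

def is_even_number_set(x):
    """The 'set' of even integers: a well-behaved membership test."""
    return isinstance(x, int) and x % 2 == 0

# Russell's set R: the set of all sets that do NOT contain themselves.
def R(s):
    return not s(s)

# Asking whether R is a member of itself demands R(R) = not R(R),
# which demands R(R) again, and so on forever.
try:
    R(R)
except RecursionError:
    print("R(R) never settles on True or False - contradiction!")
```

Evaluating `R(R)` never bottoms out: the recursion is the computational echo of the paradox.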

Russell himself proposed a resolution to this problem, in his formulation of type theory. Generally, mathematicians use the axiomatic set theory of Zermelo and Fraenkel with the axiom of choice (ZFC).

Naïve set theory is still taught today, because it's useful in dealing with sets, which are incredibly important in mathematics, not least because the language of set theory can be employed in the definition of all mathematical objects.

Another apparent paradox that I still see cited a fair bit is this one:

Zeno's paradox is actually one of a collection of paradoxes attributed to Zeno of Elea in the 5th century BCE. The best known of them is the paradox of Achilles and the tortoise.

Achilles is in a race with a tortoise. Achilles, being sure of himself, gives the tortoise a head start. When Achilles reaches the point where the tortoise started, the tortoise has moved further on. When Achilles reaches that point, the tortoise has moved on again. Zeno asserts that Achilles can never overtake the tortoise, because each time Achilles gets to where the tortoise was before, it isn't there any more.

Of course, the resolution to this is incredibly simple and straightforward. What's actually happening here is that, despite time ostensibly featuring throughout, it's quietly been excluded, in the form of speed: the argument ignores the fact that Achilles moves faster than the tortoise, so each successive catch-up step takes less time than the last. Once we work out how much faster Achilles can run than the tortoise, we can trivially calculate how far the tortoise will get before he's caught. If the tortoise starts 100 metres ahead and can move at 1 metre per second, while Achilles can run at 10 metres per second, he'll catch the tortoise after a little over 11 seconds. No paradox.
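We can check this with a quick sketch, summing Zeno's infinite sequence of catch-up steps numerically and comparing with the direct algebraic answer (the figures are the ones from the paragraph above):

```python
# Zeno's catch-up steps as a geometric series.
head_start = 100.0   # metres
v_tortoise = 1.0     # metres per second
v_achilles = 10.0    # metres per second

# Each "Zeno step": run to where the tortoise was, note how far it moved.
gap = head_start
total_time = 0.0
for _ in range(50):                    # 50 steps is plenty for convergence
    step_time = gap / v_achilles
    total_time += step_time
    gap = v_tortoise * step_time       # the new, smaller gap

# The infinite sum converges to the direct algebraic answer:
direct = head_start / (v_achilles - v_tortoise)   # 100/9 seconds
print(total_time, direct)   # both are approximately 11.11 seconds
```

The 'infinitely many steps' turn out to take a finite total time, which is the whole resolution in one line of algebra.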

Right, now we've got some easy ones out of the way, let's try a couple of harder ones. Since no post is complete without some mention of Einstein, let's get relative.

The twin paradox is probably the most famous paradox in physics, although it isn't actually a paradox. In a recent article, The Idiot's Guide to Relativity, we dealt with observers in different inertial frames, Tami and Joe, observing the same event and how, even though their interpretations of the event differ, they both agree on the distance in spacetime, and both are correct. This leads to some interesting consequences, one of which is the following apparent paradox. This is what special relativity actually predicts:

Tami and Joe start out on Earth together, each carrying a synchronised caesium clock. Tami likes to travel, but Joe prefers to stay at home. One day, Tami decides to hitch a ride on a passing spaceship (the reason we're hitching a lift on a passing ship is to avoid all that tedious mucking about with getting up to speed, which cocks the sums up and makes it harder). Thankfully, the ship isn't captained by a Vogon, so she's made welcome aboard, and it's travelling at 0.6c. The ship continues on its journey toward a star some 6 light-years away. Or is it? Because of Lorentz contraction in the direction of travel, to Tami, once aboard, it's actually only 4.8 light-years away, thus Tami covers the distance to the star in 8 years. To Joe, this will look like it took 10 years, although by the time Joe can see that she's reached the star, his clock will actually read 16 years, because it will take the photons 6 years to make it back to him.

When Tami gets to the star, she's lucky enough to spot a spaceship coming back the other way toward Earth at 0.6 c. She hitches a lift and heads home. Tami arrives back at Earth, with her clock reading 16 years. She brings it back together with Joe's clock, which reads 20 years.
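For anyone who wants to check the arithmetic, here's a sketch using the standard Lorentz factor, with the figures from the story (speeds as fractions of c, distances in light-years; the variable names are mine):

```python
import math

v = 0.6              # ship speed as a fraction of c
distance = 6.0       # Earth-to-star distance in Joe's frame, light-years

gamma = 1.0 / math.sqrt(1.0 - v ** 2)       # Lorentz factor: 1.25

tami_distance = distance / gamma            # 4.8 light-years, contracted
tami_one_way = tami_distance / v            # 8 years on Tami's clock

joe_one_way = distance / v                  # 10 years on Joe's clock
joe_sees_arrival = joe_one_way + distance   # 16 years: add the light delay

print(2 * tami_one_way, 2 * joe_one_way)    # round trip: ~16 vs ~20 years
```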

So what gives?

I've seen a lot of attempts to explain this over the years and make it more intuitive, and almost all of them made perfect sense to me but, on seeing some of the questions in response to said explanations, it's clear that it hasn't been grokked. It's so contrary to our middle-world intuitions that we have trouble getting beyond incredulity to absorb the explanation. Indeed, I myself started to write this post, confident that I'd finish it in one sitting. When I got to this bit, I still hadn't decided on an approach to explaining this, so I busied myself padding out some of what went before, laying the foundations for what's to come, writing two other rambles on unrelated topics, and generally procrastinating while I tried to resolve this conundrum.

I reasoned that nobody ever seems to encounter problems of intuition with the explanation of how observers in relative motion observe events when we look at the examples of the racetrack and the light-clock on the train, so let's start there and work our way up.

In the previous post, linked at the top of this section, we talked about the racing cars on the track so that we could get an intuition for how motion in one dimension robs you of motion through all other dimensions, and how the fastest anything can travel as a result is s, the rate of motion through spacetime, which corresponds to c, the speed of light, through space. You are moving through all dimensions at this rate, all of the time. Because of this relationship, light always travels at the same speed from the perspective of any given observer, as extracted from Maxwell's equations.
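That claim is easy to check numerically. A minimal sketch, in units where c = 1: whatever your speed through space, the time part and space part of your rate of motion through spacetime always combine to give exactly c.

```python
import math

# Everything moves through spacetime at the same rate: in units where
# c = 1, the time part (gamma) and the space part (gamma * v) of the
# four-velocity always satisfy gamma^2 - (gamma * v)^2 = 1.
c = 1.0
for v in [0.0, 0.3, 0.6, 0.9, 0.999]:
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    through_time = gamma * c           # rate of motion through time
    through_space = gamma * v          # rate of motion through space
    total = math.sqrt(through_time ** 2 - through_space ** 2)
    print(v, round(total, 9))          # the total is always c
```

The faster you move through space, the larger both components get, but their (Minkowski) combination never budges from c.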

We also talked about the light-clock on the train, and this is where we're going to go next. We saw that Tami was sitting on a train with her light clock. There's nothing special about this clock. A clock is simply some mechanism that cycles with regularity. Back in the day, we used a sundial, but it does exactly the same thing and, the theory says, if you were to conduct this experiment using a solar system as a clock, the result would be exactly the same, although you'd best be careful you don't burn yourself when you wind it up.

The reason we use a light clock is that, because the mechanism is nicely linear in both time and space, it's a good illustrator of what the theory says. It's just a single photon bouncing between two mirrors.

So, as we observed in the earlier post, Tami sees her photon bouncing straight up and down between the mirrors. Nothing special seems to be happening, but then she passes the station, where Joe's standing on the platform. Joe still measures the photon's speed to be c, but he clearly sees it travel further than Tami observes it to travel, tracing a longer, diagonal path as the train carries the clock along. His measurement of the distance the photon travels will be different from Tami's and, because he measures the same speed of light, Tami's clock will take a little over one second, as measured by Joe, to complete the cycle between the mirrors. In other words, where Tami measures one second, Joe will measure a smidgeon more.

What happens if we swap them around? Let's give Joe the light-clock, standing on the platform. He, clearly, will see his photon bounce straight up and down, although he might spend months on Twitter insisting that he didn't.

When Tami comes past on the train, what does she see? Of course, she sees the light travel further, because from her perspective it's Joe's clock that's in motion. In short, just as above in the opposite case, Tami will observe Joe's clock complete a cycle in a little over one second while, from Joe's frame, it took exactly a second.

So which one's correct? Well, in a stunning break with tradition, they both are, for only the second time in recorded memory! What's actually happening is that, from the perspective of each, time is running slower for the other. Because each observes the photon in the light-clock of the other to travel further, and because light must always travel at the same speed regardless of the motion of source or observer, Tami measures Joe's clock to take slightly longer to tick, and Joe measures the same of Tami's.
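The 'smidgeon more' falls straight out of Pythagoras. A minimal sketch, assuming units where c = 1 and the mirrors sit one light-second apart, so a tick takes exactly one second in the clock's own frame (the function name is mine):

```python
import math

# The light-clock as Pythagoras. In the clock's own frame the photon
# travels straight up: distance 1, time 1. Seen from the platform, the
# clock moves at v, so in time t the photon traces the hypotenuse:
#   (c * t)^2 = 1^2 + (v * t)^2   =>   t = 1 / sqrt(1 - v^2)

def tick_seen_from_platform(v):
    """Seconds the platform observer measures per tick, clock speed v."""
    return 1.0 / math.sqrt(1.0 - v ** 2)

print(tick_seen_from_platform(0.6))   # ~1.25 seconds rather than 1
```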

We talked about how the principle of relativity states that every observer in an inertial frame has equal claim to being at rest. Being in an inertial frame simply means that no accelerations are being experienced, where 'acceleration' means any change in velocity*. Just as Joe can say he is standing still on the platform while the train comes past, Tami can say that she's sitting still on the train while the platform comes past with Joe on it.

In other words, in the jargon we've already encountered elsewhere, their experiences have translational symmetry. A naïve appraisal might lead one to the conclusion that, in the case of the spaceship, Tami and Joe should be the same age when they come back together, but they aren't, so the symmetry must have been broken somewhere. So where does it happen?

The answer to this is the key to the whole paradox, and it's when one of them leaves their inertial frame or, to be explicit, when one of them experiences an acceleration. Recall that acceleration is any change in velocity, where velocity is a vector quantity, meaning that it has both magnitude and direction, and that an acceleration means that the observer experiences a force. For the outbound journey, once Tami is in motion on the ship at 0.6c, each of them has equal claim to being at rest, just as in the train example above. When Tami turns around, she changes from an inertial frame to an accelerated frame, and this is where the symmetry is broken. Joe sees Tami reach the star while his clock measures 16 years and then, 4 years later, she arrives home, having travelled what, to Tami, measures 4.8 light-years and takes 8 years to traverse. Thus, Tami has been gone for what she measures to be 16 years, while Joe has measured 20.

The reason this looks like a paradox is that we, with our middle-world intuitions, still think of time and space as being separate entities, and immutable. What relativity tells us is that this is a mistake, and it gives us a framework that treats them as a single entity, spacetime. With this framework, and with only the postulate that light must be measured at the same speed for all observers regardless of motion of light-source or observer, we arrive at the seemingly absurd conclusion that the travelling twin will be younger. Of course, when you look at the details, particularly the equation dealing with distances and speeds in spacetime, you discover that both Joe and Tami, although they differ on the space and time aspects of Tami's journey individually, will agree on the distance Tami travelled through spacetime. That is, they will each agree that their respective solutions for the journey will satisfy the following equation and give the same distance in spacetime:

$s^2=(ct)^2-x^2$

The same is also true of Joe's journey in spacetime.
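We can verify that agreement directly from the equation above, using Tami's outbound leg as measured by each of them (years and light-years, so c = 1; the function name is mine):

```python
# Both observers agree on the spacetime interval s^2 = (ct)^2 - x^2.

def interval_squared(t, x, c=1.0):
    """Squared spacetime interval for a journey of time t, distance x."""
    return (c * t) ** 2 - x ** 2

joe = interval_squared(t=10.0, x=6.0)   # Joe: 10 years, 6 light-years
tami = interval_squared(t=8.0, x=0.0)   # Tami: 8 years, star comes to her

print(joe, tami)   # 64.0 64.0: the same spacetime distance
```

They disagree about time and space separately, but the combination is invariant, which is the sense in which both are correct.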

For the remainder of this post, I want to focus on one of the most famous battles in the history of physics, that between Einstein and Niels Bohr.

Einstein hated quantum mechanics. Possibly his most famous quotation was about quantum theory.
"Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the "old one." I, at any rate, am convinced that He does not throw dice."
What he was expressing here has been much misunderstood over the years. His distaste arose from the fact that he expected a good theory to give definite predictions about the universe while, of course, quantum mechanics can only talk about probabilities. He was saying that the laws of the universe don't gamble.

Despite the fact that some of his own work underpinned quantum mechanics, such as his Nobel-winning work on the photoelectric effect, along with his work on Brownian motion, he was deeply unsatisfied with a theory that had at its core something so undermining to epistemology as Heisenberg's uncertainty principle: the idea that the more accurately we can determine one of a pair of 'conjugate variables', the less accurately we can determine the other. Indeed, so objectionable did he find it that he dedicated a fair portion of his later years to attempting to debunk it. Somewhat paradoxically (pun intended), Einstein later became great friends with Kurt Gödel, famous for his incompleteness theorems, which were at least as undermining to epistemology, though in a slightly different manner.

At the heart of Einstein's objections to quantum theory were two principles, both of which need a little bit of unpacking. The first of these is 'locality'.

Locality is a fairly straightforward idea, namely that an object can only be influenced by things in its immediate surroundings. All classical physics obeys this principle, including special relativity. Indeed, special relativity constrains the principle of locality by limiting the speed at which any influence can travel to c, the speed of light. To cast this in terms of the prior post on relativity, an object can only be influenced by things that fall within its past light-cone, or between the 45° lines below the x axis in the Minkowski diagram. The principle comes from classical field theories, such as Maxwell's theory of electromagnetism, in which the influence is mediated by the electromagnetic field, or what we now understand to be the exchange of photons.
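As a toy illustration (mine, not anything from the Minkowski diagram in the earlier post), here's the light-cone condition as a function, in units where c = 1: an event at time t and position x can influence an observer at the origin only if it sits inside the past light-cone.

```python
# Locality as a light-cone test: a sketch in units where c = 1. An event
# at time t and position x (relative to here and now at the origin) can
# influence us only if a signal at or below light speed could reach us,
# i.e. it lies between the 45-degree lines below the x axis.

def in_past_light_cone(t, x, c=1.0):
    """True if the event (t, x) can causally influence the origin."""
    return t < 0 and abs(x) <= c * abs(t)

print(in_past_light_cone(-2.0, 1.0))   # True: reachable at half c
print(in_past_light_cone(-1.0, 3.0))   # False: would need 3x light speed
print(in_past_light_cone(2.0, 0.0))    # False: the future can't reach back
```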

The other principle we need to look at is realism. Realism in physics is closely related to philosophical realism. It's simply the idea that parameters have well-defined values regardless of whether they're being observed. Einstein and his collaborators, Boris Podolsky and Nathan Rosen, asserted that the limits of measurement represented by Heisenberg's uncertainty principle must be breachable, and that the wavefunction therefore couldn't provide a complete description of physical reality, meaning that the Copenhagen interpretation of quantum theory was not satisfactory.

There's an interesting consequence of quantum theory that we haven't yet discussed in any previous posts, known as quantum entanglement. This is a situation in which multiple quantum entities interact or are generated in such a way as to function as a single system, so that their properties are correlated (actually anti-correlated, but that's a complication we don't need for our purposes here). An obvious example of entangled entities is virtual particle pairs, as discussed in The Certainty of Uncertainty, in which a particle-antiparticle pair arise via the uncertainty principle with energy borrowed from spacetime and then annihilate.

In any situation involving entangled entities, there arise some interesting consequences. Any particle is defined by three key properties: mass, charge and spin. In an entangled pair, all of these are correlated. If we measure, for example, the spin of one of a pair of entangled particles, we immediately know that measurement for its entangled partner. This is true even if the particles are on opposite sides of the universe. As we've seen, this seems to create a problem because, in order for one of them to respond to the outcome of a measurement on the opposite side of the universe, information would seemingly have to be transmitted at greater than c, which would violate the speed limit defined by special relativity.

Einstein, Podolsky and Rosen came up with a cunning thought experiment that they thought highlighted a flaw in this, and it brings us back to those pairs of conjugate variables discussed in the above linked article.

They reasoned that, if one could measure one of a pair of conjugate variables, say spin about a given axis, on one of the pair, and then measure the other variable on its entangled partner, they should be able to extract more information about both particles than the uncertainty principle allows and thus measure both quantities with arbitrary precision, violating a central principle of quantum mechanics. This is the famous 'EPR paradox' (the original paper was framed in terms of position and momentum; the spin version, due to David Bohm, is the one that matters for what follows).

From this, they concluded that quantum theory was incomplete and should be extended with local hidden variables. In short, they wanted to retain both locality and realism.

Enter Northern Irish physicist John Bell.

There's a straightforward logical principle regarding inequalities among counts of binary variables. In particular, it tells us that in any set whose members are classified by binary properties, certain inequalities must always hold.

To make this explicit, let's select a set. Take a random selection of Tweeps. I assert the following about this selection:

The number of theists who do not accept evolution plus the number of people who do accept evolution and are not male is greater than or equal to the number of theists who are not male. It seems like it could be a bold assertion on the face of it, but the application of a bit of logic shows that it cannot be otherwise.

Let's label these binary variables X, Y and Z, where X = theist, Y = accepts evolution and Z = male.

$N(X, ¬Y) + N(Y, ¬Z)\geq N(X, ¬Z)$

If we take the first grouping, it tells you nothing about gender, which means that it is, in and of itself, the sum of two groupings: the number of theists who do not accept evolution and are male, and the number of theists who do not accept evolution and are not male. You can do this for each grouping, which leaves us with the following groups on the left of the inequality (numbered for convenience):

$N1(X, ¬Y, Z) + N2(X, ¬Y, ¬Z)$

$N3(X, Y, ¬Z) + N4(¬X, Y, ¬Z)$

And the following on the right:

$N5(X, Y, ¬Z) + N6(X, ¬Y, ¬Z)$

Simply by noting that N3 and N5 subtract to cancel each other, and that N2 and N6 do the same, we're left with the following conclusion.

$N1(X, ¬Y, Z) + N4(¬X, Y, ¬Z) \geq 0$

In other words, it's telling us that the number of members in a set based on a set of binary properties X, Y and Z cannot be a negative number. This is an obvious tautology, which means that the inequality statement:

$N(X, ¬Y) + N(Y, ¬Z)\geq N(X, ¬Z)$

Is also a tautology. This is simple logic applied to binary properties.
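Since everything here is finite counting, we can even let a computer exhaust the cases. This sketch brute-forces every possible population over the eight combinations of the three binary properties (capping each count at 3 just to keep the search small) and confirms the inequality never fails:

```python
from itertools import product

# Brute-force check of N(X, not-Y) + N(Y, not-Z) >= N(X, not-Z) over
# every population of people with three binary properties. Each of the
# 8 possible "types" of person gets a count between 0 and 3.
types = list(product([True, False], repeat=3))

def count(population, pred):
    """Number of people in the population satisfying the predicate."""
    return sum(n for person, n in population if pred(*person))

for counts in product(range(4), repeat=8):
    pop = list(zip(types, counts))
    lhs = (count(pop, lambda x, y, z: x and not y)
           + count(pop, lambda x, y, z: y and not z))
    rhs = count(pop, lambda x, y, z: x and not z)
    assert lhs >= rhs   # the inequality is never violated

print("inequality holds for all", 4 ** 8, "populations tested")
```

As long as every member has definite values for all three properties, the tautology holds, which is precisely the assumption quantum mechanics is about to break.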

Now we move to the quantum world. For our grouping here, we're going to use the binary properties regarding angular momentum about a particular axis. These are measurable properties. We can see that the angular momentum about a given axis will always be clockwise or anti-clockwise (which we'll denote '¬').

Let's label our axes X, Y, and Z. We're looking now at the angular momentum of, say, an electron about the X axis, the Y axis and the Z axis. From the above, we should be able to say that our reasoning above applies to the relationships between groupings if EPR is correct and we're dealing with hidden variables.

This is the troubling bit: When we measure the spin of an electron about these axes in the lab, they don't satisfy this inequality, and what we actually see is this:

$N(X, ¬Y) + N(Y, ¬Z) < N(X, ¬Z)$

Quantum mechanics violates Bell's inequality and tells us, in particular, that the number of electrons with spin clockwise about X and anti-clockwise about Y, plus the number of electrons with spin clockwise about Y and anti-clockwise about Z, is less than the number of electrons with spin clockwise about X and anti-clockwise about Z. What Bell proved with this is that no theory that is both local and realistic can reproduce all the predictions of quantum theory, predictions that are observationally verified.
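To see where the quantum numbers come from, here's a sketch of Wigner's version of the argument, using the standard quantum prediction for entangled spin-1/2 (singlet) pairs: the probability of finding 'up' along one axis on the first particle and 'up' along another axis on its partner depends only on the angle between the axes. With three coplanar axes at 0°, 45° and 90°, the classical inequality fails (the axis choices and function name here are mine):

```python
import math

# Quantum prediction for entangled spin-1/2 (singlet) pairs: the
# probability of 'up' along axis a on one particle and 'up' along
# axis b on its partner is (1/2) * sin^2(theta_ab / 2).

def p_both_up(theta_ab):
    return 0.5 * math.sin(theta_ab / 2) ** 2

a, b, c = 0.0, math.pi / 4, math.pi / 2   # axes at 0, 45 and 90 degrees

lhs = p_both_up(b - a) + p_both_up(c - b)   # about 0.146
rhs = p_both_up(c - a)                      # exactly 0.25

print(lhs < rhs)   # True: quantum mechanics violates the inequality
```

No assignment of definite, pre-existing spins to each particle can reproduce these proportions, which is Bell's point.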

Coming back to entangled particles, for which the properties are correlated, because each of the particles is a portion of a system, as opposed to being separate, this inequality is again violated by experimental measurement.

Sorry Albert, but it looks like we're stuck with 'spooky action at a distance'.

There have been examples over the years that have cast some doubt on some of the results here, by highlighting loopholes. The last of these seems to have been closed last year with an exciting study by Hensen et al [1], which I'll leave to the interested reader.

*Velocity is a vector quantity, which means that it has information concerning magnitude and direction (e.g. 40 kph in a North-Easterly direction). This is distinct from a scalar quantity, which only contains magnitude information (e.g. 40 kph). Thus, turning from 40 kph in a North-Easterly direction to 40 kph due North is an acceleration.

And this brilliant lecture by Allan Adams of MIT. I recommend watching the full series. Possibly the best physics lecturer I've seen.

### But No Simpler

What is complexity? What does it mean for something to be complex?

This term is thrown around a fair bit in apologetics, but it isn't well understood by many who employ it. It's often contrasted with simplicity, which is a mistake, because complexity and simplicity are not on the same spectrum. In this post, I want to pick apart what it means for something to be complex and to deal with some of the apologetic that's erected concerning complexity, and the underlying fallacies committed in all such apologetic.

As is often the case, much of the wibble on this topic arises from not fully grasping what the term means so, as always, let's unpack it.

There are quite a few ways that the term 'complex' is used and, while they're not directly equivalent, they are closely related. For example, we might talk about a complex of buildings, or a psychological complex.

At its most basic, the term simply means 'woven together', from the Latin com- (together) and plectere (to weave, braid). Essentially, something is complex if it comprises two or more parts and displays behaviour that emerges from their combination. The term can also reasonably be applied to any system that's difficult to predict and, again, this is related, because unpredictable systems are almost universally made up of multiple parts.

Let's look at an example. I've chosen this example specifically because it's an incredibly simple system exhibiting complex behaviour, which should highlight why placing simplicity and complexity on the same spectrum is a problem.

[Image: a double pendulum. Source: Wikipedia]
As we can see, this is a simple double pendulum. It is composed of only two moving parts. Properly, this system is chaotic. Chaos is another term that is often confused because, in the vernacular, it simply means 'disordered' but, in scientific jargon, it means 'sensitive to initial conditions'. This is again one of those situations in which the oft-maligned 'semantics' is incredibly important, because while it could reasonably be said that this system exhibits some disorder, disorder isn't a necessary outcome of chaos, in exactly the same way that disorder doesn't necessarily result from entropy, as we discussed in an earlier post. I'll be covering chaos in a later post, but I highly recommend the excellent Chaos by James Gleick in the meantime.
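Sensitivity to initial conditions is easy to demonstrate in code. The double pendulum itself needs a full numerical integrator, so as a minimal stand-in (my choice, not something from the post) here's the logistic map, a one-line system that is also chaotic at this parameter value: two starting values differing by one part in ten billion soon bear no resemblance to one another.

```python
# Sensitivity to initial conditions, sketched with the logistic map
# x -> 4x(1 - x), a one-line chaotic system standing in for the
# double pendulum.

def trajectory(x, steps):
    """Iterate the logistic map, returning the whole orbit."""
    orbit = [x]
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-10, 60)

max_gap = max(abs(p - q) for p, q in zip(a, b))
print(max_gap)   # order 1: the microscopic difference has exploded
```

Note that the orbits stay bounded between 0 and 1 throughout: chaos here means unpredictability, not disorder spilling everywhere, which is exactly the distinction drawn above.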

The point to take away here is that complexity is often conflated with complicatedness, and that this is an error. As is all too clear from this example, an extremely simple system can exhibit complex behaviour. Indeed, this system is 'irreducibly complex', and this is often erected as a problem for evolutionary theory. It isn't, but to understand why requires a little more work.

Apologetic regarding irreducible complexity can actually be traced to Darwin and his seminal work, On the Origin of Species:
'If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.'
This has been leapt on by creationists, most notably Michael Behe, who has essentially redefined it in a subtle way for the precise purpose of attempting to debunk evolution, a truly futile endeavour, not least because evolution has been observed occurring. Behe has built a massive castle in the air around this idea, in a spectacular commission of a fallacy I've termed argumentum ad elbow-joint-of-the-lesser-spotted-weasel-frog, after an example employed by Richard Dawkins in The God Delusion. Behe, in his employment of this fallacy, has hit upon several examples of what he deems 'irreducible complexity', but there are several reasons why they have all failed. As always, actually looking at examples should prove instructive. Let's start with the favourite, one highlighted by Darwin himself.

'To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree... Reason tells me, that if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case; and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, should not be considered as subversive of the theory.'
If you paid attention there, you'll have noted the ellipsis between 'degree' and 'Reason'. I've drawn attention to it for a specific reason, which I'm going to come back to later in the post, because it deals with another fallacy common among apologists.

Here, though, we just want to look at the example of the eye, which is a favourite organ that's allegedly irreducibly complex. The claim is that the human eye 'with all its inimitable contrivances', couldn't have come about by a process of cumulative construction over generations, because part of an eye would be useless. It should be fairly simple to spot the flaw in this, of course.

The eye, complex as it is, is actually reasonably simple in terms of the core principles on which it operates. We have photoreceptor cells in our eyes containing proteins known as 'opsins', which are sensitive to light and mediate the conversion of photons into electrical impulses, a process known as phototransduction; those impulses are in turn interpreted by the brain. This is a simplification, of course, and the actual signal cascade is itself fairly complicated, but it's based on these very simple principles. The brain, and indeed all neurological systems, are rooted in the same electrochemical foundations, which are universal in the biosphere. Humans have a trichromatic (three-colour) system that translates photons of different frequencies into red, green and blue, much like your television. Other organisms have bichromatic (including some humans) or tetrachromatic (four-colour) systems. Each photoreceptor cell needs multiple photons of a certain energy in order to trigger.

For more on photons, energy and colour, see previous post Give us a Wave, which is pretty cool and has lasers in it.

We can start with a really simple system that will give us some insight into what use 'half an eye' might be. Here's the beginning of a progression of various prototype eyes so we can see how the whole process might work. First, a simple cluster of photoreceptors. It might be difficult to see how this can provide any sort of advantage, but remember that, in the land of the blind, the one-eyed man is king. Indeed, in that simple statement is the key to evolution by natural selection. As we discussed in the earlier post dealing with observations of evolution, the key to understanding the phrase 'survival of the fittest' is that the organisms that are involved in the competition for resources are almost always the same species (and for some of those resources, always).

Now let's look at what happens if we add a slight complication. In this instance, we have exactly the same cluster, but now we have a bit of curvature.

With just this tiny modification of our cluster, we can see that we have some directional sense, in that this configuration can tell us vaguely where light is coming from.
This is the same configuration but with a light source added. Because the cluster has curvature, the receptors on the side facing the source are struck more squarely than those facing away, which gives the configuration some sense of the direction the light is coming from.

It's worth noting that this is unlikely to occur in a single generation. This tiny modification could take many, many generations, with minuscule, almost imperceptible modifications from the initial cluster of photoreceptors, which itself would have taken many generations to evolve from the most basic of light-sensing proteins. Over more generations still, this curvature might become more and more pronounced, with better and better directional sense.
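The directional advantage that curvature buys can be sketched numerically. This is a toy model of my own devising, not from any source: assume each photoreceptor fires in proportion to the cosine of the angle between its facing direction and the incoming light (and not at all when lit from behind). A flat patch then responds identically whichever side the light comes from, while even a gently cupped patch produces a lopsided pattern that encodes direction.

```python
import math

def cluster_response(normals, light_direction):
    # Toy cosine law: a receptor fires in proportion to how squarely
    # the light strikes it, and not at all if lit from behind.
    # All angles are in radians.
    return [max(0.0, math.cos(n - light_direction)) for n in normals]

# A flat patch: every receptor faces straight up.
flat = [math.pi / 2] * 5

# A gently cupped patch: receptor normals fan out across 90 degrees.
cupped = [math.pi / 4 + i * math.pi / 8 for i in range(5)]

for source in (3 * math.pi / 4, math.pi / 4):   # light from the left, then the right
    print("flat:  ", [round(r, 2) for r in cluster_response(flat, source)])
    print("cupped:", [round(r, 2) for r in cluster_response(cupped, source)])
```

The flat patch yields the same uniform response for both directions, so it can report only brightness; the cupped patch's response is skewed toward the lit side, giving a downstream nervous system something to compare, which is all 'directional sense' requires.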

We can trace this growing complexity from simple photoreceptors, on through indentations getting deeper, then a film forming over the eye socket and the cavity filling with fluid, the aqueous humour, carrying on through the formation of a rudimentary lens, and then more and more tiny, incremental improvements, until we arrive at the modern eye.

I won't belabour this point any further, but I will pop a lovely video in at the end of a youngish Richard Dawkins in a classic edition of the Royal Institution Christmas Lectures from 1991. The Royal Institution Christmas Lectures series is one of the greatest treasures of the scientific world, and I really can't recommend it highly enough. Instituted by Michael Faraday in 1825, these lectures are held at the famous lecture hall at the Royal Institution each Christmas, aimed at children. The edition I'll show is episode 3 from that year, entitled Climbing Mount Improbable. I also highly recommend Dawkins' book with the same title, in my opinion his best book, and one which, properly read with an open mind, comprehensively demolishes all creationist objections to evolution.

That said, let's move on, and deal with the other darling of the cdesign proponentsists*, the bacterial flagellum.

As the name suggests, the flagellum is basically a whip (the word 'flagellate' comes from the same root). In some bacteria, this is a wonderful bit of evolution in action. It consists of a whip-like tail, driven by what can only be described as a motor, even to the extent that some of the components are named after their mechanical counterparts. As far as I'm aware, this is the only true freely-rotating motor in nature, and it really is an incredible bit of engineering. This is what a cross-section looks like.
 Image source: Wikipedia
It's really easy to see why creationists have singled this out as an example of irreducible complexity. A naïve appraisal of this would certainly reinforce the idea that it must be designed and, without proper study, it looks an awful lot like it fits Darwin's criterion and offers a thoroughly robust challenge to evolutionary theory.

This is one of those rare occasions in which 'creation science' has paid dividends because, since the erection of this as an example of irreducible complexity sufficient to undermine evolutionary theory, the flagellum has been extensively studied by real scientists, as opposed to creationists in stolen lab coats. Here, I want to share some of the findings of those studies.

Let's start with the big clue right at the bottom of that diagram. At least, it's a big clue when you know what you're looking for...

The flagellum is used to give the bacterium motility. The motor rotates, and the flagellum spins around like a little propeller, giving the bacterium motive control and allowing it to propel itself.

Not all bacteria have flagella, however. Some bacteria, especially gram-negative pathogenic bacteria such as Salmonella typhimurium, have other mechanisms in the same place. These mechanisms, known as secretion systems, are employed in infecting host organisms. In this instance, the Type III secretion system (T3SS), one of five known systems with the same function, contains everything from the rod on down. In the case of the T3SS, the rod is actually a needle, which is what's used to secrete effector proteins into the host.

The flagellum, it appears, has been constructed on top of a previously existing structure that once had a different function. Now, it may well be that this was not a direct transition from T3SS to flagellum, and that other functional structures were in place in the interim, but it's pretty clear from the morphology that the T3SS was the precursor to the flagellum. One of the unresolved issues is that the C-ring in the flagellar motor has not been observed in any T3SS.

There's an important point to be noted in there, the idea that a functional mechanism can be constructed on top of a previously existing structure that itself had function. It might seem strange that this could happen but, as always, the devil's in the details.

One of the major underlying themes in evolution is economy. It's known, for example, that some species of cave fish that live entirely in the dark have not only lost the use of their eyes but, in many cases, stopped synthesising eyes altogether. The same can happen with any previously functional structure and, of course, this can leave the way open for the synthesis of new functional structures that utilise part or all of the pre-existing structure for new purposes.

Further, we know from embryology that there are transient structures that grow specifically so that functional structures can grow on top of them, to be removed later in the embryological process. From a mechanical perspective, this is like a 'centring', used in construction of arches.

As it happens, though, the bacterial flagellum has been categorically demonstrated not to be irreducibly complex. In recent experiments, a closely related rotary motor, the F1-ATPase, was subjected to deconstruction, including removal of its axle, and it still worked! Further, there has been work done on the actual genes that code for the requisite proteins in flagellar synthesis, particularly two of them, namely FliL and FliH. It appears that knocking out the FliL gene buggers up flagellar synthesis. But, and you'll love this, because it shoots the irreducible complexity nonsense in the foot, if you knock out FliL and FliH at the same time, flagellar synthesis resumes! I'll cite the studies at the bottom.

I think we've spent enough time on the flagellum so, before I move on to the sting in the tail, I thought it worth noting that we've already touched on an example of irreducible complexity in our earlier outing on evolution linked up the page, in the form of Richard Lenski's long-term E. coli experiment.

We talked about how, in that experiment, although the E. coli already had the ability to transport citrate, making it available as a source of subsistence, it wouldn't do so in an aerobic environment. The mechanism that allows it to transport citrate in an aerobic environment is manifestly irreducibly complex. If you remove any of the parts, it stops working. Each of those parts, just as in the example of the T3SS and the flagellum above, had an alternative function prior to becoming part of the function of aerobic citrate transport, and this was observed evolving.

Finally, and this is the bit that really nails the whole irreducible complexity canard to the wall (no ducks were harmed in the writing of this article): while no truly irreducibly complex structures sensu Darwin, as cited above, have ever been identified, irreducible complexity in the sense that Behe uses the term, and as described in the examples cited above, is actually a prediction of evolutionary theory. Yes, you read that right. Evolutionary theory predicts that such structures will arise via mutations and natural selection. This was first formalised by Hermann Joseph Muller some thirty years before Behe was even born, in a process that has since come to be known as the Mullerian Two-Step. It works, simplistically, like this.

1. Add a part.

2. Make it necessary.
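The two-step can be captured in a toy sketch. The parts 'A' and 'B' and the scoring rule here are entirely hypothetical, invented purely for illustration: first a part is added because it helps, then the original part degrades until the addition is indispensable. At no point does the system pass through a non-functional stage, yet the end state is 'irreducibly complex' by Behe's definition.

```python
def performance(parts, a_degraded=False):
    # Hypothetical performance score for a two-part system.
    # 'A' does the job alone; 'B' is an optional enhancer; once 'A'
    # degrades, it functions only with 'B' propping it up.
    score = 0
    if "A" in parts:
        if not a_degraded:
            score += 2               # original, self-sufficient A
        elif "B" in parts:
            score += 2               # degraded A, scaffolded by B
    if "A" in parts and "B" in parts:
        score += 1                   # B enhances the combined system
    return score

# Step 0: A alone works.
assert performance({"A"}) == 2
# Step 1: add B -- strictly better, so selection can fix it.
assert performance({"A", "B"}) == 3
# Step 2: A degrades -- harmless while B is present...
assert performance({"A", "B"}, a_degraded=True) == 3
# ...but now remove either part and the function collapses:
assert performance({"A"}, a_degraded=True) == 0
assert performance({"B"}, a_degraded=True) == 0
```

The final state would fail Behe's remove-a-part test, yet every step along the way was either advantageous or neutral, which is exactly what the Mullerian Two-Step predicts.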

Now, there's one last thing to deal with, namely that ellipsis in red in the second of those quotations from On the Origin of Species. An ellipsis is used in text to denote missing text. In this instance, I used it to truncate the passage and remove an interjection that made the quoted passage longer than it needed to be and added nothing useful in terms of the discussion, but it's worth noting that, when you see this in a creationist quotation of a scientist, it should raise a red flag. Indeed, whenever you see it, be ready to check the source, especially if the quoted text seems incongruent with other things that the quoted scientist has said, or where it doesn't gel with their general position.

There's a common practice in apologetics - and always remember that science-deniers treat science as nothing more than a branch of apologetics (often committing a fallacy known as the fallacy of stolen concept, which I'll cover in some detail soon) - known as 'quote-mining'. This is a particularly pernicious practice that constitutes a spectacular breach of the ninth commandment. Indeed, it breaches that commandment directly, by bearing false witness. This practice entails taking something somebody has said and removing portions of the text so that it looks like they've said something else, often directly contradicting the original intent.

If you look at the quotation from Darwin only up to the ellipsis, it looks an awful lot like he's saying that his theory seems absurd. Creationists will often omit what comes after it, in which Darwin categorically states that he doesn't think so. Doing this to Darwin's work is particularly fruitful for creationists, not least because Darwin was so incredibly diligent in his research that, when everything was in place, he systematically went through his theory and erected all the possible objections to his theory, precisely for the purpose of knocking them down and showing that said objections were not substantive and were countered by the evidence.

In short, if you see something quoted that seems to counter what a scientist has said, check it, and especially if you see an ellipsis. I'm not suggesting for a second that all instances will be dishonest misrepresentation, but it occurs so very often that it's something to watch out for.

The Talk Origins website has been running a quote-mine project for many years, and has a nice collection of expositions of this horribly dishonest practice. I'll pop a link in at the bottom.

Growing Up In The Universe: Climbing Mount Improbable

*This, which looks like a typo, is in fact a typo. It comes from a famous book written by Percival Davis and Dean Kenyon. In the original edition, the book made reference to 'creationists'. At the time, creationists were trying to generate some distance between their ideas and the idea of god, so that their pseudoscientific ideas wouldn't fail the Lemon test, a legal precedent for ensuring that the separation of church and state was maintained in line with the establishment clause of the US constitution. The typo in a later edition arose because, when the word 'creationists' was replaced with 'design proponents', something went wrong at the typesetters, leaving 'cdesign proponentsists' in its place.

[1] Axle-Less F1-ATPase Rotates In The Correct Direction by Shou Furuike, Mohammad Delawar Hossain, Yasushi Maki, Kengo Adachi, Toshiharu Suzuki, Ayako Kohori, Hiroyasu Itoh, Masasuke Yoshida and Kazuhiko Kinosita, Jr., Science, 319: 955-958 (No. 5865, 15 February 2008)

[2] Distinct roles of the FliI ATPase and proton motive force in bacterial flagellar protein export by Minamino & Namba, Nature, 451: 485-488 (24 January 2008)

Talk Origins website.

I should add that it's worth checking out AronRa's Foundational Falsehoods of Creationism on his Youtube channel. This is also now available for pre-order in book form, and I recommend it.

OK, here it is, it's the politics post.

I don't do 'isms'. I have this thing about taking on labels, not least because, once you have a label, your position is defined. In my opinion, once you accept an 'ism', you're shackled. It's a bit like, actually a lot like, having beliefs. I have very little use for the term, as I discussed in an earlier post.

Now, I hear the cry from the cheap seats, 'but you're an atheist, aren't you?' to which the reply is 'yes, I am, but that isn't an 'ism''. I won't belabour this point, I'll simply point you to my earlier submission 'Are Babies Atheist?' which should make clear that, being a privative, 'atheist' isn't something I am, it's something I'm not.

The only 'ism' I'll happily accept is 'pragmatism'. I have no objection to being called a pragmatist, because it's the only 'ism' that isn't dogmatic; it is, in fact, the best protection against dogma.

At various whiles during my socio-political education, I've accepted many labels, but I've eventually found them all wanting in some way, mostly just in that acceptance of the label involves acceptance of all that the label entails, and there's no 'ism' whose entailments I accept entirely.

The point here is that I'm largely non-partisan. I've voted almost entirely Labour during my adult life, not out of any sense of loyalty to the party, but because it's tended to be both most aligned with my general views and in a position to present a proper opposition to the self-interest represented by the Tories.

That said, I was quite invigorated when the news filtered through to me that Jeremy Corbyn had thrown his hat into the ring for the leadership of the Labour party after the 2015 election. 'At last,' I thought, 'somebody who actually gives a shit.'

Back in my younger days, I was something of an activist. I involved myself in several political campaigns. I was active in the anti-deportation movement, with the Viraj Mendis Defence Campaign and others, and in the North-West arm of the campaign against section 28 of the Local Government Act 1988, which said that local authorities:
"shall not intentionally promote homosexuality or publish material with the intention of promoting homosexuality" or "promote the teaching in any maintained school of the acceptability of homosexuality as a pretended family relationship"

I also, during this time, spent some time on the 24-hour picket at the South African embassy in Trafalgar Square protesting Apartheid. It was here that I first met Jeremy Corbyn. I seriously doubt he'd remember me now, except maybe as a drunken singer of songs with a guitar and a very loud voice who kept everyone awake in the wee small hours in the biting cold (which pretty much sums up many people's impression of me even now).

Thing is, there's nothing glamorous about this. There are no cameras, no broad publicity, no political or public capital in this endeavour, only shrunken testicles, drunken passers-by, some of whom were quite abusive, and a commitment to addressing concerns for the good of the human race.

Anyhoo, that's by-the-by, and is only to illustrate that Corbyn has made a vocation out of concern for others, of equality of opportunity, and the general well-being of society.

The day of the leadership election was a highlight of recent UK politics. I was driving back to the PRM from Londinium with mum, still at that time a Labour councillor for our glorious republic, and we listened to it all on the wireless (that's Northern-speak for radio, for my non-UK readers). As somebody not normally given to excitement about such things, even I got swept along as Corbyn was elected leader with the biggest mandate in British political history.

Hardly was the acceptance speech finished when the knives came out. Immediately labelled 'unelectable', a truly asinine contention in light of the above, Corbyn has been vilified by all sides, including from within his own party, from that day onwards.

I do understand some of this, of course. I remember only too well the dark days of Thatcher, who systematically demolished the UK's national industries, lining the pockets of her chums in the process. I remember the feeling of relief when Blair won the '97 election, and the sense that the country was finally getting back on the right road after so many years in the wilderness. I really do understand those who think that moving back toward the left might leave us in the same position we were in during the Tory dominance, but the reasoning is flawed, in my opinion, and doesn't reflect all that's gone on in the interim.

Aside from all other concerns, we saw from the last two general elections that people have become disillusioned with 'New Labour', and not without reason. Many protest votes and spoiled votes in both, and a general feeling that the party isn't cutting it, have led first to an ill-conceived coalition and then to a small mandate for the tories. Of course, there's good reason to suppose that the 2015 election was won largely on the basis of the promised Brexit referendum.

Then came Brexit itself, and of course the attempt to apportion all the blame for the result to Corbyn, despite the fact that, by the admission of those within his party and without, he worked harder than anybody to deliver the remain vote. He was given instruction to go out and get the votes of the younger electorate, which he delivered in spades.

In the aftermath, when we've seen U-turns on all the promises made by the leave campaign, and those responsible have turned tail and fled, with the exception of Boris the Buffoon, who, despite coming second only to Phil the Greek in his record of offensive gaffes and pissing pretty much everybody off, suddenly seems like the ideal candidate to be Foreign Secretary, in about the biggest piss-take in British political history, the Labour party staged a coup in an attempt to oust Corbyn, even in the face of the constituents of those involved giving them clear instruction that this was not what they wanted.

When the leadership election is called, we then have the situation where, despite the rules of the party being crystal clear on the point, it's asserted that the incumbent is subject to the same support rules for the leadership contest as the challengers. That the NEC even had to give this any consideration is a sign of how utterly screwed up the party is. And all this taking place while the tories, largely responsible for the Brexit situation, sit back and laugh.

There are several things about this that would be amusing if the situation weren't so serious; that nobody seems able to work out why the man handed the largest mandate in history might not want to quietly step aside and let the old guard have their way without any sign of a fight; that the tories have been comprehensively let off the hook. I could go on.

Then we have the claims of anti-Semitism, which the tories have stoked gleefully. This, particularly, is distressing, especially when such an accusation is levelled at somebody like Ken Livingstone.

So, what started this, and was there any indication of anti-Semitism within the Labour party?

It all began when a blogger located a Facebook post of a meme shared by Bradford West MP Naz Shah in 2014, before she was an MP.

Now, it could certainly be said that this repost was ill-advised, but anti-Semitic? Really? How about when I advise you that the source of this meme was actually Norman G Finkelstein, American-Jewish political scientist? Does that make it any more or less anti-Semitic? It appears that Finkelstein posted this as a sort of political in-joke, something reasonably common in the US.

How about criticism of Israel? Is that anti-Semitic? Are we not allowed to highlight what we see as the depredations of governments and criticise them for their actions? If I say Robert Mugabe is a despot, or criticise his government for the wholesale killing of his own people, am I being racist?

Is Ken anti-Semitic for saying that Hitler supported the relocation of Jews to Israel? Did Hitler do this? The evidence is ambiguous at best, but it certainly seems clear that, at some point in his rise to power, he thought about several possible solutions. I know that at least one widely-read source suggests that he supported it, though the source is questionable at best. Is it anti-Semitism to repeat false claims when they're thought to be true?

Let me be clear here: Prejudice is a pox on mankind (I will be posting about the logic of prejudice in a near-future post), and we really need to be working extremely hard to stamp it out, but hysterically leaping on everything that can remotely be construed as prejudicially-motivated isn't the way to go about it, not least because it shuts down discourse, and open and honest discourse is the only route to addressing these deep-seated issues in society.

So what happens? These incidents get blown up into the broad accusation that anti-Semitism is rife in the party, so the Labour leader calls for an investigation. When the investigation returns its result, and finds that, although there are things that could be done to improve diversity generally, there were no signs of systemic prejudice against any group of people within the party, the report is described by Corbyn's opponents as a whitewash, quelle surprise.

At the press conference, Corbyn spoke, and what he said caused such a bunching of panties as to lead to a tsunami of urine. What did he say?
“Our Jewish friends are no more responsible for the actions of Israel or the Netanyahu government than our Muslim friends are for those of various self-styled Islamic states or organisations.”
Comparing Israel to ISIS? The horror!

Except, of course, that he didn't. This is a blatant misreading, whether honest or intentional. It's often the case that responses to analogies fail to hit the mark, either through a failure to understand how analogies work, or through deliberately picking up on only the key words so that one can leap to the conclusion that somebody's trying to be offensive. So here's how analogies actually work, for the hard of thinking.

The beauty of any analogy lies in its imperfections. I like to use the example of a map, because it's the clearest instance of an analogy I can think of.

Say I have a map of London. The map is an analogy of London. If the map were perfect in every respect, it would be of no more use in finding my way around London than simply going there and wandering the streets. It's not the same scale, it doesn't have the same number of dimensions, it's simply not the same.

What's being equated is the general layout of the map and the general layout of London, and there the similarity ends. In Corbyn's statement, what's being equated is the lack of responsibility, and there the similarity ends.

No. Zionism isn't Israel, it isn't Judaism, it isn't Jews. While many Zionists are Jews, not all are. Zionism is a movement, one aimed at an independent state for the Jews. Many Zionists are actually fundamentalist Christians, who support the movement because it fulfils a biblical prophecy related to the end of days in their mythology. Indeed, there are some who will tell you that conflating Jews and Zionism is itself anti-Semitic. I'm not one of them, but I also reject the idea that criticising Zionism is itself anti-Semitic.

For my part, I think the formation of the state of Israel in one of the most volatile regions on the planet was a mistake, and has caused considerable trouble for the world at large, not least those who were already living there. What's the solution? I don't know, and I don't pretend to. I do know that killing isn't it, whether that killing be done by Israel or the Palestinians.

Getting back to the main topic, all of this has, of course, been lapped up by the tories. Many of those who voted to leave the EU because of lack of access to unelected officials in the EU are now faced with unelected officials in Westminster to whom they have no access, so good result there.

Yesterday, I watched the debate on Victoria Derbyshire, and one thing stood out above all others, and it tells me what I need to know about the leadership contest, because it goes to the heart of what it means to truly be a leader. When asked 'if your opponent wins, will you be willing to serve in his cabinet?', the two men gave starkly different responses, and the difference went straight to the integrity of each. Smith indicated that he wouldn't be willing to serve in Corbyn's cabinet, on the basis that he didn't think Corbyn was the right man for the job. Corbyn, of course, recognised that, if he was suitable to lead, he must be suitable to contribute to the cabinet. It's as simple as that. This is what separates them, in my opinion. Corbyn's concern is the good of the people and the good of the party, and he recognises that depriving the cabinet of his talents and insight doesn't serve either, while Smith fails to recognise it, which indicates to me that he hasn't got the best interests of the party or the electorate at heart, and that's sufficient to disqualify him on its own, regardless of any other considerations. The same is true of all those members of the PLP who rushed to try to oust Corbyn as soon as the Brexit result was returned, in what was so obviously pre-planned that one can't fail to question the motivations of those involved.

I could say much more, but I feel like I've covered the major bases that I wanted to cover, so I'll leave with one last thought:

The world stands on the precipice at the moment. I'd have loved to see Bernie Sanders get the nomination, but he didn't. I note that many of his supporters are now turning to Jill Stein, the Green Party candidate. This is misguided in the extreme.

Voting down the ticket is a valid tactic, especially if your aim is to undermine the stranglehold on politics of a two-party system, but the time to do this is when you have moderate candidates, not when one of the candidates is an idiot blowhard lacking the intellectual capacity of an amoeba and unable to plan five seconds into the future. I can't say strongly enough that a man who, during an interview, asks three times why we can't use nuclear weapons, is not the man you want in charge of the launch codes.

A few people have, in recent times, wondered how Christopher Hitchens would have reacted to the situation. We can be fairly certain what his position would be although, as Sam Harris pointed out, we can't know just how beautifully he would have put it. For my part, I'm reminded of some of the things he said at the prospect of religious fundamentalists and other fuckwits getting their hands on weapons of mass destruction, and the warning that we were getting quite close with the research program in Iran.

Well, brothers and sisters, comrades, fellow primates, if we're not careful, we might be about to find out (Vale, Hitch; you're still missed).

### Argumentum ad Verecundiam and the Genetic Fallacy

Greetings!

I intended to write a post about paradox, but it's turned out to be considerably more involved than I envisaged, meaning I have lots of diagrams to do, so I thought I'd do a post about the genetic fallacy. This has been on my mind for a while, and I was motivated to get to it after watching an episode of Q&A, an Australian panel show much like Question Time in the UK. In this episode, which I'll pop in at the bottom, one of the panellists, a newly-elected senator for Queensland, Malcolm Roberts, went head-to-head with my co-citizen of the People's Republic of Mancunia and physicist Professor Brian Cox on anthropogenic climate change. Here's a small cut of the exchange:

Interestingly, when I first saw the name, I was put in mind of another Mancunian, a singer from the '60s born in Blackley, also named Malcolm Roberts.

Anyhoo, there will no doubt be much said about this and, indeed, I see that it's making its way around the news cycle as we speak. I wanted, however, to focus on some specific aspects of what Roberts was saying; argument from authority and 'consensus isn't science'.

In Deduction, Induction, Abduction and Fallacy we covered how logic is used in the sciences, and how to spot some basic fallacies. Among them was a broad class of fallacies known collectively as the 'genetic fallacy'. This fallacy is committed when an argument is dismissed or accepted based only on some characteristic, whether merely perceived or demonstrably extant, of the source of the argument. The most common forms of this fallacy appear as either argumentum ad hominem (argument to the man) or argumentum ad verecundiam (argument to reverence or authority). Roberts, of course, was accusing Cox of the latter with his charge of appeal to authority. His position fails for two clear reasons, and exposes a real misunderstanding of how science is conducted, and of what it means when a scientist gives advice. Cox attempted to draw him out by questioning how Roberts thought we should go about obtaining information about the future of the climate, but the format of the programme meant that was doomed to failure from the off, because it doesn't allow the kind of Socratic approach that Cox is fond of, which he quickly realised.

So why is appealing to authority a problem and, more importantly, why does the charge of fallacy not stick in this instance? Let's look at an example that highlights the fallacy.

P1. Richard Dawkins says the universe came from nothing.
P2. Richard Dawkins is a respected scientist.
C. The universe came from nothing.

There are those who would see no problem with that, though I doubt any of my readers will fail to spot the glaring flaw. Yes, Dawkins is a respected scientist, but his field is not cosmology, it's ethology. It is of no more moment that Dawkins said this (he didn't, by the way) than that Bill O'Really thinks we can't explain the tides.

Let's try another one.

P1. Lawrence Krauss says the universe came from nothing.
P2. Lawrence Krauss is a respected cosmologist.
C. The universe came from nothing.
Surely we're on firmer ground here?

Frayed knot! Yes, it's true that Lawrence Krauss is a respected cosmologist and, yes, he is an expert in the relevant field. The problem is slightly more subtle here, but it's that Krauss is one cosmologist, so it's of no more moment that Krauss said this (he didn't, by the way) than that Isaac Newton thought that gravity propagated instantaneously.

In another earlier post, I talked at some length about the process that occurs at the end of research. It's known as 'peer-review', and it's among the most important parts of the scientific process. It isn't without problems, but those problems are circumvented by the process itself in the long term.

All scientific hypotheses and theories go through this process. Many think that it's confined only to publication, and indeed this is where many of the perceived problems reside. We should, then, distinguish between pre-publication peer-review and the broader process of peer-review because, as some have commented, there are problems with the pre-publication system. There was an investigation conducted by John Bohannon, a correspondent for Science magazine, that makes for interesting reading. These problems are acknowledged but, on the whole, it's been an effective system. However, that's not the end of the process.

The broader process of peer-review actually begins at publication. Publication is merely an indication that the review panel didn't find anything wrong with the paper. At this point, we can have some confidence in the findings of the study, precisely to the degree that it's passed initial review. Now experts the world over are looking at the paper, seeing whether it has any implications for their own research, positive or negative. Often, where somebody else's research is closely related to the paper, they'll try to replicate its results. If it directly contradicts their own research, it will be pulled to pieces. This process is ongoing, and self-correcting. At some point, a paper's contents are going to come hard up against something, just as Newton came hard up against a cosmic speed limit, despite having survived peer-review up the wazoo for several centuries.

This also provides another valuable lesson about how science is conducted, of course. We understand that positive statements arrived at inductively are tentative. Every time a scientist makes a positive statement, it should be appended with 'the best model we currently have, when applied to the available data, suggests that...'

Where more than one model is empirically equivalent, the same applies, because empirical equivalence means that they make the same predictions about what will be observed.

So, what did Cox actually appeal to in the exchange with Roberts? He certainly cited the public stance of several organisations, all engaged in climate research. Is this an invalid appeal to authority?

No, because of something that Roberts insists isn't science, but which the above should demonstrate readily is not only science, it's good science. Cox didn't only cite various well-respected scientific institutions, he stated that this is what the consensus is, among these organisations collectively. This is the only valid authority in science. When the vast majority of the world's experts in a given field make a positive statement, it's held as a prediction of what the best models tell us, as agreed upon by people who work with and understand the mechanisms and the data. Consensus doesn't tell us what is correct, because that's definitely not science, but consensus among experts tells us that the conclusion is what the available data suggests.

In reality, Cox isn't actually appealing to any authority except the authority of the data. That he cites these organisations at all goes to something that's often not understood.

Throughout this little collection of missives, you'll find liberal sprinklings of the names of well-known and lesser-known people who contributed to the history of science. There are some quotations of Feynman, including the quotation that Roberts so horribly mangled. Am I appealing to them? Not remotely. When, for example, I cite Einstein, I cite him because he's the person responsible for the underlying discoveries, but what I'm actually appealing to is his validated research, research that has undergone the peer-review process, both pre- and post-publication.

So here it is, Malcolm, the empirical evidence that you keep insisting hasn't been presented.

The world's experts in fields relating to long-term evolution of the environment are in consensus that the data suggest that anthropogenic climate change is real, and that the long-term projections of the models and the data upon which that consensus is based suggest that the future doesn't look good. In short, the consensus of recognised experts in relevant fields is empirical evidence. Let that sink in for a moment.

Ultimately, even if the consensus is wrong, it's deeply irresponsible to ignore it. If there's even a small chance that it's correct, then it's incumbent on us to take it seriously, as opposed to dismissing it based not only on a lack of understanding of the status of climate science, but on a deep ignorance of how science is actually conducted.

The simple fact is that, as Cox alluded to in the programme, all models predict a 'point of no return'. Climate change is dynamic. It doesn't proceed at fixed rates. There are also things that can affect the future evolution of the system as a result of warming and accelerate the change, such as the release of methane bound up in permafrost. Methane is around 30 times more potent as a greenhouse gas than carbon dioxide, and we know there's a huge amount of it tied up in the frozen parts of the world. The system can reach a tipping point beyond which we may not even be able to prevent the change from accelerating, let alone stop or reverse it.

I'm no climatologist, but I understand the physics of climate well enough, and I'm under no doubt whatsoever, having looked at the data, that humans are contributing to the long-term evolution of the climate, in ways we can only assess in broad terms but which, when assessed, point unequivocally to this process, and it's hugely irresponsible to not only ignore consensus, but to actively fight against it.

Ask yourself honestly if this is what you expect from your elected representatives.

Q&A Full episode

### Give Us A Wave!

Hello!

Today I wish to talk about waves. No, not that kind of wave. I want to talk a bit about the mechanics of waves, and what it means for the universe.

In a previous article, The Certainty of Uncertainty, we discussed some of the foundations of quantum mechanics, and the history of the battle between particles and waves. We concluded, of course, that neither the particle model nor the wave model was correct, but that both were manifestations of different behaviours of something more fundamental; fields.

Here, we're going to explore some of the implications of wave behaviour, especially how waves interfere with each other, and how this leads to some of the central features of quantum mechanics. I strongly recommend reading the earlier article, as this will build on some concepts discussed there. This promises to be about the most technical discussion you're likely to encounter on this blog, although I don't intend to delve into all of wave mechanics, just enough to give a flavour of how the science works. I'm also going to be switching between sound and light so that I can make it as simple as possible to follow, and because I have a lot of musician friends, some of whom have noted that I haven't done anything about audio yet, despite promising to do so.

For simplicity, I'm going to stick with sine waves, but what we'll discuss applies to all waves. Here's a simple, single-cycle sine wave to get us started. All waves have three key features: Frequency, amplitude and phase.

Frequency is measured in cycles per second, or Hertz (Hz), after German physicist Heinrich Hertz, who first demonstrated the existence of electromagnetic waves. In sound, frequency manifests as pitch. If we take this frame as representing one second of time, then the frequency of this wave is 1 Hz. If this were a sound wave, it would be well below the range of human hearing which approximately runs, optimally and prior to any age or damage related loss, from 20 Hz to 20 kHz (20,000 Hz).

Here's another second of wave, where the frequency is now 16.35 Hz, the fundamental frequency of the lowest note (C) on a tuba or an imperial grand piano. On an instrument, this note would be made up of a fundamental and harmonics, the sum total of which makes up the timbre of the instrument at a given note. The fundamental frequency would again be inaudible, and would thus be felt rather than heard. The harmonics, though, would tend to be audible.

Amplitude is the height of the wave. In sound, this equates to level*. Thus the larger the amplitude, the louder the signal. In light, it equates to intensity, or the number of photons.

Finally, phase is where in the cycle the wave is at a given time. For individual waves, this mostly has no effect, but it has some interesting properties that will become clearer when we begin to look at what happens when multiple waves interfere with each other.
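For the programmatically inclined, those three features map directly onto the parameters of a sine function. Here's a minimal sketch in Python (the function name and sample values are mine, purely for illustration):

```python
import math

def sine_wave(freq_hz, amplitude, phase_rad, t):
    """Value of a sine wave at time t (in seconds)."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t + phase_rad)

# A 1 Hz wave of amplitude 1: one full cycle per second.
# At t = 0 it starts at zero; its peak falls a quarter of the way
# through the cycle, at t = 0.25 s.
samples = [sine_wave(1.0, 1.0, 0.0, t / 8) for t in range(9)]
```

Changing `freq_hz` changes the pitch, changing `amplitude` changes the level, and changing `phase_rad` shifts where in the cycle the wave sits at t = 0.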

An important concept in wave mechanics is known in the jargon as 'coherence'. Coherence deals generally with correlations between physical quantities in waves. In the most basic form, two waves are coherent if they're the same frequency and the phase relationship between them is constant.

As discussed in the earlier article, where the peak of one wave meets the peak of another, the waves are 'in phase' and they constructively interfere, which has the effect of increasing the amplitude.

As we can see, if we take two waves of the same frequency and amplitude, in relative phase, the output is a wave of twice the amplitude. If we take three, we have triple the amplitude, and so on.

Where the peak of one wave meets the trough of another wave, they are 'out of phase' (180°) and they destructively interfere, with the result being cancellation, or silence.
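Both cases are easy to check numerically. A little sketch using the 16.35 Hz wave from above (sampling rate and duration are illustrative choices of mine):

```python
import math

def sine(freq, t, phase=0.0):
    return math.sin(2 * math.pi * freq * t + phase)

ts = [t / 1000 for t in range(1000)]  # one second, sampled at 1 kHz

# In phase: the two waves reinforce, doubling the amplitude.
in_phase = [sine(16.35, t) + sine(16.35, t) for t in ts]

# 180 degrees (pi radians) out of phase: the waves cancel to silence.
out_of_phase = [sine(16.35, t) + sine(16.35, t, math.pi) for t in ts]

peak_in = max(abs(s) for s in in_phase)       # ~2.0: constructive
peak_out = max(abs(s) for s in out_of_phase)  # ~0.0: destructive
```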

For completeness, let's look at what happens when we put multiple waves of differing frequency together. Here are three fundamentals, 16.35 Hz, 20.61 Hz and 24.49 Hz respectively. This is the chord of C major. Unfortunately, you won't be able to hear this unless you have spectacular hearing, because all these are below or at the threshold of human sensitivity. Played loud enough, you'd certainly feel them.

The last of those waveforms is the combination of the three waves for the chord. You'll note that there are some regularities in the meeting of the waves. This is how harmony works. Harmony is a function of correlation of phase at different frequencies.

It's also worth looking at another example, by doubling the frequency of the basic wave:
As we can readily see, doubling the frequency gives an octave above. This is true for all sound waves.

What about frequencies that are closer together?
Here we have two frequencies quite close together. Anybody who's used a tuning fork to tune a guitar will be familiar with what's happening here. As you tune the string and get closer and closer to the correct tuning, you begin to hear a tremolo pulse in the interplay between the string and the tuning fork, quick at first, then slowing down as the notes get closer and closer together. (Many guitarists think that tremolo is what you achieve with a whammy bar, but this is incorrect; a whammy bar modulates pitch, so it's actually vibrato, while tremolo is a modulation in amplitude.) You can see why with the wave, as that pulsing is a function of where the waves are meeting. Where they meet at the top, the pulse is at its loudest; then it cycles through the phase difference, getting quieter and then louder until they meet at the top again. Once the string and the tuning fork are playing the same pitch, the phases will lock and the pulse will disappear.
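That pulsing, the 'beating', happens at exactly the difference between the two frequencies. A quick sketch (the 440/444 Hz pairing is my own illustrative choice, a fork at concert A against a slightly sharp string):

```python
import math

f1, f2 = 440.0, 444.0   # tuning fork at A440, string slightly sharp
rate = 44100            # CD-quality sample rate

# One second of the two tones summed: the result pulses (beats) at the
# difference between the frequencies, here |444 - 440| = 4 times a second.
mix = [math.sin(2 * math.pi * f1 * n / rate) +
       math.sin(2 * math.pi * f2 * n / rate)
       for n in range(rate)]

beat_rate = abs(f2 - f1)  # 4.0 Hz: the pulse you hear while tuning
```

As the string is tuned towards 440 Hz, `beat_rate` shrinks towards zero and the pulse slows until it vanishes.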

Incidentally, for those not of a musical bent, this is also what happens when somebody sings out of tune, and why it's so difficult for people with well-trained ears to enjoy shows like the X-Factor (and indeed a fair bit of popular music). Just as even untrained ears can appreciate harmony, very well-trained ears find the dissonance of out-of-tune singing quite unpleasant and even painful.

 image source: Wikipedia
As an aside, the above also gives some insight into how radio for broadcast works.

The broadcast signal from the station is encoded in a carrier wave of a given frequency, which is modulated in a specific manner pertaining to the band. For AM radio, the amplitude is modulated. For FM, it's the frequency that's modulated. Your radio contains a 'demodulator' that removes the modulation, leaving the broadcast signal. As you can see from the diagram, the modulation matches the changing amplitude of the broadcast signal, and it's this that allows us to tune into particular signals in the morass of radio waves.
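A toy sketch of the AM case may help: modulate a carrier's amplitude with a signal, then recover the signal with a crude envelope 'demodulator'. All the frequencies here are illustrative values of mine, far lower than real broadcast bands:

```python
import math

carrier_hz = 1000.0  # carrier (real AM bands sit around 500-1600 kHz)
signal_hz = 5.0      # the broadcast signal riding on the carrier
rate = 20000         # sample rate

def am_sample(n):
    """Amplitude modulation: the carrier's amplitude tracks the signal."""
    t = n / rate
    signal = 0.5 * (1 + math.sin(2 * math.pi * signal_hz * t))  # kept >= 0
    return signal * math.sin(2 * math.pi * carrier_hz * t)

modulated = [am_sample(n) for n in range(rate)]  # one second of 'broadcast'

def envelope(samples, window=40):
    """A crude demodulator: rectify, then follow the peaks in each window."""
    return [max(abs(s) for s in samples[i:i + window])
            for i in range(0, len(samples) - window, window)]

recovered = envelope(modulated)  # rises and falls at signal_hz, not carrier_hz
```

Real receivers do this with filters rather than windowed maxima, but the principle is the same: strip the carrier, keep the envelope.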

So, now that that's out of the way, let's get back to the main topic.

Photons have zero rest mass, but they do carry energy associated with their motion, and that energy manifests as frequency. In electromagnetic terms, frequency manifests as colour. We think of colour as applying only to visible light, but that's a mistake. Just as there are sounds outside our hearing range, there is colour beyond our visual range. Light is simply the visible range of frequencies in the electromagnetic spectrum. There are those who advocate calling the entire spectrum light, and it's hard to argue against. In any event, because the frequency of a photon is a manifestation of its energy, we can say that the colour of a photon denotes the amount of energy it's carrying.
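That colour-energy relationship is just Planck's relation, E = hf. A quick sketch (the function name is mine; the constants are the standard SI values):

```python
PLANCK_H = 6.62607015e-34  # joule-seconds (exact by definition since 2019)
LIGHT_C = 299792458.0      # speed of light, metres per second

def photon_energy_joules(wavelength_m):
    """E = h * f, with frequency f = c / wavelength."""
    frequency = LIGHT_C / wavelength_m
    return PLANCK_H * frequency

red = photon_energy_joules(700e-9)     # red light, ~700 nm
violet = photon_energy_joules(400e-9)  # violet light, ~400 nm
# Violet photons carry 1.75x the energy of red ones (700/400):
# higher frequency, more energy per photon.
```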

Another interesting thing about photons is that, unlike fermions (matter particles, loosely speaking), they're not subject to the Pauli Exclusion Principle. This principle prohibits two identical fermions from occupying the same quantum state. Bosons are not subject to it, though, which means that we can concentrate many photons into a very tight space.

When we combine all of the above, we can begin to get an idea of how lasers work. We take lots of photons of a given colour, with their phases correlated, and concentrate them into a tight beam using curved mirrors. Because we're sending lots of photons through at once, and because they're coherent, the amplitude of each adds to the amplitude of the output. This is why lasers have such energy. It's also why they can be controlled to diverge very little, remaining tightly focussed over a long range: coherence mitigates the spreading that would otherwise result from wave interference. Here's a nice little diagram of the workings of a helium-neon laser, for those with mechanical leanings.
 Image source: Thorlabs
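The difference coherence makes can be seen numerically: add many waves in phase and the amplitudes pile up, add them with random phases and they mostly cancel. A sketch (N = 100 and the helper name are arbitrary choices of mine):

```python
import cmath
import math
import random

N = 100  # number of equal-amplitude waves being added together

def summed_amplitude(phases):
    """Add unit-amplitude waves represented as complex phasors."""
    return abs(sum(cmath.exp(1j * p) for p in phases))

# Coherent (laser-like): every wave in phase, so amplitudes simply add.
coherent_amplitude = summed_amplitude([0.0] * N)  # = N = 100
coherent_intensity = coherent_amplitude ** 2      # scales as N squared

# Incoherent: random phases mostly cancel one another.
random.seed(42)
incoherent_amplitude = summed_amplitude(
    random.uniform(0, 2 * math.pi) for _ in range(N))  # typically ~ sqrt(N)
```

The coherent sum gives an intensity scaling as N squared, while the incoherent one averages out to merely N, which is why phase-correlated photons pack such a punch.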
So what does all this have to do with quantum theory?

In The Certainty of Uncertainty, we talked about a series of experiments, the double-slit experiments performed by Thomas Young, the basic iteration of which is extremely simple. Taken as a whole, though, this series of experiments has led us to an understanding of the world that is really quite disturbing, and conclusions so profound that they make other allegedly profound ideas seem rather mundane in comparison. Further, this experiment has many variations, and it's a variation of this experiment, the laser interferometer, that is now giving us a brand new way of looking at the cosmos, and promises to shatter some of the observational barriers we've faced until now.

But I'm getting ahead of myself. Here's the experiment in its simplest form:
 Image source: Wikipedia
On the left, we have a light source at a. In the centre, there's a screen with two slits cut into it. On the right, there's a photographic plate. As the light comes through the slits, it propagates outwards in two arcs. Just beyond the slits, these arcs begin to overlap. On the far right is an illustration of what appears on the plate. Where they overlap, the two wave arcs interfere with each other, sometimes constructively, sometimes destructively, which is why we see the pattern of light bands where the amplitude is highest and dark bands where the waves have cancelled, exactly as discussed above in the context of sound waves.
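The positions of those bright and dark bands can be predicted with a few lines of arithmetic. A sketch of the idealised two-slit pattern, using the small-angle approximation (the wavelength, slit spacing and screen distance are illustrative values of my own):

```python
import math

wavelength = 500e-9  # green-ish light, 500 nm (illustrative)
slit_gap = 50e-6     # distance between the two slits (illustrative)
screen_dist = 1.0    # slits-to-screen distance in metres

def intensity(y):
    """Relative brightness at height y on the plate (small-angle approx.)."""
    # The path difference between the two slits sets the phase difference;
    # cos^2 of half that phase gives the familiar bright/dark fringes.
    path_diff = slit_gap * y / screen_dist
    phase = 2 * math.pi * path_diff / wavelength
    return math.cos(phase / 2) ** 2

centre = intensity(0.0)  # dead centre: fully constructive, a bright band
# First dark band: where the path difference is half a wavelength.
first_dark = intensity(wavelength * screen_dist / (2 * slit_gap))
```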

When Young first proposed this at the Royal Society in 1800, he wasn't taken very seriously, largely because Newton's 'corpuscular' (particulate) theory was widely accepted. However, Young had observed this phenomenon in water (you can get a similar effect if you drop two pebbles into a pond side-by-side where the concentric rings overlap), and was confident that the experiment would show waves. He eventually performed the experiment in 1803.

Young was described in the title of a 2006 biography by Andrew Robinson as The Last Man Who Knew Everything. He really was a polymath, a medical doctor who made significant contributions to many areas of study, including physiology, solid mechanics, harmony, language, Egyptology (it was Young who noted the similarities between the Demotic script and the hieroglyphs on the Rosetta Stone, and that the hieroglyphs used phonetic spelling for foreign names) and others, and was professor of natural philosophy at the Royal Institution. I highly recommend Robinson's book, a beautifully-written and fitting tribute to one of the little-known greats of science.

Ultimately, after a series of anonymous attacks in the Edinburgh Review caused a publisher to back out of publishing his work, Young left physics behind to concentrate on medicine.

Anyway, the implication of the experiment is clear, and it seemed to nail shut the case for waves and overthrow Newton's particulate view. As discussed in the earlier article, there were other iterations of this test, and some interesting results came out of some versions of it. We discussed how Einstein, in his first paper of 1905, showed that it took a photon of a certain minimum frequency (energy) to knock an electron off a metallic strip: the photoelectric effect, the basis for solar energy. We covered the difference in outcome when we tried to extract certain information from the experiment, and the particulate implications, leaving us with the uncomfortable conclusion that both particle and wave were behaviours arising from our interactions, and that we were looking at something else. We also looked at polarisation, loosely the angle at which a quantum entity's wave-like component is waving. Now that we've laid the groundwork, let's look at how all this ties together.

We've already met two examples of macroscopic quantum systems that exhibit coherence, namely the laser and the interferometer. What we didn't discuss in either case is how the concept of interference enters quantum mechanics.

Because of its wave-like component, demonstrated by iterations of the double-slit experiment in which a single particle is sent through at a time, any quantum entity can be modelled as waves, including the particulate component. An interesting consequence of this arises when we talk about the concept of 'superposition'. We call the wave component the 'wavefunction', generally denoted psi (Ψ). The wavefunction encodes all the information we can obtain about how a quantum entity evolves over time, specifically the probability that the particle will be found in a given location when measured. For a free particle, the wavefunction is a sine wave, as we met above.

There's something implicit in the above which may not be immediately obvious. We've been dancing around what the wave component of a quantum entity actually represents in terms of probability, but we now need to make explicit a difference between how probabilities work in general and in quantum terms, and it's simply this: in QM, probabilities are manifest. They really happen. When we talk about the probability given by the wavefunction (strictly, the square of the modulus of the wavefunction), we're saying the particle is actually there, with that probability. Let's look at a typical wavefunction and see what that might mean. Here's a graphic representation of the wavefunction of a single electron.
 Image source: beforeitsnews.com
Each point in that wavefunction represents where the electron is, with a given probability. It exists in a superposition of all those places. In other words, the electron is in some sense literally there prior to measurement. As with all cases, a probability must fall somewhere between zero and one. Prior to measurement, all the probabilities described by the wavefunction sum to one, because the probability that the particle will be measured somewhere is precisely 1. When we measure or observe the particle, all probabilities other than the location at which we measure it fall to zero, and the probability at that location snaps to one. This is what's described in the jargon as 'wavefunction collapse'. This is how the disappearance of the interference pattern in the double-slit experiment is explained. When we observe the particle going through a given slit, we collapse the rest of the wavefunction, so that the behaviour that registers on the photographic plate reflects the particulate behaviour of the entity. In short, without observation, the wavefunction travels through both slits at the same time. With observation, the particle goes through the slit we observe it to be going through and the pattern we see is two bands, as we'd expect from particles.
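A toy model might make the bookkeeping clearer. Here's a sketch of a discretised 'wavefunction' over five positions (the amplitudes are made up purely for illustration): squared moduli give probabilities that sum to one, and 'measurement' collapses them so a single position gets probability one and the rest fall to zero:

```python
import random

# A toy discretised 'wavefunction': complex amplitudes over five positions.
amplitudes = [complex(0.1, 0.2), complex(0.4, 0.1), complex(0.5, 0.5),
              complex(0.3, -0.2), complex(0.1, -0.1)]

# The probability of finding the particle at each position is the squared
# modulus of its amplitude, normalised so the probabilities sum to one.
norm = sum(abs(a) ** 2 for a in amplitudes)
probabilities = [abs(a) ** 2 / norm for a in amplitudes]
# The particle must be found *somewhere*: the probabilities sum to 1.

# 'Measurement': one position is picked with those probabilities, and the
# wavefunction collapses: that position snaps to 1, all others fall to 0.
random.seed(0)
measured = random.choices(range(len(probabilities)),
                          weights=probabilities)[0]
collapsed = [1.0 if i == measured else 0.0
             for i in range(len(probabilities))]
```

This is only the bookkeeping, of course; nothing in the sketch explains *why* collapse happens, which is the hard part.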

When we try to explain this, we're forced to the conclusion that a particle on its way to the screen in the double-slit experiment takes every possible path, including those that take it across the universe first. Luckily, components of the wavefunction can interfere with each other, both destructively and constructively, with the end result that many of the lower-probability paths cancel each other out while the higher-probability paths are amplified, so we will tend to observe the higher-probability paths at all times.

I'd wanted, in this post, to also talk more in-depth about quantum coherence and decoherence, as well as some other aspects of QM, but I'm conscious of the fact that this is already a long read so I'll leave it there for now and pick those topics up in a future post.

This has been a tricky topic to cover, and I have little doubt that I've made some errors on the way, so any feedback is welcomed, as always.

See you next time.

*you might call it 'volume', but this is a misnomer, and a pet peeve of many audio professionals. This is a term used by hi-fi manufacturers, and stems from the idea that, at optimum listening level, turning the dial to a given volume setting should optimise the output level of a system for a room of given size, hence volume.