Was That It?

Last One To Leave, Turn Out The Lights.

This was originally written for a science-writing competition on the now-defunct Richard Dawkins forum. I've presented it here with only a bit of formatting and tidying, but otherwise exactly as originally published. The original date of publication was 19 February 2010.



________________________________________

How will the human species end? What will it look like?

Of late, there has been a good deal of focus on various stories concerning the end of the human species, and the 'destruction' of the planet. I want here to look at a couple of them, and then at what the science says about realistic scenarios for the end of our species.

It's a subject that has fascinated humans for as long as we have had stories. What will the ultimate fate of the species be?

The obvious place to begin, then, is with some of the myths that seem to be on everybody's lips at the moment: 2012.

Maya, 2012

According to some, the Mayan calendar ends on December 21st 2012, and this has been taken as a prophecy that this is when the world will end. Fortunately for us (he says with tongue firmly in cheek), this idea is not supported by any archaeological evidence at all. The date is significant in Mayan mythology, however, according to Maya expert and archaeoastronomer Anthony Aveni, of Colgate University, New York.

The Maya's relationship with time was on a scale almost unmatched until the advent of General Relativity. While other cultures the world over have constructed monuments that show a deep understanding of the passage of time, such as the neolithic monuments of Northern Europe, whose entrance passages are lit by the sunrise on specific days of the year, few cultures have so completely revolved around time as the Maya.

During the empire's heyday, the Maya invented what is called 'The Long Count', a cyclical calendar that began at what the Maya saw as the beginning of the last creation period, August 11th 3114 BCE, effectively fixing the beginning of their culture to that day, which they called 'day zero'. The 'Grand Cycle' ending on December 21st 2012 is said to last for 1,872,000 days, or 5,125.37 years. At this point, the cycle ends and another begins.[1]
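For those who want to check the arithmetic, at 365.2422 days to the tropical year:

\(\dfrac{1{,}872{,}000\ \text{days}}{365.2422\ \text{days/year}} \approx 5{,}125.37\ \text{years}\)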

Nibiru, 2012

There has, for some time, been a story circulating that a mysterious 'Planet X' is headed for Earth on a collision course. This planet, named Nibiru (probably from a term in Akkadian for 'crossing', but also appearing in ancient Babylonian astronomy as the highest point of the ecliptic, or Summer solstice) is allegedly going to hit Earth, or at least pass close enough to cause massive devastation, either by tidal means or by dragging bolides in its wake which will impact Earth.

This story seems to have its roots in work by Zecharia Sitchin, author of several books promoting 'theories' of human origins involving 'ancient astronauts'. According to Sitchin, Nibiru was the home of the Nephilim described in Genesis (he believes that they were also the Anunnaki of ancient Sumerian myth). Nibiru allegedly collided with Tiamat, another planet supposed to lie between the orbits of Mars and Jupiter, creating Earth, the asteroid belt, and the comets. [2]

Sitchin believes that Homo sapiens were genetically engineered by the Nephilim by crossing extraterrestrial genes with Homo erectus.

Needless to say, no such planet has ever been discovered. David Morrison, of NASA's 'Ask an Astrobiologist' project, has this to say:

You don’t need to take my word for it. Just use common sense. Have you seen Nibiru? In 2008 many websites said it would be visible to the naked eye in spring 2009. If a large planet or brown dwarf were headed for the inner solar system in 2012, it would already be tracked by hundreds of thousands of astronomers, professional and amateur, all over the world. Do you know any amateur astronomers who are watching it? Have you seen any photos or discussion of it in the big popular astronomy magazines like Sky & Telescope? Just think about it. No one could hide Nibiru if it existed. [3]

A favourite piece of 'evidence' for Nibiru is an image of V838 Mon. Of course, those of us who pay attention to reality know that it's a star with an expanding gas shell, some 20,000 light-years away.[4]

Polar Shift Scenario

This is a little myth that has been gathering some steam of late, and one that has also been connected to 2012 in some of the stories that have cropped up. It was first properly proposed by electrician Hugh Auchincloss Brown in 1948 (although there were earlier proponents in one form or another), who claimed that the accumulation of polar ice caused a repeated 'tipping' of the Earth's axis every 5,000 to 7,000 years or so. He argued that the 'wobble' of the Earth on its axis, combined with speculation that the crust slides on the mantle, meant that a shift was imminent. He even suggested that the polar ice caps should be broken up using nuclear explosions, in order to stem the accumulation of mass at the poles, which he thought could overbalance the Earth on its axis, shifting the axis of rotation.

This is not to be confused with geomagnetic reversal, or plate tectonics and other associated theories, but refers only to the sudden, cataclysmic shifting of the Earth's axis of rotation.


Another early proponent, Charles Hapgood, in his book 'Earth's Shifting Crust', suggested that the accumulation of ice at the poles could somehow 'overbalance' the crust, breaking it free from the mantle and causing it to slide into a new position, leaving the axis of rotation essentially the same but with the crust adopting a new orientation. The foreword to this book was written by Einstein, who was quite excited by Hapgood's writing:

I frequently receive communications from people who wish to consult me concerning their unpublished ideas. It goes without saying that these ideas are seldom possessed of scientific validity. The very first communication, however, that I received from Mr. Hapgood electrified me. His idea is original, of great simplicity, and - if it continues to prove itself - of great importance to everything that is related to the history of the earth's surface.

Of course, this was before the development of plate tectonics, and Einstein had concerns of his own:

Without a doubt the earth's crust is strong enough not to give way proportionately as the ice is deposited. The only doubtful assumption is that the earth's crust can be moved easily enough over the inner layers.

In 1998, retired engineer James Bowles proposed a mechanism, which he named 'rotational bending': a process that involved the gravitational pull of the Sun and Moon tugging at the crust at an oblique angle. This, he claimed, wore away at the underpinnings linking the crust to the mantle, generating a 'plastic' zone that allowed the crust to move in relation to the mantle, allowing the poles to migrate to the equator.[5]

The scientific evidence suggests that, although polar wander does occur[6], it occurs at a rate of less than 1 degree in 1 million years. The last conjectured rapid shift was around 200 million years ago.

Reality

I want to use the remainder of this essay to discuss two scenarios that have some basis in reality.

Pandemic

It has long been known that there are micro-organisms capable of ending it all for humanity. In the early 14th Century, bubonic plague swept across Europe, wiping out a third of the population. It originated with the marmots of the Gobi desert, passing to fleas, rats and eventually humans. The plague returned every generation or so until the 1700s, and is thought to have killed in the region of 200 million people in all, although there is some debate about whether all were killed by the same disease. It was almost certainly largely responsible for the collapse of the feudal system in Europe, at least to the degree that 'serfs' began to have some economic clout, owing to the massive shortage of labour left in its wake. [7]

In the last two centuries or so, medical science has brought us some relief from such micro-organisms, beginning with Jenner's early experiments with cowpox as vaccination (from the Latin vacca – cow) against smallpox in 1796, and leading up to the development of antibiotics, stemming from Fleming's discovery of penicillin in 1928. Our reliance on antibiotics these days is more than apparent, but it brings with it its own problems.

One of the major current threats to our health is the increasing resistance of micro-organisms to antibiotics. Let's begin with one that I suspect all of us have heard of, due to its recent coverage in world news: MRSA.

MRSA, or Methicillin-resistant Staphylococcus aureus, is a strain of Staphylococcus, a bacterium, that has developed resistance to conventional antibiotics. Initially, Staphylococcus was treated with Vancomycin, a glycopeptide antibiotic first isolated from a soil sample from the jungles of Borneo by E C Kornfeld, but Vancomycin was never used as a first-line treatment, because it had to be administered intravenously, and early, impure extractions of it were shown to be toxic to the ears and kidneys. It was subsequently supplanted by methicillin.

In recent years, strains of MRSA have emerged that have developed resistance even to Vancomycin, and in reality we are running out of ideas in dealing with them.[8],[9]

It is also fairly certain that there are as yet undiscovered micro-organisms that pose a significant threat, especially in the modern world, in which international travel is ubiquitous, and viruses can be spread across the planet in a matter of days.

Bolide Impact

This is going to feel a little like cheating, but the following is what actually inspired this essay.

Probably the single biggest threat to life on Earth, setting aside climate change and the aforementioned pandemic scenario, is that which is widely regarded as having brought the age of the dinosaurs to an end.

In 2005, US Congress mandated NASA to identify 90% of large Near Earth Asteroids (NEAs) by 2020. In a release yesterday, 18th February 2010, Alexis Madrigal, writing on behalf of the UK's Spaceguard Centre, discussed a report, Defending Planet Earth: Near-Earth Object Surveys and Hazard Mitigation Strategies, released by the National Research Council, in which it was suggested that this is not an attainable goal with current technology and funding. In reality, it is not actually known how many such objects are up there, and estimating the risk to humans is problematic. Michael A'Hearn, of the University of Maryland, writes:

Our estimates of the risk could easily be wrong by a factor of two or three. I don't think they are wrong by a factor of 10, but the boundaries, again, haven't been explored. [10]

He also discusses the problems of understanding the physics of an impact, saying:

The first thing we need to do is understand what the hazard is. That's partly finding them and partly understanding what their effect is. We have to understand in more detail how we'd mitigate against them.
Recently, a discussion arose in which a particular poster speculated about which traits granted by evolution would serve best in the event that four (count them) Texas-sized bolides were heading for Earth from different directions. He was suggesting that intelligence would grant the best strategies for survival. Given that mathematics is not my strong point, I asked Calilasseia if he would mind doing some back-of-the-envelope calculations with regard to the thermodynamic exchanges involved in such an impact, and he has kindly given permission to use them here.


The volume of a bolide the size of Texas is given by:

\(V = \dfrac{4}{3}\pi r^3\) where \(r = 622,000\ m\)

This gives us a value for the volume of our bolide of \(1.008 \times 10^{18}\ m^3\).

Now, to make life simpler, let's assume that we're dealing with an iron meteorite. Iron has a density of \(7,873\ kg/m^3\) (source: the properties of the elements table from Kaye & Laby's Tables of Physical & Chemical Constants), therefore an iron meteorite will have a mass of \(7.936 \times 10^{21}\ kg\), to a reasonable level of approximation. A more precise calculation would take into account that an iron meteorite actually contains around 6.7% nickel (source: the abundances of the elements table from the same source as above), and since nickel is denser than iron, leaving it out means that we're underestimating the mass, and erring on the conservative side again. By comparison, the mass of the Earth is \(5.9736 \times 10^{24}\ kg\), so we're dealing with a body that is \(0.13\%\) of the mass of the Earth, which means that already, it's a significant mass.

Bolides typically move through space at speeds of around \(20\ \mathrm{km\ s^{-1}}\), and so we can calculate the kinetic energy of such a bolide once we know its mass, courtesy of \(E = \frac{1}{2}mv^2\). Feeding \(m = 7.936 \times 10^{21}\ kg\) and \(v = 20,000\ \mathrm{m\ s^{-1}}\) into this formula, we arrive at a value for the kinetic energy of \(1.587 \times 10^{30}\ J\).

Now, if a bolide of this mass impacted the Earth at that speed (and atmospheric braking wouldn't do much to slow something this massive), a significant fraction of that energy would be converted to heat upon its being brought to a halt in an inelastic collision. Even if we assume, conservatively, that only 50% of that energy is converted to heat as it impacts the Earth and comes to a halt, that still leaves us with \(7.935 \times 10^{29}\ J\) to play with. By comparison, the Tsar Bomba, the largest thermonuclear weapon ever tested in the atmosphere by humans, was puny - it had a yield of 50 megatons, or \(2.1 \times 10^{17}\ J\). Therefore, if a bolide the size of Texas impacts the Earth, it will yield as much heat energy as 3.779 trillion Tsar Bomba H-bombs.

That amount of heat energy is going to have some significant effects, to put it mildly. Let's assume for the sake of argument, again a radical simplification, that all the heat energy is dumped into the bolide mass itself, prior to transfer to the surroundings. Iron has a specific heat capacity by mass of \(442\ \mathrm{J\ kg^{-1}\ K^{-1}}\), which is the amount of heat energy required to raise the temperature of 1 kg of iron by 1 kelvin. So, to derive the temperature change \(\Delta T\), given a specific heat capacity \(C\), a mass \(m\) and an energy input \(E\), we have:

\(\Delta T = \dfrac{E}{Cm}\)

Feeding our data into the formula above, we find that the temperature change of the bolide will be approximately 225,000 K.[11]
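
For anyone who wants to check the figures, here's a minimal Python sketch reproducing the arithmetic above, using exactly the constants and simplifying assumptions from the text:

```python
import math

# Assumptions as in the text: a spherical iron bolide of radius 622 km,
# pure-iron density, impact speed 20 km/s, 50% of the kinetic energy
# converted to heat, all of it dumped into the bolide itself.
r = 622_000.0          # bolide radius, m (Texas-sized)
rho_iron = 7_873.0     # density of iron, kg/m^3
v = 20_000.0           # impact speed, m/s
c_iron = 442.0         # specific heat capacity of iron, J/(kg K)
tsar_bomba = 2.1e17    # yield of the Tsar Bomba, J (50 megatons)

volume = (4.0 / 3.0) * math.pi * r**3    # ~1.008e18 m^3
mass = rho_iron * volume                 # ~7.936e21 kg
kinetic_energy = 0.5 * mass * v**2       # ~1.587e30 J
heat = 0.5 * kinetic_energy              # ~7.935e29 J
delta_t = heat / (c_iron * mass)         # ~2.26e5 K

print(f"heat released: {heat:.3e} J ({heat / tsar_bomba:.3e} Tsar Bombas)")
print(f"temperature change: {delta_t:.3e} K")
```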

Obviously, this is an extreme example, and just a bit of fun here, but the risk is very real. Cali sums up with the following, which seems a good place to leave this topic:

I'd say that if just one bolide the size of Texas hits Planet Earth, we are, not to put too fine a point on it, fucked.

Refs:
[1] Anthony Aveni, The End of Time: The Maya Mystery of 2012 (2009)
[2] http://en.wikipedia.org/wiki/Zecharia_Sitchin
[3] http://astrobiology.nasa.gov/ask-an-ast ... nd-answers
[4] http://en.wikipedia.org/wiki/V838_Mon
[5] http://www.atlantisrising.com/backissue ... pgood.html
[6] http://en.wikipedia.org/wiki/True_polar_wander
[7] Simon Schama, A History of Britain
[8] http://en.wikipedia.org/wiki/Methicilli ... cus_aureus
[9] http://en.wikipedia.org/wiki/Vancomycin
[10] http://www.spaceguarduk.com/news/229-bi ... telescopes
[11] Calilasseia, back-of-the-envelope impact calculations, reproduced with permission

Hack's Top Debate Tips

In the spirit of my newly-launched FAQ page, I thought it might be useful to start compiling some tips for debating sceptics, because it really does get tedious dealing with the same opening gambits and discursive tactics time and time again. What it results in, more often than not, is the sceptic getting pissed off before anything of substance is even discussed. In that light, here is my non-exhaustive - and subject to revision and update - list of top tips for debating sceptics, focussing specifically on the things one might do that may seem like productive tactics but really aren't. They're organised in no particular order, exactly as they occur to me as I write. I'll also add links at the bottom to posts that deal with any of the issues raised here.

So, let's dive in and deal with some depressingly common things that all sceptics encounter in discussion with believers of all stripes:

1. Don't tell me you used to think as I do.

This is, frankly, a bloody awful move. It's among the most common, and it's entirely unconvincing to anybody who's spent thirty seconds thinking about it. The thing is, of course, it's entirely irrelevant. It's difficult not to see it as a tactical move erected specifically to lend an argument some weight. What it actually does is completely the opposite, not least because it's entirely transparent.

The underlying fallacy being committed in this tactic is the genetic fallacy, specifically the appeal to authority. What sets this particular tactic apart is that it's an attempt to set oneself up as the authority in question, the assertion being 'look, I used to think exactly as you did, but I saw the light'. 

When I see 'former atheist' in somebody's bio, for example, my first thought is most definitely not 'well, I'd better pay attention to this person, because they clearly know something about atheism that I don't'. No, my first thought is that the arguer either had poor reasons for their former position or that it's simply a fabrication.

Far from lending credibility, this robs credibility.

This is also true of asserting that this or that great thinker used to share my position and had their mind changed. That assertion tells me nothing whatsoever about whether or not they were justified in changing their minds. It wouldn't matter to me if Richard Dawkins himself knocked on my door and vouchsafed to me that he'd been wrong, and that God was real, just as it wouldn't matter to me if Charles Darwin knocked on my door and announced that he'd renounced evolution via natural selection. 

The simple fact is that the list of people whose belief on any given topic is an indicator of the veracity of said belief is entirely unpopulated, ever to remain so. Truth is independent of belief, and only one of these has any value in epistemic terms. What matters is whether a belief can be demonstrated to be in accord with observation, and whether it can reasonably be tested.

2. Don't quote your holy text.

As discursive tactics go, this is among the silliest.

Look, I get it. You think that the [insert name of holy book here] is the best possible evidence that your mythology is true. The problem is that you have it entirely wrong. Your holy book isn't the evidence for the claim, it IS the claim.

The only situation in which quoting the text fulfils any evidential obligation is when the claim is that the text says something. The text does not and cannot stand as evidence that the text is correct. This is, once again, the genetic fallacy writ large, as well as being horribly circular.

Quoting your holy text as a means of support for your claim is exactly like flinging turds at a passing stranger and expecting them to thank you for it. Rather than making even the tiniest dent in your onus probandi, it only makes it apparent that you don't understand the nature of evidence. In fact, when you're arguing for an omnipotent, omniscient entity (setting aside the logical absurdity of such an entity), your holy text actually constitutes evidence against the veridical value of your claims regarding the existence of said entity.

3. Don't preach

Preaching at those who don't share your views isn't just silly, it's offensive. If the content of your mythology were convincing, we'd already believe as you do. If the very best you can do is to recapitulate the contents of your mythology, then you really shouldn't be attempting to debate sceptics. 

4. Know the content of your mythology

In light of the previous two points, this might seem a little odd but, in fact, it supports them. 

One of the most frustrating things that sceptics encounter with alarming frequency is being confronted with people who don't actually know what they believe. One of the reasons that many sceptics find this particularly frustrating is that, in the vast majority of cases, the person you're talking to used to believe as you do in some measure and, in many cases, it was exactly taking the time to read the text and find out exactly what it said that led to the shedding of those beliefs.

In my case, although I never believed, I did try. That trying manifested as dedicating myself to reading the bible and finding out what it actually said on a range of issues, and finding that it didn't match up to my moral or epistemological understanding.

In my experience, the sceptic is often considerably more aware of the content of the mythology and the holy books than the believer. This is borne out by many surveys and non-rigorous studies. My knowledge of the bible, for example, is demonstrably more complete than that of the majority of apologists I encounter, and I know sceptics whose understanding puts mine in the shade.

5. Don't threaten me with hell.

It's depressing how often this gets wheeled out. It often gets erected out of frustration on the part of the believer that their 'evidence' isn't being accepted.

It's a truly spectacular bit of cognitive dissonance, completely overlooking the fact that, if I don't find the entity on which this fate is predicated credible, I'm hardly going to find the fate itself credible.

Not only is threatening me with the content of your mythology sophomoric and incredibly insulting, it's as effective in persuading me of the veracity of your claims as pissing in my pocket and telling me it's raining.

Threatening an atheist with hell is exactly like threatening a clove of garlic with Dracula.

6. Don't make claims you can't support.

This is the big one, in many respects. Much of what's contained in this post touches on this in some way, but this is slightly more explicit. 

Although it's treated as a logical fallacy, the shifting of the burden of proof is actually more of a discursive principle. What it is, in essence, is the notion that unless you can demonstrate that what you say is actually true, not only am I under no obligation to accept it, I'm actually obliged as a sceptic to dismiss your claim on that basis alone, and no further justification is required. Where it actually becomes a fallacy is where it's asserted that something is true on the basis that the sceptic can't prove that it isn't.

A good sceptic always operates from the position of the null hypothesis. That is, we operate on the basis that the negation is true until such time as the affirmative is properly justified. 

As a related aside, it's well worth noting that 'you can google it' or 'research it yourself' doesn't constitute support of any kind. Expecting somebody else to do the research to support your claim is exactly as effective in making your point as simply asserting it to be true, except now you've pissed somebody off as well. All other considerations aside, if you can assert something as knowledge, you know where to find the evidence that supports your knowledge. Expecting somebody else to go and find it is not only lazy, it's insulting.

7. Don't conflate faith and knowledge

Regardless of how strongly you think you know something, you can't honestly assert it as knowledge unless you can demonstrate it to be true. Faith is not knowledge. It doesn't matter how badly you want something to be true, or how much faith you have in the truth of a proposition, the role of the sceptic is to dismiss all claims until their truth is demonstrated or evidence sufficient to warrant acceptance is forthcoming.

8. Separate yourself from the ideas you present

It's extremely common for people to feel insulted when the ideas that they present are challenged. This is a mistake, and will tend to stifle discussion. 
If you feel under attack when the attack is on your ideas, not only are you committing a glaring logical fallacy (the category error), it's generally a reasonable sign that your confidence in your position is somewhat lacking.

I attack ideas without mercy or quarter, and most sceptics do exactly the same. I try very hard, though, not to attack people at all. I'm not always successful in avoiding this, especially when the ideas being presented are ones that I see as particularly dangerous or toxic to the society in which I and those I love must live, but I do try. Sometimes, frustration will get the better of me (I'm only human, after all) and I'll call you an idiot but, in reality, this is just a more forceful way of saying that what you're presenting is idiotic.

Ultimately, ideas are disposable entities, and bad ideas exist only to be disposed of. If I showed everybody's ideas and beliefs the respect that their holders think they deserve, not only would I be treasonous to my own core principles, I wouldn't be showing those people the respect they most definitely do deserve.

9. Pay attention to the argument

Among the many pieces of advice I give to people when they enter into discussions with me, always driven by obvious indications that this isn't happening, is to 'engage in the discussion with the hackenslash on the forum, as opposed to the hackenslash that only exists in your head'. 

It's worth noting that this applies equally to all participants in discourse, and is not restricted to believers. All too often, people only read an argument up to the point where they think they've found something they can object to. This is a mistake, because it's far too easy to miss the critical context that renders the objection trivial.

Worse still, when you capture the beginning of somebody's argument, and then assume you know how the rest of the argument will go, you run the risk of getting it entirely wrong and raising objections to an argument that hasn't been made. It very quickly becomes obvious when this has occurred. I'll link to a post at the bottom in which I respond to an instance of somebody making precisely that mistake, with predictable results.

There's a well understood distinction in psychology, taught in decent customer service training programmes the world over, between different kinds of listening, and it's an important one to consider here. Often, when we're listening to somebody speak, what we're actually doing is picking up only on keywords while we formulate our response. This is known in the jargon as 'passive listening'. This is opposed to 'active listening' in which all our attention is on the speaker and we're absorbing what they say and parsing it for semantic content. If we're doing the latter, we won't miss any learning opportunities, which brings me to:

10. Don't set out to win the debate

This is one of the most common mistakes made in discussion. This is again not restricted to believers, and this is advice that everybody should take note of, because it's precisely the reason that so many discussions end up fruitless, often before they've even entered any interesting territory.

The target outcome of any discussion should be learning. If nobody learns anything in a discussion, then nobody has won anything. If somebody - anybody - learns something, then everybody wins.

11. Learn about logical fallacies

One of the things that you're most likely to encounter in your discussion with a sceptic is the notion of a logical fallacy. A logical fallacy is an instance of drawing a conclusion that is unjustified by the argument erected in support of it.

Fallacies come in two broad types. The first of these is a formal fallacy. This is a fallacy in which the structure of an argument is such that the conclusion doesn't properly follow from the premises. 
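For the logic-minded, here's one way to make 'doesn't properly follow' concrete. A valid argument form has no truth assignment under which all the premises hold and the conclusion fails, so a brute-force search for such an assignment exposes an invalid form. A minimal Python sketch, using the classic formal fallacy of affirming the consequent ('if p then q; q; therefore p'):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: 'a implies b' is false only when a is true and b is false.
    return (not a) or b

# Look for assignments where the premises are true but the conclusion is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)  # [(False, True)]: premises true, conclusion false, so the form is invalid
```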

The second is an informal fallacy. This is a fallacy wherein the structure of the argument is fine, but where there is some other reason that the conclusion cannot be taken to have been supported. An example would be something like the fallacy of composition, in which it's asserted that the properties of the constituents of something necessarily translate to the whole having the same properties. For example, I might say that the bricks in my front wall all weigh 3.5 kilograms, therefore the wall weighs 3.5 kilograms. This is obviously nonsense. 

I've heard people argue that a fallacy being informal means that it can be dismissed, but the simple fact is that any line of reasoning that can, in any situation, lead to a faulty conclusion is a line of reasoning that can't be relied upon to arrive at a true conclusion.

Now, it isn't uncommon for apologists to assert that the word 'fallacy' is just a buzzword that allows sceptics to dismiss arguments, and it's certainly true that simply naming a fallacy doesn't demonstrate anything, but where a fallacy is committed and correctly exposed and explained, it's important that you learn from it.

I'll link some sources at the bottom dealing with fallacies.

12. Be prepared to admit being wrong

What it says on the tin. If you aren't prepared to be wrong, you're doing it wrong, thus you'll always be wrong in some measure.

13. Know the arguments against your position

This is so basic, it really shouldn't require any explanation. However, and depressingly, it appears that many don't realise the importance of this. 

If you hold a position - any position, whether in faith or not - you should be able to justify your position. One of the most important elements of being able to justify your position is understanding what the objections are to your position and being able to address them. It's why, for example, sceptics of Christianity know the bible so well. It's about preparation, and this really is debate preparation 101. If you enter into a debate with a sceptic without having spent any time studying the counters to your position, I'm confident that there will be only one outcome, namely a complete waste of everybody's time.

Of course, you may encounter novel objections, and there's nothing you can do except attempt to address them on the fly, but you will always be better placed to have a meaningful discussion if you've prepared by understanding all the stock arguments against your position. 

It's a common truism that failing to prepare is exactly the same as preparing to fail, but nowhere is this more starkly realised than in a debate situation in which you haven't learned some of the common arguments against the position you're defending.

14. Don't tell me to 'let God in'

It's pretty common to suggest to a sceptic that if they let God in, the truth will be revealed. This is precisely equivalent to saying 'if you believe, you'll believe', and is predicated on the notion of belief being a choice.

This is a silly notion. I could no more choose to believe that which is not robustly supported by evidence than I could choose to sing Nessun Dorma from my rectal sphincter. 

The simple fact is that, if the evidence were compelling, we'd already believe. We have standards of evidence because having such standards is a demonstrably successful strategy for attaining knowledge about the world, and the notion of a deity has singularly failed to meet those standards.


15. Stick to what I'm saying

Possibly the worst mistake you can make is to assume that I'm going to defend a position put forward by somebody else. Sceptics aren't a homogeneous group. We don't all have the same arguments or the same understanding. When you assume a position your opponent hasn't explicitly stated, you end up arguing with yourself, and you'll quickly come unstuck.

16. Don't tell me you'll pray for me

Few things are more likely to get you consigned to the 'not discussing honestly' bin than telling me you'll pray for me. All else aside, prayer does nothing for anybody but yourself. In the only vaguely rigorous study ever conducted of intercessory prayer, it was shown to have no positive effects beyond that of placebo. In fact, this study, an analysis of the effects of intercessory prayer in patients undergoing heart surgery, to the extent that it showed any effects at all, showed that the group that knew it was being prayed for suffered an even greater incidence of post-surgery complications, probably a function of performance anxiety.

In reality, the only positive effect of such activity is on the person doing the praying, in that it has the effect of making them feel like they've done something useful, absolving them of any responsibility in terms of actually doing something useful.

My stock response to the statement 'I'll pray for you' is 'I'll think for you', which is considerably more effective.
________________________________________________

Further reading:

On Whose Authority..? A treatment of the broad class of fallacies known collectively as the 'genetic fallacy'.
There's This Book A treatment of the issues inherent in books as a source regarding what the alleged controller of the universe wants us to know.
Onus Probandi, Assertionism and Peer-review A treatment of the shifting of the burden of proof and the role of peer-review in science.
Deduction, Induction, Abduction and Fallacy A broad but summary treatment of how logic works, and specifically how it's used in the sciences, along with a brief exposition of some common fallacies.
Patterns and the Inertia of Ideas Why it can be difficult to deal with challenges to deeply-held ideas.
The Map is not the Terrain Some common pitfalls when dealing with natural language.
What's In a Name? Some distinctions between scientific nomenclature and vernacular terms.
7 Reasons Why Apologists Should Stop Trying To Logic A response to an attempt at debunking one of my earlier posts. Very instructive in terms of the importance of paying attention to what's actually being argued.

Some external sources:

Taxonomy of Logical Fallacies Probably the best source on the web for logical fallacies. A pretty comprehensive table showing the relationships between fallacies and with some great expository examples.
Logical Fallacies Another great resource on logical fallacies. This isn't nearly as comprehensive, but much more approachable for the novice.



Public Service Announcements

Regular readers on social media will be aware that I occasionally post little bits of information, usually reflecting some commonly-held but incorrect notion regarding a principle of logic or some popular misconception about what science says on a given topic. I thought I might as well collect them here.

Memes, Demotivationals and Funnies.

This is just what it says on the tin. Some original crap from over the years.

Why Are There Still Apes?

This is probably the most oft-asked question in creationist apologetics and, rather than posing any problem for evolutionary theory, it neatly demonstrates that some questions are so silly they can only arise from ignorance of the very theory for which they attempt to pose some sort of gotcha.

The problem is that it indicates a horrible misunderstanding of just what evolutionary theory postulates.

Speciation doesn't come in a single flavour. In particular, there are broadly three modes of speciation. The first of these occurs when a single population diverges into multiple populations, often because of some environmental barrier becoming manifest.

For example, you could have something as simple as, say, a peninsula becoming an island due to changes in sea level, and then the peninsula being partially covered so that it looks more like an archipelago. Given sufficient reduction in gene flow, the populations can manifest some quite different mutations. Once gene flow is completely cut off by such separation, speciation occurs. This is allopatric speciation.

The second mode is when a daughter population diverges genetically from its ancestral population, to the degree that, were you to bring members of the two populations together, they would not be interfertile. Where the diverging population is a small one at the periphery of the ancestral range, this is known as peripatric speciation.

The third is very rare, but it occurs when a single population diverges while not subject to any environmental barriers. Generally, the diverging populations' ranges will be adjacent, and models predict that there will often be hybrids at the boundaries. This is parapatric speciation.

The problem with the question as posed is simply that it rests entirely on the assumption that speciation consumes the parent population - that the entire ancestral species transforms into the daughter species - which is not a prediction of evolutionary theory. Speciation is overwhelmingly a branching process, and the parent population can happily persist alongside its descendants.

We should also note for completeness that the question is also incoherent for another reason. It treats the continued existence of a set as being somehow at odds with the existence of one of its subsets. This is most easily rebutted by asking, "If ducks evolved from birds, why are there still birds?" or "If Americans came from Europe, why are there still Europeans?"

It's a silly question.



More on evolution:
Has Evolution Been Proven?

Irreducible Complexity and Evolution
Evolution and Entropy Revisited
Who Put it There? Information in DNA

Did You See That?!!

The observer, when he seems to himself to be observing a stone, is really, if physics is to be believed, observing the effects of the stone upon himself. - Bertrand Russell

In this outing, I want to address one of the most ubiquitous and egregious misrepresentations of science we encounter. It's the starting point for all manner of metaphysical bullshit, and the anchor for reams of shoddy thought and, worse, shoddy science journalism. It's an effect that manifests in all experiments dealing with quantum phenomena, and it seems to suggest that the world doesn't exist when we're not looking at it: the observer effect.

Note the operative word there: 'seems'. This word is a pernicious little beastie, and gives us no end of trouble when we think about what it means. The most famous soliloquy in all of Shakespeare's witterings, that of the eponymous Danish prince, has one interpretation that revolves around this word, the difference between seeming and being. We looked at such differences in a previous outing on semantics and the pitfalls of natural language in The Map is Not the Terrain, wherein we learned about how words can trip us up if we're not careful, and this is the core of today's topic.

First, though, let's talk about the popular misconceptions about what the observer effect is. 

As we learned in The Certainty of Uncertainty, there are some really counter-intuitive effects that arise from Heisenberg's Uncertainty Principle. The funny thing is that the observer effect really isn't one of them, once you understand what's going on. That said, the misunderstanding is an easy one to come away with if you don't have a grasp of the underlying physics, and it's one that's promulgated even by nominally scientific popular publications; see, for example, the New Scientist article entitled Consciously quantum: How you make everything real.


One of those counter-intuitive effects is one we've covered in various places on this blog, and it goes under the name of 'wave-particle duality'. It's often taken to mean that the things we think of as particles aren't only particles, but are also waves - another misconception we've dealt with previously. What it actually is is the simple principle that the things we think of as particles are neither particles nor waves; they're something else, something that can only be named by naming the entities themselves. In short, an electron, for example, is not a particle, nor a wave, nor both: it's an electron.

This highlights one of the main reasons that Quantum Mechanics is so counter-intuitive. When we want to explain something, we do so by comparing it with something we understand. In the case of so-called particles, there really isn't anything we can compare them with that will aid our understanding. They simply don't behave like localised chunks, so comparing them to chunky things is misleading. They also don't behave like distributed waves, so comparing them to waves is misleading. When we interact with them in one way, we see a behaviour that we associate with waves. When we interact with them in another way, we see a behaviour that we associate with particles.

This is, in a nutshell, the source of all the misunderstanding of the observer effect, and the real problem in there is the word "we". 

Let's have a look at what's really going on, and see if we can tease out why this view is wrong. For simplicity, from here on, I'm going to talk only about a single electron, because this will allow me to avoid all the pitfalls inherent in talking about particles and waves. Where I talk about particles or waves (which I may or may not do - simply treat this as a caveat), I'm talking about the wave-like or particulate aspects of the behaviour of our electron.

There's a set of relationships in quantum theory known as the de Broglie relations. It's worth getting the chalk out here, so that we can grasp them completely. We don't actually need them to illustrate the point, but it's useful for completeness because they will drill down to the reality of the observer effect, and because we haven't covered this in any previous outings on Quantum Mechanics. Let's start with a wave:

Every wave has associated with it a wavelength, which we denote lambda (\(\lambda\)). It also has a linear frequency nu (\(\nu\)). The latter, for us audio geeks, corresponds to hertz (Hz), the number of times a wave cycle completes per second; but in quantum theory we have to deal with time and space, so we can't rely on time-dependent notation alone.

We also have an angular frequency omega (\(\omega\)), which is found by \(2\pi\nu\). The other important terms we need are the wave number \(k\), which is \(\dfrac{2\pi}{\lambda}\), and Planck's constant \(h\), which has a numerical value of \(6.626 \times 10^{-34}\; J\cdot s\) (joule-seconds; dimensionally, energy multiplied by time). Planck's constant will always be reduced from here on, because it means we don't have to keep dividing by \(2\pi\), so we use the reduced Planck constant \(\hbar\), which is \(\dfrac{h}{2\pi}\).

So, now we have our terms. There are two other terms that we'll be using, but they're very familiar to us, as we encounter them often. They are the energy \(E\) and the momentum \(p\).

Regular readers will be wondering when our old friend Einstein is going to pop up in this post, so we might as well get it out of the way.

In one of his annus mirabilis papers from 1905, Einstein gave us the first inkling of the duality of light. We already knew from Young's double-slit experiment that light behaved like a wave, because it showed the characteristic interference pattern. Einstein, in his work on the photoelectric effect that earned him the Nobel prize, showed that light must come in packets, or 'quanta'. In particular, he showed that an electron couldn't be dislodged from a metal surface unless the light striking it had a linear frequency \(\nu\) higher than a certain minimum value, no matter the intensity of the light shone on it (the number of photons thrown at it), and that linear frequency is related to energy via a proportionality constant, the aforementioned Planck constant. In other words, the energy of a photon is given by the equation \(E=h\nu\). The momentum of a photon is given by the equation \(p=\dfrac{h}{\lambda}\).
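As a quick worked example (taking caesium, whose work function is roughly 2.1 eV, or about \(3.4 \times 10^{-19}\ J\)):

\(\nu_{min} = \dfrac{W}{h} \approx \dfrac{3.4 \times 10^{-19}\ J}{6.626 \times 10^{-34}\ J\cdot s} \approx 5.1 \times 10^{14}\ Hz\)

That corresponds to a wavelength of around 590 nm, so you can flood caesium with red light all day and dislodge nothing, while even a feeble violet glow kicks electrons loose.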

When we put all of this together, we get an interesting set of relationships which, because we have both energy and momentum, mean that we have full wave-particle duality - that photons have both wave-like and particulate behaviours.

This relationship was taken a step further by French physicist Louis de Broglie, who postulated that, if these relationships hold for light, and given that light displayed wave-like and particulate behaviour, perhaps other particles might also have wavelike properties, and that therefore the energy relationship \(E=h\nu\) might hold for them. This being the case, he conjectured that the momentum relationship might also hold which, rearranged, gives us \(\lambda=\dfrac{h}{p}\).

Thus, rearranging all of this and using the reduced Planck constant, we get the following generalised relations for ALL particles:

\(E=\hbar\omega\)

\(p=\hbar k\)
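
To put some numbers on these relations, here's a minimal Python sketch for an illustrative case: a free electron moving at 1% of the speed of light (slow enough that we can ignore relativistic corrections):

```python
import math

h = 6.626e-34     # Planck's constant, J s
hbar = h / (2 * math.pi)
m_e = 9.109e-31   # electron mass, kg
v = 3.0e6         # illustrative electron speed, m/s (~1% of c)

p = m_e * v              # momentum, kg m/s
lam = h / p              # de Broglie wavelength: lambda = h/p
k = 2 * math.pi / lam    # wave number

print(f"wavelength: {lam:.3e} m")                   # ~2.4e-10 m: atomic scale
print(f"p == hbar*k? {math.isclose(p, hbar * k)}")  # the momentum relation holds
```

An electron at that speed has a wavelength of about a quarter of a nanometre, which is why electron microscopes can resolve detail that no optical microscope ever could.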


Now, we really didn't need all that to get to the nitty-gritty of the observer effect, but it does put some flesh on the topic. The really important thing to take from all of this is the interdependent relationships between energy, wavelength and frequency. In summary, the greater the energy, the higher the frequency and the shorter the wavelength. This is about to become important when we look at what happens during observation.


There are lots of ways we could illustrate this, but I haven't found anything as elegant as the following. 

Let's take a little diversion, and imagine a very popular executive toy, the pinart puzzle: a dense bed of sliding pins that holds a relief impression of whatever is pressed into it, be it a face or a hand.

Now let's imagine trying to work out what one of these toys might be displaying by throwing things at it and registering the interactions. This might seem a bit of a strange way to go about 'seeing' it, but this is actually precisely what's happening when we see anything, namely that we register the photons bouncing off things.

Let's start with a football (being non-American, when I use the word 'football' I'm referring to a ball that you strike with your feet, as opposed to the hand-egg that Americans use the word for). Throw a football at the pinart toy and register the interaction. Yes, I'm aware that this is silly, but the silliness highlights something important. You simply wouldn't expect to get any information about what a pinart is displaying if you throw a football at it.

Let's try something smaller like a tennis ball. You might get a little information, but it isn't going to be much. You should be able to see where this is going by now. You can work your way down in scale using smaller and smaller balls, and you're going to get very little information about what the pinart is displaying until you get down toward the scale of the pins themselves. Go even smaller, and you'll resolve more and more detail until, once you get well below the scale of the pins, you can see the bevels on the edges of the individual pins. Smaller still, down to the scale of particles, and you can resolve the microscopic detail of the surface of the individual pin.

Once we grasp this, it becomes pretty obvious that there's a direct relationship between the scale of our 'observer' and the scale of the thing being observed. We simply can't resolve any detail unless the thing we're throwing at the subject is on a commensurate scale.


Let's have a look at a double slit experiment. For a complete explanation of this experiment, see The Certainty of Uncertainty. We're going to concern ourselves not with the experiment itself, but with what happens when we 'observe' what's going on. The experimental setup is exactly the same as always; the only difference is that we now station an observer, conventionally drawn as an eye, at the slits, representing a conscious watcher. Which slit does the electron go through?

The problem is clear. We can't actually resolve an electron with our eyes. We have to interact with the electron in some way in order to see which slit it went through. This is where what we learned above becomes relevant. In order to look at the electron going through the slit, we have to throw something at it. In other words, we have to inject an observer into the picture, and if we hit it with anything, we're going to impact its travel. 

Can we be really cunning and just tap it lightly? Maybe we can just give it a tiny nudge with a really low energy photon, and that will tell us which slit it went through? Let's try it.

And we can immediately see the problem. Because the wavelength is inversely proportional to the energy, any low-energy photon is going to have such a large wavelength as to make it impossible to resolve an individual electron going through an individual slit. In order to get the required resolution, we have to throw a short wavelength photon at it, which means high energy, and higher energy means giving the electron a bigger kick.
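
We can put numbers on this trade-off too. A sketch, using the same illustrative electron as above and asking what fraction of the electron's own momentum a probing photon carries at various wavelengths:

```python
h = 6.626e-34      # Planck's constant, J s
m_e = 9.109e-31    # electron mass, kg
v = 3.0e6          # same illustrative electron speed as above, m/s
p_electron = m_e * v

for lam in (1e-6, 1e-8, 2.4e-10):   # probing photon wavelengths, m
    p_photon = h / lam              # photon momentum: p = h/lambda
    print(f"lambda = {lam:.1e} m -> kick = "
          f"{p_photon / p_electron:.4%} of the electron's momentum")
```

A photon with a micrometre wavelength nudges the electron by a few hundredths of a percent, but has no hope of resolving which slit it went through; a photon short enough to resolve the electron's own wavelength delivers a kick comparable to the electron's entire momentum, obliterating the very behaviour we were trying to observe.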

There's simply no way around this. This relationship lies at the heart of the observer effect, and underlies the vast majority of what we think of as 'quantum weirdness'. 

It's also worth noting that this is precisely why we have to build more and more powerful particle accelerators: we need higher energies to probe smaller and smaller scales, all because of this inverse relationship between wavelength and energy.

There are other ways to find out which slit a particle has gone through, but all of them eventually lead to the same problem, namely that we have to change the particle in some way in order to make it clear which slit it went through, whether that involves smacking it with a photon or forcing its spin around a particular axis. Every single way that we can get 'which path' information has ultimately the same effect. This is the observer effect. It doesn't for a second mean that an observer has to be conscious and, as we've seen here, the observer in the vast majority of cases is another particle.

Sleep easy. The world still exists when nobody's looking at it. Indeed, looking at it makes no difference whatsoever; interacting with it does. The wavefunction of every particle making up the world is collapsed by the interactions of every other particle.

Thanks for reading.


Further reading:

Give Us a Wave! A treatment of waves, coherence and quantum theory.