Guest Post: Linnaeus, His Taxonomy, And Its Subversive Underpinnings

There is no more thunderous presage of doom than the flutter of tiny wings - hackenslash

A very good friend of ours once said that you've got to be a real asshole to quote yourself* but, sometimes, the best way to express a notion is via repetition. Besides, let's face it, I'm a bit of an asshole.

Now, given the usual output of this little corner of the interwebs, one might think that little snippet to be some pithy commentary on chaos theory, a theory dealing with non-linear systems - systems that display sensitivity to initial conditions. This theory is most famously mangled in that scene from Jurassic Park in which the 'butterfly effect' is illustrated by the direction a drop of water takes as it cascades down the hand in a series of trials.

However, when I originally wrote that, it was an expression of admiration for our guest today. Calilasseia is a bit like the bogeyman, the monster that creationist parents tell their children about when they misbehave. Formerly moderator of the creationism section of the now-defunct Richard Dawkins forum, Cali is known to unleash tactical discursive ordnance on a scale for which creation scientists are furiously working to devise a metric, in a desperate attempt to find some way to counter even a tiny portion of it. His depth of knowledge and understanding of a huge range of topics makes that a somewhat quixotic enterprise.

Regular readers will recall having met him previously, in his spectacular and utterly comprehensive demolition of creationist dreck concerning radiometric dating which, I'm sure you'll agree, is a thing of beauty.

This one is a particular treat, as it's one I asked for specifically. Some will recall an incident - almost two years ago now - in which the Natural Environment Research Council requested submissions from the public for names for its brand-new £200m research vessel, and my reasoned and probably only serious defence of keeping the name that won the public vote, the RRS Boaty McBoatface. In that piece, we explored some instances of scientists having a bit of fun with naming things, an attempt to undermine the notion that giving a serious research vessel a silly name was somehow disrespectful. I also talked about a fun evening trapping and cataloguing moths in which a certain individual regaled us with some of his knowledge of humorous binomials (two-part names) from the annals of Linnaean taxonomy. That individual was, of course, the inimitable Calilasseia.

Ever since that outing, I've wanted to have a bit of fun and explore some of this wonderful territory, just to show that science is not the dry and dusty edifice that some might suppose.

Cali has offered the following introduction to Linnaean taxonomy which, I know, many of you will enjoy as much as I did. It also serves as a beautiful debunking of the notion that taxonomy is based on similarity alone.

Ladles and jellyspoons, the Blue Butterfly.


I was requested to provide a contribution on this topic, and so, I’ve finally set out and done just that. In short, what follows is an attempt at a concise history of the taxonomic system Linnaeus bequeathed to the world, after years of scientific endeavour. But, we need to cover some elementary concepts, before exploring the interesting history of that taxonomy, and the reasons for that taxonomy taking the form it did.

Quite simply, those Enlightenment scientists whose remit centred upon biology quickly realised that, in order to make sense of living organisms and the behaviour thereof, the first task to be undertaken was to identify those organisms. The basic principle being, of course, that if you’re going to talk about something, it helps to know what you’re talking about.

Now, for some organisms, distinguishing between them, and assigning an identity thereto, isn’t really that difficult. Most people would have little trouble telling a human being apart from an elephant. There are enough conspicuous differences between the two, to render this exercise simple. But when other organisms are examined, that task becomes a good deal more difficult. For example, a botanist could place two species of dandelion in front of an audience, and that audience, unless it consisted of other trained botanists, would be unable to tell the two apart. Likewise, an entomologist can easily alight upon very similar looking beetles, whose identity can only be resolved properly by other trained entomologists.

In the case of the dandelions, I’ll have to let a trained botanist provide the examples, as botany isn’t a discipline I possess great expertise in. However, thanks to my spending time studying entomology, I can provide a ready example in the case of beetles, and one that’s accessible to anyone who is willing to go out looking for them in the UK, where they can be found. Here they are:

Those beetles, incidentally, aren’t the only two I could have chosen. The UK fauna alone has dozens of similar pairings to select from. Here’s two more, again from the UK fauna, though the one on the left chose to tuck its head out of sight when photographed – that is not the key difference between these two:

So, in a world littered with lookalike organisms, how do we tell them apart, and resolve the identity problem?

Linnaeus wasn’t the only biologist working on this problem at the time, though he happened to be a fairly prolific contributor to the solution. That solution comes under the heading of comparative anatomy. In short, the procedure is as follows:

[1] Dissect the specimens of interest minutely;

[2] Diligently catalogue all of the anatomical features alighted upon during said dissection, along with sizes, shapes etc;

[3] Note where some specimens exhibit differences in precise anatomical construction from others.

Since more conspicuously different organisms (e.g., humans and elephants) exhibit observable anatomical differences (in this case, even before dissection), the biologists of Linnaeus’ day reasoned that the same would be true for every species they encountered. As with everything else in the world of biology, application of this principle is actually a little more complicated than I’ve just described, and modern data, whilst largely upholding this principle and its applicability, also provides us with a small number of counterexamples. But, I digress.

Armed with this idea, Enlightenment biologists set about dissecting with gusto. Linnaeus himself was a prolific practitioner of the art. Courtesy of this labour, made doubtless even more laborious by only having quill pens and ink as documenting tools, they did indeed find that the principle expounded above was generally applicable, and that comparative anatomy would provide a reliable foundation upon which to base species identity. Remember, of course, that those biologists were pioneers in the field at that time; it’s thanks to them, along with a lot of subsequent scientists and their labours over 250 years, that we know as much as we do now. We had to start somewhere, and whilst early biologists were laying some groundwork for some time before Linnaeus and his contemporaries, the exercise only truly started to become systematic and planned around the time Linnaeus was in his childhood. There was, though, one step missing from the enterprise, which it fell to Linnaeus to provide with his own later work.

That missing step, was the very simple one, of providing names for all these organisms, once they were determined to have separate identities. Indeed, that’s part of the central function of language itself – to provide us with a means of knowing what we’re thinking about, or subjecting to discourse. But, as anyone familiar with the existence of colloquialisms should already be aware, that function is compromised somewhat in everyday usage. Words in a language can become associated with multiple meanings, thus generating a need to extract the intended meaning from context, an area which makes English particularly hard for non-native speakers to learn. This one-to-many mapping of words onto meanings is also associated with the dual problem (again, prevalent with woeful frequency in English), namely the mapping of many words onto one meaning, or the existence of synonyms.

Matters became far worse, from the standpoint of biologists, when one looks at the names that even fairly common organisms had acquired over time, some being burdened with literally dozens of regional folk names. And that was just in the UK. A similar situation presented itself right across Europe, with folk names in abundance clouding the picture, even for organisms that were well-known and well-understood by the standards of the era.

What those Enlightenment biologists wanted, was to avoid those issues, and find a means of bestowing names upon organisms, that were unique and unambiguous.

Enter Linnaeus.

Linnaeus, indeed, was, in effect, the first individual to codify this requirement. But he went further. He made it a requirement of his taxonomic system not only that names should be unique and unambiguous, but also that those names should be connected to the anatomical data on those organisms arising from all that dissection work. The names to be chosen were, wherever possible, to be descriptive.

This set of requirements, on their own, would have constituted a significant step forward for the era. But Linnaeus went further, and in doing so, made his system, for reasons I shall come to, ever so slightly subversive.

Having engaged in much dissection work himself, Linnaeus, and for that matter, other biologists of the era, were moving toward a conclusion, even if they were not openly voicing that conclusion in explicit terms. That conclusion, and a very subversive conclusion it was to prove to be, was that organisms with very similar anatomy, and very few differences, were to be regarded as more closely related to each other, in some sense, than organisms with greater differences. This idea, of relatedness of living organisms, and the subsequent expansion of that idea within Linnaean taxonomy (I shall explain more shortly), was, of course, to be a central feature in later biological work. Indeed, it is possible that if Linnaeus had moved away from taxonomy, and started investigating possible reasons for that relatedness, we could have had that later work much earlier than was the case. But this was not to be: taxonomy was Linnaeus’ overriding remit, and whilst pursuing the business of founding a workable taxonomic system, he simply treated relatedness of living organisms as a useful brute fact. But that he considered the concept valid at all, despite being, courtesy of the era he lived in and the absence of competing explanations, a de facto creationist, should be making some of the audience of this exposition smile particularly mischievous smiles.

So, Linnaeus decided to embody that concept, of relatedness of living organisms, into his taxonomic system. He did so by arranging for organisms to have a two-part name (hence the term ‘binomial classification’). One part of the name, the species identifier, would be associated with the particular organism in question, whilst the other part of the name, the Genus, would indicate that the organism belonged to a well-defined group of anatomically related species.

Linnaeus took this idea of grouping anatomically related organisms still further. Not only were species to be grouped together into a Genus, but different Genera were furthermore to be grouped into Families, the Families into Orders, the Orders into Classes, and so on. In short, Linnaeus’ classification system, contained within it, an inherent idea of a tree of life. Yet, despite this direct embodiment of the tree-of-life idea into his taxonomy, Linnaeus managed to escape from some of the more febrile attention that was to be heaped upon some of his successors.
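That nesting of ranks is, in modern terms, simply a tree, with each Genus hanging off a Family, each Family off an Order, and so on. A minimal and purely illustrative sketch follows: the ranks and the Small Tortoiseshell butterfly's real classification are genuine, but the data structure and its helper method are inventions for the purpose of the example, not anything Linnaeus himself would have recognised:

```python
from dataclasses import dataclass

# An illustrative model of the nested Linnaean ranks, using the Small
# Tortoiseshell butterfly (Aglais urticae) as the worked example. The
# ranks and names are real; the Taxon class is hypothetical.

@dataclass
class Taxon:
    rank: str
    name: str
    parent: "Taxon | None" = None

    def lineage(self):
        """Walk up the tree from this taxon to the root, rank by rank."""
        node, chain = self, []
        while node is not None:
            chain.append(f"{node.rank}: {node.name}")
            node = node.parent
        return chain

insecta     = Taxon("Class",   "Insecta")
lepidoptera = Taxon("Order",   "Lepidoptera", insecta)
nymphalidae = Taxon("Family",  "Nymphalidae", lepidoptera)
aglais      = Taxon("Genus",   "Aglais", nymphalidae)
urticae     = Taxon("Species", "urticae", aglais)

# The binomial is simply the Genus plus the species identifier:
binomial = f"{aglais.name} {urticae.name}"
print(binomial)            # Aglais urticae
print(urticae.lineage())
```

The key design point the sketch captures is that the species identifier on its own is not unique (many Genera contain a species called *urticae*); it is the Genus plus species pair that nails down the identity.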

At this point, it is necessary to take a slight tangential diversion, and comment upon some of the particulars of his system. One feature thereof has long been a bone of contention for many, amateur and professional naturalists alike, and that feature centres upon his language choice. Linnaeus chose, in a move that seemed eminently sensible to him at the time, for reasons shortly to be revealed, to base his system upon Latin and Classical Greek.

The curious at this juncture are almost certainly asking why this choice was made. That has much to do with history, starting with the fact that Latin and Classical Greek had been mandatory entry requirements for European universities for at least 500 years, at the time Linnaeus was working. Since these two languages had been considered essentials of academic discourse for so long by Europe’s scholars, this choice made sense from that standpoint alone.

Those wondering at the underlying reasons for that persistence of these languages within European academia, have to look a little bit farther back in history. The earliest universities to arise in Europe, had their foundations at least in part, as offspring of theological seminaries, some more directly tied thereto than others, and inherited those language requirements from that background. At that point in history, theological seminaries were, of course, pretty much exclusively controlled in Europe by the Roman Catholic Church, an organisation for which Latin was the official language of discourse, courtesy of its own historical roots, and which wielded a large amount of political power across Europe during the era of the foundation of the earliest universities.

Classical Greek was included in the requirements, for more historical reasons, centring upon the fact that the New Testament was written almost exclusively in Classical Greek (to be precise, a dialect known as Koiné Greek), and of course, study of the New Testament was an essential part of the business of those ancestral seminaries. That Greek dialect itself, became the language of choice for the earliest New Testament writers, courtesy of being the lingua franca of commerce in the eastern Mediterranean, especially after the conquests of Alexander the Great. Additionally, Classical Greek civilisation bestowed upon the nascent Western Civilisation of Europe, a wealth of other documents, from authors ranging from Homer to Aristotle and Plato. Understanding their output, was, of course, an essential part of the business of the newly founded universities, and thus, Classical Greek and its variants became, alongside Latin, a mandatory entry requirement for anyone aspiring to be a scholar.

However, those two languages didn’t only have prestige and tradition to support that choice. Both languages contain rich vocabularies, which are of immediate use to any enterprise requiring descriptive nouns or adjectives, and the new taxonomy Linnaeus was introducing, was most assuredly an enterprise requiring this. Additionally, both languages possess systematic grammar constructs, allowing names to be devised in a systematic manner with relatively little effort. Much of the heavy lifting has already been provided by the existence of regular noun declensions and verb conjugations, to name but two entities of utility value in this regard. Since Linnaeus was very definitely interested in systematic name generation, these features of Latin and Classical Greek almost certainly endeared themselves to him, and were to be considered similarly useful by his similarly educated scholarly successors.

Furthermore, choosing two ancient languages enjoying universal prestige across Europe, neatly sidestepped the thorny issues that would have arisen, had a different choice been made. Given that Europe could be politically tumultuous in Linnaeus’ day, and in some cases for reasons that strike the modern reader as utterly trivial and banal, the choice was a wise one at that time. Selecting Latin and Classical Greek, neatly avoided giving the more febrile pursuers of European geopolitics in that era, another excuse to run riot, and launch into actual warfare, as was frequently their wont.

But, a choice that made much sense in 1758 (the year of publication of the tenth edition of Linnaeus’ Systema Naturae), seems pretty quaint to the Internet generation, in no small part because study of those two languages, once considered an essential part of the school curriculum across Europe, is now very much a niche area. The educated layman of the 19th and early 20th centuries was, paradoxically, much better equipped to understand the rationale behind the Linnaean taxonomic system than the educated layman of the 21st century. In an era where manned spaceflight, supercomputing and direct manipulation of the genome are all engineering realities, the minutiae of languages that were last in serious use 1,000 years ago tend not to be high priority topics for study.

From the standpoint of appreciating Linnaeus’ work to the full, this is, of course, a cause for lament, not least because there is much of interest lurking in that taxonomic system. One source being, the increasing dawning upon biologists, that their cataloguing exercise is far bigger than Linnaeus and his contemporaries imagined it would become. Having succeeded in giving the world a means of identifying and naming species, Linnaeus et al launched biologists on the path that would lead them to discover the magnitude of the task at hand, and the numbers in question are truly intimidating.

The classic case, of course, is the insects. The numbers of insects alone is staggering to behold. Even if we confine ourselves to the “Big Five” orders, the numbers are as follows:

Beetles (Coleoptera) : 400,000 species known to science
Butterflies & Moths (Lepidoptera) : 200,000 species known to science
Bees, Ants & Wasps (Hymenoptera) : 150,000 species known to science
True Flies (Diptera) : 125,000 species known to science
Bugs & Allies (Hemiptera) : 80,000 species known to science

Bear in mind, in addition, that those totals are growing. A new beetle is described by scientists at the rate of one per day. For the Lepidoptera, a new species each week is added to the tally. In the case of the Hymenoptera, that’s about one every three or four days. This process – the addition of new species to the list - has accelerated in recent years, as previously inaccessible parts of the planet have fallen within relatively safe reach of scientists, who no longer have to contend either with daunting natural obstacles, or the woes arising from the inability of some humans to coexist. Places previously unexplored due to hideous endemic diseases or internecine political strife, are now within reach of any institution that can afford the air fare, and the literature has been expanding in tandem with this development.
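Taking the rough rates above at face value, the back-of-the-envelope arithmetic looks like this (the per-order rates are the ones quoted in the text; the code itself is just illustrative):

```python
# Rough annual additions to the species tally, using the rates quoted
# above: one new beetle per day, one new moth or butterfly per week,
# and one new hymenopteran every three to four days.
rates_per_year = {
    "Coleoptera":  365 / 1,    # one per day
    "Lepidoptera": 365 / 7,    # one per week
    "Hymenoptera": 365 / 3.5,  # one every three or four days
}

for order, rate in rates_per_year.items():
    print(f"{order}: roughly {rate:.0f} new species per year")
```

Even at these conservative rates, that's over five hundred new names needed every year from just three orders of insects, which makes the naming problem discussed below rather vivid.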

Those numbers, point to a big problem taxonomists have faced. Even those rich vocabularies I cited earlier, that were such an attractive feature of Latin and Classical Greek, start to run dry when there’s a million species to find names for. That number looks set to grow even larger, with some estimates reaching as high as 100 million species by the time the cataloguing has finished, and that estimate does not include fossil organisms - these too have been added to the list, and new fossils merely add to the naming problem. For example, there are 17,000 species of trilobite fossil known to science, and all of them have had to be named. A problem that living organisms do little to mitigate, when you have, say, 55 species of similar looking dull brown Skipper butterflies to name, and you end up running out of ways of saying “dull brown with bent antennae” long before the end of the list.

Consequently, some lateral thinking has had to be pursued. Mythology was, even as far back as the time of Linnaeus himself, quickly plundered for somewhat oblique descriptive references, with attributes of mythological characters tied to the anatomy or behaviour of several species. A particularly humorous example from my standpoint being a South American butterfly, named Styx infernalis. It acquired this epithet because taxonomists struggled for a century to find a home for it in that big tree of life, courtesy of the fact that upon dissection, it presents itself as a sort of ‘parts bin’ special, with features that could have come from any of four different Families. We had to wait for DNA technology to resolve that one, and, no doubt as an indication of their frustrations, taxonomists labelled this unfortunate butterfly as they did – Styx being, of course, the fabled river to the Underworld of Greek mythology, leading to this butterfly’s scientific name translating loosely as “the butterfly from Hell”!

However, there’s not merely exasperation to be found in some of those names. Every human emotion has, at some point, influenced some taxonomic decisions, and one need look no further than Linnaeus himself for a particularly juicy example. Here in the UK, there are four moths belonging to the Genus Catocala, all of which are characterised by dull forewings, that fold over and hide brightly coloured hindwings when the moths are at rest. One of these, Catocala fraxini, characterised by light blue hindwings, has a workmanlike choice of name, fraxini referring to the fact that the larvae of this species feed on Ash trees, the Ash tree itself being Fraxinus excelsior.

The other three, however, are named Catocala nupta, Catocala sponsa and Catocala promissa – the words sponsa, nupta and promissa all being related to brides, marriage and courtship related affairs. What prompted this choice? Back in Linnaeus’ native Sweden, it was the custom at the time, for brides to wear red petticoats on their wedding night, to be revealed just prior to the newlyweds skipping upstairs to the conjugal bedchamber. Consequently, at least one eminent scholar on the subject, has suggested that Linnaeus may have been gazing upon some nubile young maiden through his window whilst working on these moths, and the ensuing fond daydreams filtered through into his taxonomy.

That coupling of underwing moths in the Genus Catocala, to names connected with our own dalliances, as it were, persists into modern times. When, in more recent years, moths belonging to the same Genus were found living in North America, these too acquired their own courtship related names – some apposite ones to mention being Catocala amatrix (Sweetheart Underwing), Catocala amica (Girlfriend Underwing), Catocala cara (Darling Underwing), along with some species named after various mythological or literary temptresses. Thus we have Catocala desdemona, Catocala delilah, and Catocala miranda, just to add to the fun, and for those who welcome the requisite inclusion of diversity, there is also Catocala sappho to add to the list.

The topic of humour in taxonomy, incidentally, is now so voluminous as to require an article of its own, but I shall pause to mention but one example, where a desperation for new names led to some very off the wall thinking – namely, the bestowing, by the author, of the following four names on some new fly species (read them out loud for best effect):

Pieza pi
Pieza kake
Pieza rhea
Pieza deresistans

You get the picture.


Yes, we see (see what I did there?)

Thanks very much, Cali. Always a treat.

For those interested in discovering more hilarious examples, there's a marvellous website that Cali has recommended.

And finally, regulars can expect to see two new entries into the FAQ section in the next few days dealing with two very common apologetic fallacies which Cali has dealt with in his customary style and which he has kindly consented to reproducing here.

Thanks for reading.

*One prolific member of the Dawkins forum, Scholastic Spastic, had this quoted in his forum signature. Apologies for the in-joke.

Where Do You Draw The Line?

True intuitive expertise is learned from prolonged experience with good feedback on mistakes - Daniel Kahneman

Regular readers will by now - and long before now, one would hope - have grasped my purpose here. Ultimately, my aim is to see if I can't, in my own small way and with my tiny voice, aid our species in bootstrapping its way to better ways of thinking about things. We've touched on a diverse range of topics, including science, logic, epistemology, the core of philosophy, ethics, politics... let's face it, there isn't an awful lot of terrain we haven't covered in one way or another in broad terms, and one might think there's little left to do beyond the traditional dotting, crossing and punctuation. Such a conclusion would be premature, however, not least because there are always things to be learned, and paradigms to challenge. The simple fact is that we still have oodles of geography yet to be explored. 

But I digress, as I'm wont to do on occasion. So shoot me. Focus is the hobgoblin of narrow minds, as they say. Or maybe I just made that up.

For this outing, I want to start out by trying something a little different. It's going to be difficult to achieve the desired result in all its glory due to working in a largely textual format, and it certainly isn't going to be anything like rigorous or scientific, but it should be instructive nonetheless. 

I've run this exercise in a one-to-one setting verbally with a few people, and the results have generally been commensurate with what appears below. I'm going to use spoilers, so that I can get as close as possible to the verbal delivery. This works best if you don't pre-empt the spoilers, only clicking on them when you've read all the text leading up to them.

I ran this little exercise very briefly with my friends on Twitter and Facebook - after already having selected the spoilered images below, I should note - and the results were pretty interesting, not because they massively diverged from my expectations, but because they didn't. I asked them to conduct the exercise as described, and then to go and find the image on Google that most closely matched what they envisaged.

Here's the exercise:

Picture in your mind a painting of a ship. If it helps, do what the tweeps above did and go and find an image on google, selecting the one that most closely matches your vision. When you have a clear image, click on the button.

I suspect that, details aside, most will have something like the image spoilered there. You might have pictured a modern ship with engines and funnels, or maybe with more or fewer sails, or differences in general presentation but, in most cases, this will be something along the lines of what you pictured. 
Some will have had radically different imaginings. Some of my Twitter cohorts certainly did. What's interesting is the trend those imaginings show, and where they came from.

The tendency away from that paradigm was realised in very specific kinds of thinker, and that's where we repair to now.

Two of the respondents delighted me (one of whom I'll be discussing shortly), in selecting the painting I had lined up next.

It should be fairly obvious which direction I'm headed in now. The painting behind that spoiler is far and away my favourite painting ever. It's Turner's amazing The Fighting Temeraire. It depicts one of the ships that played a pivotal role in the Battle of Trafalgar being towed away by a steam tug to be broken up.

Turner brought with him a style of painting that defined an entire movement. The approach is one that shows a level of expertise utterly distinct from what had gone before. Indeed, in many ways, the world wasn't ready for it, but the impressionist movement that came after him shows just how revolutionary this approach was. Unlike the first spoilered image above, which depicts the subject directly, Turner's approach was to try to capture the light coming off the subject. 

Now, I've heard people talk about Turner's work as being something less than brilliant, precisely because of the lack of direct detail. However, this amounts to little more than knowwhatIlikeism, and fails to recognise that what a painter always paints is the light.

While I was writing this, one of my awesome friends penned a lovely piece that discusses some of what I'm talking about here, though from a slightly different perspective, not least because her motivation is how we think about the art itself, while I'm really using the art as an analogy. I highly recommend her article, which can be found HERE*. In this piece, Molly talks about stages even further on from Turner.

The point here is that this shows the difference that expertise can bring to a situation. 

In the last couple of years, there seems to be a real resistance to expertise. Of course, I have to remind myself that it isn't anything new. I know, for example, from almost twenty years countering science-denial of all stripes on the internet, that the self-proclaimed internet expert pontificates on many and diverse topics she knows bugger all about. Much of the content of this blog is dedicated to debunking the faux-expertise of people who will confidently assert that evolution violates the second law of thermodynamics, or that the fact that the horizon is the same distance in every direction proves that the planet is a giant space-pizza. We have tributes to people with no medical expertise putting the lives of children at risk because of their faith in a falsified study from the seventies.

In the last couple of years, though, it's taken on a whole new complexion in the public sphere. First, the UK held a referendum - a referendum that was only called so a cretin could get re-elected - in which it rejected the expert opinions of many of the world's economists that the UK leaving the European Union was a really bad idea, even to the extent that Niggle Fromage, then head of the UK racist party, made this rejection of expertise a campaign tagline.

Then, in a stunning DIY rhinectomy of absolutely gargantuan proportions, the US elected a self-admitted serial sexual predator to the highest office in the land, an office he's repeatedly shown himself unfit to hold (frankly, I wouldn't employ the moron to make tea).

The more I see of the current sociopolitical situation in the world, between the stunningly incompetent moron in the White House and the moronic wet blanket in Downing Street, the more convinced I am that not only is the amazing Turner seascape above well beyond our reach at the moment, all through lack of expertise, but even the first image above is probably out of our grasp. Indeed, with the level of expertise we have in politics between our two nations at the moment, I'm increasingly of the opinion that, if we survive at all, our future looks rather more like the following.

Thanks for reading.

*Indeed, I heartily recommend all of Molly's writings. She writes beautifully, and covers a range of topics, including art, poetry, and medical science. Her main page can be found at The Whispering Dark.

The Most Wonderful Time...

Christmas should be banned.

There, I've said it. 

For many, Christmas is allegedly a time of joy and wonder, but some will no doubt expend excesses of verbiage on that, so I'm going to talk about the things that we overlook.

Just for fun, though, let's talk about what Christmas really represents.

The history of celebrations at this time of the year tracks back to pagan days, and quite possibly even earlier. The motivation in our deep history should be fairly obvious even to the uninitiated, especially when one looks to higher latitudes. One can see the obvious wish to celebrate surviving the worst of the winter, getting over the hump of the season and into that part of the orbital cycle in which days begin to lengthen and nature starts to look alive again.

This is hardly a mystery, and speaks to the process we all go through in everyday situations in which we divide tasks up, and feel better when we get significant portions out of the way. This can even be reflected in some of the language we use. We might say, for example, that we've 'broken the back' of a piece of work, or even, as I used above, 'gotten over the hump'.

As our history progressed, such celebrations will have become codified and even ritualised, and we can see how we'd apportion significance in terms of the animisms of our earliest religious leanings and even take them forward into the paganist frameworks of our more recent ancient history.

The Romans, bless them, had some really interesting ideas about how to run an empire. Among them was the notion of maintaining, to some degree, the status quo. They cleverly tended to leave administration mechanisms and existing rulers in place, bringing them under the umbrella of the empire and improving their living conditions as reward. This meant they had little to do in terms of genuine conquest, comparatively speaking.

Later, when Constantine embraced Christianity and it subsequently became the official religion of the empire*, and then later still, when the empire essentially morphed into the Catholic church, it seemed an obvious move to do something similar with their holy days. The most efficient way to do that was to usurp the existing festivals.

Of course, this was long before the modern commercialisation of these festivals, when Easter, formerly a pagan fertility festival, became about chocolate eggs, and Christmas became about frenzied spending toward which large portions of our annual labours are geared. Indeed, our modern vision of Christmas is largely an advert for Coca-Cola, whose liveried version of St Nicholas shaped the figure we all recognise today.

The veneer of glitter and glitz, however, belies the dilapidated anachronism lurking beneath. In previous outings, which can be found in the physics and cosmology section of this very blog, we can find the genuine reason for the season. In DJ! Spin That Shit! for example, we learned about the planet's 23.5° axial tilt, first calculated by Eratosthenes some 2½ centuries before the focus of the modern holiday was even (allegedly) born. 

People talk about it as a time of joy and wonder, while the only wonder I associate with it is wondering why people buy into it.

Having worked for a time in customer service for a large UK retailer, my experience of what's beneath that veneer is far from joyous. It's a time of ridiculous levels of stress and the incurring of crushing debt in the name of something whose best feature is its end. The despicable ways that people treat other people over what amounts to a bit of shopping is something I've never been able to get my head around, especially when the holiday is painted as the season of peace and love. I can tell you from bitter experience that peace and love are the farthest things from the minds of some people when they discover that they should have done their Christmas shopping a bit earlier to avoid disappointment.

And the best of it is that I'm still only talking about the people who actually profess to enjoy it.

This commercialisation has a more sinister side beyond the ridiculous debt that people incur. First, it's inescapable. Everywhere you look for months beforehand there are flashing lights and advertising (and we won't get into the bloody awful dreck that passes for music, which was shite to begin with and gets re-released every year and played ad nauseam on the radio and in every public venue). In and of itself, this isn't necessarily a problem, although it could at least wait. The real problem with it being inescapable is that this is a horrible time of year for many, and that's really the motivation for writing this.

For many, what this time of year represents is soul-crushing loneliness. For people who are alone, the outward presentation of all this is nothing more than a reminder of how alone and isolated they are. Even among those who aren't alone in the fullest sense, this can be a very difficult time, because the faux-jollity can isolate those who aren't feeling it. In those with a tendency to depression this can be especially pronounced.

In 2016, the UK mental health charity MIND sponsored some research into this, and it turned up some interesting statistics.

Eleven percent of people surveyed felt unable to cope at Christmas. This figure rises to thirty-one percent among people with mental health problems. Seventeen percent felt lonelier at Christmas than at any other time, rising to thirty-nine percent among those with mental health issues. Some of this harks back to those stress factors I mentioned earlier, with twenty-eight percent saying they felt pressure to have the 'perfect' Christmas.

One factor that seems to play an important part these days is social media, with people suffering from mental health issues being almost twice as likely to compare their Christmas to that of others on social media.

It's well understood that the incidence of suicide and attempted suicide increases around this time, and the survey bears that out, with five percent of respondents having considered suicide, rising to fully twenty-two percent among those suffering mental health problems.

And, in one of the most damning statistics arising from the study, fifty-eight percent of those suffering with mental health problems felt they had nobody to confide in, with thirteen percent unaware of how to access professional help over the holiday period.

As I was gathering my thoughts for this piece, I had a bit of a rant on Twitter. I'll come back to that in a moment, but some of the responses I received, both in the open and by private message, highlighted some groups that especially feel the pinch of loneliness at this time, and I want to talk about them.

There are several reasons why one might be ostracised by friends and family, but there are two common ones that really impact those figures above. The first is people coming out as LGBTQ.

Although I'm cis in just about every way, I have two gay members of my immediate family and, as a young man, I worked in Manchester's world-famous gay centre. I remember that this time of the year was particularly difficult for many, especially those in their first few years, as they spent their first Christmases away from their families. Incidence of suicide and attempted suicide was high in this group at the best of times, but it ramped up during the holiday season. The gay centre provided a place of community, as well as support for young gay, lesbian and trans people, including an advice line for those in need of it. This experience taught me the value of having somebody to talk to during the difficult times.

The other group, although a group I identify as a member of, was considerably less affected, largely because I'm lucky to live in a nation that, despite being nominally Christian, is and has been for decades a de facto secular nation, and also because my immediate family is socio-politically aware. I know that the UK is, along with much of Western Europe, fairly open to atheism and secular principles, but that's not true of all of the world. In other places, notably the US, which is ironically a nominally secular but de facto Christian nation, coming out as an atheist is, in much of the country, met with the kind of response you'd expect when somebody admits to being a serial killer. In places like the US, spending your first Christmas as an atheist can be particularly difficult.

Now, I'm a big ugly bugger perfectly capable of holding my own yet, every year, I'm still met with incredulity when I say I don't celebrate Christmas. I get called miserable, a party-pooper and, of course, the ever-present epithet of 'humbug'.

Those who know me well will tell you freely that I'm far from being a humourless man, and indeed many will say that I'm a bit of a tit most of the time, but this really does wear thin, so I can only guess at the impact it has on somebody who's really struggling to cope.

So, that's that, and this is this, and this is the bit where I tell you all what the problem is that I really have with Christmas, and it's this:

I fundamentally object to the notion that we should set aside a time to be nice to each other. I realise that this is a radical notion, and it will not be well accepted by many, but there it is. In my opinion, the simple fact that this time is set aside for 'hope and good cheer' speaks to the fundamental sickness at the heart of our species. I know that there are people out there who are kind, graceful, loving, and deeply concerned about the failure of hope in humanity. I know them, many of them. I know that there are white people deeply concerned with the way that people of colour are systemically abused by society, that there are straight people advocating and championing the rights of the LGBTQ community, that there are humans out there fighting tooth and nail for humans, but the despair and the rot are still there and, in some places, growing bolder if not growing in number.

We're living at a crossroads, ladies and germs. We're on the cusp of a history long-forgotten and, if we're not careful, soon-to-be remembered. We need to protect the most vulnerable in our global society, and especially at this time when many of them feel at their most vulnerable.

So here it is. If you feel alone, unloved, vulnerable, suicidal at this time, I'm here for you. I know I'm not alone.

I know you don't know me. I know that I'm no replacement for the family that have disowned you. I know that there can be no replacement.

What I also know is that it's perfectly possible to choose your family. To move on and put the bullshit and the toxicity behind you, at least to the extent that you can feel loved. You ARE loved. Never forget that.

I'll finish up by providing a link to some useful sources of advice. The first is the link to the suicide prevention page of Stop Homophobia, a brilliant resource for all sorts of advice not just geared to LGBTQ people. 

Know this: There are some really fucking stupid people in the world. What really stupid people are most stupid about are things that frighten them, which in turn mostly stem from idiotic taboos handed down to society by religions.

That said, it's really important to remember that, despite your horrible experiences, we're still emotionally-driven, and that means that most humans can be swayed. 

Ultimately, the most important thing to remember is that you're not alone. There are people that love you not just because they aren't afraid of otherness, but because they're filled with and made of love. You are loved, even by people who don't understand you, because there really are people who aren't afraid of what they don't understand.

Don't feel that the world hates you, or that you're alone, or that you have no family. You're not alone, we love you whoever and whatever you are (unless you're a bigoted twat, in which case we want to educate you), and we are your family.

We are all the same species, and we are equal.

Oh, and Christmas should be banned.

Thanks for reading.

If anybody needs somebody to talk to, and doesn't feel any of those channels are quite right for them, feel free to contact me at

Edited to add:

*My fantasy time-machine scenario is going back in time and preventing this own-goal on humanity.

There's mounting evidence that what's been accepted by scholars for many decades doesn't stack up. One historian, Richard Carrier, has concluded that the evidence for Jesus' existence is less than compelling. My own position is that it's shaky, and I can think of one argument against it, but Dr Carrier has directed me to an argument that he suggests defeats this. I haven't had leisure to check it out, and previous discussion with my friend historian Tim O'Neill has taught me to be circumspect in this arena, not least because it falls outside the areas I tend to spend my time investigating. Ultimately, the jury is out, and this is an issue that I don't think likely to be resolved in any categorical sense any time soon.

A Hitch In Time...

Today is the sixth anniversary of the death of Christopher Hitchens. In tribute, this is what I wrote in one venue on that day.

Valé Hitch, you're still sorely missed.


Every once in a while, somebody appears in one's life whose influence on one is impossible to calculate. Just such a person was Hitch. For clarity of thought, courage of conviction and, above all, soaring prose to which I can only aspire, Hitch was a giant.

Contrary to the written assessments of others, Hitch never changed his political stance. All his statements, from his polemics against the irrationality of unquestioning adherence to doctrinal imperatives to his support for the ousting of tyrants and despots, stemmed from the same core: his social conscience. That conscience, oft-misunderstood by many to a quite frightening degree, was consistent from his earliest days to his death. He supported the Falklands war for precisely the same reason that he supported the invasion of Iraq. He saw in Galtieri the same 'evil' that he saw in Saddam Hussein, and his response was the same in both cases. He never swung from the left to the right; he remained rigidly opposed to tyranny wherever he saw it, and was not afraid to voice his opinion, regardless of whether those whose political alignment was similar saw things in the same light, and certainly with no regard to the feelings of anybody who might be offended.

As somebody not generally given to hero-worship, eulogising, or indeed to grief, I am somewhat conflicted in writing this. Do I mourn the passing of Hitchens? Not really. I do mourn the prose that will now remain forever unwritten, for the simple reason that he was matchless in this regard, and because there simply aren't enough intelligent books to counter the tide of vacuous drivel, much of which will undoubtedly be written about the man now his light has gone from the world (quite probably including this very piece).

And such a light! Rarely have I come across such biting wit, such laser-like precision, such eloquent rhetoric. Journalist Michael J Totten says of him that he was 'the greatest writer of our time, who could talk off the top of his head better than most of his colleagues can write' and it's difficult to erect an argument against this.

Author and blogger Ray Garton said of god is not Great that it 'gave me goosebumps when I read it the first time because it read like it had been written by someone who had reached inside my head, rummaged through my thoughts and knew exactly how I felt.' He will be remembered by many as the man who told them it was OK not to believe in nonsense, that it was perfectly acceptable to think one's own thoughts and draw one's own conclusions, regardless of the opinions of the majority of those around us.

For my part, I choose not grief (although I do grieve in some measure) at the passing of the man, but a celebration of his life, conducted in the manner in which he taught me, by striving – even if in vain – to match his eloquence, by continuing to stand against the irrational, by simply living.

Farewell, Hitch, and thank you.

There's a Hole in my Model...

When you assume, you make an ass of U and Me, or so they say.

Yesterday, I was alerted to a discovery that has the potential to challenge some of our assumptions, by somebody who wanted to know what I thought about it. To be honest, it presents something of a puzzle.

The discovery in question was a black hole. Well 'so what?', you might be tempted to ask; after all, black holes are hardly an Earth-shattering (pun intended) revelation. So what's all the hoo-ha about?

Well, it isn't the black hole itself that's remarkable. What's really remarkable is how far away it is, and its size. To show just how remarkable, I'm going to break with tradition somewhat and go over some previously-covered ground.

We've talked a fair bit about the Big Bang in quite a few other posts. We've noted that, in the popular understanding, the Big Bang is a theory dealing with the beginning of the universe. Despite the fact that this model is still talked about as the standard for the evolution of the universe, it hasn't actually been that for several decades, not least because of some fatal flaws, which we've looked at in some detail in previous posts. I'll include links to these posts at the bottom, along with any others whose subject matter crops up during the course of what follows, as those observations would take us too far afield for our purposes today. What I will do is to note that the Big Bang as we use it now doesn't actually deal with the beginning of anything. These days, the term 'Big Bang' is really nothing more than the name we give to the observed fact that the universe is expanding and was therefore smaller in the past.

It does, however, still contain some implications, not least in terms of what we observe when we look back through time at the light coming to us from the earliest times we can observe. It's certainly the case that the popular conception isn't far off the mark in some respects, and observations in the last few decades have reinforced some of them. 

One of the things that all of our models have, aside from a Big Bang, is a time when our universe was extremely hot and dense. We've previously discussed the earliest observations we have of the cosmos, in a post about biological evolution in the context of entropy. In that outing, we noted that one of our best current sources of information about the early universe is the cosmic microwave background radiation (CMBR), discovered serendipitously by Penzias and Wilson in 1964. What the CMBR actually represents is not, as I've seen suggested, the glow of the big bang, but the photons that come to us from the 'last scattering surface'. This is going to serve as a useful pointer to what we're discussing here, so it's worth spending a little time on.

For the first 380,000 years or so after the Planck time - a theoretical construct dealing with the smallest useful amount of time after the 'beginning' of expansion - the cosmos was opaque to photons. The reason for this is that, after the onset of expansion, the cosmos was extremely hot and dense. So hot, in fact, that it was basically a white-hot plasma of photons and ionised hydrogen, the latter consisting of protons and free electrons. Because of the abundance of free electrons at such temperatures, the distance that photons could travel freely was massively restricted due to Compton scattering.

Now, anybody who likes to smell nice knows what happens when a body of particles expands, because we've all felt the deodorant can cool down as we let out the smellies. The same thing happens with the cosmos. As expansion continues, the cosmos cools. Above a certain temperature, electrons remain free, not bound to protons, because of a quirk of entropy, namely that, in the environment they find themselves in, there's nowhere for energy to go to become unavailable. As the temperature drops below about 4,000 Kelvin, however, something interesting happens; there's somewhere for energy to go, so the free electrons begin to bind to protons, forming the first neutral atoms. The reason they do this is that it's energetically favourable. This is just another way of saying that, once they can shed some energy, their lowest available energy state consists of being bound to protons.

As an aside, this is where the CMBR comes to us from. By the time the temperature gets as low as 3,000K, most of the free electrons have become bound, and the photons from the last bit of Compton scattering are now free to travel through the cosmos, and it's these photons that we detect as the CMBR. Due to the expansion of the cosmos, these photons are hugely red-shifted, meaning that their wavelengths are stretched out, until their temperature is about 2.7K, or -270 degrees Celsius, a smidge less than three degrees above absolute zero.
Here's one of the latest images from the Planck satellite, the most recent generation of satellites imaging the CMBR.

So, as we can see, barring some variation, representing temperature differences of a few ten-thousandths of a degree between the hottest (red) spots and the coldest (blue) spots, it's pretty homogeneous.
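Just to put numbers on that aside about the CMBR's temperature: the stretching of the photons by expansion is captured by a single relation, \(T_{observed} = T_{emitted}/(1+z)\). Here's a minimal sketch in Python, using the ~3,000 K figure above and the standard last-scattering redshift of roughly 1,100; that redshift is an assumed textbook value, not something derived here.

```python
# Minimal sketch: cosmological redshift of the CMBR's temperature.
# T_observed = T_emitted / (1 + z)
# The ~3,000 K figure comes from the text above; z ~ 1,100 is the
# standard (assumed) redshift of the last scattering surface.

T_emitted = 3000.0        # K, temperature when the CMBR photons were freed
z_last_scattering = 1100  # assumed standard value

T_today = T_emitted / (1 + z_last_scattering)
print(f"CMBR temperature today: {T_today:.2f} K")  # ~2.72 K
```

The small discrepancy with the quoted 2.7 K is just rounding in the input figures.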

Let's jump forward now, and deal with what's actually been discovered, and then we'll look at why it's a problem, as well as looking at some potential solutions.

The object that's been observed is a quasar, or 'quasi-stellar object'. A quasar is an active galactic nucleus consisting of a central black hole with an accretion disc of gas. As the gas spirals into the black hole at huge velocities, it emits enormous amounts of electromagnetic radiation. The name itself is a contraction of 'quasi-stellar radio source', as their earliest discoveries were as sources of radio waves which, in the visible part of the electromagnetic spectrum, looked an awful lot like stars. In any event, the electromagnetic radiation they emit has huge luminosity, with the brightest ones giving off on the order of \(1 \times 10^{40}\) W of radiation across the spectrum. When you compare that with the estimated luminosity of the Milky Way at around \(1 \times 10^{36}\) W*, that's a future so bright that you've gotta wear shades. Or, at least, you would, if this were all visible light, and if we weren't talking about the past, rather than the future.

So, this quasar, known affectionately to its friends as \(J1342+0928\), comes to us with a redshift of \(z=7.54\). "Fascinating!" you might say. "What does that mean?" Well, as we've discussed previously, the universe is expanding. Because it's expanding at approximately the same rate everywhere - known as metric expansion - this means that objects further away are receding from us at a greater rate. 

As we know from earlier discussions, when something is moving away from us, the light we receive from it is shifted toward the red end of the frequency spectrum, as the light-waves we receive get stretched out, making them look redder than their intrinsic colour. Thus, the further away something is, the faster it recedes, and the more its light is shifted toward the red.
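That stretching can be made concrete with a couple of lines of Python: by definition, \(z = (\lambda_{observed} - \lambda_{emitted})/\lambda_{emitted}\), so every wavelength is stretched by a factor of \(1+z\). The Lyman-alpha line below is purely an illustrative choice on my part, not a feature singled out by the discovery paper.

```python
# Minimal sketch: what a redshift of z = 7.54 does to wavelengths.
# By definition, z = (lambda_observed - lambda_emitted) / lambda_emitted,
# so every wavelength is stretched by a factor of (1 + z).

z = 7.54
stretch = 1 + z  # 8.54

# Hydrogen's Lyman-alpha line, purely as an illustrative example
lam_emitted = 121.6                 # nm, rest wavelength (ultraviolet)
lam_observed = lam_emitted * stretch

print(f"Stretch factor: {stretch:.2f}")
print(f"Lyman-alpha observed at {lam_observed:.0f} nm")  # deep in the infrared
```

That factor of 8.54 means light that left the source in the ultraviolet arrives in the infrared, which is part of why objects at these redshifts are hunted with infrared surveys.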

What this means is that we can tell by how far the light has shifted from its intrinsic colour how far away it is, as in the image on the left from the European Space Agency.

This is very similar to what we all know as the Doppler Effect, in which passing sirens have their sound frequency shifted as they pass you, and it's what we now know as Hubble's Law, after Edwin Hubble, the American astronomer who first discovered extra-galactic sources and the expansion of the universe in the 1920s.

In this, what that redshift figure is telling us is that this quasar is a very, very long way away. In fact, it's at a comoving distance from Earth of just a smidgen over 29 Gly (giga-lightyears, or billions of lightyears). That figure in turn means that the light we're seeing left its source some 13.1 billion years ago.
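For the curious, that comoving distance can be recovered, at least approximately, by numerically integrating the standard flat ΛCDM relation \(D_C = (c/H_0)\int_0^z dz'/E(z')\), where \(E(z) = \sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda}\). The Hubble constant and matter density below are assumed Planck-like values, not figures taken from the paper, so treat the result as a ballpark check rather than the published number.

```python
# Sketch: comoving distance to z = 7.54 in a flat Lambda-CDM universe,
# via simple trapezoidal integration. H0 and Omega_m are assumed
# Planck-like parameters, not values from the discovery paper.

import math

H0 = 67.7          # Hubble constant, km/s/Mpc (assumed)
Om = 0.31          # matter density parameter (assumed)
OL = 1 - Om        # dark energy density (flat universe)
c = 299792.458     # speed of light, km/s

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(Om * (1 + z)**3 + OL)

def comoving_distance_gly(z, steps=100_000):
    """Comoving distance in giga-lightyears, by trapezoidal integration."""
    dz = z / steps
    integral = 0.5 * (1 / E(0) + 1 / E(z))
    for i in range(1, steps):
        integral += 1 / E(i * dz)
    integral *= dz
    d_mpc = (c / H0) * integral      # comoving distance in Mpc
    return d_mpc * 3.2616e-3         # 1 Mpc ~ 3.2616 million ly -> Gly

print(f"Comoving distance to z = 7.54: {comoving_distance_gly(7.54):.1f} Gly")
```

With these parameters the integral lands within a whisker of the 29 Gly quoted above; slightly different parameter choices nudge it around a little, which is the nature of the beast.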

Now, the black hole powering this quasar is big. Really big. Not on the scale of those really bright ones discussed earlier, but still absolutely huge: on the order of 800 million times the mass of the sun, with a luminosity of about \(4 \times 10^{14}\) times that of the sun.

And this is where the problems lie. We know, or we think we know, how black holes form. Now, it's certainly true that stars with larger mass exhaust their fuel much more quickly than stars with low mass; indeed, the greater the mass of the star, the faster it runs out of fuel. Going back to the Planck image above, we know that the universe was pretty isotropic (the same in all directions) to within a few ten-thousandths of a degree, and we know that temperature relates to density. How, then, can we have a black hole of 800 million solar masses form within only a few hundred million years (about 690 million, but who's counting)? In fact, although the CMBR comes to us from just shy of 690 million years before, the last scattering was itself part of a process, and electrons would have continued binding to protons for some time afterwards, which means the time available is even shorter than that.
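To see why the timescale is so troubling, here's a back-of-the-envelope sketch, assuming the hole grew by steady accretion at the Eddington limit from a stellar-mass seed. The seed mass and radiative efficiency below are illustrative assumptions on my part; only the final mass and the roughly 690 million years of available time come from the discovery itself.

```python
# Sketch: how long does Eddington-limited accretion need to build an
# 800-million-solar-mass black hole? Mass grows exponentially,
# M(t) = M_seed * exp(t / t_sal), with the Salpeter e-folding time
# t_sal = (epsilon / (1 - epsilon)) * sigma_T * c / (4 * pi * G * m_p).
# The seed mass and efficiency are assumptions, not paper values.

import math

# Physical constants (SI)
sigma_T = 6.652e-29   # Thomson cross-section, m^2
c = 2.998e8           # speed of light, m/s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.673e-27       # proton mass, kg
year = 3.156e7        # seconds per year

epsilon = 0.1         # radiative efficiency (assumed typical value)

t_sal = (epsilon / (1 - epsilon)) * sigma_T * c / (4 * math.pi * G * m_p)
t_sal_myr = t_sal / year / 1e6
print(f"Salpeter e-folding time: {t_sal_myr:.0f} Myr")   # ~50 Myr

M_seed = 10.0         # solar masses (assumed stellar-collapse seed)
M_final = 8e8         # solar masses (from the paper)

t_grow_myr = t_sal_myr * math.log(M_final / M_seed)
print(f"Growth time needed: {t_grow_myr:.0f} Myr")       # ~910 Myr
```

In other words, even accreting flat-out at the Eddington limit from the word go, a stellar-mass seed needs something like 900 million years, comfortably more than the roughly 690 million available, which is precisely the puzzle.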

This is a puzzle, and no mistake.

Now, there are some possible ways to resolve this. The first is that this is the merger of several smaller black holes that formed fairly early on after the last scattering. This seems statistically unlikely, though we can't rule it out as yet and, since this is the first observation of a large black hole at such a distance, and since this comes from a survey of only a very small portion of the sky, it may be that we find that this isn't anything like as uncommon as we think. It does seem to put a bit of a spanner in the works in terms of obtaining such high density from such a smooth distribution only a short time before, so this is going to be fruitful ground for research, not only in further observing this quasar itself, but also in the search for similar objects at commensurate redshifts.

Another possibility, but possibly a somewhat more troubling one in terms of our models, is that this black hole started forming during the recombination epoch, while electrons were still binding to protons, although how it would do this without leaving any evidence in the CMBR creates even more problems.

And the final possibility, though not one I'd give a huge amount of credence to, is that this black hole actually formed while the universe was still a plasma. Again, I'd expect evidence of this to be pretty obvious in the CMBR, so it causes more problems than it solves.

We can't really rule out either of the latter two scenarios, despite the lack of observed evidence in the CMBR, until we rule out the possibility that we've actually observed such evidence and corrected it away without realising it. It's certainly been the case in the past that announcements have been made of this or that thing in the CMBR when we've failed to correct for something that accounts for it, like the BICEP2 discovery that hit the world with some fanfare a few years ago, so it would be hubris to suppose that there was no possibility that we've over-corrected, or that some other assumption has been making an ass of you and me.

As always, thanks for reading, and a special thanks to my donors and patrons. You are awesome.

Corrections, nits and crits always gratefully received.

Further reading:
Arxiv pre-print server upload of the full paper.
Nature publication of the paper.
All Downhill From Here Evolution and entropy, featuring a potted history of the universe.

It Wasn't Big, and it Didn't Bang Before the Big Bang Part I
You Must Be Off Your Brane! Before the Big Bang Part II
The Certainty of Uncertainty Before the Big Bang Part III
Scale Invariance and the Cosmological Constant Dark matter, dark energy, and the Lambda term in General Relativity
The Black Hole on the Edge of Forever Article about this discovery by Phil Plait, the Bad Astronomer

*This is an estimate based on the mass and the number of stars, among other variables. At this point, we don't actually know how many stars are in the Milky Way, but we estimate somewhere between 200 and 400 billion, which makes for a pretty large margin of error. This isn't an easy one to resolve because of our placement inside the galaxy. All else aside, our view is hugely obstructed. This is another of those areas in which we might see some hope for resolution now we've entered the era of gravitational-wave observatories (GWOs), but that's likely a long way off, as we're probably going to need second or third generation GWOs at least to achieve the required resolution; watch this space.