I've been asked by a Twitter friend to do a layman's

*précis* of a couple of papers published in the last couple of days by André Maeder of Geneva University. As they deal with some things we've covered that need some fleshing out, I thought I might as well address them in a new post.

First, as always, a bit of background. We've discussed in a previous post how Einstein's general theory of relativity, published in 1915, strongly implied that the cosmos couldn't be static, a consequence worked out by Alexander Friedmann and, independently, by Georges Lemaître. It had either to be expanding or contracting, because there was nothing to hold the cosmos open against the gravitational attraction of all the matter in it: if it were static, that mass should cause it to contract. Lemaître concluded that the cosmos must be expanding and formulated the first of a general class of theories now known as 'big bang'. Lemaître's own version was his 'primeval atom' hypothesis, often glossed as the 'cosmic egg'.

The majority of physicists of the time - Einstein included - were strongly of the opinion that the cosmos was static and eternal, so Einstein modified his equations to make it static by adding a term, the lambda term (Λ), which he called the cosmological constant. Later, Edwin Hubble nailed the case shut (we think) when he observed that the universe was larger than our galaxy alone, and that it was expanding. Einstein removed the term and reportedly deemed the cosmological constant the worst blunder of his career. It was a blunder, too, despite Einstein's fans waxing lyrical: there was no good reason to insert the term other than to fudge the result to fit a prejudice. The Λ term corresponds to an energy density of empty space, and its value sets a rate of expansion or contraction. Properly it's a free parameter, but Einstein fixed it at precisely the value that balanced the gravitational attraction of all the matter - an unstable balance, as it happens, since the slightest perturbation would tip such a universe into runaway expansion or collapse.

Then, in the 90s, astronomers were thinking about how, with all the mass in the universe, expansion should be slowing down, so they set out to measure it. They used a very clever trick to do so.

There's a well-understood phenomenon in the evolution of stars. Stars of relatively low mass - our sun is one such star - always go through the same final stages in their evolution. Initially the star swells to a red giant, and then it sheds its outer layers and shrinks down to a white dwarf. Crucially, there's a maximum mass a white dwarf can have, known as the Chandrasekhar Limit, of about 1.4 solar masses. A stellar core that ends up beyond this limit will instead collapse into a neutron star or, if it has sufficient mass, a black hole.

Now, the thing about a white dwarf is, yer basic feature of a white dwarf is that it has a maximum mass, something that causes a marvellous effect where you find a white dwarf in, for example, a binary system. A white dwarf can still accrete mass if there's sufficient matter in its vicinity and, in binary systems especially, a white dwarf can actually gain mass drawn off its partner's surface. This means, of course, that it can breach the Chandrasekhar Limit, at which point it detonates in a runaway thermonuclear explosion, typically destroying the star entirely. Because this breach always occurs at the same mass, these particular supernovae, known as type Ia supernovae, always shine with roughly the same intrinsic brightness. From a supernova's apparent brightness, and some clever sums, we can work out how far away it is or, more accurately, how far away it was when the light left it and, by corollary, how long ago it left. By measuring how far the spectral lines in its light are shifted towards the red, we can calculate its redshift, which tells us how quickly it's receding from us (actually, it's not moving away at all; it's sitting still while the space between us expands).
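The 'clever sums' above can be sketched in a few lines. This is a toy illustration, not anything from the papers: it assumes the standard distance-modulus relation and a canonical peak absolute magnitude of about -19.3 for type Ia supernovae; the example magnitudes and wavelengths are made up for demonstration.

```python
import math

# Assumed canonical peak absolute magnitude of a type Ia supernova.
M_ABS = -19.3

def luminosity_distance_pc(apparent_mag):
    """Distance in parsecs, from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - M_ABS + 5) / 5)

def redshift(observed_nm, rest_nm):
    """z = (lambda_observed - lambda_rest) / lambda_rest."""
    return (observed_nm - rest_nm) / rest_nm

# A supernova seen at apparent magnitude 24 comes out around 4.6 billion parsecs away:
d = luminosity_distance_pc(24.0)

# A hydrogen-alpha line (rest wavelength 656.3 nm) observed at 721.9 nm gives z of about 0.1:
z = redshift(721.9, 656.3)
```

The key point is that the intrinsic brightness is (roughly) known, so apparent brightness alone pins down the distance, while the line shift pins down the recession.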

When we put all this together, we can actually work out how fast the cosmos was expanding at different points in time.

So our two teams of astronomers - the Supernova Cosmology Project, led by Saul Perlmutter in the US, and the High-Z Supernova Search Team, led by Brian Schmidt in Australia along with Adam Riess - set out to measure the rate of expansion, hoping to come away with a figure for how much the expansion was slowing.

To their great surprise, observations showed that it wasn't slowing at all; the expansion was actually accelerating! Indeed, it had begun accelerating roughly five billion years ago, some nine billion years after the Planck time.

What this means in a nutshell is either that (a) one side of the equation of general relativity is wrong and we don't understand gravity as well as we thought we did, or (b) the other side is wrong and there's some energy/mass in the universe we can't account for. Broadly, this effect is called dark energy (although dark energy is also the name of one of the models attempting to explain it).

In any event, we now need a term that deals with the rate of expansion, because it's now known to be variable. Where can we find something like that?

Ah, the cosmological constant is just such a term! Inserting the Lambda term again allows us to model the evolution of the cosmos over time, adjusting for the rate of expansion in a dynamic way, and we call it the cosmological constant as a matter of historical contingency.

Now that we can see the expansion is accelerating, it means that there is some 'force' acting to overcome the attraction of all the mass in the cosmos. It may well be that this effect is simply gravity, which is known to be repulsive under some solutions of general relativity.
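For the mathematically inclined, this tug-of-war is captured by the standard Friedmann 'acceleration' equation that follows from general relativity with a Λ term included. Ordinary matter (density ρ, pressure p) pulls the first term negative, decelerating the expansion, while a positive Λ pushes the other way:

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
```

Here a is the scale factor of the universe; when the Λ term dominates, the acceleration ä turns positive.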

What about dark matter?

Dark matter is again a placeholder. It's the name we give to an effect that doesn't match our expectations regarding the orbits of stars in the outer edges of galaxies, especially small galaxies that rotate quickly: given the amount of matter we can actually detect, these stars are travelling too quickly to be gravitationally bound to their galaxies. In other words, they're moving so fast in their orbits that they should be escaping. That means either there is something wrong with our picture of gravity, or there's something there that we can't see exerting sufficient gravitational influence to keep those stars in their orbits. Whatever the solution to these anomalous observations turns out to be, the observed effect itself is called dark matter. The term is another matter of historical contingency, arising from confidence that our model of gravity is largely correct - confidence stemming from the fact that it's withstood huge amounts of testing.
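To see why those outer stars are surprising, here's a toy Newtonian calculation. It assumes (my numbers, purely illustrative) that all of a galaxy's visible mass sits inside the orbit; on that assumption, orbital speed should fall off as you move outwards, whereas observed rotation curves stay roughly flat.

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_GALAXY = 2e41  # assumed visible mass of a spiral galaxy, kg (~10^11 suns)

def keplerian_speed(r_m):
    """Circular orbital speed if all mass M sits inside radius r: v = sqrt(G*M/r)."""
    return math.sqrt(G * M_GALAXY / r_m)

# If visible matter were all there is, doubling the radius should cut
# the orbital speed by a factor of sqrt(2)...
v_inner = keplerian_speed(3.0e20)  # roughly 10 kiloparsecs out
v_outer = keplerian_speed(6.0e20)  # roughly 20 kiloparsecs out
# ...but measured rotation curves stay nearly flat out to the halo,
# which is the anomaly we label 'dark matter'.
```

The mismatch between that predicted sqrt(2) drop and the flat curves we actually measure is the effect in question.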

The proposed solutions are many and varied, including WIMPs (Weakly Interacting Massive Particles), MACHOs (Massive Compact Halo Objects - because the effect is most observed in the haloes, or outer edges, of galaxies), an incorrect understanding of gravity, and ordinary matter residing on an adjacent brane, and ALL of them are called dark matter, because dark matter is simply what we call the observed effect.

Another important idea we need to touch on is 'scale invariance'. This does pretty much what it says on the tin, and deals with whether what we see differs at different scales. Dirac put it nicely, as cited in the first of the Maeder papers:

*"It appears as one of the fundamental principles of nature that the equations expressing basic laws should be invariant under the widest possible group of transformations."*

And the author goes on to point out that Maxwell's equations, for instance, are scale-invariant. There are many applications of scale-invariant functions in pure and applied mathematics; for example, scale invariance is an important factor in such things as probability distributions and fractal geometry.
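A quick, concrete illustration of what scale invariance means (my example, not one from the papers): a power law f(x) = x^k is scale-invariant, because rescaling x by some factor just multiplies f by a constant, leaving the form of the law unchanged.

```python
# A power law keeps its shape under rescaling: f(lam * x) == lam**k * f(x).
def f(x, k=2.5):
    return x ** k

lam = 3.0  # an arbitrary rescaling factor
for x in (0.5, 1.0, 7.0):
    # Stretching the input by lam only multiplies the output by lam**k.
    assert abs(f(lam * x) - lam ** 2.5 * f(x)) < 1e-6
```

Contrast that with, say, f(x) = x + 1, where rescaling x changes the relative importance of the two terms and the law does not keep its shape.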

It didn't have to be this way, of course. The author notes that Galileo had inferred that the laws of physics generally vary with scale, from observations that *"the strength of materials were not in exactly the right proportion to their size"*.

One thing we know is not scale-invariant is general relativity, because as soon as a system has non-zero mass the theory acquires a natural scale, in both distance and time, set by that mass.
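One such natural scale is the Schwarzschild radius, a length fixed entirely by the mass (this example is my illustration of the point, not drawn from the papers):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # mass of the sun, kg

def schwarzschild_radius_m(mass_kg):
    """r_s = 2*G*M / c^2: a length picked out by the mass itself,
    so a massive system in GR comes with a built-in scale."""
    return 2 * G * mass_kg / C ** 2

r_sun = schwarzschild_radius_m(M_SUN)  # about 2.95 km for the sun
```

Because the mass singles out this particular length (and a corresponding light-crossing time), you can't rescale the system freely and leave the physics unchanged, which is exactly what scale invariance would require.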

Anyhoo, the two papers are these, posted to the arXiv on 23rd May.

*Scale invariant cosmology I: the vacuum and the cosmological constant.*

*Scale invariant cosmology II: model equations and properties*

In the first paper, Maeder is proposing that a gauge transformation relates the **Λ** term in GR and the **λ** term in scale-invariant mathematics via the equations given in the paper, meaning that scale invariance for empty space can enter a model with a cosmological constant.

The second paper continues and shows the derivations of the equations that relate the two frameworks.

I haven't unpacked the equations, but these are certainly interesting ideas, and could lead to a simplified approach to finding more classes of solutions to Einstein's field equations. Such solutions are notoriously difficult to find, which is part of the reason we still use Newton's theory for most everyday applications.

As always, I look forward to any nits, crits, corrections and suggestions.

Further reading:

Before the Big Bang Part II