Monday, 25 May 2015

Accelerating universe expansion and standard candles

Most type Ia supernovae are thought to arise from the thermonuclear detonation of white dwarfs that are composed almost entirely of carbon and oxygen.



These white dwarfs are the cores of relatively low-mass stars that have completed their lives: after stages of core hydrogen and helium burning, the degenerate carbon/oxygen core is left behind as a cooling white dwarf once the outer envelope has been shed during the asymptotic giant branch and planetary nebula phases. As such, their composition is, at least to first order, almost independent of the initial composition of the star from which they formed. That is, even if the progenitor star had a very low initial metal content, the white dwarf it produced would still be almost exclusively a carbon/oxygen mixture, with a similar Chandrasekhar mass and a similar explosive potential.
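To see why the exact composition hardly matters, note that the Chandrasekhar mass depends on the mixture only through the mean molecular weight per electron, which is very close to 2 for both carbon and oxygen. Here is a back-of-envelope sketch of my own (not from the post), using the standard approximation that the Chandrasekhar mass is roughly 5.83 divided by the square of the mean molecular weight per electron, in solar masses:

```python
# Rough illustration (my own): the Chandrasekhar mass depends on composition
# only through the mean molecular weight per electron, mu_e.
# Approximation used: M_Ch ~ 5.83 / mu_e**2 solar masses.

def mu_e(mass_fractions):
    """mass_fractions: dict mapping (Z, A) -> mass fraction X_i."""
    return 1.0 / sum(X * Z / A for (Z, A), X in mass_fractions.items())

def chandrasekhar_mass(mass_fractions):
    return 5.83 / mu_e(mass_fractions) ** 2   # in solar masses

pure_carbon = {(6, 12): 1.0}
pure_oxygen = {(8, 16): 1.0}
fifty_fifty = {(6, 12): 0.5, (8, 16): 0.5}

for name, comp in [("pure C", pure_carbon), ("pure O", pure_oxygen), ("50/50 C/O", fifty_fifty)]:
    print(f"{name:10s} mu_e = {mu_e(comp):.2f}  M_Ch ~ {chandrasekhar_mass(comp):.2f} M_sun")

# All three cases give mu_e = 2 and M_Ch ~ 1.46 M_sun, which is why the exact
# C/O ratio (and, to first order, the progenitor's metallicity) barely matters.
```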



It is well known, however, that not all type Ia supernovae are the same. Their light curves are subtly different, and a so-called stretch factor can be applied to obtain a "corrected" peak magnitude, a relationship also known as the width-luminosity relation.
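As a concrete illustration of how such a correction works (the slope and fiducial stretch below are purely illustrative assumptions, not values from the post or any particular fit), a stretch-type correction simply adjusts the observed peak magnitude linearly with the light-curve width:

```python
# Illustrative stretch-type width-luminosity correction (my own sketch).
# alpha = 1.5 and the fiducial stretch s = 1 are assumptions for this example.

def corrected_peak_mag(m_peak, stretch, alpha=1.5):
    """Return a 'corrected' peak magnitude. Broader, slower-declining light curves
    (stretch > 1) are intrinsically brighter, so their apparent magnitude is nudged
    fainter to match what a fiducial stretch = 1 event would show at the same distance."""
    return m_peak + alpha * (stretch - 1.0)

# Example: two supernovae with different observed peaks and stretches
for m, s in [(19.2, 1.10), (19.6, 0.85)]:
    print(f"observed m = {m:.2f}, stretch = {s:.2f} -> corrected m = {corrected_peak_mag(m, s):.2f}")
```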



More recently there has been a realisation that type Ia supernovae could arise from either accretion or mergers, and there is clear evidence that the amount of radioactive 56Ni produced varies from explosion to explosion. A very recent paper by Milne et al. (2015) has, however, challenged the view of metallicity independence. They claim there are two populations of type Ia SNe, connected with progenitor metallicity, and that these populations become more apparent at high redshift when looking at rest-frame ultraviolet emission. The gist of their conclusions is that this may go some way towards ameliorating (but not eliminating) the need for dark energy.
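To get a feel for why a brightness systematic matters so much here (my own arithmetic, not from the paper): luminosity distances are inferred from the distance modulus, so even a small magnitude offset between two supernova populations translates directly into a distance, and hence cosmology, bias, while the acceleration signal itself amounts to only a few tenths of a magnitude of apparent dimming.

```python
# My own arithmetic, not from Milne et al.: how a systematic magnitude offset
# between SN Ia populations maps onto an error in inferred luminosity distance.
# From the distance modulus mu = 5*log10(d_L / 10 pc), a shift delta_m in the
# standardised magnitude changes the inferred d_L by a factor 10**(delta_m / 5).

def distance_bias(delta_m):
    """Fractional error in luminosity distance from a magnitude offset delta_m."""
    return 10 ** (delta_m / 5.0) - 1.0

for dm in (0.05, 0.1, 0.2):
    print(f"offset of {dm:.2f} mag -> {100 * distance_bias(dm):.1f}% error in inferred distance")
```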
