Saturday 31 October 2015

botany - How to decide which is the correct scientific name for a particular species

When it comes to plants and animals, common names clearly differ from region to region.



A first systematic effort to classify them unambiguously was made in the 18th century by Carl Linnæus (see my answer to this question for some historical background).



The nomenclature of plants is governed by the International Code of Nomenclature for algae, fungi, and plants (ICN), and that of animals by the International Code of Zoological Nomenclature (ICZN).



The fact that there are rules, however, does not imply that names are conserved over time, nor that there is a rule for everything!



As time goes by, rules change, certain species may be subsumed into others, a subspecies may be split off as a species in its own right, and at times proposed name changes even wreak havoc in the scientific community.



In summary, it is important to remember that taxonomy sets rules whose purpose is to make it easier to talk about science. These are not absolute rules; sometimes they are arbitrary, and therefore for certain species there is no univocal name (and certainly no single correct name).

How to calculate the new orbit after an instantaneous change in a well defined initial one?

Say one has a simple two-body system (where one body is much more massive than the other, for example a star-planet system) in which all the quantities are known: the eccentricity, masses, perihelion, etc. Now, at some point in the orbit, defined by the polar coordinates at that point, there is an instantaneous change in velocity. The question is: what will be the new orbit of the object around the other body? What are all the components of that orbit, such as eccentricity and perihelion? Can the system even be solved exactly, or only numerically?
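For the two-body case this can be solved exactly: the position and velocity immediately after the impulse fully determine the new conic section. A minimal sketch in Python, using the vis-viva equation and the eccentricity vector (in illustrative units where $GM = 1$, for a 2D orbit):

```python
import math

def orbit_elements(r_vec, v_vec, mu=1.0):
    """Return semi-major axis, eccentricity and periapsis distance
    from a 2D state vector (r, v), in units where mu = G*M."""
    rx, ry = r_vec
    vx, vy = v_vec
    r = math.hypot(rx, ry)
    v2 = vx * vx + vy * vy
    rdotv = rx * vx + ry * vy
    # vis-viva: the specific orbital energy fixes the semi-major axis
    a = 1.0 / (2.0 / r - v2 / mu)
    # eccentricity vector (points towards periapsis)
    ex = ((v2 - mu / r) * rx - rdotv * vx) / mu
    ey = ((v2 - mu / r) * ry - rdotv * vy) / mu
    e = math.hypot(ex, ey)
    return a, e, a * (1.0 - e)

# circular orbit at r = 1, then an instantaneous 10% prograde boost
a, e, rp = orbit_elements((1.0, 0.0), (0.0, 1.1))
# the burn point becomes the periapsis of the new ellipse (rp = 1)
```

Since the boost is prograde and applied on a circular orbit, the burn point becomes the periapsis of the new, slightly eccentric ellipse.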

Friday 30 October 2015

data analysis - Are there any astronomical phenomena that could emit strong radio waves with multiples of a discrete frequency?

In the New Scientist article Is this ET? Mystery of strange radio bursts from space, it is reported that several times since 2001, astronomers have detected fast radio bursts that seem to have a frequency of multiples of 187.5. In the article, the theory that it could be a pulsar is discounted.



Are there any astronomical phenomena that could emit strong radio waves with multiples of a discrete frequency?

Wednesday 28 October 2015

evolution - Are there differences in DNA between humans of today and humans from 2000 years ago?

The larger differences are most likely in epigenetic marks on the DNA. The environment is a lot different today than it was 2000 years ago and those differences are stronger determinants of epigenetic change than sequence change.



2000 years is only about 40 generations, and that is not very much time in which to see great differences in DNA sequence or allele frequencies, once founder populations are removed from the analysis.
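To get a feel for how little allele frequencies move in ~40 generations, one can use the standard Wright-Fisher expression for the variance of allele frequency under pure genetic drift. A rough sketch (the effective population size and starting frequency below are made-up illustrative values):

```python
import math

def drift_sd(p0, Ne, t):
    """Standard deviation of allele frequency after t generations of
    pure genetic drift (Wright-Fisher):
    Var = p0*(1-p0) * (1 - (1 - 1/(2*Ne))**t)."""
    var = p0 * (1 - p0) * (1 - (1 - 1 / (2 * Ne)) ** t)
    return math.sqrt(var)

# a hypothetical allele at 50% frequency in a population with Ne = 10,000
sd = drift_sd(0.5, 10_000, 40)   # roughly 0.02: barely moves in 40 generations
```

Even with a modest effective population size, 40 generations of drift shifts a common allele by only a couple of percentage points on average, which is why epigenetic and environmental differences dominate over such a short span.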



Height is highly influenced by genetics, but can be masked by environment, notably diet. Diets higher in protein are more likely to elicit the expression of the "height alleles", but only to a point, as too much protein is also unhealthy.



Added in Edit 4 Apr 2012: I should add that my response is written from the perspective of frequency of alleles or epigenetic marks across a population.

Monday 26 October 2015

When the universe expands does it create new space, matter, or something else?

Yes, space is constantly being created. The new space does not hold any matter (like atoms) or dark matter. This means that the density of normal and dark matter decreases at the same rate as the volume increases. However, dark energy, which is something completely different and thought to be a property of vacuum itself, is being created with the new space, so the density of dark energy stays constant.



This in turn means that while the early Universe (i.e. from when it was about 70,000 years old until it was almost 10 billion years old) was dominated by matter, the Universe is now dominated by dark energy.



And it will only get worse.

Venus passing behind the Earth's Moon, December 7, 2015

Well, depending on where you live, the probability is either 0 or 1, modulo cloudy weather.



From the picture below (from this site), you see that you'll need to be in the US to see the occultation. However, only in the regions outlined in cyan (West Alaska, East Siberia, East Canada, and in Caribbean islands east of the Dominican Republic) will it be during darkness (just before sunrise / after sunset).



But with binoculars (or without, if you have very sharp eyes), you should be able to see it over all of the US.



occultation map



(cyan=occultation at moonrise/moonset; red dotted=daytime occultation; blue=twilight occultation; white=nighttime occultation)




Miscalculation

After your edit, I see that you are not referring to the probability of seeing the occultation, but the probability of a mis-calculation. I'd say that those odds cannot really be calculated, but it can be said that they are extremely small. No physical theory can ever be proved, but after a sufficient amount of verifications, we usually accept a theory as "true for all practical purposes, until disproved". Many, many factors go into a calculation like this (all the way down to mathematical axioms).



But a calculation like this has to do with celestial mechanics, which is very well understood, and which continuously makes accurate predictions and hence is continuously empirically verified. So the odds you request are definitely much, much smaller than, say, the odds of calculating the weather tomorrow, or the mass of a galaxy cluster.



Uncertainties

Of course there are always uncertainties associated with such calculations, which give an uncertainty in the positions of celestial bodies. But if you take a look at this video, you'll see that the predicted path of Venus is more or less through the middle of the Moon. If the actual path did not cross the disk of the Moon, the calculation would be off by $\sim 15$ arcmin, the probability of which is virtually zero.

Sunday 25 October 2015

astrophysics - Astronomy Olympiad Gravitation Qn

Presumably you're supposed to start by calculating the minimum orbital period a comet can have for a 10,000 AU aphelion distance.



Then divide that period (number of years) by five, assuming we see on average one of these every five years.



That will give you an estimate for how many there are out there.



(I would also tell the person setting the question that there are some pretty wild assumptions built into it!)
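The arithmetic of the steps above is short enough to sketch. Assuming a near-parabolic orbit with aphelion 10,000 AU (so semi-major axis roughly half that) and Kepler's third law in years/AU units:

```python
# Kepler's third law in solar units: P [yr] = a [AU] ** 1.5
aphelion_au = 10_000.0
a = aphelion_au / 2.0        # perihelion ~0 AU, so a ~ aphelion / 2
period_yr = a ** 1.5         # minimum orbital period, ~350,000 years
n_comets = period_yr / 5.0   # one visit per 5 years -> ~70,000 such comets
```

The estimate assumes every comet in this population is on a similar orbit and that the observed rate samples them uniformly, which is exactly the kind of wild assumption the answer warns about.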

Saturday 24 October 2015

binoculars - Telescope optical tolerance from central axis

I sometimes encounter telescopes/binoculars that have a decent objective diameter but are very difficult to use, because they only work when the eye is looking down the central axis (i.e. at the centre of the field) or close to it; if you try to focus on an object towards the edge of the field of view, it goes black. Is there a name for this tolerance? It doesn't sound related to exit pupil, but it is probably something to do with eye relief, i.e. the distance of the eye from the eyepiece lens.

Friday 23 October 2015

quasars - What is projected separation and how can I make sense of its unit (h^-1 kpc)?

It means the separation the two objects would have if they were both at the same distance. This separation is found by multiplying the angular separation (in radians) by the distance to the objects.



Often, the distance to extragalactic objects depends on the assumed value of the Hubble parameter. In this case, $h$ is a dimensionless number defined by $h = H_0/100$, where $H_0$ is the Hubble parameter in km/s per Mpc.



The current best value of $H_0$ is about 70 km/s per Mpc, so $h \simeq 0.7$.
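As a concrete sketch of the calculation (the angular separation and distance below are made-up numbers): a pair separated by 10 arcsec at a distance of 100 $h^{-1}$ Mpc has a projected separation of about 4.85 $h^{-1}$ kpc.

```python
ARCSEC_TO_RAD = 1.0 / 206_265.0   # radians per arcsecond

def projected_separation_kpc(theta_arcsec, distance_h_inv_mpc):
    """Projected separation in h^-1 kpc: angular separation (in radians)
    times distance. The h^-1 factor carries through from the distance,
    so the result is independent of the assumed Hubble parameter."""
    return theta_arcsec * ARCSEC_TO_RAD * distance_h_inv_mpc * 1000.0

sep = projected_separation_kpc(10.0, 100.0)   # ~4.85 h^-1 kpc
```

Quoting the result in $h^{-1}$ kpc lets readers rescale it to whatever value of $H_0$ they prefer.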

Thursday 22 October 2015

star - Astronomical databases for machine learning?

The European Southern Observatory has catalogues with image data available from http://www.eso.org/qi/, you will have to register before you are able to access them.



I'd suggest you look at other observatories' websites for their data. You will have to look past the pages targeted at the general public and find links labelled data, science, user portal, or something like that. They are sometimes difficult to find.



Ideally you would have a standard annotated data set of images for classification so that you will be able to compare your results with others. Unfortunately I'm not aware of any standard sets.



For literature on using pattern recognition on astronomical images, http://astrometry.net/biblio.html is a good resource. They've created an ML system that can not only distinguish stars from other object classes, but also identify which stars are visible in an image! Very interesting research.



NB: For classifying stars in clusters, the problem is likely not classification but segmentation.

Wednesday 21 October 2015

Theoretically, what is the biggest optical telescope that may exist?

It's complicated.



Until the late 20th century, we tried to make bigger and bigger monolithic telescopes. That worked pretty well up to the 5 meter parabolic mirror on Mount Palomar in California in the 1940s. It kind of worked, but just barely, for the 6 meter mirror in the Caucasus in Russia in the 1970s. It did work, but it was a major achievement, for the twin 8.4 meter mirrors of the LBT in Arizona in the 2000s.



We've learned eventually that the way to go is not by pouring larger and larger slabs of low-expansion glass. It is generally accepted that somewhere just below 10 meters diameter is about as large as possible for monolithic mirrors.



The way to go is to make smaller mirror segments (1 meter to a few meters in diameter each) and combine them into a tiled mirror. It's somewhat harder to grind the asymmetric parabolic (or hyperbolic, elliptic, or spherical) reflecting surface into a segment like that, but it's far easier to manage thermal and cooling issues when you have to deal with smaller solid objects.



Each segment is mounted in an active mirror cell, with piezo actuators that very precisely control its position. All segments must combine into a single smooth surface with a precision better than 100 microns (much better than that in reality). So now you have a large array of massive objects, dynamically controlled via computer, each with its own vibration modes, each with its own source of mechanical noise, each with its own thermal expansion motions, all of them "dancing" up and down a few microns on piezo elements.
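To get a feel for the scale of such an array, one can estimate how many hexagonal segments it takes to tile a filled aperture. A back-of-the-envelope sketch (the 39 m aperture and 1.4 m flat-to-flat segment size are illustrative, roughly in the range of current extremely-large-telescope designs):

```python
import math

def n_segments(aperture_m, segment_flat_to_flat_m):
    """Rough count of hexagonal segments needed to tile a filled
    circular aperture (ignores the central obstruction and edge cuts)."""
    aperture_area = math.pi * (aperture_m / 2.0) ** 2
    # area of a regular hexagon with flat-to-flat distance d: (sqrt(3)/2) * d^2
    hex_area = (math.sqrt(3) / 2.0) * segment_flat_to_flat_m ** 2
    return aperture_area / hex_area

n = n_segments(39.0, 1.4)   # on the order of 700 segments
```

Since segment count grows with the square of the diameter, this simple ratio also shows why complexity and cost scale so steeply with aperture.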



Is it possible to orchestrate a very large system like that? Yes. The 100 meter OWL was considered feasible technically. From the perspective of keeping the mirrors aligned, an even larger structure should be doable; the computer-controlled actuators should overcome most vibrations and shifts up to quite large distances.



Like you said, the real limits are financial. The complexity of such a system increases with the square of the diameter, and with complexity comes cost.




The entire discussion above was about "filled aperture" telescopes: given a round shape of a certain diameter, it is filled with mirror segments. For a given aperture, this design captures the largest amount of light.



But the aperture does not have to be filled. It can be mostly empty. You could have a few reflecting segments on the periphery, and the center would be mostly void. You'd have the same resolving power (you would see the same small details), it's just that the brightness of the image would decrease, because you're capturing less light total.



This is the principle of the interferometer. The twin 10 meter segmented Keck mirrors in Hawaii can work as an interferometer with a baseline of 85 meters. This is effectively equivalent to a single 85 meter aperture in terms of resolving power, but obviously not in terms of image brightness (amount of light captured).
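The resolution gain can be sketched with the diffraction limit $\theta \approx \lambda / D$, treating the baseline as the effective aperture. The 2.2 μm wavelength below is an assumed value, typical of near-infrared interferometry:

```python
RAD_TO_MAS = 206_265.0 * 1000.0   # milliarcseconds per radian

def resolution_mas(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution theta ~ lambda / D,
    returned in milliarcseconds."""
    return wavelength_m / aperture_m * RAD_TO_MAS

single = resolution_mas(2.2e-6, 10.0)    # one 10 m mirror: ~45 mas
combined = resolution_mas(2.2e-6, 85.0)  # 85 m baseline: ~5 mas
```

The 85 m baseline sharpens the resolution by the ratio of the baselines (8.5x here), while the light-collecting area, and hence the limiting brightness, stays that of the individual mirrors.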



The US Navy has an interferometer in Arizona with mirrors placed on 3 arms in a Y shape, each arm 250 meters long. That gives the instrument a baseline (equivalent aperture) of several hundred meters.



U of Sydney has a 640 meter baseline interferometer in the Australian desert.



Interferometers cannot be used to study very faint objects, because they can't capture enough light. But they can produce very high resolution data from bright objects - e.g. they are used to measure the diameter of stars, such as Betelgeuse.



The baseline of an interferometer can be made extremely large. For terrestrial instruments, a kilometers-wide baseline is very doable now. Larger will be doable in the future.



There is talk of building interferometers in outer space, in orbit around Earth or even farther out. That would provide a baseline of at least thousands of kilometers. That's not doable now, but it seems feasible in the future.

plant physiology - Why can you graft two unrelated cacti successfully, but you cannot do this on garden trees?

Firstly, different genera of trees can occasionally be successfully grafted. For example, quince, genus Cydonia, may be used as a dwarfing rootstock for pear, genus Pyrus.



However, it is true to say that this is the exception rather than the rule.



In the case of plants in the family Cactaceae, I would suggest that grafting is usually successful for two main reasons:



  • Genetic similarity. Plants in the family Cactaceae, while often very different morphologically, have relatively little genetic diversity. This increases the likelihood of a successful graft.


  • Ease of aligning vascular cambium. This is required for a successful graft, and in cacti it is quite easy to achieve, since the cambium is clearly visible when you prepare the scion and stock.


It may also be relevant that cacti readily form callus tissue, which would also aid the grafting process.



Nyffeler, R. (2002), Am. J. Bot. 89(2): 312-326. doi: 10.3732/ajb.89.2.312

Monday 19 October 2015

gravity - "Up" and "down" in Space

From my understanding, "down" in space means going towards an object's gravitational pull, and "up" means going away from it. I get confused by this explanation, I believe simply because of how we perceive up and down here on Earth.



My question(s) is this: Can we go in all directions in space with the "Earth" interpretation of up, down, left, right, north west, south east, etc if we can get past gravitational pulls of objects in space?



Space Drawing



Looking at my GIMP version of the universe, I have the Earth and stars. So going up would essentially be going towards whichever of the stars in the picture has the strongest gravitational pull, even though the objects are on different planes?



In relation to my question above, I have imagined Earth in space as being like the 10th floor of an infinite elevator that can go in all directions. Say we could somehow pass star A and escape its gravitational pull; we would then essentially be "going down", right?



Now my other question: are there objects in every single direction in space? How about in the Boötes void? Since it's such a massive void and there aren't many objects around to pull you "down" or to move away from ("up"), couldn't we essentially go in any direction?



If my question seems confusing, I apologize; I'm confused about this myself!

Sunday 18 October 2015

zoology - How many mice are on the Earth?

Mark-recapture is the most frequently used method for small mammals. It's best when combined with uncertainty estimates and population dynamics models (e.g. projection matrix).
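The simplest mark-recapture estimate is the Lincoln-Petersen index: mark $M$ animals, later capture $C$, of which $R$ carry marks, and estimate $N \approx MC/R$. A minimal sketch with made-up trap numbers (real analyses use the bias-corrected Chapman variant plus confidence intervals and population dynamics models):

```python
def lincoln_petersen(marked, captured, recaptured):
    """Population estimate N ~ M * C / R from a single mark-recapture pass."""
    return marked * captured / recaptured

def chapman(marked, captured, recaptured):
    """Bias-corrected (Chapman) variant, better behaved for small samples."""
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1

# mark 100 mice; a later trapping session catches 80, of which 20 are marked
n_hat = lincoln_petersen(100, 80, 20)
```

The estimator assumes a closed population and equal catchability, assumptions that are routinely violated for rodents, which is why the uncertainty estimates mentioned above matter.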



The fluctuations of small rodent populations have long fascinated scientists, and various models have been developed. Logistic regression models can be used to estimate the likelihood of a mouse outbreak based on local rainfall data.



Estimates vary.



The rat population of New York City, for example, was thought to be 1 rat per 15 humans in the mid-1940s, 1:36 in the late 1940s (the Davis ratio), and 12:1 in 2002.



The population of mice is often estimated per hectare: 0.25-60/ha in buildings, 1-200/ha in the field, and over 1000/ha during outbreaks. With $10^9$-$10^{10}$ hectares of habitable land, the populations of mice and men are probably of the same order of magnitude.

Saturday 17 October 2015

star - What supernova created the iron currently found in Earth's core?

Iron is mainly made in, or is the product of decays from, nuclear-processed material inside supernovae.



As the Earth (and solar system) is around 4.5 billion years old, then the stars that manufactured the iron that is currently in the Earth's core died more than 4.5 billion years ago. Note that the solar system formed out of the gases that had been enriched by the supernova explosions of hundreds of millions of stars all mixed up together.



There are basically two categories of star that could have exploded as supernovae and disseminated this iron into the interstellar medium, from which the solar system could then have formed. The first is massive stars ($>8$-$10\,M_\odot$). These can produce iron and nickel in their cores as a result of silicon burning in the final stages of their evolution. There is then a brief core collapse followed by an explosion, and the resulting supernova can scatter some of this processed, iron-rich material into space.
The remnants of these ancient supernova explosions could be neutron stars or black holes. These are almost untraceable/unobservable now, but there should be around a billion of them in our Galaxy.



The second category is the progenitors of what are called type Ia supernovae. These are thought to arise from the thermonuclear explosion of a white dwarf star. White dwarfs are the end point of the evolution of less massive stars. The iron-producing type Ia supernovae of $>4.5$ billion years ago would have begun as stars with masses between about 1.5 and $8\,M_\odot$. These would have burned hydrogen, then helium, to produce a degenerate carbon and oxygen core. In most cases this core then simply cools and fades away as a white dwarf star. In type Ia supernovae, some event later in the white dwarf's life, either mass transfer from a companion or a merger with a companion, causes it to exceed its Chandrasekhar mass and triggers an instability that leads to the rapid total consumption of the star in a thermonuclear explosion. The products of this explosion include a large amount of nickel, which then radioactively decays into iron. Nothing is left of the white dwarf.



EDIT: Having established this we can begin to look at your edited question. Firstly, the gas and stars in the Galaxy are basically orbiting the Galactic centre. The orbital period at the Sun's radius is about 230 million years, so it has completed many Galactic orbits. Not only that, but it could have migrated in orbital radius as well. There are claims and counter claims in the literature and the issue is not settled. The Sun could have moved in or out by a significant fraction of its current Galactic orbital radius.



The high mass progenitors of core collapse supernovae will have been born (and died) very close to the Galactic plane. The same is not so true of type Ia supernovae, which had longer lived progenitors that could have moved significantly from the Galactic plane before exploding, and indeed would themselves have orbited the Galaxy many times. The gas expelled in a supernova explosion spreads out (over thousands of years) over tens of light years and becomes mixed into the interstellar medium. The interstellar medium is itself stirred and mixed by the energy input from these supernovae, but also due to the heating and winds of other stars, the tides of the galaxy and spiral arms. The interstellar medium appears to be quite homogeneous in terms of chemical composition, though radial gradients exist with scale lengths of order ten thousand light years.



In conclusion, what you ask is almost impossible to answer. The solar system iron almost certainly came from countless supernovae with a variety of progenitors, that would have exploded any time between almost the birth of the Galaxy 11-12 billion years ago (in fact the supernova rate was likely higher then) up until the Sun's birth. The biggest contributors would come from those stars inhabiting an annulus of many thousands of light years across, centered roughly on where the Sun was born, which is itself uncertain.

Friday 16 October 2015

genomics - DNA methylation and genome size


Is there any relationship between DNA methylation as a level of
stability to epigenetic states and genome size?




I would say yes, because methylation is used to disable genes in differentiated cells. Disabled genes in differentiated cells generally need to stay disabled to maintain normal behavior for the cell type. Larger genomes usually encode more different types of cells.




it is claimed that DNA methylation is not required for epigenetic
stability in Drosophila melanogaster and yeast, both genomes much
smaller than mammalian or plant genomes.




According to my book (S377 Molecular and Cell Biology, book 2, p. 140), Drosophila don't need methylation because they perform transcriptional regulation by persistent chromatin remodelling instead. Yeast, being single-celled, would have less need for this type of control.




Could it be that DNA methylation is needed to help activate/repress certain genomic regions on top of other epigenomic marks when the genome is so vast that there is a need for an extra level of marking?




It is used as a repressor because it inhibits hydrogen bonding of transcription factors and the like. I have not heard of it functioning as an activator.



There are also other uses for methylation; for example, newly synthesized DNA is hemimethylated: the parent strand is methylated and the daughter strand is not. From this it can be determined which is the parent strand during DNA repair.

Thursday 15 October 2015

homework - What forms the human amniotic sac?

In humans, the amnion (amniotic sac) persists from the primitive amniotic cavity1. One side of this is formed from the cytoblast (a prismatic epithelium) and the plasmodioblast. Together these two layers are the ectoplacenta or chorion. They are also referred to as Rauber's layer. These replace the lining epithelium of the uterus, whereupon internal cells undergo atrophy to create the amniotic sac.2



The other side of the amniotic sac is formed of the epiblast/ectoderm (internally) and the hypoblast (externally). The outer layer is comprised of prismatic cells (the epiblast), whereas the inner layer is flattened (the hypoblast/entoderm). This double layer forms the bilaminar blastodermic membrane.2



I'd really recommend that you borrow the referenced books from your university library, as I found the diagrams much easier to understand than the text.




1 Gray, Henry. "Embryology: Formation of Membranes." Ed. Robert Howden. Anatomy Descriptive and Surgical. Ed. T. P. Pick. 15th ed. London: Chancellor, 1994. 90. Print. Colloquially Gray's Anatomy



2 Gray, Henry. "Embryology: The Ovum." Anatomy Descriptive and Surgical. Ed. T. P. Pick and Robert Howden. 15th ed. London: Chancellor, 1994. 82-83. Print. Colloquially Gray's Anatomy

star - Why is the Sun's brightness and radius increasing, but not its temperature?

The effective temperature $T_\mathrm{eff}$ of a star, which is presumably what's been plotted, is defined through its relationship with the star's radius $R$ and luminosity $L$ by



$$L = 4\pi R^2 \sigma T_\mathrm{eff}^4$$



This comes from the assumption that the star radiates like a black body at the photosphere. While this isn't strictly true, it's quite accurate, and regardless, that's how we define the effective temperature. The actual surface temperature will be slightly different but also behave roughly as plotted.



So even if $T_\mathrm{eff}$ is constant, the star expands if it grows brighter. Also, you can see that the luminosity is more sensitive to temperature than to radius, so a moderate change in luminosity can be absorbed by a relatively small change in effective temperature.
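The relation is easy to check numerically; plugging in nominal present-day solar values recovers the familiar effective temperature:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff(luminosity_w, radius_m):
    """Effective temperature from L = 4 * pi * R^2 * sigma * T^4."""
    return (luminosity_w / (4.0 * math.pi * radius_m ** 2 * SIGMA)) ** 0.25

# nominal solar luminosity and radius
t_sun = t_eff(3.828e26, 6.957e8)   # ~5770 K for the present Sun
```

Because $T_\mathrm{eff}$ enters to the fourth power, a 10% rise in luminosity at fixed radius changes the effective temperature by only about 2.4%.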



While the luminosity is basically determined by the simple behaviour of the nuclear reactions in the core (in terms of temperature and density), the surface properties depend on how energy is being transported near the surface. For radiation, you have to consider what the opacity of the material is, which itself depends on ionization states and whatnot. It's easy enough to see why the luminosity grows (the core gets denser and also hotter, producing energy faster) but the determination of the surface properties is more complicated. For the Sun, it turns out the way shown in the plot after you solve all the equations with the relevant opacities.



Also, as an extreme counterexample to "brighter means hotter and smaller", remember that red giants are much brighter but also much cooler!



PS: I'm not sure of the source of the data, but I would guess the wiggle at the start is due to the star finishing its contraction onto the main sequence. That is, before the first minimum, energy is being released by gravitational contraction; after it, the energy from nuclear reactions starts to dominate.

Wednesday 14 October 2015

mission design - Why doesn't the New Horizons probe fly any nearer than 10,000 km from Pluto?

The long distance to the Sun mandates long exposure times. The New Horizons spacecraft needs to be relatively stable and its pointing accurate throughout these long exposure times.



New Horizons does not have a scan platform. The cameras and other science instruments are fixed with respect to the vehicle. The spacecraft has to turn as a whole to keep its scientific instruments pointed at Pluto. New Horizons also doesn't have control moment gyros or reaction wheels; all attitude control is via attitude thrusters.



The vehicle has to rotate by 180° from well before closest approach to well after closest approach. With a somewhat remote flyby, this 180° turn is spread out a bit. With a close-in flyby, this 180° turn has to happen rather quickly, right at flyby. The constant on/off thrusting that would be needed for a very close approach would do significant damage to the quality of the close-in imagery. There's little value to a close-in flyby if all that one sees is fuzz. A close-in flyby would also require considerably more fuel than a more remote flyby.
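The required slew rate can be sketched with a straight-line flyby: for a spacecraft passing at speed $v$ with closest-approach distance $d$, the peak angular rate needed to keep tracking the target is $\omega = v/d$. Using New Horizons' roughly 14 km/s flyby speed (the two distances below are illustrative):

```python
import math

def peak_track_rate_deg_s(speed_km_s, closest_km):
    """Peak angular tracking rate (deg/s) at closest approach of a
    straight-line flyby: omega = v / d at the moment of closest approach."""
    return math.degrees(speed_km_s / closest_km)

far = peak_track_rate_deg_s(14.0, 12_500.0)  # ~0.06 deg/s at 12,500 km
near = peak_track_rate_deg_s(14.0, 1_000.0)  # ~0.8 deg/s at 1,000 km
```

Halving the flyby distance doubles the peak slew rate, so a close-in pass demands much more aggressive (and image-blurring) thruster activity around closest approach.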

Friday 9 October 2015

space - At what size do objects burn up in the atmosphere when falling from orbit?

The Earth's atmosphere protects us from small impacts by both asteroids and man-made objects. This is well known from meteoroids: meteoroids as large as a few tens of meters in diameter usually fail to penetrate into the lower atmosphere, because they are fragmented and dispersed at high altitude. Fragmentation height depends mainly on the meteoroid's strength; only the strongest irons reach the surface in one piece.



We can extrapolate from meteoroids to man-made objects, which even when large do not have the same physical strength as a massive meteoroid body. This means that nearly all man-made objects will disintegrate before reaching Earth. Heavy metallic ones (iron) will disintegrate at lower altitudes than lighter ones such as glass and plastic.



Two comments:



  • This is not true for objects that have not reached orbit; e.g. rocket first- or second-stage engines will fall without disintegrating.

  • Even if they disintegrate, nuclear-powered satellites will cause some level of radioactive pollution, however widely scattered in the atmosphere.

Although the risk to human beings walking on Earth is next to nil, there is a real risk that man-made objects littering Earth orbits will impact other space vehicles.



Reference: "The Impact Hazard", David Morrison, Clark R. Chapman and Paul Slovic, in "Hazards due to Comets and Asteroids", T. Gehrels, Ed., 1994, The University of Arizona Press

Thursday 8 October 2015

What would happen if we changed Earth's orbit?

If one day we have enough technology to push the Earth a bit further away from the Sun to reduce global warming, is it true that this will start to distort the orbits of Mars and Venus, giving them a larger eccentricity and finally making them collide with the Sun or with each other?

evolution - Is there a dominant gene for right-handedness?

OMIM (Online Mendelian Inheritance in Man) is a good example to explain how complicated heritability usually is. Simple Mendelian traits, like the pea crosses where smooth and wrinkly peas segregate cleanly, allow us to distinguish individuals with pure dominant, heterozygous, and pure recessive genotypes. When we have done so, the chance of inheriting the trait can be predicted precisely (i.e. 25/50/25%).



Simple Mendelian traits observed in an individual are pretty doggone rare. Even things that are dominant, like dark hair, are usually more complex. I'm of 100% Japanese descent, but my son was born with blonde hair and it looks like it won't go darker than mousy brown; black hair is not really dominant, and he will have a color between my wife's and mine.



I also need to make a distinction between Mendelian traits, which are observed in the individual organism, and specific DNA variants, like a single-site mutation (A->T, C->T, etc.), which are usually Mendelian.



The rarity of Mendelian inheritance is because there are so many checks and balances in our gene networks that no single gene is solely responsible for any function. Medical research has discovered many single-mutation (Mendelian) diseases, but they are very rare in practice. There are single mutations that cause diabetes-like syndromes (called MODY diabetes), for instance, but more than 99% of diabetes cases are not caused by these mutations. An advantage conferred by a single mutation is quickly assimilated into the network, so that no single mutation can nullify the advantage.



Common mutations that cause diseases or other disadvantages are likewise relatively quick to disappear through adaptations.



I once saw the old hardbound copy of "Mendelian Inheritance in Man". It was not a small book, but it seemed to me, when I paged through it looking at traits like 'handedness', that there were only a few hundred simple Mendelian traits identified. Most of what we are is the result of the interplay of several or even many of our ~30,000 genes.



For an example of how this works, see this article about what happens to new fly genes.

Wednesday 7 October 2015

star - Supernova explosion nearby

The Chandrasekhar limit in general does not pertain to the mass of the star as a whole; it addresses the mass of the degenerate core. It's only in white dwarfs that the Chandrasekhar limit applies to the mass of the white dwarf as a whole, and that's because white dwarfs are almost entirely degenerate matter.



Consider a 1.6 solar mass star that is not a member of a multiple star system. Even though this star's total mass exceeds the Chandrasekhar limit, it will never go supernova. It will instead live for billions of years on the main sequence, then a bit longer as a post-main-sequence star, and finally end its life as a white dwarf with considerably less mass than the original 1.6 solar masses (and considerably less than the Chandrasekhar limit).



The star leaves the main sequence when it burns all of the hydrogen in its core. Two things happen at this point. One is that it starts burning hydrogen in a shell around an inert core of helium. The other is that it expands into a red giant. (Note: Some people are taught that a star becomes a red giant when it starts burning helium. This is not the case. This post-main sequence star is a red giant that is not yet burning helium.)



That inert core of helium increases in mass as hydrogen shell burning proceeds. A funny thing begins to happen with this increase in mass: the inert core becomes degenerate. An even funnier thing happens next: adding even more mass makes the degenerate core shrink in size. The star is poised for the next phase of its evolution, which is helium burning. The formerly inert, degenerate core of helium is no longer inert or degenerate at this stage. Think of this phase of a star's life as that of a main sequence helium burning star. It's definitely not on the main sequence, but much of the physics is the same. One thing that is different: the shell burning of hydrogen is still occurring.



This phase doesn't last long. The star will soon consume all of the helium in its core. At this point, the star becomes a red giant for a second time, ascending the asymptotic giant branch. The star has an inert core of carbon and oxygen surrounded by a shell of fusing helium, which in turn is surrounded by a shell of fusing hydrogen.



These end-of-life convulsions are not good for the stuff that surrounds the shell of fusing hydrogen. The star expels a lot of that gas into nearby space. Eventually it will expel most of the burning shells of hydrogen and helium, leaving an inert, degenerate core of mostly carbon and oxygen. This is a white dwarf. A planetary nebula of expelled gases surrounds the white dwarf. The white dwarf itself retains only a fraction of the star's initial mass; most of the mass is in that planetary nebula.



Larger stars are even more proficient at expelling mass than was this 1.6 solar mass star. For stars with an initial mass of about eight solar masses or less, the white dwarfs left behind at the end of the stars' lives will be less than the Chandrasekhar limit.



The fate of stars over ten solar masses is much more violent. At the end of their lives, they have lurking inside them a series of burning shells that surround an inert, degenerate core of iron. A star has about five days to live once that core of iron starts forming; that's about how long it takes to build up, from scratch, a degenerate iron core that approaches the Chandrasekhar limit. At or near the limit, the core starts to collapse, and the star explodes as a type II supernova.



A white dwarf can form a type Ia supernova with a little help from a binary neighbor from which the white dwarf is stealing mass. Conditions have to be just right for this to happen, but when they are, the white dwarf can gradually accumulate mass and go supernova. Since white dwarfs have expelled all of their hydrogen, there won't be any hydrogen signature in the explosion. In fact, it's the presence or absence of hydrogen in the supernova's spectrum that distinguishes type II from type I supernovae.

Monday 5 October 2015

human anatomy - Why do stars disappear when I look at them?

When there is little light, the color-detecting cone cells are not sensitive enough, and all vision is done by rod cells. Cone cells are concentrated in the center of the eye, whereas rod cells are very rare in the center (image source):



Density of rod and cone cells



When you focus on the star, the light is projected close to the center of the retina, where it will hit few rod cells. Thus the star appears to vanish.

human biology - What is the role of acetylcholine in blood pressure regulation?

I hope this answers your question:



So the heart is an electrically excitable tissue: it pumps due to action potentials that start from specialized heart cells called nodal cells and these nodal cells spread the action potential to surrounding heart cells. These nodal cells are found throughout the heart and they make up what's called the conducting system. These nodal cells don't need neural or hormonal input for them to spread action potentials, they have this mechanism to spontaneously start action potentials on their own called a pacemaker potential, BUT they can be influenced by neural (think sympathetic and parasympathetic) or hormonal input. These external inputs change the frequency (the pace) of the heart beat by influencing the pacemaker potential of the nodal cells.



Sympathetic nerve fibers release norepinephrine (with circulating epinephrine from the adrenal medulla acting similarly), which causes the pacemaker potential to occur at a faster rate; that's why during exercise the heart beats faster. Parasympathetic nerve fibers release acetylcholine, which causes the pacemaker potential to occur at a slower rate, thus slowing down the heart. What does this mean in regards to the cardiac cycle? It influences how much blood (the volume) is ejected during the cycle. Cardiac output (CO), the volume of blood ejected from the ventricles per minute (L/min), is determined by two factors: stroke volume (the volume of blood ejected by each ventricle during systole) and heart rate (CO = HR x SV). And therein lies a part of the answer to your question: when heart rate decreases because of parasympathetic stimulation, cardiac output should also decrease, provided stroke volume stays constant.
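To make the CO = HR x SV arithmetic concrete, here's a tiny sketch (my own illustration; the resting values are typical textbook numbers, not figures from the answer):

```python
def cardiac_output_l_per_min(heart_rate_bpm, stroke_volume_ml):
    """Cardiac output (L/min) = heart rate (beats/min) x stroke volume (mL/beat)."""
    return heart_rate_bpm * stroke_volume_ml / 1000.0

# Typical resting values: 70 beats/min, 70 mL per beat
baseline = cardiac_output_l_per_min(70, 70)   # 4.9 L/min

# Parasympathetic (acetylcholine) slowing of the pacemaker to 55 beats/min,
# with stroke volume held constant
slowed = cardiac_output_l_per_min(55, 70)     # 3.85 L/min
```

With stroke volume fixed, the drop in cardiac output scales directly with the drop in heart rate, which is exactly the point made above.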



I unfortunately do not know exactly how acetylcholine affects blood pressure. I've read a paper on how acetylcholine induced relaxation in hypertensive rats, but I'm not sure of the mechanism, e.g. how it affects systolic or diastolic BP.

Thursday 1 October 2015

2015: When last did both New/Full moon in a fortnight cause an Eclipse?

As written here:




Another oddity of nature is that solar eclipses and lunar eclipses tend to come in pairs – a solar eclipse always takes place about two weeks before or after a lunar eclipse.




And here:




Rules of Eclipses (Solar and Lunar) ... Eclipses tend to go in pairs or threes: solar-lunar-solar. A lunar eclipse is always preceded or followed by a solar eclipse (two weeks between them)




So, seems like a totally common thing.



P.S. The previous pair was first lunar on Oct 8, 2014, and then solar on Oct 23rd.