Sunday, 30 June 2013

fundamental astronomy - Cosmological redshift and comparing past galaxy sizes

If you were to divide the present-day
universe into cubes with sides 10 million light-years long, each
cube would contain, on average, about one galaxy similar in size
to the Milky Way. Now suppose you travel back in time, to an era
when the average distance between galaxies is one quarter of its
current value, corresponding to a cosmological redshift of z = 3.
How many galaxies similar in size to the Milky Way would you
expect to find, on average, in cubes of that same size? In order to
simplify the problem, assume that the total number of galaxies of
each type has not changed between then and now. Based on your
answer, would you expect collisions to be much more frequent at
that time or only moderately more frequent?



I am very confused by this question. From my intuition, I take it that volume equals distance^3. They have given us that the past distance would be 1/4 of the current one. Therefore, do I cube the 1/4 distance? Also, how does the redshift of z = 3 play a part in the solution? Thanks for any help! I do not need an answer, just how to start.
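
To make the starting point concrete, here is a minimal numeric sketch (Python; it assumes, as the problem states, a fixed total number of galaxies). The redshift enters because the average separation scales as 1/(1+z):

z = 3
separation = 1 / (1 + z)         # past/present separation ratio = 1/4, matching the problem
density_factor = (1 / separation)**3   # number density scales as separation^-3
print(density_factor)            # 64.0 Milky-Way-sized galaxies per 10-Mly cube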

orbital mechanics - When sending a probe to Mars, how is the optimal travel path calculated?

To define "optimal" you need an objective function that you are maximizing or minimizing. What is your objective function?



For real Mars missions, the objective function can be quite involved, since many factors are considered. Let's assume an impulsive trajectory (i.e. essentially on target for Mars immediately after launch). Then there is the mass that can be delivered to the target, which is a function of the launch energy and departure declination. There is the arrival velocity, which determines the orbit insertion propellant for an orbiter, or the heat shield capability and aspects of the entry trajectory for a lander. There is the approach declination, which determines what orbits you can get into with one burn, or what landing site latitudes you can access. There is the visibility of the insertion burn or entry from Earth for telecommunications during critical events, so that you have data if something goes wrong. There are Mars relay orbiter coverage constraints for the entry and landing event, again to get more data in case something goes wrong. You will need to define a few weeks of available launch days (usually three weeks) to allow for weather, range, launch vehicle, or spacecraft delays. Over that launch period, you will need to satisfy all the other constraints. You may want to have the arrival day be the same for all launch days, in order to simplify planning. You may want to not have that arrival day be on Super Bowl Sunday, so as to get better press coverage and to not annoy the crew. (I seriously took that into account once.)



I could go on. Does this answer your question?



Update:



To address the comment on optimizing for a specific parameter, e.g., $C_3$, the process for an impulsive trajectory is to make a porkchop plot. Like this:



[porkchop plot: Earth-Mars 2005 opportunity, departure date vs. arrival date, with blue injection-energy ($C_3$) contours]



For a given departure date from Earth and arrival date at Mars, there is one short, prograde orbit that connects them (see Lambert's problem). You then make a contour plot over a range of departures and arrivals of the parameters of interest. In the plot above, the blue contours are the injection energy. You can see two local minima, both around a $C_3$ of 16, for the 2005 opportunity. Though as noted, the $C_3$ doesn't tell the whole story. The departure declination, if greater than the launch site latitude, will reduce the injected mass at the same $C_3$.
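
For concreteness, here is a rough sketch of how such a porkchop grid is built. This is Python-style pseudocode: `ephemeris(body, t)` and `solve_lambert(r1, r2, tof, mu)` are assumed helpers (any ephemeris source and Lambert solver would do), not the API of a specific real library.

import numpy as np

def porkchop_c3(departures, arrivals, mu_sun=1.32712e11):   # mu in km^3/s^2
    """C3 (launch energy) over a grid of departure/arrival dates, in days."""
    c3 = np.full((len(arrivals), len(departures)), np.nan)
    for j, t_dep in enumerate(departures):
        r_earth, v_earth = ephemeris("earth", t_dep)        # assumed helper
        for i, t_arr in enumerate(arrivals):
            if t_arr <= t_dep:
                continue                                    # unphysical: skip
            r_mars, _ = ephemeris("mars", t_arr)
            tof = (t_arr - t_dep) * 86400.0                 # days -> seconds
            # assumed helper: short-way prograde transfer (Lambert's problem)
            v_dep, v_arr = solve_lambert(r_earth, r_mars, tof, mu_sun)
            v_inf = v_dep - v_earth                         # hyperbolic excess at Earth
            c3[i, j] = float(v_inf @ v_inf)                 # C3 = |v_inf|^2, km^2/s^2
    return c3

Contouring the returned grid over departure and arrival dates reproduces the blue $C_3$ contours above; arrival $v_\infty$ and the departure/approach declinations can be contoured over the same grid in just the same way.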

Saturday, 29 June 2013

velocity - Question regarding the Milky Way when calculating galactic space velocities for galaxies

I have been calculating galaxy space velocities (where proper motions are known) in order to measure their orbits around the Milky Way, using the method proposed in the appendix of http://www.aanda.org/articles/aa/pdf/2011/01/aa13415-09.pdf



The question I have is: do we put the Milky Way through the same calculation process to get an initial velocity, or do we take it to be (0,0,0), since it is essentially our origin point? (A non-zero U component would mean that the Milky Way is moving in the direction away from its own galactic centre, which doesn't really make sense to me, as this could be any direction.)



I have put it through the process and found an initial space velocity of the Milky Way of (-11.1, 232.24, 7.25) (my method differs slightly from the link since I have used an updated velocity vector for the motion of the Sun with respect to the Local Standard of Rest), which essentially just comes from correcting for the solar and galactic rotation motions.



So, is this calculated vector correct for the Milky Way or should it be (0,0,0)?



Thanks

stellar evolution - Where does energy at the beginning of a star's lifecycle (before any nuclear reactions) come from?

This is basic thermodynamics.



When you compress a gas, you inject energy into it. Think of the pump you use to inflate the tires on your bike. It takes some force to move the piston, right? That effort is not wasted, but goes directly into the air in the pump. Now the air has more energy.



But what happens to a gas when you put energy into it? Its molecules jiggle around faster. Well, faster jiggling is basically the definition of higher temperature. By putting more energy into the gas, you raise its temperature.



You can actually tell that the bike pump is getting warmer if you pump quickly and forcefully - this is something you can experience yourself.



Same with stars - the whole star is the "bike pump", and gravity is what pushes the piston. Due to compression (shrinking) under gravity, the gas gets hotter and hotter. It turns out a star has A LOT of gravitational energy, so the gas can get VERY hot.



In your terms, yes, it's the acceleration that molecules experience falling into the gravity well of the star that makes them move faster. Faster moving molecules = higher temperature. Pretty straightforward phenomenon, really.




Historically, gravitational compression was thought to be the main source of energy for the stars, before the discovery of nuclear physics. Helmholtz and Lord Kelvin proposed this hypothesis in the 1800s.



The pressure-temperature relation of any gas was originally known as the Gay-Lussac law. Now we know it's just a particular case of a more general phenomenon (the ideal gas law) tying together pressure, temperature, volume, and various kinds of energy.



A spectacular application of the p-T relation is the so-called "fire piston" or "fire syringe", which can ignite small pieces of cotton or paper just by driving a piston down really hard (extremely strong compression = big temperature rise). Search YouTube for some videos like this one:



https://www.youtube.com/watch?v=4qe1Ueifekg

Wednesday, 26 June 2013

How do we find the exact temperature of a star?

This question is very broad - there are very many techniques for estimating temperatures, so I will stick to a few principles and examples. When we talk about measuring the temperature of a star, the only stars we can actually resolve and measure are in the local universe; they do not have appreciable redshifts and so this is rarely of any concern. Stars do of course have line of sight velocities which give their spectrum a redshift (or blueshift). It is a reasonably simple procedure to correct for the line of sight velocity of a star, because the redshift (or blueshift) applies to all wavelengths equally and we can simply shift the wavelength axis to account for this. i.e. We put the star back into the rest-frame before analysing its spectrum.



Gerald has talked about the blackbody spectrum - indeed the wavelength of the peak of a blackbody spectrum is inversely proportional to temperature through Wien's law. This method could be used to estimate the temperatures of objects that have spectra which closely approximate blackbodies and for which flux-calibrated spectra are available that properly sample the peak. Both of these conditions are hard to satisfy in practice: stars are in general not blackbodies, though their effective temperatures (which are usually what is quoted) are defined as the temperature of a blackbody with the same radius and luminosity as the star.
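
As a quick illustration of Wien's law (a minimal sketch in Python; the constant is the standard value, and the three temperatures are arbitrary examples):

b = 2.898e-3                           # Wien's displacement constant, m*K
for T in (3000, 5778, 10000):          # a cool star, the Sun, a hot star
    print(f"{T} K -> peak at {b / T * 1e9:.0f} nm")
# 3000 K -> 966 nm (infrared); 5778 K -> 501 nm; 10000 K -> 290 nm (UV)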



The effective temperature of a star is most accurately measured by (i) estimating the total flux of light from the star; (ii) getting an accurate distance from a parallax; (iii) combining these to give the luminosity; (iv) measuring the radius of the star using interferometry; (v) this gives the effective temperature from Stefan's law:
$$L = 4\pi R^2 \sigma T_{\rm eff}^4,$$
where $\sigma$ is the Stefan-Boltzmann constant. Unfortunately the limiting factor here is that it is difficult to measure the radii of all but the largest or nearest stars. So measurements exist for a few giants and a few dozen nearby main sequence stars; but these are the fundamental calibrators against which other techniques are compared and calibrated.
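
A minimal worked example of step (v), using solar values purely for illustration:

import math

sigma = 5.670e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
L, R = 3.828e26, 6.957e8          # solar luminosity (W) and radius (m)

# invert L = 4 pi R^2 sigma T_eff^4 for T_eff
T_eff = (L / (4 * math.pi * R**2 * sigma)) ** 0.25
print(T_eff)                      # ~5772 K, the Sun's effective temperature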



A major secondary technique is a detailed analysis of the spectrum of a star. To understand how this works we need to realise that (i) atoms/ions have different energy levels; (ii) the way that these levels are populated depends on temperature (higher levels are occupied at higher temperatures); (iii) transitions between levels can result in the emission or absorption of light at a particular wavelength that depends on the energy difference between the levels.
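
Point (ii) is essentially the Boltzmann factor. A small sketch for hydrogen's first excited level (standard constants; the temperatures are arbitrary examples):

import math

k_B = 8.617e-5                        # Boltzmann constant, eV/K
dE, g1, g2 = 10.2, 2, 8               # H n=1 -> n=2 gap (eV); weights g = 2n^2

for T in (6000, 10000, 20000):
    ratio = (g2 / g1) * math.exp(-dE / (k_B * T))   # n2/n1 population ratio
    print(T, ratio)
# The n=2 population rises steeply with temperature, which is why line
# strengths (e.g. the Balmer series) can act as a thermometer.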



To use these properties we construct a model of the atmosphere of a star. In general a star is hotter on the inside and cooler on the outside. The radiation coming out from the centre of the star is absorbed by the cooler, overlying layers, but this happens preferentially at the wavelengths corresponding to energy level differences in the atoms that are absorbing the radiation. This produces absorption lines in the spectrum. A spectrum analysis consists of measuring the strengths of these absorption lines for many different chemical elements and different wavelengths. The strength of an absorption line depends primarily on (i) the temperature of the star and (ii) the amount of a particular chemical element, but also on several other parameters (gravity, turbulence, atmospheric structure). By measuring lots of lines you isolate these dependencies and emerge with a solution for the temperature of the star - often with a precision as good as +/-50 Kelvin.



Where you don't have a good spectrum, the next best solution is to use the colour of the star to estimate its temperature. This works because hot stars are blue and cool stars are red. The colour-temperature relationship is calibrated using the measured colours of the fundamental calibrator stars. Typical accuracies of this method are +/- 100-200 K (poorer for cooler stars).

telescope - How much competition is there for jobs in astronomy compared to other fields of science?

The competition for permanent positions in astronomy is very tough. The field as a whole produces roughly 200 PhDs per year, but there are usually only a handful (say ~10) of tenure-track positions that open up every year. So perhaps ~5% of PhDs end up in tenure-track positions in astronomy. There are more permanent positions in astronomy that aren't tenure-track, but not too many more - probably enough to support about 20% of PhDs.



As for non-permanent positions in astronomy, there are plenty of those. The funding situation for postdoctoral positions is such that, while there is still a great deal of competition for open positions (stemming partially from the fact that almost everyone applies to almost every open job --- I exaggerate, but only slightly), there are enough openings that almost all astronomy PhDs who want a postdoc will be able to get one. But those positions only last for an average of three years.



However, the job prospects for astronomy PhDs who decide to leave the field of astronomy (like myself) are quite good. Those holding an astronomy degree have one of the lowest unemployment rates of anyone (around 0.3%).



Here are a few links to some papers on the state of the job market in astronomy:

Friday, 21 June 2013

planet - How did Mars come to have a 24 hour 39 minute day?


"It's believed that the Earth was rotating about once every 5 hours
before the theorized collision with a Mars sized coorbiting object
referred to as Theia."




Almost. Theia did not have to be co-orbiting, just on an intersecting orbit. We have no idea what the Earth's spin was before the collision, but it is theorized that the Earth's rotation had a 5-hour period after the collision with Theia, at the time of the Moon's formation from the debris.



The fact that Mars and Earth have such similar periods is a coincidence; perhaps what you are really asking is why Mars is spinning so fast? Well, actually, Mars is not the odd man out - Mercury and Venus are. Most planets spin fast. Exactly which spin orientation a planet ends up with is somewhat arbitrary, determined by the vagaries of the ways the planetesimals collided to form it. The fact that Venus and Uranus have unusual spin orientations is just the way things turned out.



Both Mercury and Venus used to spin much faster. Mercury's spin was tidally slowed by the Sun, and Mercury's orbit was (and still is being) driven further away by the Sun (just like the Moon and Earth: Why is the Moon receding from the Earth due to tides? Is this typical for other moons?). Eventually Mercury was caught in its 3:2 spin-orbit resonance - which, by the way, involved a certain amount of luck (see: Mercury's capture into the 3:2 spin-orbit resonance as a result of its chaotic dynamics). Venus, we are not so sure of.



The tidal force from the Sun is much, much less for Venus than for Mercury, but much more than for Earth. However, Venus has a dense, hot, massive atmosphere, which can be forced into both gravitational bi-modal (two-peak) tides and thermal uni-modal (one-peak) tides. The bulge lags behind the tidal forcing peak, which lets the Sun exert a torque that slows the rotation. This is fiendishly complex (see: Long term evolution of the spin of Venus - I. Theory).




P.S. Actually Phobos, and probably Deimos, are thought to have formed fairly recently (millions of years ago) from debris from a collision of Mars with a large asteroid. There is no way to capture a whole asteroid into orbits that close.

Thursday, 20 June 2013

Does time slow down because the universe is expanding at an accelerating rate?

Yes, time does run slower$^\dagger$ for far-away objects, as observed from our point of view, because they recede from us at high speeds. And yes, because expansion accelerates, this time dilation slowly, very slowly, becomes more pronounced. This is a well-known effect, and is always taken into account when doing observations. For instance, when observing distant supernovae, one is often interested in how their luminosities decrease as a function of time. This is called their lightcurve. In order to compare lightcurves at different redshifts, they are usually converted to their restframe, i.e. how they would look if you were "standing next to the supernova".



However, time dilation does not work exactly as you seem to think. A galaxy at a redshift of $z$ has its time dilated by a factor of $1+z$, so time runs twice as slow for a galaxy at, say, $z=3$ as for a galaxy at $z=1$. Galaxies with redshifts larger than $z\sim1.5$ recede faster than the speed of light, and time does not stop there at all. Only for $z\rightarrow\infty$, i.e. at the beginning of time at the Big Bang, does the time dilation approach infinity.
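
Numerically (a trivial check of the factor above):

for z in (1, 3):
    print(f"z = {z}: observed clocks run {1 + z}x slower")
# (1+3)/(1+1) = 2, hence "twice as slow" at z = 3 relative to z = 1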



$^\dagger$EDIT: I originally wrote "faster" instead of "slower"

Monday, 17 June 2013

hydrostatic equilibrium - Why is 2007 OR₁₀ not a dwarf planet yet?

As the comments already say, an object being a dwarf planet is a matter of convention. If the IAU says it's a dwarf planet, it's a dwarf planet. Otherwise, it's not. The requirements you list from Wikipedia are the IAU criteria for pronouncing objects dwarf planets, but that does not mean that all objects fulfilling these criteria are dwarf planets. (Don't ask me for the reasoning of the IAU; I'm not an expert on astronomers' brain twists.)



There are a lot more examples of objects that could be dwarf planets, and are considered to be such by many researchers, despite not officially having the status of dwarf planet. Wikipedia's List of possible dwarf planets has about 200 objects facing the same issue as 2007 OR10.



Edit: as someone in the comments mentioned, not all objects on this list are actual candidates for dwarf-planet status (which is not what I meant, but I might indeed have implied it).

Saturday, 15 June 2013

photography - Gallery of 'actual images' from space?

Where can a gallery of actual unaltered photographic images taken in (or of) space be found? Specifically ones that are untouched, not colorized (not necessarily black and white, but they usually are), and taken by natural light photography? Pictures and videos claiming "actual image" are few and far between.



E.g., this:



[first image: Saturn, from NASA's Voyager gallery]



Not false color:



[second image: Saturn, from the same gallery]



Both of these pictures are from NASA's (Voyager) Saturn Images gallery. Some of the other ones there are listed as false color, some aren't (but obviously they are). Or maybe not so obviously, hence the question: what does it really look like out there?



I've a pretty good idea of what Saturn looks like IRL, because I've seen it in a telescope (exactly like the first picture, except it's more colorful - absolutely nothing like the second). For most other celestial objects, I have no such baseline.



The title of the website I'm looking for would be along the lines of: View of our solar system through the eyes of a human. Decidedly, not containing any pictures from the HST, as all of them are photoshopped.

Has celestial navigation been materially impacted by the imperfect nature of celestial reference frames over time?

In this video on inertial reference frames, it is mentioned that the stars are humanity's best inertial reference frame: the earth experiences a subtle acceleration relative to the sun due to the earth's own orbit and rotation; therefore the earth is only roughly an inertial frame. But, the stars are relatively fixed, they say.



However, the stars in galaxies move relative to each other--I assume many with some sort of rotational velocity and therefore acceleration--and galaxies themselves are all slowly moving away from one another, perhaps even with centrifugal acceleration of their own.



Has this acceleration been great enough that humanity has needed to update navigational tools that rely on the assumption that stars are an inertial reference frame?



Is it possible that every observable object in the universe is in fact accelerating relative to an unobservable reference frame?

star - Find constellation over Earth coordinates on a specific date-time

I would like some help in finding the exact constellations, or some sort of visualization of the stars, over a specific location at a specific time. The idea is to find the closest star or group of stars on the vertical of that position at that exact moment.



I'm also aware that "vertical" can have some different interpretations, but a nice approximation should be good enough.



I'm not an expert in physics or astronomy, but I think providing the coordinates and the date/time should be enough for an approximation.
According to Google Maps, the coordinates are:



39.467062 -0.377381



No idea what format that is (they look like decimal degrees); I'm familiar with degrees/minutes/seconds but I don't know exactly how to translate those. The time would be:




March 25, 2015, 09:40 (Spain)




, which would be 08:40 UTC - correct me if I'm wrong.



Any help would be appreciated; if you can't provide a direct answer, at least point me to some website, form, or program (Linux preferably) I can use to find this out.



In case someone is wondering, this involves a girl and a tattoo, so please bear in mind this is something I'm going to be carrying for the rest of my life :)



Thanks!
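
One way to do this programmatically: a short sketch using Python with astropy (assumed installed), which finds the zenith point for those coordinates and time and names the constellation it falls in:

import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

# The Google Maps numbers are decimal degrees: latitude, longitude.
loc = EarthLocation(lat=39.467062 * u.deg, lon=-0.377381 * u.deg)
t = Time("2015-03-25 08:40:00")        # UTC (09:40 in Spain, UTC+1 in March)

# "Vertical" = the zenith: altitude 90 degrees in the local horizontal frame.
zenith = SkyCoord(alt=90 * u.deg, az=0 * u.deg,
                  frame=AltAz(obstime=t, location=loc))
eq = zenith.icrs                       # convert to RA/Dec
print(eq.to_string("hmsdms"))
print(eq.get_constellation())

Alternatively, Stellarium (available on Linux) will show you the same sky interactively for any place and time.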

Thursday, 13 June 2013

gravity - What are the arguments against the Feng and Gallo thin disk explanation of galactic rotation curves?

Feng & Gallo have published a series of extremely similar papers, all of which essentially claim that they have "discovered" a major flaw in the way (some) astrophysicists think about rotation curves. They try to solve for the mass distribution implied by a rotation curve without assuming spherical symmetry, instead adopting a planar geometry with cylindrical symmetry.



Of course they do have a point; statements that the flat rotation curve can be compared with a Keplerian prediction (that assume spherical symmetry, or that all the mass is concentrated at the centre) are overly simplistic. So far so good, but they then go on to claim that their analysis is compatible with the total stellar mass of galaxies and that dark matter is not required.



So, in their planar model (and obviously this is open to criticism too) they invert rotation curves to obtain a radially dependent surface density distribution that drops pseudo-exponentially.



Problem 1: They concede (e.g. in Feng & Gallo 2011) that "the surface mass density decreases toward the galactic periphery at a slower rate than that of the luminosity density. In other words, the mass-to-light ratio in a disk galaxy is not a constant". This is an understatement! They find exponential scale lengths for the mass that are around twice (or more for some galaxies) the luminosity scale lengths, so this implies a huge, unexplained increase in the average mass-to-light ratio of the stellar population with radius. For the Milky Way they give a luminosity scale length of 2.5 kpc and a mass scale length of 4.5 kpc, so the $M/L$ ratio goes as $\exp[0.18r]$, with radius in kpc (e.g. it increases by a factor of 4 between 2 kpc and 10 kpc). They argue this may be due to the neglect in their model of the galactic bulge, but completely fail to explain how this could affect the mass-to-light ratio in such an extreme way.
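
The numbers quoted are easy to verify (a quick check in Python, using Feng & Gallo's two scale lengths):

import math

L_scale, M_scale = 2.5, 4.5            # kpc: luminosity and mass scale lengths
k = 1 / L_scale - 1 / M_scale          # M/L ~ exp(-r/4.5) / exp(-r/2.5) = exp(k r)
print(k)                               # ~0.18 per kpc, as stated
print(math.exp(k * (10 - 2)))          # ~4.1: the factor-4 rise from 2 to 10 kpc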



Problem 2: In their model they derive a surface mass density of the disk in the solar vicinity of between 150 and 200 $M_{\odot}/{\rm pc}^2$. Most ($\sim 90\%$) of the stars in the solar neighbourhood are "thin disk" stars, with an exponential scale height of $z_0 = 100$-$200$ pc. If we assume the density distribution is exponential with height above the plane and that the Sun is near the plane (it is actually about 20 pc above the plane, but this makes little difference), a total surface mass density of $\sigma = 200\,M_{\odot}/{\rm pc}^2$ implies a local volume mass density of $\rho \simeq \sigma/2z_0$, which is of order $0.5$-$1\,M_{\odot}/{\rm pc}^3$ for the considered range of possible scale heights. The total mass density in the Galactic disk near the Sun, derived from the dynamics of stars observed by Hipparcos, is $0.076 \pm 0.015\,M_{\odot}/{\rm pc}^3$ (Creze et al. 1998), which falls short by an order of magnitude. (This is not a problem for the standard dark matter halo model, because there the additional (dark) mass is not concentrated in the plane of the Galaxy.)
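
The arithmetic of Problem 2, spelled out (same numbers as above):

sigma = 200.0                          # local surface density, M_sun/pc^2 (F&G)
for z0 in (100.0, 200.0):              # thin-disc exponential scale heights, pc
    print(z0, sigma / (2 * z0))        # midplane density rho = sigma/(2 z0)
# Gives 1.0 and 0.5 M_sun/pc^3, versus the measured 0.076 +/- 0.015:
# an order-of-magnitude discrepancy.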



Problem 3: For the most truncated disc that they consider, with an edge at $r=15$ kpc, the total Galaxy mass is $1.1\times10^{11}\,M_{\odot}$ (again from Feng & Gallo 2011). The claim is then that this "is in very good agreement with the Milky Way star counts of 100 billion (Sparke & Gallagher 2007)". I would not agree. Assuming "stars" covers the full stellar mass range, then I wouldn't dissent from the 100 billion number; but the average stellar mass is about $0.2\,M_{\odot}$ (e.g. Chabrier 2003), so this implies $\sim 5$ times as much mass as there is in stars (i.e. essentially the same objection as Problem 2, but now integrated over the Galaxy). Gas might close this gap a little, and white dwarfs/brown dwarfs make minor/negligible contributions, but we still end up requiring some "dark" component that dominates the mass, even if not as extreme as the pseudo-spherical dark matter halo models. Even if a factor of 5 additional baryonic dark matter (gas, molecular material, lost golf balls) were found, this still leaves the problems of points 1 and 2: why does this dark matter not follow the luminous matter, and why does it not betray its existence in the kinematics of objects perpendicular to the disc?
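
Again the arithmetic is simple (numbers from the sources cited above):

M_total = 1.1e11                       # F&G Galaxy mass for a 15 kpc disc, M_sun
N_stars = 1.0e11                       # star count (Sparke & Gallagher 2007)
m_mean = 0.2                           # mean stellar mass, M_sun (Chabrier 2003)
print(M_total / (N_stars * m_mean))    # ~5.5x more mass than the stars supply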



Problem 4: Feng & Gallo do not include any discussion or consideration of the more extended populations of the Milky Way. In particular they do not consider the motions of distant globular clusters, halo stars or satellite galaxies of the Milky Way, which can be at 100-200 kpc from the Galactic centre (e.g. Bhattacharjee et al. 2014). At these distances, any mass associated with the luminous matter in the disk at $r \leq 15$ kpc can be well approximated using the Keplerian assumption. Proper consideration of these seems to suggest a much larger minimum mass for the Milky Way, independently of any assumptions about its distribution, though perhaps not in the inner (luminous) regions where dark matter appears not to be dominant and which is where F&G's analysis takes place. I.e. the factor of 5-10 "missing" mass referred to above may be quite consistent with what others say about the total disk mass and the required dark matter within 15 kpc of the Galactic centre (e.g. Kafle et al. 2014). To put it another way, the dynamics of these very distant objects require a large amount of mass in a spherical Milky Way halo - way more than the luminous matter, and way more even than derived by Feng & Gallo. For instance, Kafle et al. model the mass (properly, using the Jeans equation) as a spheroidal bulge, a disk and a spherical (dark) halo, using the velocity dispersions of halo stars out to 150 kpc. They find the total Galaxy mass is $\sim 10^{12}\,M_{\odot}$, of which about 80-90% is in the spherical dark halo. Yet this dark halo makes almost no contribution to the mass density in the disk near the Sun.



Problem 5: (And to be fair, I do think this is beyond the scope of what Feng & Gallo are doing.) Feng & Gallo treat this problem in isolation, without considering how their rival ideas might impact all the other observations that non-baryonic dark matter was brought in to solve - namely the dynamics of galaxies in clusters, lensing by clusters, the CMB ripples, structure formation and primordial nucleosynthesis abundances, to state the obvious ones. A new paradigm needs to do at least as well as the old one in order to be considered competitive.

solar system - How well do planetary orbits fit with Johannes Kepler's in- & circumscribed Platonic solids?

It is easy enough to do the calculations; formulae for the inradii and circumradii of the Platonic solids can be found here. These give the following ratios of circumradius to inradius (note: the formulae for the radii below have dropped the common factor of the side length, which we don't need since we are only interested in the ratios):



>ri4=sqrt(6)/12,rc4=sqrt(6)/4
0.204124
0.612372
>
>ri6=1/2,rc6=sqrt(3)/2
0.5
0.866025
>
>ri8=sqrt(6)/6,rc8=sqrt(2)/2
0.408248
0.707107
>
>ri12=sqrt(250+110*sqrt(5))/20,rc12=(sqrt(15)+sqrt(3))/4
1.11352
1.40126
>
>ri20=(3*sqrt(3)+sqrt(15))/12,rc20=sqrt(10+2*sqrt(5))/4
0.755761
0.951057
>
>rho4=rc4/ri4
3
>rho6=rc6/ri6
1.73205
>rho8=rc8/ri8
1.73205
>rho12=rc12/ri12
1.25841
>rho20=rc20/ri20
1.25841


Which may be compared with the orbital radius ratios from here (radii in km)



>RMercury=57.9e6;
>RVenus=108.2e6;
>REarth=149.6e6;
>RMars=227.9e6;
>RJupiter=778.3e6;
>RSaturn=1426.7e6;


Now we can compare the corresponding radii ratios:



>[RVenus/RMercury,rho8]
1.86874 1.73205
>[REarth/RVenus,rho20]
1.38262 1.25841
>[RMars/REarth,rho12]
1.5234 1.25841
>[RJupiter/RMars,rho4]
3.41509 3
>[RSaturn/RJupiter,rho6]
1.8331 1.73205


Which, as these things go, is not bad.

planet - Can there be an object with planetary discriminant between Ceres and Neptune?

I know this is an old question, but recent developments in astronomy have caused it to become relevant again. Early in 2016 a paper was published by Konstantin Batygin and Michael E. Brown (here) which indicates the possible existence of a new planet, dubbed "Planet 9". Subsequent work by other authors has lent more evidence to its possible existence, e.g., a paper by Fienga et al. (here) which concludes that including Planet 9 in orbital dynamics models decreases the residuals.



This being said, the original paper specifically states




...our calculations suggest that a perturber on a $a'\sim700\:{\rm AU}$, $e'\sim0.6$ orbit would have to be somewhat more massive (e.g. a factor of a few) than $m'=10\:M_\oplus$ to produce the desired effect.




Such a planet is certainly massive enough to have cleared out its orbit under normal circumstances. For reference, Neptune has a mass of $17\:M_\oplus$ while Uranus has a mass of $14.5\:M_\oplus$. The special feature of this planet is that it is located $700\:{\rm AU}$ from the Sun, in a very ambiguous and unknown region of our solar system. It is not at all clear how much debris, if any, in the form of comets resides out there. There are models suggesting a disk (referred to as the Hills Cloud) exists at those distances, implying that this new planet could reside inside an extended cometary disk. Of course this is only speculation, but it is a distinct possibility.



If the above-mentioned disk exists, we have to consider a few scenarios.



  • Planet 9 was "inserted" into this disk by being expelled from the inner solar system early on in its formation. This is a possible cause for its current position and suggests one of two outcomes.

    1. It has existed here long enough that it has cleared out its orbit, thus making its planetary discriminant well within the "planetary" side of the definition.

    2. It has either not existed in the disk long enough to clear the orbit, or has achieved some sort of equilibrium such that it can't now clear the orbit (considering most of the clearing occurs during formation). Here is where you may run into an ambiguous case. For all intents and purposes we may consider this a planet, but there remains the possibility that, despite it being more massive than the Earth, it could formally be considered a dwarf planet or else exist in this fuzzy region as you suggest.


  • Planet 9 formed at its current orbit (plus or minus some migration). In this scenario I think it is clear that it would have cleared its orbit and would resoundingly be considered a true planet.

Now these are all wild suppositions. I don't have any of the math to back them up, and I'm not sure the numbers to plug into the various equations are well known, or even exist. I'm merely remarking on the possibility that the situation you asked about now seems like it could exist.
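
That said, one crude scaling is easy to write down. This is a rough sketch, not from the paper: it uses the Stern-Levison clearing parameter, which scales as $\Lambda \propto M^2/a^{3/2}$, with assumed round numbers for Planet 9:

M_nep, a_nep = 17.1, 30.1   # Neptune: mass in Earth masses, semi-major axis in AU
M_p9, a_p9 = 10.0, 700.0    # Planet 9: assumed values from Batygin & Brown (2016)

# Lambda scales as M^2 / a^(3/2); compare Planet 9 with Neptune
ratio = (M_p9 / M_nep)**2 * (a_nep / a_p9)**1.5
print(ratio)                # ~3e-3: Planet 9's Lambda is a few hundred times smaller

Since Neptune exceeds the clearing criterion by a very large margin, being a few hundred times below Neptune does not by itself settle whether Planet 9 clears its zone - which is exactly the ambiguity described above.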

Tuesday, 11 June 2013

fundamental astronomy - Why do we need the "mean" in mean free path?

In every context where we talk about a "mean free path", we are talking about particles. Particles usually bounce around a lot, from collision to collision. In between two bounces a particle moves in a straight line. That is the expected behaviour from Newton's laws of motion.



Between one collision and the next, a particle therefore moves some distance. That is the "free path". So why do we need to include "mean"? The thing is, you can never be sure how far a particle will move after a collision. It might immediately bounce into another particle, or it might travel a considerable distance. However, if you measure very many particles, their average is going to be pretty stable and predictable. That gives a nice, usable number. The "mean" here stands for that average. (In case you wonder: yes, this applies to optics too, but the concept requires a little more abstraction.)
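
A tiny Monte Carlo sketch of that point (in a dilute gas, free paths follow an exponential distribution; the mean free path of 2.0 is an arbitrary choice):

import random

mfp = 2.0                               # the true mean free path, arbitrary units
paths = [random.expovariate(1 / mfp) for _ in range(100_000)]
print(min(paths), max(paths))           # individual paths vary wildly
print(sum(paths) / len(paths))          # but the average is stable at ~2.0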

Saturday, 8 June 2013

Polar satellites for Global Navigation Systems

The satellites used in the Indian navigational system, the so-called Indian Regional Navigation Satellite System, are not polar satellites. Three of them are geostationary satellites and the other four are geosynchronous satellites.



Polar satellites revolve around the Earth in a north-south direction, whereas these satellites revolve with the same period as the Earth's rotation, so that they appear (nearly) fixed in the sky for observers on Earth. This is exactly what is needed for navigation systems.
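
The geosynchronous condition fixes the orbital radius; a quick check with standard constants:

import math

GM = 3.986004e14                        # Earth's gravitational parameter, m^3/s^2
T = 86164.1                             # one sidereal day, s

# Kepler's third law: T^2 = 4 pi^2 r^3 / GM
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(r / 1e3)                          # ~42164 km, i.e. ~35786 km altitude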

Friday, 7 June 2013

Number of galaxies per redshift

The term "a distribution of number of galaxies per redshift interval" does make sense, but usally we restrict ourselves to a certain type of galaxies, e.g. Lyman $alpha$ emitters, Lyman break galaxies, sub-mm galaxies, etc. The reason is that galaxies are detected in a number of different ways, and no single selection technique captures all types of galaxies.



Moreover, since galaxies have no well-defined lower size/mass/luminosity threshold, we have to define a threshold above which we count them. For this reason, rather than talking about "the number density of galaxies per redshift", we usually use a very useful observable, the luminosity function (LF), which measures the number of (a certain type of) galaxies per luminosity bin and per comoving volume, at a given redshift.



LFs have been probed out to very high redshifts, at the times when galaxies were just beginning to form, $\sim$half a gigayear after the Big Bang (e.g. Schenker et al. 2013).



You end your question by asking about the number of galaxies at $z\le0.01$, which would be considered very low redshift in the context of galaxies. You can find the answer by considering all kinds of selection criteria, checking for overlaps between surveys (most local galaxies will be detected by multiple techniques), and integrating the LFs over volume, down to your chosen threshold (e.g. Small Magellanic Cloud-sized). Doable, but not trivial. An alternative is to integrate numerically calculated dark matter halo mass functions, using e.g. this online tool. I get a number density of $n\sim0.4$ halos per $h^{-3}\,\mathrm{Mpc}^{3}$, or $\sim1.2\,\mathrm{Mpc}^{-3}$. The comoving volume out to $z=0.01$ is $V=3.3\times10^5$ Mpc$^3$, so the total number of (super-SMC-sized) galaxies is $N = nV \simeq 3.8\times10^5$.
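
The final estimate is easy to reproduce (a sketch following the conversion used above, i.e. dividing 0.4 by $h^3$ with $h\approx0.7$, and taking $H_0 = 70$ km/s/Mpc):

import math

n = 0.4 / 0.7**3              # number density -> ~1.2 galaxies per Mpc^3
d = 0.01 * 299792.458 / 70.0  # comoving distance to z = 0.01: cz/H0 ~ 43 Mpc
V = 4 / 3 * math.pi * d**3    # comoving volume -> ~3.3e5 Mpc^3
print(n * V)                  # ~3.8e5 super-SMC-sized galaxies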

Luminosity of black hole accretion disc

Three times the Schwarzschild radius corresponds to the innermost stable circular orbit around a black hole. The general idea is that as matter moves in towards the black hole, it gets stuck in an accretion disc, where angular momentum has to be transported outwards in order to allow the matter to move inwards. However, once the matter gets inside $3r_s$, that problem disappears and the material is able to flow straight into the black hole.



Thus when we observe black hole accretion discs we expect them to be truncated at $3r_s$.



So I think the argument is then along these lines: the gravitational potential energy per unit mass falling to $3r_s$ is $GM/3r_s$; half of this, $0.5v^2 = GM/6r_s$ per unit mass, ends up as orbital kinetic energy, and the rest is converted to radiation. Thus
$$L = \left[\frac{GM}{3r_s} - \frac{GM}{6r_s}\right] \frac{dM}{dt}$$
$$L = \frac{GM}{6r_s}\frac{dM}{dt} = \frac{1}{12}c^2\frac{dM}{dt},$$
where the last step uses $r_s = 2GM/c^2$.
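
Plugging in numbers (with an illustrative accretion rate of one solar mass per year, my assumption):

c = 2.998e8                   # speed of light, m/s
M_sun, year = 1.989e30, 3.156e7   # kg, s
mdot = M_sun / year           # assumed accretion rate, kg/s

L = c**2 * mdot / 12          # the result derived above
print(f"{L:.1e} W")           # ~4.7e38 W, about 10^12 solar luminosities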

Wednesday, 5 June 2013

trying to understand stellar nucleosynthesis

The elements you are talking about are created by the r-process - rapid neutron capture onto heavy elements, which can occur in neutron-rich environments. The r-process is probably responsible to some extent for all of the most neutron-rich (i.e. more neutrons than protons) nuclei with atomic masses above that of iron, and is probably responsible for all elements heavier than lead and bismuth. However, there are three broad peaks in the solar-system abundances, centred at germanium, xenon and platinum (the first, second and third r-process peaks), where the r-process is thought to dominate production over competing processes such as the s- and p-processes (plot taken from here).



[plot: origin of the elements, by process]



If one subtracts the expected contributions from other sources one can estimate the fraction of a given element that comes from the r-process (plot taken from here).



[plot: fractional contribution of the r-process to each element]



It is a contemporary astrophysical problem (i.e. there is no definitive answer) to pin down the source of the r-process. In other words, it is an unsolved mystery what fraction of each r-process element comes from which astrophysical location, and it may well vary as a function of cosmic time or location. The two main candidates for the sites of r-process element creation are core-collapse supernovae (in particular the neutrino-driven winds from supernovae) and neutron star mergers.



The main problems to be solved are the detailed physics of supernova explosions and neutron star mergers, the relative populations and numbers of these events as a function of cosmic time and location, and even the nuclear physics ingredients that go into calculating the yields of such elements.



A recent paper by van de Voort et al. (2015) performs simulations suggesting that most of the r-process elements in the universe could be created by neutron star mergers, and points out that it is especially difficult for core-collapse supernovae to produce what are known as the "third-peak" r-process nuclei. These claims are all contentious, although the latter point about the heavier r-process elements is echoed in a number of previous works (e.g. Wanajo 2013). Nevertheless, even this is disputed; for example, Nishimura et al. (2015) perform simulations of rapidly rotating supernova progenitors with strong magnetic fields and find that they can reproduce the abundance pattern of all the r-process elements seen in the Sun.



So if I wanted a vague stab in the dark at directly answering your question, I would choose the group of elements in the third r-process peak, with mass numbers $191 \le A \le 197$, as being most likely to have been (mostly) produced by neutron star mergers. This includes osmium (192), iridium (191, 193), platinum (194, 195, 196) and possibly gold (197).

Sunday, 2 June 2013

Why is the detection of gravitational waves such a "Big" deal?

The way I see it, there are three reasons it's such a big deal.



The first, as you say, is the (further) confirmation of Einstein's theory of gravity. Newtonian gravity doesn't have gravitational waves. Their existence was already quite established by Hulse and Taylor's discovery and analysis of the binary pulsar PSR B1913+16. The system's orbit is decaying almost exactly as predicted by General Relativity. (Note that in the classic figure, the curve is not a fit: that is the prediction!) The direct detection of gravitational waves, however, better confirms that the waves behave as we expect they would, in the sense that the observations match the so-called "chirp" that we expected.



The second reason is because this is now a new way of measuring things in the Universe. Note that the announcement told us (roughly) the masses of the two black holes. It's actually very difficult to weigh black holes! For example, this 2010 paper discusses constraints on 23 stellar mass black holes, all from the fact that they're in binary systems, and the motion of the partner can be observed. Even then those are mostly mass functions that still depend on the inclination of the orbits, which is often unknown. So the gravitational waves give us a nice measurement, not just confirmation that the phenomenon exists.



With the method proven, there are some fascinating potential applications. One of my favourites is that LIGO (or similar) could potentially detect the formation of a black hole in a core-collapse supernova. Just like the detection of neutrinos from SN1987A confirmed some of our basic ideas about these events, detecting the gravitational waves from the birth of a black hole would inform our understanding further still. And it might even tell us, for example, how massive the black hole is at birth. There's no end to the novel ways that our theories could be tested!



The final reason is simply the astonishing technical achievement. They measured motions of at most a few thousandths of a femtometer over the 4 km arms. That's mind-bogglingly precise. Hopefully this will give more impetus to other gravitational wave detectors. Here's hoping that eLISA will fly!
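
That figure follows directly from the strain (a one-line check, using the order-of-magnitude peak strain of the detection):

h = 1e-21                               # GW150914 peak strain, order of magnitude
arm = 4e3                               # LIGO arm length, m
print(h * arm, "m")                     # 4e-18 m = 0.004 femtometers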



The parallel with the Higgs boson is quite apt. In both cases, we built a big machine to find something that we were pretty sure should be there. (Though the LHC was a tad more expensive...) Now it's a case of using the new tool to observe the Universe, and seeing if the waves behave as we think they should. And who knows, maybe some completely unexpected signal will appear to confound us completely.