Sunday, 29 November 2009

Can objects enter hydrostatic equilibrium through processes other than the influence of gravity?

Strictly speaking (as far as I know), hydrostatic equilibrium applies whenever a fluid balances external body forces with the pressure gradient. From Wikipedia:




In continuum mechanics, a fluid is said to be in hydrostatic equilibrium or hydrostatic balance when it is at rest, or when the flow velocity at each point is constant over time. This occurs when external forces such as gravity are balanced by a pressure gradient force.




I think the concept happens to be most frequently used in areas where gravity is the external force, but it could in principle be anything else. So, though I stand to be corrected, I think a droplet isolated in space long enough could be said to be in hydrostatic equilibrium, even though the most relevant force is surface tension rather than gravity.

observation - Can an observer on Earth only see half of the sky?

At any given instant of time, in any place on Earth, if the sky is clear and the horizon is low and flat, you see half of the celestial sphere - at that very instant.



But as the Earth keeps turning, you may end up seeing more, depending on where you are.



If you're at the North or South Pole, you see exactly half of the sky no matter how long you wait. That's all you'll ever see from there.



If you move closer to the Equator, you'll end up seeing more than half, if you're willing to wait.



At the Equator, you can eventually see essentially the whole sky, if you wait while the Earth keeps spinning and reveals it all to you.
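
This is easy to check with a back-of-the-envelope model. A minimal sketch (assuming an idealized spherical Earth, a flat horizon and no atmospheric refraction): over time you can see every declination within 90° of your latitude, a spherical cap covering a fraction (1 + cos(latitude))/2 of the celestial sphere.

```python
import math

def visible_sky_fraction(latitude_deg):
    """Fraction of the celestial sphere visible over time from a given latitude,
    assuming a spherical Earth, a flat horizon and no refraction. Over a full
    rotation you see every declination within 90 deg of your latitude, i.e. a
    spherical cap whose area fraction is (1 + cos(latitude)) / 2."""
    return (1 + math.cos(math.radians(latitude_deg))) / 2

for lat in (90, 60, 30, 0):
    print(f"latitude {lat:3d} deg -> {visible_sky_fraction(lat):.0%} of the sky")
# latitude 90 -> 50%, 60 -> 75%, 30 -> 93%, 0 -> 100%
```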

Saturday, 28 November 2009

graph theory - Combining DAGs into an acyclic tournament

Since your question is somewhat open-ended, here's an observation, although it doesn't go anywhere yet.



A 2SAT instance is a decision problem in which given a set of variables $V$ and a formula comprising a conjunction of clauses over them, each clause being distinct and containing exactly two distinct literals, one wishes to know whether there is a truth assignment to the variables that makes the formula true.



Each 2SAT instance induces a digraph on $V \cup \overline{V}$: an arc from $u$ to $v$ exists if there is a clause $\overline{u} \lor v$.



Conjecture: If this digraph is a DAG, then the 2SAT instance must be satisfiable.
If the 2SAT instance is satisfiable then the digraph must have exactly one variable in each of its strongly connected components.



Moreover, such "2SAT digraphs" are transposable: reversing their arcs gives a digraph isomorphic to the original.



Your question could be interpreted as being about a collection of 2SAT instances where one is allowed to negate the literals in all the clauses of any instance.
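
For concreteness, here is a minimal sketch of the standard implication-graph test for 2SAT (an instance is satisfiable iff no variable shares a strongly connected component with its negation). Note that the standard construction adds both arcs $\overline{a} \to b$ and $\overline{b} \to a$ for a clause $a \lor b$, so it is close to, but not identical to, the single-arc digraph described above; the integer clause encoding and the networkx dependency are my own choices.

```python
import networkx as nx

def two_sat_satisfiable(clauses):
    """Standard implication-graph test: a 2SAT instance is satisfiable iff no
    variable lies in the same strongly connected component as its negation.
    Clauses are pairs of integer literals; -x denotes the negation of x."""
    g = nx.DiGraph()
    for a, b in clauses:
        # (a or b) is equivalent to (not a -> b) and (not b -> a)
        g.add_edge(-a, b)
        g.add_edge(-b, a)
    for comp in nx.strongly_connected_components(g):
        if any(-lit in comp for lit in comp):
            return False
    return True

print(two_sat_satisfiable([(1, 2), (-1, 3), (-2, -3)]))   # True
print(two_sat_satisfiable([(1, 1), (-1, -1)]))            # False: forces x1 and not x1
```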

supernova - Question about the formation of elements

It's very likely that we haven't yet discovered every non-man-made element. For some elements, only very short-lived isotopes exist.



Plutonium...




is the heaviest primordial element by virtue of its most stable isotope, plutonium-244, whose half-life of about 80 million years is just long enough for the element to be found in trace quantities in nature




More about naturally occurring nuclides.



This paper mentions the detection of a curium ion in cosmic rays.
The process which forms curium can also form elements beyond curium, so it's likely that short-lived isotopes of further elements exist in nature which have not yet been detected. Research in this field is ongoing.

Why is the Hubble Telescope in space?

As @astromax observed, one of the primary factors that makes a space-based telescope better than an equivalent ground-based telescope of the same size is scattering.



Along with scattering, there is also refraction which can be especially problematic when combined with atmospheric turbulence. In the modern era, this problem can be remedied to a certain extent using adaptive optics, but since the Hubble was designed, built, and launched before AO became practical in the 1990's, a space telescope represented the pinnacle of optical clarity for that time.



However, there is another important optical property of the atmosphere, absorption. Although the Hubble is primarily a visible light telescope, it does have instruments that cover both the near UV and near IR, both of which are absorbed by the atmosphere more than visible light.



Furthermore, there are practical advantages to space-based telescopes. There is neither weather nor light pollution in space.

Friday, 27 November 2009

observation - why does Venus flicker?

Because planets actually do twinkle. Most people were told that the major difference between stars and planets is that only the former twinkle - but that's an oversimplification. Given the right conditions, planets will twinkle too; it just happens more rarely.



Several factors that contribute to it:



  • lots of air turbulence; or, as astronomers call it, "bad seeing"


  • closeness to horizon; if the planets are high in the sky, the air column is shorter so there's less chance they will twinkle; but when they are low, their light goes through more air and so it is perturbed to a larger degree


The observation you've made, Venus twinkling, is not very unusual. Many stargazers are used to seeing that once in a while. I've seen Venus scintillate several times in the past, always at sunset when it was about to drop below the horizon; I would presume you could see the same behavior very early in the morning, when Venus has just risen.

inequalities - When does a real polynomial have a pair of complex conjugate roots?

We can assume $f$ has no multiple root (if the gcd of $f$ and $f'$ is not constant, divide by this gcd). Let $n$ be the degree of $f$. Compute
$$\frac{f(X)f'(Y)-f(Y)f'(X)}{X-Y} = \sum_{i,j=1}^{n}a_{i,j}\; X^{i-1}\; Y^{j-1}\,.$$
Then $f$ has all roots real iff the symmetric matrix $(a_{i,j})_{i,j=1,\ldots,n}$ is positive definite. This can be checked for instance by computing the principal minors of this matrix and verifying whether they are all positive.
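
As a small computational sketch of this criterion (using sympy; the function name and the sample polynomials are mine, and $f$ is assumed squarefree as above): build the matrix $(a_{i,j})$ from the displayed quotient and test positive definiteness via the leading principal minors.

```python
import sympy as sp

def all_roots_real(f_expr, x=sp.Symbol('x')):
    """Hermite/Bezoutian test sketch: a squarefree real polynomial f has all
    real roots iff the matrix of coefficients of (f(X)f'(Y)-f(Y)f'(X))/(X-Y)
    is positive definite."""
    X, Y = sp.symbols('X Y')
    n = sp.Poly(f_expr, x).degree()
    fX, fY = f_expr.subs(x, X), f_expr.subs(x, Y)
    dfX, dfY = sp.diff(f_expr, x).subs(x, X), sp.diff(f_expr, x).subs(x, Y)
    bez = sp.cancel((fX * dfY - fY * dfX) / (X - Y))   # exact division
    p = sp.Poly(bez, X, Y)
    # entry (i, j) is the coefficient of X^i Y^j (0-indexed here)
    A = sp.Matrix(n, n, lambda i, j: p.coeff_monomial(X**i * Y**j))
    # positive definite <=> all leading principal minors are positive
    return all(A[:k, :k].det() > 0 for k in range(1, n + 1))

x = sp.Symbol('x')
print(all_roots_real(x**3 - 3*x + 1))   # True: three real roots
print(all_roots_real(x**3 - 3*x + 3))   # False: one real, two complex conjugate roots
```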



There are several methods for computing the number of real roots using the signature of a quadratic form: see for instance this note (in French).

dg.differential geometry - Is an injective smooth map an immersion?

Suppose $M$ and $N$ are smooth manifolds. An immersion is a smooth map $f: M \rightarrow N$ whose pushforward is injective at each point.



Is a smooth injective map an immersion?



We can actually simplify the question further.



Suppose $f : M \rightarrow N$ is a smooth injective map. Suppose $(U, \phi)$ and $(V, \psi)$ are smooth charts for $M$ and $N$ respectively. Fix $p \in U$. Then



$$ f_\ast = ( \psi^{-1}\circ \psi \circ f \circ \phi^{-1} \circ \phi)_{\ast} = (\psi^{-1})_\ast \circ (\psi \circ f \circ \phi^{-1})_\ast \circ \phi_\ast $$



As $\phi$ and $\psi$ are diffeomorphisms, $\phi_\ast$ and $(\psi^{-1})_\ast$ are linear isomorphisms.



Therefore, if $(\psi \circ f \circ \phi^{-1})_\ast$ is injective then $f_\ast$ is injective.



This shows that if every smooth injective map between open subsets of Euclidean space is an immersion, then every smooth injective map between smooth manifolds is an immersion.

Thursday, 26 November 2009

ca.analysis and odes - What do we know about the space of finite order distributions ?

Hi,



(Question updated)



My question is about the space of distributions of finite order $\mathcal{D}'_F$ (say on $\mathbb{R}^n$). What do we know about it?



From the information I gathered, it seems that the natural topology on $\mathcal{D}'_F$ is
the inductive limit topology of the spaces $(\mathcal{D}'^m)$ of distributions of order $m$, or equivalently, the dual topology of $\mathcal{D}_F$ [this space being $\mathcal{D}$ as a set, but with the coarser topology of the projective limit of the $(\mathcal{D}^m)$ ($C^m$ functions with compact support, each an inductive limit of Fréchet spaces with the obvious semi-norms)].
Note that $\mathcal{D}_F$ is strictly coarser than $\mathcal{D}$ (and strictly finer than $\mathcal{S}$), and that $\mathcal{D}'_F$ is strictly finer than $\mathcal{D}'$ (and strictly coarser than $\mathcal{S}'$).



So, the question is: what do we know about this topology on $\mathcal{D}_F$, and about its strong dual $\mathcal{D}_F'$?
It is clearly not Fréchet, but is it complete? Montel? Barrelled? Nuclear? Reflexive?
More generally, do we have most of the nice properties of $\mathcal{D}'$ for $\mathcal{D}_F'$?



Thanks

star systems - Reason for disqualifying Pluto as a Planet?

Pluto:



It was disqualified as a planet because orbital dominance was not achieved in the case of Pluto. Orbital dominance means that the planet candidate should have removed all the small bodies from its orbit, by impact, capture, or gravitational disturbance.



Planet:



According to the IAU in its resolution B5 (the IAU is the International Astronomical Union; in particular it is in charge of naming celestial bodies), a planet is a celestial body:



  1. in orbit around the Sun

  2. with sufficient mass for its self-gravity to overcome rigid-body forces (so that it assumes a hydrostatic equilibrium shape)

  3. has cleared its neighbourhood.

Exoplanets:



As you see in resolution B5, this definition is specified for "planets in the Solar System". This is mainly because it is already challenging just to observe exoplanets, so the distinction between a planet and a dwarf planet is not yet crucial for them. Hence the official IAU definition of an exoplanet mainly draws the line between a planet and a brown dwarf.



However, I guess the IAU definition of a planet should be kept for the other planetary systems if we are able to detect objects such as dwarf planets.

Wednesday, 25 November 2009

Do NEA (Near Earth Asteroids) have minable water ice?

Unlikely.



Plugging numbers into the Stefan–Boltzmann law gives us a temperature near 273 K (0 °C) for bodies near Earth's orbit. The exact answer for airless bodies depends on albedo.
Any water on nearby asteroids will thus boil until it freezes, and then sublimate.
That's why the search for nearby ice is focused on permanently shadowed, cold regions of craters near the Moon's south pole.
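
For a rough check of that 273 K figure, here is a minimal sketch of the equilibrium black-body temperature of a fast-rotating, airless body, $T = \left[L_{\odot}(1-A)/(16\pi\sigma d^2)\right]^{1/4}$; the constants and the zero-albedo default are my assumptions.

```python
import math

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # solar luminosity, W
AU = 1.496e11           # astronomical unit, m

def equilibrium_temperature(distance_au, albedo=0.0):
    """Black-body equilibrium temperature of a fast-rotating, airless body,
    assuming absorbed sunlight is re-radiated over the whole surface."""
    d = distance_au * AU
    return (L_SUN * (1.0 - albedo) / (16.0 * math.pi * SIGMA * d**2)) ** 0.25

print(f"{equilibrium_temperature(1.0):.0f} K at 1 AU")   # ~278 K, near 0 deg C
print(f"{equilibrium_temperature(3.0):.0f} K at 3 AU")   # ~160 K, outer belt
```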



Space probe Rosetta and its attendant comet are still 200 million km from the sun, and already outgassing water. IIRC, that started back in January, when the comet was 390 million km from the sun, well beyond Mars. Black body temp. out there would be around -100°C.



Looks like I was wrong about when jets first appeared: More jets from Rosetta's comet! September 19, 2014. On that day, the comet was 500 million km from the Sun. That's in the outer asteroid belt. No spectra that I know of, so possibly not water. Water seems most likely though.



Likely we'll have to go out at least that far to find ice on small bodies.

How Are Radioactive Decay Rates Influenced by Neutrinos - On Earth and Other Dense Planets

In the paper that this report is based on [1], they simply see an annual period in the $\beta$ decay rates of radioactive isotope samples in the lab. Basically, the rate is a fraction of a percent higher in winter than in summer. They conclude that, absent any simple instrumentation explanation:




we conclude that these results are consistent with the hypothesis that
nuclear decay rates may be influenced by some form of solar radiation.




Several things can change in a laboratory during a year. Obviously, temperature and humidity change, and these were tested in the experiment. But radon levels also change as the amount of outside air exchanged with inside air changes. The solar cosmic ray flux (high energy electrons, protons, and He nuclei generated in the chromosphere of the Sun) changes as the Sun angle changes, and the neutrino flux (produced in the core) changes with Sun angle as well. These could be affecting nuclear decay rates directly, or the instrument used to measure them (subtle changes in threshold energies, false counts from ions produced in the instrument, potential shifts, etc.).

cosmological inflation - Is the expansion of the universe greater than the speed of light?

After inflation, the expansion of the universe did indeed slow down. During the inflationary epoch (lasting roughly $1 \times 10^{-33}$ seconds), the universe expanded by a factor of $10^{26}$. That's incredible! However, the inflationary epoch didn't last long, and that incredible expansion ended pretty soon after it started.



The universe is, as you said, still expanding. In fact (like you also said), that expansion is accelerating. However, we have to be careful when we talk about the rate of this expansion. Currently, the rate of expansion between two objects depends on the distance between them, which is encoded in Hubble's law:
$$v=H_0D$$
where $v$ is the recessional velocity, $H_0$ is Hubble's constant, and $D$ is the proper distance between the objects. This relation shows that objects that are farther away are moving away at a greater speed. It follows that, at a sufficiently large distance, two objects would be moving away from each other at the speed of light or greater! However, this is only true for objects very far from each other. But yes, eventually any two objects sufficiently far from each other will move apart at the speed of light - and then faster.
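
A minimal sketch of that extrapolation (the value of $H_0$ is an assumed round number, not something from the answer above): solving $v = H_0 D$ for $v = c$ gives the distance beyond which the recession speed formally exceeds the speed of light.

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per Mpc (assumed value)

def recession_velocity(distance_mpc, h0=H0):
    """Hubble's law: v = H0 * D."""
    return h0 * distance_mpc

# Distance at which v = c (the "Hubble distance")
hubble_distance_mpc = C_KM_S / H0
print(f"v = c at roughly {hubble_distance_mpc:,.0f} Mpc "
      f"(~{hubble_distance_mpc * 3.2616e6 / 1e9:.1f} billion light-years)")
# ~4,283 Mpc, i.e. about 14 billion light-years
```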



Source (for length of inflationary epoch): http://www.universe-galaxies-stars.com/Cosmic_inflation.html



Source (for Hubble's law): http://map.gsfc.nasa.gov/universe/uni_expansion.html

Tuesday, 24 November 2009

special relativity - Length contraction of a star

Note that for cosmic expansion, special relativity only applies when the spacetime region in question is approximately flat, which in our case happens when it is small.




As the M31 galaxy is moving toward us at great speed, its "depth" should appear slightly flattened to us. A sphere moving toward us at the speed of light will appear "pancaked", for lack of a better word.




No, that's not correct. You have a worldline in spacetime, and its direction at an event is your four-velocity. The orthogonal complement of your four-velocity, i.e., all directions perpendicular to it, is your space. What Lorentz-FitzGerald length contraction says is that if you project a moving sphere onto your space, the result is contracted along the direction of motion.



The difference between this and what you say is that there is no mention of how the sphere will appear. The sphere is contracted in the inertial frame comoving with you at that instant. It doesn't appear contracted--in fact, if you took a picture of it with a hypothetical super-fast camera, the relativistically moving sphere will appear... still completely spherical. But not quite the same as a non-moving sphere if it has features on the surface, as Penrose-Terrell rotation makes some of its "back parts" visible.




Does this also mean that electromagnetic radiation (light) from a flattened star is focused in the direction of motion relative to the observer?




Yes. Just as vertically falling rain comes at an angle when you're riding in a car, the angles at which any signal arrives are different in a moving frame compared to a stationary frame. This is described by relativistic aberration. Additionally, the Doppler shift changes the intensities of a radiating source along different directions, making the overall effect relativistic beaming.

Monday, 23 November 2009

telescope - Starting out in practical astronomy

If you're dead broke (like most people) I would suggest you take a look at a small refractor telescope; you could pick up a cheap-o Celestron Powerseeker 60/70 for something like 40 bucks. The usability of such a small telescope is very limited, but you'll be able to get a nice clear image of the rings of Saturn, four of Jupiter's moons, or nice views of the Moon, maybe even a few Messier objects if you're patient enough.
-but-
If you're willing to drop a bit more coin, then I would suggest nothing smaller than an 8" reflector. For less than 200 bucks you can find something that will really impress you.



I used to take my crappy little telescope and drive up into the hills where light pollution was at a minimum, where I would just sit for hours trying to find Messier objects. Loads of fun!



As far as joining an amateur astronomy society, I would suggest you check with your local Community College's astronomy professors. At least here in California, one of the requirements of the astronomy classes is that there be opportunities to participate in actual observing. This means that there will more than likely be an astronomy group having "star parties" and other observing events, which will give you some good connections to competent groups.

nt.number theory - Are most cubic plane curves over the rationals elliptic?

Your question (as explained in the second paragraph) is not vague at all! In fact, it appears for instance after Conjecture 2.2 in http://www-math.mit.edu/~poonen/papers/random.pdf , which is Random diophantine equations, B. Poonen and J. F. Voloch, pp. 175–184 in: Arithmetic of higher-dimensional algebraic varieties, B. Poonen and Yu. Tschinkel (eds.), Progress in Math. 226 (2004), Birkhäuser.



The answer is not known, and the experts I've spoken to do not even have a convincing heuristic predicting an answer. Swinnerton-Dyer told me that he had a hunch that the answer was 0, and this is my hunch too, but we have little to back this up.



It is not even clear that the limit exists. One can prove, however, that the density (in your precise sense) of plane cubic curves that have points over $\mathbb{Q}_p$ for all $p \le \infty$ is a number strictly between $0$ and $1$ (Theorem 3.6 in the Poonen-Voloch paper), so the lim sup of the fraction of plane cubic curves with a rational point is at most this; in particular, it's not 1.



One could try to estimate the size of the Tate-Shafarevich group of a "random" elliptic curve, to get an idea of how often local solvability implies global solvability, but even if one does this it is not clear that this is counting curves in the same way.

stellar astrophysics - How exactly is the Initial Mass Function (IMF) calculated?

What is it?



An IMF, $\Phi(m)$, is defined such that $\Phi(m)\,{\rm d}m$ gives the fraction of stars with a mass between $m - {\rm d}m/2$ and $m + {\rm d}m/2$, with the normalization



$$\int_{m_{\rm min}}^{m_{\rm max}}m\,\Phi(m)\,{\rm d}m = 1\ M_{\odot}.$$



Note that these boundaries ($m_{\rm min}$ and $m_{\rm max}$) are ill-defined, but typically of the order of 0.1 $M_{\odot}$ and 100 $M_{\odot}$, respectively.
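
As a concrete illustration of this normalization, here is a minimal sketch (the Salpeter slope $\alpha = 2.35$ and the 0.1-100 $M_{\odot}$ limits are assumptions) that fixes the constant $A$ in $\Phi(m) = A\, m^{-\alpha}$ so that $\int m\,\Phi(m)\,{\rm d}m = 1\ M_{\odot}$.

```python
from scipy.integrate import quad

M_MIN, M_MAX, ALPHA = 0.1, 100.0, 2.35   # assumed mass limits (solar masses) and Salpeter slope

def normalize_salpeter():
    """Find A such that Phi(m) = A * m**-ALPHA satisfies the mass
    normalization  integral of m * Phi(m) dm = 1 solar mass."""
    total_mass, _ = quad(lambda m: m * m**(-ALPHA), M_MIN, M_MAX)
    return 1.0 / total_mass

A = normalize_salpeter()
print(f"A = {A:.4f}")                                    # ~0.17

# Example use: fraction of stars (by number) above 8 solar masses
n_total, _ = quad(lambda m: A * m**(-ALPHA), M_MIN, M_MAX)
n_massive, _ = quad(lambda m: A * m**(-ALPHA), 8.0, M_MAX)
print(f"fraction of stars above 8 Msun: {n_massive / n_total:.2%}")
```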



IMFs



The various IMFs in use are the following, with their main characteristics:



  • Salpeter's IMF, which is a parametrization of the IMF by a simple power law, of the form
    $$\Phi(m)\,{\rm d}m \propto m^{-\alpha}\,{\rm d}m;$$

  • the Miller & Scalo IMF, which is a parametrization of the IMF by a log-normal distribution of the form
    $$\xi\left(\log(m)\right) = A_0 + A_1 \log(m) + A_2\left(\log(m)\right)^2;$$

  • Kroupa's IMF, which is a parametrization of the IMF by a broken power law;

  • Chabrier's IMF and Chabrier's system IMF, which combine a log-normal distribution (for low-mass stars with masses less than 1 $M_{\odot}$) and a power-law distribution (for larger masses). The difference between the IMF and the system IMF is that the system IMF merges resolved objects into multiple systems, so as to compute the magnitude of systems instead of individual stars.

Determination



As you see, all these IMFs are parametrizations deduced from observations. In general, the observations used to infer these mass functions come from star clusters in our galaxy. All you need is a mass-magnitude relationship to deduce a mass function from an observed luminosity function. In general, the number density distribution per mass interval, ${\rm d}n/{\rm d}m$, takes the following form
$$\frac{{\rm d}n}{{\rm d}m}\left(m\right)_\tau = \left(\frac{{\rm d}n}{{\rm d}M_\lambda(m)}\right) \times \left(\frac{{\rm d}m}{{\rm d}M_\lambda(m)}\right)^{-1}_\tau,$$
for a given age $\tau$ and an observed magnitude $M_\lambda$. Then, it is just a matter of parametrization, but also of how well the parametrization can arise from a proper theory.



For this matter, Chabrier's IMF is probably the one best backed up by theoretical arguments. It relies on a gravo-turbulent theory, taking into account all the possible supports (thermal, turbulent and magnetic) plus the dual nature of turbulence, which both favors star formation by compressing the gas and impedes it by dispersing the fluid. All the dirty details are given in Hennebelle & Chabrier (2008) and Hennebelle & Chabrier (2009), showing how you can analytically deduce an IMF from these theoretical considerations.



Applications



As far as I know, these IMFs are used more or less for every type of population. However, you won't favor Salpeter's IMF if you have enough resolution to resolve low-mass objects, which are not at all well accounted for by this IMF. You should also favor Chabrier's system IMF in the case of unresolved objects.



To know if all these IMFs are really well-suited to any kind of population is an open and difficult question (the so-called question of the universality of the IMF), in particular because you need to resolve individual stars in clearly identified clusters to deduce an IMF. There are some papers investigating the question (for example, you could have a look at Cappellari et al. (2012) for a recent discussion of the problem).

rt.representation theory - What is the explanation for the special form of representations of the three string braid group constructed using quantum groups?

It is well-known that representations of quantised enveloping algebras give representations of braid groups. For the examples that I know explicitly, the representations of the three string braid group take a specific form. Is there an explanation of this? The examples I know are the simplest examples, so what can I expect in general?



More specifically: Fix a quantised enveloping algebra $U$. Let $V$ and $W$ be highest weight finite dimensional representations. Then the three string braid group acts on $\mathrm{Hom}_U(\otimes^3V,W)$.



The specific form that appears is the following. Let $P$ be the $n\times n$ matrix with $P_{ij}=1$ if $i+j=n+1$ and $P_{ij}=0$ otherwise. Then we can write $\sigma_1$ and $\sigma_2$ with the following properties:



  • $\sigma_1$ is lower triangular

  • $\sigma_2=P\sigma_1P$

  • $\sigma_i^{-1}=\overline{\sigma}_i$, which means: apply the involution $q\mapsto q^{-1}$ to each entry

The simplest example is
$$\sigma_1=\left(\begin{array}{cc} q & 0 \\ 1 & -q^{-1}\end{array}\right)$$
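
A quick symbolic check, as a sketch (sympy; the variable names are mine), that this $2\times 2$ example really has the stated properties: $\sigma_2 = P\sigma_1 P$ by construction, the braid relation $\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$, and $\sigma_i^{-1} = \overline{\sigma}_i$ under $q \mapsto q^{-1}$.

```python
import sympy as sp

q = sp.symbols('q', nonzero=True)
sigma1 = sp.Matrix([[q, 0], [1, -1/q]])   # lower triangular
P = sp.Matrix([[0, 1], [1, 0]])           # the anti-diagonal permutation matrix
sigma2 = P * sigma1 * P

# Braid relation for the three-string braid group
lhs = sp.simplify(sigma1 * sigma2 * sigma1)
rhs = sp.simplify(sigma2 * sigma1 * sigma2)
print(lhs == rhs)                         # True

# sigma_1^{-1} equals sigma_1 with q -> q^{-1} applied entrywise
bar = lambda M: M.applyfunc(lambda e: e.subs(q, 1/q))
print(sp.simplify(sigma1.inv() - bar(sigma1)) == sp.zeros(2, 2))   # True
```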



I get the feeling this has something to do with canonical bases.



A specific question is: Take $V$ to be the spin representation of $Spin(2n+1)$. Then do these representations have this form and if so how do I find it?



[In fact, I have representations of this specific form which I conjecture are these representations]



Further comment: Assume the eigenvalues of $\sigma_i$ are distinct. This condition holds for the spin representation. Then if this basis exists it is unique. Consider a change of basis matrix $A$ which preserves this structure. Then $A$ commutes with $\sigma_1$ so is lower triangular. Then $A$ also commutes with $P$ so is diagonal. Then the final condition requires $A$ to be a scalar matrix.



The problem is existence. The Tuba-Wenzl paper shows such a basis exists in small examples.

Sunday, 22 November 2009

mp.mathematical physics - Constraints on the Fourier transform of a constant modulus function

If $g$ happens to be in $L^1$, then the amplitude of the Fourier transform of $fg$ is bounded by the $L^1$ norm of $g$, for any unimodular $f$. This is the only restriction from above since you can always choose $f$ so that $fg\ge 0$, thus bringing the (essential) supremum of $\widehat{fg}$ up to $\|g\|_{L^1}$.



Another part of the question is how small we can make $A$. I guess "arbitrarily small", but I don't have a proof. (Except in a special case: if $g$ is in $L^1$, then we can chop it into pieces with disjoint supports and small $L^1$ norm, and then use $f$ to move the Fourier transforms of the pieces far from one another.)

Saturday, 21 November 2009

solar system - Do origin theories imagine each planet to first orbit the Sun very irregularly before stabilizing?

Actually, the Sun, the planets and other small bodies were all created from the cloud of dust and gas that existed before the solar system. As the gas and dust collected towards the center, the mass rotated and soon an accretion disk formed (which is why the planets lie nearly in the same plane).



So your notion of the beginning of the universe is incorrect. It did not start with a bunch of planets (or stars) randomly thrown into space. Also all the stars in a globular cluster are rotating in the same direction; the stars are in orbit in the cluster.



Added:
Yes, they did have irregular movements. It started with a large cloud of gas and dust, some from the Big Bang, mostly from exploded star systems. The particles all had different directions and speeds.



When gravity caused enough particles to group together and start accumulating, the mass began to spin (conservation of angular momentum). The spinning mass flattened out into an accretion disk, and planets could start to form (along with the Sun).

Is it possible that all dark matter is made of rogue planets (free-floating planet)?

First of all I'll start with a few ideas:



  1. Baryonic Matter: Baryons are composite particles made up of 3 quarks. This includes protons and neutrons, and the term baryonic matter refers to matter made of baryons, such as atoms. Examples of non-baryonic matter include neutrinos, free electrons and other exotic matter.

  2. Things like planets, stars, dust, etc. are all made of atoms, and so are classified as baryonic matter.

Now, how do we know that dark matter is present in the universe?



Astronomers measure the gravitational pull of galaxies and galaxy groups/clusters based on how objects behave when interacting with them. Some examples of this include tidal gas/dust stripping, the orbits of stars in a galaxy, and gravitational lensing of distant light by a large cluster. Using this they determine the mass of the galaxy (or galaxy group).
We can also determine the mass of a galaxy or group by looking at it and adding up the mass of all the objects (like stars, dust, gas, black holes, and other baryonic matter). While these methods both give us approximations, it is clear that the gravitational mass of galaxies and groups exceeds the baryonic mass by a factor of 10-100.



When astrophysicists first found this phenomenon they had to come up with a plausible explanation, so they suggested that there is some new, invisible matter called dark matter. (Aside: some astrophysicists also came up with other explanations like modified gravity, but so far dark matter does the best job at explaining observations).



Okay, so now how do we know dark matter is not any sort of baryonic matter?



There are a few reasons astrophysicists know that it is extremely unlikely that dark matter is baryonic. First of all, if all the stars in a galaxy shine on an object, it heats up; this heat causes the release of radiation, called thermal radiation, and every (baryonic) object above zero kelvin (i.e., above -273.15 degrees Celsius) emits this radiation. However, dark matter does not emit any radiation at all (hence the name dark!)



If dark matter were baryonic it would also mean that it could become light emitting. If we got a clump of baryonic matter* and put it in space it would gravitationally contract, and would eventually form a star or black hole** - both of which we would be able to see.



So, because of these reasons the dark matter in galaxies and in galaxy groups/clusters cannot be baryonic, and so cannot be planets, dead stars, asteroids, etc. It would definitely not be planets: there is no way 10-100 times the mass of the stars in a galaxy could be in planets, as the mechanism for making planets relies on supernovae, and the number of supernovae needed for that many planets would be far too high to match our observations. I hope that this answered your question!



*provided the clump of baryonic matter was large, and the amount there is in galaxies definitely is!



** we don't observe black holes directly, but can see radiation from their accretion disks.

general relativity - Does mass create space?

This is a tough question to answer because the dimensions of your proverbial cube would be affected by the mass inside (as dimensions can only exist in space, so anything that affects space will also bend your cube), and mass does not bend space, it bends spacetime. You have to keep in mind that when talking about special and general relativity, you are talking about four-dimensional spacetime. This is what is curved by mass. For example, due to the curvature of spacetime due to Earth's gravity, you feel the same effects on its surface that you would if you were accelerating upwards at 9.8 m/s^2 through deep space (where we assume there is no gravity). This doesn't mean the physical space is curved - if you draw a straight line on a piece of paper in space and then travel down to Earth's surface, it will remain straight. Instead, spacetime is curved, which can affect objects' paths of motion, but not the objects themselves. Now to your questions:



  • The volume inside of the cubes would be exactly the same, but two identical objects flying through each cube would follow two different paths, and if either looked at the other, it would see that the other's clock is moving at a different speed than its own due to time dilation (due to the curvature of spacetime towards mass, time moves more slowly closer to massive objects). We can see evidence of the effect of gravity on paths with gravitational lensing. Light travels in a straight line through spacetime, but when looking at the space around a star, you will actually see objects that should be directly behind the star and blocked from sight. This is because light rays that, in the absence of gravity, would pass next to the star and continue off at an angle are pulled towards the star by its gravitational field (in reality they're just following the curvature of spacetime around the massive star), so they curve around the star, becoming visible to us, even though, in space alone, they should be blocked by the star.

  • The distribution of the mass has no effect on the gravitational field around it (same with electromagnetic fields). If you look at any equation that has to do with gravity (gravitational potential energy, gravitational force, etc.), you will find a term for the total mass of the object, but unless you are within the bounds of the control volume containing the mass, the distribution does not matter. This is one of the reasons why black holes are so hard to study - we can tell how much mass is inside of them, but without the ability to see inside, we have no other way of detecting their properties.

  • Like I said above, the volume is not affected because space is not warped by gravity, only spacetime. If you made a physical box (or we'll say a cube frame) around empty space and then moved it to a space that contained a star, it would remain the same size and shape (assuming that it was strong enough to withstand the force of gravity pulling it towards the massive star).

  • Neither, it simply curves spacetime.

Friday, 20 November 2009

solar system - How many planets do on average different star types support?

I'm working on a game which takes place in space. My team and I want to create a solar system generator for the game.



We have selected 8 types of stars:



  • White dwarf

  • Red Dwarf

  • Binary

  • Super Red

  • Proto star

  • White

  • Yellow

  • Blue

But we are wondering how many planets each star type can support on average, and what the minimum and maximum numbers of planets for each kind are. Is there any study suggesting the number of planets that stars of different masses could support?

star - Entropy of black hole


But as far as I know entropy is the amount of disorder.




Entropy is a measure of the number of possible microscopic states consistent with an observed macroscopic state$^1$, $S = k_\text{B}\ln N$. Fundamentally it has nothing to do with disorder, although as an analogy it sometimes works. For example, in simple situations like an $n$ point-particle gas in a box: there are many more ways to put point-particles in a box in a disorderly manner than an orderly one. However, the exact opposite may be true if they have a positive size and the box is crowded enough. Overall, disorder is just a bad analogy.



1 Even that's not quite true, but it's better than disorder. Specifically, it's a simplification under the assumption that all microstates are equally likely.




A black hole is denser than a star. For a density that high, I assume a certain amount of order (inverse entropy?) is required.




If an object is crushed inside an ideal box that isolates it and prevents any leaks to the outside, the crushed object still carries information about what it was before. And an event horizon is about as ideal a box as there can be.



Classically, black holes have no hair, meaning that the spacetime of an isolated black hole is characterized by mass, angular momentum, and electric charge. So there are two possible responses to this: either the black hole really has no structure other than those few parameters, in which case the information is destroyed, or it does have structure that's just not externally observable classically.



Thus, if information is not destroyed, we should expect the number of microstates of a black hole to be huge simply because there's a huge number of ways to produce a black hole. Roughly, at least the number of microstates of possible collapsing star remnants of the same mass, angular momentum, and charge (though this is idealized because a realistic collapsing process sheds a lot).




For a density that high, I assume a certain amount of order (inverse entropy?) is required.




Quite the opposite; black holes are the most entropic objects for their size.



In the early 1970s, physicists noticed interesting analogies between how black holes behave and the laws of thermodynamics. Most relevantly here, the surface gravity $\kappa$ of a black hole is constant (paralleling the zeroth law of thermodynamics) and the area $A$ of a black hole is classically nondecreasing (paralleling the second law). This is extended further with analogies of the first and third laws of thermodynamics, with $\kappa$ acting like temperature and $A$ as the entropy.



The problem is that for this to be more than an analogy, black holes should radiate with a temperature that is (some multiple of) their surface gravity. And they do; this is called Hawking radiation. So the area can shrink as long as there is a compensating entropy emitted to the outside:
$$\delta\left(S_\text{outside} + A\frac{k_\text{B}c^3}{4\hbar G}\right)\geq 0\text{.}$$
Thus, semi-classically, the entropy of a black hole is proportional to its surface area. In natural units, it is simply $S_\text{BH} = A/4$, which is huge because Planck areas are very small.
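
To get a feel for how huge, here is a minimal numerical sketch of $S_\text{BH} = k_\text{B} c^3 A/(4\hbar G)$ for a one-solar-mass Schwarzschild black hole (the constants and the non-rotating, uncharged idealization are my assumptions).

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
HBAR = 1.055e-34     # reduced Planck constant, J s
K_B = 1.381e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30     # solar mass, kg

def bekenstein_hawking_entropy(mass_kg):
    """Entropy of a Schwarzschild black hole: S = k_B c^3 A / (4 hbar G),
    with horizon area A = 4 pi r_s^2 and r_s = 2 G M / c^2."""
    r_s = 2 * G * mass_kg / C**2
    area = 4 * math.pi * r_s**2
    return K_B * C**3 * area / (4 * HBAR * G)

s = bekenstein_hawking_entropy(M_SUN)
print(f"S ~ {s:.1e} J/K, i.e. about {s / K_B:.1e} k_B")   # ~1e54 J/K, ~1e77 k_B
```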



Thus, we know that in a semi-classical approximation, a black hole must radiate with temperature proportional to its surface gravity and entropy proportional to its area. It's natural to wonder the next step: if a black hole has all this entropy, where is the structure? How can it have so many possible microstates if it's classically just a vacuum? But going there takes us to the land of quantum gravity, which is not yet firmly established, and is outside the scope for astronomy.

ca.analysis and odes - The eliminant of a system of differential equations

In "Leopold Kroneckers Werke, 3. Band", Teubner (1899), published by K. Hensel, we find on p. 179 in a section about applications of modulsystems the phrase: "Die Resultante der Elimination von ...".



This suggests that Thorny's above conjecture is correct and the eliminant is also called resultant. The due reference given by José Figueroa-O'Farrill is confirmed by the McGraw-Hill Dictionary of Scientific & Technical Terms.



The eliminant has originally been defined for algebraic equations. See Otto Biermann's comprehensive and detailed paper: "Über die Bildung der Eliminanten eines Systems algebraischer Gleichungen", Monatshefte für Mathematik und Physik (1894) pp. 17-32, referring (without giving sources) to Salmon-Fiedler, Günther, Sylvester and Cayley. The mode of application to linear differential operators has been suggested here by Charles Siegel, and application to linear differential equations becomes obvious when the characteristic polynomials are formed in the common way by means of the exponential function.



Application of eliminant to differential equations can be found in Paul Funk's paper "Beiträge zur zweidimensionalen Finsler'schen Geometrie", Monatshefte für Mathematik 52 (1948) pp. 194-216, and also in modern literature, namely in A.P. Alexandrov's arXiv-paper: "Dynamic systems with quantum behaviour" on p. 99.



Apropos: The word "eliminante" has acquired a general use in mathematics. This can be seen from the completely independent application of the word by Hermann Weyl in his paper "Reine Infinitesimalgeometrie", Mathematische Zeitschrift 2 (1918) pp. 384-411.




Those five identities are most closely connected with the so-called conservation laws, namely the (one-component) law of the conservation of electricity and the (four-component) energy-momentum principle. Indeed, they teach us that the conservation laws (on whose validity mechanics rests) follow in two ways, from the electromagnetic as well as from the gravitational equations; one would therefore like to call them the common eliminant of these two groups of laws.




Briefly: The conservation principles follow from the electromagnetic and the gravitational equations; one is tempted to denote them as common eliminant of these two groups of laws.

Thursday, 19 November 2009

nt.number theory - Can you get Siegel's theorem "for free" from modularity and Mazur's Eisenstein Ideal paper?

There is a well-known theorem of Shafarevich that given a finite set $S$ of primes the number of isomorphism classes of elliptic curves over $\Bbb Q$ with everywhere good reduction outside $S$ is finite.



One way to prove this, which Cremona and Lingham use here to compute all such curves, is to use Siegel's theorem that an elliptic curve over $\Bbb Q$ has only a finite number of $S$-integral points.



Here's a proof with overkill:



Given $S$ there are a finite number of possible conductors $N$ for elliptic curves with everywhere good reduction outside $S$. They must all be divisors of $2^8 3^5 d^2$ where $d$ is the product of those primes in $S$ different from 2 and 3.



The corresponding spaces $S_2(\Gamma_0(N))$ of cusp forms, for each $N$ in our finite list, are finite dimensional.



By the modularity theorem, there are hence only finitely many isogeny classes of elliptic curves with everywhere good reduction outside $S$.



By Mazur's Modular Curves and the Eisenstein Ideal there are only a finite number of isomorphism classes of elliptic curves in a given isogeny class.




Question 1: Does any of this machinery rely on Siegel's theorem?



Question 2: If the answer to question 1 is no, can this proof of Shafarevich's theorem be "cheaply extended" to deduce Siegel's Theorem from these seemingly unrelated powerful results?




By "cheaply extended" I mean without the use of techniques with the diophantine flavor of Baker's theory of linear forms in logarithms.

Wednesday, 18 November 2009

gr.group theory - Slight question variant on "order information enough to guarantee 1-isomorphism"

This is a very slight variant on the question order information enough to guarantee 1-isomorphism? that I asked a while back, with an answer in the negative.



Background repeated:



I define a 1-isomorphism between two groups as a bijection that restricts to an isomorphism on every cyclic subgroup on either side. There are plenty of examples of 1-isomorphisms that are not isomorphisms. For instance, the exponential map from the additive group of strictly upper triangular matrices to the multiplicative group of unipotent upper triangular matrices is a 1-isomorphism. Many generalizations of this, such as the Baer and Lazard correspondences, also involve 1-isomorphisms between a group and the additive group of a Lie algebra/Lie ring.



Consider the following function $F$ associated to a finite group $G$. For divisors $d_1$, $d_2$ of the order of $G$, define $F_G(d_1,d_2)$ as the number of elements of $G$ that have order equal to $d_1$ and that can be expressed in the form $x^{d_2}$ for some $x \in G$.
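
As a sketch of how cheap this invariant is to tabulate (sympy; the example groups are my own choice, purely for illustration):

```python
from sympy import divisors
from sympy.combinatorics.named_groups import CyclicGroup, DihedralGroup

def F(G):
    """Tabulate F_G(d1, d2): the number of elements of order d1 that are
    d2-th powers, for all divisors d1, d2 of |G|."""
    elems = list(G.elements)
    table = {}
    for d2 in divisors(G.order()):
        powers = {x**d2 for x in elems}
        for d1 in divisors(G.order()):
            table[(d1, d2)] = sum(1 for g in powers if g.order() == d1)
    return table

# Two non-isomorphic groups of order 8: the cyclic group and the dihedral
# group of the square. Their F-tables differ (C8 has elements of order 8,
# D4 does not), so F already separates them.
print(F(CyclicGroup(8)) == F(DihedralGroup(4)))   # False
```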



New question: If G is a finite abelian group and H is a finite (not necessarily abelian) group such that $F_G = F_H$, is it necessary that there is a 1-isomorphism between G and H?



For the original question, I had not insisted that one of the groups be abelian, and Tom Goodwillie provided a counterexample with both groups non-abelian of order 32.



The reason for my interest is as follows: I want to determine which groups are 1-isomorphic to abelian groups. This will help me with exploring some generalizations of the Lazard correspondence. To do this properly, I would need to construct a combinatorial structure (such as the directed power graph) that stores all the information of the group up to 1-isomorphism. However, constructing this structure and then verifying whether the graphs thus constructed for two groups are isomorphic is computationally somewhat harder. On the other hand, $F_G$ can be stored easily and we can quickly check for two groups whether their $F$s coincide.



Apart from this computational perspective, the question is also of academic interest to me.

Tuesday, 17 November 2009

observation - Where is the center point for the Supergalactic coordinate system?

According to Wikipedia, the origin of the system is:



The zero point (SGB=0°, SGL=0°) lies at (l=137.37°, b=0°). In J2000 equatorial coordinates, this is approximately (2.82 h, +59.5°).



In the same article there is a recent paper referenced on this topic, which explains that [the zero point] is one of the two regions where the SGP is crossed by the Galactic plane.



Perhaps you already know this website, which shows many good pictures. I was trying to find a better figure, but I guess you have to push your imagination according to the definition.



EDIT: According to this, SG coordinates are similar to the galactic ones, so that the North Supergalactic Pole is defined in Galactic coordinates, which means that they have the same origin (i.e., the Sun).

Sunday, 15 November 2009

ra.rings and algebras - Behavior of the projective dimension of modules in a continuous chain of extensions

Let $R$ be an arbitrary ring. Let $D$ be the class of $R$-modules of projective dimension less than or equal to a natural number $n$. If $L$ is the direct union of a continuous chain of submodules $\{L_{\alpha},\alpha < \lambda\}$ for some ordinal number $\lambda$ (this means that $L=\bigcup_{\alpha}L_{\alpha}$, $L_{\alpha}\subseteq L_{\alpha'}$ if $\alpha \leq \alpha' <\lambda$, and $L_{\beta}=\bigcup_{\alpha <\beta} L_\alpha$ when $\beta < \lambda$ is a limit ordinal) with $L_{0}\in D$ and $L_{\alpha +1}/L_{\alpha}\in D$, $\forall \alpha<\lambda$, can one show that $L \in D$?



PS: We know that when $R$ is a perfect ring, $D$ is closed under direct limits, so we can prove the above by transfinite induction. But if $R$ is not perfect, how can we show this?

Saturday, 14 November 2009

oc.optimization control - Uniformly distribute a population in a given search space

I am trying to uniformly distribute a finite number of particles into a 2D search space to get me started with an optimization problem, but I am having a hard time doing it. I am thinking that it might have something to do with convex sets, but I might as well be totally off, so I am asking you guys for a proper way to do it.



Edit: OK, so I have to implement the Particle Swarm Optimization algorithm in order to get the polynomial input for Baker's algorithm. To get started with PSO, I have to uniformly distribute the particles in the search space (the initial example I got was of the distribution of particles inside a cube, but that's kind of vague to me). What does it mean to uniformly distribute in the search space?
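
For what it's worth, here is a minimal sketch of the usual way PSO implementations do this (the box bounds, particle count and function name are placeholders of mine): draw each coordinate of each particle independently and uniformly within that coordinate's bounds.

```python
import numpy as np

def init_particles(n_particles, bounds, seed=None):
    """Uniformly distribute particles in a box-shaped search space.
    bounds is a list of (low, high) pairs, one per dimension."""
    rng = np.random.default_rng(seed)
    lows = np.array([b[0] for b in bounds], dtype=float)
    highs = np.array([b[1] for b in bounds], dtype=float)
    return rng.uniform(lows, highs, size=(n_particles, len(bounds)))

# 30 particles in a 2D search space: x in [-5, 5], y in [0, 10]
swarm = init_particles(30, [(-5.0, 5.0), (0.0, 10.0)], seed=42)
print(swarm.shape)                            # (30, 2)
print(swarm.min(axis=0), swarm.max(axis=0))   # each column stays inside its bounds
```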

Friday, 13 November 2009

ca.analysis and odes - Approximation to divergent integral

First, cut off the tail towards infinity:



$$f(x) = \int_{x}^1 \frac{\Phi(t)}{t^5}\, dt + \int_1^{\infty} \frac{\Phi(t)}{t^5}\, dt.$$



The second term is a constant, so you can compute it numerically once and for all.



Write
$$e^{i \pi u^2/2} = 1 + \frac{i \pi}{2} u^2 + R(u)$$
and
$$\int_{0}^t e^{i \pi u^2/2}\, du = t + \frac{i \pi}{6} t^3 + \int_{0}^t R(u)\, du.$$



So
$$\frac{\Phi(t)}{t^5} = \left( t^{-4} + \frac{i \pi}{6} t^{-2} + t^{-5} \int_{0}^t R(u)\, du \right) \left( 1 + \frac{i \pi}{2} t^2 + R(t) \right)=$$
$$t^{-4} + \frac{2 \pi i}{3} t^{-2} + \left( t^{-4} R(t) - \frac{\pi^2}{12} + \int_{0}^t R(u)\, du \right).$$



So
$$\int_{x}^1 \frac{\Phi(t)}{t^5}\, dt = \frac{1}{3}\left( x^{-3} - 1 \right) + \frac{2 \pi i}{3} \left( x^{-1} -1 \right) + \int_{x}^1 \left( t^{-4} R(t) - \frac{\pi^2}{12} + \int_{0}^t R(u)\, du \right) dt.$$



The integrands in the last term are bounded functions, and they are being integrated over bounded domains, so there is no problem approximating them numerically.



If you want an asymptotic formula, instead of a numerical approximation, you should be able to keep taking more terms out to get a formula like
$$f(x) = \frac{1}{3} x^{-3} + \frac{2 \pi i}{3} x^{-1} + C + a_1 x + a_2 x^2 + \cdots + a_n x^n + O(x^{n+1}) \quad \mathrm{as}\ x \to 0.$$
You probably won't be able to get the constant $C$ in closed form, because it involves all those convergent integrals. The other $a_i$ will be gettable in closed form, although they will get worse and worse as you compute more of them.
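
As a numerical sanity check of the two singular terms, here is a sketch. Important caveat: the answer never states $\Phi$ explicitly; from the two expansions above it is consistent with $\Phi(t) = e^{i\pi t^2/2}\int_0^t e^{i\pi u^2/2}\,du$, and that is what the sketch assumes. Subtracting the two singular terms from $\int_x^1 \Phi(t)/t^5\,dt$ should then leave something that settles toward a constant as $x \to 0$.

```python
import mpmath as mp

def Phi(t):
    """Assumed form of Phi, reconstructed from the expansions above:
    Phi(t) = exp(i pi t^2 / 2) * integral_0^t exp(i pi u^2 / 2) du."""
    return mp.exp(1j * mp.pi * t**2 / 2) * mp.quad(lambda u: mp.exp(1j * mp.pi * u**2 / 2), [0, t])

def f_main(x):
    """The x-dependent part of f: integral from x to 1 of Phi(t)/t^5 dt."""
    return mp.quad(lambda t: Phi(t) / t**5, [x, 1])

# Subtracting the singular terms should leave something tending to a constant.
for x in (0.1, 0.05, 0.025):
    singular = (x**-3 - 1) / 3 + 2j * mp.pi / 3 * (x**-1 - 1)
    print(x, mp.nstr(f_main(x) - singular, 8))
```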

Thursday, 12 November 2009

formation - What is radiation pressure and how does it prevent a star from forming?

Radiation pressure is nothing but electromagnetic interaction.



Imagine a hydrogen atom hit by a stream of photons coming from the same direction. Although the atom as a whole is neutral, the electron and the proton are physically displaced, forming a dipole, i.e. a positive-negative charge couple. Some of the photons therefore scatter against the dipole, transferring to it some momentum. So the atom starts moving in the same direction as the photons. If the photons are in the ultraviolet, the electron can be excited to higher orbitals and possibly stripped from the atom. In this case the scattering is even more efficient.



Now imagine a star surrounded by a layer of hydrogen. Gravity attracts the layer towards the star. The photons emitted by the star try to push the hydrogen atoms away from it, through the electromagnetic force.



Very massive stars are very luminous and hot, which means that they emit a lot of ultraviolet photons. When the outward push transferred from the photons to the layer is larger than the gravitational attraction, the layer begins expanding, effectively stopping the growth of the star.



In the figure posted by the OP there is also dust. I don't know the details of photons-dust-gas interactions (we need a stellar atmosphere expert, I guess), but the basic principle is nonetheless the same.

Wednesday, 11 November 2009

amateur observing - Why are distant galaxies not visible in the observable Universe?

Quick answer: Because they haven't entered our event horizon. Some never will. And some will move out of our event horizon - the last photons from them that will ever be received here are being sent right now.



Let's do some fact checks first:




[...]the galaxies that are now at the 'edge' (not visible
theoretically) must have been (at some point in time) at place around
where the Earth is at now[...]




According to current calculations, the first galaxies may have formed around 200 million years after the Big Bang - older estimates went with the 400-500 MY range. For a long, long while, there were no stars to be seen. So if you go back in time you won't be seeing the same structures we see today.



Second, and that may be an awkward mental experiment, nothing else was occupying our place other than ourselves. I'll ask you to excuse the cliché, but the old balloon example is very apt to explain this:



[Figure: the expanding-balloon analogy]



As the universe expands, the distance between celestial bodies increases. Now, here's a way to put it: space is being generated in between the objects.



And not only that - while there's a limit on how fast you can move on the balloon's surface, it doesn't apply to the amount of space being generated.



As a direct consequence there's a bubble around us that basically works in the very same way as a black hole's event horizon does:



  • In a black hole, gravity is so strong that its pull on photons exceeds C (the speed of light, or 299,792,458 m/s);

  • On a fringe object, the amount of space generated by the expansion, per second, may exceed C; its photons will never reach Earth.

And it seems that we live in an accelerating universe - that is, the rate of expansion is actually going up. If that's correct, some fringe objects will slip away from our event horizon, disappearing (from our point of view) in a foretaste of the heat death of the universe.



Scary, huh?

rt.representation theory - How can we describe the splitting of nilpotent orbit for "very even" partitions in the special orthogonal group?

I don't know the answer, but I am posting to make sure I understand the question and give some partial ideas.



As Ben says, the question is hard to follow. I am interpreting it this way:



Consider the cone of nilpotent elements in $\mathfrak{so}_{2n}$. How does it break into $SO_{2n}$ orbits under the adjoint action? How is this related to the decomposition according to Jordan normal form? (Note: the decomposition according to Jordan normal form can be thought of as the $SL_{2n}$ orbits for the nilpotent matrices in $\mathfrak{sl}_{2n}$.)




Some partial thoughts. The Lie algebra $\mathfrak{so}_{2n}$ consists of skew-symmetric matrices. Any skew-symmetric matrix has even rank. Moreover, if $A$ is skew-symmetric, so is every odd power of $A$. So $A$, $A^3$, $A^5$, et cetera all have even rank. This implies that, if $\lambda$ is the partition of $2n$ coming from the Jordan form of $A$, then every even part of $\lambda$ occurs with even multiplicity. So, when $2n=4$, the only partitions that occur are $31$, $22$ and $1111$. So we have determined which Jordan forms appear.



In the particular case $2n=4$, we have $\mathfrak{so}_4 \cong \mathfrak{sl}_2 \oplus \mathfrak{sl}_2$. Using this isomorphism, one checks that $n_1 \oplus n_2$ is nilpotent if and only if $n_1$ and $n_2$ are. We get Jordan form $31$ if both $n_1$ and $n_2$ are nonzero nilpotents; we get $22$ if one is zero and the other isn't; and we get $1111$ if both are zero.



So we see that Jordan form $22$ splits into two orbits, according to which of $n_1$ and $n_2$ is nonzero. Presumably, rajamanikkam knows some theorem which says that this happens whenever all the parts of $\lambda$ are even, and would like to know how to make the theorem explicit.



In general, I know how I would attack this problem. The Jordan form of $A$ fixes the
Jordan form of $S_{\mu}(A)$, for any Schur functor $S_{\mu}$. But the $SO_{2n}$ conjugacy class fixes the Jordan form of $A$ acting on any $\mathfrak{so}_{2n}$ representation. In particular, the Jordan form of the action of $A$ on the spin representations should give additional data, allowing us to separate distinct $SO_{2n}$ orbits.



I don't know the details of how this works, but I'm sure someone reading this does!

ag.algebraic geometry - Why do Littlewood-Richardson coefficients describe the cohomology of the Grassmannian?

There are several rings-with-bases to get straight here. I'll explain that, then describe three serious connections (not just Ehresmann's proof as recounted in the OP).



The wrong one is $Rep(GL_d)$, whose basis is indexed by decreasing sequences in ${\mathbb Z}^d$.



That has a subring $Rep(M_d)$, representations of the Lie monoid of all $d\times d$ matrices, whose basis is indexed by decreasing sequences in ${\mathbb N}^d$, or partitions with at most $d$ rows.



That is a quotient of $Rep({\bf Vec})$, the Grothendieck ring of algebraic endofunctors of ${\bf Vec}$, whose basis (coming from Schur functors) is indexed by all partitions. Obviously any such functor will restrict to a rep of $M_d$ (not just $GL_d$); what's amazing is that the irreps either restrict to $0$ (if they have too many rows) or again to irreps!



  1. Harry Tamvakis' proof is to define a natural ring homomorphism $Rep({\bf Vec}) \to H^*(Gr(d,\infty))$, applying a functor to the tautological vector bundle, then doing a Chern-Weil trick to obtain a cohomology class. (It's not just the Euler class of the resulting huge vector bundle.)
    The Chern-Weil theorem is essentially the statement that Harry's map takes alternating powers to special Schubert classes. So then it must do the right thing, but to know that he essentially repeats the Ehresmann proof.


  2. Kostant studied $H^* (G/P)$ in general, in "Lie algebra cohomology and
    something something Schubert cells" (sorry!), by passing to the compact
    picture $H^* (K/L)$, then to de Rham cohomology, then taking $K$-invariant
    forms, which means $L$-invariant forms on the tangent space $Lie(K)/Lie(L)$.
    Then he complexifies that space to $Lie(G)/Lie(L_C)$, and identifies that
    with $n_+ \oplus n_-$, where $n_+$ is the nilpotent radical of $Lie(P)$.
    Therefore forms on that space are $Alt^* (n_+) \otimes Alt^* (n_-)$.


Now, there are two things left to do to relate this space to $H^* (G/P)$. One is to take cohomology of this complex (which is hard, but he describes the differential), and the other
is to take $L$-invariants as I said. Luckily those commute. Kostant degenerates the differential so as to make sense on each factor separately (at the cost of not quite getting $H^* (G/P)$).



Theorem: (1) Once you take cohomology, $Alt^* (n_+)$ is a multiplicity-free $L$-representation. So when you tensor it with its dual and take $L$-invariants, you get a canonical basis by Schur's lemma. (2) This basis is the degeneration of the Schubert basis.



Theorem: (1) If $P$ is (co?)minuscule, the differential is zero, so you can skip the take-cohomology step. That is, $Alt^* (n_+)$ is already a multiplicity-free $L$-rep. The Schur's lemma basis has structure constants coming from representation theory. (2) In the Grassmannian case, the degeneration doesn't actually affect the answer, so the product of Schubert classes does indeed come from representation theory.



I believe the degenerate product on $H^*(G/P)$ is exactly the one described by [Belkale-Kumar].



It's fun to see what's going on in the Grassmannian case -- $L = U(d) \times U(n-d)$, $n_+ = M_{d,n-d}$, and $Alt^* (n_+)$ contains each partition (or rather, the corresponding $U(d)$-irrep) fitting inside that rectangle, tensor its transpose (or rather, the $U(n-d)$-irrep).



I think this is going to be the closest to what you want, for other groups' Grassmannians.



  3. Belkale has the best (least decategorified) proof I've seen. He takes three Schubert cycles meeting transversely, and for each point of intersection, constructs an actual invariant vector inside the corresponding triple product of representations. The set of such vectors is then a basis.

light - How do scientists determine the age of stars?

Scientists look at groups of stars to determine their ages. As another poster already said, the HR diagram is the tool used to determine ages. The colors emitted by the stars are also used to determine ages, because color is indicative of where a star is in its life cycle.



How to Learn a Star's True Age



As the article says, stars are assessed by clusters. The stars in a cluster formed around the same time, and from there scientists determine the age. We have the following going on: brightness, color, and clusters. Size can be used as a determinant; however, there are white dwarf stars which are very small and dim as observed from Earth, yet are very old stars.



Also, to clarify: just because a star is far away, so that its light can take thousands of years to reach us, does not mean it's old. The light-travel time simply measures how long the light has been travelling, not the age of the star.

Tuesday, 10 November 2009

the sun - How do you find the altitude of the Sun if you are on the Moon?

The question is:




If you stand on the equator of the Moon at midday, while the Moon is located at the line of nodes, what is the altitude of the Sun at that moment?




Information: This is when the Moon is located at the line of nodes (the line of nodes is the line where the ecliptic plane and the Moon's orbital plane cross; the two planes make an angle of approximately 5 degrees). The equatorial plane of the Sun makes an angle of 1 degree, 32 arc minutes and 40 arc seconds.



The given solution ranges from 88 degrees, 27 arc minutes and 20 arc seconds up to 90 degrees.



But I don't understand why the answer isn't 1 degree, 32 arc minutes and 40 arc seconds, because the ecliptic plane, where the Sun is located, lies above the horizon (or celestial equator) by that angle, which I thought would equal its altitude.
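
A plausible reading of the quoted solution is that the Sun's altitude at that moment is $90^\circ$ minus the angle between the plane the observer stands in and the ecliptic, which is at most the quoted $1^\circ 32' 40''$. The reading itself is my guess; the subtraction behind the quoted lower bound is easy to check:

```python
def dms_to_arcsec(d, m, s):
    """Convert degrees/arcminutes/arcseconds to arcseconds."""
    return (d * 60 + m) * 60 + s

def arcsec_to_dms(a):
    """Convert arcseconds back to (degrees, arcminutes, arcseconds)."""
    d, rem = divmod(a, 3600)
    m, s = divmod(rem, 60)
    return d, m, s

altitude = dms_to_arcsec(90, 0, 0) - dms_to_arcsec(1, 32, 40)
print(arcsec_to_dms(altitude))   # (88, 27, 20), matching the quoted lower bound
```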

st.statistics - variance of $1/(X+1)$ where $X$ is Poisson-distributed with parameter $lambda$

Sorry, I gave a moronic answer before. Let me try to give a better one.



There should be no expression for $f(\lambda) := \sum_{k \geq 1} \lambda^k/(k^2 k!)$ in elementary functions. If there were, then $g(\lambda) = \lambda f'(\lambda) = \sum_{k \geq 1} \lambda^{k}/(k \cdot k!)$ would also be elementary. But $g(\lambda) = \int_0^{\lambda} \frac{e^t-1}{t}\, dt$ and $e^t/t$ is a standard example of a function without an elementary antiderivative.
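
A quick numerical sanity check of the identity $g(\lambda) = \int_0^{\lambda} \frac{e^t-1}{t}\, dt$, comparing the series against numerical quadrature (just a sketch, not part of the original argument):

```python
import math
from scipy.integrate import quad

def g_series(lam, terms=60):
    """g(lambda) = sum_{k>=1} lambda^k / (k * k!)"""
    return sum(lam**k / (k * math.factorial(k)) for k in range(1, terms))

def g_integral(lam):
    """g(lambda) = integral_0^lambda (e^t - 1)/t dt  (the integrand tends to 1 as t -> 0)"""
    value, _ = quad(lambda t: (math.exp(t) - 1) / t if t > 0 else 1.0, 0, lam)
    return value

for lam in (0.5, 1.0, 3.0):
    print(lam, g_series(lam), g_integral(lam))   # the two columns agree
```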

Monday, 9 November 2009

convention - Can a corollary follow a conjecture?

I think it's generally bad form to have a corollary dependent on an earlier conjecture. I recommend one of the following:



Theorem: Assuming Conjecture A, properties X, Y and Z are true.



or



Theorem: Conjecture A implies X, Y and Z.



Most importantly, it should be crystal clear that the result is dependent on the conjecture.

Saturday, 7 November 2009

dg.differential geometry - Curvature and Parallel Transport

It appears to me that one reason why nobody has proved the formula yet is that the formula is still wrong. First, the formula has to depend on $X$ and $Y$. If you rescale $X$ and $Y$, the left side of the formula scales but the right side stays constant. That can't be. Second, the two sides of the equation do not scale the same under a constant scaling of the metric.



I consider the derivation of the correct version to be a reasonable if challenging exercise for a serious graduate student in differential geometry, so I was expecting someone else to provide the details. You can do this using only the basic definitions and properties of a Riemannian metric, its connection, and Riemann curvature with the fundamental theorem of calculus and the product rule for differentiation. Although I learned most of my Riemannian geometry after I was out of graduate school, I spent many, many hours doing calculations and arguments like this over and over again. Almost all of global Riemannian geometry involves working with Jacobi fields using arguments like the one used to prove this local formula.



But I got tired of waiting, so I wrote out all the details. If you're a student, I recommend that you try to read as little of my proof as possible or just scan it quickly and try to finish it yourself.



Warning: I wrote this up very quickly and did not check for typos and errors. It's possible that my final formula is still not right, but I am confident that my argument can be used to obtain a correct formula. I also did not provide every last detail, so, if you're unfamiliar with an argument like this, you need to do a lot of work making sure that everything really works. The key trick is pulling everything back to the unit square, where elementary calculus can be used. I'm sure this trick can be replaced by Stokes' theorem on the manifold itself, but that's too sophisticated for my taste.



Holonomy calculation



ADDED:



The correct formula, if you assume $|X \wedge Y| = 1$, is



$P_\gamma Z - Z = \mathrm{Area}(c)\, R(X,Y)Z$



This scales properly when you rescale the metric by a constant factor. Notice that the left side is invariant under rescaling of the metric.
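
Here is a sketch of that scaling check, under the stated normalization (my own wording, so treat it as a hint rather than part of the answer). Rescale the metric by $\tilde g = a^2 g$ with $a$ constant; this leaves the Levi-Civita connection, and hence parallel transport, unchanged, so the left side is untouched. On the right side,

$$\widetilde{\mathrm{Area}}(c) = a^2\, \mathrm{Area}(c), \qquad R(\tilde X, \tilde Y)Z = R(X/a,\, Y/a)Z = a^{-2}\, R(X,Y)Z,$$

where $\tilde X = X/a$, $\tilde Y = Y/a$ are rescaled so that $|\tilde X \wedge \tilde Y|_{\tilde g} = 1$, and the $(1,3)$ curvature tensor itself is unchanged by a constant rescaling. The factors $a^{2}$ and $a^{-2}$ cancel, so the right side is invariant as well.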



I recommend looking at papers written by Hermann Karcher, especially the one with Jost on almost linear functions, the one with Heintze on a generalized comparison theorem, and the one on the Riemannian center of mass. I haven't looked at this or anything else in a long time, but I have the impression that I learned a lot about how to work with Jacobi fields and Riemann curvature from these papers.



Finally, don't worry about citing anything I've said or wrote. Just write up your own proof of whatever you need. If it happens to look very similar to what I wrote, that's OK. I consider all of this "standard stuff" that any good Riemannian geometer knows, even if they would say it differently from me.



EVEN MORE: There are similar calculations in my paper with Penny Smith: P. D. Smith and D. Yang, "Removing Point Singularities of Riemannian Manifolds", Trans. Amer. Math. Soc. 333, 203-219, especially in section 7, titled "Radially parallel vector fields". In section 5, we attribute our approach to H. Karcher and cite specific references.

computational complexity - P/poly algorithm for polynomial identity testing

The Schwartz-Zippel lemma is very fast: only one evaluation of the formula at one random point. Nothing better is known that simultaneously matches Schwartz-Zippel in both running time and error probability. But Schwartz-Zippel requires a lot of randomness in each repetition: a fresh random point consisting of n field elements.
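
For concreteness, here is a minimal sketch of the basic test (my own illustration, not from the original answer): treat the two formulas as black boxes over a large prime field and compare them at a uniformly random point; by the Schwartz-Zippel lemma, a single trial errs with probability at most $d/p$ when the polynomials differ and have total degree at most $d$.

```python
import random

P = 2**61 - 1  # a large prime; we evaluate over the field F_P

def schwartz_zippel_equal(f, g, n_vars, trials=1):
    """Randomized identity test for two black-box polynomials over F_P.

    If f == g as polynomials, this always returns True.  If f != g and both
    have total degree at most d, each trial wrongly reports equality with
    probability at most d / P (Schwartz-Zippel), at the cost of n_vars fresh
    random field elements per trial."""
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(n_vars)]
        if f(point) != g(point):
            return False
    return True

# Toy example: (x+y)^2 versus x^2 + 2xy + y^2 (equal) and x^2 + y^2 (unequal).
f = lambda v: pow(v[0] + v[1], 2, P)
g = lambda v: (pow(v[0], 2, P) + 2 * v[0] * v[1] + pow(v[1], 2, P)) % P
h = lambda v: (pow(v[0], 2, P) + pow(v[1], 2, P)) % P
print(schwartz_zippel_equal(f, g, n_vars=2))  # True
print(schwartz_zippel_equal(f, h, n_vars=2))  # False (except with tiny probability)
```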



Have you tried some of the polynomial identity tests with better tradeoffs between randomness and error? Their running time (and the running time dependence on the error) is a bit worse than Schwartz-Zippel, but the number of random bits needed is much less than Schwartz-Zippel. So in the application of Adleman's theorem, the sizes of the witnesses you need to hard-code in the non-uniform circuit will shrink, but the time dependence on error increases, potentially making the number of necessary witnesses increase. Given these complex tradeoffs, I'm not sure which of them would work best for obtaining small circuits.



For a quick overview of these alternative identity tests and their tradeoffs, see the table on p.3 in Agrawal and Biswas: http://www.cse.iitk.ac.in/users/manindra/algebra/identity.pdf

the moon - How to find Exomoons?

I'm going to take a stab at answering this. With our current technologies, detecting exomoons can prove hard; however, there are various techniques being used today, such as:



  1. Analyzing data from the Kepler Spacecraft

  2. Dynamic effects – the exomoon tugs the planet, which causes deviations in the times and durations of the host planet’s transits. This is similar to the radial velocity technique for detecting exoplanets. Source: UniverseToday

  3. Transit effects – the exomoon may transit the star immediately before or just after the planet does. This will cause an added dip in the observed light. See this video for a great demonstration. This is similar to the light curve technique for detecting exoplanets. Source: UniverseToday

  4. Gravitational Microlensing - which is a technique used to detect exoplanets like you stated above however it may also reveal signs of an exomoon. Read this source

I found this information doing some simple googling; feel free to edit or add to it.

ag.algebraic geometry - MaxSpec, Spec, ... "RadSpec"? Or, why not look at all radical ideals?

The space Spec(R) has a universal property:



In the category of sets there is no such thing as the initial local ring into which some given ring R maps, i.e. a local ring L and a map $f: R \to L$ such that any map from R into a local ring factors through f.



But a ring R is a ring object in the topos of Sets. Now if you are willing to let the topos in which it lives vary, such a "free local ring on R" does exist: it is the ring object in the topos of sheaves on Spec(R) which is given by the structure sheaf of Spec(R). So the space you were wondering about is part of the solution of forming a free local ring over a given ring (you can reconstruct the space from the sheaf topos, so you could really say that it "is" the space).



An even nicer reformulation of this is the following (even more high brow, but it nicely singles out the space):



A ring R, i.e. a ring in the topos of sets, is the same as a topos morphism from the topos of sets into the classifying topos T of rings (by definition of classifying topos). There also is a classifying topos of local rings with a map to T (which is given by forgetting that the universal local ring is local). If you form the pullback (in an appropriate topos sense) of these two maps you get the topos of sheaves on Spec(R) (i.e. morally the space Spec(R)). The map from this into the classifying topos of local rings is what corresponds to the structure sheaf.



Isn't that nice? See Monique Hakim's "Topos annelés et schémas relatifs" for all this (the original reference, free of logic), or alternatively Moerdijk/Mac Lane's "Sheaves in Geometry and Logic" (with logic and formal languages).

general relativity - How are black holes doors to other universes?

It is correct that the Kerr black hole solution of GTR allows travel between universes. However, that does not mean you could go to another universe by actually jumping into any kind of black hole.



To motivate the resolution to this conundrum, let's start off very easy: suppose you stand on the ground with a ball in your hand, and you throw it with some initial velocity. For simplicity, let's ignore everything except a uniform gravitational field. Mathematics will then tell you that the ball follows a parabolic arc, and when and where the ball will hit the ground. And if you take the resulting equations too literally, then it will also tell you that the ball hits the ground twice: once in the future, once in the past. But you know the past solution isn't right: you held the ball; it didn't actually continue its parabolic arc into the past.
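
As a small illustration of that point (my own sketch, standard projectile motion with the ball released from hand height $h_0 > 0$): solving for when the height is zero gives two roots, and the negative one is the unphysical "landing in the past" that gets discarded.

```python
import sympy as sp

t = sp.symbols('t')
g, v0, h0 = sp.symbols('g v0 h0', positive=True)

# Height of a ball thrown upward with speed v0 from hand height h0:
y = h0 + v0*t - sp.Rational(1, 2)*g*t**2
roots = sp.solve(sp.Eq(y, 0), t)
print(roots)
# [(v0 - sqrt(2*g*h0 + v0**2))/g, (v0 + sqrt(2*g*h0 + v0**2))/g]
# One of the two roots is negative -- a "landing" in the past that we discard
# on physical grounds, just as the white-hole half of the maximally extended
# solution is discarded for a black hole formed by collapse.
```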



A morally similar kind of thing occurs for, say, a Schwarzschild black hole. If you look at it in the usual Schwarzschild coordinates, there's a problem at the horizon. Mathematics will then tell you that the problem is just with the coordinate chart, and that there's actually an interior region to the black hole that becomes apparent in different coordinates. And if you do this generally enough, it will tell you that there's more to it than even that: there's also a white hole with a reverse horizon and its exterior region--another universe. This full "maximally extended" Schwarzschild spacetime has this other universe connected to ours via an "Einstein-Rosen bridge" that then "pinches off", producing separate black and white holes.



Of course, that too is an artifact of mathematical idealization: an actual black hole is not infinitely extended in the past and future; it was produced by something, such as a stellar collapse. (And the "bridge" isn't traversable anyway; one would be destroyed in the singularity if one tried.)



Finally, on to the Kerr solution: it's a bit better because formally the singularity is avoidable, unlike in the Schwarzschild case. However, it's still physically unreasonable: in addition to the fact that actual black holes aren't eternal, the interior of the Kerr solution is unstable with regard to any infalling matter, which will perturb the solution into something else entirely. Therefore, it cannot be taken as physically meaningful. Still, it is true that the full Kerr spacetime contains a way into another universe--in fact, infinitely many of them, chained one after another.



If you're interested in the details of its structure, you could look at some Penrose diagrams of those black hole solutions.

Thursday, 5 November 2009

dg.differential geometry - What is the relationship between various things called holonomic?

I don't know the connection between the four points, but it may help to redefine holonomy in your first point. A constraint is generally a regular distribution (in the sense of a subbundle of the tangent bundle of the configuration manifold).



If that distribution is integrable, then the constraint is said to be holonomic. And indeed, by restricting the system to one particular integral submanifold, one obtains the definition of holonomy in your first point.



The point is, in mechanics, "holonomic" is just another word for "integrable distribution". If the constraint distribution is not integrable, the system is called "nonholonomic".



Maybe that helps to make a connection with point 4?

the sun - One year on the sun

I assume you mean: if a sun were 'rotating around' another sun, how long would it take to complete a revolution?



Here is one line of reasoning. Since they both have equal mass, they would rotate around their common center of mass. So now, several factors come into the picture.



  • Distance: How far they are from each other.

  • Mass: the mass of each sun, i.e. their individual masses (the same in this case, of course)

Using the mass and distance we can make some basic calculations using $F = G\frac{M^2}{d^2}$, where $M$ is the mass of each sun and $d$ is the separation between the two 'suns'. Keeping in mind that they must be rotating about their center of mass, which lies halfway between them in this case, we can use $a = \frac{v^2}{r}$ accordingly and get to the solution you are seeking.



So, depending on the above factors (there may be a few more), we can find the linear or angular velocity with which they rotate around their center of mass, and from that calculate the period of revolution, as the sketch below illustrates.
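
A minimal numerical sketch of that calculation, assuming two one-solar-mass stars separated by 1 au (both values are purely illustrative placeholders):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30         # one solar mass, kg   (illustrative value)
d = 1.496e11         # separation, m        (1 au, illustrative value)

# Force balance for circular orbits about the center of mass (radius r = d/2):
#   M v^2 / (d/2) = G M^2 / d^2   =>   v = sqrt(G M / (2 d))
v = math.sqrt(G * M / (2 * d))
T = 2 * math.pi * (d / 2) / v          # period = circumference / speed

print(f"orbital speed ~ {v/1e3:.1f} km/s, period ~ {T/86400:.1f} days")

# Cross-check with Kepler's third law using the total mass 2M:
T_kepler = 2 * math.pi * math.sqrt(d**3 / (G * 2 * M))
print(f"Kepler check: {T_kepler/86400:.1f} days")
```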

Tuesday, 3 November 2009

propulsion - Is nuclear fusion a viable means of powering spacecraft?

Current experimental fusion reactors are still decades away from generating a net energy output. The test reactor at the National Ignition Facility made a crucial breakthrough last year when they first managed to get more energy out of their fusion chamber than they put into it. But keep in mind that:



  • The energy output was only positive if you measure the input energy as the energy of the laser beams entering the fusion chamber. But the process that creates these laser beams is only about 1% efficient. Also, there is no method yet to extract the energy from the fusion chamber and turn it into electricity, and you can't assume that such a method would be 100% efficient either. So the energy balance of the facility as a whole is still orders of magnitude away from being positive (see the rough arithmetic after this list).

  • The positive output lasted merely about 150 picoseconds.

  • The test reactor currently has the size of a factory complex.
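
To see why "orders of magnitude away" is fair, here is a rough back-of-envelope calculation; every number in it is an illustrative assumption, not a measured value:

```python
# Rough, illustrative orders of magnitude only -- the numbers below are
# assumptions for the sake of the arithmetic, not measured facility values.
target_gain        = 1.5    # fusion energy out / laser energy on target
laser_efficiency   = 0.01   # wall-plug power -> laser light (the "1%" above)
conversion_to_grid = 0.4    # hypothetical heat -> electricity efficiency

facility_gain = target_gain * laser_efficiency * conversion_to_grid
print(f"overall facility gain ~ {facility_gain:.3f}")   # ~0.006, far below 1
```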

It will still take decades until we have a fusion facility capable of completely(!) powering itself. It will take even longer until the technology is feasible for economic use and even longer until the technology is miniaturized enough to be considered for use in space.



I will try to update my answer when this has happened. But I am not sure I will still be alive then.



There is, however, another method with which we have successfully managed to create a fusion reaction with positive output: the fusion bomb. Unfortunately, the only way to use its energy is in the form of a single, huge explosion, larger than that of a common nuclear fission bomb (because a fission bomb is used to ignite the fusion reaction). Propelling a spacecraft through shockwaves produced by nuclear bombs is obviously crazy. But not so crazy that nobody ever thought about it. There is a theoretical concept called Project Orion which is exactly that. In theory it doesn't even look bad. It might in fact currently be the only viable method for manned interstellar travel within our technical capabilities. But it never got beyond theoretical planning, because:



  1. It would need to be huge to be efficient and thus insanely expensive.

  2. Putting nuclear weapons into orbit would be a violation of the Outer Space Treaty of 1967.

Sunday, 1 November 2009

ag.algebraic geometry - References for Donaldson-Thomas theory and Pandharipande-Thomas theory?

I would be very happy if such material existed!!!



But just to satisfy the first curiosity,
there is a one-hour lecture by Richard Thomas online at MSRI:



Counting curves in 3-folds, 2009



http://www.msri.org/communications/vmath/VMathVideos/VideoInfo/4118/show_video



I would like to add just one little thing that I know about DT and find cool. Consider a 3-dimensional CY manifold X with a holomorphic volume form $W$.



Statement. On the space of smooth 2-dimensional surfaces in X there is a natural (possibly multi-valued) functional F, defined by $W$. Moreover, holomorphic curves in X are exactly the critical points of this functional.



Definition of the functional. Take a surface S and define F(S)=0. For any other surface $S_1$ homologous to S, consider a 3-manifold M whose boundary is $S-S_1$. Integrate W over M; this gives the value of F at $S_1$.



It is not hard to check that holomorphic curves are critical points of F, so counting holomorphic curves in a CY 3-fold can be seen as finding the number of critical points of a functional.
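
Here is a sketch of that check, in my own words (so treat the details as a hint rather than part of the original statement). The first variation of F along a vector field $v$ on $S_1$ is

$$\delta F\big|_{S_1}(v) \;=\; \int_{S_1} \iota_v W .$$

Since $W$ has type $(3,0)$ it is complex-linear in each slot, so on a holomorphic curve, whose tangent plane is spanned by $e$ and $Je$,

$$(\iota_v W)(e, Je) \;=\; i\, W(v, e, e) \;=\; 0 ,$$

and every holomorphic curve is therefore a critical point of F.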

ap.analysis of pdes - Short-time Existence/Uniqueness for Non-linear Schrodinger with Loss of Several Derivatives

Just a few thoughts. The answer to your question depends on a number of key factors. To focus, let us consider a nonlinear term like $F(D_x^k u)$ and let us work in one space dimension.



1) How large is $k$? If $k \le 2$ you can linearize the equation and work in Sobolev spaces. Of course you need some structural assumption on the nonlinearity (otherwise you may take e.g. $F(u_{xx}) = \pm 2i u_{xx} + \ldots$, and create all sorts of difficulties). There are some classical works by Kenig, Ponce, Vega on this (see "The Cauchy problem for the quasilinear S.e." around 2002 I think) which more or less give the complete picture from the classical point of view, i.e. without trying to push below critical Sobolev etc. So if this is the case, what exactly are you looking for? If you prefer to work in smaller spaces, what you need is a 'regularity' result, i.e., if the data are in some smaller space, this additional regularity propagates and the solution stays in the same space for some time. There are some results of this type, in classes of analytic or Gevrey functions; but see below.



2) If $k \ge 3$, then the same remark as in (1) applies: you need some strong structural assumptions on the nonlinearity. Indeed, now the poor $u_{xx}$ is no longer the leading term and the character of the (linearized) equation is entirely determined by the Taylor coefficient of $D^k u$ in the expansion of $F$. So then a careful case-by-case discussion is necessary. Unless...



3) unless, and we come maybe closer to your question, you decide to work in MUCH smaller spaces than $C^\infty$. Gevrey classes are roughly speaking classes of smooth functions such that the derivatives of order $j$ grow at most like $j!^s$. For $s=1$ you get analytic functions. For $1 < s < \infty$ you get larger classes $G^s$, with quite nice properties (to mention just one, you have compactly supported functions in these classes). For $0 < s < 1$ the classes $G^s$ are rather small, strictly contained in the space of analytic functions. The only reason why $G^s$ for $s<1$ are useful is that you can prove a sort of very general Cauchy-Kowalewski theorem, local existence in Gevrey classes, for any evolution equation $u_t = F(D^k_x u)$, provided $s<1/k$. No structure is required on $F$, only smoothness. Contrary to the appearance this is a weak result, e.g. you can solve locally both $u_t = \Delta u$ and $u_t = -\Delta u$ in $G^{1/2}$ (and globally in $G^s$ for $s<1/2$), so you are essentially trivializing the equation and forgetting all of its structure. But if this is what you need I can give you pointers.
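
For concreteness, one standard way to make "derivatives of order $j$ grow at most like $j!^s$" precise (this formulation is mine, in one space dimension as above):

$$f \in G^s(\Omega) \iff \exists\, C, A > 0:\quad \sup_{\Omega} |f^{(j)}| \;\le\; C\, A^{j}\, (j!)^{s} \quad \text{for all } j \ge 0.$$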



EDIT (since this does not fit in the comments):
I am a bit rusty on these topics, now I start to remember more details.



1) $s<1$. There is a problem with working in $G^s$ for $s<1$: this space is not stable under products of functions. So in this case too you need very special nonlinearities to work with. But the linear theory is straightforward and you can find an account maybe here. Also, Beals in some old paper developed semigroup theory in Gevrey classes. BTW, if you find a way to handle products this might be quite interesting.



2) $s>1$. The Mizohata school and other Japanese mathematicians have worked on NLS in Gevrey classes quite a lot; see e.g. this paper.



3) The Nash-Moser approach might be useful, provided you can prove that the linearization of your operator is solvable in every Sobolev class with a fixed loss of derivatives. If you want to try this route, the best introduction to the theory I know of is Hamilton's 1982 Bull. AMS paper. It's very long but extremely readable; give it a try.

ct.category theory - When does a certain natural construction on monoidal categories yield a Hopf algebra?

Let $\mathcal C = (\mathcal C_0,\mathcal C_1)$ be a (small) strict monoidal category. Pick a field $\mathbb K$, and let $\mathbb K[\mathcal C_1]$ be the vector space with basis the morphisms of $\mathcal C$. It is an associative unital algebra under the tensor product $\otimes$ (the identity morphism on the $\otimes$ unit is the algebra unit).



I will now define a coassociative comultiplication on $\mathbb K[\mathcal C_1]$, although without restriction on $\mathcal C$ the comultiplication will not converge. I'll give two descriptions:



  1. $\mathbb K[\mathcal C_1]$ is an associative algebra not only under $\otimes$, but also under composition: if $a,b \in \mathcal C_1$, then $ab = a\circ b$ if that composition is defined in $\mathcal C_1$, and $0$ otherwise. But $\mathbb K[\mathcal C_1]$ has a distinguished basis (namely $\mathcal C_1$), and hence a distinguished map $\mathbb K[\mathcal C_1] \to (\mathbb K[\mathcal C_1])^*$; using this map, turn the composition multiplication into a comultiplication.

  2. For each morphism $c \in \mathcal C_1$, there is some set $\{(a,b) \in \mathcal C_1 \times \mathcal C_1 \text{ s.t. } a\circ b = c\}$ of ways to factorize $c$. Define $\Delta(c) = \sum_{a\circ b = c} a\otimes b$, where here the $\otimes$ is the exterior one (not the other multiplication on $\mathbb K[\mathcal C_1]$).

From either description, it's clear that the comultiplication isn't really defined: in general that sum diverges. So let's suppose that $\mathcal C$ has the property that any morphism has only finitely many factorizations. Clearly this requirement is evil.
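
To make the second description concrete, here is a toy computation of $\Delta$ for a tiny category with finitely many factorizations (the category and the code are my own illustration, not from the question):

```python
from collections import defaultdict
from itertools import product

# Toy example: the free category on the graph x --a--> y --b--> z.
# Its morphisms are the identities, a, b, and the composite ba.
morphisms = {
    "id_x": ("x", "x"), "id_y": ("y", "y"), "id_z": ("z", "z"),
    "a": ("x", "y"), "b": ("y", "z"), "ba": ("x", "z"),
}

def compose(f, g):
    """Return the composite (f after g) if defined here, else None (i.e. 0 in K[C_1])."""
    if morphisms[g][1] != morphisms[f][0]:   # target of g must match source of f
        return None
    if f.startswith("id_"):
        return g
    if g.startswith("id_"):
        return f
    if (f, g) == ("b", "a"):
        return "ba"
    return None  # no other composites exist in this toy category

def Delta(c):
    """Comultiplication: sum over all factorizations c = f.g of the terms f (x) g."""
    terms = defaultdict(int)
    for f, g in product(morphisms, repeat=2):
        if compose(f, g) == c:
            terms[(f, g)] += 1
    return dict(terms)

print(Delta("ba"))    # {('id_z', 'ba'): 1, ('b', 'a'): 1, ('ba', 'id_x'): 1}
print(Delta("id_y"))  # {('id_y', 'id_y'): 1}
```

Here $\Delta(ba)$ picks up the three factorizations $ba = \mathrm{id}_z \circ ba = b \circ a = ba \circ \mathrm{id}_x$, and each identity morphism is grouplike, matching the counit described below.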




Question 0: Is there a less evil way to talk about this comultiplication? Actually, even the requirement that $\mathcal C$ be strict is evil, but without it $\mathbb K[\mathcal C_1]$ is not associative. Is there a less evil fix for this?




The comultiplication is co-unital. The counit on $\mathbb K[\mathcal C_1]$ sends identity morphisms to $1 \in \mathbb K$ and non-identity morphisms to $0$. (A less-evilization might want to send, say, isomorphisms to $1$, or something.)



So, I have a vector space $\mathbb K[\mathcal C_1]$ with a multiplication (coming from the monoidal structure on $\mathcal C$) and a comultiplication (coming from the composition structure on $\mathcal C$).




Question 1: Are there simple general conditions that assure that this structure is a bialgebra?




In the categories I am most interested in, $\mathbb K[\mathcal C_1]$ is a bialgebra. My intuition is that when $\mathcal C$ is sufficiently free, everything works. Here's an example. The category of braided graphs has objects the non-negative integers, thought of as distinguished subsets of $\mathbb R$. A morphism between $m$ and $n$ is: a graph $G$ with $m$ univalent vertices marked "in" and $n$ univalent vertices marked "out", along with a smooth embedding $G \to \mathbb R^2 \times [0,1]$ so that $G \cap \mathbb R^2 \times \{0\}$ consists of precisely the $m$ "in" vertices, spaced out on the integers $\{1,\dots,m\} \times \{0\} \times \{0\}$, and similarly for the "out" vertices, and such that every edge of $G$ is never horizontal. Two morphisms are identified if they are isotopic rel boundary among embedded graphs with non-horizontal edges. Composition and the monoidal structure are obvious. Equivalently, the category of braided graphs is the free braided monoidal category generated by a single basic object $V$ and a basic morphism in each $\hom(V^{\otimes m}, V^{\otimes n})$.



In any case, once you have a bialgebra, you are led inexorably to the following question:




Question 2: When is $\mathbb K[\mathcal C_1]$ Hopf?




For very free categories, it is Hopf: a free category is graded, by setting the generators to have grading $1$; the degree-zero part is $\mathbb K[\text{identity maps}]$, and these themselves are graded by the number of objects; the degree-zero part of this is $\mathbb K$, generated by the identity map on the monoidal unit; then bootstrap back up. Probably this works for less-free things too, using filtrations rather than gradings (i.e. filtered quotients of free monoidal categories).