Saturday, 31 October 2009

rotation - How would Earth's climate differ if its axis were tilted around 90 degrees like Uranus?

This is a good question, and it has been something of a topic of study for a few researchers. Below are some consequences of Earth having a tilt like that of Uranus (for this explanation, I am disregarding the wobbles in the Earth's current axis and just focusing on the 90-degree axis):



According to the article Not All Habitable Zones Are Created Equal (Moomaw, 1999), each day and night would last 6 months (including an epically long dusk and dawn), with daytime temperatures reaching up to 80 °C, while the night side might not even reach freezing, owing to the time it takes to cool down after the 6-month day. Parts of the equatorial regions, however, would be permanently encased in ice.



Pretty much like this diagram of Uranus' rotation (source: University of Hawaii):



[Diagram: Uranus' rotation and axial tilt]



A major consequence of this according to the author is:




In such an environment, life could almost certainly still appear, but it would have much more difficulty evolving into forms that could survive such grotesque temperature extremes -- which would greatly slow down its evolution into more complex forms, maybe by billions of years.




A major consequence of this, particularly of the 6-month night, is that photosynthesis would have been stunted, if it could have started at all. This has serious consequences for the oxygen levels of the atmosphere.



Interestingly, the author claims that if the Earth had an axial tilt of 90 degrees, but orbited at 210 million km from the sun, then:




its climate would be positively balmy -- the equator would be 11 deg C (52 deg F), and the poles would never rise above 46 deg C (115 deg F) or fall below 3 deg C (37 deg F). Earth would have no ice anywhere on its surface, except on some of its highest mountains.




The article High Planetary Tilt Lowers Odds for Life? (Hadhazy, 2012) has a great way of putting it:




"Your northern pole will be boiled during part of the year while the equator gets little sunlight," said Heller. Meanwhile, "the southern pole freezes in total darkness." Essentially, the conventional notion of a scorching hell dominates one side of the planet, while an ultra-cold hell like that of Dante's Ninth Circle prevails on the other.




Then, to make matters worse, the hells reverse half a year later. "The hemispheres are cyclically sterilized, either by too strong irradiation or by freezing," Heller said.



They also suggest that if life were to evolve, extremophiles (specifically, thermophiles) would be dominant - seasonally.

a-infinity algebras - Is there a refinement of the Hochschild-Kostant-Rosenberg theorem for cohomology?

The HKR theorem for cohomology in characteristic zero says that if $R$ is a regular, commutative $k$-algebra ($\mathrm{char}(k) = 0$) then a certain map $\bigwedge^* \mathrm{Der}(R) \to CH^*(R,R)$ (where $\bigwedge^* \mathrm{Der}(R)$ has zero differential) is a quasi-isomorphism of dg vector spaces, that is, it induces an isomorphism of graded vector spaces on cohomology.



Can the HKR morphism be extended to an $A_\infty$ morphism? Is there a refinement in this spirit to make up for the fact that it is not, on the nose, a morphism of dg-algebras?

Friday, 30 October 2009

ag.algebraic geometry - Eichler-Shimura isomorphism and mixed Hodge theory

Let $Y(N)$, $N>2$, be the quotient of the upper half-plane by $\Gamma(N)$ (which is formed by the elements of $SL(2,\mathbf{Z})$ congruent to $I$ mod $N$). Let $V_k$ be the $k$-th symmetric power of the Hodge local system on $Y(N)$ tensored by $\mathbf{Q}$ (the Hodge local system corresponds to the standard action of $\Gamma(N)$ on $\mathbf{Z}^2$).



$V_k$ is part of a variation of polarized Hodge structure of weight $k$. So the cohomology $H^1(Y(N),V_k)$ is equipped with a mixed Hodge structure (the structure will be mixed despite the fact that $V_k$ is pure, because $Y(N)$ is not complete). The complexification $H^1(Y(N),V_k\otimes\mathbf{C})$ splits



$$H^1(Y(N),V_k\otimes\mathbf{C})=H^{k+1,0}\oplus H^{0,k+1}\oplus H^{k+1,k+1}.$$



There is a natural way to get cohomology classes $\in H^1(Y(N),V_k\otimes\mathbf{C})$ from modular forms for $\Gamma(N)$. Namely, to a modular form $f$ of weight $k+2$ one associates the section



$$z\mapsto f(z)(ze_1+e_2)^k\,dz$$



of $$\mathrm{Sym}^k(\mathbf{C}^2)\otimes \Omega^1_{\mathbf{H}}.$$



Here $\mathbf{H}$ is the upper half plane and $(e_1,e_2)$ is a basis of $\mathbf{C}^2$ coming from a basis of $\mathbf{Z}^2$. This pushes down to a holomorphic section of $V_k\otimes \mathbf{C}$.



Deligne had conjectured (Formes modulaires et représentations l-adiques, Bourbaki talk, 1968/69) that the above correspondence gives a bijection between the cusp forms of weight $k+2$ and $H^{k+1,0}$. (This was before he had even constructed the Hodge theory, so strictly speaking this can't be called a conjecture, but anyway.) Subsequently this was proved by Zucker (Hodge theory with degenerating coefficients, Annals of Math. 109, no. 3, 1979). See also Bayer, Neukirch, On automorphic forms and Hodge theory (Math. Ann. 257, no. 2, 1981).



The above results concern cusp forms, and it is natural to ask what all modular forms correspond to in terms of Hodge theory. It turns out that all weight-$(k+2)$ modular forms give precisely the $(k+1)$-st term of the Hodge filtration on $H^1(Y(N),V_k\otimes\mathbf{C})$, i.e. $H^{k+1,0}\oplus H^{k+1,k+1}$.



The proof of this is not too difficult but a bit tedious. So I would like to ask: is there a reference for this?



upd: The original posting contained non-standard notation; this has been fixed.

big picture - Why is it useful to study vector bundles?

I think many of the other answers boil down to the same underlying idea: Sections of vector bundles are "generalized functions" or "twisted functions" on your manifold/variety/whatever.



For example, Charles mentions subvarieties, which are roughly "zero loci of functions". However, there are no non-constant holomorphic global functions on, say, a projective variety. So how can we talk about subvarieties of a projective variety? Well, we do have non-constant holomorphic functions locally, so we can still define subvarieties locally as being zero loci of functions. But the functions $f_i$ which define a subvariety on one open set $U$ and the functions $g_i$ which define a subvariety on another open set $V$ won't necessarily agree on $U \cap V$. We need some kind of "twist" to make the $f_i$'s and the $g_i$'s match up on $U \cap V$. Upon doing so, the global object that we obtain is not a global function (because, again, there are no non-constant global functions) but a "twisted" global function, in other words a section of a vector bundle whose transition functions are described by these "twists".



Similarly, sections of vector bundles and line bundles are a nice way to talk about functions with poles. Meromorphic functions then become simply sections of a line bundle, which is nice because it allows us to avoid having to talk about $\infty$. This is essentially why line bundles are related to maps to projective space $X \to \mathbb{P}^n$; intuitively, $n+1$ sections of a line bundle over $X$ is the same as $n+1$ meromorphic functions on $X$, which is the same as a map "$X \to (\mathbb{C} \cup \infty)^{n+1}$" which becomes a map "$X \to \mathbb{P}^n$" after we "projectivize".
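To make the "twist" concrete, here is the standard first example (my addition, not part of the original answer): cover $\mathbb{P}^1$ by the two charts $U_0 = \{[1:z]\}$ and $U_1 = \{[w:1]\}$, and glue trivial bundles over them using the transition function $z^n$ on $U_0 \cap U_1$. A global section is then a pair of holomorphic functions $f_0(z)$, $f_1(w)$ satisfying $f_0(z) = z^n f_1(1/z)$ on the overlap; writing out power series shows the solutions are exactly the polynomials of degree $\leq n$ in $z$. So the "twisted functions" form the $(n+1)$-dimensional space $H^0(\mathbb{P}^1, \mathcal{O}(n))$, even though the only honest global functions on $\mathbb{P}^1$ are constants.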



One way to think of vector bundles and their sections as being invariants of your manifold/variety/whatever is to think of them as describing what kinds of "generalized" or "twisted" functions are possible on your manifold/variety.



The view of sections of vector bundles as "twisted functions" is also useful for physics, as in e.g. David's answer. For instance, suppose we have a manifold, which we think of as a space in which particles move around. We have local coordinates on the manifold, which are used to describe the positions of the particles. Since we are on a manifold, the transitions between the local coordinates are nontrivial. We may also be interested in studying, say, the velocities or momenta (or accelerations, etc.) of the particles moving around in the space. On local charts we can describe these momenta easily in terms of the local coordinates, but for a global description we need transitions between these local descriptions of momenta, just as we need transitions between the local coordinates in order to describe the manifold globally. The transitions between local descriptions of momenta are not the same as those between the local coordinates (though the former depend on the latter); phrased differently, we obtain a non-trivial (ok, not always non-trivial, but usually non-trivial) vector bundle over our manifold.

Thursday, 29 October 2009

distances - Why has Moving Cluster Method been successful only for Hyades?

Do you know this paper? Mamajek (2005). "A Moving Cluster Distance to the Exoplanet 2M1207b in the TW Hydrae Association".



I'll risk an opinion:
There are about a thousand clusters with kinematic measurements, but apart from no more than half a dozen, all proper-motion components are under 20 mas (milliarcseconds) per year, with errors roughly in the range 0.3-6 mas. Such errors are too large relative to the proper-motion values, and as a result the vectors will not converge.

sequences and series - Uniquely generate all permutations of three digits that sum to a particular value?

Visualize this problem as counting the unique ways to hand out ninja stars to ninjas. The diagram below also shows how each larger solution is built up from its neighboring, simpler solutions.



[Diagram: handing out ninja stars, showing how each solution builds on smaller ones]



Here is how to implement it in PHP (it might help you understand it, too):



function multichoose($k,$n)
{
    if ($k < 0 || $n < 0) return false;
    if ($k == 0) return array(array_fill(0, $n, 0)); // no stars left: every ninja gets zero
    if ($n == 0) return array();                     // stars left but no ninjas: no solutions
    if ($n == 1) return array(array($k));            // one ninja gets all remaining stars
    $out = array(); // was missing: initialize the result list
    foreach (multichoose($k, $n-1) as $in) { // solutions with one fewer ninja -- above as (blue)
        array_unshift($in, 0);               // prepend a 0 for the new first ninja -- above as (grey)
        $out[] = $in;
    }
    foreach (multichoose($k-1, $n) as $in) { // solutions with one fewer star -- above as (red and orange)
        $in[0]++;                            // give the extra star to the first ninja -- above as (orange)
        $out[] = $in;
    }
    return $out;
}

print_r(multichoose(3,4)); //How many ways to give three ninja stars to four ninjas?


The code is not optimal; it's more understandable that way.



Our output:



(0,0,0,3)
(0,0,1,2)
(0,0,2,1)
(0,0,3,0)
(0,1,0,2)
(0,1,1,1)
(0,1,2,0)
(0,2,0,1)
(0,2,1,0)
(0,3,0,0)
(1,0,0,2)
(1,0,1,1)
(1,0,2,0)
(1,1,0,1)
(1,1,1,0)
(1,2,0,0)
(2,0,0,1)
(2,0,1,0)
(2,1,0,0)
(3,0,0,0)


A fun use to note: UPC barcodes rely on this exact problem. The sum of the white space and black space for each digit is always 7, but it is distributed in different ways.



Digit   L pattern   R pattern   LR pattern (number of times a bit is repeated)
0       0001101     1110010     2100
1       0011001     1100110     1110
2       0010011     1101100     1011
3       0111101     1000010     0300
4       0100011     1011100     0021
5       0110001     1001110     0120
6       0101111     1010000     0003
7       0111011     1000100     0201
8       0110111     1001000     0102
9       0001011     1110100     2001


Note that only 10 of the 20 combinations are used, which means the code can be read upside-down just fine. All 20 can be used, however, and they are in EAN-13, with a bit more complexity.



http://en.wikipedia.org/wiki/EAN-13



http://en.wikipedia.org/wiki/Universal_Product_Code



http://www.freeimagehosting.net/uploads/58531735d3.png

How large must an object be to be seen through a telescope?

A few clarifications.



Telescopes in general operate at very large distances - "at infinity" is the term used in optics parlance.



A bright enough object can be seen from any distance, no matter what its size is. All that matters is that:



  1. It's bright enough to produce an impression on whatever sensor you're using (or your eye)


  2. The background is dark enough to produce sufficient contrast


But then it would be just a bright but tiny spot.



I believe what you're really asking is: what combination of factors shows the object as bigger than a simple dot? In that case, two factors matter: the aperture of the telescope and the angular size of the object.



Assuming a flawless telescope, its aperture determines its resolving power: the smallest angle at which two point-like sources can still be separated by the telescope. The formula is:



resolving power = 1 / (10 * aperture)


where resolving power is in arcseconds, and aperture is in meters. Examples:



aperture    resolving power
10 cm       1 arcsec
20 cm       0.5 arcsec
1 m         0.1 arcsec


As long as the object's angular size is bigger than the resolving power, it will appear bigger than a dot.



That's all. In astronomy we don't speak of an object's absolute size; we only speak of its angular size. But that should be enough: as soon as you have the angular size and, say, the distance, you can deduce the absolute size. It's a simple matter of trigonometry:



absolute size = distance * tangent(angular size)


E.g., this is the size of an object of 1 arcsec angular size, situated at 384,000 km (the orbit of the Moon):



http://www.wolframalpha.com/input/?i=384000+km+*+tangent%281+arcsec%29



It's 1.8 km (in case the link above is unavailable).



In other words, that's the minimum absolute distance resolved by a telescope 10 cm in aperture, for objects on the Moon. Any two dots closer together than 1.8 km, placed on the Moon, are seen as one dot in a 10 cm telescope.
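If you want to play with these numbers, here is a minimal Python sketch (my addition, using the rule-of-thumb formula above; the function names are mine):

import math

def resolving_power_arcsec(aperture_m):
    # the author's rule of thumb: 1/(10 * aperture), aperture in metres
    return 1.0 / (10.0 * aperture_m)

def smallest_resolved_km(aperture_m, distance_km):
    angle_deg = resolving_power_arcsec(aperture_m) / 3600.0
    return distance_km * math.tan(math.radians(angle_deg))

print(smallest_resolved_km(0.10, 384000))  # ~1.86 km on the Moon, matching the ~1.8 km above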

atmosphere - Equation for solar radiation at a given latitude on a given exoplanet?

I'm trying to find equations that would help me determine the amount of solar radiation hitting a certain latitude on a certain planet given the following inputs:



  • the degrees of latitude of the location in question

  • this hemisphere's current season (winter or summer)

  • size of the planet

  • luminosity of the star(s) the planet orbits

Without taking into consideration wind, air pressure, or any atmospheric effects.



Ideally I would like to determine the average solar radiation of a given location in both the winter and summer.



My end goal is to determine the average surface temperature of a given latitude on a planet using the base solar radiation and the effects of wind, air pressure, and surface ocean currents.



I know it is possible to do this for the entire planet generally, but I would like some way of doing it for a particular latitude and season.
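For what it's worth, here is a standard starting point (my sketch, not from any answer; all names and conventions are mine): the daily-mean top-of-atmosphere insolation formula, which combines the inverse-square law with the latitude/declination geometry. Note that the planet's size does not enter the flux per unit area; only latitude, tilt, season, luminosity, and orbital distance do.

import numpy as np

def daily_mean_insolation(lat_deg, obliquity_deg, orbit_frac, L_star, a_m):
    # L_star: stellar luminosity in W (3.828e26 for the Sun)
    # a_m: orbital radius in metres (circular orbit assumed)
    # orbit_frac: fraction of the orbit in [0,1); 0.25 = northern summer solstice
    S0 = L_star / (4 * np.pi * a_m**2)                # inverse-square law
    decl = np.arcsin(np.sin(np.radians(obliquity_deg)) *
                     np.sin(2 * np.pi * orbit_frac))  # solar declination
    phi = np.radians(lat_deg)
    # hour angle of sunrise/sunset, clamped for polar day/night
    h0 = np.arccos(np.clip(-np.tan(phi) * np.tan(decl), -1.0, 1.0))
    return (S0 / np.pi) * (h0 * np.sin(phi) * np.sin(decl) +
                           np.cos(phi) * np.cos(decl) * np.sin(h0))

# sanity check for Earth: 45 N at the June solstice is roughly 500 W/m^2
print(daily_mean_insolation(45, 23.44, 0.25, 3.828e26, 1.496e11))

Averaging this quantity over the appropriate range of orbit_frac gives the seasonal averages asked about; atmospheric and ocean effects would then be layered on top.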

atmosphere - Can we find out whether early Venus was Earth-like or not?

My ad-hoc opinion: this wouldn't be the first step of Venus exploration. Geologic in-situ investigation of the resurfacing hypotheses would already be a very challenging mission.
It may be that one could find some metamorphic remnants which have survived the last resurfacing, and one could determine the age of rocks. There may exist some old layers below the surface which haven't been completely molten.



Some kinds of crystals are more resistant to heat than others. They may also contain some records of more ancient epochs. Comparing their isotopic ratios might tell us something about the ancient atmosphere, or at least about ancient geology. But conditions on the surface of Venus are very hard on probes.



More feasible would be a detailed isotopic analysis of Venus' atmosphere, as proposed e.g. in the VDAP mission concept, or by follow-up balloon missions. A submarine-like balloon diving into the hellish conditions of the lower atmosphere, or even to the surface for a short period, might be an approach.



A TLS (tunable laser spectrometer) has been suggested to analyse the atmosphere.



Maybe an additional mass spectrometer would be useful. If short dives close to the surface become feasible, cameras would make sense. A LIBS (laser-induced breakdown spectroscope) would make sense only in direct contact with the surface, due to atmospheric absorption. An APXS probably wouldn't work due to the heat, unless samples could be taken for analysis higher up in the (cooler) atmosphere. IR spectroscopy could possibly work if cooling can be achieved somehow (e.g. by insulation and adiabatic decompression). If surface samples can be taken during short surface contact, more ways to analyse rocks become possible. But the more instruments, the more weight; the larger the balloon, the larger the probe, and the more expensive. I'm far from elaborating technical or scientific proposals here for space agencies.



Hence VDAP with a small TLS and a QMS (quadrupole mass spectrometer) - as already proposed by hugely experienced scientists - would be a good start.

computational complexity - Quantum computation implications of (P vs NP)


Possible Duplicate:
What impact would P!=NP have on the characterization of BQP?




Before I begin: I had a similar post closed for mentioning the recently released (and still to be verified) proof that P!=NP. This question is about the implications of P!=NP, not about the internals or specifics of that proof.



Does P!=NP imply that NP-Complete problems cannot be solved in Quantum Polynomial time?



According to Wikipedia, the quantum complexity classes BQP and QMA are the bounded-error quantum analogues of P and NP. If P!=NP were a known fact, would that imply that BQP!=QMA?

co.combinatorics - infinite permutations

The first thing to notice is that infinite permutations may have infinite support, that is, they may move infinitely many elements. Therefore, we cannot expect to express them as finite compositions of permutations having only finite support.



But if we allow (well-defined) infinite compositions, then the answer is that every permutation can be expressed as a composition of disjoint cycles and also expressed as a composition of transpositions. So the answer to question 1 is yes, and the answer to question 2 is no.



To see this, suppose that $f$ is a permutation of $\omega$. First, we may divide $f$ into its disjoint orbits, where the orbit of $n$ is defined as all the numbers of the form $f^k(n)$ for any integer $k$. The actions of $f$ on these orbits commute with each other, because the orbits are disjoint. And the action of $f$ on each such orbit is a cycle (possibly infinite). So $f$ can be represented as a product of disjoint cycles. For the transposition representation, it suffices to represent each such orbit as a suitable product of transpositions. The finite orbits are just finite cycles, which can be expressed as products of transpositions in the usual way. An infinite orbit looks exactly like a copy of the integers with the shift map. This can be represented in cycle notation as (... -2 -1 0 1 2 ...). This permutation is equal to the following product of transpositions:



  • (... -2 -1 0 1 2 ...) = [(0 -1)(0 -2)(0 -3)...][...(0 3)(0 2)(0 1)]

I claim that every natural number is moved by at most two of these transpositions, and that the resulting product is well-defined. On the right-hand side of the equality, I have two infinite products of transpositions. Using the usual order for products of permutations, the right-most factor is the first to be applied. Thus, we see that 0 gets sent to 1, and is subsequently fixed by all later transpositions. So the product sends 0 to 1. Similarly, 1 gets sent to 0 and then to 2, and is then unchanged. In the same way, it is easy to see that every positive integer n is sent to 0 and then to n+1, as desired. Now, the right-hand factor fixes all negative integers, which then pass to the left factor, and it is easy to see that again -n is sent to 0 and then to -n+1, as desired. So altogether, this product operates correctly. An isomorphic version of this idea can be used to represent the action of any infinite orbit, and so every permutation is a suitable well-defined product of transpositions, as desired.
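Here is a quick computational sanity check of this product (my addition, not part of the original answer): truncating to the transpositions $(0\ \pm k)$ with $k \leq N$ and applying them right-to-left shifts every $n$ with $-N \leq n < N$ to $n+1$ (the truncation closes up into a finite cycle by sending $N$ back to $-N$).

def apply_transposition(t, x):
    a, b = t
    return b if x == a else a if x == b else x

N = 5
# rightmost factor first: (0 1), (0 2), ..., (0 N), then (0 -N), ..., (0 -1)
transpositions = [(0, k) for k in range(1, N + 1)] + [(0, -k) for k in range(N, 0, -1)]

for n in range(-N, N):
    x = n
    for t in transpositions:
        x = apply_transposition(t, x)
    assert x == n + 1
print("every n in [-N, N) is shifted to n+1")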



Thus, the answers to the questions in (1) are yes, and the answer to question (2) is no.

Wednesday, 28 October 2009

cmb - Does the cosmic microwave background change over time?

The CMB patterns do indeed change over time, although statistically they remain the same, and the change will not be noticeable on human timescales.



The CMB we observe now comes from a thin shell with us at the center, with a radius equal to the distance that light has traveled from the time the Universe was 379,000 years old until now. As time passes, we will receive the CMB from a shell with an increasingly larger radius. As that light has traveled farther through space, it will, as you say, be more redshifted, or "cooler". But it will also have been emitted from more distant regions of the early Universe that, although statistically equivalent, simply are other regions and hence look different.



The patterns that change the fastest are the smallest patterns we can observe. The angular resolution of the Planck satellite is 5-10 arcmin. Since the CMB comes from a redshift of ~1100, the angular diameter distance (which defines the physical distance spanned by a given angle) is ~13 Mpc, so 5 arcmin corresponds to a physical scale of roughly 19 kpc in physical coordinates, or 21 Mpc in comoving coordinates. That is, a structure spanning 5 arcmin today was ~19 kpc across at the time of emission, but has now expanded to a size of ~21 Mpc (with 1 Mpc = 1000 kpc, and 1 kpc ≈ 3,260 lightyears).



Assuming an isotropic Universe, if the smallest observable parcels of gas were 19 kpc across perpendicular to our line of sight, they are also on average 19 kpc across along our line of sight.



So the question of how fast the CMB changes comes down to how much time it took light to travel 19 kpc when the Universe was 379,000 years old. This is not simply 19 kpc divided by the speed of light, since the Universe expands as the light travels, but it's pretty close. If my calculations are not wrong, we will have to wait until the CMB has been redshifted to roughly 1200, which will take around 60,000 years (assuming that Planck will not get replaced by better instruments within that time which, um, is dubious).



So you're right, you could make a 3D image of the CMB, but since the patterns are much larger than a lightyear, you don't have to take a new picture every year.



Actually, I'm a bit surprised by these small numbers of 19 kpc and 60 kyr. If somebody can spot a mistake (e.g. a missing (1+z) factor), please correct me. My calculations are here (in Python):



import numpy as np
import cosmolopy.distance as cd
import cosmolopy.constants as cc

cosmo = {'omega_M_0': 0.27, 'omega_lambda_0': 0.73, 'omega_k_0': 0.0, 'h': 0.7}
zCMBnow = 1100.
dA = cd.angular_diameter_distance(zCMBnow, **cosmo)  # in Mpc
kpc_5am = dA*1e3 * np.pi/(180.*3600) * 5*60  # scale at the source, in kpc per 5 arcmin
print('5 arcmin corresponds to ' + str(round(kpc_5am, 1)) + ' kpc at the source')

zCMBfuture = 1200.
d = cd.light_travel_distance(zCMBfuture, zCMBnow, **cosmo) * 1e3  # in kpc
print('When the CMB has redshifted to ' + str(zCMBfuture) +
      ', the radius of the CMB shell will be ' + str(round(d, 1)) + ' kpc larger')

tdiff_Myr = cd.lookback_time(zCMBfuture, zCMBnow, **cosmo) / cc.Myr_s
print('This will take ' + str(round(tdiff_Myr*1e6)) + ' years.')

ag.algebraic geometry - Differential equation of line tangent to caustics

This problem (or rather, a statement that I cannot understand) has arisen in a paper I have been reading, "Geometry of Integrable Billiards and Pencils of Quadrics" by Dragovic and Radnovic. I'd be most grateful for any explanation of it (it may be a simple fact, but I'm not sure).



Let $\Omega \subset \mathbb{R}^d$ be a bounded domain such that its boundary $\partial \Omega$ lies in the union of several quadrics from the (confocal) family $\mathcal{Q}_{\lambda}: Q_{\lambda}(x)=1$, where $$Q_{\lambda}(x)=\sum_{i=1}^{d}\frac{x_{i}^2}{a_{i}-\lambda}.$$ Then in elliptic coordinates, $\Omega$ is given by: $$\beta_{1}'\leq\lambda_{1}\leq\beta_{1}'',\ \ldots,\ \beta_{d}'\leq\lambda_{d}\leq\beta_{d}''$$ where $a_{s+1}\leq \beta_{s}'\leq\beta_{s}''\leq a_{s}$ for $1\leq s \leq d-1$ and $-\infty < \beta_{d}'<\beta_{d}''\leq a_{d}.$



Define $P(x):= (a_1 -x)\ldots(a_d -x)(\alpha_{1} -x)\ldots(\alpha_{d} - x).$



Now, we consider a billiard system inside $\Omega$ with caustics $\mathcal{Q}_{\alpha_1}, \ldots, \mathcal{Q}_{\alpha_{d-1}}.$



Why does the system of equations:
$$\sum_{s=1}^{d}\frac{d\lambda_s}{\sqrt{P(\lambda_s)}}=0,\quad \sum_{s=1}^{d}\frac{\lambda_{s}\,d\lambda_{s}}{\sqrt{P(\lambda_s)}}=0,\quad \ldots,\quad
\sum_{s=1}^{d}\frac{\lambda_{s}^{d-2}\,d\lambda_{s}}{\sqrt{P(\lambda_s)}}=0,$$



(which are apparently due to Jacobi and Darboux - I'd appreciate a modern reference, because the only version I can find is scanned page-by-page in German), where $\sqrt{P(\lambda_s)}$ is taken with the same sign in all expressions, represent a system of differential equations of a line tangent to all the caustics $\mathcal{Q}_{\alpha_1}, \ldots, \mathcal{Q}_{\alpha_{d-1}}$? Moreover, why does: $$\sum_{s=1}^{d}\frac{\lambda_{s}^{d-1}\,d\lambda_{s}}{\sqrt{P(\lambda_s)}}=2\,dl$$ where $dl$ is an element of "the" line length?



I found similar looking equations on page 4 of another paper (by Buser and Silhol), but cannot understand them either.

Monday, 26 October 2009

software - SPH simulations - Astronomy

I would recommend MPA Garching's Gadget code for cosmological simulations of structure formation. It's primarily gravitational, but I do believe you can include gas effects as well:




GADGET computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces) and represents fluids by means of smoothed particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, both with or without periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self-gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range which is, in principle, unlimited.




I don't know if this is what is commonly used for tasks such as yours, but it might be a place to start looking.

gravity - Brain vs galaxy

This looks like it's going to be a "No, it's not possible, because . . ." answer, but hopefully it answers your question. Your proposal is that the Milky Way is a brain, and that the stars are like neurons, correct? I'm not a biologist or neuroscientist, but I do know one characteristic of neurons that helps them do their job: They rely on chemical and electrical impulses.



Neurons work by transmitting electrical signals along their axons and across junctions called synapses. These signals can be either (nearly) continuous or come in regular or irregular "pulses" - i.e. the strength of the signal goes up, goes down, or disappears. The point here, though, is that signals between neurons are generally not continuous. The gravitational force between two stars, on the other hand, stays generally the same, barring a catastrophic event. In other words, the gravitational "signals" from stars don't function like those from neurons.



There are other differences between stars and neurons, too. One is that neurons cannot regrow in significant quantities, whereas stars are forming all the time. Generally in an organism, the neurons in a brain are of similar ages; this is clearly not the case for stars. Another difference is that neural networks are relatively fixed, whereas stars are constantly moving around.



I'll conclude by repeating what I mentioned in a comment, which is that the idea that the universe is a living organism has been around for some time. Another idea is that the universe is actually a giant computer (or just Earth, if you're a sci-fi fan). The problem is that while most people would disagree with these large-scale hypotheses, we may never know for sure if they are right or wrong.



I hope this helps.

Sunday, 25 October 2009

the sun - Why do sunspots appear dark?

Typical sunspots have a dark region (the umbra) surrounded by a lighter region, the penumbra. While sunspots have a temperature of about 6300 °F (3482.2 °C), the surface of the Sun which surrounds them has a temperature of 10,000 °F (5537.8 °C).



From this NASA resource:




Sunspots are actually regions of the solar surface where the magnetic field of the Sun becomes concentrated over 1000-fold. Scientists do not yet know how this happens. Magnetic fields produce pressure, and this pressure can cause gas inside the sunspot to be in balance with the gas outside the sunspot...but at a lower temperature. Sunspots are actually several thousand degrees cooler than the 5,770 K (5496.8 °C) surface of the Sun, and contain gases at temperature of 3000 to 4000 K (2726.9 - 3726.8 °C). They are dark only by contrast with the much hotter solar surface. If you were to put a sunspot in the night sky, it would glow brighter than the Full Moon with a crimson-orange color!




Sunspots are areas of intense magnetic activity, as is apparent in this image:



[Image: sunspot]



You can see the material getting stretched into strands.



As for the reason it is cooler than the rest of the surface:




Although the details of sunspot generation are still a matter of research, it appears that sunspots are the visible counterparts of magnetic flux tubes in the Sun's convective zone that get "wound up" by differential rotation. If the stress on the tubes reaches a certain limit, they curl up like a rubber band and puncture the Sun's surface. Convection is inhibited at the puncture points; the energy flux from the Sun's interior decreases; and with it surface temperature.




All in all, sunspots appear dark because they are darker than the surrounding surface. They're darker because they are cooler, and they're cooler because of the intense magnetic fields in them.

co.combinatorics - Submitting to arXiv when unaffiliated


Official report number(s) from the author(s) institution(s) must be provided.




I have absolutely no idea what that phrase means. I have a few papers on the arXiv and have never knowingly provided that information. With the proviso that I'm not privy to the internal workings of the arXiv, I would recommend just trying to submit and seeing what it tells you to do - I deem it highly unlikely that you'll get a message saying "You have no affiliation, never darken our doors again!" but more likely "We notice that you have not provided an academic affiliation; therefore, in order for your article to be properly submitted, you need to do X, Y, and Z.".



There are, of course, other ways of getting timestamps and of making your work public. If you want to know whether or not it is of sufficient quality to be worth publishing, you should track down someone in the field who you could ask for an opinion.



(But if you do that, don't just send them the manuscript with a brief note saying "Please give me your opinion on the attached."! Write a specific letter to that person, preferably with a fair amount of flattery - don't overdo it - and ask a specific question. If you want to know "Has this been done before", you could ask that here, I think, but if you want to know "Is this decent quality work", then you should not ask it here.)

rt.representation theory - Can an admissible SO(n) representation contain an SO(n-1) representation with infinite multiplicity?

Funnily enough, I wrote a paper about this question a few years ago.



The takeaway is that there is a geometric method for understanding when such a restriction is admissible. For many pairs of groups it never is, but I think Matt found the only examples of the form SO(n) and SO(n-1). In general, each finite-dimensional representation of SO(n) comes from quantizing a coadjoint orbit, and you want only finitely many of the coadjoint orbits that lie in the image of the moment map on the cotangent bundle of the sphere, $T^*S^{n-1}$. In particular, $T^*S^1 \to \mathfrak{so}_2^*$ is surjective since the $S^1$-action is regular, and $T^*S^2 \to \mathfrak{so}_3^*$ is surjective since the adjoint representation is covered by the orbit of any line.

motivic cohomology - Beilinson conjectures

Let me talk about Beilinson's conjectures by beginning with $\zeta$-functions of number fields and $K$-theory. Space is limited, but let me see if I can tell a coherent story.



The Dedekind zeta function and the Dirichlet regulator



Suppose $F$ a number field, with
$$[F:\mathbf{Q}]=n=r_1+2r_2,$$
where $r_1$ is the number of real embeddings, and $r_2$ is the number of complex embeddings. Write $\mathcal{O}$ for the ring of integers of $F$.



Here's the Dirichlet series for the Dedekind zeta function:
$$\zeta_F(s)=\sum|(\mathcal{O}/I)|^{-s},$$
where the sum is taken over nonzero ideals $I$ of $\mathcal{O}$.



Here are a few key analytical facts about this series:



  1. This series converges absolutely for $\mathrm{Re}(s)>1$.


  2. The function $\zeta_F(s)$ can be analytically continued to a meromorphic function on $\mathbf{C}$ with a simple pole at $s=1$.


  3. There is the Euler product expansion:
    $$\zeta_F(s)=\prod_{0\neq p\in\mathrm{Spec}(\mathcal{O}_F)}\frac{1}{1-|(\mathcal{O}_F/p)|^{-s}}.$$


  4. The Dedekind zeta function satisfies a functional equation relating $\zeta_F(1-s)$ and $\zeta_F(s).$


  5. If $m$ is a positive integer, $\zeta_F(s)$ has a (possible) zero at $s=1-m$ of order
    $$d_m=\begin{cases}r_1+r_2-1&\textrm{if }m=1;\\
    r_1+r_2&\textrm{if }m>1\textrm{ is odd};\\
    r_2&\textrm{if }m>1\textrm{ is even},
    \end{cases}$$
    and its special value at $s=1-m$ is
    $$\zeta_F^{\star}(1-m)=\lim_{s\to 1-m}(s+m-1)^{-d_m}\zeta_F(s),$$
    the first nonzero coefficient of the Taylor expansion around $1-m$.


Our interest is in these special values of $\zeta_F(s)$ at $s=1-m$. In the 19th century, Dirichlet discovered an arithmetic interpretation of the special value $\zeta_F^{\star}(0)$. Recall that the Dirichlet regulator map is the logarithmic embedding
$$\rho_F^D:\mathcal{O}_F^{\times}/\mu_F\to\mathbf{R}^{r_1+r_2-1},$$
where $\mu_F$ is the group of roots of unity of $F$. The covolume of the image lattice is the Dirichlet regulator $R^D_F$. With this, we have the



Dirichlet Analytic Class Number Formula. The order of vanishing of $\zeta_F(s)$ at $s=0$ is $\operatorname{rank}_{\mathbf{Z}}\mathcal{O}_F^\times$, and the special value of $\zeta_F(s)$ at $s=0$ is given by the formula
$$\zeta_F^{\star}(0)=-\frac{|\mathrm{Pic}(\mathcal{O}_F)|}{|\mu_F|}R^D_F.$$
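As a quick worked instance (my addition; the facts used are standard): for $F=\mathbf{Q}(\sqrt{2})$ we have $r_1=2$, $r_2=0$, so the unit rank is $r_1+r_2-1=1$ and $\zeta_F$ has a simple zero at $s=0$. The class group is trivial, $\mu_F=\{\pm 1\}$, and the fundamental unit is $1+\sqrt{2}$, so $R^D_F=\log(1+\sqrt{2})\approx 0.881$, and the formula gives $\zeta_F^{\star}(0)=-\frac{1}{2}\log(1+\sqrt{2})\approx -0.441$.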



Now, using what we know about the lower $K$-theory, we have:
$$K_0(\mathcal{O})\cong\mathbf{Z}\oplus\mathrm{Pic}(\mathcal{O})$$
and
$$K_1(\mathcal{O}_F)\cong\mathcal{O}_F^{\times}.$$



So the Dirichlet Analytic Class Number Formula reads:
$$\zeta_F^{\star}(0)=-\frac{|{}^{\tau}K_0(\mathcal{O})|}{|{}^{\tau}K_1(\mathcal{O})|}R^D_F,$$
where ${}^{\tau}A$ denotes the torsion subgroup of the abelian group $A$.



The Borel regulator and the Lichtenbaum conjectures



Let us keep the notations from the previous section.



Theorem [Borel]. If $m>0$ is even, then $K_m(\mathcal{O})$ is finite.



In the early 1970s, A. Borel constructed the Borel regulator maps, using the structure of the homology of $SL_n(\mathcal{O})$. These are homomorphisms
$$\rho_{F,m}^B:K_{2m-1}(\mathcal{O})\to\mathbf{R}^{d_m},$$
one for every integer $m>0$, generalizing the Dirichlet regulator (which is the Borel regulator when $m=1$). Borel showed that for any integer $m>0$ the kernel of $\rho_{F,m}^B$ is finite, and that the induced map
$$\rho_{F,m}^B\otimes\mathbf{R}:K_{2m-1}(\mathcal{O})\otimes\mathbf{R}\to\mathbf{R}^{d_m}$$
is an isomorphism. That is, the rank of $K_{2m-1}(\mathcal{O})$ is equal to the order of vanishing $d_m$ of the Dedekind zeta function $\zeta_F(s)$ at $s=1-m$. Hence the image of $\rho_{F,m}^B$ is a lattice in $\mathbf{R}^{d_m}$; its covolume is called the Borel regulator $R_{F,m}^B$.



Borel showed that the special value of $\zeta_F(s)$ at $s=1-m$ is a rational multiple of the Borel regulator $R_{F,m}^B$, viz.:
$$\zeta_F^{\star}(1-m)=Q_{F,m}R_{F,m}^B.$$
Lichtenbaum was led to give the following conjecture around 1971, which gives a conjectural description of $Q_{F,m}$.



Conjecture [Lichtenbaum]. For any integer $m>0$, one has
$$|\zeta_F^{\star}(1-m)|\ "="\ \frac{|{}^{\tau}K_{2m-2}(\mathcal{O})|}{|{}^{\tau}K_{2m-1}(\mathcal{O})|}R_{F,m}^B.$$
(Here the notation $"="$ indicates that one has equality up to a power of $2$.)



Beilinson's conjectures



Suppose now that $X$ is a smooth proper variety of dimension $n$ over $F$; for simplicity, let's assume that $X$ has good reduction at all primes. The question we might ask is: what could be an analogue of the Lichtenbaum conjectures that might provide us with an interpretation of the special values of $L$-functions of $X$? It turns out that since number fields have motivic cohomological dimension $1$, special values of their $\zeta$-functions can be formulated using only $K$-theory, but life is not so easy if we have higher-dimensional varieties; for this, we must use the weight filtration on $K$-theory in detail, and this leads us to motivic cohomology.



Write $\overline{X}:=X\otimes_F\overline{F}$. Now for every nonzero prime $p\in\mathrm{Spec}(\mathcal{O})$, we may choose a prime $q\in\mathrm{Spec}(\overline{\mathcal{O}})$ lying over $p$, and we can contemplate the decomposition subgroup $D_{q}\subset G_F$ and the inertia subgroup $I_{q}\subset D_{q}$.



Now if $\ell$ is a prime over which $p$ does not lie and $0\leq i\leq 2n$, then the inverse $\phi_{q}^{-1}$ of the arithmetic Frobenius $\phi_{q}\in D_{q}/I_{q}$ acts on the $I_{q}$-invariant subspace $H_{\ell}^i(\overline{X})^{I_{q}}$ of the $\ell$-adic cohomology $H_{\ell}^i(\overline{X})$. We can contemplate the characteristic polynomial of this action:
$$P_{p}(i,x):=\det(1-x\phi_{q}^{-1}).$$
One sees that $P_{p}(i,x)$ does not depend on the particular choice of $q$, and it is a consequence of Deligne's proof of the Weil conjectures that the polynomial $P_{p}(i,x)$ has integer coefficients that are independent of $\ell$. (If there are primes of bad reduction, this is expected by a conjecture of Serre.)



This permits us to define the local $L$-factor at the corresponding finite place $\nu(p)$:
$$L_{\nu(p)}(X,i,s):=\frac{1}{P_{p}(i,p^{-s})}$$
We can also define local $L$-factors at infinite places as well. For the sake of brevity, let me skip over this for now. (I can fill in the details later if you like.)



With these local $L$-factors, we define the $L$-function of $X$ via the Euler product expansion
$$L(X,i,s):=\prod_{0\neq p\in\mathrm{Spec}(\mathcal{O})}L_{\nu(p)}(X,i,s);$$
this product converges absolutely for $\mathrm{Re}(s)\gg 0$. We also define the $L$-function at the infinite prime
$$L_{\infty}(X,i,s):=\prod_{\nu|\infty}L_{\nu}(X,i,s)$$
and the full $L$-function
$$\Lambda(X,i,s)=L_{\infty}(X,i,s)L(X,i,s).$$



Here are the expected analytical properties of the $L$-function of $X$.



  1. The Euler product converges absolutely for $\mathrm{Re}(s)>\frac{i}{2}+1$.


  2. $L(X,i,s)$ admits a meromorphic continuation to the complex plane, and the only possible pole occurs at $s=\frac{i}{2}+1$ for $i$ even.


  3. $L\left(X,i,\frac{i}{2}+1\right)\neq 0$.


  4. There is a functional equation relating $\Lambda(X,i,s)$ and $\Lambda(X,i,i+1-s).$


Beilinson constructs the Beilinson regulator $\rho$ from the part $H^{i+1}_{\mu}(\mathcal{X},\mathbf{Q}(r))$ of rational motivic cohomology of $X$ coming from a smooth and proper model $\mathcal{X}$ of $X$ (conjectured to be independent of the choice of $\mathcal{X}$) to Deligne-Beilinson cohomology $D^{i+1}(X,\mathbf{R}(r))$. This has already been discussed here. It's nice to know that we now have a precise relationship between the Beilinson regulator and the Borel regulator. (They agree up to exactly the fudge-factor power of $2$ that appears in the statement of the Lichtenbaum conjecture above.)



Let's now assume $r<\frac{i}{2}$.



Conjecture [Beilinson]. The Beilinson regulator $\rho$ induces an isomorphism
$$H^{i+1}_{\mu}(\mathcal{X},\mathbf{Q}(r))\otimes\mathbf{R}\cong D^{i+1}(X,\mathbf{R}(r)),$$
and if $c_X(r)\in\mathbf{R}^{\times}/\mathbf{Q}^{\times}$ is the isomorphism above calculated in rational bases, then
$$L^{\star}(X,i,r)\equiv c_X(r)\mod\mathbf{Q}^{\times}.$$

Saturday, 24 October 2009

ag.algebraic geometry - Algebraic cycles of dimension 2 on the square of a generic abelian surface

Here is an easy $5$-dimensional space of cycles: inside $A \times A$, consider the subvarieties $\{(a,b) : a=mb\}$, for $m=0$, $1$, $2$, $3$, $4$. I will show that these are linearly independent over $\mathbb{Q}$.



By Künneth and Poincaré duality,
$$H^4(A \times A, \mathbb{Q}) \cong \bigoplus_{i=0}^4 H^{i}(A, \mathbb{Q}) \otimes H^{4-i}(A, \mathbb{Q}) \cong \bigoplus_{i=0}^4 \mathrm{End}(H^{i}(A, \mathbb{Q})).$$



The graph of multiplication by $m$, in this presentation, has class
$$(\mathrm{Id},\ m\,\mathrm{Id},\ m^2\,\mathrm{Id},\ m^3\,\mathrm{Id},\ m^4\,\mathrm{Id}).$$



Since the Vandermonde matrix
$$\begin{pmatrix} 0^0 & 0^1 & 0^2 & 0^3 & 0^4 \\ 1^0 & 1^1 & 1^2 & 1^3 & 1^4 \\ 2^0 & 2^1 & 2^2 & 2^3 & 2^4 \\ 3^0 & 3^1 & 3^2 & 3^3 & 3^4 \\ 4^0 & 4^1 & 4^2 & 4^3 & 4^4 \end{pmatrix}$$
has nonzero determinant, the $5$ classes I listed are linearly independent over $\mathbb{Q}$.
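(A one-line numerical confirmation of that determinant, my addition, in Python:)

import numpy as np

V = np.vander(np.arange(5), increasing=True)  # rows are (m^0, m^1, ..., m^4) for m = 0..4
print(np.linalg.det(V))  # 288.0: nonzero, so the five classes are independent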

cosmology - What is the ultimate fate of a cluster of galaxies?

As you point out, in an accelerating Universe, large scale structures will become more and more isolated. So at a certain point you will have gravitationally bound superclusters separated by very large voids and less and less filamentary structures.



Once isolated, we can then study the dynamics of these independent superclusters. On very large time scales, their galaxies will collide and merge. Collisions tend to produce elliptical galaxies, so I think you will end up with a single big elliptical galaxy.

[Image: elliptical galaxy]



Then we can ask about the future of the stars in these galaxies. First, we see that the star formation rate already peaked some billion years ago. As the number of galaxy collisions (which usually trigger star formation) decreases, the star formation rate will slowly continue to fall. Moreover, as heavy elements (all elements apart from hydrogen and helium) are formed in stars, future generations of stars will contain more and more heavy elements. From the nuclear point of view, the most stable element is iron, so on very, very large time scales light elements will be converted into iron, whereas heavy elements will decay into iron.

[Image: evolution of the star formation rate]



This is a little speculative, but I think that on large time scales more and more interstellar gas and stars will fall to the center of the gravitational potential of this super elliptical galaxy. As the density increases at the center, you will mechanically form a heavier and heavier supermassive black hole. Another interesting point is that we do not currently know whether protons are stable. On time scales larger than $10^{30}$ years (see more details here) protons may naturally decay into lighter subatomic particles.



So maybe as $t\rightarrow\infty$ we will end up with supermassive black holes and light particles. But as you mentioned, black holes will themselves slowly lose mass by Hawking radiation. At the same time, the expansion rate may have increased significantly, leading ultimately to isolated particles in an expanding Universe.



Note: this is a hypothetical scenario, and there are a lot of unknowns.

the sun - Are ascending node of sun, the point of intersection of prime meridian and equator and center of earth all collinear?

Not exactly.



During either of the equinoxes there is a moment when the line between the center of the Earth and the Sun aligns with the equator (as opposed to just crossing it twice daily). This doesn't coincide with the prime meridian in any way, though; it may happen at any meridian whatsoever that happens to coincide with the line.



Of course, if it happens that the Sun is at zenith above the prime meridian at that moment, your conditions will be satisfied. But depending on the precision you allow, that may be a very short time window. It's all about how precise you want it to be.



Near an equinox, the subsolar point drifts by about 0.25 degree of latitude per day, which is therefore the maximum angle by which the line can miss the equator while crossing the prime meridian on the same day the equinox happens. 0.25 degree of latitude is about 27 kilometers, so this is the maximum error. On some years it will be much less: the Earth's tilt cycle is tied to the length of the year, the zenith line's crossing of the prime meridian is tied to the time of day, and the time of day is not really correlated with the length of the year.

Friday, 23 October 2009

Does the Moon's magnetic field affect Earth's magnetic field?


So would the Moon's magnetic field affect the Earth's magnetic field, just as its gravitational pull affects Earth's gravitational pull for oceans?




Yes, but only slightly. Firstly, magnetic fields superimpose, so the field at any point is the sum of the field due to the Earth and the field due to the Moon.



However, the Moon is rather far away (and has a weak magnetic pole strength), so the magnetic field due to the Moon at Earth's surface is nearly negligible (a dipole magnetic field also falls off even faster than an inverse-square law, as the inverse cube of distance).



In addition, the magnetic field of the Moon may bolster or erode the Earth's field, as magnets moving relative to each other tend to either lose magnetization or become stronger. But this process has a negligible effect in the case of the Moon and Earth.

How do you figure out the magnitude of stars?

You must remember that there are absolute magnitude (measured from a fixed distance of 10 pc) and apparent magnitude (measured from Earth).



Historically, magnitudes ranged from first to sixth and were assigned by guess by Ptolemy. His first magnitude was the equivalent of saying "first size", so it was the brightest, and sixth magnitude was the assignment he made for the dimmest stars visible to the naked eye.



Nowadays, we use roughly the same scale but with real numbers, ranging from -26.74 (the Sun) upward. We defined the star Vega to have apparent magnitude exactly 0.0 (and colour 0.0 in all filters, for that matter), and then we use the formula



$m_1-m_{\mathrm{ref}}=-2.5\log\frac{I}{I_{\mathrm{ref}}}$



to define all other apparent magnitudes from that one by measuring intensities.
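For example (my addition, a quick Python sketch of the formula with Vega as the reference, so $m_{\mathrm{ref}}=0$; the function names are mine):

import math

# Vega is the zero point: m_ref = 0 and I_ref = I_Vega.
def magnitude(ratio):            # ratio = I / I_Vega
    return -2.5 * math.log10(ratio)

def brightness_ratio(m):         # inverse: I / I_Vega for a given magnitude
    return 10 ** (-0.4 * m)

print(magnitude(3.8))            # about -1.45, Sirius-like
print(brightness_ratio(-26.74))  # the Sun: roughly 5e10 times Vega's brightness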



So, to answer you: yes, we measure apparent magnitudes rather accurately. For stars other than the Sun, they range from -1.46 for Sirius up to +8 on a perfect night for a trained naked eye (typically +6 for a normal eye on a normal night far from city lights).



Reference: http://en.wikipedia.org/wiki/Magnitude_(astronomy)

Thursday, 22 October 2009

Some arithmetic terminology: "universal domain", "specialization", "Chow point"

Igusa uses Weil's language, in a modified/enhanced version that deals
with reduction mod primes. (My memory is that there is a paper of Shimura from the 50s that develops this language.) It's not so easy to read it carefully, unfortunately.



Chow's method for constructing Jacobians (explained in his paper in the American Journal from the 50s, again if memory serves) is, I think, as follows: take $\mathrm{Sym}^d C$ for $d > 2g - 2$. The fibres of the map $\mathrm{Sym}^d C \to \mathrm{Pic}^d(C)$ are then projective spaces of uniform dimension (by Riemann-Roch), and so it is not so hard to quotient out by all of them to construct $\mathrm{Pic}^d(C)$ (for $d > 2g - 2$), and hence to construct the Jacobian. (I hope that I'm remembering correctly here; if not, hopefully someone will correct me.)



I think that this should be contrasted with the more traditional method of considering $\mathrm{Sym}^g C$, which maps birationally to $\mathrm{Pic}^g(C)$, i.e. with fibres that are generically points, but which has various exceptional fibres of varying dimensions, making it harder to form the quotient. This in part inspired Weil's "group chunk" method, in which he uses the group action to form the quotient (in an indirect sort of way), and consequently loses some control of the situation (e.g. he can't show that the Jacobian so constructed is projective). I should also say that it's been a long time since I looked at this old 1950s literature, and I'm not completely confident that I understand its thrust (i.e. I'm not sure what was considered easy and what was considered hard, and what was considered new and innovative in various papers as contrasted to what was considered routine), so take this as a very rough guide only.

Wednesday, 21 October 2009

How many pairs of edges can disconnect a biconnected graph?

The statement is true. In fact, much more general statements are true. If $G$ is a graph with $n$ vertices and $c$ is the cardinality of a minimum edge cut of $G$, then the number of edge cuts of cardinality $c$ is at most $\binom{n}{2}$, and for every half-integer $k \geq 1$, the number of edge cuts containing at most $kc$ edges is bounded above by $2^{2k-1} \binom{n}{2k}.$



The upper bound of $\binom{n}{2}$ on the number of minimum cuts is attributed to Bixby and Dinitz-Karzanov-Lomonosov. The more general bound on the number of approximate minimum cuts is due to Karger (Global min-cuts in RNC, and other ramifications of a simple min-cut algorithm), who also re-proved the $\binom{n}{2}$ bound on minimum cuts. His appealingly simple proof rests on the analysis of a simple "randomized contraction" algorithm. Here we present the proof that the number of minimum cuts is at most $\binom{n}{2}$.



Suppose that $G$ is a multigraph with $n$ vertices and that $c>0$ is the number of edges in a minimum cut of $G$. Repeatedly perform the following process to obtain a sequence of multigraphs $G = G_0, G_1, \ldots, G_{n-2}$: choose a uniformly random edge of $G_t$ and contract it to obtain $G_{t+1}$. In other words, if $(u,v)$ is the edge chosen in step $t$, then we replace $u$ and $v$ with a single vertex $z$ in $G_{t+1}$, and we replace every edge of $G_t$ having exactly one endpoint in $\{u,v\}$ with a corresponding edge of $G_{t+1}$ with endpoint $z$. (Edges from $u$ to $v$ in $G_t$ are deleted during this step.) Note that $G_{n-2}$ has exactly two vertices $a,b$; these vertices correspond to a partition of $V(G)$ into two nonempty sets $A,B$ (those vertices that were merged together to form $a \in V(G_{n-2})$, and those that were merged together to form $b$), and the edges of $G_{n-2}$ are in one-to-one correspondence with the edges of the cut separating $A$ from $B$ in $G$. Denote this random cut by $R$.



Now consider a specific cut $C$ of cardinality $c$. We claim that the probability of the event $R=C$ is at least $1 \left/ \binom{n}{2} \right.$, from which it follows immediately that the number of distinct cuts of cardinality $c$ is at most $\binom{n}{2}$. To prove the lower bound on the probability that $R=C$, observe that for all $t = 0,\ldots,n-2$, every vertex of $G_t$ has degree at least $c$. (Otherwise, that vertex of $G_t$ corresponds to a set of vertices in $G$ having fewer than $c$ edges leaving it, contradicting our assumption about the edge connectivity of $G$.) Consequently, the number of edges of $G_t$ is at least $(n-t)c/2$, and the probability that an edge of $C$ is contracted in step $t$, given that no edge of $C$ was previously contracted, is at most $c/|E(G_t)| \leq 2/(n-t)$. Combining these bounds, we find that the probability that no edge of $C$ is ever contracted is bounded below by $$\prod_{t=0}^{n-3} \left(1 - \frac{2}{n-t}\right) = \frac{n-2}{n} \cdot \frac{n-3}{n-1} \cdots \frac{1}{3} = \frac{2}{n(n-1)}.$$
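The contraction process itself is easy to implement; here is a minimal Python sketch (my addition, using a union-find structure to track which vertices have been merged):

import random

def karger_cut(edges, n):
    # One run of the contraction process described above. `edges` is a
    # list of (u, v) pairs on vertices 0..n-1; parallel edges are allowed.
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    components, alive = n, list(edges)
    while components > 2 and alive:
        u, v = random.choice(alive)    # uniformly random surviving edge
        parent[find(u)] = find(v)      # contract it
        components -= 1
        alive = [(a, b) for (a, b) in alive if find(a) != find(b)]
    return alive                       # the edges crossing the final cut

# a 6-cycle has minimum cut size 2; repeating ~n^2 runs finds it w.h.p.
cycle = [(i, (i + 1) % 6) for i in range(6)]
print(min((karger_cut(cycle, 6) for _ in range(200)), key=len))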

rt.representation theory - reference on examples of (g, K)-modules

For simplicity, just let G=GL(2) or SL(2), let g be the corresponding Lie algebra, and let K=SO(2). We have various realizations of smooth or unitary representations of G in certain function spaces. Can one give an explicit description of the corresponding (g, K)-module? (Or a reference is OK.)



For example, let SL(2) act on the unit circle; then the smooth representation of SL(2) on smooth functions on the circle has the set of trigonometric polynomials as the underlying (g, K)-module.

Tuesday, 20 October 2009

What is the significance of the discovery of a pulsar flipping between radio and x-ray emissions?

The significance of this discovery, according to the European Space Agency's web article "Volatile pulsar reveals millisecond missing link", is that this is the first time a pulsar has been observed in the crucial transitional phase between X-ray and radio emission, and that this explains the origin of the mysterious millisecond pulsars.



The significance is as stated by the ESA (bolding mine):




This bouncing behaviour is caused by a rhythmical interplay between the pulsar's magnetic field and the pressure of accreted matter.




What this tells us about pulsar dynamics is:




When accretion is more intense, the high density of accreted matter inhibits the acceleration of particles that cause radio emission, so the pulsar is not visible in radio waves but only through the X-rays radiated by the accreted matter. When the accretion rate decreases, the magnetosphere expands and pushes matter away from the pulsar: as a consequence, the X-ray emission becomes weaker and weaker, while the radio emission intensifies.




The article also contains contact details of the scientists involved.

Monday, 19 October 2009

pr.probability - convex hull of k random points

The number of $k$-dimensional faces $f_k$ of a random polytope is a well-studied subject, and you are asking about the $k=0$ case. The distributions that have probably received the most attention are uniform distributions on convex bodies and the standard multivariate normal (Gaussian) distribution. As Gjergji mentioned, Bárány has some of the strongest results in this area. In particular, Bárány and Vu proved central limit theorems for $f_k$.



This Bulletin survey article is a good place to start.



One amusing point worth noting: if you look at uniform distributions on convex bodies, the answer changes drastically depending on the underlying body. The convex hull of random points in a disk, for example, will have many more vertices than the convex hull of random points in a triangle.
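This is easy to see experimentally; a quick Monte Carlo (my addition, using scipy) counts hull vertices for both bodies - the disk gives on the order of $n^{1/3}$ vertices, the triangle on the order of $\log n$:

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
n = 100000

# uniform points in the unit disk (square-root trick for the radius)
theta = rng.uniform(0, 2 * np.pi, n)
r = np.sqrt(rng.uniform(0, 1, n))
disk = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# uniform points in the triangle (0,0), (1,0), (0,1) (reflection trick)
u, v = rng.uniform(size=(2, n))
flip = u + v > 1
u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
tri = np.column_stack([u, v])

print(len(ConvexHull(disk).vertices))  # grows like n^(1/3): dozens of vertices
print(len(ConvexHull(tri).vertices))   # grows like log n: a handful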

Sunday, 18 October 2009

ag.algebraic geometry - Lie algebra actions on schemes

Let us assume first of all that we are in the affine case (we can worry about globalization later) and that we have $X$ affine over $S$, where $S$ is some unspecified scheme (but in practice probably the spectrum of a field), with $X=\mathrm{Spec}(A)$ (thus $A$ is an $\mathcal{O}_S$-algebra). We are emphatically not assuming $X$ to be smooth over $S$.



Assume that we are given a map $\mathfrak{g}\to\mathrm{Der}_S(A)$ of Lie algebras and that we are viewing $\mathfrak{g}$ as a Lie subalgebra of $\mathrm{Der}_S(A)$.



In analogy with the differential-geometric case we can interpret this as a distribution on $X$ and so we can ask: what are the integral subschemes of this distribution? Specifically, is there, through every point, a unique integral subscheme? And even more importantly, what can go wrong in the singular points and can we "integrate" this action to an analogue of a Lie groupoid?



I'm seriously betting the answer to most of the above is a resounding "NO!", but I'm curious to know what can go wrong and what is known to go wrong. In short: what is known concerning this? Can one form something like "$X/\mathfrak{g}$"?



Finally, let me reiterate that I'm not assuming $X$ to be $S$-smooth.

oc.optimization control - Five Front Battle

I am reposting notzeb's solution to the 3 front case here, and making some comments on it. In particular, I will point out that the solution is not unique; while notzeb used fractal methods, I will give a piecewise smooth solution using the same ideas.



Idea of solution:



I claim that it is enough to find any probability distribution on $\{(p,q,r): p+q+r=1,\ p,q,r \geq 0 \}$ whose projection to each coordinate is the uniform measure on $[0,2/3]$.



Proof that such a measure works:



(For simplicity, I disregard ties.) Observe that it is impossible for either general to win on all fronts. Therefore, if I find a strategy that guarantees that I am expected to win at least 1.5 fronts against any opposing strategy, this means that I have probability at least 1/2 of winning 2 fronts against any opposing strategy. (This logic does not extend to the 5 front case.)



Suppose my enemy sends $p$ troops to the first front. I beat him there with probability $\max(1-(3/2)p,\,0)$. By linearity of expectation, if my enemy sends troops $(p,q,r)$, my expected number of victories is
$$\max(1-(3/2)p,\,0)+\max(1-(3/2)q,\,0)+\max(1-(3/2)r,\,0)$$
$$\geq 3-(3/2)(p+q+r) = 3/2.$$
If my opponent adopts a mixed strategy, linearity shows that I still have an expected number of victories of at least $3/2$. QED



notzeb's measure:



Take the triangle of possible solutions and inscribe a hexagon in it, with vertices at the permutations of $(0,1/3,2/3)$. All our solutions will be inside that hexagon.



Now, take that hexagon and place 6 smaller hexagons in it as shown below.



6 Hexagons in 1



Choose one of those 6 hexagons uniformly at random. Place 6 smaller hexagons inside that one, and choose one of these uniformly at random again. Keep going. The hexagons shrink in size each time; the limiting point is your army distribution.



Notice that the space of possible solutions is a Sierpinski-gasket-like figure, of Hausdorff dimension $\log 6/\log 3$. It is cute to observe that the white star of David in the center becomes a Koch snowflake of excluded points in the final solution.



My alternate measure

Inscribe a circle in the triangle. On that circle, place the measure $dA/\sqrt{1-r^2}$, as described in Harald Hanche-Olsen's answer to a different question.
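
To make this concrete, here is a short Python sketch (an illustration added here, not part of the original answer). Sampling a point uniformly on the unit sphere and dropping one coordinate produces exactly the density $dA/\sqrt{1-r^2}$ on the unit disk (Archimedes' hat-box theorem), so its projection onto any diameter is uniform; mapping that disk onto the inscribed circle of the triangle then makes each coordinate's marginal uniform on $[0,2/3]$:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_strategy(n):
        # Uniform points on the unit sphere; dropping z gives the disk
        # density dA / sqrt(1 - r^2), whose projection onto ANY diameter
        # is uniform on [-1, 1].
        v = rng.normal(size=(n, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        w = v[:, :2]
        # Orthonormal basis of the plane p+q+r=1, centered at (1/3,1/3,1/3).
        f1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
        f2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
        center = np.array([1.0, 1.0, 1.0]) / 3
        # The factor 1/sqrt(6) is the triangle's inradius; with it, each
        # coordinate's marginal is uniform on exactly [0, 2/3].
        return center + (np.outer(w[:, 0], f1) + np.outer(w[:, 1], f2)) / np.sqrt(6)

    armies = sample_strategy(200_000)
    assert armies.min() > -1e-9     # the samples never leave the simplex
    for i in range(3):              # quartiles of each marginal: ~1/6, 1/3, 1/2
        print(np.quantile(armies[:, i], [0.25, 0.5, 0.75]))

    # Expected number of fronts won against a uniformly random opponent: >= 1.5.
    foe = rng.dirichlet([1.0, 1.0, 1.0], size=200_000)
    print((armies > foe).sum(axis=1).mean())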

Saturday, 17 October 2009

co.combinatorics - Highbrow interpretations of Stirling number reciprocity

The number ${n \choose k}$ of $k$-element subsets of an $n$-element set and the number $\left( {n \choose k} \right)$ of $k$-element multisets of an $n$-element set satisfy the reciprocity formula



$\displaystyle {-n \choose k} = (-1)^k \left( {n \choose k} \right)$



when extended to negative integer indices, for example by applying the usual recurrence relations to all integers. There's an interesting way to think about the "negative cardinalities" involved here using Euler characteristic, which is due to Schanuel; see, for example, this paper of Jim Propp. Another (related?) way to think about this relationship is in terms of the symmetric and exterior algebras; see, for example, this blog post.



The number $S(n, k)$ of $k$-block partitions of a set with $n$ elements and the number $c(n, k)$ of permutations of a set with $n$ elements with $k$ cycles satisfy a well-known inverse matrix relationship, but they also satisfy the reciprocity formula



$c(n, k) = S(-k, -n)$



when extended to negative integer indices, again by applying the usual recurrence relations.
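
As a sanity check (an added sketch, not part of the original post), one can extend both recurrences to negative indices numerically, defining $S$ below zero by solving $S(n,k) = kS(n-1,k) + S(n-1,k-1)$ for the lower-left entry, and verify the formula for small $n,k$:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def c(n, k):
        # Unsigned Stirling numbers of the first kind (cycle counts).
        if n == 0 and k == 0:
            return 1
        if n <= 0 or k <= 0:
            return 0
        return (n - 1) * c(n - 1, k) + c(n - 1, k - 1)

    @lru_cache(maxsize=None)
    def S(n, k):
        # Stirling numbers of the second kind, extended to all integers by
        # running S(n,k) = k*S(n-1,k) + S(n-1,k-1) backwards when n <= 0.
        if n == 0 and k == 0:
            return 1
        if n > 0:
            return k * S(n - 1, k) + S(n - 1, k - 1) if k > 0 else 0
        if k >= 0:
            return 0
        # n <= 0, k < 0: rearrange the recurrence to step up and to the right.
        return S(n + 1, k + 1) - (k + 1) * S(n, k + 1)

    assert all(c(n, k) == S(-k, -n) for n in range(8) for k in range(8))
    print("c(n,k) = S(-k,-n) checked for 0 <= n,k < 8")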



Question: Are there any known highbrow interpretations of this reciprocity formula?

Friday, 16 October 2009

solar system - Is the argument of perihelion random?

The argument of perihelion of most planets/bodies changes very slowly over time due to higher-order perturbations from other planets' motions (mostly Jupiter and Saturn for the solar system). General relativistic effects also cause the perihelion to advance over time, though this effect is smaller than the others for most purposes.



So given enough time, the arguments of the planets' perihelia would likely end up fairly randomly distributed at any given moment. Also, the planets' orbits are dynamic on such timescales as higher-order perturbations show up; these can cause eccentricity changes, which in turn affect the positions of the perihelia.



So, my guess would be that it is okay to have the arguments of perihelia randomly distributed, since it is hard to see (at least for me) how any sort of resonance could act at this level to make the distribution non-random.

Good algorithm for finding the diameter of a (sparse) graph?

In general it does not seem that the diameter computation implies APSP.
Indeed, if the graph is undirected, the following can be applied.



Pilu Crescenzi, Roberto Grossi, Michel Habib, Leonardo Lanzi, Andrea Marino: On computing the diameter of real-world undirected graphs. TCS 2012.



and if the graph is directed the following can be applied.



Pierluigi Crescenzi, Roberto Grossi, Leonardo Lanzi, Andrea Marino: On Computing the Diameter of Real-World Directed (Weighted) Graphs. SEA 2012.



In the worst case the complexity of these methods is the same as computing APSP, but in real-world cases it has been experimentally shown that they run in O(m), where m is the number of edges. Both can be used even if the graph is weighted.
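
These methods rest on BFS-based bounds. As a flavor of the underlying idea (a sketch of the classic "double sweep" lower bound, not the authors' actual iFUB algorithm), the following Python runs two BFS passes on an unweighted undirected graph; the bound it returns is often the exact diameter on real-world graphs:

    from collections import deque

    def bfs_farthest(adj, src):
        """BFS from src; returns (farthest node, its distance)."""
        dist = {src: 0}
        queue = deque([src])
        far, far_d = src, 0
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    if dist[v] > far_d:
                        far, far_d = v, dist[v]
                    queue.append(v)
        return far, far_d

    def double_sweep(adj, start):
        """Lower bound on the diameter via two BFS passes."""
        u, _ = bfs_farthest(adj, start)   # farthest node from an arbitrary start
        _, lb = bfs_farthest(adj, u)      # eccentricity of u bounds the diameter
        return lb

    # Toy example: a path 0-1-2-3-4 with a pendant vertex 5 attached to 2.
    adj = {0: [1], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
    print(double_sweep(adj, 2))           # 4, the true diameter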



Regards



Andrea Marino

Thursday, 15 October 2009

gn.general topology - Conditions on a metric space so that boundedness implies total boundedness

If $X$ is locally compact, then it has this Heine-Borel property. For topological vector spaces, local compactness is equivalent to finite dimension, if I remember correctly.



But there are other examples, even vector spaces, that have the Heine-Borel property without being locally compact. One is the space $H(U)$ of holomorphic functions on an open set $U\subseteq\mathbb{C}$ with the topology of locally uniform convergence of all derivatives. This follows from Montel's theorem, and therefore such spaces are called Montel spaces (a certain additional condition is needed, but that's not the point). Another example is the Schwartz space $\mathcal{S}(\mathbb{R}^n)$ of rapidly decreasing functions. Because being Montel is stable under taking strong duals, the space of tempered distributions $\mathcal{S}'(\mathbb{R}^n)$ has the Heine-Borel property too (but is not metrizable).

How long until Earth's core solidifies?

Global warming has to do with the surface only, and at best involves changes of 20 degrees at the outside extreme, in comparison to the earth's core, which is as hot as the surface of the sun.



For complete accuracy, and to reflect what a commenter has pointed out, the inner core is solid already, but this is because of the extremely high pressure of the overlying layers of the outer core (which IS liquid), and the mantle. See the Wikipedia articles concerning the Inner Core, and the Outer Core. Note that it is the outer core which creates the earth's magnetic field.



The answer is that the earth's core will never be solid. And I do mean NEVER. Now, that being said, there is only one way it could ever happen and that is if the earth happened to get thrown out of its orbit to become a nomad planet. Then it might have time for its core to cool.



The reason I say this is because it will take longer for the earth's core to turn solid than it will take for the sun to run out of nuclear fuel and expand to engulf the earth. At that point, the earth will be vaporized as it spirals out of its orbit into the sun. The core would soon turn into incandescent gas. This will occur something like 4 to 5 billion years from now.



If by some chance the earth were to become a nomad planet, free to cool in its own good time, then it would take a long long time. See Energetics of the Earth by John Verhoogen, available online via Google Books.



The main factor slowing down the cooling is radioactive decay of long-lived isotopes, namely Uranium-238, Uranium-235, Thorium-232, and Potassium-40, with half-lives of roughly 4.47 billion years, 704 million years, 14.1 billion years, and 1.28 billion years, respectively. From the half-lives of these isotopes and a comparison with the age of Earth, you can see that internal heat production via radioactive decay will likely persist at near current levels for quite some time to come. Verhoogen gives 5000 K as the core temperature now, and a 250 K cooling since the formation of the Solar System, 4.5 billion years ago. If it really does cool at that rate (about 55 degrees per billion years), it would take something like 91 billion years to cool to 0 Kelvin.
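
A few lines of Python (an added check, using only the numbers quoted above) confirm the arithmetic:

    # Verhoogen's cooling rate and the time to shed ~5000 K:
    rate = 250.0 / 4.5               # ~55.6 K per billion years
    print(5000.0 / rate)             # ~90 Gyr, i.e. "something like 91 billion years"

    # Fraction of each heat-producing isotope remaining after 4.5 Gyr:
    half_lives = {"U-238": 4.47, "U-235": 0.704, "Th-232": 14.1, "K-40": 1.28}
    for iso, t in half_lives.items():
        print(iso, round(2.0 ** (-4.5 / t), 3))
    # U-238: 0.498, U-235: 0.012, Th-232: 0.802, K-40: 0.087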



But don't worry, it won't happen, as I said.



Edited to add detail

Wednesday, 14 October 2009

dg.differential geometry - Homotopy classes of differential maps VS those of continuous maps

There's no way this can be literally true:



$$[M,N]^{diff} = [M,N]^{cont}$$



Most of the continuous functions from $M$ to $N$ are not differentiable, so there's no way the above equality can be an equality of sets. I think what you want to ask is whether the inclusion:



$$[M,N]^{diff} to [M,N]^{cont}$$



is a bijection. This is answered affirmatively in Hirsch's "Differential Topology" textbook. It boils down to a smoothing argument: every continuous function can be uniformly approximated by a $C^\infty$-smooth function, and the smoothing is unique up to a small homotopy. The argument goes further, stating that the space of continuous functions has the same homotopy type as the space of $C^\infty$ functions. The smoothing argument can be done with bump functions and partitions of unity, and also via a standard convolution with a bump function ("smoothing operators").

Tuesday, 13 October 2009

the sun - What is the exact mass of the Sun?

The mass of the Sun is determined from Kepler's laws:



$$M_\odot=\frac{4\pi^2\times(1\,\mathrm{AU})^3}{G\times(1\,\mathrm{year})^2}$$



Each term in this expression contributes to both the value of the solar mass and its uncertainty. First, we know to very good precision that the (sidereal) year is 365.256363004 days. We have also defined the astronomical unit (AU) to be 149597870700 m. Strictly speaking, the semi-major axis of the Earth's orbit is slightly different, but by very little in the grand scheme of things (see below).



At this point, we can solve for the product $GM$, known as the gravitational parameter, sometimes denoted $\mu$. For the Sun,
$$\mu_\odot=132712440018\pm9\ \mathrm{km}^3\cdot\mathrm{s}^{-2}$$



To solve for $M_\odot$, we need the gravitational constant $G$, which, as it turns out, is by far the largest contributor to the uncertainty in the solar mass. The current CODATA value is $6.67384\pm0.00080\times10^{-11}\ \mathrm{N\cdot m^2\cdot kg^{-2}}$, which all combines to give
$$M_\odot=1.98855\pm0.00024\times10^{30}\ \mathrm{kg}$$
where my uncertainty is purely from the gravitational constant.
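
For the record, here is the arithmetic as a few lines of Python (an added illustration; the naive Kepler value differs from the measured $\mu_\odot$ in the fourth digit because the Earth's semi-major axis is not exactly 1 AU and the Earth's mass is neglected):

    import math

    AU = 149597870700.0                  # m, defined value
    YEAR = 365.256363004 * 86400.0       # sidereal year, in seconds

    # Kepler's third law gives the gravitational parameter GM directly.
    mu = 4 * math.pi**2 * AU**3 / YEAR**2
    print(mu / 1e9)                      # ~1.3272e11 km^3/s^2

    # Using the measured mu and the CODATA G quoted above:
    mu_sun = 1.32712440018e20            # m^3/s^2
    G, dG = 6.67384e-11, 0.00080e-11     # N m^2 kg^-2
    M = mu_sun / G
    print(M, M * dG / G)                 # ~1.98855e30 kg, +- ~2.4e26 kg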



The value $1.9891\times10^{30}\ \mathrm{kg}$ (and nearby values) probably comes from an older value of the gravitational constant, $6.672\times10^{-11}\ \mathrm{N\cdot m^2\cdot kg^{-2}}$, which is still supported by some measurements of $G$.

Monday, 12 October 2009

polynomials - Constraining a least squares minimization to fit a single root?

I finally have an example ready...



So we have an image of a sphere, and we take data points on it.



alt text



Since this is a sphere, it is reasonable to assume that we can fit a sphere to this part of the image. Here are the results:



alt text



Blue is low error. We can see that most points fit well; the others are the 'parasitic' roots. The interpolations look as follows:



alt textalt text



The correct root fits the image well; the other root is just getting in the way (at the moment). Here is the fitted function:



alt text



Now this is a nice result, since the 'edge' of the fitted sphere corresponds to the edge of the sphere in the image. This is something I imagine the suggested rational fitting can't do, and it is why I WANT several roots in the solution; but I only want to fit one of them to the data!!



So what's the problem? Well, this is a nice result, but a 3rd order polynomial might fit it better (the data is technically not a simple sphere), or I might have other data that would need even higher order fitting (tori, for example, are 4th order). So I fit a 3rd order poly, and here is the resulting function:
alt text



Not nice: the roots are all over each other, fighting against each other to fit the data.



Now I want to reiterate my question. One can always solve for $z$ and fit a single root to the data using non-linear regression, but my feeling is that we should be able to do the same using constrained linear fitting. And HOW is my question! Here is an example showing that constraining the fit does at least help. Here the $z$ and $z^2$ parameters are fixed to $1$, fitting the same 3rd order poly to the same data:
alt text



A much friendlier 3rd order poly, but it turns out this restriction is too strong, and I have lost the nice results.



So the question in terms of my original example:



How can I linearise the following minimization problem without reintroducing multiple roots?



$\min_c \sum_{i=0}^n \left(y_i - \sqrt{-c-x_i^2}\right)^2$



and then we can see if this can be done in general to all implicit polys....
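
(Added illustration.) For comparison, the non-linear single-root route mentioned above is easy enough; here is a minimal sketch with synthetic data on the circle $z^2+x^2+c=0$, using scipy, which leaves open the question of achieving the same with constrained linear fitting:

    import numpy as np
    from scipy.optimize import least_squares

    # Synthetic data from the upper root of x^2 + z^2 - 9 = 0 (so c = -9).
    rng = np.random.default_rng(1)
    x = rng.uniform(-2.0, 2.0, 100)
    y = np.sqrt(9.0 - x**2) + rng.normal(0.0, 0.05, x.size)

    def residuals(params, x, y):
        c = params[0]
        # Clip so the square root stays defined while the optimizer explores.
        return y - np.sqrt(np.clip(-c - x**2, 0.0, None))

    fit = least_squares(residuals, x0=[-5.0], args=(x, y))
    print(fit.x)   # ~ -9: only the chosen root is fitted, no parasitic branch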



Thanks.

lo.logic - What notions of universe does predicative type theory admit?

The analogy between universes in type theory and the Mahlo hierarchy in set theory has been analyzed in many different ways by Michael Rathjen. (This builds on his analysis of KPM, but ML type theories with universes came in later in the game.)



I don't have the Palmgren paper you refer to, but I think the following paper is closely related to your questions:



Rathjen, Griffor, Palmgren, Inaccessibility in constructive set theory and type theory, Ann. Pure Appl. Logic 94 (1998), 181-200.



This is not the only way of relating universes in type theory and inaccessibles in set theory, another one is presented by Anton Setzer in Extending Martin-Löf type theory by one Mahlo-universe and further investigated by Rathjen in Realizing Mahlo set theory in type theory.

formation - Why do some celestial bodies have atmospheres, and not others?

The amount and kind of gases a body can trap depend on the object's surface temperature and on its density and radius (which together determine its gravity).



An object with high gravity and low surface temperature will be able to hold more gases in its atmosphere. In the case of the Moon, due to its low gravity it could barely trap an atmosphere of xenon.



enter image description here



You can check this thanks to this amazing plot: http://astro.unl.edu/naap/atmosphere/animations/gasRetentionPlot.html
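
The rule of thumb behind plots like this is that a body retains a gas over geological timescales roughly when its escape velocity exceeds about six times the gas's thermal speed $\sqrt{3kT/m}$. A small Python sketch (an added illustration; the lunar dayside temperature is an assumed round figure) reproduces the Moon/xenon point:

    import math

    k_B = 1.380649e-23       # J/K
    amu = 1.66053906660e-27  # kg

    def v_rms(T, mass_amu):
        """Typical thermal speed sqrt(3kT/m) of a gas molecule, in m/s."""
        return math.sqrt(3.0 * k_B * T / (mass_amu * amu))

    v_esc_moon = 2380.0      # m/s
    T_day = 390.0            # K, rough lunar dayside temperature

    for gas, m in [("H2", 2.0), ("N2", 28.0), ("Xe", 131.0)]:
        v = v_rms(T_day, m)
        kept = v_esc_moon > 6.0 * v       # rule-of-thumb retention threshold
        print(gas, round(v), "m/s,", "retained" if kept else "escapes")
    # Only xenon, the heaviest, clears the threshold: the Moon can barely
    # hold a xenon atmosphere, as the plot shows.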



But this is not all you need to hold a stable atmosphere like the Earth does. The object will also need some protection against the solar wind. On Earth we have a magnetic field that prevents the solar wind from reaching our atmosphere, but Mars has a magnetic field too weak to defend itself against the solar wind, which strips its atmosphere away.

Sunday, 11 October 2009

Number Theory and Geometry/Several Complex Variables

No, I wouldn't say that "most" applications of algebraic and complex geometry to number theory are limited to the case of spaces of complex dimension 1. A more detailed answer follows:



1) Classical algebraic number theory and classical algebraic geometry both fit under the aegis of scheme theory. In this regard, there is an analogy between the ring of integers Z_K of a number field K and an affine algebraic curve C over a field k: both are one-dimensional, normal (implies regular, here) integral affine schemes of finite type. (The analogy is especially close if the field k is finite.) My colleague Dino Lorenzini has written a very nice textbook An Invitation to Arithmetic Geometry, which focuses on this analogy. I might argue that it could be pushed even further, e.g. that students and researchers should be as familiar with non-maximal orders in K as they are with singular curves...



2) Algebraic number theory is closely related to arithmetic geometry: the latter studies rational points on geometrically connected varieties. To do so it is essential to understand the "underlying" complex analytic space, and it is undeniable that by far the best understood case thus far is when this space has dimension one: then the theorems of Mordell-Weil and Faltings are available. Greg Kuperberg's remark about mixing two things which are in themselves nontrivial is apt here: it is certainly advantageous in the arithmetic study of curves that the complex picture is so well understood: by now the algebraic geometers / Riemann surface theorists understand a single complex Riemann surface (as opposed to moduli spaces of Riemann surfaces) rather well, and this firm knowledge is very useful in the arithmetic study.



3) In considering a scheme X over a number field K, one often "gains a dimension" in thinking about its geometry because key questions require one to understand models of X over the ring of integers Z_K of K. For instance, the study of algebraic number fields as fields is the study of zero-dimensional objects, but algebraic number theory proper (e.g. ramification, splitting of primes) begins when one looks at properties not primarily of the field K but of its Dedekind ring of integers Z_K.



A consequence of this is that in the modern study of curves over a number field, one makes critical use of the theory of algebraic surfaces, or rather of arithmetic surfaces, but the latter is certainly modeled on the former and would be hopeless if we didn't know, e.g. the classical theory of complex surfaces.



4) On the automorphic side of number theory we are very concerned with a large class of Hermitian symmetric domains and their quotients by discrete subgroups. For instance, Hilbert and Siegel modular forms come up naturally when studying quadratic forms over a general number field. More generally the theory of Shimura varieties is playing an increasingly important role in modern number theory.



5) Also classical Hodge theory (a certain additional structure on the complex cohomology groups of a projective complex variety) is important to number theorists via Galois representations, Mumford-Tate groups of abelian varieties, etc.



And so forth!




Addendum: An (only a few years) older and (ever so much) wiser colleague of mine who is not yet on MO has contacted me and asked me to mention the following paper of Bombieri:



MR0306201 (46 #5328). Bombieri, Enrico. Algebraic values of meromorphic maps. Invent. Math. 10 (1970), 267-287. 32A20 (10F35 14E99 32F05)



He says it is "an extreme counterexample to the premise of the question." Because my august institution does not give me electronic access to this volume of Inventiones, I'm afraid I haven't even looked at the paper myself, but I believe my colleague that it's relevant and well worth reading.



Edit: He is now an MO regular: Emerton.

Saturday, 10 October 2009

Best Algebraic Geometry text book? (other than Hartshorne)

I've found something extraordinary, and of equally extraordinary pedigree, online recently. I mentioned it briefly in response to R. Vakil's question about the best way to introduce schemes to students. But this question is really where it belongs, and I hope word of it spreads far and wide from here.



Last fall at MIT, Michael Artin taught an introductory course in algebraic geometry that required only a year of basic algebra at the level of his textbook. The official text was William Fulton's Algebraic Curves, but Artin also wrote an extensive set of lecture notes and exercise sets. I found them quite wonderful and very much in the spirit of his classic textbook (by the way, I simply can't wait for the second edition).



Not only has he posted these notes for download, he's asked anyone working through them to email him any errors found and suggestions for improvements. All the course materials can be found at the MIT webpage. I've also posted the link at MathOnline, of course.



I don't know if most of the hardcore algebraic geometers here would recommend these materials for a beginning course. But for any student not looking to specialize in AG, I can't think of a better source to begin with. That's just my opinion, but it certainly belongs as a possible response to this question. Then again, it may be too softball for the experts, particularly those of the Grothendieck school.



Here's keeping our fingers crossed that this is the beginning of the gestation of a full blown text on the subject by Artin.

light - Could dark matter be considered a medium?

Long ago, it was theorized that light had to travel through a medium, until the particle/wave duality of light was discovered. Then it changed to the idea that light can travel through the vacuum of space as a particle.



Light travels at different speeds depending on the medium that the light is traveling through. It travels slower in water than it does through air.



However, is there any possibility that dark matter is a medium that light travels through? Could it be conceivable that what we recognize as light speed is really just the speed at which light travels through this medium? If we were to remove dark matter, would light go faster?

galaxy - Active Galaxies - Astronomy

Yes of course, there are other forms apart from the four stated above. Basically you have just two kinds:



  1. the radio-quiet, and

  2. the radio-loud.

But then again, there are two types of Seyfert:



  1. Seyfert I, and

  2. Seyfert II

Apart from Quasar, Blazar and Radio galaxy, we also have presently,



  1. BL Lac (named after its prototype, BL Lacertae, an original member of the Blazar type), and

  2. OVV (Optically Violent Variable quasar, or OVV quasar; a subtype of Blazar)

Best source to begin with is wikipedia.org of course.

Thursday, 8 October 2009

set theory - What is the idea behind stationary sets?

One answer to your question about intuition is simply that stationary sets arise very naturally once you begin to think of the natural measure surrounding club sets. The stationary sets are simply those that have positive outer measure with respect to the club filter. So if you care about club sets being large, then the concept of stationary sets arises naturally.



More specifically, if you consider the collection of sets that either contain or omit a club, then you have a natural two-valued measure (measure 1 means containing a club, measure 0 means omitting a club). The stationary sets are precisely the sets that do not have measure 0, and this is the same as having outer measure 1.



Many uses of the club sets rely on the fact that they can be thought of as the large subsets of $\kappa$, in the sense that they have measure 1 with respect to this measure. However, many of these uses generalize from club to stationary, because it is sufficient for the application that the set is large merely in the sense that it does not have measure $0$, rather than actually having measure $1$.



By the way, I'm not so sure I share your intuition that the club sets are those that "contain big enough ordinals". On the one hand, if $C$ is the set of limit ordinals below $\kappa$, then it is club, but if I add one (or any fixed non-zero ordinal) to every ordinal in $C$, making $D=\{\lambda+1 \mid \lambda\in C\}$, then it would seem to have ordinals just as big or bigger, yet $D$ is not club.

Is a Moon-base inherently more dangerous than a Space Station?


If it is more dangerous, how much more and in what ways?




Proffesorfish and Eli Skolas have both given thoughtful answers comparing the hazards of the I.S.S. versus a moonbase. If humans were preceded by robots to establish infrastructure, I believe a moonbase could be less hazardous. Radiation shielding from local resources could be added to Bigelow habs. At the poles there may be volatiles that could be harvested for life support as well as propellant.



Now for the second part of the question:




And if the danger isn't that much different, why aren't NASA/ESA/Russia doing it




We don't have a moon base because it's a lot harder.



The biggest difference between the I.S.S. and a lunar base is delta-v. It takes 9 km/s to reach the I.S.S. and 15 km/s to reach the lunar surface. At this time there's no infrastructure and thus no propellant available at the lunar surface, so another 3 km/s must be added for the return trip. An 18 km/s delta-v budget is vastly different from a 9 km/s delta-v budget.



A capsule from the I.S.S. re-enters at 8 km/s, while one returning from the Moon would re-enter Earth's atmosphere at about 11 km/s. More robust structure and thermal protection would be needed.



Also a lunar soft landing is a lot harder than rendezvous with the I.S.S.
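
A couple of lines of Python (an added back-of-the-envelope check, using the round numbers above) make the comparison stark:

    # Delta-v: I.S.S. vs lunar-surface round trip (numbers from above).
    print((15.0 + 3.0) / 9.0)     # 2.0: twice the delta-v budget
    # Specific kinetic energy at re-entry scales with v^2:
    print((11.0 / 8.0) ** 2)      # ~1.9x the energy per kilogram to dissipate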

can we see all binary stars as pairs?

To answer the main part of your question: yes, such systems do exist. They're called visual binaries. Most binary systems look like a single star when viewed with the naked eye and cannot be resolved without the aid of a telescope; visual binaries are the ones that can be resolved, although a telescope is generally still needed to tell the components apart.




also is there any binary stars doesn't linked together by gravity




No, there aren't. You were right when you said that the definition of a binary system is basically two stars orbiting a common center of mass. If the stars don't orbit each other, they aren't a binary system.



By the way, check out Proxima Centauri, the nearest star to the Sun. It's one of the three stars in the triple-star Alpha Centauri system. The reason I mention it is that even though it's seemed for years like it was gravitationally bound to the system, that idea is now under debate.




Let me address something that Mitch pointed out. The classification of star systems as "visual binaries" is based solely on our ability to observe them. If we had better telescopes, this class could change. An analogy might be our naming certain wavelengths of light "visible": they depend only on our perceptions, not on some objective characteristic that all observers in the universe could agree on.

rt.representation theory - Is there a machinery describing all the irreducible representations ?

The problem of classifying irreducible $\mathfrak{sl}_2(\mathbb{C})$-representations is essentially intractable, as it contains a wild subproblem. Indeed, the action of the Casimir element $C$ on any irreducible representation is by a complex scalar (by a theorem of Quillen, I believe). If we consider the case when $C$ acts by zero, by a result of Beilinson-Bernstein the category of $\mathfrak{sl}_2$-representations with $C=0$ is equivalent to the category of quasi-coherent $\mathcal{D}_{\mathbf{P}^1}$-modules. In this $1$-dimensional case every irreducible $\mathcal{D}_{\mathbf{P}^1}$-module is holonomic. If we restrict ourselves to irreducible regular holonomic modules, we have two possibilities. One case is that they are supported at a single point, and then the point is a complete invariant. In the other case they are classified by a finite collection of points of $\mathbf{P}^1$ and equivalence classes of irreducible representations of the fundamental group of the complement of the points which map the monodromy elements of the points non-trivially. In particular we can consider the case of three points, in which case the fundamental group is free on two generators (they and the inverse of their product being the three monodromy elements). The irreducible representations where one of the monodromy elements acts trivially correspond to removing the corresponding point and thinking of the representation as a representation of the fundamental group of that complement.



Hence, we can embed the category of finite-dimensional representations of the free group on two elements as a full subcategory, closed under kernels and cokernels, of the category of $\mathfrak{sl}_2(\mathbb{C})$-modules. This makes the latter category wild in the technical sense. However, the irreducible representations of the free group on two letters are also more or less unclassifiable.



There is no contradiction between this and the result of Block. His result gives essentially a classification of irreducibles in terms of equivalence classes of irreducible polynomials in a twisted polynomial ring over $\mathbb{C}$. So the consequence is that such polynomials are essentially unclassifiable.



[Added] Intractable depends on your point of view. As an algebraic geometer, I agree with Mumford making (lighthearted) fun of representation theorists who think that wild problems are intractable. After all, we have a perfectly sensible moduli space (in the case of irreducible representations) or moduli stack (in the general case). One should not try to "understand" the points of an algebraic variety but instead try to understand the variety geometrically. Today, I think this viewpoint has been absorbed to a large degree by representation theory.

jupiter - How to calculate conjunctions of 2 planets

EDIT: http://wgc.jpl.nasa.gov:8080/webgeocalc/#AngularSeparationFinder lets you find planetary conjunctions online using NASA's data. It's still iterative, but fairly fast (since NASA uses fairly powerful servers, even for their websites).



Summary: I'm still researching, but there appears to be no well-known,
reliable non-iterative method to find conjunctions. Using the
iterative method and the C SPICE libraries, I created a table of
conjunctions for the visible planets (Mercury, Venus, Mars, Jupiter,
Saturn, Uranus) here:



http://search.astro.barrycarter.info/



Full "answer":



I am still researching a general answer to this question ("How to
calculate conjunctions of 2 planets"), but here's what I have so far.



The iterative method:



  • Compute the positions of the planets at regular intervals (eg,
    daily). The "daily" works for planets (but not some asteroids and
    definitely not the Moon) because the planets move through the sky
    relatively slowly.


  • Find local minima in the daily lists.


  • For efficiency, carefully discard local minima that are too
    large. For example, Mercury and Venus may approach each other, reach
    a minimal distance of 20 degrees, and then drift apart. The 20
    degrees is a local minima, but not a conjunction.


  • However, be careful when discarding minima. If you are searching
    for 5-degree conjunctions, two planets may be 5.1 degrees apart one
    day, and 5.2 degrees apart the next day, but less than 5 degrees
    apart sometime in the interim.


  • For 5-degree conjunctions, you only need daily minima less than 8
    degrees, and even that is overkill. The fastest a planet can move in
    the sky is 1.32 degrees per day (Mercury), and the second fastest is
    1.19 degrees per day (Venus). In theory, these movements could be in
    opposite directions, so the fastest two planets can separate is 2.51
    degrees per day. So, if two planets are more than 8 degrees apart
    two days in a row, there is no way they could be closer than 5
    degrees between the days.


  • In reality, planets' maximum retrograde angular speed is slower
    than the prograde maximum speed, so the 2.51 degree limit above is
    never actually reached.


  • After finding local minima, use minimization techniques (eg, the
    ternary method) to find the instant of closest approach. (A toy
    sketch of this whole pipeline follows this list.)


  • I ended up using the C SPICE libraries, and found 32,962
    six-degrees-or-less conjunctions between -13201 and 17190, averaging
    about 1 conjunction per year. Of these, 2,185 occur between the "star
    of Bethlehem" and the 2015 conjunctions:


http://f97444b55127cb9d21fd365807cad442.astro.db.mysql.94y.info/
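
To illustrate the pipeline end to end, here is a toy, self-contained Python version (an added sketch: the circular, coplanar orbits and epoch longitudes are made up, and the "separation" is a heliocentric longitude difference rather than a true sky separation; real work would pull geocentric positions from SPICE or HORIZONS). It shows the structure described above: daily sampling, local-minimum detection under a loose cutoff, then ternary-search refinement:

    PERIODS = {"Mercury": 87.969, "Venus": 224.701}   # orbital periods, days
    LON0 = {"Mercury": 200.0, "Venus": 50.0}          # assumed longitudes at t = 0

    def separation(t, p1, p2):
        # Difference of the two (toy) heliocentric longitudes, in degrees.
        d = (LON0[p1] - LON0[p2]
             + 360.0 * t * (1.0 / PERIODS[p1] - 1.0 / PERIODS[p2])) % 360.0
        return min(d, 360.0 - d)

    def refine(f, lo, hi, tol=1e-6):
        # Ternary search for the minimum of a unimodal function on [lo, hi].
        while hi - lo > tol:
            m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
            if f(m1) < f(m2):
                hi = m2
            else:
                lo = m1
        return 0.5 * (lo + hi)

    def conjunctions(p1, p2, days=2000, cutoff=5.0):
        f = lambda t: separation(t, p1, p2)
        sep = [f(t) for t in range(days)]
        found = []
        for t in range(1, days - 1):
            # Daily local minima below a loose threshold (cutoff + 3 degrees,
            # cf. the 8-degree argument above), refined by ternary search.
            if sep[t] <= sep[t - 1] and sep[t] <= sep[t + 1] and sep[t] < cutoff + 3.0:
                tmin = refine(f, t - 1, t + 1)
                if f(tmin) < cutoff:
                    found.append((tmin, f(tmin)))
        return found

    for t, s in conjunctions("Mercury", "Venus"):
        print("day %9.3f  separation %.4f deg" % (t, s))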




This iterative process works, but can be tedious. Since planetary
positions are semi-well-behaved, you'd think there would be a faster,
non-iterative method. Well...



http://emfisis.physics.uiowa.edu/Software/C/cspice/doc/html/cspice/gfsep_c.html



However, this also uses an iterative method, as the long description
of the "step" parameter indicates:



"step must be short enough for a search using step to locate the time
intervals where the specified angular separation function is monotone
increasing or decreasing. However, step must not be too short, or
the search will take an unreasonable amount of time"



By experimentation, I found that a step size of 6 days does not find
all Mercury/Venus minimal separations, although a step size of 1 day
does find these. In other words, reducing the step size from 6 days to
1 day found additional conjunctions, but reducing the step size to
well below 1 day did not produce any additional conjunctions.




M[y script] iterates through dates.



[...]



In Astronomical Algorithms, Meeus has a chapter (17) on Planetary
Conjunctions, but the chapter is less than three pages and lacks
detail except that he's pretty clearly iterating and looking for
the smallest separation, same as my Python program does.



He does a little better in Astronomical Formulae for Calculators:
sounds like he's advocating iterating over big steps to find the
place where the separation stops decreasing and starts increasing,
then using "inverse interpolation" to find the exact time.



"Mathematical Astronomy Morsels" (also Meeus) talks starting on p.
246 about triple conjunctions (defined as two planets having several
conjunctions in a short time, due to retrograde motion, not three
planets all in conjunction at once) and gives some examples for very
specific pairs, like Jupiter and Saturn, with ways you could look for
those conjunctions.



It doesn't look like any of these books would be very helpful in
finding a non-iterative solution.



  • I haven't had a chance to read the books above, but did find:

http://adsabs.harvard.edu/abs/1981JRASC..75...94M



where Meeus confirms the standard iterative method, but also provides
a different, less accurate method. Unfortunately, Meeus only uses this
method to compute solar conjunctions, elongations and oppositions, not
interplanet conjunctions.



  • Jon Giorgini of NASA (jdg@tycho.jpl.nasa.gov) tells me:



As far as NASA computed planetary/stellar conjunctions/occultations, I
don't know of anyone within NASA that does that routinely. There is an
external network of volunteers that does that under the umbrella of
the International Occultation Timing Association (IOTA), and they have
developed pretty refined internal software for that purpose.





[...] the software package Occult does generate planetary conjunction
predictions - based on a two-body solution. The approach used is a
crude brute-force method of generating the planetary ephemerides on a
daily basis. It is not particularly efficient - but it is sufficient
for its intended purpose. [...]



  • As a note, IOTA focuses on asteroid occultations, so computing
    positions daily doesn't always work. Especially for near-Earth
    asteroids, IOTA must iterate considerably more frequently.


  • I also tried contacting Fred Espenak, the creator of
    http://eclipse.gsfc.nasa.gov/SKYCAL/SKYCAL.html, but was unable to
    do so. Jon Giorgini tells me that Fred has retired.


  • I'm still looking, but my current conclusion is that there is no
    good well-known non-iterative way to find planetary conjunctions. As
    the image in my
    https://mathematica.stackexchange.com/questions/92774 shows,
    planetary separations aren't really as well-behaved as I had hoped.


  • I just got a reply from Arnold Barmettler, who runs calsky.com:



I'm using a very time consuming iterative approach to pre-calculate
Bessel Elements for conjunctions. This allows fast online calculation
for any place on earth.
Initial calculations are only done every few years, so CPU time does not
matter. This would change if I'd enter the business to calculate
asteroidal occultations.


re-iterating (pun intended) the same theme.



Miscellaneous:



  • I used planetary system barycenters (the center of mass of a
    planet and its moons) for an entirely different reason. If you ask
    HORIZONS (http://ssd.jpl.nasa.gov/?horizons) for the position of
    Mars and set the date, you'll see this notice:

Available time span for currently selected target body:
1900-Jan-04 to 2500-Jan-04 CT.



However, if you use Mars' barycenter, this becomes:



Available time span for currently selected target body:
BC 9998-Mar-20 to AD 9999-Dec-31 CT.



In other words, NASA computes the position of Mars' planetary system
barycenter for a much longer interval than they compute Mars' actual
position. Since I wanted to compute conjunctions for a long period of
time, I went with the barycenters (DE431 computes barycenters even
beyond 9998 BC and 9999 AD).



I've complained that this is silly, especially for Mars, since the
distance between Mars' center and Mars' planetary system barycenter is
only about 20cm (yes, centimeters, since Phobos and Deimos have very
little mass) per
http://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/aareadme_de430-de431.txt. However,
NASA apparently plans to keep it this way.



  • I ignore light travel time, which introduces a small error. Most
    planetarium programs, like Stellarium, ignore light travel time by
    default, although Stellarium has an option to turn it on.


  • I also ignore refraction, aberration, and similar minor effects.


More details on how I generated these tables (in highly fragmented form):



https://github.com/barrycarter/bcapps/tree/master/ASTRO



(README.conjuncts in the above is a good starting point)



Some of the "cooler" conjunctions I found are at:
http://search.astro.barrycarter.info/table.html. Here's a screenshot:



enter image description here