Sunday, 28 October 2012

co.combinatorics - Which lattices have more than one minimal periodic coloring?

The lattice $\mathbb{Z}^n$ has an essentially unique (up to permutation) minimal periodic coloring for all $n$, namely the "checkerboard" 2-coloring. Here a coloring of a lattice $L$ is a coloring of the graph $G = (V,E)$ with $V = L$ and $(x,y) \in E$ if $x$ and $y$ differ by a reduced basis element. (NB. I am not quite sure that this graph is the proper one to consider in general, so comments on this would also be nice.)
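To make the checkerboard coloring concrete, here is a small sketch (an illustration only, not part of the question itself): color each point of $\mathbb{Z}^n$ by the parity of its coordinate sum, so that points differing by a basis vector always receive opposite colors.

function checkerboardColor(point) {
    // Sum the coordinates and reduce mod 2; the "+ 2" keeps the
    // result in {0, 1} even when the sum is negative.
    var s = point.reduce(function (acc, x) { return acc + x; }, 0);
    return ((s % 2) + 2) % 2;
}

console.log(checkerboardColor([0, 0, 0]));  // 0
console.log(checkerboardColor([1, 0, -2])); // 1: adjacent points differ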



The root lattice $A_n$ has many minimal periodic colorings if $n+1$ is not prime (I have sketched this here, and some motivation is in the last post in that series); if $n+1$ is prime, then it has essentially one $(n+1)$-coloring. Two minimal periodic colorings for $A_3$ are shown below (for convenience, compare the tops of the figures):



The generic ("cyclic") coloring.



A nontrivial example.



The lattices $D_n$ are also trivially 2-colored.



So: are there other lattices that admit more than one minimal periodic coloring? I'd be especially interested to know if $E_8$ or the Leech lattice do.



(A related question: does every minimal periodic coloring of $A_n$ arise from a group of order $n+1$?)

lo.logic - Model of ZF + $\neg$C in which Solovay's Theorem on stationary sets fails?

It is a theorem of Solovay that any stationary subset of a regular cardinal $\kappa$ can be partitioned into $\kappa$ many disjoint stationary sets. As far as I know, the proof requires the axiom of choice. But is there some way to get a model, for instance a canonical inner model, in which ZF + $\neg$C holds and Solovay's theorem fails?



I am interested in this problem because Solovay's theorem can be used to prove the Kunen inconsistency, that is, that there is no nontrivial elementary embedding $j: V \to V$, where $j$ is allowed to be any class, under GBC. The Kunen inconsistency may be viewed as an upper bound on the hierarchy of large cardinals. Without choice, no one has yet proven the Kunen inconsistency (although it can be proven without choice if we restrict ourselves to definable $j$). So if there is hope of proving Solovay's theorem without choice, we could use this to prove the Kunen inconsistency without choice.

solar system - Ninth planet - what else could it be?

The introduction of the paper mentions some alternative interpretations put forward in earlier papers, so it seems this "problem" has been noted before. This new solution to the problem beats the earlier ones, but perhaps it is part of an evolving discovery process that will keep producing new and better explanations.



In addition to those mentioned in the paper, I suggest the following candidate alternatives:



-) Too few observations to hold up against a soon dramatically increased discovery rate. The 4 or 5 (or maybe even 8) TNOs which, as the authors claim, give a 0.007% significance level certainly rest on assumptions that will be challenged if thousands of similar objects are found in the next decade. So, as Andy has answered: possibly a coincidence due to a small sample. (As for the paper's 0.007% probability, well, we'll see.)



-) Some weird, unthought-of selection bias which, in the mathematics more than in the optics, inadvertently skews which (candidate) observations are made and what characteristics they have. The authors are at the top of their fields, and Michael Brown has discovered loads of distant objects, including Sedna; precisely because of that, one might suspect that some shared line of thought has somehow introduced a bias.

fundamental astronomy - Calculating azimuth from equatorial coordinates

I am trying to compute the azimuth of an object from its equatorial coordinates using this formula:



$a = \operatorname{arctan2}\big(\sin(\theta - \alpha),\ \sin\varphi \, \cos(\theta - \alpha) - \cos\varphi \, \tan\delta\big)$



Where
φ = geographic latitude of the observer (here: 0°)
θ = sidereal time (here: 0°)
δ = declination
α = right ascension



I made two JavaScript functions to implement this calculation:



obliq = deg2rad(23.44); // obliquity of the ecliptic
lat2  = deg2rad(0);     // observer's latitude
lmst  = deg2rad(0);     // local sidereal time

function deg2rad(deg) {
    // helper assumed by the snippet above; included here so it runs
    return deg * Math.PI / 180;
}

function equatorial(lat, lon) {
    // returns equatorial [δ, α] from ecliptic coordinates
    dec = Math.asin(Math.cos(obliq) * Math.sin(lat) +
                    Math.sin(obliq) * Math.cos(lat) * Math.sin(lon));
    ra = Math.atan2(Math.cos(obliq) * Math.sin(lon) -
                    Math.sin(obliq) * Math.tan(lat),
                    Math.cos(lon));
    ra += 2 * Math.PI * (ra < 0); // add 2π if negative (boolean coerces to 0/1)
    return [dec, ra];
}

function horizontal(lat, lon) {
    // returns horizontal [h, a] from ecliptic coordinates
    coords = equatorial(lat, lon);
    dec = coords[0]; // δ
    ra = coords[1];  // α
    alt = Math.asin(Math.sin(lat2) * Math.sin(dec) +
                    Math.cos(lat2) * Math.cos(dec) * Math.cos(lmst - ra));
    azm = Math.atan2(Math.sin(lmst - ra),
                     Math.sin(lat2) * Math.cos(lmst - ra) -
                     Math.cos(lat2) * Math.tan(dec));
    azm += 2 * Math.PI * (azm < 0); // add 2π if negative
    return [alt, azm];
}
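For completeness, a minimal driver along the following lines (rad2deg is an assumed helper not shown in my original code) produces the table below:

function rad2deg(rad) {
    return rad * 180 / Math.PI;
}

// sweep the ecliptic longitude λ from 0° to 360° in 15° steps
for (var lambda = 0; lambda <= 360; lambda += 15) {
    var lon = deg2rad(lambda);
    var eq = equatorial(0, lon);  // [δ, α]
    var hz = horizontal(0, lon);  // [h, a]
    console.log(lambda,
        rad2deg(eq[0]).toFixed(4), rad2deg(eq[1]).toFixed(4),
        rad2deg(hz[0]).toFixed(4), rad2deg(hz[1]).toFixed(4));
}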


I cannot see any error, but I get strange results for azimuth (a) as can be seen in this table (the other values seem correct):



  λ         δ         α          h          a     (all values in degrees)
  0      0.0000    0.0000    90.0000     0.0000
 15      5.9094   13.8115    75.0000   246.5600
 30     11.4723   27.9104    60.0000   246.5600
 45     16.3366   42.5357    45.0000   246.5600
 60     20.1510   57.8186    30.0000   246.5600
 75     22.5962   73.7196    15.0000   246.5600
 90     23.4400   90.0000     0.0000   246.5600
105     22.5962  106.2804   -15.0000   246.5600
120     20.1510  122.1814   -30.0000   246.5600
135     16.3366  137.4643   -45.0000   246.5600
150     11.4723  152.0896   -60.0000   246.5600
165      5.9094  166.1885   -75.0000   246.5600
180      0.0000  180.0000   -90.0000   248.3079
195     -5.9094  193.8115   -75.0000    66.5600
210    -11.4723  207.9104   -60.0000    66.5600
225    -16.3366  222.5357   -45.0000    66.5600
240    -20.1510  237.8186   -30.0000    66.5600
255    -22.5962  253.7196   -15.0000    66.5600
270    -23.4400  270.0000    -0.0000    66.5600
285    -22.5962  286.2804    15.0000    66.5600
300    -20.1510  302.1814    30.0000    66.5600
315    -16.3366  317.4643    45.0000    66.5600
330    -11.4723  332.0896    60.0000    66.5600
345     -5.9094  346.1885    75.0000    66.5600
360     -0.0000  360.0000    90.0000    68.3079


Does anyone see the error? Thank you.

Friday, 26 October 2012

lo.logic - When can we prove constructively that a ring with unity has a maximal ideal?

I suspect that the most general reasonable answer is a ring endowed with a constructive replacement for what the axiom of choice would have given you.



How do you show in practice that a ring is Noetherian? Either explicitly or implicitly, you find an ordinal height for its ideals. Once you do that, an ideal of least height is a maximal ideal. This suffices to show fairly directly that any number field ring has a maximal ideal: The norms of elements serve as a Noetherian height.



The Nullstellensatz implies that any finitely generated ring over a field is constructively Noetherian in this sense.



Any Euclidean domain is also constructively Noetherian, I think. A Euclidean norm is an ordinal height, but not at first glance one with the property that $a|b$ implies that $h(a) \le h(b)$ (with equality only when $a$ and $b$ are associates). However, you can make a new Euclidean height $h'(a)$ of $a$, defined as the minimum of $h(b)$ for all non-zero multiples $b$ of $a$. I think that this gives you a Noetherian height.
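Spelling out the monotonicity (my own sketch of the reasoning, so treat it with care): whenever $a \mid b$, every multiple of $b$ is also a multiple of $a$, so the minimum defining $h'$ at $b$ runs over a subset of the one at $a$, hence
$$h'(a) = \min\{\,h(c) : c \ne 0,\ a \mid c\,\}, \qquad a \mid b \implies h'(a) \le h'(b).$$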



I'm not sure that a principal ideal domain is by itself a constructive structure, but again, usually there is an argument based on ordinals that it is a PID.

Thursday, 25 October 2012

universe - Is time itself speeding up universally?

The rate at which a clock ticks is a local property. There is no universal rate of time which could speed up, so the answer is no.



A clock stationary in the same frame of reference and the same gravitational field as me will tick at the same rate (one second every second). It is only clocks that are moving, or in different gravitational fields, that tick at a different rate. If I am falling into a black hole and I stop to check the time, I wouldn't see my watch slow down.



So there is no standard clock rate. Clocks in the early universe (or at least things that depend on time, such as nuclear decay) ran at the same rate in their local gravitational field (at one second per second).



If we were to observe a clock from the early universe, it would be receding at great velocity (and so it would be redshifted and time dilated).



Now, gravitational time dilation does not depend on the amount of mass directly, but on the intensity of the gravitational field. A galaxy has huge mass, but the only places in which the gravitational field is significant (from a GR perspective) are in the neighbourhood of neutron stars and black holes.



To get gravitational time dilation you need an intense gravitational field; just having a lot of mass is not enough. After the period of inflation, the universe was homogeneous, and there were no significant "lumps" to give a net gravitational field. As the gas in the universe collapsed into galaxies and stars, and eventually black holes, regions of intense gravity formed, in which clocks would run slowly.



However there is no general speeding up of clocks in the universe.

the sun - What is the shape of the Sun's orbit around the Earth taking into account elliptical orbits?

Consider the non-inertial frame in which the Earth is at rest with respect to its revolution around the Sun. (Ignore the Earth's rotation around its own axis.) My question is: what is the shape of the Sun's orbit around the Earth in this reference frame?



Now if we assume the Earth moves in a circular orbit around the Sun, then the Sun's orbit around the Earth will also be circular. (Just use the geometric definition of a circle: the set of all points a fixed distance from a given point.) Specifically, it's the great circle formed by the intersection of the ecliptic plane with the celestial sphere.



But the Earth does not move in a circular orbit: Kepler's first law states that the Earth moves in an ellipse around the Sun, with the Sun at one of the foci of the ellipse. So if we consider the Earth's elliptical orbit around the Sun, what is the shape of the Sun's orbit around the Earth? I doubt it's an ellipse, so would it be a more complicated-looking curve?



Note that I'm not interested in gravitational influences from the moon and other planets - this is pretty much a purely mathematical question: if we assume the Earth moves in an ellipse, what would be the shape produced?

milky way - Why is the Solar Helical (Vortex) model wrong?

It isn't correct, because a vortex is not a helix, and so while the planets do trace a helical path as they move through the galaxy, this is not evidence of a vortex.



Yes, the sun actually is moving through space, as it traces a path around the centre of the galaxy. The whole mass of the solar system moves with it, so the planets are not left behind as the sun moves.



Rhys Taylor and Phil Plait have comprehensive smackdowns debunking this vortex idea and other misunderstandings/delusions by the author.

Wednesday, 24 October 2012

stellar evolution - How massive does a main sequence star need to be to go type 1 supernova?

We know the mass a white dwarf needs to be; that's well defined by the Chandrasekhar limit. But before a main sequence star turns into a white dwarf, it tends to lose a fair bit of its matter in a planetary nebula.



According to this site, the white dwarf that remains is about half the mass of the main sequence star, with larger stars losing a bit more.



So, the question: Is it correct to say that a star with a mass of about three solar masses will eventually go supernova, similar to a type 1 supernova, even when it's not part of a binary system? Has that kind of supernova ever been observed?



Or does something else happen in the final stages of such a star? Does it keep going through cycles of collapse and expansion, losing enough mass that when it finally becomes a white dwarf it's below the Chandrasekhar limit in mass?



Mostly, what I've read on supernovae says that type 1 supernovae happen when a white dwarf accretes extra matter and reaches the limit, and that type 2 supernovae are much larger and require about 8-11 solar masses to generate the iron core which triggers the supernova. What happens at the death of a star between three and eight solar masses?

Saturday, 20 October 2012

what is the percentage of stars with planetary systems?

For the purposes of the Drake equation you may as well assume that every star has a solar system.



At present, the exact fraction is unknown, since the search techniques are limited to finding planets with certain characteristics. For example, transit searches tend to find close-in, giant planets; Doppler shift surveys are also most sensitive to massive planets with short orbital periods, and so on.



One can try and account for this incompleteness, but although some examples have been found, we are in the dark about the fraction of stars that have an Earth-sized planet at distances further than the Earth is from the Sun, or the fraction of stars that have any kind of planet orbiting beyond where Saturn is in our solar system.



For the Drake equation, you need to know what the fraction of stars with planetary systems times the number of "habitable" planets per system is. There have been attempts to estimate this from Kepler data.



Possibly the best estimate at present is from Petigura et al. (2013), who estimate that 22% of Sun-like stars have an Earth-sized (1-2 Earth radii) planet that receives between 0.25 and 4 times the radiative flux of the Earth.



Obviously this is a lower limit because it doesn't include planets smaller than the Earth. It also doesn't include the moons of giant planets, which may also be habitable.



We still know very little about this fraction when it comes to M-dwarf stars, which are the most common type of star in the Galaxy...

history - Acquirable Raw Data in Amateur Astrophotography

First off, pairing a classic dob with a DSLR is a bit like a shotgun marriage. A dobsonian is fundamentally a visual telescope. Most manufacturers don't even consider the possibility that these instruments could be used for data collection via a sensor. There are 2 issues here:



1. The dobsonian is not tracking



The sky is moving, the dob stays still. You have to push the dob to keep up with the sky. Any long-exposure photo would be smeared. To remedy this, you'll need an equatorial platform, which will move the dob in sync with the sky.



Please note that only the best platforms allow reasonably long exposure times. Then the results can be fairly good.



2. There isn't enough back-focus



The best photos are taken when you remove the lens from the camera, plug it into the telescope directly, and allow the primary mirror to focus the image directly on the sensor. This is called prime focus photography. But most dobs can't reach the sensor within the camera, because their prime focus doesn't stick out far enough. There are several remedies for this, like using a barlow, moving the primary up, etc.



The bottom line is that it takes some effort to make a dob and a DSLR play nice together. Is it doable? Yes. Is it simple and immediate? No. So the literal answer to your question is that there isn't much you can do with just a dob and a DSLR.



You can take photos of the Moon and the Sun, because the short exposure there does not require tracking, but that's pretty much it. Here is an image of the Moon I took with a home-made 6" dob (with home-made optics) and a mirrorless camera (prime focus, about 1/320 sec exposure):



[image: the Moon]



Makes a cute little desktop background, I guess, but it's definitely not research-grade.



Now add a tracking platform and things become more interesting, and the possibilities open up quite a lot.




In a more general sense:



There are telescopes that are specifically made for astrophotography. They have lots of back-focus, they are short and lightweight and therefore can easily be installed on tracking mounts. More importantly, there are tracking mounts made specifically for imaging - very precise, delicate mechanisms that follow the sky motion with great accuracy. In fact, the mount is more important than the scope.



A typical example would be a C8 telescope installed on a CGEM mount, or anything equivalent. Barring that, a dob with lots of back-focus sitting on a very smooth tracking platform (probably not as accurate as a GEM, but good enough for many purposes).



Make sure you don't exceed the load capacity of the mount. If the mount claims it can carry X amount of weight, it's best if the telescope weight doesn't exceed 1/2 of that amount. Close to the weight load limit, all mounts become imprecise.



Once you have: a tracking mount, a good camera, and a telescope (listed here from most important to least important), you can start imaging various portions of the sky for research. There are 2 main classes of objects that you could image:



1. Solar system objects



They're called "solar system objects" but the class includes anything that's fairly bright and small in apparent size, demanding high resolution. Tracking is important but not that crucial.



You need a sensitive, high speed camera that can take thousands of images quickly (a movie, basically). These are called planetary cameras. As a cheap alternative in the beginning you could use a webcam, there are tutorials on the Internet about that. A DSLR in video mode in prime focus might work, but it's going to do a lot of pixel binning, so resolution would be greatly reduced unless you use a very powerful barlow (or a stack of barlows).



You'll load all those images into software that performs "stacking", reducing them all to one single, much clearer image.
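To give a flavour of the core idea (a toy sketch of my own, not what any particular stacking package actually does): averaging N aligned frames pixel by pixel beats the random noise down by roughly a factor of sqrt(N). Real tools also grade, select, and align the frames first.

function stackFrames(frames) {
    // frames: array of equal-length Float32Arrays, one per video frame,
    // assumed already aligned on the target
    var n = frames.length, len = frames[0].length;
    var out = new Float32Array(len);
    for (var i = 0; i < n; i++) {
        for (var p = 0; p < len; p++) {
            out[p] += frames[i][p];
        }
    }
    for (var p = 0; p < len; p++) {
        out[p] /= n; // average: noise drops roughly as sqrt(n)
    }
    return out;
}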



The scope needs to operate at a long focal length, f/20 being typical, so a barlow is usually required. The bigger the aperture, the better.



2. Deep space objects (DSO)



These are anything that's pretty faint and fuzzy, like galaxies, but some comets are also DSO-like in their appearance. You need to take extremely long exposures; usually a dozen or a few dozen images, each one between 30 sec and 20 min of exposure. Extremely precise tracking is paramount, so you need the best tracking mount you could buy. Autoguiding is also needed to correct tracking errors.



The scope needs to operate at short focal ratios, f/4 is pretty good, but as low as f/2 is also used; focal reducers (the opposite of barlows) are used with some telescopes. Aperture doesn't mean much; small refractors are used with good results.



The camera needs to be very low noise; DSO cameras use active cooling that lowers their temperature 20-40 °C below ambient. DSLRs can also provide decent results, but their noise is typically higher than that of dedicated cameras, so you need to work harder for the same results.



Specific software is used for processing, stacking, noise reduction, etc.




So what can you do with such a setup?



Comet- or asteroid-hunting works pretty well. Terry Lovejoy has discovered several comets recently using equipment and techniques as described above. Here's Terry talking about his work.



Tracking variable stars is also open to amateurs. This could also be done visually, without any camera, just a dob, meticulous note-taking, and lots of patience.



With a bit of luck, you could also be the person who discovers a new supernova in a nearby galaxy. You don't need professional instruments, you just need to happen to point the scope in the right direction at the right time and be the first to report it. This also could be done purely visually, no camera, just a dob.

How big do comets get?

If my understanding is correct, Chury (67P/Churyumov-Gerasimenko), the best-explored comet so far, is rather on the small side as comets go. Most comets observable with the naked eye were observable because they flew close to Earth, not because they were so big. I wonder, though, how big the biggest ones are.



Let's get it in two variants:



  • How big is the biggest known comet? (mass, diameter)

  • Is there an estimate, or a theoretical limit, on how big a comet can be? (For whatever reason, e.g. becoming a planet or breaking up due to tidal forces.)

Friday, 19 October 2012

cosmology - Strong force and metric expansion

Excepting a Big Rip scenario, there is no eventual 'clash'.



Consider a Friedmann–Lemaître–Robertson–Walker universe:
$$\mathrm{d}s^2 = -\mathrm{d}t^2 + a^2(t)\left[\frac{\mathrm{d}r^2}{1-kr^2} + r^2\left(\mathrm{d}\theta^2 + \sin^2\theta\,\mathrm{d}\phi^2\right)\right]\text{,}$$
where $a(t)$ is the scale factor and $k\in\{-1,0,+1\}$ corresponds to the spatially open, flat, or closed cases, respectively. In a local orthonormal frame, the nonzero Riemann curvature components are, up to symmetries:
$$\begin{eqnarray*}
R_{\hat{t}\hat{r}\hat{t}\hat{r}} = R_{\hat{t}\hat{\theta}\hat{t}\hat{\theta}} = R_{\hat{t}\hat{\phi}\hat{t}\hat{\phi}} =& -\frac{\ddot{a}}{a} &= \frac{4\pi G}{3}\left(\rho + 3p\right)\text{,} \\
R_{\hat{r}\hat{\theta}\hat{r}\hat{\theta}} = R_{\hat{r}\hat{\phi}\hat{r}\hat{\phi}} = R_{\hat{\theta}\hat{\phi}\hat{\theta}\hat{\phi}} =& \frac{k+\dot{a}^2}{a^2} &= \frac{8\pi G}{3}\rho\text{,}
\end{eqnarray*}$$
where the overdot denotes differentiation with respect to coordinate time $t$ and the Friedmann equations were used to rewrite the components in terms of density $\rho$ and pressure $p$. From the link, note in particular that $\dot{\rho} = -3\frac{\dot{a}}{a}\left(\rho+p\right)$.



If dark energy is described by a cosmological constant $\Lambda$, as it is in the standard ΛCDM model, then it contributes a constant density and pressure $\rho = -p = \Lambda/8\pi G$, and so no amount of cosmic expansion would change things. Locally, things look just the same as they ever did: for a universe dominated by dark energy, the curvature stays the same, so the gravitational tidal forces do too. Through those tiny tidal forces, the dark energy provides some immeasurably tiny perturbation on the behavior of objects, including atomic nuclei, forcing a slightly different equilibrium size than would obtain otherwise. But it is so small that it has no relevance to anything on that scale, nor do those equilibrium sizes change in time. The cosmological constant holds true to its name.



On the other hand, if dark energy has an equation of state $p = w\rho$, then a flat expanding universe dominated by dark energy has
$$\dot{\rho} = -3\frac{\dot{a}}{a}\left(\rho+p\right) = -3\sqrt{\frac{8\pi G}{3}}\left(1+w\right)\rho^{3/2}\text{,}$$
and immediately one can see that there is something special about $w<-1$, leading to an accumulation of dark energy, while the cosmological constant's $w = -1$ leads to no change. This leads to a Big Rip more generally, as zibadawa timmy's answer explains.
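A quick numerical illustration of that dichotomy (my own sketch, in units where $8\pi G/3 = 1$ so that $H = \sqrt{\rho}$ for the flat case):

function evolveRho(w, rho0, dt, steps) {
    // forward-Euler integration of d(rho)/dt = -3*H*(1+w)*rho,
    // with H = sqrt(rho) in these units
    var rho = rho0;
    for (var i = 0; i < steps; i++) {
        rho -= 3 * Math.sqrt(rho) * (1 + w) * rho * dt;
    }
    return rho;
}

console.log(evolveRho(-1.0, 1, 1e-3, 5000)); // w = -1: rho stays at 1
console.log(evolveRho(-1.1, 1, 1e-3, 5000)); // w < -1: rho has grown ~16x by t = 5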





"If the metric expands, surely objects get further away from one another, and that would include the stars inside galaxies as well as the galaxies themselves?"




Not at all. It wouldn't even make any sense: if you have an object like an atom or a star in gravitational freefall, by the equivalence principle only tidal forces across it are relevant. The tidal forces stretch the object until the internal forces balance them. But for a Λ-driven accelerated expansion, the dark energy contribution to tidal forces is constant. Hence, an object already in equilibrium has no reason to change its size further, no matter how long the cosmic acceleration goes on. This also applies to galaxies, except that there the internal forces are also gravitational and balance the dark energy contribution.



Looking at this in more detail, imagine a test particle with four-velocity $u$, and a nearby one with the same four-velocity, separated by some vector $n$ connecting their worldlines. If they're both in gravitational freefall, then their relative acceleration is given by the geodesic deviation equation, $\frac{\mathrm{D}^2n^\alpha}{\mathrm{d}\tau^2} = -R^\alpha{}_{\mu\beta\nu}u^\mu u^\nu n^\beta$. The gravitoelectric part of the Riemann tensor, $\mathbb{E}^\alpha{}_\beta = R^\alpha{}_{\mu\beta\nu}u^\mu u^\nu$, represents the static tidal forces in a local inertial frame comoving with $u$, which will drive those particles apart (or together, depending). Hence, keeping those particles at the same distance would require a force between them, but it's not necessary for this force to change unless the tidal forces also change.



Galaxies don't change size through cosmological expansion. Stars don't either, nor do atoms. Not for Λ-driven expansion, at least. It would take an increase in tidal forces, such as those provided by a Big Rip, for them to do so.



A related way of looking at the issue is this: according to the Einstein field equation, the initial acceleration of a small ball of initially comoving test particles with some four-velocity $u$ (in the comoving inertial frame), which is given by the Ricci tensor, turns out to be:
$$\lim_{V\to 0}\left.\frac{\ddot{V}}{V}\right|_{t=0} = -\underbrace{R_{\mu\nu}u^\mu u^\nu}_{\mathbb{E}^\alpha{}_\alpha} = -4\pi G(\rho + 3p)\text{,}$$
where $\rho$ is density and $p$ is the average of the principal stresses in the local inertial frame. For a positive cosmological constant, $\rho = -p>0$, and correspondingly a ball of test particles will expand. That's cosmic expansion on a local scale; because of the uniformity of the FLRW universe, the same proportionality works on the large scale as well, as we can think of distant galaxies as themselves test particles if their interactions are too minute to make a difference.



Again, we are led to the same conclusion: if the ball has internal forces that prevent expansion, then those forces don't need to change in time unless the dark energy density also changes, which doesn't happen for a cosmological constant. If they're not 'clashing' now, then they won't need to in the future either.

Sunday, 14 October 2012

cv.complex variables - Level set of a harmonic function

Let $u$ be a nonconstant real-valued harmonic function defined in the open unit disk $D$. Suppose that $\Gamma\subset D$ is a smooth connected curve such that $u=0$ on $\Gamma$. Is there a universal upper bound for the length of $\Gamma$?



Remark: by the Hayman-Wu theorem, the answer is yes if $u$ is the real part of an injective holomorphic function; in fact, in this case there is a universal upper bound for the length of the entire level set in $D$. For general harmonic functions, level sets can have arbitrarily large length, e.g. $\operatorname{Re} z^n$.

Saturday, 13 October 2012

co.combinatorics - Is there a natural family of languages whose generating functions are holonomic (i.e. D-finite)?

I have considered this problem for a while now. I agree with Greg that the parallels with complexity theory seem to end at unambiguous context-free. The quality that makes a word difficult to recognize diverges from what makes a language difficult to enumerate: e.g. $\{a^n b^n c^n : n \text{ a positive integer}\}$ is easy to count. On the other hand, D-finite sequences are unable to handle a certain notion of sparseness (e.g. $\{a^{2^n} : n \text{ a positive integer}\}$), because such languages tend to give rise to natural boundaries in the generating functions.



As for Jacques' comment, it is true that there is a differential operator on species, but that does not mean that you can model solutions to differential equations easily. If you take an iterative approach à la Chomsky-Schützenberger to generate the combinatorial objects, you need to ensure convergence of your language. (The paper cited above by Martin actually needs a partial retraction on this point.) It is easy to show convergence if you can use Theta, which is actually $x\,\frac{d}{dx}$, but you cannot use Theta to build all linear ODEs with polynomial coefficients.



Along this line, if you restrict yourself to solutions of smaller families of differential equations, there are several combinatorial interpretations, often in terms of rooted trees.



Or, maybe a completely different approach is warranted, for example the notion of level of
Géraud Sénizergues or the recent work of Labelle and Lamathe http://www.mat.univie.ac.at/~slc/wpapers/s61Alablam.html.

ct.category theory - Are all coproducts of 1 in a topos distinct ?

At least if you're talking about finite coproducts, then the answer is yes. If $n\le m$, then we have a canonical inclusion $\sum_{i=1}^n 1 \hookrightarrow \sum_{j=1}^m 1$, which is in fact a complemented subobject with complement $\sum_{k=1}^{m-n} 1$. If this inclusion is an isomorphism, then its complement is initial, and hence (assuming the topos is nontrivial) $n=m$. Now if we have an arbitrary isomorphism $\sum_{i=1}^n 1 \cong \sum_{j=1}^m 1$, then composing with the above inclusion we get a monic $\sum_{i=1}^m 1 \hookrightarrow \sum_{j=1}^m 1$. However, one can show by induction that any finite coproduct of copies of $1$ in a topos is Dedekind-finite, i.e. any monic from it to itself is an isomorphism. (See D5.2.9 in "Sketches of an Elephant" vol 2.) Thus, the standard inclusion is also an isomorphism, so again $n=m$.

Friday, 12 October 2012

the sun - Solar / lunar positions in ECEF using SOFA C libraries

I'd like to calculate the positions of the sun and moon in the ECEF coordinate system using the SOFA (http://www.iausofa.org/current_C.html) libraries given a Julian day input.



So far, I've got this process for the sun: (1) convert Jd (UTC) to TAI, to TT, then to TDB, then (2) calculate earth position/velocity in heliocentric coordinates (iauEpv00), then (3) negate the terms to get ECI.



(a) I'm not sure if the process above is correct, especially since calculating TDB time requires a "dtr" parameter, which apparently you already need TDB to estimate via iauDtdb().



(b) I'm not sure how to convert the resulting vector from ECI to ECEF, which is probably just a time-based rotation of the longitude (a sketch of the rotation I mean is below), but how much time is elapsing?



(c) Not sure where to even start with the moon.
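Regarding (b), my current understanding (a sketch only, not verified) is that ECI to ECEF is a single rotation about the z-axis by the Earth rotation angle θ, which SOFA can supply via e.g. iauEra00 or iauGmst06 (ignoring precession/nutation and polar motion):

function eciToEcef(eci, theta) {
    // rotate an ECI vector [x, y, z] about the z-axis by the Earth
    // rotation angle theta (radians) to get ECEF coordinates
    var c = Math.cos(theta), s = Math.sin(theta);
    return [
         c * eci[0] + s * eci[1],
        -s * eci[0] + c * eci[1],
         eci[2]
    ];
}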

ds.dynamical systems - Proper families for Anosov flows

So I've been skimming Bowen's 1972 paper "Symbolic Dynamics for Hyperbolic Flows" hoping it would give me some insight into how to build a Markov family for the cat flow (i.e., the Anosov flow obtained by suspension of the cat map with unit height). For the sake of completeness, the cat flow $\phi$ is obtained as follows:



i. Consider the cat map $A$ on the 2-torus and identify points $(Ax,z)$ and $(x,z+1)$ to obtain a 3-manifold $M$



ii. Equip $M$ with a suitable metric (e.g., $ds^2 = \lambda_+^{2z}dx_+^2 + \lambda_-^{2z}dx_-^2 + dz^2$, where $x_\pm$ are the expanding and contracting directions of $A$ and $\lambda_\pm$ are the corresponding eigenvalues.)



iii. Consider the flow generated by the vector field $(0,1)$ on $M$; that's the cat flow.



Unfortunately I'm getting stuck at the first part of Bowen's quasi-constructive proof, which requires finding a suitable set of disks and subsets transverse to the flow. Rather than rehash the particular criteria for a set of disks and subsets used in Bowen's construction, I will relay a simpler but very similar set of criteria, for a proper family (which if it meets some auxiliary criteria is also a Markov family):



$\mathcal{T} = \{T_1,\dots,T_n\}$ is called a proper family (of size $\alpha$) iff there are differentiable closed disks $D_j$ transverse to the flow s.t.



  1. the $T_j$ are closed

  2. $M = \phi_{[-\alpha, 0]}\Gamma(\mathcal{T})$, where $\Gamma(\mathcal{T}) = \cup_j T_j$

  3. $\dim D_j = \dim M - 1$

  4. $\operatorname{diam} D_j < \alpha$

  5. $T_j \subset \operatorname{int} D_j$ and $T_j = \overline{T_j^*}$, where $T_j^*$ is the relative interior of $T_j$ in $D_j$

  6. for $j \ne k$, at least one of the sets $D_j \cap \phi_{[0,\alpha]}D_k$, $D_k \cap \phi_{[0,\alpha]}D_j$ is empty.

I've been stuck on even constructing such disks and subsets (let alone ones where the subsets are rectangles in the sense of hyperbolic dynamics). Bowen said this sort of thing is easy and proceeded under the assumption that the disks and subsets were already in hand. I haven't found it to be so. The thing that's killing me is condition 6; otherwise, neighborhoods of the Adler-Weiss Markov partition for the cat map would fit the bill, along with the auxiliary requirements for the proper family to be a Markov family.



I've really been stuck in the mud on this one, could use a push.

gravity - Camera reaching the event horizon

If the black hole has no surrounding matter, so that there is no violent radiation generated by accretion or the like, then it still depends on the mass of the black hole. If it is in the thousands of solar masses or more, it is possible for a realistic camera to survive free-fall to the horizon. This is essentially a non-issue for supermassive black holes. On the other hand, smaller black holes are quite a bit more punishing.



For Newtonian gravity with potential $\Phi$, in a free-falling frame, a particle at $x^k$ near the origin of the frame will be accelerated at
$$\frac{\mathrm{d}^2x^j}{\mathrm{d}t^2} = -\frac{\partial\Phi}{\partial x^j} = -\frac{\partial^2\Phi}{\partial x^j\partial x^k}x^k\text{,}$$
where the second derivatives of the potential, $\Phi_{,jk}$, form the so-called tidal gravitational field. Since $\Phi = -GM/r$ for a point source, you should expect the tidal forces on a free-falling object to be proportional to $GM/r^3$ times the size of the object. Thus, at the Schwarzschild radius, this is on the order of $c^6/(GM)^2$.



Of course, black holes are not Newtonian. However, it turns out that for the non-rotating, uncharged (Schwarzschild) black hole, radial free-fall of a test particle has the same form as in Newtonian theory, except in Schwarzschild radial coordinate (not radial distance) and proper time of the particle (not universal time), so the above is essentially correct even for Schwarzschild black holes.



To be relativistically correct, the tidal forces on a free-falling object are described by the equation of geodesic deviation, in which the gravitoelectric part of the Riemann curvature forms the tidal tensor:
$$\frac{\mathrm{D}^2x^\alpha}{\mathrm{d}\tau^2} = -R^\alpha{}_{\mu\beta\nu} u^\mu u^\nu x^\beta\text{.}$$
In Schwarzschild spacetime, this turns out to be $+2GM/r^3$ in the radial direction, stretching the free-falling object, and $-GM/r^3$ in the orthogonal directions, squeezing it. This stretching-and-squeezing due to gravitational tidal forces is sometimes called spaghettification.



Some example numbers: say the camera size is on the order of $0.1\,\mathrm{m}$. The following are the approximate tidal accelerations near the horizon for black holes of different multiples of the solar mass:



  • $M\sim 10\,\mathrm{M_\odot}$: $\sim 10^6$ Earth gravities;

  • $M\sim 10^4\,\mathrm{M_\odot}$: $\sim 1$ Earth gravity;

  • $M\sim 10^6\,\mathrm{M_\odot}$: $\sim 10^{-4}$ Earth gravities.
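These figures follow from the $2GM/r^3$ tidal field evaluated at $r = 2GM/c^2$; a quick sanity-check sketch (my own, with rounded constants):

var G = 6.674e-11;    // m^3 kg^-1 s^-2
var c = 2.998e8;      // m/s
var gE = 9.81;        // one Earth gravity, m/s^2
var Msun = 1.989e30;  // kg
var L = 0.1;          // camera size, m

[10, 1e4, 1e6].forEach(function (mult) {
    // tidal acceleration across L at the horizon:
    // (2GM/r^3) * L with r = 2GM/c^2, i.e. L * c^6 / (4 G^2 M^2)
    var M = mult * Msun;
    var a = L * Math.pow(c, 6) / (4 * G * G * M * M);
    console.log(mult + " Msun: ~" + (a / gE).toExponential(1) + " Earth gravities");
});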

The curvature around a rotating black hole is more complicated, but the moral of the story is basically the same.

Thursday, 11 October 2012

ct.category theory - Are there two groups which are categorically Morita equivalent but only one of which is simple

Can you find two finite groups $G$ and $H$ such that their representation categories are Morita equivalent (which is to say that there's an invertible bimodule category over these two monoidal categories), but where $G$ is simple and $H$ is not? The standard reference for module categories and related notions is this paper of Ostrik's.



This is a much stronger condition than saying that C[G] and C[H] are Morita equivalent as rings (where C[A_7] and C[Z/9Z] give an example, since they both have 9 matrix factors). It is weaker than asking whether a simple group can be isocategorical (i.e. have representation categories which are equivalent as tensor categories) with a non-simple group, which was shown to be impossible by Etingof and Gelaki.



Matt Emerton asked me this question when I was trying to explain to him why I was unhappy with any notion of "simple" for fusion categories. It's of interest to the study of fusion categories where the dual even Haagerup fusion category appears to be "simple" while the principal even Haagerup fusion category appears to be "not simple" yet the two are categorically Morita equivalent.

Wednesday, 10 October 2012

ag.algebraic geometry - Normal bundle to a curve in P^2

Yes, there is a strong relationship between the two.



First, let's work locally in affine space rather than in projective space (it makes more
sense to work locally just because we are dealing with a sheaf, which is defined locally).
So I will consider a non-homogeneous polynomial $f$ cutting out the curve in an affine patch.



Working without a metric (as one does in at least the algebraic aspects of algebraic geometry),
it is perhaps better to talk not about the gradient of $f$, but its exterior derivative
$df$, given by the same formula: $df = f_x dx + f_y dy.$ Since this is differential form
valued, we will compare it with the conormal bundle to the curve $C$ cut out by $f = 0$.



Now the exterior derivative can be thought of simply as taking the leading (i.e. linear) term of $f$.



On the other hand, if $\mathcal I$ is the ideal sheaf cutting out the curve $C$, then the
conormal bundle is $\mathcal I/\mathcal I^2$. (If $f$ is of degree $d$, then $\mathcal I = \mathcal O(-d)$, and so this can be rewritten as $\mathcal O(-d)_{| C}$, dual to the normal
bundle $\mathcal O(d)_{| C}$.) Now $f$ is a section of $\mathcal I$ (over the affine patch on which we are working), so we may certainly regard it as a section of $\mathcal I/\mathcal I^2$; this section
is the (image in the conormal bundle to $C$ of) the exterior derivative of $f$.



The formula $\mathcal I/\mathcal I^2$ for the conormal bundle is thus simply a structural
interpretation of the idea that we compute the normal to the curve by taking the leading term
of an equation for the curve.

sheaf theory - Heuristic explanation of why we lose projectives in sheaves.

We know that presheaf categories have enough projectives and that categories of sheaves do not. Why is this, and how does it affect our thinking?



This question was asked before (and I found it very helpful), but I was hoping to get a better understanding of why.



I was thinking about the following construction (given during a course):



given an affine cover, we normally study the quasi-coherent sheaves, but in fact we could study the presheaves in the following sense:



Given an affine cover of X,



$\mathrm{Ker}_2\left(\pi\right)\rightrightarrows^{p_1}_{p_2} U\rightarrow X$



then we can define $X_1:=\mathrm{Cok}\left(p_1,p_2\right)$, a presheaf, to obtain refinements in presheaves where we have enough projectives and the quasi-coherent sheaves coincide. Specifically, if $X_1\xrightarrow{\varphi}X$ for a scheme $X$, s.t. $\mathcal{S}\left(\varphi\right)\in \mathrm{Isom}$, where $\mathcal{S}(-)$ is the sheafification functor, then for all affine covers $U_i\xrightarrow{u_i}X$ there exists a refinement $V_{ij}\xrightarrow{u_{ij}}U_i$ which factors through $\varphi$.



This hinges on the fact that $V_{ij}$ is representable and thus projective, a result of the fact that we are working with presheaves. In sheaves, we would lose these refinements. Additionally, these presheaves do not depend on the specific topology (at the cost of gluing).



In this setting, we lose projectives because we are applying the localization functor, which is not exact (only right exact). However, I don't really understand this reason, and would like a more general answer.



A related appearance of this loss is in homological algebra. Sheaves do not have enough projectives, so we cannot always get projective resolutions. They do have injective resolutions, and this is related to the use of cohomology of sheaves rather than homology of sheaves. In particular, in Rotman's Homological Algebra, p. 314, he gives a footnote




In The Theory of Sheaves, Swan writes "...if the base space X is not discrete, I know
of no examples of projective sheaves except the zero sheaf." In Bredon, Sheaf Theory:
on locally connected Hausdorff spaces without isolated points, the only
projective sheaf is 0




addressing this situation.




In essence, my question is for a
heuristic or geometric explanation of
why we lose projectives when we pass
from presheaves to sheaves.




Thanks in advance!

star - What decides the direction in which the accretion disk spins?

Stellar systems are born from clouds of turbulent gas. Although "turbulence" means that different parcels of gas move in different directions, the cloud has some overall, net angular momentum. Usually a cloud gives birth to multiple stellar systems, but even the subregion forming a given system has a net, non-vanishing (i.e. $\ne 0$) angular momentum.



Parcels moving in opposite directions will collide, and friction will cause the gas to lose energy, such that the cloud contracts. Eventually subclouds moving in one direction will "win over" subclouds moving in other directions such that everything moves in the same direction, keeping the original angular momentum (minus what is ejected e.g. through jets).



This means that the central star will rotate in the same direction as the circumstellar disk and that, in general, the planets that form subsequently will not only orbit the star in the same direction, but also spin in the same direction around their own axes. This is called prograde rotation. Sometimes, however, collisions between bodies may cause a planet or asteroid to spin in the opposite direction. This is called retrograde rotation and is the case for Venus and Uranus.

Tuesday, 9 October 2012

gt.geometric topology - Flat SU(2) bundles over hyperbolic 3-manifolds

Many (compact orientable) hyperbolic 3-manifolds have non-trivial $SU(2)$ representations.



By Mostow rigidity, the representation of the fundamental group $\Gamma$ of a closed hyperbolic 3-manifold into $SL(2,\mathbb{C})$ (lifted from $PSL(2,\mathbb{C})$) may be conjugated so that it lies in $SL(2,K)$, for $K$ a number field (because transcendental extensions have infinitesimal deformations in $\mathbb{C}$). In particular, the traces of elements will always lie in a number field. One may take different Galois embeddings of $K$ into $\mathbb{C}$, and get new representations of $\Gamma$ into $SL(2,\mathbb{C})$. Sometimes this representation is just conjugate to the original (e.g. if $K$ was chosen too large), but in other cases the new representation of $\Gamma$ lies in $SL(2,\mathbb{R})$ or in $SU(2)$. A nice class of examples of this type are arithmetic hyperbolic 3-manifolds. In fact, they are characterized by the fact that all traces of elements are algebraic integers, and non-trivial Galois embeddings lie in $SU(2)$ (you have to be a bit careful about what this means). Some arithmetic manifolds will only have the complex conjugate representation this way (basically, if the squares of the traces lie in a quadratic imaginary number field), but otherwise you get a non-trivial $SU(2)$ representation. The simplest example is the Weeks manifold, with trace field a cubic field. I suggest the book by MacLachlan and Reid as an introduction to arithmetic 3-manifolds. The description I've given, though, is encoded in terms of quaternion algebras and other algebraic machinery. Another characterization of arithmeticity is in this paper. The nice thing about these representations is that they are faithful.
There is a very explicit way to see these representations for hyperbolic reflection groups (studied by Vinberg in the arithmetic case). Basically, given a hyperbolic polyhedron with acute angles of the form $\pi/q$, sometimes one can form a spherical polyhedron with corresponding angles which are $p\pi/q$, and get a representation into $O(4)$. Passing to finite index manifold subgroups, one can obtain $SU(2)$ reps (since $SO(4)$ is essentially $SU(2)\times SU(2)$).



There are other ways one gets $SU(2)$ representations, but they are less explicit.
Kronheimer and Mrowka have shown that any non-trivial integral surgery on a knot has a non-abelian $SU(2)$ representation. Also, any hyperbolic 3-manifold with positive first Betti number or a smooth taut orientable foliation has non-abelian $SU(2)$ representations.



Addendum: Another observation relating $SU(2)$ representations to hyperbolic geometry is via the observation that the binary icosahedral group (a $\mathbb{Z}/2$ extension of $A_5$) is a subgroup of $SU(2)$. By an observation of Long and Reid, every hyperbolic 3-manifold group has infinitely many quotients of the form $PSL(2,p)$, $p$ prime. These groups always contain subgroups isomorphic to $A_5 < SO(3)$, so one may find a finite-sheeted cover which has a non-abelian $SO(3)$ and therefore $SU(2)$ representation. I have no idea, though, whether these representations are detected by the Casson invariant or Floer homology.

st.statistics - How long for a simple random walk to exceed $\sqrt{T}$?

For a Brownian motion, Novikov finds an explicit expression for any real moments (positive and negative) of the random variable $(\tau(a,b,c)+c)$, where
$$
\tau(a,b,c) = \inf(t \geq 0,\ W(t) \leq -a + b(t+c)^{1/2})
$$
with $a \geq 0$, $c \geq 0$, and $bc^{1/2} < a$. Shepp provides similar results but with $W(t)$ replaced by $|W(t)|$ in the definition, and the range of permissible $a,b,c$ restricted accordingly. Shepp also cites papers by Blackwell and Freedman (1964), Chow, Robbins, and Teicher (1965), and Chow and Teicher (1965), which look like they prove similar but weaker results when the Brownian motion is replaced by a random walk with finite variance. I don't have time to read those references at the moment but I figure these papers should lead you to your answer.

Why does it make sense to say the universe has no centre?

My opinion and 20 Cents:



Humanity has confused many things throughout history.



Back in the days of the holy bonfires on the squares of Europe, nobody understood that the Earth is not the center of the universe.



We still believe that the speed of light is the maximum speed of information, while quantum teleportation has been discovered, with higher, even infinite, speed. Infinity looks much better than 299,792,458 meters per second, does it not?



Moreover, we cannot describe why a photon acquires its super-speed immediately, without any boost, as a part of its existence.



That is why I think we are confused by this great theory about the center of the universe, which is everywhere, there and here, anywhere...



It is too beautiful and too hard to understand; that is why everybody simply believes in it, without trying to imagine anything else.



You could read about the cosmic microwave background. There is a redshift, and the Milky Way is moving toward one side of this big-bang sphere.



There is a sphere. There is a redshift. We are inside.



Do you still believe that the center of the universe is "everywhere"?

Monday, 8 October 2012

ag.algebraic geometry - Is there a presentation of the cohomology of the moduli stack of torsion sheaves on an elliptic curve?

It seems that one can obtain the additive structure of rational cohomology
without too much effort (in no way have I checked this carefully, so caveat
lector applies). As Allen noticed, for rational cohomology it is enough to
compute $T$-equivariant cohomology and then take $\Sigma_m$-invariants (if this is to
work also for integral cohomology, a more careful analysis would have to be made, I think). Now, the space $Tor^m_C$ of length $m$ quotients of $\mathcal O^m_C$ (I use $C$
as everything I say will work for any smooth and proper curve $C$) is smooth and proper, so we may use the Bialynicki-Birula analysis (choosing a general $\rho\colon \mathrm{G}_m \to T$), and we first look at the fixed point locus of $T$.



Now, every sequence $(k_1,\ldots,k_m)$ of non-negative integers with
$k_1+\ldots+k_m=m$ gives a map $S^{k_1}C\times\cdots\times S^{k_m}C \to Tor^m_C$, where $S^kC$ is the
symmetric product interpreted as a Hilbert scheme, and the map takes $(\mathcal
I_1,\ldots,\mathcal I_m)$ to $\bigoplus_i\mathcal I_i \hookrightarrow \mathcal O^m_C$. It is
clear that this lands in the $T$-fixed locus and almost equally clear that this
is the whole $T$-fixed locus (any $T$-invariant submodule must be the direct sum of its
weight-spaces).



We can now use $\rho$ to get a stratification parametrised by the sequences
$(k_1,\ldots,k_m)$. Concretely, the tangent space of $Tor^m_C$ at a point of
$S^{k_1}C\times\cdots\times S^{k_m}C$ has character $(k_1\alpha_1+\cdots+k_m\alpha_m)\beta$, where
$\beta=\sum_i\alpha_i^{-1}$ (and we think of characters as elements of the group ring
of the character group of $T$, and the $\alpha_i$ are the natural basis elements of
the character group). This shows that $\rho$ can for instance be chosen to be $t \mapsto
(1,t,t^2,\ldots,t^{m-1})$. In any case, the stratum corresponding to $(k_1,\ldots,k_m)$ has
as character for its tangent space the characters on the tangent space on which
$T'$ is non-negative, and its normal bundle consists of those on which $T'$ is
negative (hence with the above choice, the character on the tangent space is
$\sum_{i\geq j}k_i\alpha_i\alpha_j^{-1}$ and in particular the dimension of the stratum is
$\sum_iik_i$). We now have that each stratum is a vector bundle over the
corresponding fixed point locus, so in particular the equivariant cohomology of
it is the equivariant cohomology of the fixed point locus and in particular is
free over the cohomology ring of $T$. Furthermore, if we build up the cohomology
using the stratification (and the Gysin isomorphism), at each stage a long exact
sequence splits once we have shown that the top Chern class of the normal bundle
is a non-zero divisor in the equivariant cohomology of the
component of the fixed point locus (using the Atiyah-Bott criterion).



However, the non-zero divisor condition seems more or less automatic (and that
fact should be well-known): The normal bundle $\mathcal N$ of the
$(k_1,\ldots,k_m)$-part, $F$ say, of the fixed point locus splits up as a direct sum
$\bigoplus_\alpha \mathcal N_\alpha$, where $T$ acts by the character $\alpha$ on $\mathcal N_\alpha$. Then
the equivariant total Chern class of $\mathcal N_\alpha$ inside of
$H^\ast_T(F)=H^\ast(F)\bigotimes H^\ast_T(pt)$ is the Chern polynomial of $\mathcal N_\alpha$
as an ordinary vector bundle evaluated at $c_1(\alpha)=\alpha\in H^2_T(pt)$. Hence, if we quasi-order
the characters of $T$ by using $\beta \mapsto -\rho(\beta)$ ("quasi" as many characters get the
same size), then as $-\rho(\alpha)>0$ (because $\alpha$ appears in the normal bundle) we get
that $1\otimes\alpha^{n_\alpha}\in H^\ast_T(F)$, where $n_\alpha$ is the rank of $\mathcal N_\alpha$, is the term of
$c_{n_\alpha}(\mathcal N_\alpha)$ of largest order. Hence, we get that for the top Chern class
of $\mathcal N$, which is the product $\prod_\alpha c_{n_\alpha}(\mathcal N_\alpha)$, its term of
largest order has $1$ as $H^\ast(F)$-coefficient and hence is a non-zero divisor.



As the cohomology of $S^nC$ is torsion free, we get that all the involved
cohomology is also torsion free and everything works over the integers, but as
I've said, to go integrally from $T$-equivariant cohomology to
$\mathrm{GL}_m$-equivariant cohomology is probably non-trivial.



If one wants to get a hold on the multiplicative structure, one could use
the fact that the Atiyah-Bott criterion works to conclude that $H^\ast_T(Tor^m_C)$
injects into the equivariant cohomology of the fixed point locus. The algebra
structure of the cohomology of $S^nC$ is clear (at least rationally), so we get
an embedding into something with known multiplicative structure. The tricky
thing may be to determine the image. We do get a lower bound for the image by
looking at the ring generated by the Chern classes of the tautological bundle,
but I have no idea how close that would get us to the actual image. (It is a
well-known technique anyway, used for instance in equivariant Schubert calculus,
so there could be known tricks.)



There is another source of elements, namely the map $Tor^m_C \to S^mC$. This map becomes
even better if one passes to the $\Sigma_m$-invariants.



[Later] Upon further thought I realise that the relation between $T$- and $G=GL_m$-equivariant cohomology
is simpler than I thought. The point is (and more knowledgeable people certainly know this) that the map $EG\times_TX \to EG\times_GX$ is a $G/T$-bundle and, $G$ being special, $H^*_T(X)$ is free as an $H^*_G(X)$-module (with $1$ being one of the basis elements). That means that $H^*_G(X) \to H^*_T(X)$
is injective, but more precisely $H^*_T(X)/H^*_G(X)$ is torsion free with a $\Sigma_m$-action without invariants, so that $H^*_G(X)$ is the ring of $\Sigma_m$-invariants of $H^*_T(X)$.

Do all planets have a molten core?

It depends on what you call a "planet." For example, the Earth and Moon can be regarded as a "binary planet system," and the Moon doesn't have a molten core. Also, you said inner: the inner core of Earth (and of many other planets) is solid because of (what I think is) pressure. Finally, if it is the outer core you meant, and you didn't count the Moon as a "planet," would you count extrasolar planets as "planets"? Please clarify; but as to your current question, no.

ho.history overview - Good books on problem solving / math olympiad

From a review for Polya's book on Amazon, the books to be read in sequence:



  • Mathematical Problem Solving by Alan Schoenfeld

  • Thinking Mathematically by J. Mason et al.

  • The Art and Craft of Problem Solving by Paul Zeitz

  • Problem Solving Strategies by Arthur Engel

  • Mathematical Olympiad Challenges by Titu Andreescu

  • Problem Solving Through Problems by Loren Larson

Full text of the review below:




By Abhi:



Good aspects of this book have been said by most of the other
reviewers. The main problem with such books is that for slightly
experienced problem solvers, this book probably does not provide a
whole lot of information as to what needs to be done to get better.
For instance, for a kid who is in 10th grade struggling with math,
this is a very good book. For a kid who is in his 11th grade trying
for math Olympiad or for people looking at Putnam, this book won't
provide much help.



Most people simply say that "practice makes perfect". When it comes to
contest level problems, it is not as simple as that. There are
experienced trainers like Professor Titu Andreescu who spend a lot of
time training kids to get better. There is a lot more to it than simply
trying out tough problems.



The most common situation occurs when you encounter extremely tough
questions like the Olympiad ones. Most people simply sit and stare at
the problem and don't go beyond that. Even the kids who are extremely
fast with 10th grade math miserably fail. Why?



The ONE book which explains this is titled "Mathematical Problem
Solving" written by Professor Alan Schoenfeld. It is simply amazing. A
must buy. In case you have ever wondered why, in spite of being
lightning fast in solving textbook exercises in the 10th and 11th
grade, you fail in being able to solve even a single problem from the
IMO, you have to read this book. I am surprised to see Polya's book
getting mentioned so very often but nobody ever mentions Schoenfeld's
book. It is a must read book for ANY math enthusiast and the math
majors.



After reading this book, you will possibly get a picture as to what is
involved in solving higher level math problems especially the
psychology of it. You need to know that as psychology is one of the
greatest hurdles to overcome when it comes to solving contest problems.
Then you move on to "Thinking Mathematically" written by J. Mason et
al. It has problems which are only occasionally too hard, but most of the
time have just enough "toughness" for the author to make the point
ONLY IF THE STUDENT TRIES THEM OUT.



The next level would be Paul Zeitz's The Art and Craft of Problem
Solving. This book also explains the mindset needed for solving
problems of the Olympiad kind. At this point, you will probably
realize what EXACTLY it means when others say that "problem solving is
all about practice". All the while you would be thinking "practice
what? I simply cannot make the first move successfully and how can I
practice when I can't even solve one problem even when I tried for
like a month". It is problem solving and not research in math that you
are trying to do. You will probably get a better picture after going
through the above three books.



Finally, you can move on to Arthur Engel's Problem Solving Strategies
and Titu Andreescu's Mathematical Olympiad Challenges if you managed
to get to this point. There is also Problem Solving Through Problems
by Loren Larson. These are helpful only if you could solve Paul
Zeitz's book successfully.



To conclude, if you are looking for guidance at the level of math
Olympiad, look for other books. This book won't be of much assistance.
On the other hand, if you are simply trying to get better at grade
school math, this book will be very useful.


the sun - How are stellar elemental abundances quoted?

First of all, your first question.



This source clearly states that values are given in the usual logarithmic (dex) scale, with the same formula that you quoted (a similar work).
It is a bit tricky, as the article "explains" the values, but you have to pay attention to the exact definition.



I think it is better to work through an example. Let's take helium.



Better yet, you can read the paper itself from here.



From the table, we have $A_{el}=10.93$. This is the abundance of He relative to H (on a logarithmic scale), defined by $A_{el} = 12 + \log_{10}(N_{el}/N_H)$.
From this you find that $\frac{N_{el}}{N_H} = 10^{10.93-12} \approx 0.085$, i.e. about $8\%$.
Indeed the work confirms this value (see the last page).



What you quote as about 25% He is what they call the abundance by mass of helium ($Y$), which means $Y =$ (mass of helium) / (total mass), and this is indeed about $25\%$.
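
To make the bookkeeping concrete, here is a small sketch (my own, not from the paper) converting the dex value to a number ratio and then to the mass fraction $Y$, assuming a hydrogen-plus-helium-only gas with $m_{He} \approx 4 m_H$ and ignoring metals:

    A_He = 10.93                  # tabulated helium abundance in dex
    n_ratio = 10 ** (A_He - 12)   # number ratio N_He / N_H, from A = 12 + log10(N/N_H)
    print(round(n_ratio, 3))      # ~0.085: about 8.5 He atoms per 100 H atoms

    # Helium mass fraction Y = m_He*N_He / (m_H*N_H + m_He*N_He), with m_He ~ 4 m_H:
    Y = 4 * n_ratio / (1 + 4 * n_ratio)
    print(round(Y, 3))            # ~0.254, the "about 25% helium by mass" figure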

reference request - Learning Topology

The best introduction I know to the entire field of topology is John McCleary's A First Course in Topology: Continuity and Dimension. Not only does it present all the essentials in a strongly geometric manner in low dimensions, it gives a historical perspective on the subject.



What's the best introduction to algebraic topology?



Well, that depends on whether you like geometric intuition or not. If so, Allen Hatcher's textbook is considered by many to be the new gold standard. And best of all, it's available online for free at Hatcher's website.



If you like more modern (i.e. abstract) approaches, the book by Joseph Rotman can't be beat. And recently, an awesome text by Tammo tom Dieck came out, which is probably the state of the art right now and is very readable.



A book that's probably too difficult to use as a textbook, but is so beautiful that it needs to be used as a supplement, is Peter May's A Concise Course in Algebraic Topology. By May's own admission, it's probably too tough for a first course on the subject, but it is beautifully written and gives a great overview of the subject. It also has a very good bibliography for further study.



My favorite texts on algebraic topology? Probably the two books by V.V. Prasolov, Elements of Combinatorial and Differential Topology and Elements of Homology Theory, both available in hardcover from the AMS. Together, they probably give the single most complete presentation of topology that currently exists, with plenty of low-dimensional pictures, concrete constructions, and an emphasis on manifolds.



And of course, I'd be remiss if I didn't mention the wandering oddball text which a lot of US universities are afraid to use for their first course, but which is a treasure trove for mathematics students: John Stillwell's Classical Topology and Combinatorial Group Theory. An incredibly rich historical presentation by a master. Its strange organization and selection of material is a double-edged sword, but it will give amazing insight into the basic ideas of topology and how they developed. These tools will give a student coming out of Stillwell's book a very strong foundation for studying more modern presentations. I highly recommend it to anyone interested in topology at any level.



There are also several terrific free online lecture sources you should look at, primarily the complete notes of K. Wurthmuller and Gregory Naber. Both can be found at Math Online and I recommend them both highly.

Sunday, 7 October 2012

fa.functional analysis - What's the nearest algebraic theory to inner product spaces?

Certainly you can get fairly close with your ternary operation $T(x,y,z) = \langle x,y \rangle z$. You can impose conditions on $T$ so that it comes from a bilinear form that takes values in a commutative ring of extended scalars acting on the vector space $V$. This is not entirely a bad thing; you could instead start with an abelian group $A$ and let $T$ induce both the bilinear form and the ground ring. To stick close to your original question, let's suppose that $V$ is a vector space over a fixed field $k$, and that the elements of $k$ are written into the algebraic theory. Then the scalars could still extend further, but you can write axioms to make sure that that is all that happens.



In detail, you can first suppose that $T(x,y,z)$ is trilinear and that $T(x,y,z) = T(y,x,z)$. Then $T$ can already be read as a bilinear form that takes values in operators acting on $V$. We can use the shorthand $U(z) = T(a,b,z)$, with $U$ an implicit function of $a$ and $b$, and see what conditions can be further imposed. You can impose the axiom:
$$T(a,b,T(c,d,x)) = T(c,d,T(a,b,x)),$$
which says that the different values of $U$ all commute, and thus generate a commutative algebra $R$. You can also impose the relation:
$$T(T(a,b,x),y,z) = T(a,b,T(x,y,z)),$$
in other words $T(U(x),y,z) = U(T(x,y,z))$. This relation says that $T$, as an operator-valued inner product, is $U$-linear.
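
As a quick sanity check (my own illustration, not part of the original answer), one can verify numerically that the model $T(x,y,z) = \langle x,y \rangle z$ on $\mathbb{R}^4$ with the standard inner product satisfies both of these axioms:

    import numpy as np

    rng = np.random.default_rng(0)

    def T(x, y, z):
        # The ternary operation induced by the standard inner product on R^n.
        return np.dot(x, y) * z

    a, b, c, d, x = (rng.standard_normal(4) for _ in range(5))

    # First axiom: the operators U = T(a,b,-) commute with one another.
    assert np.allclose(T(a, b, T(c, d, x)), T(c, d, T(a, b, x)))

    # Second axiom: T(U(x), y, z) = U(T(x, y, z)), i.e. T is U-linear.
    assert np.allclose(T(T(a, b, x), c, d), T(a, b, T(x, c, d)))

    print("both axioms hold for the inner-product model")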



With these relations, every word $W$ in $T$ collapses like this, after permuting inputs:
$$W(x_0,x_1,\ldots,x_{2n}) = \langle x_1, x_2 \rangle \cdots \langle x_{2n-1}, x_{2n} \rangle x_0,$$
where the product is interpreted over $R$ rather than over $k$. (The number of inputs must be odd, because any composite of the ternary operation $T$ has odd arity.) This sort of collapse is the most that you can expect from any such $(2n+1)$-ary operation formed from a bilinear form. I think that proves that with this approach, you can't do better than inner products with extension of scalars.



For all I know, it is possible that you could hard-code Euclidean geometry in some more subtle way, using inequalities and names of elements in $\mathbb{C}$ or $\mathbb{R}$ in addition to using multilinear algebra. I do not know how to do that, though.

Saturday, 6 October 2012

When can I use Fourier Series to solve the heat/wave equation on $[0,L]$?

I don't have a complete answer, but just some preliminary thoughts: the idea behind using Fourier analysis to solve constant-coefficient linear PDEs is to transform a partial differential equation into an ordinary differential equation. In symbols, suppose $\psi : I \times \mathbb{R}^n \to \mathbb{C}$ solves the PDE



$$ \sum_{0 \leq i \leq N} P_i(\nabla) \, \partial_t^i \psi = 0 $$



where $P_i$ are constant coefficient polynomials, the Fourier transform "gives an equation"



$$ \sum_{0 \leq i \leq N} P_i(\xi) \, \partial_t^i \hat{\psi} = 0~. $$



The problem is: how do you interpret this equation? To treat it as an ODE, you need to treat $\hat{\psi}$ as a map $I \to X$ where $X$ is, say, the Hilbert space of $L^2$ functions over $\mathbb{R}^n$ or some such.



Now, in your case of prescribing boundary conditions on the interval $[0,1]$: if your boundary conditions are time-independent, and if they play well with the Fourier transform, then you can again recover the ODE formulation. (The solvability of the ODE, as you noted, depends on which Hilbert space you use and on the properties of the polynomials $P_i$ on that Hilbert space.)
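
To make "recover the ODE formulation" concrete, here is a minimal sketch (my own, not the answerer's) for the heat equation $u_t = u_{xx}$ on $[0,1]$ with time-independent Dirichlet conditions $u(t,0)=u(t,1)=0$: the sine modes diagonalize $\partial_x^2$, so the PDE becomes the decoupled ODEs $\dot{c}_k(t) = -(k\pi)^2 c_k(t)$, which are solved exactly below.

    import numpy as np

    K = 50                           # number of sine modes kept
    x = np.linspace(0, 1, 201)
    dx = x[1] - x[0]
    u0 = x * (1 - x)                 # initial data vanishing at both endpoints

    # Sine coefficients c_k(0) = 2 * int_0^1 u0(x) sin(k pi x) dx  (Riemann sum)
    k = np.arange(1, K + 1)
    modes = np.sin(np.pi * np.outer(k, x))        # shape (K, len(x))
    c0 = 2 * (u0 * modes).sum(axis=1) * dx

    def u(t):
        # Each mode decays independently: c_k(t) = c_k(0) * exp(-(k*pi)**2 * t)
        return (c0 * np.exp(-(k * np.pi) ** 2 * t)) @ modes

    print(u(0.0)[100], u0[100])      # the series reconstructs u0 at x = 0.5
    print(u(0.1)[100])               # the decayed solution at t = 0.1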



But if your boundary conditions are time-dependent, then an immediate problem is that the Hilbert space in which $\hat{\psi}$ lives will be time-dependent. So the naive application of "Fourier" methods won't make sense. Geometrically, the case where $X$ is a fixed Hilbert space is the analogue of solving an ODE on the trivial vector bundle $V$ over $I$ with the trivial connection. The case where $X$ varies with time can be thought of as an attempt to write down an ODE on an arbitrary vector bundle $V$ over $I$. Without specifying the connection, even the notion of an ODE is not well-defined.



To put it differently, since a connection over a curve is just an identification of the fibres (roughly speaking), what you need in order to use an analogue of the Fourier method is a collection of 1-parameter families of functions $\phi_i(t;x)$, such that



  • For each fixed $t$, the functions $\phi_i(t;x)$ form an orthonormal basis of some appropriate Hilbert space;

  • Each $\phi_i(t;x)$ solves the PDE you are looking at.

Just directly assuming that the traces of the $\phi_i$ on constant-$t$ slices are the trigonometric functions is probably not the right way to go in general.



Not having thought about this problem in detail before, I don't have much more to say. But I suspect that the suitability of individual boundary conditions needs to be examined on a case-by-case basis.

amateur observing - How many stars are in the constellation Canis Minor?

You possibly are confused by these two entries in Wikipedia (click on the quotations to go to the original, distinct entries):



Canis Minor contains only two stars brighter than the fourth magnitude, Procyon (Alpha Canis Minoris), with a magnitude of 0.34, and Gomeisa (Beta Canis Minoris), with a magnitude of 2.9.



and



This is the list of [56] notable stars in the constellation Canis Minor, sorted by decreasing brightness.



The first statement is about stars brighter than the 4th magnitude, of which there are two; it says nothing about all the others. Those two stars are NOT a binary: Procyon is 11 light-years away and Gomeisa is 170 light-years away. Procyon itself is a binary, but for the purposes of general observation it is considered to be one star.



The second statement is about "notable stars", which is a rather arbitrary notion. The list is 56 stars long, but in fact, if one were to map every star visible to telescopes, there would be thousands.



Another quibble: what, exactly, is the area of the sky assigned to Canis Minor? Published charts do not agree. See the two figures below.



[First chart of Canis Minor and its surroundings]

~~~~~~~~~~~~~~~~~~~~~~ and ~~~~~~~~~~~~~~~~~~~~~~

[Second chart, drawn with different boundaries]



Note the completely different boundaries.

Thursday, 4 October 2012

ag.algebraic geometry - Zariski-style valuation theory

I'm not an expert here, but I'll try. Let $X$ be a proper irreducible variety and $K$ the field of meromorphic functions on $X$. Let $v : K^* \to A$ be a valuation, where $A$ is a totally ordered abelian group. Recall that the valuation ring $R$ is $v^{-1}(A_{\geq 0})$, with maximal ideal $\mathfrak{m} = v^{-1}(A_{> 0})$. So $R$ is a local ring with fraction field $K$. By the valuative criterion of properness, we get a map $\mathrm{Spec}\, R \to X$; the image of the closed point is called the "center" of $v$. We denote the center by $Z$.



I'll start with the simplest cases and move to the more general. The easiest case is that $X$ is defined over an algebraically closed field $k$, the point $Z$ is a closed point of $X$, and $A$ is $\mathbb{Z}$. Then the completion of $R$ at $\mathfrak{m}$ is isomorphic to $k[[t]]$; choose such an isomorphism. If $x_1$, $x_2$, ..., $x_N$ are coordinates near $Z$, then this isomorphism identifies each $x_i$ with a power series $x_i(t) = \sum_j x_i^{(j)} t^j$.



Intuitively, we can think of $(x_1(t), x_2(t), \ldots, x_N(t))$ as giving an analytic map from a small disc $D$ into $X$, taking the origin to $Z$. (This interpretation makes rigorous sense if we are working over $\mathbb{C}$ and the power series $x_i$ converge somewhere.) The image of this map is the "branch" which Zariski speaks of. The valuation can be thought of as restricting a function to this branch and seeing to what order it vanishes at $Z$. Notice that, if there is a polynomial relation $f(x_1(t), \ldots, x_N(t))=0$, then we should have $v(f)=\infty$ and $v(1/f)= -\infty$. If you don't allow this, then you need to require that the $x_i$ be algebraically independent; intuitively, this is the same as saying that the branch does not lie in any polynomial hypersurface. Since Zariski allows "algebraic" branches, I assume he IS permitting this and has some appropriate convention to deal with it.



Now, what if $A$ is a subgroup of $\mathbb{R}$, but is no longer discrete? Then $R$ is going to embed in some sort of Puiseux field. The details here can be subtle, but the intuition should be that the $x_i(t)$ can have real exponents. For example, if $K=k(x,y)$, and $v(f(x,y))$ is the order
of vanishing of $f(t, t^{\sqrt{2}})$ at $t=0$, then we can think of $v$ as the order of vanishing along the branch $(t, t^{\sqrt{2}})$. Maybe that is what Zariski means by a transcendental branch??



Suppose now that $Z$ is the generic point of an $s$-dimensional variety. Let $L$ be the field of functions on $Z$ and suppose, for simplicity, that $L$ and $K$ have the same characteristic. If $A=\mathbb{Z}$, then $R$ embeds in $L[[t]]$. Again, we can take coordinates $x_1$, ..., $x_N$ and write them as power series. How to think of these power series? One way is to think of them as giving a map $U \times D$ into $X$, where $D$ is again a small disc and $U$ is a dense open subset of $Z$. The image is the $(s+1)$-dimensional branch which Zariski discusses. The valuation is to restrict to this branch and work out the order of vanishing of this restriction along $U \times \{ 0 \}$.



Now, all of this is discussing the case where $A$ embeds (as an ordered group) in $\mathbb{R}$. In general, of course, there are more valuations. For example, let $A=\mathbb{Z} \times \mathbb{Z}$, ordered lexicographically. Let $v : k[x,y] \to A$ send a polynomial to the exponent pair of its lowest-degree monomial, and extend this to a valuation on $k(x,y)$. The way I would think of that is that we have a little disc near $(0,0)$, and a little curve passing through $(0,0)$ along the $x$-axis. So our valuation is, first, to restrict to the curve and take the order of vanishing at the point and, second, to restrict to the surface and take the order of vanishing along the curve. So here we have a flag of branches, not just one. I'm not sure why your Zariski quote doesn't discuss this possibility.
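
As a toy illustration of this rank-two valuation (my own sketch; which coordinate is compared first in the lexicographic order is a convention I've chosen), one can compute $v$ on polynomials directly:

    from sympy import symbols, Poly

    x, y = symbols('x y')

    def v(f):
        """Value in Z x Z (lex order): the exponent pair of the
        lexicographically smallest monomial of f."""
        return min(Poly(f, x, y).monoms())   # Python tuples compare lexicographically

    # v is additive on products and satisfies v(f+g) >= min(v(f), v(g)).
    print(v(x**2 * y + x**3))    # (2, 1)
    print(v(y + x * y**5))       # (0, 1)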

Is mass+energy conserved when a new universe forms inside a black hole?


is mass+energy conserved?




Yes. Even in general relativity, energy and momentum are conserved, although the statement is a bit more subtle than in Newtonian mechanics.




is the total amount of material within the new universe limited to how much stuff has fallen into the black hole, or how much stuff has reached the singularity?




No. Well, sort of. The physicist whose work you linked in your question addressed this in a follow-up paper. His original work (on which your article was based) is: Cosmology with torsion: An alternative to cosmic inflation. A follow-up discussing the mass of the new universe is: On the mass of the Universe born in a black hole. In this paper he claims that if our entire universe were inside a black hole, the black hole would only have to be about 1,000 solar masses.



Believe it or not, this doesn't violate conservation of energy. In fact, he explicitly uses conservation of energy in his calculations. The resolution of this paradox is that there is A LOT of energy in the gravitational field of a black hole.



In this model, the singularity inside the black hole never forms. An event horizon forms as the matter collapses, as you would expect, but inside the horizon space-time "bounces" before it has a chance to make a singularity. As the matter falls inwards, its energy increases; it accelerates and gains kinetic energy in the immense gravitational field. When it reaches the stationary "universe" inside, this kinetic energy is converted into rest-mass energy, and if the matter was accelerating for long enough, the increase in energy can be enormous. This gives the "universe" inside potentially much more mass than the stuff that fell in. The process is described in the paper as the creation of particle-antiparticle pairs by the gravitational field, which amounts to the same effect. Either way, you take energy from the gravitational field and turn it into the energy (mass) of particles inside the horizon. Someone sitting outside the black hole doesn't notice this, because they can only measure the total energy, and that remains constant.




If so then it would seem these black hole universes are only a tiny fraction the size of our own universe.




Not quite. Inside the event horizon the crazy warping of space-time can make the inner "universe" quite large. Like the TARDIS, something much bigger on the inside :)



I am obliged to say that all of this work is very theoretical; his conclusions rest upon some assumptions that we have no evidence to support. But it's a neat idea!

nt.number theory - Fermat over Number Fields

This is mostly an amplification of Kevin Buzzard's comment.



You ask about points on the Fermat curve $F_n: X^n + Y^n = Z^n$ with values in a number field $K$.



First note that since the equation is homogeneous, any nonzero solution with $(x,y,z) \in K^3$ can be rescaled to give a nonzero solution $(Nx,Ny,Nz) \in \mathbb{Z}_K^3$, where $\mathbb{Z}_K$ is the ring of algebraic integers of $K$ -- here $N$ can be taken to be an ordinary positive integer.



Thus you have two "parameters": the degree $n$ and the number field $K$.



If you fix $n$ and ask (as you seem to) whether there are solutions in some number field $K$, the answer is trivially yes, as Kevin says: take $x$ and $y$ to be whatever algebraic integers you want; every algebraic integer has an $n$th root which is another algebraic integer, so you can certainly find a $z$ in some number field which gives a solution. Moreover, if you take $x$ and $y$ in a given number field $K$ (e.g. $\mathbb{Q}$), then you can find infinitely many solutions in varying number fields $L/K$ of degree at most $n$. But it is interesting to ask over which number fields (or which number fields of a given degree) there is a nontrivial solution.
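
To see the triviality concretely, here is a small sympy sketch (my own illustration): take $n = 3$ and $x = y = 1$; then $z = 2^{1/3}$ gives a nontrivial solution lying in the degree-3 number field $\mathbb{Q}(2^{1/3})$.

    from sympy import Symbol, minimal_polynomial, root

    t = Symbol('t')
    n, x, y = 3, 1, 1
    z = root(x**n + y**n, n)          # z = 2**(1/3), an algebraic integer

    print(minimal_polynomial(z, t))   # t**3 - 2, so [Q(z) : Q] = 3
    print(x**n + y**n - z**n)         # 0: a nontrivial solution of X^3 + Y^3 = Z^3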



On the other hand, if you fix the number field $K$ and ask for which $n$ the Fermat curve
$F_n$ has a solution $(x,y,z) \in K^3$ with $xyz \neq 0$, then you're back in business: this is a deep and difficult problem. (You can ask such questions for any algebraic curve, and many people, myself included, have devoted a large portion of their mathematical lives to this kind of problem.) So far as I know / remember at the moment, for a general $K$ there isn't that much which we know about this problem for the family of Fermat curves specifically, and there are other families (modular curves, Shimura curves) that we understand significantly better. But there are some beautiful general results of Faltings and Frey relating the plenitude of solutions (in fact not just over a fixed number field but over all number fields of bounded degree) to geometric properties of the curves, like the least degree of a finite map to the projective line (the "gonality").

Wednesday, 3 October 2012

Why can't there be a general theory of nonlinear PDE?

I agree with Craig Evans, but maybe it's too strong to say "never" and "impossible". Still, to date there is nothing even close to a unified approach or theory for nonlinear PDE's. And to me this is not surprising. To elaborate on what Evans says, the most interesting PDE's are those that arise from some application in another area of mathematics, science, or even outside science. In almost every case, the best way to understand and solve the PDE arises from the application itself and how it dictates the specific structure of the PDE.



So if a PDE arises from, say, probability, it is not surprising that probabilistic approximations are often very useful, but, say, water wave approximations often are not.



On the other hand, if a PDE arises from the study of water waves, it is not surprising that oscillatory approximations (like Fourier series and transforms) are often very useful but probabilistic ones often are not.



Many PDE's in many applications arise from studying the extrema or stationary points of an energy functional and can therefore be studied using techniques arising from calculus of variations. But, not surprisingly, PDE's that are not associated with an energy functional are not easily studied this way.



Unlike other areas of mathematics, PDE's, as well as the techniques for studying and solving them, are much more tightly linked to their applications.



There have been efforts to study linear and nonlinear PDE's more abstractly, but the payoff so far has been rather limited.

If split algebraic groups are potentially isomorphic, are they isomorphic?

The answer is yes, for arbitrary split connected reductive groups over any field. The main point is that the Existence, Isomorphism, and Isogeny Theorems (relating split connected reductive groups and root data) are valid over any field. One reference is SGA3, near the end (which works over any base scheme), but Appendix A.4 of the book "Pseudo-reductive groups" gives a direct proof over fields via faithfully flat descent, taking as input the results over algebraically closed fields (since for some reason the non-SGA3 references always seem to make this restriction).



[Caveat: that A.4 gives a complete treatment for the Isomorphism and Isogeny Theorems over general ground fields, and that is what the question is really about anyway; for the Existence Theorem in the case of exceptional types, I don't know a way to "pull it down" from an algebraic closure, instead of having to revisit the constructions to make them work over prime fields or $\mathbf{Z}$.]

standards - Standardized "constellation" regions?

Modern astronomers are not instructed in the constellations and pay little attention to them except as a naming convention for stars. Positional information is nearly always either equatorial (RA, dec), galactic (gl, gb), or supergalactic (SGL, SGB). To improve the speed with which information on a region can be extracted from a database, a scheme called the Hierarchical Triangular Mesh (HTM) is often used. The sky is divided into base spherical triangles (eight of them in HTM), and each of those is recursively divided into four smaller triangles, appending a digit to the region's ID at each level, until the finest size is reached. One can address the finest region by using all of the digits, or a larger region by dropping some of the final digits.
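
The prefix-addressing idea is easy to sketch in code (a toy model of my own, not the actual HTM library):

    # Toy model: a trixel ID is a base-triangle name plus one digit per
    # subdivision level; truncating digits coarsens the region, and spatial
    # containment is just a string-prefix test.

    def coarsen(trixel_id, levels=1):
        """Drop trailing digits to get the enclosing, coarser trixel."""
        return trixel_id[:-levels] if levels else trixel_id

    def contains(region_id, trixel_id):
        """A coarse trixel contains a fine one iff its ID is a prefix."""
        return trixel_id.startswith(region_id)

    leaf = "N31200"                  # a deeply subdivided trixel
    print(coarsen(leaf))             # 'N3120', its parent
    print(contains("N312", leaf))    # True: prefix match = containment
    print(contains("N320", leaf))    # False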

Tuesday, 2 October 2012

co.combinatorics - What is the minimum set of combinations C(p,n) required to guarantee q

You may find the following interesting.



I think what you're asking for is the minimum cardinality of a code of length $p$ over the alphabet $Z_n$ (the integers mod $n$, assuming the possible numbers are $0,1,\ldots,n-1$) with covering radius $p-q$. This is in general a very hard (NP-complete) problem.



Given two vectors $x,y$ in $Z_n^p$, their Hamming distance $d(x,y)$ is defined as the number of coordinates in which they differ. Given a subset $C$ of $Z_n^p$, which corresponds to the collection of lottery selections, the covering radius $R(C)$ is



$$
R(C) = \max d(x,C)
$$



where the maximum is taken as $x$ ranges over $Z_n^p$, and where $d(x,C)=\min d(x,c)$ with the minimum taken over $c \in C$. When you think about it, you're asking for the minimal cardinality code with covering radius $p-q$, i.e., with $p-q$ wrong numbers out of $p$.
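
For toy parameters, $R(C)$ can be computed by brute force (a sketch of my own; the search is exponential in $p$, so this is for illustration only):

    from itertools import product

    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))

    def covering_radius(C, n, p):
        # R(C) = max over x in Z_n^p of the Hamming distance from x to the code C.
        return max(min(hamming(x, c) for c in C)
                   for x in product(range(n), repeat=p))

    # Example: two codewords of length p = 3 over Z_3.
    C = [(0, 0, 0), (1, 1, 1)]
    print(covering_radius(C, n=3, p=3))   # 3: (2,2,2) disagrees with both codewords everywhere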



In general there are bounds on $R(C)$, and the case $n=3$ is of interest in football [soccer] pools. The ternary Golay code is relevant here, since it is a perfect code with good covering radius.



There was a nice article in the MAA Monthly years ago entitled something like "Football pools: a problem for mathematicians".

A graph connectivity problem (restated)

According to Wikipedia, if your only constraint is minimizing the number of edges removed, it's easy. Whether those algorithms are appropriate for solving your problem as stated, exactly or approximately, is another question entirely. I'd guess that it's certainly easy (in at least one of the exact or approximate senses) as long as the graphs have some nice sparseness or geometric structure, such as being planar or $k$-regular.
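
For the record, here is what the "easy" exact case looks like in practice: minimizing the number of removed edges that separate two given vertices is a minimum-cut (hence max-flow) computation. A sketch of my own using networkx:

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)])

    # Fewest edges whose removal disconnects vertex 0 from vertex 4:
    cut = nx.minimum_edge_cut(G, s=0, t=4)
    print(cut)    # e.g. {(3, 4)}: one edge suffices in this graph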

ag.algebraic geometry - When two k-varieties with the same underlying topological spaces isomorphic?

[This is a situation where things have been rewritten several times in response to an ongoing discussion. Let me try to reconstruct some of the temporal sequence here.]



ROUND ONE:



Ben's answer gives one way that your statement can fail: $Y$ can be singular and $f$ can be a birational morphism which does not induce an isomorphism of local rings at at least one of the singular points.



Here is something else that can go wrong: in positive characteristic, $f$ can be purely inseparable, e.g. the $p$-power Frobenius map.



ROUND TWO



I edited my response to point out that Ben's example has a nonreduced fiber over the singular point. I also said "I think" that mine does not, but this was pointed out by Kevin Buzzard to be false. [Or rather, the statement is false. I truly did think it was true for a little while.]



I also suggested that the following modification might be true:




Suppose $X$ and $Y$ are geometrically irreducible and $Y$ is nonsingular (together with all of the questioner's hypotheses, especially reducedness of the fibers!). Then if $f : X \rightarrow Y$ is a bijective morphism with reduced fibers, it is an isomorphism.




ROUND THREE



I typed up a counterexample over an imperfect ground field when the varieties are not geometrically integral (g.i. = the base change to the algebraic closure is reduced and irreducible; the reducedness business has to be taken more seriously when the ground field is imperfect, since taking an inseparable field extension can introduce nilpotent elements). But Kevin Buzzard posted a simpler counterexample, so I deleted my answer.



ROUND FOUR



Kevin's answer also includes a beautifully simple example to show that the question is false even over $mathbb{C}$ without some nonsingularity hypotheses: use nodes instead of cusps and remove one of the preimages of the nodal point.



I still wonder if my attempted reparation of the statement above is correct.

ag.algebraic geometry - Decomposition theorem and blow-ups

In this example we have $p : X \to Y$, and we may assume, WLOG, that $X$ is isomorphic to the total space of the normal bundle to the surface and that $p$ is the contraction of the zero section.



Then, by the Deligne construction, $IC(Y) = \tau_{\le -1}\, j_* \mathbb{Q}[3]$, where $j : Y^0 \hookrightarrow Y$ is the inclusion of the smooth locus (which is isomorphic to $X^0$, the complement of the zero section in $X$).



In order to work this out, we can use the Leray-Hirsch spectral sequence



$E_2^{p,q} = H^p(S) \otimes H^q(\mathbb{C}^*) \Rightarrow H^{p+q}(X^0)$



This converges at $E_3$, and we get that the degree 0, 1 and 2 parts of the cohomology of $X^0$ are given by the primitive classes in $H^i(S)$ for $i = 0, 1, 2$. Note that this is everything in degrees 0 and 1, but in degree two the primitive classes form a codimension-one subspace $P_2 \subset H^2(S)$.



The Deligne construction above gives us that $IC(Y)_0 = H^0(S)[3] \oplus H^1(S)[2] \oplus P_2[1]$.



(This is a general fact: whenever you take a cone over a smooth projective variety, the stalk of the intersection cohomology complex at 0 is given by the primitive classes with respect to the ample bundle used to embed the variety. This follows by exactly the same arguments given above.)



Then the decomposition theorem gives



$p_* \mathbb{Q} = \mathbb{Q}_0[1] \oplus ( IC(Y) \oplus H^3(S) ) \oplus H^4(S)[-1]$.



EDIT: fixed typos pointed out by Chris.

Monday, 1 October 2012

gt.geometric topology - Proof of the Reidemeister theorem

I taught knot theory last semester and ran into the same problem. I looked in every book I could get my hands on, and could not find an undergraduate level proof. In the end, I wrote up my own notes (which I would be happy to scan when I get back into the office). The key ideas for the case-by-case analysis are in the book "Knots, links, braids, and 3-manifolds" by Prasolov and Sosinsky. I also found Louis Kauffman's book "On knots" to be helpful. There are two lemmas I could not find anywhere: (1) the general position argument, which says that there is a nice projection and (2) the argument which says that you can find a general projection so that the associated diagram is equivalent to the original diagram (most books skip this issue). The point of the second lemma is that it is not enough to show that there exist two projections that differ by Reidemeister moves, rather, you want to show that the two given diagrams differ by Reidemeister moves.

nt.number theory - Bounds on $\sum n ((\alpha n))$ where $((x))$ is the sawtooth function

Here is an approach which may give some better estimates for particular values of $\alpha$:



$$\sum_{i=1}^N i((i\alpha)) = \sum_{i=1}^N \sum_{j=i}^N ((j \alpha)) = \sum_{i=1}^N \sum_{k=0}^{N-i}((N\alpha - k\alpha)).$$



So, if you can estimate



$$\sup_{\substack{x\in (0,1) \\ M \le N}} \bigg|\sum_{k=0}^{M} ((x-k\alpha))\bigg|,$$



then you can crudely multiply by $N$ to get an estimate for $\big|\sum i((i\alpha))\big|$.



Specifically, my guess is that for quadratic irrationals $\alpha$, there is an upper bound for



$$\bigg|\sum_{k=0}^M ((x-k\alpha))\bigg|$$



which is $O(\log M)$, which would give you a bound of $O(N \log N)$; and, more generally, that there is a bound in terms of the coefficients of the simple continued fraction of $\alpha$, so that if those are bounded, then you still get $O(N \log N)$.



For the particular value $\phi = (\sqrt5 + 1)/2$, $\sum_{i=0}^{M} ((i \phi))$ has logarithmic growth $c + \frac{5\sqrt5 - 11}{4} \log_\phi M$ (achieved at indices in the sequences A064831 (+) and A059840 (-)), which suggests that $\sup \sum_{i=0}^M ((x-i\phi))$ also has logarithmic growth, which would give an $N \log N$ bound for the sum.
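
A quick numerical experiment (my own; with $((x))$ implemented using the usual convention that it vanishes at half-integers) is consistent with these guesses:

    import math

    phi = (math.sqrt(5) + 1) / 2

    def saw(x):
        # ((x)) = x minus the nearest integer, set to 0 at half-integers.
        f = x - math.floor(x)
        return 0.0 if f == 0.5 else (f if f < 0.5 else f - 1.0)

    for N in (10**3, 10**4, 10**5):
        T = sum(saw(i * phi) for i in range(1, N + 1))          # grows like log N
        S = sum(i * saw(i * phi) for i in range(1, N + 1))
        print(N, round(T, 3), round(S / (N * math.log(N)), 4))  # S/(N log N) stays bounded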




In the opposite direction, for all $\alpha \notin \frac{1}{2}\mathbb{Z}$, $$\limsup \bigg(\log_N \bigg|\sum_{i=0}^N i((i\alpha))\bigg|\bigg) \ge 1$$ since there are terms proportional to $N$.



The sum can be greater than $N^{2-\epsilon}$ infinitely often, by choosing $\alpha$ so that it is extremely well approximated by infinitely many rational numbers. When $\alpha$ is very closely approximated by $p/q$, then for $N$ a small multiple of $q$ (where "small" is relative to how well $p/q$ approximates $\alpha$), about $1/q$ of the terms can be moved past integers with a small perturbation of $\alpha$ to $\alpha'$, which causes a jump of about $N^2/q$ in the sum. So, either the sum for $\alpha$ or for $\alpha'$ is large. We can choose a sequence $p_n/q_n$ converging to an $\alpha$ which produces large sums infinitely often, so that for these $\alpha$,
$$\limsup \bigg(\log_N \bigg|\sum_{i=0}^N i((i\alpha))\bigg|\bigg) = 2.$$

Homological Algebra for Commutative Monoids?

I used to think about this problem in relation to a chain theory for bordism (as mentioned by Josh Shadlen above).
The problem you have with monoids is first and foremost that the category is not balanced: you can have a morphism that is both an epimorphism and a monomorphism but NOT an isomorphism, e.g. the inclusion $\mathbb{N} \to \mathbb{Z}$.
Consequently, most constructions that you would like to make (notably short exact sequences and the snake lemma) fail at some level.
I made a few notes on this as part of my investigation into bordism theory / homework assignment
here.

In this case, we have complexes of free abelian monoids whose homology takes its values in abelian groups, and yet the long exact sequence does not come from a short exact sequence of monoid complexes.

My references include:

[Bau89] Friedrich W. Bauer. Generalised homology theories and chain complexes. Annali di Matematica Pura ed Applicata, CLV:143–191, 1989.

[Bau95] Friedrich W. Bauer. Bordism theories and chain complexes. Journal of Pure and Applied Algebra, 102:251–272, 1995.

[BCF63] R.O. Burdick, P.E. Conner, and E.E. Floyd. Chain theories and their derived homology. Proceedings of the AMS, 19(5):1115–1118, Oct. 1963.

[Koc78] S. O. Kochman. A chain functor for bordism. Transactions of the American Mathematical Society, 239:167–196, 1978.

at.algebraic topology - Burnside ring and zeroth G-equivariant stem for finite G

This is from Segal's paper



Equivariant stable homotopy theory. Actes du Congrès International des Mathématiciens (Nice, 1970), Tome 2, pp. 59--63. Gauthier-Villars, Paris, 1971.



MR0423340 (54 #11319)



where he states that the equivariant stable cohomotopy of $S$ is $\pi_G^{-\ast}(S)=\bigoplus_K \pi_{\ast}^s(B(N_G(K)/K)^+)$. In particular, $\pi_G^0(S)$ is the Burnside ring.

universe - What would we find if we could travel 786 trillion light years in any direction

There is a principle that says the universe is homogeneous and isotropic: on large scales it looks the same from every point, and in every direction.



This means, in turn, that if you travel 786 trillion light years instantaneously in some direction, you'll find a very different sky, but with exactly the same general structure: stars, galaxies, clusters of galaxies, and attractors.



Although you can NOT travel so far so fast, you can make this thought experiment:



Send a spaceship very far away at (almost) the speed of light with a photo camera; it takes a 360º photo and comes back, arriving 1572 trillion years later.
While awaiting the spaceship's return, wait 786 trillion years, take a 360º photo here, then wait the other 786 trillion years.



Compare the two photos. They were taken at the same "cosmic time": one of them 786 trillion years ago, here, and the other one maybe two or three years of spaceship time ago, 786 trillion light years away.



The photos will look the same on a rough scale.