Wednesday 30 September 2009

geometry - Recovering a piecewise affine function

Let's say I have a piecewise affine convex function $f(x_1,x_2)$ on which the following operations are possible:



  • Computing $f(x_1,x_2)$.

  • Computing a subgradient of $f$ at $(x_1,x_2)$.

  • Computing all breakpoints of $f$ along a line $a_1x_1 + a_2x_2 = a_3$.

What is an efficient algorithm to compute $f$ completely, i.e. to compute all its breakpoints? Are there any general methods?

co.combinatorics - What exactly is the relationship between codes over finite fields and Euclidean sphere-packings?

Conway and Sloane, Sphere Packings, Lattices, and Groups, is one of the best survey-style mathematics books ever written. It certainly does extend the discussion to error-correcting codes, even though the main theme is Euclidean sphere packings.



The most important relationship between sphere packings and codes, but not the only one, is an extended analogy that leads to uniformly argued bounds and constructions. The Hamming metric and the Euclidean metric are both metrics, and in both cases you are interested in minimum-distance sets, which you then call codes. But the analogy is more than just that. The Hamming cube is a normed abelian group, and Euclidean space is a normed abelian group. In both cases, there is a special interest in codes that are subgroups. If a code is a subgroup, you only have to check the minimum distance from the 0 vector. Also, recall Pontryagin-Fourier duality: if $A$ is a locally compact abelian group, it has a dual group $\widehat{A}$ which is its group of characters. The Hamming cube and Euclidean space are both canonically self-dual in the sense that $A = \widehat{A}$. If $C \subset A$ is a subgroup, it has a dual code $C^\perp$, defined as the subgroup $\widehat{A/C} \subset \widehat{A} = A$. (In other words, it is the group of characters that are trivial on $C$.) $C$ has a weight enumerator, which in the Euclidean case is called a theta series, and the weight enumerators of $C$ and $C^\perp$ are related by a transform. The transform is called the MacWilliams identity in the Hamming case and the Jacobi identity in the Euclidean case. The transform is possible because of another fundamental common feature: the Hamming cube and Euclidean space are both 2-point transitive metric spaces.
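To make the MacWilliams transform concrete, here is a small brute-force check of my own (an illustrative sketch; the helper names are hypothetical, not from the answer). It computes the weight distribution of the length-3 parity (even-weight) code and of its dual, and verifies that the Krawtchouk-polynomial form of the transform relates them:

```python
from itertools import product
from math import comb

def weight_distribution(code, n):
    """W[w] = number of codewords of Hamming weight w."""
    W = [0] * (n + 1)
    for c in code:
        W[sum(c)] += 1
    return W

def dual_code(code, n):
    """All length-n binary vectors orthogonal (mod 2) to every codeword."""
    return [v for v in product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]

def macwilliams(W, n):
    """MacWilliams transform: W'_w = (1/|C|) sum_j W_j K_w(j), where
    K_w(j) = sum_i (-1)^i C(j,i) C(n-j,w-i) is a Krawtchouk polynomial."""
    size = sum(W)
    return [sum(W[j] * sum((-1) ** i * comb(j, i) * comb(n - j, w - i)
                           for i in range(w + 1))
                for j in range(n + 1)) // size
            for w in range(n + 1)]

n = 3
even = [c for c in product((0, 1), repeat=n) if sum(c) % 2 == 0]  # parity code
print(weight_distribution(even, n))                  # [1, 0, 3, 0]
print(weight_distribution(dual_code(even, n), n))    # [1, 0, 0, 1] (repetition code)
print(macwilliams(weight_distribution(even, n), n))  # [1, 0, 0, 1] — they agree
```

The dual of the parity code is the repetition code, and the transform of one weight distribution reproduces the other, as the identity predicts.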



When a metric space is 2-point transitive, there is a construction due to Delsarte for finding upper bounds on the sizes of codes. The construction is a certain relaxation of the distance distribution of a code that reduces the bound to linear programming. The linear constraints come from harmonic analysis. It is easier in the compact case, and it is explained in SPLAG in the Hamming case and the spherical-geometry case, where you get bounds on kissing numbers and other packings on spheres. The Euclidean case of the linear programming bounds was developed later by Henry Cohn and Noam Elkies.



On the construction side, the analogy is sometimes more direct. A sphere packing in $\mathbb{R}^n$ can be both a subset of $\mathbb{Z}^n$ and a union of cosets of $(2\mathbb{Z})^n$. When it is of this form, it comes from a binary code in $(\mathbb{Z}/2)^n$. Sometimes this leads to the best known sphere packing. In fact, one of these cases uses the Best code with $n=10$ (found by Marc Roelant Best). A simpler case is the $D_n$ lattice, which is the best packing when $n=3$, the best lattice packing when $n=4,5$, and conjectured to be the best packing in those two cases. It comes from the parity code.
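As a quick illustration of this construction with the parity code (my own sketch, not part of the answer): $D_n$ consists of the integer vectors with even coordinate sum, and one can enumerate small vectors to read off the squared minimum distance and, for $n=3$, the kissing number of the fcc packing:

```python
from itertools import product

def dn_points(n, radius=2):
    """D_n lattice points in a small box: integer vectors with even
    coordinate sum, i.e. Construction A applied to the parity code."""
    return [v for v in product(range(-radius, radius + 1), repeat=n)
            if sum(v) % 2 == 0]

pts = dn_points(3)
norms = sorted({sum(x * x for x in v) for v in pts if any(v)})
print(norms[0])                                            # 2: minimal vectors like (1, 1, 0)
print(sum(1 for v in pts if sum(x * x for x in v) == 2))   # 12: kissing number of fcc
```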



This transfer of codes extends to another important case, codes over $\mathbb{Z}/k$ in the Lee metric. The Lee metric on $\mathbb{Z}/k$ is just the graph metric of a $k$-gon, i.e., $d(a,b) = |a-b|$ if you choose residues so that the answer is at most $k/2$. The standard Lee metric on $(\mathbb{Z}/k)^n$ is the $\ell^1$ sum of the Lee distances on the factors, but you can view this as an approximation to $\ell^2$ and again lift to the Euclidean case. You can obtain the $E_8$ lattice this way with $k=4$. The case $k=4$ is also special because $\mathbb{Z}/4$ is isometric to $(\mathbb{Z}/2)^2$, and this isometry leads to what are called $\mathbb{Z}/4$-linear binary codes. (The Best code mentioned in the previous paragraph is one of these codes.)
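The isometry $\mathbb{Z}/4 \cong (\mathbb{Z}/2)^2$ is the Gray map, and it is easy to verify exhaustively (a small sketch of my own for illustration):

```python
def lee(a, b, k):
    """Lee distance on Z/k: length of the shorter arc of the k-gon."""
    d = (a - b) % k
    return min(d, k - d)

# The Gray map sends Z/4 to (Z/2)^2 so that Lee distance becomes Hamming distance:
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

print(all(lee(a, b, 4) == hamming(GRAY[a], GRAY[b])
          for a in range(4) for b in range(4)))  # True
```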

big list - What are some applications of other fields to mathematics?

I can think of at least three things that the question might mean, and it would probably help if Steve clarified which ones count for him!



(1) Other fields suggesting new questions for mathematicians to think about, or new conjectures for them to prove. Examples of that sort are ubiquitous, and account for a significant fraction of all of mathematics! (Archimedes, Newton, and Gauss all looked to physics for inspiration; many of the 20th-century greats looked to biology, economics, computer science, etc. Even for those mathematicians who take pride in taking as little inspiration as possible from the physical world, it's arguable how well they succeed at it.)



(2) Other fields helping the process of mathematical research. Computers are an obvious example, but I gather that this sort of application isn't what Steve has in mind.



(3) Other fields leading to new or better proofs, for theorems that mathematicians care about even independently of the other fields. This seems to me like the most interesting interpretation. But it raises an obvious question: if a field is leading to new proofs of important theorems, why shouldn't we call that field mathematics? One way out of this definitional morass is the following: normally, one thinks of mathematics as arranged in a tree, with logic and set theory at the root, "applied" fields like information theory or mathematical physics at the leaves, and everything else (algebra, analysis, geometry, topology) as trunks or branches. Definitions and results from the lower levels get used at the higher levels, but not vice versa. From this perspective, what the question is really asking for is examples of "unexpected inversions," where ideas from higher in the tree (and specifically, from the "applied" leaves) are used to prove theorems lower in the tree.



Such inversions certainly exist, and lots of people probably have favorite examples of them --- so it does seem like great fodder for a "big list" question. At the risk of violating Steve's "no theoretical computer science" rule, here are some of my personal favorites:



(i) Grover's quantum search algorithm immediately implies that Markov's inequality, that



$\max_{x \in [-1,1]} |p'(x)| \leq d^2 \max_{x \in [-1,1]} |p(x)|$



for all degree-$d$ real polynomials $p$, is tight.
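The extremal polynomial here is the Chebyshev polynomial $T_d$, and the tightness is easy to check numerically (an illustrative sketch, not part of the original answer): $\max |T_d| = 1$ on $[-1,1]$ while $|T_d'|$ reaches $d^2$ at the endpoints.

```python
import numpy as np

d = 7
T = np.polynomial.chebyshev.Chebyshev.basis(d)  # the Chebyshev polynomial T_d
dT = T.deriv()

xs = np.linspace(-1.0, 1.0, 20001)
print(np.max(np.abs(T(xs))))    # 1.0
print(np.max(np.abs(dT(xs))))   # 49.0 = d^2, attained at x = ±1
```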



(ii) Kolmogorov complexity is often useful for proving statements that have nothing to do with Turing machines or computability.



(iii) The quantum-mechanical rules for identical bosons immediately imply that $|\operatorname{Per}(U)| \leq 1$ for every unitary matrix $U$.

co.combinatorics - Enumerating (generalized) de Bruijn tori

Given a cyclic word $w$ of length $N$ over a $q$-ary alphabet and $k \in \mathbb{Z}_+$, consider the directed multigraph $G_k(w) = (V,E)$ with $V \subset \{1,\dots,q\}^k$ given by the $k$-lets (i.e., subwords of $k$ symbols) that appear in $w$ (without multiplicity) and $E$ given by the $(k+1)$-lets in $w$, taken with multiplicity. An edge $w_\ell \dots w_{\ell+k}$ connects the vertices $w_\ell \dots w_{\ell+k-1}$ and $w_{\ell+1} \dots w_{\ell+k}$. If $w$ is a de Bruijn sequence, $G_k(w)$ is a de Bruijn graph, and vice versa. So call $G_k(w)$ the generalized de Bruijn graph corresponding to $w$ and $k$. It is not hard to compute the number of words $w'$ having $G_k(w)$ as their generalized de Bruijn graph, using the matrix-tree and BEST theorems.
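The construction of $G_k(w)$ is mechanical; here is a minimal sketch (my own helper, with cyclic words represented as Python strings):

```python
from collections import Counter

def generalized_de_bruijn_graph(w, k):
    """G_k(w) for a cyclic word w given as a string: vertices are the k-lets
    of w (without multiplicity), edges the (k+1)-lets with multiplicity;
    the edge e runs from e[:-1] to e[1:]."""
    n = len(w)
    ww = w + w  # unroll the cycle so windows can wrap around
    vertices = {ww[i:i + k] for i in range(n)}
    edges = Counter(ww[i:i + k + 1] for i in range(n))
    return vertices, edges

V, E = generalized_de_bruijn_graph("0011", 2)  # a binary de Bruijn sequence
print(sorted(V))  # ['00', '01', '10', '11'] — the full binary de Bruijn graph
print(sorted(E))  # ['001', '011', '100', '110'], each with multiplicity 1
```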



In two dimensions, the picture is much less clear. De Bruijn tori are basically periodic rectangular arrays of symbols in which all possible subarrays of a certain size occur with multiplicity 1. There is a structure (a hypergraph?)--the "generalized de Bruijn structure"--corresponding to a generic rectangular array of symbols in a generalization of the sketch above, so by analogy call a rectangular array of symbols over a finite alphabet a generalized de Bruijn torus in this context.



Given one generalized de Bruijn torus, how many others share its multiplicities of rectangular subarrays (equivalently, its generalized de Bruijn structure)?



(Note that even the existence of de Bruijn tori for nonsquare subarrays is uncertain, which is why I'm working in the "generalized" context.)

Tuesday 29 September 2009

cosmology - If the Universe is infinite, why isn't it of infinite density?

If we make the assumption that the Universe is infinite and has an infinite number of hydrogen atoms, then why is it not of infinite density? After all, under Schrödinger's wave equation the probability of an electron being at any given point is non-zero, and any non-zero number multiplied by infinity is itself infinity.



Is the answer (a) I have made some basic error in physics, (b) the Universe is provably not infinite because of this - effectively a version of Olbers' paradox - or (c) the Pauli exclusion principle means that electrons just cannot be anywhere?

Monday 28 September 2009

co.combinatorics - Complete graph invariants?

Obviously, graph invariants are wonderful things, but the usual ones (the Tutte polynomial, the spectrum, whatever) can't always distinguish between nonisomorphic graphs. Actually, I think that even a combination of the two I listed will fail to distinguish between two random trees of the same size with high probability.



Is there a known set of graph invariants that does always distinguish between non-isomorphic graphs? To rule out trivial examples, I'll require that the problem of comparing two such invariants is in P (or at the very least, not obviously equivalent to graph isomorphism) -- so, for instance, "the adjacency matrix" is not a good answer. (Computing the invariants is allowed to be hard, though.)



If this is (as I sort of suspect) in fact open, does anyone have any insight on why it should be hard? Such a set of invariants wouldn't require or violate any widely-believed complexity-theoretic conjectures, and actually there are complexity-theoretic reasons to think that something like it exists (specifically, under derandomization, graph isomorphism is in co-NP). It seems like it shouldn't be all that hard...



Edit: Thorny's comment raises a good point. Yes, there is trivially a complete graph invariant, which is defined by associating a unique integer (or polynomial, or labeled graph...) to every isomorphism class of graphs. Since there are a countable number of finite graphs, we can do this, and we have our invariant.



This is logically correct but not very satisfying; it works for distinguishing between finite groups, say, or between finite hypergraphs or whatever. So it doesn't actually tell us anything at all about graph theory. I'm not sure if I can rigorously define the notion of a "satisfying graph invariant," but here's a start: it has to be natural, in the sense that the computation/definition doesn't rely on arbitrarily choosing an element of a finite set. This disqualifies Thorny's solution, and I think it disqualifies Mariano's, although I could be wrong.

Sunday 27 September 2009

amateur observing - How can I observe the Orionid Meteor Shower?

While this shower is now largely over, the way to best observe meteors is the same for all showers in principle (including coming showers like the Leonids around November 17 and Geminids around December 14).



The names of most showers reference the constellation their radiant (the point in the sky the meteors seem to come from) lies in. In the case of the Orionids, this is the constellation Orion, so look in the general direction of (and around) Orion.



As you suggest, meteors appear unexpectedly and move too fast to be caught with a telescope. The best way to observe them is to lie down comfortably and use the naked eye. The key word is "comfortably": find a comfortable reclining garden chair or lilo and lie down. That's one half of the comfort; the other is to stay warm. Even when watching the Perseids in summer you're lying still under an open sky, and for the fall/winter showers mentioned here you'll really want to bundle up and perhaps take a thermos with a hot drink.



The comfort is important - there may be several minutes between two meteors, and lying down is a good way to relax while watching, which will allow you to stay out longer and see more. It is definitely more relaxing than standing with your neck craned back.



One other tip for "comfort": many people will enjoy this more in the company of other enthusiasts, and if you're one of them you should consider going out with a few other people. It will allow you to discuss the meteors just sighted to pass the time between events, or even to help each other see one (if they're not too fast).

Saturday 26 September 2009

frobenius algebras - Cohomology rings and 2D TQFTs

There is a "folk theorem" (alternatively, a fun and easy exercise) which asserts that a 2D TQFT is the same as a commutative Frobenius algebra. Now, to every compact oriented manifold $X$ we can associate a natural Frobenius algebra, namely the cohomology ring $H^\ast(X)$ with the Poincaré duality pairing. Thus to every compact oriented manifold $X$ we can associate a 2D TQFT.
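For concreteness, here is the smallest nontrivial instance of this Frobenius algebra, worked out for $X = S^2$ (a standard computation, added purely for illustration):

```latex
% The Frobenius algebra of X = S^2: a two-dimensional algebra with one
% degree-2 generator x, counit given by integration over the fundamental class.
\[
  A = H^\ast(S^2;\mathbb{R}) \cong \mathbb{R}[x]/(x^2), \qquad
  \varepsilon(1) = 0, \quad \varepsilon(x) = 1.
\]
% The Poincare duality pairing <a, b> = eps(ab) is nondegenerate:
\[
  \langle 1, x \rangle = \langle x, 1 \rangle = 1, \qquad
  \langle 1, 1 \rangle = \langle x, x \rangle = 0,
\]
% and the associated 2D TQFT assigns to the torus Z(T^2) = dim A = 2.
```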



Is this a coincidence? Is there any reason we might have expected this TQFT to pop up?



When $X$ is a compact symplectic manifold, perhaps the appearance of the Frobenius algebra can be explained by the fact that the quantum cohomology of $X$, which comes from the A-twisted sigma-model with target $X$, becomes the ordinary cohomology of $X$ upon passing to the "large volume limit".



But for a general compact oriented $X$? I don't see how we might interpret the appearance of the Frobenius algebra in some quantum-field-theoretic way. Maybe there is an explanation via Morse homology?

nt.number theory - b^(n-1)=-1 mod n

It's clear that $b = n-1$ with $n$ even gives a solution. But there are many other solutions. Here are the solutions $(b,n)$ not of the form $(2k-1, 2k)$, with $n \leq 200$, from Maple.



L := []: for n from 2 to 200 do 
for b from 1 to n-2 do
if (b^(n-1) mod n) = n-1 then L := [op(L), [b,n]]; fi:
od: od:
L;
[[3, 28], [19, 28], [23, 52], [43, 52], [17, 66], [29, 66], [35, 66],
[41, 66], [19, 70], [59, 70], [27, 76], [31, 76], [31, 112], [47, 112],
[99, 124], [119, 124], [49, 130], [69, 130], [11, 148], [27, 148], [87, 154],
[131, 154], [7, 172], [123, 172], [63, 176], [79, 176], [95, 176], [127, 176],
[23, 186], [29, 186], [77, 186], [89, 186], [29, 190], [59, 190], [69, 190],
[79, 190], [89, 190], [109, 190], [129, 190], [179, 190], [19, 196], [31, 196]]


For example, $3^{27} \equiv -1 \pmod{28}$, so the pair $[3,28]$ is on the list.
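For readers without Maple, the same search is a short list comprehension in Python (my translation of the Maple loop above):

```python
# Pairs (b, n) with b^(n-1) ≡ -1 (mod n), excluding the trivial b = n - 1.
pairs = [(b, n) for n in range(2, 201)
         for b in range(1, n - 1)
         if pow(b, n - 1, n) == n - 1]
print(pairs[:4])  # [(3, 28), (19, 28), (23, 52), (43, 52)]
print(len(pairs))
```

The three-argument `pow` does modular exponentiation directly, so this runs essentially instantly. Note that every $n$ appearing in the output is even, matching the Maple list.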



I can't make sense of this output myself, but maybe someone else can?

homotopy theory - Definition of an E-infinity algebra

In characteristic 0, Kadeishvili has a notion of $C_{\infty}$ algebra which models rational homotopy theory. See the last paragraph of the introduction of his paper arXiv:0811.1655. His point of view is to simply consider $A_{\infty}$ algebras whose operations satisfy a certain property with respect to shuffle maps. So your computer doesn't have to remember any new operations, just check that the old ones are right.



In characteristic $p$, things are probably hopeless.



Added remark: I just want to make clear that this does not give a "trivial proof" that a commutative dga is formal as a commutative dga if the underlying dga is formal in the "non-commutative" sense. The reason is that when you transfer from cochains to cohomology, you are restricted in the kind of morphisms allowed if you are interested in the commutative theory. So, just as in the answers to this question, there is some work to be done if you want results like that (to be completely honest, there is not yet a proof that I completely understand, so I declare myself agnostic).

Friday 25 September 2009

Does the set of open sets in a topological space have a topology itself?

Of course there are many answers to your question. The interesting thing to ask is if there is a "best" or "right" answer. In many respects the "correct" topology for the lattice of open sets is the Scott topology. In case $X$ is locally compact, the Scott topology coincides with the compact-open topology of the continuous function space $C(X,\Sigma)$, where $\Sigma$ is the Sierpinski space (and we identify open sets with their characteristic functions into $\Sigma$).



There are several reasons why the Scott topology is the "right" one. One of them is that the following are equivalent for a space $X$:



  1. $X$ is an exponentiable space in the category of topological spaces ($Y^X$ exists for all $Y$).

  2. The exponential $\Sigma^X$ exists.

  3. The topology of $X$ is a continuous lattice.

  4. The lattice of open sets of $X$ equipped with the Scott topology is the exponential $\Sigma^X$.

I recommend the following paper by Martin Escardó and Reinhold Heckmann in which they explain many things related to topology of the lattice of open sets (and function spaces in general):




M.H. Escardo and R. Heckmann. Topologies on spaces of continuous functions. Topology Proceedings, volume 26, number 2, pp. 545-564, 2001-2002.


orbit - Are trojans in L5 more likely than in L4?

There is a symmetry in the rotating gravitational field, which means that capture of an asteroid to the L4 is just as likely as to the L5 Lagrange point.



In the case of Mars the split is 1:6. A simple binomial model, in which each captured trojan independently ends up at L4 or L5 with equal probability, gives a probability of 0.0625 for a split at least this lopsided (1:6 or 0:7). This doesn't give a reason to suppose there is anything unusual happening, and as noted in the comments, there is no favouring of L5 when other planets are considered.
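That 0.0625 is a one-line binomial computation (a sketch, assuming a fair coin for each of Mars's seven trojans):

```python
from math import comb

# P(at most 1 of the 7 trojans lands at L4) under independent 50/50 capture:
p = sum(comb(7, k) for k in (0, 1)) / 2 ** 7
print(p)  # 0.0625
```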



As noted by UserLTK: "Earth has an L4 asteroid, none in L5. Uranus also has an L4, no L5 and Neptune has more L4 than L5 objects."



The conclusion is that, no, the greater number of L5 Martian trojans is just a random effect.

Thursday 24 September 2009

set theory - Why is it important to have disjoint sets in a union for the union to make sense w.r.t the order types?

This question has been bugging me for quite some time now.



Say we have some $\beta$ smaller than some $\gamma$ and a sequence



$\langle \beta_\epsilon : \epsilon < \operatorname{cf}(\beta) \rangle$ cofinal in $\beta$, and say



we have some sets $A_{n\epsilon}$, and each of these $A_{n\epsilon}$ has order type less than $\gamma_n$.



Now, for all $n \in \omega$, let $B_n = \bigcup_{\epsilon < \gamma} A_{n\epsilon}$, and suppose in the end I can write $\beta$ as the union of all the $B_n$ (but that is not really my problem here).



Why can I deduce that $B_n$ has order type less than $\gamma_{n+1}$ only if all my sets $A_{n\epsilon}$ are pairwise disjoint?



(since we have a union of fewer than $\gamma$ sets, each of which has order type less than $\gamma_n$)



Why can't I still guarantee that $B_n$ will have order type less than $\gamma_{n+1}$ if the $A_{n\epsilon}$ are not disjoint?



I know that I need to take the $A_{n\epsilon}$ to be $[\epsilon, \epsilon+1)$ so that they are disjoint.



But why does everything in the union have to be in order?



I hope I conveyed my question clearly. Thanks in advance for any help.

Can Space-Time Itself Have Energy Qualities Like Momentum?

I'm going to build off of chris's original comment. I haven't had a response from him during the 24 hours since I asked the question, and there doesn't appear to have been any activity from him recently (Note: don't confuse him with another user who goes by Chris), so I might as well expand upon what he said.



Gravitational waves do appear to be what you're looking for. They are emitted by systems with varying quadrupole moments (see https://en.wikipedia.org/wiki/Gravitational_wave and https://en.wikipedia.org/wiki/Quadrupole for more information); commonly cited examples are binary neutron stars. In fact, one such binary, the Hulse-Taylor binary, was the first system discovered to show evidence of gravitational-wave emission.



Gravitational waves carry energy away from the system, at a rate of
$$\frac{dE}{dt}=-\frac{32}{5}\frac{G^4}{c^5}\frac{(m_1m_2)^2(m_1+m_2)}{r^5}$$
where $E$ is energy, $t$ is time, $m_1$ and $m_2$ are the masses of the objects in the system, $r$ is the distance between them, and $G$ and $c$ are the usual constants, the universal gravitational constant and the speed of light. I invite you to do the calculation for a given system, if you please. I can assure you that it's one of the easier calculations in general relativity! This release of energy causes the orbits of the two neutron stars to gradually decay, and it is thought that eventually the two will merge.
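To put a number on it, here is the formula evaluated for a Hulse-Taylor-like binary (a sketch with illustrative round values I picked myself: two 1.4-solar-mass stars separated by $2\times10^9$ m; the real system has an eccentric, shrinking orbit, so this is only an order-of-magnitude estimate):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

def gw_power(m1, m2, r):
    """Energy loss rate -dE/dt (in watts) from the quadrupole formula above."""
    return (32 / 5) * G**4 / c**5 * (m1 * m2) ** 2 * (m1 + m2) / r**5

print(gw_power(1.4 * M_sun, 1.4 * M_sun, 2e9))  # on the order of 1e24 W
```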



The answer boils down to this: Yes, gravitational waves can carry [angular] momentum, just like many other types of waves. They also have frequency, amplitude, wavelength, and speed, just like the "normal" waves we are familiar with.



I hope this helps.

st.statistics - Monte Carlo simulations

Here is a bit of a braindump, ranging from things physicists actually write down to theory that might never be seen by anyone in a lab. Please comment if you want some more specific responses, and I can try to hunt something down.



The large particle physics projects use intensive MC simulations, and these are not computationally quick or easy, taking days or weeks to run (my only direct experience is with SNO, and indirectly CERN because of that, but this must be true for most projects). These often seem very specific (i.e. they model much of the world, including where dirt might come from and so on, not something like 'hard discs' or other abstract systems).



As Steve Huntsman mentions, lattice gauge theory is demanding (though there have been many huge successes!), both in terms of the theory and in terms of the crazy calculations people have succeeded in doing.



Outside of my direct experience, but within recent reading, random surfaces and the 'surface integrals' (which are in some sense analogous to 'path integrals', though I don't know enough physics to know why) are so hard that we are very far from any sort of reasonable success. There are many good survey articles by physicists on the subject (a Google search will find some), but as a mathematician it is worthwhile to remember that these models are still fairly mathematically intractable.



Statistical physics (of the graph-dynamics/spin-system type) is, as far as I know, a mixed bag right now. Until recently, 'all' of it was I think quite hard. Some recent advances (especially coupling from the past) have allowed sampling from the Ising model on a lattice and a number of other formerly intractable problems, while leaving other nearby problems sort of untouched.
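As a toy illustration of the now-'tractable' end of that spectrum (my own sketch, not from the answer): here is a bare-bones single-spin-flip Metropolis sampler for the 2D Ising model. Coupling from the past gives exact samples; plain Metropolis like this gives only approximate ones after a burn-in.

```python
import math
import random

def metropolis_ising(L, beta, steps, seed=0):
    """Single-spin-flip Metropolis dynamics for the 2D Ising model
    on an L x L torus at inverse temperature beta."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        # energy change if we flip spin (i, j): 2 * s_ij * (sum of 4 neighbours)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i][j] *= -1
    return s

lattice = metropolis_ising(L=16, beta=0.6, steps=100000)
m = abs(sum(map(sum, lattice))) / 16 ** 2
print(f"|magnetization| per spin: {m:.2f}")  # tends toward 1 above critical coupling
```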



I don't have any great references for the above, and am not involved in that research, so take it with a grain of salt. One source you should really look at is David Wilson's spectacular archive on MCMC in general. He seems to have a strong interest in physical models: http://dimacs.rutgers.edu/~dbwilson/exact.html/



Another general reference (it is not primarily about physics, but has physics references at the end, and was an intro to the area for me) is http://www.ams.org/bull/2009-46-02/S0273-0979-08-01238-X/S0273-0979-08-01238-X.pdf. In it, he suggests reading "Statistical Mechanics: Algorithms and Computations" by Krauth (2006). I have only looked at it briefly, but it certainly touches upon many modern problems, including those that you mention, and is probably fairly up-to-date.



EDIT: I'll add something that is probably obvious, just in case you are new, since it wasn't obvious to me when I first looked at this subject. The 'hard' examples you mention (like solid-balls-in-boxes) are the ones that are just on the borderline between being tractable math and total messes. The Ising model has now crossed over into 'tractable' land, as have some types of graph-colouring problems. On the other hand, the first thing I mention (use of MC for large particle physics experiments) is often completely intractable, and nobody can turn these sorts of things into math problems - physicists just do something that seems reasonable and run the simulation for a while. There's nothing wrong with that, but you should be aware that the types of 'mathy' hard sampling problems you mention are ones where there is some hope of rigorously showing that your sample is close to correct, and there are many more problems where rigorous and/or sharp analysis is essentially impossible.

fourier analysis - Why do Littlewood-Paley projections behave like iid random variables

There is a quantitative way to express the somewhat vague notion of "almost independence of the Littlewood-Paley projections".



Let $\mathcal F_n$, $n\in\mathbb Z$, be the minimal $\sigma$-algebra generated by the set $\mathcal D_n$ of
dyadic cubes in $\mathbb R^d$
$$\mathcal D_n=\left\{\prod\limits_{k=1}^{d}[m_k2^{-n},(m_k+1)2^{-n})\ \Big|\ (m_1,\dots,m_d)\in\mathbb Z^d\right\}.$$
Then for any locally integrable function $f$ on $\mathbb R^d$, one may define the conditional
expectation $E_n(f)$ with respect to the filtration of $\sigma$-algebras $\{\mathcal F_k \mid k\in\mathbb Z\}$:
$$E_n(f)=\sum\limits_{Q\in \mathcal D_n}\chi_Q \frac{1}{|Q|}\int_Q f(x)\,dx.$$
It is not hard to check that the differences $D_n(f)=E_n(f)-E_{n-1}(f)$, $n\in\mathbb Z$,
define a martingale. This means that the family of Haar functions has the martingale property (and they indeed can be viewed as iid random variables).
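A quick discrete illustration of this martingale structure (my own sketch, in dimension $d=1$ with $f$ sampled on a dyadic grid of $[0,1)$): the conditional expectations are just block averages over dyadic intervals, and the martingale differences at distinct scales come out orthogonal in $L^2$.

```python
import numpy as np

def dyadic_expectation(f, n):
    """E_n(f): replace f by its average on each dyadic interval of length 2^-n.
    Here f is sampled on a uniform grid of 2^N points of [0, 1), with 2^n <= 2^N."""
    block = f.size >> n          # grid points per dyadic interval
    g = f.reshape(-1, block).mean(axis=1)
    return np.repeat(g, block)

rng = np.random.default_rng(0)
f = rng.standard_normal(2 ** 6)
D2 = dyadic_expectation(f, 2) - dyadic_expectation(f, 1)
D3 = dyadic_expectation(f, 3) - dyadic_expectation(f, 2)
# martingale differences at different scales are orthogonal in L^2:
print(np.dot(D2, D3))  # ≈ 0
```

The orthogonality holds because $D_3$ has zero mean on each level-2 interval, on which $D_2$ is constant.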



Now, the Littlewood-Paley projections $\Delta_n$ (and partial sums of Fourier series, in general) cannot be interpreted directly as conditional expectations. However, they do behave almost like the family of Haar functions. Roughly speaking, the families of projections $\{\Delta_k\}_{k\in\mathbb Z}$ and $\{D_j\}_{j\in\mathbb Z}$ are almost biorthogonal.




Theorem. There exists a constant $C$ such that for every $k, j\in\mathbb Z$ the following estimate on the operator norm of $D_k\Delta_j: L^2(\mathbb R^n)\to L^2(\mathbb R^n)$ is valid:
$$\|D_k\Delta_j\|=\|\Delta_jD_k\|\leq C2^{-|j-k|}.$$




This result is relatively recent and is due to Grafakos and Kalton (see Chapter 5 of the book by Grafakos).

Wednesday 23 September 2009

dg.differential geometry - What is a good way to think about a fundamental field on a principal G-bundle?

I find that the notion of fundamental vector field is well defined not only for a principal $G$-bundle but for any $G$-manifold, i.e. a manifold with an action of a Lie group $G$.



About your notational equation, I would say that the fundamental vector fields effectively arise from an action of $\mathfrak{g}$ on $M$. However, some clarifications are needed.



Let $\Psi:M\times G\to M$ be a right action of a Lie group $G$ on a manifold $M$.
Let $\mathfrak{g}$ be the Lie algebra of $G$, viewed as the left invariant vector fields on $G$.



Then there exists a unique map $\zeta^{\Psi}\equiv\zeta: X\in\mathfrak{g} \mapsto \zeta_X\in \mathfrak{X}(M)$ such that $(T\Psi)\circ(0_M+X)=\zeta_X\circ\Psi$, i.e. $\zeta_X$ and $0_M+X$ are $\Psi$-related, for any $X\in\mathfrak{g}$. (Above, $0_M$ denotes the zero vector field on $M$.)
For any $X\in\mathfrak{g}$, the vector field $\zeta_X$ on $M$ is called the fundamental vector field corresponding to $X$ w.r.t. the right action $\Psi$.



The definition of $\zeta$ is well posed precisely because, for any $X\in\mathfrak{g}$, the map $T\Psi\circ(0_M+X)$ is constant on the fibers of $\Psi$; this holds because $\Psi$ is a right action and $X$ is a left invariant vector field.



Obviously the following properties are satisfied:



  • $\zeta_{aX+bY}=a\zeta_X+b\zeta_Y$ and $\zeta_{[X,Y]}=[\zeta_X,\zeta_Y]$, for any $a,b\in\mathbb{R}$ and $X,Y\in\mathfrak{g}$, i.e. $\zeta:\mathfrak{g} \to \mathfrak{X}(M)$ is a Lie algebra homomorphism;

  • $\zeta_X$ is complete and its $t$-time flow is $\Psi^{\exp tX}$, for any $X\in\mathfrak{g}$ and $t\in\mathbb{R}$.

For an abstract Lie algebra $\mathfrak{g}$, an action of $\mathfrak{g}$ on a manifold $M$ is defined to be a Lie algebra homomorphism from $\mathfrak{g}$ to $\mathfrak{X}(M)$.
In this way, for any right action $\Psi$ of a Lie group $G$ on $M$, we have that $\zeta^{\Psi}$ is an action on $M$ of the Lie algebra of $G$.
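A concrete check of the flow property (a toy example of my own, not from the answer): take $G = SO(2)$ acting on $M = \mathbb{R}^2$ by rotations, with $X$ the standard generator; the fundamental vector field is $\zeta_X(x,y) = (-y, x)$, and its time-$t$ flow should be rotation by the angle $t$.

```python
import math

def zeta(p):
    """Fundamental vector field of the rotation action at the point p."""
    x, y = p
    return (-y, x)

def flow(p, t, steps=100000):
    """Integrate zeta with small Euler steps; the time-t flow should
    agree with the group element exp(tX), i.e. rotation by t."""
    x, y = p
    h = t / steps
    for _ in range(steps):
        dx, dy = zeta((x, y))
        x, y = x + h * dx, y + h * dy
    return x, y

t = 0.7
x, y = flow((1.0, 0.0), t)
print(x - math.cos(t), y - math.sin(t))  # both ≈ 0
```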

Tuesday 22 September 2009

matrices - Matrix Factorization Model

I'll give an example from politics. Let's say you have a legislative body, such as the House of Representatives in the U.S. Congress. Over a period of time, the members of the House will vote on many bills and thereby accrue a voting history. Let's encode votes as numbers: a vote for a bill is 1, a vote against a bill is -1, and an abstention (no vote) is 0. Also, let's label the representatives $R_1,\ldots,R_m$, and label the bills $B_1,\ldots,B_n$. We thus have, for each pair $i,j$ of numbers with $0 < i \leq m$, $0 < j \leq n$, a vote $V_{ij}\in \{-1,0,1\}$, namely how congressperson $R_i$ voted on bill $B_j$. This gives us a big $m\times n$ "vote matrix" $V$ whose entries are the votes $V_{ij}$.



Now, it's true that each congressperson has an opinion on every bill, or dually, that each bill appeals to different congresspeople in different amounts (possibly negative). However, that description misses an important aspect of the situation: a congressperson's voting behavior can be approximated using much fewer parameters than a list of all his votes. Also, a bill's tendency to appeal to different people can be approximated using much fewer parameters than a list of all the people who voted for and against it. Indeed, it's not really bills that congresspeople have opinions on; it's issues and policies. On the flip side, a given bill will implement various types of policies and address various issues, and that's really what determines who will like it and how much.



Thus, to describe voting behaviors, what you really need is (1) a list of policies $P_1,\ldots,P_f$ (the latent factors), (2) for each congressperson $R_i$, a degree of preference $S_{ik}$ for each policy $P_k$, and (3) for each bill $B_j$, a value $C_{kj}$ describing to what extent it implements policy $P_k$. Let's continue the convention that positive values for $S_{ik}$ or $C_{kj}$ indicate accordance and negative values indicate opposition. To each congressperson $R_i$, we can assign the vector $S_i = (S_{i1},\ldots,S_{if})$ (which we might call the "policy vector" for that congressperson), and to each bill $B_j$, we can assign the vector $C_j = (C_{1j},\ldots,C_{fj})$ (which we might also call the "policy vector" for that bill).



For what follows, I'm going to use a very simplistic (but somewhat plausible) mathematical model. (Your article uses a more complicated and more realistic model, taking bias into account, for example.) Also, the model makes a lot more sense if congresspeople are asked to state their degree of preference for each bill rather than simply voting "yes" or "no," so that the "votes" $V_{ij}$ take values in $\mathbb{R}$. When deciding how to vote on a bill, a congressperson may consider how well it correlates with her opinions and make a decision based on that. With a lot of vigorous hand waving and wishful thinking, the outcome of this process can be described very simply in terms of policy vectors: the vote $V_{ij}$ is simply the dot product $S_i\cdot C_j = \sum_k S_{ik} C_{kj}$. (I'll leave it as an exercise to show that's not completely ridiculous, even if unlikely to be exactly true.) Another way of saying this is that if $S$ is the matrix with entries $S_{ik}$ and $C$ is the matrix with entries $C_{kj}$, then $V = SC$. In other words, our knowledge about legislative policies as latent factors induces a factorization of the matrix $V$.



One merit of the above approach is that although it's a bit too simple, it leads to well understood mathematics. For it to be useful, the number of policies $f$ should be much smaller than $m$ and $n$, the numbers of congresspeople and bills. In that case, the factorization $V = SC$ means that $V$ has rank $f$, which is small compared to its dimensions. In practice, admitting that the description in terms of policies as latent factors can only be a good approximation, not exact, this means that $V$ is well approximated by a low-rank matrix. Factorizations can be obtained from that observation alone using standard matrix tools like the singular-value decomposition. In particular, the policy vectors can be found even before you have any idea what the "policies" should be. (In other words, you don't have to sit down and make a list of policies you think are important and figure out what the policy vectors must be from that; you can use a standard algorithm which will determine the policies for you. Of course, it won't name the policies, but if you need to, you can compare policy vectors to figure out what real-world policies the algorithmically extracted policies approximate.)
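Here is the whole story in a few lines of NumPy (a synthetic sketch of my own, with made-up dimensions): build a rank-$f$ vote matrix from hidden policy vectors, then recover a rank-$f$ factorization from $V$ alone via the SVD, without ever naming the policies.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, f = 40, 100, 3   # congresspeople, bills, latent policies

S = rng.standard_normal((m, f))   # hidden policy preferences per congressperson
C = rng.standard_normal((f, n))   # hidden policy content per bill
V = S @ C                          # the observed "vote matrix"

U, sing, Vt = np.linalg.svd(V, full_matrices=False)
print(sum(s > 1e-8 for s in sing))     # numerical rank: 3 = f
S_hat = U[:, :f] * sing[:f]            # recovered "policy vectors" for people
C_hat = Vt[:f]                         # recovered "policy vectors" for bills
print(np.allclose(S_hat @ C_hat, V))   # True: an exact rank-f factorization
```

The recovered factors differ from $S$ and $C$ by an invertible change of basis in policy space, which is exactly the "won't name the policies" caveat in the last paragraph.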

Sunday 20 September 2009

Is the Drake Equation an accurate way of finding the probability of life on planets?

The Drake Equation is a formula for guesstimating the number of intelligent extraterrestrial civilizations we might be able to detect, which will most likely be a small subset of the planets in the universe that host lifeforms.



Let's see if we can break this down a bit. From the Wikipedia article:




The Drake equation is:



$N = R_{\ast} \cdot f_p \cdot n_e \cdot f_{\ell} \cdot f_i \cdot f_c \cdot L$



where:



$N$ = the number of civilizations in our galaxy with which radio-communication might be possible (i.e. which are on our current past light cone);



and



$R_{\ast}$ = the average rate of star formation in our galaxy



$f_p$ = the fraction of those stars that have planets



$n_e$ = the average number of planets that can potentially support life per star that has planets



$f_{\ell}$ = the fraction of planets that could support life that actually develop life at some point



$f_i$ = the fraction of planets with life that actually go on to develop intelligent life (civilizations)



$f_c$ = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space



$L$ = the length of time for which such civilizations release detectable signals into space




When Dr. Frank Drake first proposed this formula for estimating potential SETI signals in 1961, very few, if any, of these factors had enough reliable data upon which to form a good estimate. In the intervening decades, astronomers have done much research on the process of star formation. Also in recent decades, a great number of extrasolar planets have been discovered; the massive amount of data returned from the Kepler space telescope is especially helpful when estimating $f_p$. In addition, the developing theory of the Goldilocks zone, the orbital band in which an exoplanet can have water in liquid form, gives us a perspective on the third factor. Thus we are beginning to get some idea of the magnitude of the first three factors.



The last four parameters, however, are entirely speculative. As we know of only one planet which supports life, Earth, and only one intelligent civilization in the universe, our own, we have insufficient data upon which to base any reasonable estimate. The current estimates for each factor are summarized in the Wikipedia article.
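
For what it's worth, the arithmetic of the equation itself is trivial once values are chosen. Here is a sketch with placeholder values (mine, purely illustrative, and not estimates endorsed by the discussion above):

```python
# Placeholder values chosen purely for illustration; swap in your own.
R_star = 2.0       # average stars formed per year in our galaxy
f_p    = 0.9       # fraction of stars with planets
n_e    = 0.5       # potentially habitable planets per star with planets
f_l    = 0.1       # fraction of those that actually develop life
f_i    = 0.01      # fraction of those that develop intelligence
f_c    = 0.1       # fraction of those emitting detectable signals
L      = 10_000.0  # years such signals remain detectable

# The Drake equation is just the product of its factors.
N = R_star * f_p * n_e * f_l * f_i * f_c * L
```

The point of the exercise is not the output (here less than one detectable civilization) but how strongly $N$ swings with the four speculative factors.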

gr.group theory - Criterion for an abelian group to have a commutative endomorphism ring

Amongst the finitely generated abelian groups, those with commutative endomorphism ring are exactly the cyclic groups.



Torsion abelian groups with commutative endomorphism rings are exactly the locally cyclic groups, that is, the subgroups of Q/Z. They were classified in:



  • Szele, T.; Szendrei, J.
    "On abelian groups with commutative endomorphism ring."
    Acta Math. Acad. Sci. Hungar. 2, (1951). 309–324
    MR51835
    DOI:10.1007/BF02020735

This paper also gives more complicated examples of mixed groups with commutative endomorphism ring.



The mixed case was completed in:



  • Schultz, Ph.
    "On a paper of Szele and Szendrei on groups with commutative endomorphism rings."
    Acta Math. Acad. Sci. Hungar. 24 (1973), 59–63.
    MR316598
    DOI:10.1007/BF01894610

This paper indicates the difficulty of any classification of torsion-free abelian groups with commutative endomorphism rings, as Corner has shown that very large torsion-free abelian groups can have commutative endomorphism rings (while the classifications up to now have basically been "only very small ones").

ap.analysis of pdes - Elliptic regularity on bounded domains

Dorian, aren't you mixing things up a little? Surely you can expand any $L^2$ function in a series of eigenfunctions of the elliptic operator, but please notice that this simple fact already requires quite a detailed theory of elliptic operators on bounded domains. In order to prove the existence of eigenfunctions you must be able to solve the equation $Lu=\lambda u$, and if you want to use your expansion for regularity results, you need to study the properties of the eigenfunctions, their growth, etc.



But actually you are right, in a sense. It is indeed possible to prove existence and regularity of solutions in the interior of the domain by using essentially the same methods as on the whole space. However, if you want to control the properties of the solution at the boundary, then this requires new tools. As a minimum, in the lucky situation of a smooth boundary, you can reduce to the case of a half space, but no less than that. If you are not convinced, think of the fact that some results cease to be true if you drop the assumption that the boundary is Lipschitz or satisfies some suitable cone condition.



As you suspect, it is also possible to do frequency space analysis much in the same way as on the whole space, but I would not call this easy. There is a beautiful set of notes, "Lectures on semiclassical analysis," available on the web, by Evans and Zworski; see Theorem 3.17 there (they prove interior Schauder estimates using Littlewood-Paley theory, apparently following a suggestion of H. Smith). I repeat: this is interior regularity; the behaviour at the boundary is substantially more difficult.

Friday 18 September 2009

fa.functional analysis - When is a Banach space a Hilbert space?

In this simple note http://arxiv.org/abs/0907.1813 (to appear in Colloq. Math.), Rossi and I proved a characterization in terms of "inversion of Riesz representation theorem".



Here is the result: let $X$ be a normed space and recall Birkhoff-James orthogonality: $x\in X$ is orthogonal to $y\in X$ iff for all scalars $\lambda$, one has $\|x\|\leq\|x+\lambda y\|$.



Let $H$ be a Hilbert space and $x\mapsto f_x$ be the Riesz representation. Observe that $x\in \mathrm{Ker}(f_x)^\perp$, a condition that can be expressed using Birkhoff-James orthogonality:



Theorem: Let $X$ be a normed (resp. Banach) space and $x\mapsto f_x$ be an isometric mapping from $X$ to $X^*$ such that



1) $f_x(y)=\overline{f_y(x)}$



2) $x\in \mathrm{Ker}(f_x)^\perp$ (in the sense of Birkhoff and James)



Then $X$ is a pre-Hilbert (resp. Hilbert) space and the mapping $x\mapsto f_x$ is the Riesz representation.
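
As a concrete numerical aside (my own illustration, not part of the paper): in a Euclidean space, Birkhoff-James orthogonality reduces to ordinary inner-product orthogonality, which one can check by sampling $\|x+\lambda y\|$ over a grid of scalars:

```python
import numpy as np

def bj_orthogonal(x, y, lams=np.linspace(-10.0, 10.0, 20001)):
    """Numerically test Birkhoff-James orthogonality of x to y:
    ||x|| <= ||x + lam * y|| for every sampled scalar lam."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    vals = np.linalg.norm(x[None, :] + lams[:, None] * y[None, :], axis=1)
    return bool(vals.min() >= np.linalg.norm(x) - 1e-9)

# In a Euclidean space, Birkhoff-James orthogonality agrees with <x, y> = 0:
orth = bj_orthogonal([1.0, 0.0], [0.0, 3.0])      # inner product is 0
not_orth = bj_orthogonal([1.0, 0.0], [1.0, 1.0])  # inner product is 1
```

For a non-Hilbertian norm (say the sup norm) the relation is no longer symmetric, which is one way to see that the theorem's hypotheses are really about recovering an inner product.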

ag.algebraic geometry - difference between equivalence relations on algebraic cycles

I will focus on complex projective varieties.



Codimension one



The situation in codimension one is considerably simpler than in higher codimensions.
Codimension one rational equivalence classes are parametrized by $\mathrm{Pic}(X)= H^1(X,\mathcal{O}_X^{\ast})$ while algebraic equivalence classes are parametrized by the Néron-Severi group of $X$, which can be defined as the image of the Chern class map from $\mathrm{Pic}(X)$ to $H^2(X,\mathbb{Z})$. It follows that in codimension one



  • the group of rational equivalence classes is a countable union of abelian varieties;

  • the groups of algebraic equivalence classes and homological equivalence classes coincide, and are equal to $NS(X)$, a subgroup of $H^2(X,\mathbb{Z})$;

  • the group of numerical equivalence classes is the quotient of $NS(X)$ by its torsion subgroup.

Higher codimension



The higher codimension case, as pointed out by Tony Pantev, is considerably more complicated, and algebraic and homological equivalence no longer coincide.



Concerning rational equivalence, Mumford proved that the Chow group of zero cycles of a surface admitting non-zero holomorphic $2$-forms is infinite dimensional, contradicting a conjecture of Severi. The paper is Mumford, D., "Rational equivalence of $0$-cycles on surfaces," J. Math. Kyoto Univ. 9 (1968).



Warning



The definitions of rational and algebraic equivalence at Wikipedia are not correct.
I will comment below on the algebraic equivalence.



There one can find the following definition.




$Z \sim_{alg} Z'$ if there exists a curve $C$ and a
cycle $V$ on $X \times C$ flat over $C$, such
that $$V \cap \left( X \times\lbrace c\rbrace \right) = Z \quad \text{ and }
\quad V \cap \left( X \times\lbrace d\rbrace \right) = Z' $$
for two points $c$ and $d$ on the
curve.




This is not correct. The correct definition is




$Z \sim_{alg} Z'$ if there exists a curve $C$ and a
cycle $V$ on $X \times C$ flat over $C$, such
that $$V \cap \left( X \times\lbrace c\rbrace \right) - V \cap \left( X \times\lbrace d\rbrace \right) = Z - Z' $$
for two points $c$ and $d$ on the
curve.




To construct an example of two algebraically equivalent divisors which do not satisfy the Wikipedia definition, let $X$ be a projective variety with $H^1(X,\mathcal{O}_X) \neq 0$ and
take a non-trivial line bundle $\mathcal{L}$ over $X$ with zero Chern class.
If $Y = \mathbb{P}( \mathcal{O}_X \oplus \mathcal{L})$ then $Y$ contains two copies $X_0$ and $X_{\infty}$ of $X$ (one for each factor of $\mathcal{O}_X \oplus \mathcal{L}$) which are algebraically equivalent but can't be deformed into each other, because their normal bundles are $\mathcal{L}$ and $\mathcal{L}^{\ast}$. This does not contradict the second definition because, for sufficiently ample divisors $H$, it is clear that $X_0 + H$ can be deformed into $X_{\infty} + H$.

gas giants - What will happen when landing on Jupiter?

Jupiter does not have a "surface"; there is only an arbitrary division between interplanetary space and the point where its atmosphere begins.



The crushing pressure is its atmospheric pressure. The deeper into the atmosphere you go, the greater the column of gas that lies above you. It is the weight of this column of gas that is responsible for the rapid increase in pressure with depth.



The answer to your last question is most definitely addressed in the duplicate question about whether Jupiter is entirely made of gas. There is quite likely to be a liquid phase nearer the centre and there may be a solid core of order 10 times the mass of the Earth. It is not a settled question.



The gas motions you talk about are essentially belts of weather systems in the upper layers of Jupiter's atmosphere. It is all most definitely gas that you can see.

nt.number theory - von Staudt-Clausen over a totally real field

I also can't answer the question, but I'll say some things that could help. One thing von Staudt-Clausen tells you is the denominator of the Bernoulli number $B_k$: it is precisely the product of the primes $p$ for which $p-1\mid k$ (when $p-1\nmid k$, a result of Kummer says that $B_k/k$ is $p$-integral). As Buzzard commented, the Bernoulli numbers should be thought of (at least in this situation) as appearing in special values of $p$-adic L-functions; specifically, for $k$ a positive integer
$$\zeta_p(1-k)=(1-p^{k-1})(-B_k/k),$$
where $\zeta_p$ is the $p$-adic Riemann zeta function (see chapter II of Koblitz's "p-adic Numbers, p-adic Analysis, and Zeta-Functions", for example).
For a totally real field $F$, a generalization of the $p$-adic Riemann zeta function exists, namely the $p$-adic Dedekind zeta function $\zeta_{F,p}$ (as proved independently by Deligne–Ribet (Inv. Math. 59), Cassou-Noguès (Inv. Math. 51), and Barsky (1978)). One link between these and the Leopoldt conjecture is through the $p$-adic analytic class number formula, which is the main theorem of Colmez's "Résidu en $s = 1$ des fonctions zêta $p$-adiques" (Inv. Math. 91):
$$\lim_{s\rightarrow1}(s-1)\zeta_{F,p}(s)=\frac{2^{[F:\mathbf{Q}]}R_p h E_p}{w\sqrt{D}}$$
where $h$ is the class number,
$$E_p=\prod_{\mathfrak{p}\mid p}\left(1-\mathcal{N}(\mathfrak{p})^{-1}\right)$$ is a product of Euler-like factors, $w = 2$ is the number of roots of unity, $D$ is the discriminant and $R_p$ is the interesting part here: the $p$-adic regulator (as Colmez notes, $\sqrt{D}$ and $R_p$ both depend on a choice of sign, but their ratio does not).



Theorem: The Leopoldt conjecture is equivalent to the non-vanishing of the p-adic regulator.



(For this, see, for example, chapter X of Neukirch-Schmidt-Wingberg's "Cohomology of number fields").



A clear consequence of this is that if $\zeta_{F,p}$ does not have a pole at $s = 1$, then the Leopoldt conjecture is false for $(F, p)$. Perhaps an understanding of the denominators of values of $\zeta_{F,p}$ could lead to an understanding of the pole at $s = 1$ of $\zeta_{F,p}$.



Added (2010/04/09): So here's how you can use von Staudt–Clausen to see that the $p$-adic zeta function (of $\mathbf{Q}$) has a pole at $s = 1$. It is clear from your statement of vS–C that for $k\equiv0 \pmod{p-1}$, $B_k\equiv -1/p \pmod{\mathbf{Z}_p}$ (i.e. it is not $p$-integral). Let $k_i=(p-1)p^i$; then the $k_i$ converge $p$-adically to 0, so $\zeta_p(1-k_i)$ approaches $\zeta_p(1)$ (since $\zeta_p(s)$ is $p$-adically continuous, at least for $s\neq1$). By the aforementioned interpolation property of $\zeta_p(1-k)$, we have
$$v_p(\zeta_p(1-k_i))=v_p(B_{k_i}/k_i)=-1-i\rightarrow -\infty,$$
hence $1/\zeta_p(1-k_i)$ approaches 0.
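
The von Staudt-Clausen description of the denominators is easy to check by computer. A small sketch (my own), computing Bernoulli numbers exactly via the standard recurrence and comparing denominators:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0..B_n (convention B_1 = -1/2) via the recurrence
    sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

def primes_upto(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

def vsc_denominator(k):
    """von Staudt-Clausen: product of the primes p with (p - 1) | k."""
    prod = 1
    for p in primes_upto(k + 1):
        if k % (p - 1) == 0:
            prod *= p
    return prod

B = bernoulli(12)  # e.g. B[12] = -691/2730 with 2730 = 2*3*5*7*13
```

In particular, for $k$ divisible by $p-1$ the prime $p$ divides the denominator exactly once, which is the $v_p(B_k) = -1$ fact used in the valuation computation above.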

ag.algebraic geometry - Fibration sequences in étale homotopy theory arising from geometric fibres

Let $R = \mathbb{Z}[\frac{1}{p}]$ for some prime number $p$ and $GL_{n,R}$ be the general linear group scheme over $R$. The bar construction gives a simplicial scheme $BGL_{n,R}$ over the constant simplicial scheme $Spec(R)$. If $q$ is a prime different from $p$ we can pull $BGL_{n,R}$ back along a map $Spec(\bar{\mathbb{F}}_q) \to Spec(R)$ to get $BGL_{n,\bar{\mathbb{F}}_q}$. Here $\bar{\mathbb{F}}_q$ is an algebraic closure of $\mathbb{F}_q$. The simplicial scheme $BGL_{n,\bar{\mathbb{F}}_q}$ has the nice property that if we apply Friedlander's étale topological type functor, defined here, and then $p$-complete, we get something that is equivalent to the $p$-completion tower $\{ (\mathbb{Z}/p)_s BGL_n(\mathbb{C}) \}_s$. (Here $BGL_{n}(\mathbb{C})$ means the singular simplicial set of the classifying space of the Lie group.)



Several articles state that the sequence $$(BGL_{n,\bar{\mathbb{F}}_q})_{ét} \to (BGL_{n,R})_{ét} \to Spec(R)_{ét}$$ becomes a fibration sequence after $p$-completing the $BGL$ terms, but I haven't been able to find any proof or argument supporting this anywhere. Does anyone know of a proof or argument for this?



In the article "Exotic cohomology for $GL_n(\mathbb{Z}[\frac{1}{2}])$" the reader is referred to "Étale homotopy of simplicial schemes," but I have only been able to find a proof of the $p$-adic equivalence I mentioned above, not of the fibration sequence. In "Algebraic and étale K-theory" it is used several times.



I hope this question isn't too narrow for Mathoverflow.



The reason that I ask is that I would like to have similar fibration sequences for other group schemes and I hope they will be fibration sequences for the same reason that the one above is.

Wednesday 16 September 2009

career - Thematic Programs for 2010-2011?

The Hausdorff Research Institute for Mathematics, Bonn, Germany has:



Future Trimester Programs



* Geometry and dynamics of Teichmüller space
May - August 2010
* On the Interaction of Representation Theory with Geometry and Combinatorics
January - April 2011
* Analysis, and Numerics for High Dimensional Problems
May - August 2011


Future Junior Programs



* Algebra and Number Theory
January - April 2010
* Stochastics
September - December 2010


Here's a link to their webpage.

nt.number theory - Equivalence of Finiteness of Class number to Property of Ideals of Algebraic Integers

In the statement of the question you meant "every ideal is finitely generated".



In any event, these two types of finiteness (of ideals as modules and of the ideal class group) are not at all equivalent. For example, you can replace integral closures of $\mathbf{Z}$ in finite extensions of the rationals with integral closures of $F[x]$ in finite extensions of $F(x)$, where $F$ is a field, and thus speak about ideal class groups
in this other setting. There the ideal class groups can be infinite, even though the ideals are still finitely generated. To take a concrete example, consider the integral closure of $\mathbf{C}[x]$ in $\mathbf{C}(x,y)$ where $y^2 = x^3 - x$. That integral closure is a Dedekind domain and its class group is infinite; in fact the ideal class group is isomorphic to the group of complex points on the elliptic curve $y^2 = x^3 - x$, which as a group looks like $\mathbf{C}/\mathbf{Z}[i]$.



Finiteness of class groups is somewhat special (not unique to it, but still special) to the setting of rings of algebraic integers and not valid in general Dedekind domains.
In a general Dedekind domain, any ideal has at most 2 generators and this is unrelated to the size of the class group.

radiation - What happens to the energy from a GRB?

Nothing special happens to the energy: it is simply in transit as radiation, namely the gamma rays that started in the GRB.



If you remember that all that energy is in the form of radiation, it obeys the same laws as ordinary light: the farther, the dimmer (the inverse-square law).



So just as stars appear dimmer the farther they are from you, the same happens to GRBs. Radiation from a star expands indefinitely (becoming dimmer and dimmer) until it is absorbed by clouds or simply becomes dimmer than the background.



The same happens to gamma rays from GRBs. It is just that since the rays are more energetic, they are less easily absorbed, and since GRBs themselves are more energetic, the distance at which they become dimmer than the background is larger.

Tuesday 15 September 2009

impact - Would a killer asteroid shattered into thousands of pieces produce the same devastation?

TL;DR version: Too big and way, way too late.



The dispersal can't be done, even at the lower end of that 3-20 km scale. Holsapple claims 5 kilojoules/kg are needed to disrupt and disperse a solid 1 km asteroid, with the energy scaling as radius$^{1.65}$. Disrupting and dispersing a solid 3 km diameter asteroid with a density of 3 g/cc would require deeply burying a weapon with roughly twice the yield of the Tsar Bomba. The energy needed to disrupt and disperse a 20 km diameter solid asteroid is beyond anything humanity can achieve.
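
A rough sanity check of those figures (my own back-of-envelope reading, not Holsapple's actual model): take 5 kJ/kg at 1 km and let the specific energy grow as size$^{0.65}$, one way to read the quoted scaling that reproduces the "twice the Tsar Bomba" figure.

```python
import math

# Assumptions (mine): 5 kJ/kg specific dispersal energy for a 1 km body,
# specific energy growing as (diameter / 1 km)^0.65, density 3 g/cc.
def dispersal_energy_joules(diameter_m, density_kg_m3=3000.0):
    radius_m = diameter_m / 2.0
    mass_kg = (4.0 / 3.0) * math.pi * radius_m ** 3 * density_kg_m3
    specific_j_per_kg = 5e3 * (diameter_m / 1000.0) ** 0.65
    return specific_j_per_kg * mass_kg

TSAR_BOMBA_J = 2.1e17  # ~50 megatons of TNT

bombs_3km = dispersal_energy_joules(3000.0) / TSAR_BOMBA_J    # about 2
bombs_20km = dispersal_energy_joules(20000.0) / TSAR_BOMBA_J  # thousands
```

Whatever the exact exponent, the mass term alone grows as diameter cubed, which is why the 20 km case lands thousands of Tsar Bomba yields beyond reach.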



This disruption and dispersal will not be uniform. It will instead create a few big chunks, perhaps 1/3 the diameter of the original body, a larger number of intermediate sized chunks, and thousands upon thousands of little chunks. Those big chunks are still civilization killers. The intermediate-sized chunks will make Tunguska look small, and the thousands upon thousands of little chunks -- we recently saw what a little chunk can do, over Chelyabinsk.



What if it's a rubble pile or a comet? While they're not quite as dense, they need much, much more energy to disrupt and disperse. Rubble piles and comets are very good at absorbing impact energy. Disrupting and dispersing a one km diameter comet is beyond the scope of human technology.



The time scale, two days, is extremely short. A decade of advance notice is considered very short notice when it comes to diverting a 1 km impactor. Even larger ones require even more advance notice. A century's advance notice might be enough time to deal with a 20 km diameter object.





Holsapple, "About deflecting asteroids and comets," Mitigation of Hazardous Comets and Asteroids: Vol. 1, Cambridge Univ, 2004.

Monday 14 September 2009

What is a Seyfert galaxy?

Seyfert galaxies differ from other active galaxies (most notably quasars) in that their galactic nuclei are lower in luminosity compared to the rest of the galaxy. Quasars have nuclei that easily outshine the rest of the galaxy. Seyfert galaxies, on the other hand, host active nuclei that do not outshine the rest of the galaxy by the same amount.



Interestingly enough, in his original analysis, Seyfert focused on emission lines, noting that in this class of galaxies, there are strong high-ionization emission lines present in certain parts of the spectrum. This alternate definition is also used today. LINERS (Low Ionization Nuclear Emission line RegionS) are very similar to Seyfert galaxies if this definition is used; they are differentiated because LINERS also contain low-ionization emission lines.




general relativity - Recommendation for introductory cosmology text

This book by Andrew Liddle is fantastic for explaining the basic concepts and giving the reader a good mental picture of the ideas, but it may be a little more basic than what you're looking for (I still keep it as a good reference when reading more advanced texts).



If you're looking for something more advanced, Peter Coles & Francesco Lucchin's guide may be of interest. Unfortunately some of the discussions are already slightly dated (there is a large section on non-gaussianity, if I remember rightly, whose proponents have been all but silenced by the latest Planck results). This is another good resource I went to when completing my graduate cosmology course, but I couldn't recommend it on its own; many of the derivations are piecemeal at best, so ideally read it in conjunction with a more thorough text.



Not sure if you came across this text during your undergrad investigations, but it is basically the bible of galactic physics, along with this. It goes into some of the touch points of galaxy physics and cosmology, which may prove useful for your understanding.



Finally this may be of interest - I can't vouch for it but it popped up when looking for links to the books above, has good reviews and seems to be at the level you're looking for as well as discussing some topics you may already have exposure to.



For all the above I'd recommend getting them out from a library first and working thoroughly through a few chapters to get a feel for the text before splashing out.

Sunday 13 September 2009

ct.category theory - What is the name for the following categorical property?

This isn't quite the question you asked, but does address the notion of ''bijective'' morphisms in categories, so I hope you'll forgive this digression.



The examples you've mentioned - Set, Gp, Top - are all concrete, meaning they are equipped with a forgetful functor U to Set. We say a morphism f in a concrete category C is injective if its underlying function Uf is injective, i.e., monic in the category Set. Dually, f is surjective if Uf is surjective. One usually thinks of concrete categories as "sets with structure", so these definitions coincide with the common use of such terminology: e.g., we call a map of spaces surjective when the underlying map of sets is.



So we have four adjectives to use for arrows in C: monic, epic, injective, surjective. It's an easy exercise to see that all injections are monic and all surjections are epic. The converse is not true in general, but finding examples of monos that aren't injective and epis that aren't surjective can be tricky, and here's why.



Often, particularly in ''algebraic'' examples, the functor U : C → Set has a left adjoint F. When this is the case, it is an easy exercise to see that every mono must be injective. Dually, if U has a right adjoint, then every epi is surjective. So for example, the forgetful functor U : Top → Set has both adjoints, and hence for spaces the notions injective/surjective and monic/epic coincide, at which point Tom's post answers your question.



Here are some examples of concrete categories where these concepts differ, all of which can be found in Francis Borceux's Handbook of Categorical Algebra (I think). In the category of divisible abelian groups, the quotient map $\mathbb{Q} \rightarrow \mathbb{Q}/\mathbb{Z}$ is monic, though it's clearly not injective. In the category of monoids, the inclusion $\mathbb{N} \rightarrow \mathbb{Z}$ is epic, though not surjective. In the category of Hausdorff spaces, the epis are continuous functions with dense image, so also need not be surjective.

the moon - Why is sodium such a common ion for in ion tails?

From http://science.nasa.gov/science-news/science-at-nasa/2000/ast26oct_1/:




"When a Leonid meteoroid hits the Moon it vaporizes some dust and
rock," explains Jody Wilson of the Boston University Imaging Science
Team. "Some of those vapors will contain sodium (a constituent of Moon
rocks) which does a good job scattering sunlight. If any of the impact
vapors drift over the lunar limb, we may be able to see them by means
of resonant scattering. They will glow like a faint low-pressure
sodium street lamp."




Also, relative to other elements, sodium has a low ionization energy.

Saturday 12 September 2009

pr.probability - Monte Carlo method and possible applications to computer poker?

Monte Carlo methods are appropriate for analyzing some systems involving chance, not incomplete information. Monte Carlo methods tell you nothing about how to model a poker strategy.



For general games of incomplete information, you should look up game theory (and not combinatorial game theory), a branch of mathematics which applies well to games of incomplete information such as poker. Some of the earliest work on game theory involved the analysis of model poker games. A common misconception is that bluffing is not mathematical, but this is simply wrong. A book which seems to have been written for mathematicians is "The Mathematics of Poker" by Bill Chen and Jerrod Ankenman. For example, they study many model poker games where players are dealt a uniformly distributed number on [0,1] with restricted betting options, as did Borel and von Neumann.
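
To see how bluffing falls out of the mathematics, here is a toy half-street game of my own construction (in the spirit of, but not taken from, the model games mentioned above), solved numerically by a maximin search:

```python
import numpy as np

# Toy game: pot = 1, bet = 1. Player 1 holds the nuts or a busted draw
# with equal probability, always bets the nuts, and bluffs a busted hand
# with probability b. Player 2 calls a bet with probability c. Payoffs
# are Player 1's winnings, counting the pot as already on the table.
def ev_p1(b, c):
    nuts = 0.5 * (c * 2.0 + (1.0 - c) * 1.0)         # value bet: win 2 if called
    bust = 0.5 * b * (c * (-1.0) + (1.0 - c) * 1.0)  # bluff: lose 1 if called
    return nuts + bust                               # checking a bust wins 0

grid = np.linspace(0.0, 1.0, 101)

# Player 1 plays maximin: choose b to maximize the worst case over c.
worst_case = np.array([min(ev_p1(b, c) for c in grid) for b in grid])
b_star = float(grid[int(np.argmax(worst_case))])
game_value = float(worst_case.max())
```

The equilibrium bluffs half of the busted hands, so one bet in three is a bluff: exactly the ratio bet/(pot + 2·bet) = 1/3 that leaves the caller indifferent, the standard pot-odds logic formalized in the game-theory literature.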



Polaris plays one form of poker, 2-player limit hold'em. This is not the form of poker you see on TV, which is usually multiplayer No Limit hold'em. The 2-player game with fixed bet sizes is still too large combinatorially to solve completely, but half-size problems can be solved (preflop games, and postflop games), and some of the research has been based on trying to glue these half-solutions together. The result, after much effort, has been strong heads-up limit hold'em programs like Polaris which crush casual players, and are only behind the best human players. However, these techniques do not extend easily to No Limit Hold'em, or to multiplayer versions of the game.



Other variants such as tournaments with low blinds or different poker games such as Razz and draw poker (which is rarely played now) are more susceptible to complete or numerical solutions. Here is an approximate Nash equilibrium calculator for single table tournaments when players are restricted to raising all-in or folding, and at most 3 players can enter the pot, which is a reasonable approximation to a commonly played variant. In practice, exploitive adjustments are important as well.



If you want to understand the current state of poker AI, then I recommend starting by exploring the web page of the Computer Poker Research Group (University of Alberta) which contains some history and research articles.

Spectrum of the sum of generators for irrational rotation algebra

Perhaps it is helpful for you to know that you can also find papers on $u + v + u^{\dagger} + v^{\dagger}$ by looking for "Harper equation", "discrete Mathieu equation" or "Hofstadter butterfly".



Here's an example of the butterfly. Hofstadter found the (rough) structure of the butterfly in 1976 by looking at a model for Bloch electrons (i.e. electrons in a periodic structure) in a magnetic field. The irrationality $\theta$ represents essentially the magnetic flux through a unit cell of the lattice.

(For rational $\theta = p/q$ there is a translation symmetry, and one finds $q$ Bloch bands, which touch at $E=0$ for even $q$.)



I spent a part of my PhD thesis (no math, but renormalization from a more physical/heuristic point of view) on the multifractal properties of the spectrum for irrational values, and gave some estimates of the minimal and maximal multifractal dimensions for quadratic irrationalities.
If you're interested, here's a paper of mine: http://iopscience.iop.org/0305-4470/30/1/009.

What does the Milky Way look like above 66° North and below 66° South?

At midnight, right around this time of year, the Milky Way will be at the zenith. Here is an XEphem rendering for northern Finland (65th parallel) for yesterday at midnight (the brown outline marks the Milky Way):



Sky map for the 65th latitude, including the Milky Way



You can also see this in photographs by the Finnish photographer Tommy Eliassen. He has many examples on his website. I won't put any in here, because I guess they are copyrighted and not free to use.



This is an animated 24-hour version of the above sky map, which shows the rotation of the Milky Way around the zenith:



Animated sky map including the Milky Way, 24 hours.

space - What happened before the big bang?

The answer to this question turns out to be "yes". The big bang is a sudden rapid expansion of space, so yes, there was space before the big bang. To understand more, check this popular-science video by MinutePhysics, which corrects some common misconceptions about the big bang (the underlying model being the Friedmann-Lemaître-Robertson-Walker metric).

Friday 11 September 2009

amateur observing - Is the Milky Way Visible from Nebraska?

Well, because the axis of the rotation of the Earth is not the same as the axis of rotation of the disk of the Milky Way (and also because we're transforming a 2-dimensional spherical map into a 2-dimensional cartesian map), the path of the disk of the Milky Way galaxy looks something like this:



milkywaydisk



So, there is actually a wide range of declinations at which the Milky Way can be seen. The range of declination you can see depends on your latitude (for a review of RA and declination, the coordinates used in the celestial coordinate system, see this post). For example, here in Philadelphia (just about $+40^{\circ}$ latitude), I'd be able to see from $-50^{\circ}$ to $+90^{\circ}$ in declination. For Nebraska, find the latitude of your location. To calculate the lower limit, add your latitude (which will be positive since you're in the northern hemisphere) to $-90^{\circ}$ (mine was $+40^{\circ}$, so: $-90^{\circ} + 40^{\circ} = -50^{\circ}$). To find the upper limit in declination, it's even easier. Since you're in the northern hemisphere, you can actually see the north celestial pole. This means that the upper limit is simply the maximum it can possibly be, which is $+90^{\circ}$. The larger your latitude, the more circumpolar your night sky gets.
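
The rule above can be wrapped in a two-line helper (a sketch of my own; northern hemisphere only, ignoring refraction and horizon obstructions):

```python
def visible_declination_range(latitude_deg):
    """Declination range (degrees) ever visible above the horizon for a
    northern-hemisphere observer; ignores refraction and terrain."""
    if not 0.0 <= latitude_deg <= 90.0:
        raise ValueError("this sketch handles northern latitudes only")
    return (latitude_deg - 90.0, 90.0)

# Philadelphia, about +40 degrees latitude, as in the text:
low, high = visible_declination_range(40.0)  # (-50.0, 90.0)
```

Plugging in a Nebraska latitude (roughly +40 to +43 degrees) gives almost exactly the same window, which is why the answer carries over directly.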



The good news is that you should definitely be able to see it in Nebraska. The only question is during what season you will see it at its best. Take a look at the constellations which contain parts of the Milky Way and find the season you can see them in - now you know where to look.



The last thing I want to add is that you need nice dark conditions. Local weather conditions or sources of light pollution may very easily hide the Milky Way. From Philadelphia, even on perfectly clear nights we have no chance of seeing it. If you've got a city on one of your horizons, try to plan around that - either go to a darker location or try looking opposite in the sky from any sources of light pollution.

Thursday 10 September 2009

set theory - [Points in space] Pairing function for reals (Set cardinality problem)

You might want to look into space-filling curves, which were first described by Peano and Hilbert in the late 1800s. These are continuous surjections from $[0,1]$ onto $[0,1]^2$ (and higher powers), but they are not bijections. However, they are visualizable to a certain extent. A quick Google search gives a lot of hits, in particular this one at Cut The Knot, which has an illustrative Java applet.



As for the existence of a bijection, you can derive it from the fact that $\aleph_0\cdot2 = \aleph_0$ and the usual exponent rules:
$$(2^{\aleph_0})^2 = 2^{\aleph_0\cdot2} = 2^{\aleph_0}$$
It is also easy to write an explicit bijection between Cantor space $\{0,1\}^{\mathbb{N}}$ (the space of infinite binary sequences) and its square by splitting the even and odd coordinates. This, together with a bijection between $\mathbb{R}$ and $\{0,1\}^{\mathbb{N}}$, gives what you want. Note that it is this last bijection which is harder to visualize. The reason is that $\mathbb{R}$ is connected while $\{0,1\}^{\mathbb{N}}$ is totally disconnected (with the product topology).
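
The even/odd splitting is easy to make explicit. A sketch on finite truncations (which stand in for the infinite binary sequences):

```python
def pair(seq_a, seq_b):
    """Interleave two binary sequences: even slots from seq_a, odd from seq_b."""
    out = []
    for bit_a, bit_b in zip(seq_a, seq_b):
        out.extend((bit_a, bit_b))
    return out

def unpair(seq):
    """Split a binary sequence back into its even- and odd-indexed parts."""
    return list(seq[0::2]), list(seq[1::2])
```

Since `unpair` undoes `pair` exactly, the two maps witness the bijection between Cantor space and its square; no continuity is lost, which is precisely what fails if one tries the same trick directly on decimal expansions of reals.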

Question about "wide" random matrices

Let $A \in \mathbb{R}^{m \times n}$ be a random matrix with i.i.d. entries (the distribution is not important), where $m < n$ (i.e. $A$ is a "wide" matrix). I would like a lower bound on
$$
\phi(A) \triangleq \min_x \frac{\lVert Ax \rVert}{\lVert x \rVert}
$$
that holds with high probability (apologies if the notation $\phi(A)$ conflicts with any established usage).



When $m \geq n$, evidently $\phi(A) = \sigma_{\min}(A)$, the least singular value of $A$ (although I am not certain why this is true). Of course the distribution of the least singular value of a random matrix has been well studied.



But when $m < n$, it seems that $\phi(A) \neq \sigma_{\min}(A)$ in general. For example, if $m = 1$ and $n > 1$, then $\phi(A) = 0$ (just choose $x$ to be orthogonal to the vector $A$), but $\sigma_{\min}(A)$ is the Euclidean norm of the vector $A$, which usually will not be $0$.
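In fact $\phi(A) = 0$ for any $m < n$, not just $m = 1$, since a wide matrix always has a nontrivial null space. A quick numerical check (a sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                        # a "wide" matrix: m < n
A = rng.standard_normal((m, n))

# rank(A) <= m < n, so A has a nontrivial null space; the last row of Vt
# from a full SVD is a unit vector x in that null space, so that
# ||Ax|| / ||x|| = 0 and hence phi(A) = 0.
_, s, Vt = np.linalg.svd(A)
x = Vt[-1]
print(np.linalg.norm(A @ x))       # effectively zero (floating-point noise)
```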

Wednesday 9 September 2009

gr.group theory - Schur Multipliers

The group $H^2(G,\mathbb{C}^\times)$ plays a rôle in orbifold conformal field theory and is usually known as the discrete torsion group. In fact, in this context one actually needs the explicit cocycle, and for the case of a finite abelian group it is very easy to compute explicitly.



Let $\varepsilon: G \times G \to \mathbb{C}^\times$ be the cocycle. Without loss of generality one can normalise it so that
$$\varepsilon(0,g)=\varepsilon(g,0) = 1$$
for all $g \in G$. With this normalisation the cocycle conditions become, in addition, the following:
$$\varepsilon(g,g)=1 \quad \varepsilon(g,g')= \varepsilon(g',g)^{-1}$$
and
$$\varepsilon(g_1+g_2,g) = \varepsilon(g_1,g)\varepsilon(g_2,g)$$
from which it follows that if $G$ has order $N$, then for all $g,g' \in G$,
$$\varepsilon(g,g')^N = 1$$



Let $G = \mathbb{Z}/N_1 \times \cdots \times \mathbb{Z}/N_k$ be a finite abelian group and let $\alpha_i$ be a generator of $\mathbb{Z}/N_i$, so that we can write any element of $G$ as a sum $\sum_i n_i \alpha_i$ where $n_i = 0,1,\ldots,N_i-1$.



Then one finds that all cocycles are given, in terms of integers $B_{ij} = -B_{ji}$ taking the possible values $0,1,\ldots,\gcd(N_i,N_j)-1$, by the formula
$$\varepsilon\Big(\sum_i n_i\alpha_i,\sum_j m_j\alpha_j\Big) = \exp 2\pi\sqrt{-1}\sum_{i,j} \frac{B_{ij} n_im_j}{\gcd(N_i,N_j)}$$



It is the bilinear form $B_{ij}/\gcd(N_i,N_j)$ which is called the discrete torsion. It should be emphasised that "torsion" here is by analogy with the torsion of a connection in differential geometry, not with torsion as in group theory.



If you google "discrete torsion" and "orbifold" you will find suitable references, such as this paper of Vafa and Witten.
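The cocycle formula above is easy to evaluate numerically. A minimal sketch (function name mine) that also spot-checks the normalisation and antisymmetry conditions for $G = \mathbb{Z}/2 \times \mathbb{Z}/2$:

```python
import cmath
from math import gcd

def cocycle(n, m, B, N):
    """Evaluate epsilon(sum_i n_i alpha_i, sum_j m_j alpha_j) for
    G = Z/N_1 x ... x Z/N_k with an antisymmetric integer matrix B."""
    k = len(N)
    phase = sum(B[i][j] * n[i] * m[j] / gcd(N[i], N[j])
                for i in range(k) for j in range(k))
    return cmath.exp(2j * cmath.pi * phase)

# G = Z/2 x Z/2 with B_{01} = 1 = -B_{10}: the nontrivial discrete torsion.
N = [2, 2]
B = [[0, 1], [-1, 0]]
assert abs(cocycle((0, 0), (1, 1), B, N) - 1) < 1e-12       # normalised at 0
assert abs(cocycle((1, 0), (0, 1), B, N) + 1) < 1e-12       # phase is -1
# epsilon(g, g') * epsilon(g', g) = 1  (antisymmetry condition):
assert abs(cocycle((1, 0), (0, 1), B, N)
           * cocycle((0, 1), (1, 0), B, N) - 1) < 1e-12
```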

cv.complex variables - Approximately holomorphic functions

I've got some thoughts, but they should be treated somewhat as speculation. Harald above brought up the notion of quasiconformality. It would not surprise me if the "approximately holomorphic" functions you described were quasiconformal -- the quasiconformality condition is a very soft condition. I would look at the discussions of it in Hubbard's book on Teichmuller theory and Ahlfors's book on quasiconformal mappings and see if you can prove that it holds.



If it does hold, then I'm pretty certain that the quasiconformality constant would be $1$. If it is, then you are in luck -- a famous theorem of Weyl says that quasiconformal mappings with quasiconformality constant $1$ are actually holomorphic (well, maybe you wouldn't call it luck, as you then wouldn't actually have a generalization).



This sequence of speculations fits into another important intuition about holomorphic functions, namely that they end up being more differentiable than you might guess a priori. For instance, if $f$ has partial derivatives with respect to $x$ and $y$ and the Cauchy-Riemann conditions hold, then $f$ is automatically infinitely differentiable. You don't even have to assume that the partial derivatives of $f$ are continuous or that $f$ is actually differentiable. The theorem of Weyl I mentioned above is also in this vein, as it says that you can even assume that the partial derivatives of $f$ only exist in the weak sense.

gn.general topology - Regular spaces that are not completely regular

These examples seem to be very difficult to construct. The problem is that any local compactness or uniformity will automatically boost your space to a Tychonoff space, and Tychonoff spaces are closed under passing to subspaces and products. Consequently, there doesn't seem to be a "machine" for producing these kinds of spaces.



The idea of all the counterexamples $X$ is to write down enough open sets of $X$ to make it clear that points can be separated from closed subsets, but to somehow rig things so that any continuous real-valued function on $X$ identifies two distinct points of the space.



The example in Munkres's textbook that Elencwajg mentions is a pretty straightforward one (relatively speaking); it's the same in spirit as Raha's example, which is the easiest I've found. Here it is:



For every even integer $n$, set $T_n:=\{n\}\times(-1,1)$, and let $X_1=\bigcup_{n\text{ even}}T_n$. Now let $(t_k)_{k\geq 1}$ be an increasing sequence of positive real numbers converging to $1$.



For every odd integer $n$, set $$T_n:=\bigcup_{k\geq 1}\{(x,y)\in\mathbf{R}^2 \mid (x-n)^2+y^2=t_k^2\}$$ and let $X_2=\bigcup_{n\text{ odd}}T_n$. Now let $$X=\{a,b\}\cup\bigcup_{n\in\mathbf{Z}}T_n$$



Topologize $X$ so that:



  1. every point of $X_2$ except the points $(n,t_k)$ is isolated;

  2. a neighborhood of $(n,t_k)$ consists of all but finitely many elements of $\{(x,y)\in\mathbf{R}^2 \mid (x-n)^2+y^2=t_k^2\}$;

  3. a neighborhood of a point $(n,y)\in X_1$ consists of all but a finite number of points of $\{(z,y) \mid n-1<z<n+1\}\cap(T_{n-1}\cup T_{n+1})$;

  4. a neighborhood of $a$ is a set $U_c$ containing $a$ and all points of $X_1cup X_2$ with $x$-coordinate greater than a number $c$;

  5. a neighborhood of $b$ is a set $V_d$ containing $b$ and all points of $X_1cup X_2$ with $x$-coordinate less than a number $d$.

This is a space that is $T_3$, but every continuous map $f:X\to\mathbf{R}$ has the property that $f(a)=f(b)$, so it is not $T_{3\frac{1}{2}}$.

Tuesday 8 September 2009

at.algebraic topology - The (n+1)-st cohomology of K(Z/p,n).

I was looking through my notes for a homotopy theory course and found the following mysterious statement ($K$ is of course the Eilenberg-MacLane space):



$$H^{n+1}(K(\mathbb{Z}_p,n);\mathbb{Z}_p) \cong \mathbb{Z}_p.$$



(This would be obvious if $n+1$ were replaced with $n$. This is supposed to imply that the natural transformations $H^n(X; \mathbb{Z}_p)\to H^{n+1}(X; \mathbb{Z}_p)$ are all multiples of the Bockstein homomorphism.)



I'm at a loss trying to understand why. Spectral sequences haven't been covered yet, so there should be some simple reason. Also, is there a way to see the Bockstein in all this?



Thank you!

observation - How would we detect a planet behind the Sun?

Let's assume hypothetically that Earth has a twin planet on the opposite side of Earth's orbit. Its orbital period would be exactly the same as Earth's and it would always be behind the Sun so directly observing it from Earth would not be possible.



How would we detect the planet's existence short of sending out a spacecraft that could look "behind" the sun and observe it visually? Have we confirmed that there is no such planet?

Monday 7 September 2009

applications - Given a function f(t): t -> R^n, can 2D, or nD DFTs be used on f(t) to perform frequency analysis?

Frequency analysis is often performed on wave forms (1D DFT), and images (2D DFT), where the function in question often takes the form:



$f(t): \Re \mapsto \Re$



$f(x,y): \Re^2 \mapsto \Re$



$f(x_1, x_2, \ldots, x_n): \Re^n \mapsto \Re$



However, note that in all 3 cases 'f' maps to a single real value. If, however, f takes the form:



$f(t): \Re \mapsto \Re^n$



... it isn't clear to me whether the Fourier transform can be used to perform any kind of frequency analysis that would provide any information across dimensions.



If the dimensions are spatially correlated, e.g., the sample $f_1(t_0)$ is physically adjacent to $f_2(t_0)$, and circularly adjacent to $f_n(t_0)$, then intuitively it would make sense to use the 2D DFT to perform the desired analysis. This interpretation, in essence, transforms $f$ into the form $f(t,x): \Re^2 \mapsto \Re$.



However, if no such relationship can be imposed on the set $\lbrace f_i\rbrace$, does there exist an analog of the FFT/DFT for this sort of problem?



To put this another way: if $f(t): \Re \mapsto \Re^n$, and $f(t)$ is transformed in a similar fashion -- e.g., $f(t,i): \Re^2 \mapsto \Re$, where $i$ indexes into the dimensions of $f(t)$ -- is there a generalized approach to Fourier analysis that can make use of the index variable without assuming that $i=1$ and $i=2$ have any spatial relationship?



A vector-valued function can be used in the expression for computing a Fourier transform, but unfortunately that results in computing the Fourier transform of each component of the vector-valued function without making use of information available in the other dimensions of the range. In other words, if $f(t): \Re \mapsto \Re^3$, then $F\lbrace f(t)\rbrace = (F\lbrace f_1(t)\rbrace, F\lbrace f_2(t)\rbrace, F\lbrace f_3(t)\rbrace)$. The question isn't whether this is possible, but whether more can be done than just this level of analysis.
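The component-wise baseline described above can be sketched with numpy (signal and frequencies are my illustrative choices):

```python
import numpy as np

N = 256
t = np.arange(N)
# A vector-valued signal f(t): R -> R^3, one sinusoid per component
# (frequencies 3, 7, 11 cycles per window).
f = np.stack([np.sin(2 * np.pi * k * t / N) for k in (3, 7, 11)], axis=-1)

# Transforming along axis 0 handles each column independently: this is
# exactly the component-wise (F{f_1}, F{f_2}, F{f_3}) described above,
# using no cross-component information.
F = np.fft.fft(f, axis=0)
print(F.shape)                                                     # (256, 3)
print([int(np.argmax(np.abs(F[:N // 2, i]))) for i in range(3)])   # [3, 7, 11]
```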



A somewhat recent paper by Thomas Batard may answer the question, but I don't have the expertise to know whether it does. His paper, A Metric Approach to nD Images Edge Detection with Clifford Algebras, demonstrates a technique for performing analysis on color images, where the mapping might take a form similar to $(c,m,y,k) = f(x,y): \Re^2 \mapsto \Re^4$.



As I have time, I would like to study Clifford algebras, but if this is a good path for me to go down it would give me more incentive to do so earlier rather than later.

impact - Did a piece of Halley's comet strike the Earth 1,500 years ago?

I have not read the complete scientific article, but I'm not inclined to trust this. By that year, Chinese civilization was quite well established and its astronomers were recording comets, and I have not read of them recording a comet that approached the Earth so closely.

Are there any interesting connections between Game Theory and Algebraic Topology?

You may also want to see http://arxiv.org/abs/1005.2405



Flows and Decompositions of Games: Harmonic and Potential Games
Authors: Ozan Candogan, Ishai Menache, Asuman Ozdaglar, Pablo A. Parrilo



Abstract: In this paper we introduce a novel flow representation for finite games in strategic form. This representation allows us to develop a canonical direct sum decomposition of an arbitrary game into three components, which we refer to as the potential, harmonic and nonstrategic components. We analyze natural classes of games that are induced by this decomposition, and in particular, focus on games with no harmonic component and games with no potential component. We show that the first class corresponds to the well-known potential games. We refer to the second class of games as harmonic games, and study the structural and equilibrium properties of this new class of games. Intuitively, the potential component of a game describes the possibility of agreement and coordination between players, while the harmonic part represents the conflicts between their interests. We make this intuition precise, by studying the properties of these two classes, and show that indeed they have quite distinct and remarkable characteristics. For instance, while finite potential games always have pure Nash equilibria, harmonic games generically never do. Moreover, we show that the nonstrategic component does not affect the equilibria of a game, but plays a fundamental role in their efficiency properties, thus decoupling the location of equilibria and their payoff-related properties. Exploiting the properties of the decomposition framework, we obtain explicit expressions for the projections of games onto the subspaces of potential and harmonic games. This enables an extension of the properties of potential and harmonic games to 'nearby' games. We exemplify this point by showing that the set of approximate equilibria of an arbitrary game can be characterized through the equilibria of its projection onto the set of potential games.

solar system - Emulation of an Orrery

The principle is almost exactly the same as a watch or clock, but instead of three concentric axles, you need 9 for the planets.



Have a google for Orrery kit - there are loads available. It is really all simple maths - you just need to know relative orbital periods in order to calculate cog sizes.
(picture of an orrery kit, from curiousminds.co.uk)
For moons, you do add a little complexity in the form of a transmission on the Earth's arm, so that the difference in movement can drive the Moon.
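The "simple maths" above can be sketched as follows: approximate each planet's period ratio against Earth by a small integer fraction, and cut cogs with tooth counts in that proportion (the period values are rough, and the tooth-count bound of 100 is my illustrative choice):

```python
from fractions import Fraction

# Approximate sidereal orbital periods, in Earth years.
periods = {"Mercury": 0.241, "Venus": 0.615, "Mars": 1.881, "Jupiter": 11.862}

for planet, p in periods.items():
    # Approximate the period ratio (planet : Earth) by a fraction with a
    # small denominator; meshing cogs with tooth counts in this proportion
    # make the planet's arm turn 1/p times as fast as the Earth arm.
    r = Fraction(p).limit_denominator(100)
    print(f"{planet}: tooth ratio ~ {r.numerator}:{r.denominator}")
```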

Sunday 6 September 2009

What is the formula to predict lunar and solar eclipses accurately?

The NASA sites have some very useful resources for this I will list them below:



Lunar Eclipses



This link has an index of all lunar eclipses from -1999 to +3000; it is predominantly a statistics page, but it also links to this page, which explains how to calculate when lunar eclipses occur.



There is more than one formula depending on which time frame you are trying to look in.



This is the formula for $\Delta T$ (the difference between Terrestrial Time and Universal Time, in seconds) for eclipses between the years 2005 and 2050:




$$\Delta T = 62.92 + 0.32217\,t + 0.005589\,t^2$$



Where:
$$y = \text{year} + (\text{month} - 0.5)/12$$
$$t = y - 2000$$
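A minimal sketch of the calculation (the function name is mine; the polynomial is the one quoted above, valid for 2005-2050):

```python
def delta_t_seconds(year, month):
    """Delta T (TT - UT, in seconds) from the 2005-2050 polynomial above."""
    y = year + (month - 0.5) / 12
    t = y - 2000
    return 62.92 + 0.32217 * t + 0.005589 * t ** 2

print(round(delta_t_seconds(2010, 1), 2))  # → 66.72
```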




Solar Eclipses



This link has an index like the one above, but for all of the solar eclipses from -1999 to +3000.



This link has the formulae for calculating solar eclipses. The $\Delta T$ formula for between 2005 and 2050 is the same as above:




$$\Delta T = 62.92 + 0.32217\,t + 0.005589\,t^2$$



Where:
$$y = \text{year} + (\text{month} - 0.5)/12$$
$$t = y - 2000$$


riemannian geometry - Why these particular numerical factors in the definition of Gaussian curvature?

Joel is right that it is partly just a convention to scale Gaussian curvature so that the curvature of a unit sphere is $1$. However, there are three natural motivations for this scale besides matching 1 to 1 in the case of a sphere.



First, Gauss defined his curvature as the product of the extrinsic curvatures of a surface in $\mathbb{R}^3$. So there is a coefficient of 1 in this natural formula.



Second, the unit sphere has the property that the deviation from Euclid's parallel postulate has a factor of 1. In other words, the area $A$ of a triangle with angles $\alpha, \beta, \gamma$ is $\alpha + \beta + \gamma - \pi$. In general, if you have a very small triangle with area $A$ at a point of local curvature $K$, its angle deviation is $KA$ to first order. This factor of 1 leads to a factor of $2\pi$ in the Gauss-Bonnet theorem: the integral of Gaussian curvature is $2\pi \chi$.
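A quick sanity check of this factor of 1 on the unit sphere ($K = 1$): the octant triangle, bounded by three quarter-great-circles, has three right angles and covers one eighth of the sphere's total area $4\pi$:

```latex
A = \alpha + \beta + \gamma - \pi
  = \tfrac{\pi}{2} + \tfrac{\pi}{2} + \tfrac{\pi}{2} - \pi
  = \tfrac{\pi}{2}
  = \tfrac{1}{8}\,(4\pi).
```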



Third, Gaussian curvature is the ratio of the Ricci curvature tensor to the metric, and it is also half of the scalar curvature.



In comparing these formulas, the most reasonable scales for Gaussian curvature are the standard choice, the standard choice times 2 to match scalar curvature, and the standard choice divided by $2pi$ to match the Gauss-Bonnet theorem. The volume and area formulas are some justification for a 1/3 or a 1/12 or similar, but these are taken to be less fundamental scales.



(One irony of the discussion is that $\pi$ itself is half of the most important value in trigonometry.)



Also the volume and surface area ratios are given in Wikipedia in $n$ dimensions. It is also worth looking at the generalized Gauss-Bonnet theorem in $2n$ dimensions.

cosmology - Available data on the Milky way around 1920

In a book by Alexander Moszkowski, there is an Einstein quote about a hypothetical size of the universe (100 million Light years). Moszkowski claims that Einstein had deduced that from the gravitational constant and the mass distribution of the Milky Way, as known around 1920. My question:



What was the data and the best estimate of the mass density of the Milky Way, back then in 1920?

solar system - Energetics of Titans Tholin haze

This is a part answer to your question, based on some recent research of the photochemical behaviour modelled and observed for Titan's tholin haze and modelling of Titan's stratosphere.



The process appears to begin, according to the paper Ice condensation layers in Titan’s Stratosphere (Barth, 2012) (Abstract only - paywalled), with




Photochemical destruction of methane along with the destruction of nitrogen molecules from energetic electrons in Titan’s upper atmosphere result in the production of a number of hydrocarbon and nitrile compounds which may be capable of condensing at the colder temperatures of Titan’s lower stratosphere.




then, according to the paper Laboratory experiments of Titan tholin formed in cold plasma at various pressures: implications for nitrogen-containing polycyclic aromatic compounds in Titan haze (Imanaka et al. 2004), in particular in reference to Titan's stratosphere,




In the stratosphere (100–300 km), further chemical reactions are induced by the catalytic $CH_4$ dissociation by such molecules as $C_2H_2$ and $C_4H_2$ absorbing the long UV (> 155 nm) irradiation




The significance of these UV-absorbing molecules is explained in the article Photochemical activity of Titan's low-altitude condensed haze (Gudipati et al. 2013) (Abstract only - paywalled): they state that tholin haze could form on condensed aerosols in Titan's atmosphere, demonstrating that at least part of Titan's atmosphere is photochemically active. Through modelling, they found that




Detected in Titan’s atmosphere, dicyanoacetylene ($C_4N_2$) is used in our laboratory simulations as a model system for other larger unsaturated condensing compounds. We show that $C_4N_2$ ices undergo condensed-phase photopolymerization (tholin formation) at wavelengths as long as 355 nm pertinent to solar radiation reaching a large portion of Titan’s atmosphere, almost close to the surface.




and evidence of these ices is suggested in the article Titan’s aerosol and stratospheric ice opacities between 18 and 500 μm: Vertical and spectral characteristics from Cassini CIRS (Anderson and Samuelson, 2011) stating that the ices and aerosols




appear to be located over a narrow altitude range in the stratosphere centered at ∼90 km. Although most abundant at high northern latitudes, these nitrile ice clouds extend down through low latitudes and into mid southern latitudes, at least as far as 58°S.


Saturday 5 September 2009

dg.differential geometry - Tetrad postulate: Implies or results from the metricity of the connection?

Having botched the first attempt at answering this question and not wanting to delete the evidence, let me try again here.



The "tetrad postulate" is independent from metricity and from the condition that the connection be torsion-free. It is simply the equivalence (via the vielbein) of two connections on two different bundles. Here are the details. $M$ is a smooth $n$-dimensional manifold.



First of all we have an affine connection $\nabla$ on $TM$ with connection coefficients $\Gamma^\rho_{\mu\nu}$ relative to a coordinate basis -- that is,
$$\nabla_{\partial_\mu} \partial_\nu = \Gamma_{\mu\nu}^\rho \partial_\rho,$$
with $\partial_\mu$ an abbreviation for $\partial/\partial x^\mu$, where $x^\mu$ is a local chart on $M$.



Then we have a connection on an associated vector bundle to the frame bundle $P_{\mathrm{GL}}(M)$. The frame bundle is a principal $\mathrm{GL}(n)$-bundle, and given any representation $\rho: \mathrm{GL}(n) \to \mathrm{GL}(V)$ of $\mathrm{GL}(n)$ we can define a vector bundle
$$P_{\mathrm{GL}}(M) \times_\rho V.$$
Take $V$ to be the defining $n$-dimensional representation and call the resulting bundle $E$. Relative to a local frame $e_a$ for $E$, a connection $\hat\nabla$ defines a connection one-form $\omega$ by
$$\hat\nabla_{\partial_\mu} e_a = \omega_{\mu\,a}^{\,b} e_b.$$



Now the vielbein defines a bundle isomorphism $TM \stackrel{\cong}{\longrightarrow} E$, and all the "tetrad postulate" says is that the two connections $\nabla$ and $\hat\nabla$ correspond. In fact, the "tetrad postulate" is just the statement that the vielbein is a parallel section of the bundle $T^*M \otimes E$ relative to the tensor product connection.



This works for any affine connection $\nabla$ on any smooth manifold $M$. No metric is involved.



A special case of this construction is when $(M,g)$ is a riemannian manifold and $\nabla$ is the Levi-Civita connection (i.e., the unique torsion-free metric connection on $TM$).
You can without loss of generality restrict to orthonormal frames, which defines a principal $\mathrm{O}(n)$ (or $\mathrm{O}(p,q)$, depending on signature) bundle. The representation $V$ restricts to an irreducible representation of the orthogonal group, possessing an invariant bilinear form $\eta$. This relates $g$ and $\eta$ as in your question.

Thursday 3 September 2009

How does the Earth move in the sky as seen from the Moon?

To expand a little more, yes the Earth would hang in the same spot in the sky, moving around in a small circle as the moon rotated around it over the course of each of its 28 day orbits. It would have phases, full Earth when the moon is between it and the sun, new Earth when the Earth is between the moon and the sun, and wax and wane between these two points. Each time the Earth was full, a different part of it would be visible from the moon, depending on the season and how it happened to line up when full Earth occurred.



The Earth would roll in the sky according to the seasons on Earth and the precession of the moon's orbit. The moon tilts only 1.5 degrees from the ecliptic and so in essence has no seasons. The orbit of the moon is tilted 5 degrees from the ecliptic, so when it is farthest to the south of the plane of the solar system, more of the south of our planet is visible, and vice-versa when it is farthest north. But the Earth is tilted 23 degrees from the ecliptic, so this is what would mostly determine what part of it was visible from the moon. Here is a good summary of the relationship of the Earth and Moon. The distance between the two bodies is not to scale, but the relative size of the one to the other is to scale.



the orbit of the moon around the Earth



In December, Antarctica would be visible - as it was during the famous Blue Marble photo from Apollo 17, taken on Dec. 7, 1972.



Apollo 17 photograph of Earth, showing Africa, Arabia, and Antarctica.



(This version of the photo is supposed to be as it was in the original photograph. They took it on the way to the moon, so the orientation of the camera was the only thing that determined which way was 'up'. If you were on the moon at far southern latitudes, this is in fact how it would look from your point of view.)



In July, the Arctic would be visible. Apollo 11 took this photo on July 16, 1969, when they were half-way to the moon.



Apollo 11 photo of Earth, showing Africa, Europe, and the Middle East



Here actually you don't see much of the Arctic. The moon's orbit had carried it a bit south of the Earth at that point. Heading to it meant going south a bit, so when they looked back at the Earth, they saw less of the far north.



Just to be clear, something you would NEVER see is this, which has been widely misunderstood as a photo of the whole planet from space. Actually it is a super-wide-angle photo taken from low Earth orbit; the horizon line marks the part of the world the satellite could see, not one hemisphere of the Earth.



enter image description here



If you think about it for a second, you realize this can't possibly be true. The United States appears gigantic; if it were really this size, Canada would occupy the whole Arctic and roll over onto the other side of the world, displacing most of Russia, and Argentina and Chile would take the place of Antarctica.

Wednesday 2 September 2009

ap.analysis of pdes - "Physical" construction of nonconstant meromorphic functions on compact Riemann surfaces?

Miranda's book on Riemann surfaces ignores the analytical details of proving that compact Riemann surfaces admit nonconstant meromorphic functions, preferring instead to work out the algebraic consequences of (a stronger version of) that assumption. Shafarevich's book on algebraic geometry has this to say:



A harmonic function on a Riemann surface can be conceived as a description of a stationary state of some physical system: a distribution of temperatures, for instance, in case the Riemann surface is a homogeneous heat conductor. Klein (following Riemann) had a very concrete picture in his mind:

"This is easily done by covering the Riemann surface with tin foil... Suppose the poles of a galvanic battery of a given voltage are placed at the points $A_1$ and $A_2$. A current arises whose potential $u$ is single-valued, continuous, and satisfies the equation $\Delta u = 0$ across the entire surface, except for the points $A_1$ and $A_2$, which are discontinuity points of the function."



Does anyone know of a good reference on Riemann surfaces where a complete proof along these physical lines (Shafarevich mentions the theory of elliptic PDEs) is written down? How hard is it to make this appealing physical picture rigorous? (The proof given in Weyl seems too computational and a little old-fashioned. Presumably there are now slick conceptual approaches.)

lunar eclipse - Will the UK be able to witness the tetrad in April 2014?

It is great that you have an interest in astronomy.



Unfortunately, the UK will be on the wrong side of the Earth to see the lunar eclipse on April 15. If you want to experiment with various places on Earth that will see it, and what they will see, get a planetarium application like Stellarium, SkySafari, or TheSkyX.



A lunar eclipse is always easy to spot - it is where the full moon is. Also, because it is the result of the Earth getting in the way of the Sun shining on the Moon, it is always after the sun sets and after the full moon rises in the East. That is, if you're going to be in a place to see the whole sequence, not just a part of it.



Just so you know, the tetrad is not one eclipse but a series of four lunar eclipses over two years; this is just the first lunar eclipse in the series. The last tetrad was 2003-2004, so you were around, but probably not paying as much attention at that time :-)



Unfortunately, Cambridge UK will not see any of the first 3 eclipses in this tetrad. However the 28 Sept 2015 eclipse will be visible from that location.



Incidentally, one of the better places to see this lunar eclipse will be Easter Island (Rapa Nui), which will see the whole eclipse, with totality occurring high in the sky, near the meridian. Also, not much light pollution.