Sunday 31 January 2010

About dense orbits on dynamical systems

This is called the shrinking target problem, and there is a reasonably large literature on it. For hyperbolic dynamical systems we can usually find quite a few pairs $x$, $p$ such that $A$ is infinite for all $\delta$. Indeed, I believe that there are results showing that in certain cases, for any point $z$ and positive real number $\delta>0$, the set of all $x$ such that $d(T^n x, z) < \exp(-n\delta)$ for infinitely many $n \geq 1$ has positive Hausdorff dimension. A good place to start would be the articles "Ergodic theory of shrinking targets" and "The shrinking target problem for matrix transformations of tori", both by Hill and Velani, but there are many results beyond this.



For illustration, here is a nice example in the case where $T$ is a smooth map of the circle which is not a diffeomorphism. I realise that this falls slightly outside the purview of your question, but it is possible to extend this argument to the case of toral diffeomorphisms using the technical device of a Markov partition. (I will not attempt this here because it is very fiddly.) Let $X = \mathbb{R}/\mathbb{Z}$ be the circle, let $T \colon X \to X$ be given by $Tx = 2x \bmod 1$, and let $d$ be a metric on $X$ which locally agrees with the standard metric on $\mathbb{R}$. Take $p = 0 \in X$ and fix any $\delta > 0$. Now, the orbit of $x$ is dense if and only if it enters every interval of the form $(k/2^n, (k+1)/2^n)$, if and only if every possible finite string of 0's and 1's occurs somewhere in the tail of its binary expansion.
On the other hand, we have $d(T^n x, 0) < 2^{-\delta n}$ as long as the binary expansion of $x$ contains a string of zeroes starting at position $n$ and having length $\lceil \delta n \rceil$. I think that it is not difficult to see that we can construct an infinite binary expansion, and hence a point $x$, such that this condition is met for infinitely many $n$, whilst simultaneously meeting the condition that the orbit of $x$ is dense. In particular we can construct a point $x$ for which $A$ is infinite, even for all $\delta$ simultaneously if you like.
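To make the construction concrete, here is a small Python sketch (the function name, the truncation length, and the use of exact rational arithmetic are my own choices, not part of the original argument). It interleaves an enumeration of all short binary strings, which sends the orbit into every dyadic interval at those scales, with zero blocks of length $\lceil \delta n \rceil$, then verifies $d(T^n x, 0) < 2^{-\lceil \delta n \rceil} \le 2^{-\delta n}$ at each recorded position:

```python
import math
from fractions import Fraction
from itertools import product

def build_orbit_point(delta, string_len):
    """Interleave every binary string of length <= string_len (so the orbit
    visits every dyadic interval at those scales) with blocks of zeros of
    length ceil(delta*n) starting at position n, as in the argument above."""
    bits, hits = [], []
    for L in range(1, string_len + 1):
        for s in product((0, 1), repeat=L):
            bits.extend(s)                     # visit the interval coded by s
            n = len(bits)
            k = math.ceil(delta * Fraction(n)) # zero block forces d(T^n x, 0) < 2^-k
            bits.extend([0] * k)
            hits.append((n, k))
    return bits, hits

delta = Fraction(1, 2)
bits, hits = build_orbit_point(delta, 3)
# x is the rational number with this (finite) binary expansion
x = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))
for n, k in hits:
    Tn = (x * 2 ** n) % 1                      # the doubling map iterated n times
    assert Tn < Fraction(1, 2 ** k)
print(f"verified the target condition at {len(hits)} times n")
```

Extending `string_len` to infinity gives the infinite expansion described above, with the target condition met at infinitely many $n$.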

Saturday 30 January 2010

ag.algebraic geometry - Size of a Groebner basis

The double exponential bound is indeed frighteningly easy to obtain. This ought to make Gröbner bases prohibitively costly, and indeed there has been some effort put into non-Gröbner methods for solving polynomial equations (e.g. Grégoire Lecerf's Kronecker package for Magma).



In practice, Gröbner bases remain competitive, and users of Gröbner methods note that the double-exponential bound, though easily obtained by construction, does not occur often in the systems people are actually interested in. In particular, if you're working with a zero-dimensional ideal, it is possible to construct the Gröbner basis of the system (with respect to any admissible ordering) in single-exponential time, which certainly implies that the basis itself is single-exponential in size. [Y. N. Lakshman, "A single exponential bound on the complexity of computing Gröbner bases of zero-dimensional ideals", in: Effective Methods in Algebraic Geometry (Castiglioncello, 1990).]
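For a quick experiment along these lines, SymPy's `groebner` can be run on a small zero-dimensional system; the particular system below is an arbitrary choice of mine, not taken from the cited paper:

```python
from sympy import groebner, symbols

# A zero-dimensional system (finitely many common zeros) -- the setting
# covered by Lakshman's single-exponential bound.
x, y, z = symbols('x y z')
F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]
G = groebner(F, x, y, z, order='lex')
print(G.is_zero_dimensional)   # True: the ideal has dimension zero
print(len(G.exprs))            # the basis stays small for this system
```

Swapping in a Mayr–Meyer-style family instead would show the basis size blowing up.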



Sparsity does not help: as I recall, Mayr–Meyer-type examples are extremely sparse.



I've never heard of any serious probabilistic study of the number of elements, probably because this would be excessively hard and not very rewarding, given that there is no natural way to put a probability measure on your system. (Note that for a somewhat related problem, the average number of real roots of a real polynomial, the answers you obtain do depend noticeably on the measure used.)

nt.number theory - Fermat for polynomials, as used in the AKS (Agrawal-Kayal-Saxena) algorithm

The basis for the deterministic polynomial-time algorithm for primality of Agrawal, Kayal and Saxena is (the degree one version of) the following generalization of Fermat's theorem.




Theorem



Suppose that P is a polynomial with integer coefficients, and that p is a prime number. Then
$(P(X))^p \equiv P(X^p) \pmod{p}$.
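As a quick sanity check, here is a small pure-Python verification of the congruence for one sample polynomial and prime (both chosen arbitrarily by me; the coefficient-list representation is likewise just for illustration):

```python
# Check P(X)^p == P(X^p) (mod p) coefficientwise.
# Coefficient lists are little-endian: coeffs[i] is the coefficient of X^i.
def poly_mul_mod(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow_mod(a, e, p):
    result = [1]
    for _ in range(e):
        result = poly_mul_mod(result, a, p)
    return result

p = 7
P = [4, 0, 3, 1]                    # P(X) = X^3 + 3X^2 + 4 (arbitrary sample)
lhs = poly_pow_mod(P, p, p)         # P(X)^p reduced mod p
rhs = [0] * (p * (len(P) - 1) + 1)  # P(X^p): spread coefficients p slots apart
for i, c in enumerate(P):
    rhs[p * i] = c % p
m = max(len(lhs), len(rhs))
lhs += [0] * (m - len(lhs)); rhs += [0] * (m - len(rhs))
print(lhs == rhs)   # True: the Frobenius endomorphism on (Z/pZ)[X]
```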




Surely this result was known previously, but I have not been able to find a reference in the literature on the AKS algorithm (which means that the authors also did not know of a reference). Does anyone here know of one?



Furthermore, there is a converse to the lemma in the AKS paper:




Lemma



If $n$ is a composite number, then $(X+a)^n \not\equiv X^n + a \pmod{n}$ whenever $a$ is coprime to $n$.




Again, it is easy to generalize this statement. For example, if $P$ is a polynomial which has at least two nonzero coefficients and such that all nonzero coefficients are coprime to $n$, then $P(X)^n \not\equiv P(X^n) \pmod{n}$ for composite $n$.



On the other hand, clearly some conditions are necessary; for example $(3X+4)^6 \equiv 3X^6 + 4 \pmod{6}$.



Is there a best possible statement? And, again, is there a reference?

Friday 29 January 2010

co.combinatorics - To what degree do min-cuts specify the cut function of a graph?

Given an unweighted graph $G = (V, E)$, let the cut function on this graph be
$C_G : 2^V \rightarrow \mathbb{Z}$ defined by:
$$C_G(S) = |\{(u,v) \in E : u \in S \wedge v \notin S\}|$$



For any two vertices $i, j \in V$, let the $(i,j)$ min-cut in a graph $G$ be:
$$\alpha_{i,j}(G) = \min_{S \subset V :\, i \in S,\, j \notin S} C_G(S)$$
Now, suppose we have two unweighted graphs on the same vertex set, $G = (V,E)$ and $H = (V,E')$, such that they are identical with respect to all $(i,j)$ min-cuts:
$$\forall i,j \in V,\quad \alpha_{i,j}(G) = \alpha_{i,j}(H)$$
How much can $H$ and $G$ differ with respect to their cuts? That is, how large can the following quantity be:
$$\Delta(H,G) = \max_{S \subset V} |C_G(S) - C_H(S)|$$
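A brute-force illustration of these definitions (the toy example is my own, not from the question): two different 4-cycles on the same vertex set have identical pairwise min-cuts, yet their cut functions disagree on some sets, so $\Delta(H,G) > 0$ is possible even when all min-cuts agree.

```python
from itertools import combinations

def cut_value(edges, S):
    """C_G(S): number of edges with exactly one endpoint in S."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

def min_cut(n, edges, i, j):
    """alpha_{i,j}(G) by brute force over all S with i in S, j not in S
    (exponential, fine only for tiny vertex sets)."""
    others = [v for v in range(n) if v != i and v != j]
    return min(cut_value(edges, {i, *extra})
               for r in range(len(others) + 1)
               for extra in combinations(others, r))

def max_cut_gap(n, E1, E2):
    """Delta(G, H): maximum difference of the two cut functions over all S."""
    return max(abs(cut_value(E1, set(S)) - cut_value(E2, set(S)))
               for r in range(n + 1) for S in combinations(range(n), r))

C1 = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle 0-1-2-3
C2 = [(0, 2), (2, 1), (1, 3), (3, 0)]   # a different 4-cycle, 0-2-1-3
print([min_cut(4, C1, i, j) for i, j in combinations(range(4), 2)])  # all 2
print([min_cut(4, C2, i, j) for i, j in combinations(range(4), 2)])  # all 2
print(max_cut_gap(4, C1, C2))                                        # 2
```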



Note that if the graphs are allowed to be weighted (or to be multigraphs), then for any $G$ there is a tree $T$ that agrees with $G$ on all min-cuts (a Gomory–Hu tree). But I am interested in the case of unweighted graphs...

ergodic theory - Product Measure Only Possible Measure?

The answer is no. A trivial example is to concentrate the measure on a "periodic orbit"; this gives an invariant measure for the shift.



But there are a whole lot of other invariant measures (including measures of full support, which are probably more interesting).



The product measure does, however, have some important features; consider, for example, its entropy.



(See K. Sigmund, "Generic properties of invariant measures for Axiom A diffeomorphisms", Inventiones Math. 11 (1970), for the case of the space X being finite.)

Thursday 28 January 2010

dg.differential geometry - Smooth Dependence of ODEs on Initial Conditions

Dear all,
The following is a theorem known to many, and is essential in elementary differential geometry. However, I have never seen it proved in Spivak or various other differential geometry books.



Let $t_0$ be real, $x_0 \in \mathbb{R}^n$ and $a, b > 0$. Let $f : [t_0-a, t_0+a] \times \overline{B(x_0,b)} \rightarrow \mathbb{R}^n$ be $C^k$ for $k \ge 1$.



Then $f$ is Lipschitz continuous, from which it is easy, using the contraction mapping theorem for complete metric spaces, to prove that the ODE
$\dfrac{d}{dt}\alpha(t,x) = f(t,\alpha(t,x)), \quad \alpha(t_0,x) = x$



has a continuous solution in an open neighbourhood of $(t_0,x_0)$. In other words, the ODE
$x'(t) = f(t, x(t)),\ x(t_0) = x_0$ has a family of solutions which depends continuously on the initial condition $x_0$.



The theorem that I'd like to prove is that, in fact, if $f$ is $C^k$, then $\alpha$ is $C^k$, for any $k \ge 1$.



I'd like an "elementary" proof that needs no calculus on Banach spaces or any similarly heavy machinery, but instead uses something elementary such as the contraction mapping theorem. I currently have an attempted proof that looks at perturbations of linear ODEs, but I think it is incorrect. The proof can be found on page 6 of http://people.maths.ox.ac.uk/hitchin/hitchinnotes/Differentiable_manifolds/Appendix.pdf. I believe that there is a typo in the claim:



"Apply the previous lemma and we get



$\mathrm{sup}_{\left|t\right| \leq \epsilon} \left| \lambda(t,x)y - \{\alpha(t,x+y) + \alpha(x)\} \right| = o(\left|y\right|).$"



but, more importantly, it is not clear what it should be replaced by. What is needed is that $\|A - B_y\| = o(\|y\|)$, but I do not see why this holds.



Thank you for your time and effort.

ct.category theory - Can the inner structure of an object be systematically deduced from its position in the category?

Background



Even for the novice it seems comprehensible that the "inner structure" of an object is determined (up to isomorphism) by its "position" in a category, as defined by the morphisms.



What is not so obvious is how the inner structure of an object can be recovered from its position.



Question



What has to be given (and how) to reconstruct the inner structure of an object (up to isomorphism)? What conditions must the category fulfill? And how could the reconstruction be systematically achieved?



Examples



The following examples are simple in the sense that the categories are especially tailored: the objects are finite, there are no isomorphisms (except the identities), and all hom-sets contain at most one element. The question is whether similar reconstructions can succeed in the general case, too.



(1)



Consider the category of "unlabeled" finite sets (i.e., natural numbers with a sequence of Hilbert strokes as "inner structure") with the $\leq$ relations as morphisms ($\leq$ means "injectively embeddable", the number of possible embeddings being ignored). Now consider only identities and prime morphisms (see my definition), corresponding to the relation $x = y + 1$ ("$x$ is reachable from $y$ by adding one stroke").



It is easy to "see" the inner structure from the position of an object with respect to the initial object (the empty set).



(2)



Consider the category of finite undirected unlabeled graphs without isolated vertices. The morphisms correspond to the relation "is edge-wise incidence-preserving embeddable" (ignoring again the number of possible embeddings). Consider again only identities and prime morphisms, corresponding to the relation "x is reachable from y by adding one edge".



See a fragment of this category here (identities and arrow heads not displayed).



Conjecture: Each object in this category is uniquely determined by the tuple $(n,k,l)$, with $n$ its distance from the initial object (the empty graph), $k$ its number of incoming morphisms and $l$ its number of outgoing morphisms.



Claim: The inner structure of each object up to distance 3 from the initial object can be systematically deduced from its position, considering (only?) its tuple $(n,k,l)$.



Observation: This category reflects something like the "generating lattice" of the graphs (each non-identity morphism corresponds to adding one edge). Which other categories can be interpreted in this way?

Wednesday 27 January 2010

measure theory - Generalisation of Lebesgue decomposition theorem

A little further searching turned up a simple proof of Lebesgue's decomposition theorem in "The Lebesgue Decomposition Theorem for Measures", J. K. Brooks, The American Mathematical Monthly, 78 (1971), pp. 660-662. Without much extra work, it admits the following generalisation.



Theorem. Let $\mathcal{N} \subset \Omega$ be a collection of subsets such that



  1. if $E \in \mathcal{N}$ and $F \in \Omega$, $F \subset E$, then $F \in \mathcal{N}$;

  2. if $E_n \in \mathcal{N}$ is a countable collection, then $\bigcup_{n} E_n \in \mathcal{N}$ as well.

Consider the subspace $\mathcal{S} = \lbrace \mu \mid \mu(E) = 0 \text{ for all } E \in \mathcal{N} \rbrace$. Then $\mathcal{M} = \mathcal{S} \oplus \mathcal{S}^\perp$.



Proof. Fix $\nu \in \mathcal{M}$, and consider the following collection of subsets:
$$
\mathcal{R} = \lbrace E \in \mathcal{N} \mid \nu(E) > 0 \rbrace.
$$
Let $\alpha = \sup \lbrace \nu(E) \mid E \in \mathcal{R} \rbrace$, and let $E_n \in \mathcal{R}$ be a sequence of sets such that $\nu(E_n) \to \alpha$. Let $A = \bigcup_n E_n$. Then $\nu(A) = \alpha$ and $A \in \mathcal{N}$.



Furthermore, given any $E \in \mathcal{R}$, we have $\nu(E \setminus A) = 0$. Indeed, if $\nu(E \setminus A) > 0$, then $\nu(A \cup E) = \nu(A) + \nu(E \setminus A) > \alpha$, contradicting the definition of $\alpha$ since $A \cup E \in \mathcal{N}$. Similarly, $\nu(E \setminus A) = 0$ for every $E \in \mathcal{N}$.



Thus we may take $\nu_1 = \nu|_{X \setminus A}$ and $\nu_2 = \nu|_A$. It follows that $\nu_2 \in \mathcal{S}^\perp$, since $\nu_2$ is concentrated on $A$ and $A \in \mathcal{N}$, and that $\nu_1 \in \mathcal{S}$, since $\nu(E \setminus A) = 0$ for every $E \in \mathcal{N}$.



Finally, uniqueness follows since $\mathcal{S} \cap \mathcal{S}^\perp = \lbrace 0 \rbrace$.

nt.number theory - When f(x)-a and f(x)-b yield the same field extension

An interesting mathoverflow question was one due to Philipp Lampe that asked whether a non-surjective polynomial function on an infinite field can miss only finitely many values. In my interpretation of the question, if $k$ is a starting field and $f$ is a polynomial, you could ask what happens if you repeatedly adjoin a root of $f(x)-a$, except for a finite set of values $a \in S \subset k$ for which you hope a root never appears. You have to adjoin a root for all $a \in \tilde{k} \setminus S$, where $\tilde{k}$ is the growing field. Either a root of $f(x)-a$ for some $a \in S$ will eventually appear by accident, or $f$ as a polynomial over the limiting field $\tilde{k}$ is an example.



(Edit: I call this an interpretation rather than a construction, because in generality it is equivalent to Philipp's original question. I also don't mean to claim credit for the idea; it was already under discussion when I posted my answer then. Maybe an answer to the question below was already implied in the previous discussion, but if so, I didn't follow it.)



For some choices of $f$ and a non-value $a$, you can know that you are sunk at the first stage. For instance, suppose that $f(x) = x^n$. When you adjoin a root of $x^n - a$, you also adjoin a root of $x^n - b^n a$ for every $b \in k$. You cannot miss $a$ without also missing every $b^n a$, which is then infinitely many values when $k$ is infinite.



So let $k$ be an infinite field, and let $f \in k[x]$ be a polynomial. Define an equivalence relation on those elements $a \in k$ such that $f(x)-a$ is irreducible. The relation is that $a \sim b$ if adjoining one root of $f(x)-a$ and of $f(x)-b$ yields isomorphic field extensions of $k$. Is any such equivalence class finite? What if $k$ is $\mathbb{Q}$ or a number field?



In my partial answer to the original MO question, I calculated that if $f$ is cubic and the characteristic of $k$ is not 2 or 3, then the equivalence classes are all infinite.

Tuesday 26 January 2010

ag.algebraic geometry - On algebraic tubular neighbourhoods and Weak Lefschetz

Can one formulate the version of Weak Lefschetz that uses tubular neighbourhoods purely in terms of cohomology of (some) algebraic varieties?
The theorem in 5.1 of Part II of Goresky–MacPherson's "Stratified Morse Theory" implies (in particular) that:
for a smooth projective $P$ (over the field of complex numbers), $X$ open in $P$, and a small enough tubular neighbourhood $H_\delta$ of an arbitrary (!) hyperplane section $H$ of $P$ (in $P$!), a Weak Lefschetz theorem for $(H_\delta \cap X, X)$ is valid, i.e.:
the map on singular cohomology $H^{i}_{\mathrm{sing}}(X) \to H^{i}_{\mathrm{sing}}(H_\delta \cap X)$ is an isomorphism for $i < \dim X - 1$, and is an injection for $i = \dim X - 1$. A caution: $H_\delta \cap X$ is not (usually) a tubular neighbourhood of $H \cap X$ in $X$.



My question is: could one formulate an analogue of this statement purely in terms of algebraic geometry? I would be completely satisfied with cohomology with $\mathbb{Z}/l^n\mathbb{Z}$-coefficients, i.e. etale cohomology. I only want to replace the cohomology $H^{i}_{\mathrm{sing}}(H_\delta \cap X)$ in the statement by something that could be computed without using differential geometry.



My guess: one should probably replace $H_\delta$ with an etale tubular neighbourhood of $H$ in $P$ (then $H_\delta \cap X$ will be replaced by the corresponding fibre product); this is 'my conjecture'. Etale tubular neighbourhoods were defined and studied by Cox and Friedlander. Yet though they proved that etale tubular neighbourhoods share several properties with 'ordinary' tubular neighbourhoods, it seems that no comparison statement that would allow one to deduce my conjecture from Goresky–MacPherson's theorem is known. One should probably use nice properties of the comparison of the etale site with the fine one; yet this seems to require a site-theoretic definition of a tubular neighbourhood. Also, etale tubular neighbourhoods seem to be rather 'implicit', so I don't know how to check my conjecture on examples. Certainly, I do not object to proving my conjecture 'directly', yet this seems to be difficult (since Goresky–MacPherson's proof relies heavily upon stratified Morse theory).



Any suggestions would be very welcome!

Expressing power sum symmetric polynomials in terms of lower degree power sums

Combining the trace formula proposed by Gjergji Zaimi and Qiaochu Yuan,
$$
p_k = {\rm Tr}\begin{pmatrix} e_1 & 1 & \cdots & 0 \\
-e_2 & 0 & \ddots & \vdots \\
\vdots & \vdots & \ddots & 1 \\
(-1)^{N-1}e_N & 0 & \cdots & 0 \end{pmatrix}^{k},
$$
with the formula quoted by Peter Erskin,
$$
e_n = \frac{1}{n!} \begin{vmatrix} p_1 & 1 & 0 & \cdots \\ p_2 & p_1 & 2 & 0 & \cdots \\ \vdots & & \ddots & \ddots \\ p_{n-1} & p_{n-2} & \cdots & p_1 & n-1 \\ p_n & p_{n-1} & \cdots & p_2 & p_1 \end{vmatrix},
$$
Mathematica produces the following expansions of $p_k$:




$$N=2$$



$$
p_3=-\frac{1}{2} p_1^3+\frac{3}{2} p_1p_2
$$



$$
p_4=-\frac{1}{2} p_1^4+p_1^2p_2+\frac{1}{2} p_2^2
$$



$$
p_5=-\frac{1}{4} p_1^5+\frac{5}{4} p_1p_2^2
$$



$$
p_6=-\frac{3}{4} p_1^4p_2+\frac{3}{2} p_1^2p_2^2+\frac{1}{4} p_2^3
$$



$$
p_7=\frac{1}{8} p_1^7-\frac{7}{8} p_1^5p_2+\frac{7}{8} p_1^3p_2^2+\frac{7}{8} p_1p_2^3
$$



$$
p_8=\frac{1}{8} p_1^8-\frac{1}{2} p_1^6p_2-\frac{1}{4} p_1^4p_2^2+\frac{3}{2} p_1^2p_2^3+\frac{1}{8} p_2^4
$$



$$
p_9=\frac{1}{16} p_1^9-\frac{9}{8} p_1^5p_2^2+\frac{3}{2} p_1^3p_2^3
+\frac{9}{16} p_1p_2^4
$$



$$
p_{10}=\frac{5}{16} p_1^8p_2-\frac{5}{4} p_1^6p_2^2+\frac{5}{8} p_1^4p_2^3
+\frac{5}{4} p_1^2p_2^4+\frac{1}{16} p_2^5
$$



$$
p_{11}=-\frac{1}{32} p_1^{11}+\frac{11}{32} p_1^9p_2-\frac{11}{16} p_1^7p_2^2-\frac{11}{16} p_1^5p_2^3
+\frac{55}{32} p_1^3p_2^4+\frac{11}{32} p_1p_2^5
$$




$$N=3$$



$$
p_4=\frac{1}{6} p_1^4-p_1^2p_2+\frac{1}{2} p_2^2+\frac{4}{3} p_1p_3
$$



$$
p_5=\frac{1}{6} p_1^5-\frac{5}{6} p_1^3p_2+\frac{5}{6} p_1^2p_3+\frac{5}{6} p_2p_3
$$



$$
p_6=\frac{1}{12} p_1^6-\frac{1}{4} p_1^4p_2-\frac{3}{4} p_1^2p_2^2+\frac{1}{4} p_2^3+\frac{1}{3} p_1^3p_3+p_1 p_2 p_3+\frac{1}{3} p_3^2
$$



$$
p_7=\frac{1}{36} p_1^7-\frac{7}{12} p_1^3p_2^2+\frac{7}{36} p_1^4p_3+\frac{7}{12} p_2^2p_3+\frac{7}{9} p_1p_3^2
$$



$$
p_8=\frac{1}{72} p_1^8-\frac{1}{18} p_1^6p_2+\frac{1}{12} p_1^4p_2^2-\frac{1}{2} p_1^2p_2^3+\frac{1}{8} p_2^4+\frac{2}{9} p_1^5p_3
$$
$$
-\frac{8}{9} p_1^3p_2p_3+\frac{2}{3} p_1p_2^2p_3+\frac{8}{9} p_1^2p_3^2+\frac{4}{9} p_2p_3^2
$$




$$N=4$$



$$
p_5=-\frac{1}{24} p_1^5+\frac{5}{12} p_1^3p_2-\frac{5}{8} p_1p_2^2-\frac{5}{6} p_1^2p_3+\frac{5}{6} p_2p_3+\frac{5}{4} p_1p_4
$$



$$
p_6=-\frac{1}{24} p_1^6+\frac{3}{8} p_1^4p_2-\frac{3}{8} p_1^2p_2^2-\frac{1}{8} p_2^3-\frac{2}{3} p_1^3p_3+\frac{1}{3} p_3^2+\frac{3}{4} p_1^2p_4+\frac{3}{4} p_2p_4
$$



$$
p_7=-\frac{1}{48} p_1^7+\frac{7}{48} p_1^5p_2+\frac{7}{48} p_1^3p_2^2-\frac{7}{16} p_1p_2^3-\frac{7}{24} p_1^4p_3-\frac{7}{12} p_1^2p_2p_3
$$
$$
+\frac{7}{24} p_2^2p_3
+\frac{7}{24} p_1^3p_4+\frac{7}{8} p_1p_2p_4+\frac{7}{12} p_3p_4
$$



$$
p_8=-\frac{1}{144} p_1^8+\frac{1}{36} p_1^6p_2+\frac{5}{24} p_1^4p_2^2
-\frac{1}{4} p_1^2p_2^3-\frac{1}{16} p_2^4
$$
$$
-\frac{1}{9} p_1^5p_3-\frac{2}{9} p_1^3p_2p_3
-\frac{1}{3} p_1p_2^2p_3-\frac{4}{9} p_1^2p_3^2+\frac{4}{9} p_2p_3^2
$$
$$
+\frac{1}{12} p_1^4p_4+\frac{1}{2} p_1^2p_2p_4+\frac{1}{4} p_2^2p_4
+\frac{2}{3} p_1p_3p_4+\frac{1}{4} p_4^2.
$$
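These expansions can be reproduced mechanically. The sketch below is my own SymPy implementation, not the Mathematica code used above: it builds $e_1, \dots, e_N$ from Newton's identities and then applies the recurrence $p_k = e_1 p_{k-1} - e_2 p_{k-2} + \dots \pm e_N p_{k-N}$ for $k > N$.

```python
from sympy import symbols, Rational, expand

def power_sums(N, kmax):
    """Express p_k (k <= kmax) in N variables through p_1..p_N via
    Newton's identities, matching the trace/determinant formulas above."""
    p = list(symbols(f'p1:{kmax + 1}'))
    e = [1]                                   # e_0 = 1
    for n in range(1, N + 1):                 # n e_n = sum (-1)^{i-1} e_{n-i} p_i
        e.append(Rational(1, n) * sum((-1) ** (i - 1) * e[n - i] * p[i - 1]
                                      for i in range(1, n + 1)))
    exprs = {k: p[k - 1] for k in range(1, N + 1)}
    for k in range(N + 1, kmax + 1):          # p_k = e_1 p_{k-1} - ... +- e_N p_{k-N}
        exprs[k] = expand(sum((-1) ** (i - 1) * e[i] * exprs[k - i]
                              for i in range(1, N + 1)))
    return p, exprs

p, ex = power_sums(2, 4)
print(ex[3])   # agrees with the N=2 expansion p_3 = -p1^3/2 + (3/2) p1 p2 above
print(ex[4])   # agrees with p_4 = -p1^4/2 + p1^2 p2 + p2^2/2
```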




It seems to me that a nice and compact formula for $a_{k,\rho}$ does exist. Indeed,
the coefficients in the above examples are extremely simple.



In particular, I observe that the last terms in each of the $p_k$ for $N=8$
have the form
$$
k \prod_j \frac{1}{j^{r_j} r_j!}\, p_j^{r_j},
$$
which corresponds to
$$
a_{k,\rho} = k \prod_j \frac{1}{j^{r_j} r_j!}.
$$
This formula (whose structure resembles the coefficients in the expansion of Schur functions quoted by Peter Erskin) also works for all terms of the type $p_j p_{k-j}$ at arbitrary $N$.



Apparently, this is not a general formula, as can be seen from the coefficients in front
of $p_1^k$, which do depend on $N$.
I believe, however, that the general formula for $a_{k,\rho}$, with $N$ properly included, should not be much more complex than the empirical one above.



Hope this helps.

amateur observing - Working with high-magnification eye-pieces

Try the moon first.



If you see nothing but black (assuming that you don't have a lens cap on or something), then most likely you are zoomed in on, well, relative blackness. The star you were viewing is probably off to the side now. Or you may just be looking through the eye-piece at the wrong angle. The moon is too big a target to miss, and you have nice rough features to use to tune in your focus. And if you see black when you are looking at the moon, it's much easier to troubleshoot; you should definitely see something!



Secondly, the reason you expect to see something but instead see nothing may be a viewfinder problem. If you have a small, low-magnification viewfinder scope attached to your telescope, you usually can't just trust that what you see in the center of that view is exactly what your telescope will be aimed at. You may have to align it. Search "align telescope viewfinder" for help with this. From your description, it sounds like you are not using a viewfinder, but this seems to be a common problem, so I thought I should mention it.

at.algebraic topology - Are there results about the group of homeomorphisms of $(T^2 - \{*,*\})$ up to isotopy?

There are many results in this field, and such groups, called mapping class groups, are well studied. In the case of a torus the situation is totally understood; the mapping class group of the torus is $\text{SL}_2(\mathbb{Z})$. The only problem is that I am not sure what $\{*,*\}$ means. Note: the group $\text{Homeo}(T^2 \setminus \{p\})/\sim$ is called the extended mapping class group, denoted $\text{Mod}^{\pm}(T^2 \setminus \{p\})$, while the subgroup of orientation-preserving homeomorphisms is the mapping class group $\text{Mod}(T^2 \setminus \{p\}) := \text{Homeo}^+(T^2 \setminus \{p\})/\sim$.




If you mean a one-element set, something like $\{(0,0)\}$: in the case of a torus, the missing point turns out not to matter: $\text{Mod}(T^2 \setminus \{p\}) = \text{Mod}(T^2)$. This group, of orientation-preserving homeomorphisms up to isotopy, is isomorphic to $\text{SL}_2(\mathbb{Z})$. Your group is then an extension of $\mathbb{Z}/2\mathbb{Z}$ by this group, corresponding to the action on the orientation:
$$1 \to \text{Mod}(T^2 \setminus \{p\}) \to \text{Mod}^{\pm}(T^2 \setminus \{p\}) \to \mathbb{Z}/2\mathbb{Z} \to 1$$
which can be written as $$1 \to \text{SL}_2(\mathbb{Z}) \to \text{Homeo}(T^2 \setminus \{p\})/\sim\ \to \mathbb{Z}/2\mathbb{Z} \to 1$$




If you mean a two-element set, then first consider the subgroup $\text{PMod}(T^2 \setminus \{p,q\})$ of homeomorphisms that don't "switch" the two punctures. The map given by "filling in the puncture $q$" gives an extension
$$1 \to \pi_1(T^2 \setminus \{p\}, q) \to \text{PMod}(T^2 \setminus \{p,q\}) \to \text{Mod}(T^2 \setminus \{p\}) \to 1$$
which can also be written
$$1 \to F_2 \to \text{PMod}(T^2 \setminus \{p,q\}) \to \text{SL}_2(\mathbb{Z}) \to 1$$
since $\pi_1(T^2 \setminus \{p\}, q)$ is a free group of rank two. The mapping class group is an extension of $\mathbb{Z}/2\mathbb{Z}$ by this group, corresponding to whether the punctures are switched:
$$1 \to \text{PMod}(T^2 \setminus \{p,q\}) \to \text{Mod}(T^2 \setminus \{p,q\}) \to \mathbb{Z}/2\mathbb{Z} \to 1$$




A good reference for all these things is Farb-Margalit's "A Primer on Mapping Class Groups". In particular, the useful fact that there is no difference between homotopy and isotopy in dimension 2, or between considering homeomorphisms and diffeomorphisms, is covered in Chapter 1. The mapping class group of the torus is described in Chapter 2, starting with Theorem 2.15 on page 70.

computational complexity - P not eq. NP news?

"Vinay Deolalikar. P is not equal to NP. 6th August, 2010 (66 pages 10pt, 102 pages 12pt). Manuscript sent on 6th August to several leading researchers in various areas. Confirmations began arriving 8th August early morning. The preliminary version made it to the web without my knowledge. I have made minor updates, here." (related link)

dg.differential geometry - Looking for a reference for the laplacian operator

So I wrote up this small derivation, drawing insights from the answers by Deane Yang and Steve Huntsman.



With respect to the Riemann–Christoffel connection on a Riemannian manifold, the Laplacian on that manifold has the form $$\nabla^2 \phi = \frac{1}{\sqrt{g}} \partial_{\mu} \left[ \sqrt{g}\, g^{\mu\nu} \partial_{\nu} \phi \right]$$



where $g$ is the determinant of the metric on the manifold and $\phi$ is some smooth scalar function on the manifold.



One can write the line element on $S^n \subset \mathbb{R}^{n+1}$ as



$d\Omega_n^2 = d\theta_1^2 + \sin^2\theta_1\, d\theta_2^2 + \sin^2\theta_1 \sin^2\theta_2\, d\theta_3^2 + \dots + \sin^2\theta_1 \sin^2\theta_2 \cdots \sin^2\theta_{n-2} \sin^2\theta_{n-1}\, d\theta_n^2$



Then the line element on $\mathbb{R}^{n+1}$ in polar coordinates can be written as



$$ds^2 = dr^2 + r^2\, d\Omega_n^2$$



and
$g_{\mathbb{R}^{n+1}} = r^{2n} g_{S^n}$
where
$g_{S^n} = (\sin^2\theta_1)^{n-1} (\sin^2\theta_2)^{n-2} \cdots (\sin^2\theta_{n-2})^2 (\sin^2\theta_{n-1})^1$



Therefore, since the metric is diagonal, $\nabla^2_{\mathbb{R}^{n+1}} \phi = \frac{1}{r^n \sqrt{g_{S^n}}} \partial_{\mu} \left[ r^n \sqrt{g_{S^n}}\, g^{\mu\mu}_{\mathbb{R}^{n+1}} \partial_{\mu} \phi \right]$



$= \frac{1}{r^n \sqrt{g_{S^n}}} \partial_{r} \left[ r^n \sqrt{g_{S^n}}\, \partial_{r} \phi \right] + \frac{1}{r^n \sqrt{g_{S^n}}} \partial_{\theta_i} \left[ r^n \sqrt{g_{S^n}}\, g^{\theta_i \theta_i}_{\mathbb{R}^{n+1}} \partial_{\theta_i} \phi \right]$



$= \frac{1}{r^n} \partial_r (r^n \partial_r \phi) + \frac{1}{\sqrt{g_{S^n}}} \partial_{\theta_i} \left[ \sqrt{g_{S^n}}\, \frac{g^{\theta_i \theta_i}_{S^n}}{r^2} \partial_{\theta_i} \phi \right]$



$= \frac{1}{r^n} \partial_r (r^n \partial_r \phi) + \frac{\nabla^2_{S^n} \phi}{r^2}$



Therefore, after doing the differentiation, we have the final result:



$$\nabla^2_{\mathbb{R}^{n+1}} \phi = \frac{n}{r} \partial_r \phi + \partial_r^2 \phi + \frac{\nabla^2_{S^n} \phi}{r^2}$$
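As a sanity check on the final formula, one can verify the simplest case $n = 1$ (polar coordinates on $\mathbb{R}^2$, where $\nabla^2_{S^1} = \partial_\theta^2$) with SymPy; the sample function is an arbitrary choice of mine:

```python
from sympy import symbols, sin, cos, sqrt, diff

# n = 1: compare (1/r) d_r phi + d_r^2 phi + (1/r^2) d_theta^2 phi
# against the ordinary Cartesian Laplacian of the same function.
r, th, x, y = symbols('r theta x y', positive=True)
phi_polar = r**3 * sin(2 * th)                 # sample smooth function
rhs = diff(phi_polar, r) / r + diff(phi_polar, r, 2) + diff(phi_polar, th, 2) / r**2
# The same function in Cartesian coordinates: r^3 sin(2 theta) = 2 x y r
phi_cart = 2 * x * y * sqrt(x**2 + y**2)
lhs = (diff(phi_cart, x, 2) + diff(phi_cart, y, 2)).subs({x: r * cos(th),
                                                          y: r * sin(th)})
# Numerical spot check at an arbitrary point: the difference vanishes
print(float((lhs - rhs).subs({r: 1.3, th: 0.7})))   # approximately 0
```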



And I don't see a neat way of writing the Laplacian on $S^n$!

fundamental astronomy - Calculating Angular Distance

You use spherical trigonometry.



Given $A_1$ and $A_2$ are the respective azimuthal coordinates of the two objects, and $a_1$, $a_2$ their respective altitudes,



the angular separation $\theta$ is given by



$$\cos\theta = \sin a_1 \sin a_2 + \cos a_1 \cos a_2 \cos(A_1 - A_2)$$
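In code the formula translates directly; the function name and the clamping of the cosine to $[-1,1]$ (to guard against floating-point rounding) are my additions:

```python
import math

def angular_separation(a1, A1, a2, A2):
    """Angular distance in degrees between two objects given their
    altitudes a1, a2 and azimuths A1, A2 (all in degrees)."""
    a1, a2, dA = map(math.radians, (a1, a2, A1 - A2))
    c = math.sin(a1) * math.sin(a2) + math.cos(a1) * math.cos(a2) * math.cos(dA)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))  # clamp for safety

print(angular_separation(0, 0, 0, 90))    # ~90: quarter turn along the horizon
print(angular_separation(0, 0, 90, 45))   # ~90: horizon to zenith
```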

Monday 25 January 2010

big bang theory - Astronomy questions I wondered for hours

I read a question posted by a community member in my community that shocked me; I think it contains some pretty cool astronomy questions that I can't wait for you scientists to answer. I am asking this from , because this is where all the real scientists are, and I want to see the truth, a real explanation, brief but simple.



I won't copy the exact question details because that would make this a duplicate, but the question is posted here; please take the time to read the details and explain it to me. It is posted here:
http://srilanka.answercup.com/question/215/how-can-i-convince-my-atheist-friend-that-god-exists/



I am fascinated by his explanation; I can't wait for the astronomy community to answer the questions this guy is asking.



OK, this is the question. I am pasting it here after being requested to by a member in the comment section:



His title is:
How can I convince my atheist friend that God exists???



Apparently he is a religious person who has some points to make.
This is the description of his question:




There is this friend that I have who believes in science and doesnt believe in religion. We have had many arguements, and it raises many interesting questions. I am posting this to see Answercup users opinion on it. If scientists say that Big bang is proven I have some simple questions . What happened before the big bang according to these scientists? And this big bang theory came from scientists thinking that the universe expands so if we reverse the time, it will do the opposite and come to one single point....but then if it did come from a point, I found that the expansion is INCREASING which is weird. How on earth can it be a blast , because obv like any blast it should decreases and come to a hault , here instead of the universe expansion decelerating its actually accelerating. How can these so called scientists tell something is right when common sense which can be understood by primary school children shows clearly that they are wrong ??? Plus how can you seriously get a blast from empty space ? If scientist says thats the beginning of time.




With his question, I remembered watching the video "Does God Exist?" by Stephen Hawking on the Discovery Channel. His explanation wasn't complete, and he bragged about how there is no need for God, but his explanation never covered many parts. And then there are scientists who give the excuse that the laws of physics break down at the point of the Big Bang, so we can't explain it. So this brought a new question to my mind: if scientists still can't understand what happened in the Big Bang, how can they tell that God doesn't exist, and so on, and make these claims appear as facts? If they knew the physics at the Big Bang and then talked, that would be fine, but here they are talking without even knowing it.



Source: http://srilanka.answercup.com/question/215/how-can-i-convince-my-atheist-friend-that-god-exists/

the sun - How to get the longitude/latitude from solar zenith/azimuth?

Well let's see:



Local Standard Time Meridian (LSTM)



The Local Standard Time Meridian (LSTM) is a reference meridian used for a particular time zone and is similar to the Prime Meridian, which is used for Greenwich Mean Time.
The (LSTM) is calculated according to the equation:
$$
LSTM = 15^{\circ} \cdot \Delta T_{GMT}
$$
where $\Delta T_{GMT}$ is the difference of the Local Time (LT) from Greenwich Mean Time (GMT) in hours.



Equation of Time (EoT)



The equation of time (EoT) (in minutes) is an empirical equation that corrects for the eccentricity of the Earth's orbit and the Earth's axial tilt.
$$
EoT = 9.87 \sin(2B) - 7.53 \cos(B) - 1.5 \sin(B)
$$



Where $B = \frac{360}{365}(d - 81)$ in degrees and $d$ is the number of days since the start of the year.



Time Correction Factor (TC)



The net Time Correction Factor (in minutes) accounts for the variation of the Local Solar Time (LST) within a given time zone due to the longitude variations within the time zone and also incorporates the EoT above.
$$
TC = 4(\mathrm{Longitude} - LSTM) + EoT
$$
The factor of 4 minutes comes from the fact that the Earth rotates 1° every 4 minutes.



Local Solar Time (LST)



The Local Solar Time (LST) can be found by using the previous two corrections to adjust the local time (LT).
$$
LST = LT + \frac{TC}{60}
$$



Hour Angle (HRA)



The Hour Angle converts the local solar time (LST) into the number of degrees which the sun moves across the sky. By definition, the Hour Angle is 0° at solar noon. Since the Earth rotates 15° per hour, each hour away from solar noon corresponds to an angular motion of the sun in the sky of 15°. In the morning the hour angle is negative, in the afternoon the hour angle is positive.
$$
HRA = 15^{\circ}(LST - 12)
$$



Declination angle:



The declination angle, denoted by $\delta$, varies seasonally due to the tilt of the Earth on its axis of rotation and the rotation of the Earth around the sun. If the Earth were not tilted on its axis of rotation, the declination would always be 0°. However, the Earth is tilted by 23.45° and the declination angle varies plus or minus this amount. Only at the spring and fall equinoxes is the declination angle equal to 0°.
$$
\delta = 23.45^{\circ} \sin\left[\frac{360}{365}(d - 81)\right]
$$



where d is the day of the year with Jan 1 as d = 1.



Elevation angle:



The elevation angle is the angular height of the sun in the sky measured from the horizontal.
$$
alpha = sin^{-1}left[sindelta sinphi + cosdelta cosphi cos(HRA)right]
$$
Where $delta$ is the declination angle, $phi$ is the local latitude and HRA is the Hour angle.





Azimuth angle:



The azimuth angle is the compass direction from which the sunlight is coming. At solar noon, the sun is always directly south in the northern hemisphere and directly north in the southern hemisphere.
$$
Azimuth = cos^{-1}left[frac{sindelta cosphi - cosdelta sinphi cos(HRA)}{cosalpha}right]
$$



Where $delta$ is the declination angle, $phi$ is the local latitude and HRA is the Hour angle.





Zenith angle:



The zenith angle is the angle between the sun and the vertical. The zenith angle is similar to the elevation angle but it is measured from the vertical rather than from the horizontal, making the zenith angle the complement of the elevation angle:
$$
Zenith = 90^{o} - alpha
$$



Where $alpha$ is the elevation angle.




Note that your input parameters are going to be:



  • Longitude

  • $Delta T_{GMT}$ is the difference of the Local Time (LT) from Greenwich Mean Time (GMT) in hours

  • LT local military time in hours

  • $phi$ the local latitude

  • d the day of the year
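Chaining the formulas above gives a compact routine. The following Python sketch is illustrative only: the function name, the returned dictionary, the clamping of the inverse-trig arguments, and the afternoon flip of the azimuth (a common convention not stated above) are my own choices, not a validated solar calculator.

```python
import math

def solar_position(longitude, delta_t_gmt, local_time, latitude, day):
    """Chain the formulas above. Angles in degrees, times in hours.
    longitude: local longitude; delta_t_gmt: LT - GMT in hours;
    local_time: local clock time (24 h); latitude: phi; day: day of year."""
    lstm = 15.0 * delta_t_gmt                                  # LSTM, degrees
    b = math.radians(360.0 / 365.0 * (day - 81))               # B, as an angle
    eot = 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)  # minutes
    tc = 4.0 * (longitude - lstm) + eot                        # minutes
    lst = local_time + tc / 60.0                               # hours
    hra = 15.0 * (lst - 12.0)                                  # degrees
    decl = 23.45 * math.sin(b)                                 # declination, degrees
    d, phi, h = (math.radians(a) for a in (decl, latitude, hra))
    sin_alpha = math.sin(d) * math.sin(phi) + math.cos(d) * math.cos(phi) * math.cos(h)
    alpha = math.degrees(math.asin(max(-1.0, min(1.0, sin_alpha))))
    cos_az = ((math.sin(d) * math.cos(phi) - math.cos(d) * math.sin(phi) * math.cos(h))
              / math.cos(math.radians(alpha)))
    azimuth = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))  # clamp for roundoff
    if hra > 0:                       # common convention: measure west of north after noon
        azimuth = 360.0 - azimuth
    return {"eot": eot, "tc": tc, "lst": lst, "hra": hra,
            "declination": decl, "elevation": alpha,
            "zenith": 90.0 - alpha, "azimuth": azimuth}
```

At day 81 the declination and B are zero, so with longitude = LSTM and the clock time offset by the EoT the sun sits due south at an elevation of 90° minus the latitude, which makes a convenient sanity check.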

graph theory - Eigenvector centrality

The Wikipedia article quoted by Jon Bannon mentions the power-iteration method as readily applicable -- and this is in my experience (for connected graphs with degrees < 5) quite efficient, say starting with the vector with weight 1 for every site. The same Wikipedia article mentions several other choices for measuring centrality, besides "eigenvector centrality". But it does not mention some choices indicated in D. J. Klein, "Centrality Measure in Graphs", J. Math. Chem. 47 (2010) 1209-1223. There the centrality measure is suggested to be related to a choice of metric or semimetric D on the graph. A couple of choices for D yield centrality measures very similar to common measures, and a new "resistive centrality" is noted to result from the choice of D as the "resistance distance" metric.
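The power-iteration recipe mentioned above (start with weight 1 on every site, repeatedly apply the adjacency matrix, rescale) is a few lines of code. This sketch uses my own function name and a dict-of-neighbours graph representation; it assumes a connected, non-bipartite graph so the iteration converges to the dominant (Perron) eigenvector.

```python
def eigenvector_centrality(adj, iters=200, tol=1e-10):
    """Power iteration for eigenvector centrality.
    adj: dict mapping node -> list of neighbours (undirected graph).
    Assumes the graph is connected and non-bipartite, otherwise the
    normalized iterates need not converge."""
    nodes = list(adj)
    x = {v: 1.0 for v in nodes}              # weight 1 for every site
    for _ in range(iters):
        y = {v: sum(x[u] for u in adj[v]) for v in nodes}   # y = A x
        norm = max(abs(val) for val in y.values()) or 1.0   # max-norm rescaling
        y = {v: val / norm for v, val in y.items()}
        done = max(abs(y[v] - x[v]) for v in nodes) < tol
        x = y
        if done:
            break
    return x
```

The result is normalized so the most central node has weight 1; only the ratios between entries are meaningful.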

Sunday 24 January 2010

Deciding when an infinite graph is connected

My original example was not locally finite; this is a different example which is locally finite.



Given a Turing machine T, let GT be the graph whose vertex set is {-1,+1}×ℤ, and (a,n) is connected to (b,m) if and only if either a = b and |m-n| = 1, or a ≠ b and T halts (with blank input) in exactly |m - n| steps. This is computable since it is decidable whether T halts in a given number of steps. The automorphism group of GT acts transitively since the maps (a,n) → (±a,n+k) are always automorphisms. The graph GT is connected if and only if T eventually halts. Since the halting problem is undecidable, there is no algorithm that will uniformly decide whether GT is connected.
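The adjacency rule can be transcribed literally; here `halts_in_exactly` is a hypothetical stand-in for "run T on blank input for k steps and check whether it halts at exactly step k", which is the decidable part of the construction.

```python
def make_adjacency(halts_in_exactly):
    """Adjacency relation for the graph G_T described above.
    halts_in_exactly(k) -> True iff the machine T, on blank input,
    halts at exactly step k (decidable: just simulate T for k steps)."""
    def adjacent(v, w):
        (a, n), (b, m) = v, w
        if a == b:
            return abs(m - n) == 1           # each copy of Z is a two-way path
        return halts_in_exactly(abs(m - n))  # cross edges encode halting
    return adjacent
```

The graph is connected exactly when some cross edge exists, i.e. when T halts, which is what makes connectedness undecidable.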

earth - the length of second, minute or hour, what defines the time of exo planetary bodies

We measure time based on Earth, and Earth's mass/gravity plays a role in the measurement? So how, and what, would time be on other exoplanets, if not Earth time, since they will have different mass/gravity and probably more variables affecting time measurement?



I can say Gliesa has a day of 30 hrs (assumed).



So will the length of a second, minute and hour, and likewise a day or month, fall under its own time system?

Does the Moon have any oxygen in its atmosphere?

Just to add to GreenMatt's answer, according to the article "The Lunar Atmosphere: History, Status, Current Problems and Context" (Stern, 1999), the lunar atmosphere is in fact a tenuous exosphere, which the authors describe as being composed of




"independent atmospheres" occupying the same space.




This is further elaborated in "The Lunar Dusty Exosphere: The Extreme Case of an Inner Planetary Atmosphere" (NASA), that




in direct response to these intense and variable environmental drivers, the Moon releases a low density neutral gas forming a collisionless atmosphere. This ~100 tons of gas about the Moon is commonly called the lunar surface-bounded exosphere




There is also an ionosphere, due to (from the NASA article):




Ions are also created directly either by surface sputtering or subsequent neutral photoionization, forming a tenuous exo-ionosphere about the Moon.




The authors also suggest that ionic oxygen may be present due to surface sputtering.



Due to solar radiation and the solar wind, dust particles become charged as well, and can subsequently be lofted from the lunar surface.

Saturday 23 January 2010

nt.number theory - Is the ABC conjecture known to imply the Riemann hypothesis?

I am pretty sure that the answer to the question is no: no two of those big conjectures are known to imply the third. But I feel somewhat sheepish giving this as an answer: what evidence can I bring forth to support this, and if nothing, why should you believe me?



The only thing I can think of is that in the function field case, ABC and GRH are fully established, but only parts of BSD are known.



(Maybe I should also admit that I didn't know anything about the connection between ABC and bounds on Shafarevich-Tate groups of elliptic curves in terms of the conductor until I glanced just now at the paper of Goldfeld the OP linked to. The fact that you can build examples of large Sha from triples of integers with large ABC exponent is amazing to me.)



Addendum: I feel especially confident that ABC and GRH do not imply BSD, at least not the part of BSD that asserts finiteness of Shafarevich-Tate groups. The first two conjectures are essentially analytic in nature, whereas the finiteness of Sha is deeply arithmetic. It seems extremely unlikely.



Moreover, ABC is really hard, in the sense that for all of the results of the form "X implies ABC" that I've ever seen, X includes a statement which is ABC-like in the sense that it gives a uniform bound on one arithmetic quantity in terms of another. For example, ABC is known to be of a similar flavor to the Szpiro Conjecture (and implies it), but so far as I know it is only known to be implied by a more-explicitly-ABC-like Modified Szpiro Conjecture. Admittedly bounding Sha in terms of the conductor, as in Goldfeld's work, is only vaguely ABC-like, but to an arithmetic geometer like me these bounds still feel very "analytic"; I can't see any connection at all between this and BSD. So I doubt that GRH (let me say ERH, so that I more or less know what I'm talking about -- i.e., Dedekind zeta functions) plus BSD is known to imply ABC.

Friday 22 January 2010

amateur observing - How Does a Refractor Telescope Work?

All telescopes have in common that they gather and focus light from faraway objects. They use a primary optical element, such as a concave mirror or a (plano- or bi-)convex lens (or lens system), and they use an eyepiece with another lens system (for viewing) or a camera in their primary focus.



A refractor telescope does not sharpen the image per se. The convex lens concentrates the light rays, not unlike a magnifying glass. To actually focus the enlarged image on your retina, you need an eyepiece, which is another bi-convex lens (in its most simple form). This will re-align the light rays after they have passed through the primary focus. See this image for a visual explanation:



Light ray sketch for a refracting telescope
Source: Wikipedia



The above image also explains why the image of a refracting telescope appears upside down. You don't need (or want!) any prisms in this kind of setup.



On the other hand, a reflector telescope uses a concave mirror plus an eyepiece. There are different configurations, but one of the most simple and most common ones is the Newtonian telescope:



Newton telescope ray diagram
Source: Wikipedia



So instead of the refraction of light by a lens, we use the reflection of light off a mirror to enlarge the image. Focusing on the retina is again done by an eyepiece, in the same way as with the refracting telescope.



The advantage of refracting telescopes is that there is no obstruction in the optical path inside the telescope. This is not the case with reflector telescopes. They usually have a secondary mirror in the middle of the optical path, hence reducing light-gathering performance.



On the other hand, reflector telescopes are often much lighter, and cheaper to assemble. Also, very compact models of reflector telescopes can be built.



Also, simple refractor telescopes will produce colourful fringes on object edges, called chromatic aberration, which is due to the dispersion of the glass used in the lenses. This can be compensated for with multiple lenses, but that will make the refractor even heavier and more expensive.

Thursday 21 January 2010

galaxy - Radial Density Profile Equation

The Virgo Galaxy Cluster has a mass of $10^{14} M_{odot}$ and its centre is $16 Mpc$ from Earth. The large elliptical galaxy $M87$ lies at the centre of the Virgo cluster. $M87$ has a supermassive black hole at its centre with an estimated mass of $6 times 10^9 M_{odot}$. Take Hubble's constant to be $H_0=70 km s^{-1} Mpc^{-1}$.



Taking the Virgo cluster to be spherically symmetric with a radial density profile given by



$rho(r)=rho_0 (frac{r}{1Mpc})^{-2}$,



Determine the value of the constant $rho_0$ in S.I. units, assuming the radius of the Virgo cluster is 1 Mpc.




I am confused with how to approach this question,
I know that density $rho=frac{M}{frac{4}{3} pi r^3}$, when I substitute it into the given radial density profile, the $r$ variable doesn't cancel, should I substitute the radius of the cluster into $r$? Is it really that straight forward?
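One way to see why the $r$ does not simply cancel: the profile must be integrated against the spherical volume element, $M = int_0^R rho_0 (r/1 Mpc)^{-2} , 4pi r^2 , dr = 4pi rho_0 (1 Mpc)^2 R$, where the $r^2$ from the volume element cancels the $r^{-2}$ of the profile. A quick numerical check under my own choice of constants (solar mass and Mpc values are standard approximations, the function name is mine):

```python
import math

M_SUN = 1.989e30    # kg, one solar mass (approximate)
MPC = 3.0857e22     # m, one megaparsec (approximate)

def rho0_si(cluster_mass_msun=1e14, radius_mpc=1.0):
    """For rho(r) = rho0 (r / 1 Mpc)^-2, the enclosed mass is
    M = integral of rho * 4 pi r^2 dr from 0 to R
      = 4 pi rho0 * (1 Mpc)^2 * R,
    so rho0 = M / (4 pi (1 Mpc)^2 R), in kg m^-3."""
    M = cluster_mass_msun * M_SUN
    R = radius_mpc * MPC
    return M / (4.0 * math.pi * MPC ** 2 * R)
```

With the numbers in the question this comes out to a few times $10^{-25}$ kg m$^{-3}$.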

co.combinatorics - Algorithm for decomposing permutations

Is there an algorithm for solving the following problem: let $g_1,ldots,g_n$ be permutations in some (large) symmetric group, and $g$ be a permutation that is known to be in the subgroup generated by $g_1,ldots,g_n$, can we write $g$ explicitly as a product of the $g_i$'s?



My motivation is that I'm TAing an intro abstract algebra course, and would like to use the Rubik's cube to motivate a lot of things for my students, and would, in particular, like to show them an algorithm to solve it using group theory. (That is, I can write down what permutation of the cubes I have, and want to decompose it into basic rotations, which I then invert and do in the opposite order to get back to the solved state.) Though I'm interested in the more general case, not just for the Rubik(n) groups, if a solution works out.



Note: I don't really know what keywords to use for solving this problem, if someone can point me to the right search terms to google to get the results I'm looking for, I'll gladly close this.
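For groups small enough to enumerate, the naive version of this is a breadth-first search over the generated subgroup, recording a word in the generators for every element reached; Schreier-Sims-style stabilizer-chain algorithms do the same job without enumerating everything. A sketch under that caveat (permutations as tuples mapping $i mapsto p[i]$; all names are my own):

```python
from collections import deque

def compose(p, q):
    """(p o q)(i) = p[q[i]] -- apply q first, then p."""
    return tuple(p[i] for i in q)

def decompose(g, gens):
    """BFS over the subgroup generated by gens.  Returns a list of
    generator indices w such that g = gens[w[-1]] o ... o gens[w[0]],
    or None if g is not in the subgroup.  Only feasible when the whole
    subgroup fits in memory (so not for the real Rubik's cube group)."""
    identity = tuple(range(len(g)))
    words = {identity: []}
    frontier = deque([identity])
    while frontier:
        h = frontier.popleft()
        for idx, s in enumerate(gens):
            nh = compose(s, h)
            if nh not in words:
                words[nh] = words[h] + [idx]
                frontier.append(nh)
    return words.get(g)
```

BFS also yields a shortest word in the generators, which is exactly the "God's algorithm" question that becomes hard at Rubik's-cube scale.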

Wednesday 20 January 2010

geometry - Getting rid of exceptional fibers by passing to finite covers?

If the Seifert fiber space is compact, then this is true, as long as the base orbifold is "good", which means that it has a finite-sheeted manifold cover, which is a compact surface. This induces a cover of the Seifert fiber space which is a circle bundle over the surface. If the base orbifold is bad, then no such covering will exist. This can happen for a Seifert fibering of $S^3$ over a football orbifold with distinct orders of torsion points, or over a teardrop orbifold.
If the Seifert fiber space is non-compact, then there may be infinitely many exceptional fibers, and the base orbifold might have torsion of arbitrarily large order, so there is no hope of finding a finite-index cover which is a circle bundle.



See the draft of Thurston's book for more information on orbifolds and Seifert fibered spaces. Exercise 5.7.10 is on the Seifert fibering of $S^3$ over bad orbifolds.

computability theory - Intermediate value theorem on computable reals

Thanks first to Andrej for drawing attention to
my paper on the IVT,
and indeed for his contributions to the work itself.
This paper is the introduction to Abstract Stone Duality
(my theory of computable general topology) for the general mathematician,
but Sections 1 and 2 discuss the IVT in traditional language first.
The following are hints at the ideas that you will find there and
at the end of Section 14.



I think it's worth starting with a warning about the computable
situation in ${bf R}^2$, where it is customary to talk about fixed
points instead of zeroes.
Gunter Baigger

described

a computable endofunction of the square.
The classical Brouwer theorem says that it has a fixed point,
but no such fixed point can be defined by a program.
This is in contrast to the classical response to the constructive
IVT, that either there is a computable zero, or the function
hovers at zero over an interval.
(I have not yet managed to incorporate Baigger's counterexample
into my thinking.)



Returning to ${bf R}^1$, we have a lamentable failure of classical
and constructive mathematicians to engage in a meaningful debate.
The former claim that the result in full generality is "obvious",
and argue by

quoting random fragments of what their opponents have said in
order to make them look stupid
.
On the other hand, to say that
"constructively, the intermediate value theorem fails"
by showing that it implies excluded middle
is equally unconstructive.



Even amongst mainstream mathematicians several arguments are conflated,
so I would like to sort them out on the basis of
the generality of the functions to which they apply.



On the one hand we have the classical IVT, and the approximate
constructive one that Neel mentions. These apply to any
continuous function with $f(0) < 0 < f(1)$.



There are several other results that impose other pre-conditions:



  • the exact constructive IVT, for non-hovering functions,
    described by Reid;


  • using Newton's algorithm,
    for continuously differentiable functions
    such that $f(x)$ and $f'(x)$ are never simultaneously zero; and


  • the Brouwer degree,
    with an analogous condition in higher dimensions.


These conditions are all weaker forms of saying that the function is
an open map.



Any continuous function $f:Xto Y$ between compact Hausdorff spaces
is proper: the inverse image $Z=f^{-1}(0)subset X$
of $0in Y$ is compact (albeit possibly empty).



If $f:Xto Y$ is also an open map then $Z$ is overt too.
I'll come back to that word in a moment.



When $f$ is an open map between compact Hausdorff spaces and $Z$
is nonempty, there is a compact subspace $Ksubset X$ and an open
one $Vsubset Y$ with $0in V$ and $Vsubset f(K)$.



So for real manifolds we might think of $K$ as a (filled-in) ball
and $f(K)setminus V$ as the non-zero values that $f$
takes on the enclosing sphere.



Could I have forgotten that the original question was about
computability?



No, that's exactly what I'm getting at.



In ${bf R}^1$ an enclosing sphere is a straddling interval,
$[d,u]$ such that $f(d) < 0 < f(u)$ or $f(d) > 0 > f(u)$.



The interval-halving (or, I suspect, any computational) algorithm
generates a convergent sequence of straddling intervals.
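A minimal sketch of that algorithm, assuming $f(d) < 0 < f(u)$ (the generator form and names are my own):

```python
def straddling_intervals(f, d, u, tol=1e-12):
    """Interval halving: given f(d) < 0 < f(u), yield a shrinking
    sequence of straddling intervals [d, u] converging on a zero of f.
    For 'hovering' functions this still converges, but only to an
    interval-valued zero rather than a computable point."""
    assert f(d) < 0 < f(u)
    while u - d > tol:
        yield (d, u)
        m = 0.5 * (d + u)
        fm = f(m)
        if fm == 0:            # in floating point we may land on a zero exactly
            d = u = m
            break
        if fm < 0:             # keep the half that still straddles zero
            d = m
        else:
            u = m
    yield (d, u)
```

Each step uses only the sign of $f$ at the midpoint, which is exactly the $lozenge$-style disjunction discussed next.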



More abstractly, write $lozenge U$ if the open subset $U$ contains
a straddling interval.
The interval-halving algorithm (known historically as the
Bolzano--Weierstrass theorem or
lion hunting)
depends exactly on the property that $lozenge$ takes unions to
disjunctions, and in particular
$$ lozenge(Ucup V) Longrightarrow lozenge U lor lozenge V. $$
(Compare this with the Brouwer degree, which takes disjoint unions
to sums of integers.)



I claim, therefore, that the formulation of the constructive IVT
should be the identification of suitable conditions (more than
continuity but less than openness) on $f$ in order to prove the
above property of $lozenge$.



Alternatively, instead of restricting the function $f$,
we could restrict the open subsets $U$ and $V$.
This is what the argument at the end of
Section 14
of my paper does.
This gives a factorisation $f=gcdot p$ of any continuous
function $f:{bf R}to{bf R}$ into a proper surjection $p$
with compact connected fibres and a non-hovering map $g$.



To a classical mathematician, $p$ is obviously surjective
in the pointwise sense, whereas this is precisely the situation
that a constructivist finds unacceptable.
Meanwhile, they agree on finding zeroes of $g$.



In fact, this process finds interval-valued zeroes of
any continuous function that takes opposite signs, which was
the common sense answer to the question in the first place.



The operator $lozenge$ defines an overt subspace,
but I'll leave you to read the paper to find out what that means.

ca.analysis and odes - Derivate Bessel Function with respect to order

Abramowitz and Stegun give a couple of special cases but don't give a general result. Starting from some of the integral or series representations and differentiating you can get a corresponding integral or series for the derivative, but I would guess that it's unlikely to simplify to a "known" function in the general case. An example they give is (for the spherical Bessel function $j_nu(x)$):



$$[ frac{d}{dnu} j_nu(x) ]_{nu=0} = frac{1}{x}(operatorname{Ci}(2x)sin x - operatorname{Si}(2x)cos x)$$



They also give examples evaluated at $nu=-1$ and similar results for the case of the "other" spherical Bessel function $y_nu(x)$.
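This special case is easy to check numerically with a central difference in the order; note that the match occurs with prefactor $1/x$, consistent with $j_nu(x) = sqrt{pi/2x}, J_{nu+1/2}(x)$ combined with the known formula for $partial J_nu/partialnu$ at $nu = 1/2$. All function names below are my own, and everything is computed from power series with stdlib calls only:

```python
import math

EULER_GAMMA = 0.5772156649015329

def sph_jv(nu, x, terms=30):
    """Spherical Bessel j_nu(x) of real order via its power series:
    j_nu(x) = (sqrt(pi)/2) * sum_k (-1)^k / (k! Gamma(k+nu+3/2)) (x/2)^(2k+nu)."""
    return math.sqrt(math.pi) / 2 * sum(
        (-1) ** k / (math.factorial(k) * math.gamma(k + nu + 1.5))
        * (x / 2) ** (2 * k + nu)
        for k in range(terms))

def si(x, terms=30):
    """Sine integral Si(x) = sum_k (-1)^k x^(2k+1) / ((2k+1)(2k+1)!)."""
    return sum((-1) ** k * x ** (2 * k + 1)
               / ((2 * k + 1) * math.factorial(2 * k + 1)) for k in range(terms))

def ci(x, terms=30):
    """Cosine integral Ci(x) = gamma + ln x + sum_{k>=1} (-1)^k x^(2k) / (2k (2k)!)."""
    return EULER_GAMMA + math.log(x) + sum(
        (-1) ** k * x ** (2 * k) / (2 * k * math.factorial(2 * k))
        for k in range(1, terms))

def dj_dnu_at_0(x, h=1e-6):
    """Central difference in the order nu at nu = 0."""
    return (sph_jv(h, x) - sph_jv(-h, x)) / (2 * h)
```

The series converge quickly for moderate $x$, so agreement to several digits is a reasonable expectation.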

Tuesday 19 January 2010

big picture - Various concepts of "closure" or "completion" in mathematics

Ah! You edited your question! I had to delete my answer. Anyway, here is a general scenario where idempotent operations such as the one you want arise:



In this paragraph I am going to be vague, but the examples given below should illustrate what I have in mind. OK, so: you have "some structure" somewhere, and you want to pass to the "maximal" such thing. You have a natural ordering on the structures you want, and it so happens that the union of a chain of such things is again such a thing. Then you apply Zorn's lemma to find the maximal thing. This operation of going and finding the maximal thing is an "idempotent completion" in your sense.



There are plenty of examples. A few:



$1$. A set of linearly independent vectors in a vector space is enlarged to a basis.



$2$. An algebraic extension of a field is enlarged to the algebraic closure.



$3$. A separable extension of a field, is enlarged to separable closure.



$4$. A differentiable atlas on a smooth manifold is enlarged to a maximal one, ie., a differentiable structure.



$5$. A certain functional on a Banach space is enlarged to fill the whole space, as in the proof of the Hahn-Banach theorem.



And so on, nearly in fact every application of Zorn's lemma.



This is not a functorial way to go; but the construction as an operation is idempotent. And in some way, such as in the construction of the algebraic closure, we have an isomorphism of two such different constructions.

Sunday 17 January 2010

observation - Present distances between planet. How can I find them?

It's "commonly known" how distant are our solar system planets from Sun. But we can't easily say that about planets, which distances can differ greatly, without some observations (or simulations, knowing their state in some moment in time).



How can I check the 'actual' relative distances or positions of the planets?

Saturday 16 January 2010

algorithms - Water jug puzzle

There are n red & n blue jugs of different sizes and shapes. All
red jugs hold different amounts of water, as do the blue ones.
there is a blue jug that holds the same amount of water, and vice versa.
The task is to find a grouping of the jugs into pairs of red and blue jugs that hold the same
amount of water.



Operation allowed: Pick a pair of jugs in which one is red and one is blue, fill the red jug with water and then pour the water into the blue jug. This operation will tell you whether the red or the blue jug can hold more water, or if
they are of the same volume. Assume that such a comparison takes one time unit. Your goal is
to find an algorithm that makes a minimum number of comparisons to determine the
grouping.



You may not directly compare two red jugs or two blue jugs.




  1. Prove a lower bound of Ω(n lg n) for the number of comparisons an algorithm solving
    this problem must make.


  2. Give a randomized algorithm whose expected number of comparisons is O(n lg n)
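For part 2, the standard approach mirrors randomized quicksort: pick a random red jug as pivot, compare it against every blue jug to find its partner and partition the blues, then use that partner to partition the remaining reds, and recurse on the two halves. A sketch under the stated comparison model (names are my own):

```python
import random

def match_jugs(reds, blues, compare):
    """Randomized quicksort-style pairing, expected O(n lg n) comparisons.
    compare(r, b) pours red jug r into blue jug b and reports
    -1 / 0 / +1 as r holds less than / the same as / more than b.
    Red-red and blue-blue comparisons are never made."""
    if not reds:
        return []
    pivot = random.choice(reds)
    partner, smaller_b, larger_b = None, [], []
    for b in blues:                    # find the pivot's partner, partition blues
        c = compare(pivot, b)
        if c == 0:
            partner = b                # guaranteed to exist by the problem statement
        elif c > 0:
            smaller_b.append(b)
        else:
            larger_b.append(b)
    smaller_r, larger_r = [], []
    for r in reds:                     # partition the other reds against the partner
        if r == pivot:
            continue
        if compare(r, partner) < 0:
            smaller_r.append(r)
        else:
            larger_r.append(r)
    return (match_jugs(smaller_r, smaller_b, compare)
            + [(pivot, partner)]
            + match_jugs(larger_r, larger_b, compare))
```

As with quicksort, the worst case is quadratic, but a random pivot gives the required O(n lg n) expectation.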


fa.functional analysis - Maximum on unit ball (James' theorem).

James' theorem states that a Banach space $B$ is reflexive iff every bounded linear functional on $B$ attains its maximum on the closed unit ball in $B$.



Now I wonder if I can drop the constraint that it is a ball and replace it by "convex set". That is, I want to know if every bounded linear functional on a reflexive Banach space $B$ attains its maximum on a closed and bounded convex set in $B$.



By Pietro's answer this is known to be true. Is the maximum unique? In optimization by vector space methods it is known that this is true if the set is the closed ball. This was actually my biggest question, since I want to show that an optimization problem has a unique solution.

Thursday 14 January 2010

cosmology - Is gravity a source of infinite energy at a cosmological scale

As I understand it, gravity cannot be attenuated by any medium (in the way that EM radiation can be, for instance).



Does this, then, not make it a source (theoretically, I am not talking of practicalities) of infinite energy - if we assume the universe itself is infinite and looks the same everywhere (ie there are objects with mass everywhere)?



Or is it more correct to say that gravity has a net contribution of nothing to the universe's energy density because the attractional energy is balanced by a negative potential energy?

set theory - Is it possible for countably closed forcing to collapse $aleph_2$ to $aleph_1$ without collapsing the continuum?

Here is, I think, a partial answer. I believe I can show that as long as a countably closed forcing adds a new $omega_1$-sequence, the continuum is collapsed below the size of the poset. I am not sure if you can do better.



Prop. Let $mathbb{P}$ be a countably closed notion of forcing such that $Vdashdot{f}:omega_1rightarrow ON,dot{f}notin V$. Then, if $G$ is $mathbb{P}$-generic over $V$ we will have $V[G]vDash 2^omegaleq |mathbb{P}|$.



Pf: It's enough to show that in $V[G]$, $|mathcal{P}(omega)cap V|leq |mathbb{P}|$ (because $mathbb{P}$ is countably closed). Note that for each $pinmathbb{P}$ there is some $alpha<omega_1$ such that $p$ doesn't decide $dot{f}(alpha)$ (otherwise $f$ could be defined in $V$); let $alpha(p)$ denote the least such $alpha$. Let $beta_0(p)<beta_1(p)$ be the least ordinals $beta$ such that there's $qleq p$ for which $qVdashdot{f}(alpha(p))=beta$.



Fix in $V$ a well-ordering $prec$ of $mathbb{P}$. Now, working in $V[G]$, we associate to each $qinmathbb{P}$ an $x_qsubseteqomega$ as follows. Inductively define a descending sequence of conditions $q_0geq q_1ldots geq q_nldots $ by $q_0=q$, $q_{n+1}$ is the $prec$-least member of $G$ below $q_n$ which decides $dot{f}(alpha(q_n))$. Let $x_q= {nin omega|f(alpha(q_n))=beta_0(q_n)}$ .



To finish we just have to show that for each $xinmathcal{P}(omega)$ that the set $D_x={rinmathbb{P}|rVdash(exists qin dot{G})x=dot{x_q}}$ is dense. Let $pinmathbb{P}$ be a fixed condition. Inductively define $p_0geq p_1geq ldots p_ngeq $. Set $p_0=p$. If $nin x$ set $p_{n+1}$ to be the $prec$-least member of $mathbb{P}$ with $p_{n+1}leq p_n$ and $p_{n+1}Vdashdot{f}(alpha(p_n))=beta_0(p_n)$; if $nnotin x$ then do the same thing but have $p_{n+1}Vdashdot{f}(alpha(p_n))=beta_1(p_n)$. Then let $r$ be below all the $p_n$. Then $rin D_x$, with $p$ as our witnessing $q$.

USNO moon images look quite different?

This is an unofficial explanation, but I doubt you'll get an answer from official sources here, unless you ask them directly (e.g. their Twitter account is @NavyOceans), so I'll give it a shot. The small thumbs appear to be merely approximate, representational icons that aren't resized from the real-time image, but are prepared-in-advance images displayed depending on the current Moon phase, as calculated by the server.



If you try their Moon phase image generator (at the bottom of this page), you'll see why they did it like that. The prepared thumbs are numbered from m180.gif for Full Moon to m360.gif for New Moon, with a 2° step, and apparently the server calculates the Moon phase based on the input parameters and displays the prepared thumbnail with a minimum approximation precision of 2° (roughly a 4-hour step). The same page also offers an explanation for how this thumbnail generation works:




These lunar phase images were created by R. Schmidt from ray-traced
images of the Moon. A Clementine spacecraft mosaic of the lunar
surface was mapped onto a sphere, and scenes were rendered as a
virtual Sun "orbited" the Moon. The depiction of lunar surface
features suffers geometric distortion but the terminator is correct
with respect to the spherical Moon.




The lunar phase calculator offers date selection that goes forward in time or before the time any images of actual observations were made and stored, so it's unreasonable to expect actual images displayed for, say, some date in 22nd or 19th century. Some examples off U.S. Naval Observatory's server:



m180.gif (Full Moon), m246.gif (Waning Gibbous), m270.gif (Last Quarter), m294.gif (Waning Crescent), m360.gif (New Moon)
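The naming scheme (m180.gif through m360.gif in 2° steps) can be mimicked with a small helper; note this is inferred from the page layout, not an official USNO API:

```python
def thumb_filename(phase_angle_deg):
    """Guess the USNO thumbnail name for a waning-phase angle in [180, 360]
    degrees: m180.gif (Full Moon) ... m360.gif (New Moon), 2-degree steps.
    The scheme is inferred from the page, not documented by USNO."""
    step = int(round(phase_angle_deg / 2.0)) * 2   # snap to nearest even degree
    step = max(180, min(360, step))                # clamp to the available range
    return "m%03d.gif" % step
```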



The larger image of how the Moon looks now seems to be generated by a much more precise renderer (the image is refreshed every minute by a JavaScript call) that takes into account not merely the lunar phase, but also its libration with respect to an observer on the Earth's surface. It would look more like a frame of this animation:



                         Tidal locking of the Moon with the Earth



                         Fig. 2: Lunar librations in latitude and longitude over a period of one month (Source: Wikipedia)



And the inclination is likely calculated for the geolocation of the Astronomical Applications Department of the U.S. Naval Observatory in Washington, DC. The page, however, doesn't explain whether this assumption of mine is correct, but it would certainly appear so.

dg.differential geometry - Non-commutative versions of X/G

Noncommutative versions of sheaves and holomorphic functions are not very well understood. Better understood are noncommutative versions of measurable, continuous, or smooth functions. I generally work with the continuous functions, i.e. $C^* $-algebras, or various subalgebras that deserve to be called smooth. I'll describe things in the $C^*$-framework.



What came to mind immediately for me is the notion of strong Morita equivalence, due to Rieffel. It works like this: suppose you have a locally compact group $G$ acting on a $C^* $- algebra $A$ (think of $A$ as $C(X)$ here). You can form what is called the crossed product algebra, which is a $C^*$-algebra containing $A$ and $G$, and where the action of $G$ on $A$ is implemented via conjugation by $G$; i.e. if $a in A$ and $g in G$, then $g a g^* = alpha_g(a)$, where $alpha$ is the action.



This can be done when $A$ is unital or not, and $G$ can be discrete or not. The resulting algebra, which I would denote $A times_alpha G$, is unital if and only if $A$ is unital and $G$ is discrete.



Now suppose that $X$ is a compact Hausdorff space with an action of $G$. Then $G$ also acts on $A = C(X)$, and so we can make the crossed product algebra $C(X) times_alpha G$. Here's the punchline: when the action of $G$ on $X$ is free and proper, so that the quotient $X/G$ is well-behaved, then the crossed product algebra is strongly Morita equivalent to the algebra $C(X/G)$ of functions on the quotient.



When the action is not free and proper, the quotient may be very bad (e.g. the integers acting on the circle by rotation by an irrational angle) and so the algebra $C(X/G)$ may be reduced to nothing more than scalars, and so be useless for obtaining any information about the quotient. In this case, one uses the crossed-product algebra as a sort of substitute for the algebra of functions on the quotient.



A reference for this is the paper "Applications of Strong Morita Equivalence to Transformation Group $C^*$-algebras", by Rieffel, which is available on his website. Unfortunately it doesn't have the definitions of crossed products (which he calls transformation group algebras), but the Wikipedia page is OK, although phrased just for von Neumann algebras.

gt.geometric topology - A problem/conjecture related to 4-manifolds that deserves a name. What name does it deserve?

There's an old problem in 4-manifold theory that, as far as I know, doesn't have a name associated with it and really deserves a name.



Let $M$ be a smooth 4-manifold with boundary. Let $S$ be a smoothly embedded 2-dimensional sphere in $partial M$. Assume $S$ does not bound a ball in $partial M$, but $S$ is null-homotopic in $M$. Does $S$ bound a smooth 3-ball in $M$? Perhaps you need to replace $S$ by another non-trivial $S'$ in $partial M$ before you can find a 3-ball in $M$ bounding it?



You could think of this as the co-dimension one analogue to Dehn's lemma for 4-manifolds. Usually when people talk about a Dehn lemma for 4-manifolds they're interested in the co-dimension 2 analogue.



Does this problem / conjecture have a name? If not, do you have a good name for it? Do you know of anywhere in the literature where this issue is investigated?



Off the top of my head the only vaguely related things I know about in the literature is a 1975 paper of Swarup's.

Wednesday 13 January 2010

peano arithmetic - Naturally definable sets of natural numbers (2): Can the circle be broken?

(follow-up to: Naturally definable sets of natural numbers)



Every formula $Psi(x)$ in the first-order language of Peano arithmetic defines a set of natural numbers. Some of these sets are finite, others are infinite. Every finite set $lbrace n_0, n_1, ..., n_k rbrace$ can be defined by an equation $p(x) = q(x)$ with $p(x), q(x)$ finite polynomials in $x$ with natural coefficients. Let in the following $phi(x)$ be such an equation [read "phi" for "finite"]. Infinite sets cannot be described by any $phi(x)$.



Given a formula $Omega(x)$ which defines an infinite set [read "omega" for "infinite"]. Then every formula of the form $Omega(x) vee phi(x)$ or $Omega(x)wedge negphi(x)$ defines an infinite set, too.



The motivation of the following definition is this: A formula defining an infinite set shall be called arbitrary if it is derived from a natural (= non-arbitrary) formula by adding or removing finitely many arbitrary elements.



Definition (wannabe): A formula $Omega(x)$ is arbitrary iff it defines an infinite set and is equivalent



  1. to a formula $omega(x) vee phi(x)$ with $phi(x) notrightarrow omega(x) $ or

  2. to a formula $omega(x) wedge neg phi(x)$ with $omega(x) notrightarrow negphi(x)$

where $omega(x)$ is not arbitrary. (Of course, $omega(x)$ defines an infinite set.)



On first sight, this definition seems circular:



Let $Omega(x) equiv omega(x) vee phi(x)$ with $phi(x) notrightarrow omega(x)$.



Then $omega(x) equiv Omega(x) wedge negphi'(x)$ with $Omega(x) notrightarrow negphi'(x)$.



Then $Omega(x)$ is arbitrary iff $omega(x)$ is not arbitrary.



Might this seemingly vicious circle not be in fact a (hidden) recursive definition (by something like "(abstract) length of formulas")?



Cannot this circle be broken? What about the intuition, that $(exists y) x = 2 cdot y$ is a non-arbitrary formula, but that $(exists y) x = 2 cdot y vee x = 17$ is an arbitrary one?

ag.algebraic geometry - Deformation theory of representations of an algebraic group

For an algebraic group G and a representation V, I think it's a standard result (but I don't have a reference) that



  • the obstruction to deforming V as a representation of G is an element of H^2(G, V⊗V*)

  • if the obstruction is zero, isomorphism classes of deformations are parameterized by H^1(G, V⊗V*)

  • automorphisms of a given deformation (as a deformation of V; i.e. restricting to the identity modulo your square-zero ideal) are parameterized by H^0(G, V⊗V*)

where the H^i refer to standard group cohomology (derived functors of invariants). The analogous statement, where the algebraic group G is replaced by a Lie algebra g and group cohomology is replaced by Lie algebra cohomology, is true, but the only proof I know is a big calculation. I started running the calculation for the case of an algebraic group, and it looks like it works, but it's a mess. Surely there's a long exact sequence out there, or some homological algebra cleverness, that proves this result cleanly. Does anybody know how to do this, or have a reference for these results? This feels like an application of cotangent complex ninjitsu, but I guess that's true of all deformation problems.



While I'm at it, I'd also like to prove that the obstruction, isoclass, and automorphism spaces of deformations of G as a group are H^3(G, Ad), H^2(G, Ad), and H^1(G, Ad), respectively. Again, I can prove the Lie algebra analogues of these results by an unenlightening calculation.



Background: What's a deformation? Why do I care?



I may as well explain exactly what I mean by "a deformation" and why I care about them. Last things first, why do I care? The idea is to study the moduli space of representations, which essentially means understanding how representations of a group behave in families. That is, given a representation V of G, what possible representations could appear "nearby" in a family of representations parameterized by, say, a curve? The appropriate formalization of "nearby" is to consider families over a local ring. If you're thinking of a representation as a matrix for every element of the group, you should imagine that I want to replace every matrix entry (which is a number) by a power series whose constant term is the original entry, in such a way that the matrices still compose correctly. It's useful to look "even more locally" by considering families over complete local rings (think: now I just take formal power series, ignoring convergence issues). This is a limit of families over Artin rings (think: truncated power series, where I set x^n = 0 for large enough n).



So here's what I mean precisely. Suppose A and A' are Artin rings, where A' is a square-zero extension of A (i.e. we're given a surjection f:A'→A such that I:=ker(f) is a square-zero ideal in A'). A representation of G over A is a free module V over A together with an action of G. A deformation of V to A' is a free module V' over A' with an action of G so that when I reduce V' modulo I (tensor with A over A'), I get V (with the action I had before). An automorphism of a deformation V' of V as a deformation is an automorphism V'→V' whose reduction modulo I is the identity map on V. The "obstruction to deforming" V is something somewhere which is zero if and only if a deformation exists.
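For concreteness (this is my own toy illustration, not part of the question): the smallest square-zero extension is the ring of dual numbers A' = k[eps]/(eps^2), surjecting onto A = k with kernel I = (eps) satisfying I^2 = 0. A deformation over A' perturbs each matrix entry by a multiple of eps:

```python
class Dual:
    """Dual numbers a + b*eps over the reals, with eps**2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a1 + b1 eps)(a2 + b2 eps) = a1*a2 + (a1*b2 + b1*a2) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def reduce(self):
        """The surjection A' -> A: kill the ideal I = (eps)."""
        return self.a

eps = Dual(0.0, 1.0)
square = eps * eps
print(square.a, square.b)       # 0.0 0.0 -- the ideal really is square-zero
print(Dual(2.0, 5.0).reduce())  # 2.0
```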



I should add that the obstruction, isoclass, and automorphism spaces will of course depend on the ideal I. They should really be cohomology groups with coefficients in V⊗V*⊗I, but I think it's normal to omit the I in casual conversation.

Collision of 2 black holes

It's called a black hole merger, or coalescence. Here is a simulation video.



Even the formation of the event horizons of the two initial black holes takes "super long" in Earth time, and the same applies to the merger. On the other hand, seen from a distance, we come very close to the completed merger within a short span of Earth time as soon as the merger starts. General relativity as well as quantum theory are incomplete regarding what happens very close to a presumed singularity or at the presumed event horizon; this will remain disputed until a satisfying theory of quantum gravity is found.



Mergers of black holes are likely to occur, e.g. when two galaxies collide: the momentum of the central supermassive black holes (SMBHs) is slowed by the consumption of gas, dust and stars, until the SMBHs merge into the central SMBH of the merged galaxy.
Here is a galaxy merger simulation.



Here is a simulation of the coalescence of two black holes within a collapsing star.



More on black hole binaries on Wikipedia.

Monday 11 January 2010

The destruction of the Universe by a bubble

I think I know what they mean. They're talking about a false vacuum scenario.



A vacuum state is a state of lowest energy. It's thought that the vacuum of our universe is in a lowest-energy state and is stable, and so nothing special will happen to spacetime. However, if our universe is actually a false vacuum, then it could merely be metastable, and some tiny perturbation could cause it to fall into a lower energy state - a true vacuum, or else a false vacuum of lower energy. The "bubble" that would appear would be a region of this lower-energy vacuum that would expand across the universe without stopping.



Scientists aren't sure if our universe is a false vacuum. Here's a chart of the masses of the Higgs boson and the top quark:
Higgs boson vs. top quark



The latest measurements suggest that their masses lie in a metastable region of the graph (which could be bad), although it's towards the stable end of the region (which is good). This vacuum catastrophe could still happen, but the odds aren't in its favor. I also suggest reading some of the excellent papers in the references section of the Wikipedia article. They're quite comprehensive.




Better explanation:



The universe can be thought of as space containing a variety of quantum fields. A quantum field might be best thought of as something that has a value at every point in space. Put together a bunch of these fields and you can describe particles, particle interaction, and all the matter and energy in a given region of space - in fact, in the whole universe!



Picture a region of space with absolutely nothing in it. Nothing. (I'm ignoring vacuum energy, even though I really should discuss it) That's a true vacuum. Remove all the fields, particles, and other interesting stuff from our universe and that's what you'll get. Nothing.



The false vacuum scenario arises if there isn't actually "nothing" but "something": if our "vacuum" has some extra energy that it shouldn't (again, I'm not talking about normal vacuum energy). In other words, there's something where there should be nothing.



In this end-of-the-universe scenario, the universe goes from having this something to having nothing - from a false vacuum to a true vacuum. The region of space actually having nothing starts at a certain region and expands outward at the speed of light. That's what the scientists were talking about.




For anyone who wants a further explanation and/or could tell how many mistakes I made there: Yes, I'm aware that there were mistakes in that incomplete explanation. I didn't properly explain vacuum energy or an energy state, nor did I discuss stability or even properly touch on quantum fields. But quite frankly, I don't think this explanation needs that baggage. Does it add to the richness of the concept? Yes. But is it confusing? Also yes. I don't think working that in would be productive.

Sunday 10 January 2010

less elementary group theory

The impression I get is that a large chunk of finite group theory can be built up from the beginner's toolset: orbit-stabiliser, the isomorphism theorems, and a lot of fiddling around with conjugation, normalisers and centralisers, and induction on the order of the group. You can achieve a lot with surprisingly little.



Character theory (over the complex numbers) is probably the non-'elementary' tool that sees the heaviest use. For instance, one often wants to solve the equation $x y = z$, where $z$ is given and $x$ and $y$ must come from specified conjugacy classes. It turns out that there is a formula for the number of solutions in terms of characters. So instead of trying to find an explicit $(x,y)$, one can try to estimate the value of the formula and prove that the answer is non-zero. (Typically, the trivial character makes a large positive contribution, and the aim is to show that all the other characters make small contributions.)
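For the record, the formula alluded to is, I believe, the classical Frobenius formula: writing $C_1, C_2$ for the specified conjugacy classes with representatives $x_1, x_2$, the number of solutions of $xy=z$ with $x in C_1$, $y in C_2$ is

```latex
N(z) = \frac{|C_1|\,|C_2|}{|G|}
  \sum_{\chi \in \mathrm{Irr}(G)}
  \frac{\chi(x_1)\,\chi(x_2)\,\chi(z^{-1})}{\chi(1)} .
```

The trivial character contributes the positive main term $|C_1||C_2|/|G|$, and the aim described above is to show the other summands are too small to cancel it.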

the sun - Calculate latitude and longitude based on date and sun

What you are looking for is the navigation method used by ships and aircraft before the advent of GPS. It requires an instrument for measuring the angle between the sun or a star (such as Polaris) and the horizon, charts that can be used to interpret the numbers, and of course an accurate chronometer -- you'll need to know the time at Greenwich Observatory, in London.



The traditional instrument used for determining this angle is the sextant. Here is an article describing the use of a sextant for determining one's position:



http://www.ehow.com/how_7562747_use-nautical-sextant.html



I suppose that an algorithm for doing what you are asking is available where good sextants are sold - as to the charts, well, perhaps there are downloadable charts somewhere.



Also, here is a good article on determining latitude and longitude by the stars. Latitude is what a sextant will tell you; longitude is determined differently. For that you need two clocks: one set to GMT, and one set to your local solar time (NOT your timezone time). The method is described here: How to Calculate Longitude. Which brings us to the question of how to calculate your local solar time, described in this article: How to Calculate Solar Time. Civil (timezone) time, by contrast, is convenient for human affairs but useless for finding your location.
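The arithmetic behind the two-clock method is simple; here is a minimal Python sketch (my own illustration, ignoring the equation-of-time correction that real navigation requires). The Earth turns 360 degrees in 24 hours, so each hour of difference between local solar time and GMT is 15 degrees of longitude:

```python
def longitude_from_times(gmt_hours, local_solar_hours):
    """Degrees east of Greenwich (negative = west), from the two clock readings."""
    diff = local_solar_hours - gmt_hours
    diff = (diff + 12) % 24 - 12   # wrap the difference into [-12, 12) hours
    return diff * 15.0             # 15 degrees of longitude per hour

# Example: it is local solar noon while the GMT clock reads 17:00,
# so we are 5 hours behind Greenwich, i.e. 75 degrees west.
print(longitude_from_times(17.0, 12.0))  # -75.0
```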



To make it clear why you can't use local civil time for determining longitude: solar time is nearly the same on the islands of Hawaii and Kodiak, but Hawaii's civil time is one hour behind Kodiak's. And China stretches some 5,026 kilometers across the East Asian landmass, which spans about 4 hours of solar time, yet it all has the same civil time.



This question might better be asked in the Sailing SE. Oh, wait, there isn't one, yet.

lo.logic - Is the theory of categories decidable?

Thanks for clarifying your question. The formulation that
you and Dorais give seems perfectly reasonable. You have a
first order language for category theory, where you can
quantify over objects and morphisms, you can compose
morphisms appropriately and you can express that a given
object is the source or the target of a given
morphism. In this language, one can describe various finite
diagrams, express whether or not they are commutative, and
so on. In particular, one can express that composition is
associative, etc. and describe what it means to be a
category in this way.



The question now becomes: is this theory decidable? In
other words, is there a computable procedure to determine,
given an assertion in this language, whether it holds in
all categories?



The answer is No.



One way to see this is to show even more: one cannot even
decide whether a given statement is true in all
categories having only one object. The reason is that group
theory is not a decidable theory. There is no computable
procedure to determine whether a given statement in the
first order language of group theory is true in all groups.
But the one-point categories naturally include all the
groups (and we can define in a single statement in the
category-theoretic language exactly what it takes for the
collection of morphisms on that object to be a group).
Thus, if we could decide category theory, then we could
decide the translations of the group theory questions into
category theory, and we would be able to decide group
theory, which we can't. Contradiction.



The fundamental obstacle to decidability here, as I
mentioned in my previous answer (see edit history), is the
ability to encode arithmetic. The notion of a strongly
undecidable structure
is key for proving that various theories are undecidable. A
strongly undecidable theory is a finitely axiomatizable
theory, such that any theory consistent with it is
undecidable. Robinson proved that there is a strongly
undecidable theory of arithmetic, known as Robinson's Q. A
strongly undecidable structure is a structure modeling a
strongly undecidable theory. These structures are amazing,
for any theory true in a strongly undecidable structure is
undecidable. For example, the standard model of arithmetic,
which satisfies Q, is strongly undecidable. If A is
strongly undecidable and interpreted in B, then it follows
that B is also strongly undecidable. Thus, we can prove
that graph theory is undecidable, that ring theory is
undecidable and that group theory is undecidable, merely by
finding a graph, a ring or a group in which the natural
numbers is interpreted. Tarski found a strongly undecidable
group, namely, the group G of permutations of the integers
Z. It is strongly undecidable because the natural numbers
can be interpreted in this group. Basically, the number n
is represented by translation-by-n. One can identify the
collection of translations, as exactly those that commute
with s = translation-by-1. Then, one can define addition as
composition (i.e. addition of exponents) and the divides
relation is definable by: i divides j iff anything that
commutes with s^i also commutes with
s^j. And so on.
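A toy Python sketch of the start of this interpretation (my own illustration): the integer n becomes the permutation translation-by-n, and addition of integers becomes composition of permutations.

```python
def translate(n):
    """The permutation of the integers x -> x + n."""
    return lambda x: x + n

def compose(f, g):
    return lambda x: f(g(x))

# Composition of translations is addition of the exponents:
five = compose(translate(2), translate(3))
print([five(x) for x in range(3)])  # [5, 6, 7], i.e. translate(5) on these points
```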



I claim similarly that there is a strongly undecidable
category. This is almost immediate, since every group can
be viewed as the morphisms of a one-object category, and
the group is interpreted as the morphisms of this category.
Thus, the category interprets the strongly undecidable
group, and so the category is also strongly undecidable. In
particular, any theory true in the category is also
undecidable. So category theory itself is undecidable.

telescope - Who invented the blink comparator?

It would appear to have been developed by Carl Pulfrich working for Zeiss in 1904.



Alternatively Max Wolf in 1900 again working with the Zeiss company.



Looks like the idea was Wolf's and the realisation Pulfrich's. From the second link we have:




Wolf was a codeveloper of the stereo comparator together with Carl
Pulfrich from the Zeiss company. The stereo comparator consists of a
pair of microscopes arranged so that one can see simultaneously two
photographic plates of the same region taken at different times. Wolf
seems to have experimented with such techniques as early as 1892, but
without success. When Pulfrich approached him to adapt the technique
from geodesy to astronomy, Wolf was delighted. A steady exchange of
letters followed. Wolf and Pulfrich then worked together to analyze
the rapidly growing accumulation of photographic plates. Tragically,
Pulfrich lost one eye in 1906, preventing him from using the
stereographic tool from then on.


Saturday 9 January 2010

star - Requesting book references for a non-expert person with math background or just a non-expert person

I've just finished my master's degree (to be exact, last week ^_^), and I'm completing my collection of applications of my studies. During the previous year I started reading about robotics and coding theory (in my free time, as a hobby). I knew that my field of study has applications in astrophysics, but I didn't have enough time to pursue that topic as well. But to my main question: these days I'm in the mood to pick up a general knowledge of astronomy and everything related to stars and space.



As I'm not seeking applications of my field of study in astrophysics (if I were, it would be better to put this question on the math stack instead of here), and since almost all members here are certainly not familiar with pure mathematics, I would be pleased if someone could recommend one or several nice texts or books satisfying one of the two following sets of conditions:
1- Some elementary sources that people without any background can read. The notes should be exciting, so that the reader wants to continue to the end.
2- Sources which also use mathematics (especially if the level of the math goes beyond some calculus), but where the dominant text is about stars and space.



Assume you are suggesting a text to a person as the first thing he will read on this topic. And if you think a series or a number of texts is suitable, please give an order: which should be read first, which next, and so on.
(Also, please note that I don't like to read guesses! So texts containing "we guessed x should be because of y" or "we think the world is ..." are not interesting to me.)



Thank you for your attention.

Thursday 7 January 2010

nt.number theory - A local-to-global principle for being a rational surface

It seems to me that there are irrational surfaces over $mathbb Q$ that are $mathbb Q_v$-rational for all $v$. (I couldn't find them in the literature, but didn't look very hard. Almost certainly they are to be found there, in papers by either Iskovskikh or Colliot-Thelene.)



Take the affine surface $S$ given by $y^2+byz+cz^2=f(x)$, where $f$ is an irreducible cubic and $b^2-4c$ equals the discriminant $D(f)$ of $f$, up to a square in $mathbb Q^*$, and $D(f)$ is not a square. According to Beauville, Colliot-Thelene, Sansuc and Swinnerton-Dyer, $S$ is not $mathbb Q$-rational, but is stably rational. (Irrationality is Iskovskikh, I think, in fact.) Via projection to the $x$-line, a projective model $V$ of $S$ is a conic bundle over $mathbb P^1$ with $4$ singular fibers (one is at infinity). There is an embedding of $V$ into a weighted projective space $mathbb P(2,2,1,1)$; the defining equation is $Y^2+bYZ+cZ^2=F(X,T)T$, where $F$ is the homogeneous version of $f$. By construction the Galois action on the $8$ lines that comprise the singular fibers is via the symmetric group $S_3$: the two lines in the fiber at infinity are conjugated, and the other six are permuted transitively.



Claim: Assume that $D(f)$ is square-free and prime to $6$. Then $S$ is $mathbb Q_v$-rational for all $v$.



Proof: Suppose that the decomposition group $G_v$ at $v$ is cyclic. Whatever its order ($1,2$ or $3$) there are at least $2$ disjoint lines among the $8$ that are $G_v$-conjugate, so they can be blown down to give a conic bundle over $mathbb P^1$ with at most $2$ singular fibres and a $mathbb Q_v$-point; it is well known that such a surface is $mathbb Q_v$-rational.



Now suppose that $G_v= S_3$. Then $v$ is non-archimedean and $V$ has bad reduction there. In fact, exactly two of the singular fibers are equal modulo $v$; it follows that $G_v=S_3$ is impossible, and we are done.



E.g., $f=x^3+x+1$, of discriminant $-31$, $c=8$, $b=1$.
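The numbers in this example are easy to verify; a quick Python sketch using the standard discriminant formula $D = -4p^3 - 27q^2$ for a depressed cubic $x^3 + px + q$:

```python
def disc_depressed_cubic(p, q):
    """Discriminant of x^3 + p*x + q."""
    return -4 * p**3 - 27 * q**2

D = disc_depressed_cubic(1, 1)  # f = x^3 + x + 1
b, c = 1, 8
print(D, b**2 - 4 * c)  # -31 -31, so b^2 - 4c equals D(f) exactly here
```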



(This doesn't use stable rationality, but rather the fact that these surfaces, although irrational, are very close to being rational, in the sense that the action of $Gal_{mathbb Q}$ on the lines is as small as possible subject to the surface being irrational, and the action of the decomposition groups is even smaller.)

ag.algebraic geometry - Functions on curves

I'm reading a book on algebraic curves, and at one point it says that if C is a smooth curve and f belongs to K(bar)(C)* for a perfect field K, and if div(f)=0, then f has no poles. It's my understanding that div(f) is the sum of the orders of f at the various points P of C. So isn't it possible to have a function f with a pole and a zero of the same order at two different points P, and thus wouldn't the sum still be 0?

fa.functional analysis - Embeddings of Weighted Banach Spaces

This is a special case of a much more general phenomenon, so I'm writing an answer which deliberately takes a slightly high-level functional-analytic POV; I think (personally) that this makes it easier to see the wood for the trees, even if it might not be the most direct proof. However, depending on your mathematical background it might not be the most helpful; so apologies in advance.



Anyway, start with a very general observation: let $E$ be a (real or complex) Banach space, and for each $n=1,2,dots$ let $T_n:Eto E$ be a bounded linear operator which has finite rank. (In particular, each $T_n$ is a compact operator.)



Lemma: Suppose that the sequence $T_n$ converges in the operator norm to some linear operator $T:Eto E$. Then $T$ is compact.



(The proof ought to be given in functional-analytic textbooks, so for sake of space I won't repeat the argument here.)



Now we consider these specific spaces $Omega_p$. Let $T:Omega_ptoOmega_{p'}$ be the embedding that you describe.



For each $n$, define $T_n: Omega_p to Omega_{p'}$ by the following rule:



$T_n(x)_i = x_i$ if $vert ivert leq n$, and $T_n(x)_i = 0$ otherwise.



Then each $T_n$ has finite rank (because every vector in the image is supported on the finite set ${ i in {mathbb Z}^d vert vert ivert leq n}$). I claim that $T_n$ converges to $T$ in the operator norm, which by our lemma would imply that $T$ is compact, as required.



We can estimate this norm quite easily (and indeed all we need is an upper bound). Let $xinOmega_p$ have norm $leq 1$; that is,
$$ sum_{iin{mathbb Z}^d} |x_i|^R (1+ vert ivert)^{-p} leq 1 $$



Then the norm of $(T-T_n)(x)$ in $Omega_{p'}$ is going to equal $C^{1/R}$, where



$$ eqalign{
C &:= sum_{iin {mathbb Z}^d : vert ivert > n} |x_i|^R (1+vert i vert)^{-p'} \\
& = sum_{i in {mathbb Z}^d : vert ivert > n} |x_i|^R (1+vert i vert)^{-p} cdot (1+vert i vert)^{p-p'} \\
& leq sum_{i in {mathbb Z}^d : vert ivert > n} |x_i|^R (1+vert i vert)^{-p} cdot (1+vert n vert)^{p-p'} \\
& leq sum_{i in {mathbb Z}^d } |x_i|^R (1+vert i vert)^{-p} cdot (1+vert n vert)^{p-p'} \\
& leq (1+vert n vert)^{p-p'}
}
$$



This shows that $Vert T-T_nVert leq (1+vert nvert)^{(p-p')/R}$ and the right hand side can be made arbitrarily small by taking $n$ sufficiently large. That is, $T_nto T$ in the operator norm, as claimed, and the argument is complete (provided we take the lemma on trust).
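As a numerical sanity check of this bound (my own sketch, with sample values $d=1$, $R=2$, $p=1$, $p'=3$; recall $p'>p$): on a basis vector $e_i$, the operator $T-T_n$ simply multiplies by $(1+vert ivert)^{(p-p')/R}$ when $vert ivert>n$ and by $0$ otherwise, so the computed ratios never exceed the stated bound.

```python
p, p_prime, R = 1.0, 3.0, 2.0   # sample values with p' > p

def ratio(i, n):
    """Norm of (T - T_n)e_i in Omega_{p'} divided by norm of e_i in Omega_p."""
    if abs(i) <= n:
        return 0.0
    return (1 + abs(i)) ** ((p - p_prime) / R)

n = 10
worst = max(ratio(i, n) for i in range(-100, 101))
bound = (1 + n) ** ((p - p_prime) / R)
print(worst <= bound)  # the estimate is confirmed on basis vectors
```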



Note that we used very little about the special nature of your weights. Indeed, as Bill Johnson's answer indicates, the only important feature is that in changing weight you in effect multiply your vector $x$ by a "multiplier sequence" which lies in $c_0({mathbb Z}^d)$, i.e. the entries "vanish at infinity".



Edit 17-02-10: the previous paragraph was perhaps slightly too terse. What I meant was the following: suppose that you have two weights $omega$ and $omega'$, such that the ratio $omega/omega'$ lies in $c_0({mathbb Z}^d)$. Then the same argument as above shows that the corresponding embedding will be compact. Really, this is what Bill's answer was driving at: it isn't the weights that are important, it's the fact that the factor involved in changing weight is given by something "vanishing at infinity".

Wednesday 6 January 2010

exoplanet - If Kepler-444 planets existed for 11.2 billion years, why fear for life on Earth after six billion years?

The rate of evolution of main sequence stars is highly dependent on their mass. Roughly speaking, the time on the main sequence is proportional to $M^{-5/2}$, where $M$ is the mass of the star. Thus if Kepler 444 has $M=0.75 M_{odot}$, it can live for $0.75^{-5/2} = 2.05$ times as long as the Sun on the main sequence.
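The arithmetic can be checked in two lines of Python (using the rough $M^{-5/2}$ scaling quoted above):

```python
def lifetime_ratio(mass_in_solar_masses):
    """Main-sequence lifetime relative to the Sun, using t ~ M^(-5/2)."""
    return mass_in_solar_masses ** -2.5

print(round(lifetime_ratio(0.75), 2))  # 2.05
```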



Another way of saying the same thing is that the Sun evolves 2.05 times as fast as Kepler 444. This is important because even while on the "stable" main sequence, burning hydrogen in their cores, stars are still changing: the average mass of a particle in their cores is increasing due to the buildup of helium ash. This slow change means that the temperature of the core must gradually increase, the nuclear burning rate increases, and the star becomes more luminous.



In terms of the Sun this means, depending on the atmospheric composition at the time, that the Earth could become much hotter in a billion years or so, and probably too hot for life (as it currently exists on Earth). Thus it is not the existence of the Earth that is in jeopardy; it is the surface temperature that may destroy life.



None of the discovered planets in the Kepler 444 system are (or ever were) far enough from their parent star that they are in the "habitable zone" where liquid water could exist. The significance of Kepler 444 is that it shows that rocky planetary systems can have formed as long ago as 11 billion years, giving such systems more time, and therefore perhaps more chance, to have formed life.

nt.number theory - Decomposition of primes, where the residue field extensions are allowed to be inseparable

I've been dealing with the following situation:



Let $Rsubseteq S$ be an extension of Dedekind rings, where $Quot(R)=:L subseteq E:=Quot(S)$ is a $G$-Galois extension. Let $mathfrak{p}$ be a prime of $R$, and $mathfrak{q}$ a prime of $S$ above $mathfrak{p}$. Let $D_{mathfrak{q}}$ denote the decomposition group, and $I_{mathfrak{q}}$ the inertia group, of $mathfrak{q}$ over $mathfrak{p}$.



However, unlike in the classical case, I allow the residue field of $mathfrak{p}$ to be infinite, with positive characteristic. So the extension of residue fields may be inseparable.



It seems that the paper I'm reading implicitly assumes:



$|I_{mathfrak{q}}|=e[kappa(mathfrak{q}):kappa(mathfrak{p})]_i$ (the ramification index times the inseparability degree of the residue extension)


$|D_{mathfrak{q}}|=e[kappa(mathfrak{q}):kappa(mathfrak{p})]$


$|G|=re[kappa(mathfrak{q}):kappa(mathfrak{p})]$ (where $r$ is the number of primes above $mathfrak{p}$)



Is that right? I keep hitting walls when I try to prove it.

ag.algebraic geometry - Why is the Brauer Loop Scheme Not a Variety?

I am trying to grapple with the basics of scheme theory. Is the scheme Spec(C[x,y,z]/(xy,yz,zx)) a variety? What do the points look like?



I suspect it represents points satisfying xy = yz = zx = 0, so it should have three irreducible components {x = y = 0}, {y = z = 0} and {z = x = 0}. The motivation for this example comes from statistical mechanics and it has quite a bit more content:



Consider the space of 3x3 matrices (entries in C) with the following deformation of the matrix product: $(P circ Q)_{ik} = sum_{i leq j leq k, cyc} P_{ij} Q_{jk}$. Here we are summing over $j$ such that $i, j, k$ appear in cyclic order mod 3. It appears in a set of slides on The Combinatorics of the Brauer Loop Scheme.



The paper then proceeds to define a scheme using equations in matrices. In the space of matrices with 0's along the diagonal, we consider the matrices with $M circ M = 0$. In coordinates, the matrix product therefore looks like:
$ left( begin{array}{ccc} 0 & b_{12} & b_{13} \\
b_{21} & 0 & b_{23} \\
b_{31} & b_{32} & 0 end{array} right) circ left(
begin{array}{ccc} 0 & b_{12} & b_{13} \\
b_{21} & 0 & b_{23} \\
b_{31} & b_{32} & 0 end{array} right) = left( begin{array}{lll}
0 & 0 & b_{12}b_{23} \\
b_{23}b_{31} & 0 & 0 \\
0 & b_{31}b_{12} & 0 end{array} right)$
As all the entries on the right side vanish, this defines three equations in six unknowns. (Actually, only the "clockwise" matrix entries seem to be involved.)
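To see where those three equations come from, here is a small Python sketch of the deformed product (my own implementation; the precise cyclic-arc convention for "i ≤ j ≤ k in cyclic order" is my reading of the displayed result). Entries are kept as strings, so no symbolic algebra library is needed:

```python
from itertools import product

def brauer_product(P, Q, n=3):
    """(P∘Q)_{ik} = sum of P_{ij}*Q_{jk} over j on the cyclic arc from i to k (mod n).
    Entries are strings naming variables; '' stands for a zero entry."""
    def on_arc(i, j, k):
        return (j - i) % n <= (k - i) % n
    R = [[[] for _ in range(n)] for _ in range(n)]
    for i, k in product(range(n), repeat=2):
        for j in range(n):
            if on_arc(i, j, k) and P[i][j] and Q[j][k]:
                R[i][k].append(P[i][j] + "*" + Q[j][k])
    return R

# M with zeros on the diagonal, entries b_{ij} as in the display above
M = [["" if i == j else f"b{i+1}{j+1}" for j in range(3)] for i in range(3)]
MM = brauer_product(M, M)
# The only nonzero entries of M∘M, matching the right-hand side above:
print(MM[0][2], MM[1][0], MM[2][1])  # ['b12*b23'] ['b23*b31'] ['b31*b12']
```

Setting these three products to zero recovers the equations b12·b23 = b23·b31 = b31·b12 = 0; note that only the "clockwise" entries b12, b23, b31 appear, as observed above.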