Saturday 27 February 2010

gn.general topology - Hausdorff dimension: subset of $\mathbb{R}^n$ vs. boundary of this subset

"Smaller" in the sense of $le$ ... If $S$ is closed and has Hausdorff dimension $< n$, then $S$ has empty interior, so (as noted by Joel) $S$ is its own boundary, and thus we have equality for the two dimensions. And of course if (perhaps not closed) set $S$ has dimension $n$, then the boundary could have any dimension from $0$ to $n$, inclusive. If $S$ is closed and has dimension $n$, then the boundary is either empty or has dimension $ge n-1$.

Friday 26 February 2010

the sun - Is the rotation of the Sun and the rotation/orbit of the Moon around the Earth a coincidence?

While looking at sunspot information in connection with amateur radio, I found that the Sun rotates with a period ranging from 27 to 31 days. Its rotation is differential: at the equator it spins with a period of about 27 days, while at the poles the period is closer to 31 days.



Earth's Moon also rotates with a period of 27.3 days.



I suspect this is nothing more than coincidence, but I was wondering whether there is more than a coincidental connection between these two rotation rates, similar to how tidal locking forces the Moon's rotation and its orbit around Earth to have the same period.

ag.algebraic geometry - Abelian varieties over local fields

Let $K$ be a local field of characteristic zero, $k$ its residue field, $R$ its ring of integers and $p$ the characteristic of the residue field $k$. Let $G$ be the Galois group of $K$, $I\subset G$ the inertia group and $P$ the maximal pro-$p$ subgroup of $I$. Let $I_t:=I/P$.



Let $A_0$ be an abelian scheme over $R$ with generic fibre $A$. Then $A[p]$ is an $I$-module.
Let $V$ be a Jordan-Hölder quotient of the $I$-module $A[p]$.
I am interested in the representation $I\to \mathrm{Aut}(V)$.



Question (*): Is it true that $P$ acts trivially on $V$?



(I have seen that there are results of Raynaud and Serre on the "action of $I_t$ on $V$". I want to study these things, but I am already stuck with Question (*) at the moment, i.e. with the question of whether $I_t$ acts at all.)



Maybe someone can help?

Wednesday 24 February 2010

Objects entering or leaving the observable universe

I don't know if there have been any observations of objects leaving the observable universe, but I'll admit that I have a hard time keeping up with the latest discoveries in observational astronomy. I'll try to approach your question from a theoretical viewpoint.



As you pointed out, the radius of the observable universe is about $4.6 \times 10^{10}$ light-years ($46$ billion light-years). To figure out how fast an object is moving away from Earth (or any observer), we can use Hubble's law:



$$v = H_0D$$



where $v$ is the recessional velocity, $H_0$ is Hubble's constant, and $D$ is the proper distance from the observer to the object (see Wikipedia for the difference between proper distance and comoving distance). The trouble here is that the value of Hubble's constant isn't well known: there have been many observations since Hubble proposed his law that attempted to pin down the constant, but the results range widely. Here I'll use $82\ \mathrm{km}\ \mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$ (where $\mathrm{Mpc}$ is a megaparsec, i.e. 1 million parsecs).



Let's say that the object in question is $13$ gigaparsecs away (so $13{,}000$ megaparsecs). The light we see from the galaxy would be from when it was very young. One of the most distant observed galaxies is this far away and only $2{,}000$ light-years across (note: do not confuse the "light-travel distance" with the "proper distance"), so let's say that this galaxy is small (negligibly small, in fact). We can then use Hubble's law to calculate a recessional velocity $v$ of



$$(82)(13{,}000) = 1.066 \times 10^6$$ kilometers per second, or $3.36 \times 10^{13}$ kilometers per year. That's pretty fast! An object $13$ gigaparsecs away would be $42.38$ billion light-years away, not far from the edge of the observable universe. Yet if the edge of the observable universe is $46$ billion light-years away, the galaxy would still be $3.62$ billion light-years (about $3.4 \times 10^{22}$ kilometers) from that edge, and it would take a long time to get there. A galaxy this young would be maybe $1{,}000$ light-years across, so, moving at its current speed, it would take many years to cross a distance its own length; that is, if its leading edge were at the edge of the observable universe, it would still take many years for it to fully disappear. And that's even after factoring in that it would be moving away ever faster!
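
As a quick sanity check on the arithmetic above, here is a minimal Python sketch; the constants and distances are just the rounded values assumed in this answer, not authoritative figures.

    # Recession speed from Hubble's law, v = H0 * D, using the values above.
    H0 = 82.0                    # assumed Hubble constant, km/s per Mpc
    D = 13_000.0                 # proper distance in Mpc (13 Gpc)

    v_km_s = H0 * D              # recessional velocity in km/s
    v_km_yr = v_km_s * 3.156e7   # km per year (~3.156e7 seconds per year)

    KM_PER_LY = 9.461e12         # kilometers per light-year
    gap_ly = 46e9 - 42.38e9      # light-years from the galaxy to the edge
    gap_km = gap_ly * KM_PER_LY

    print(f"v = {v_km_s:.3e} km/s = {v_km_yr:.3e} km/yr")  # ~1.066e6 km/s, ~3.36e13 km/yr
    print(f"gap = {gap_km:.2e} km")                         # ~3.4e22 km
    print(f"crossing time = {gap_km / v_km_yr:.2e} yr")     # ~1e9 yr at constant speed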



The reason I don't place the object farther away is that any object at the edge of the observable universe would have been very young when it released the light we see today. That means such galaxies would be small and wouldn't emit a lot of light. One of the furthest objects we have detected is only $30$ billion light-years away and $2{,}000$ light-years across. So it would be terribly hard to observe an object at the edge of the observable universe (ironic, isn't it?). But I have shown here that it would take years for the object to completely disappear.



I hope this helps.




My sources for the data and conversion factors (i.e. light-years to kilometers):



Hubble's Law



Wolfram Science World



Most distant objects



Spacetelescope.org



Light-year

Tuesday 23 February 2010

quantum topology - Why is the volume conjecture important?

Someone else will have to discuss the applications in topology, but I can point out at least one reason the volume conjecture is interesting.



It's often said that no one knows how to define the functional integral for Chern-Simons theory. This isn't literally true. The Reshetikhin-Turaev construction can be interpreted -- tautologically -- as defining a volume measure on a certain space of functionals. (This is just like in quantum mechanics, where one interprets the kernel $\langle q_i|e^{-Ht}|q_f\rangle$ as the volume of the space of paths $\phi: [0,t] \to \mathbb{R}$ which begin at $q_i$ and end at $q_f$.) What we don't know how to do is define the path integral measure as a continuum limit of regularized integrals that look like $\frac{1}{Z}e^{iCS(A)}\,dA$.



The volume conjecture (in particular the version where the log of the Jones polynomial looks like $\mathrm{vol}(\text{3-manifold})$ plus $i$ times the Chern-Simons functional) tells us that the tautological measure you get from Reshetikhin-Turaev actually has something to do with the Chern-Simons action!

fa.functional analysis - 2, 3, and 4 (a possible fixed point result ?)

The question below is related to the classical Browder-Goehde-Kirk fixed point theorem.



Let $K$ be the closed unit ball of $\ell^{2}$, and let $T:K\rightarrow K$ be a mapping such that $\Vert Tx-Ty\Vert_{\ell^{4}}\leq\Vert x-y\Vert_{\ell^{3}}$ for all $x,y\in K$.



Is it true that $T$ has fixed points?

Monday 22 February 2010

gravity - What Is The Great Attractor?

No; it's real: observations of the motion of galaxies indicate that there is an unusual concentration of mass (probably not a single massive "object", of course, but totalling as much as tens of thousands of galaxies) in a region around 200 million light-years from our galaxy.



It's hard to study, since the view in that direction happens to be blocked by our own galaxy, but given that it has been studied for decades, it seems unlikely that the phenomenon will turn out to be based on some sort of methodological error.

rt.representation theory - Highest weights of the restriction of an irreducible representation of a simple group to a Levi subgroup

This is not a complete answer, but maybe it will be of use.



You're asking which $L$-high weights $\mu$ occur in the $G$-irrep $V_\lambda$. Let me say that $\mu$ occurs classically if for some $N>0$, $N\mu$ occurs in $V_{N\lambda}$.



  1. The set of such $\mu$ forms a rational polytope lying inside $L$'s positive Weyl chamber.


  2. The vertices of this polytope strictly inside $L$'s chamber ("regular vertices") are exactly those of the form $w\cdot \lambda$ that are lucky enough to be in there.


  3. The vertices of this polytope lying on $L$'s Weyl walls are very likely to be very complicated. In particular they may not be integral weights of $L$. As I recall this already happens for $GL(3) \supset GL(2)\times GL(1)$.


Parts 1 & 3 apply to any branching problem (and much further). Part 2 is special to your case that $L$ has the same rank as $G$ (I'm not actually using that it's a Levi).



If all you want is an upper bound, as your comment to Jim suggests, then that's easy: the $L$-high weights that can occur are a subset of the $T$-weights that occur, which you already described. Probably you want something better than that, though. In principle it wouldn't be too hard to figure out the local structure of your polytope near the regular vertices, but I expect that not all facets contain regular vertices.



Littelmann describes (in the case of a Levi) the highest weights that occur and their multiplicities: one looks at all the Littelmann paths for the irrep $V_\lambda$ that lie entirely inside the closed $L$-chamber.

Sunday 21 February 2010

Why do pictures of the Milky Way look like a spiral?

Just to clarify a concept here: there are no pictures of the Milky Way other than the ones taken from within the Solar system, and they all look somewhat like this:



[image: panorama of the Milky Way as seen from Earth]



Everything else you see are artists' depictions of the Milky Way based on the latest research. This is the currently most accepted shape:



[image: artist's impression of the Milky Way seen face-on]



i.e.: a spiral galaxy with four main arms.



There is an ongoing discussion about whether our Galaxy has four main arms or just two. See for example Vallée (2014), which sadly is not available online :(

Is dust extinction/reddening caused by Rayleigh scattering or some other physics phenomenon (or both)?

In an astronomical context, Rayleigh scattering does not play a large role.
Extinction is mostly due to absorption and scattering by gas and dust. Its effect is to diminish the flux from the source (that is, a change in magnitude). Note that this depends on the observed wavelength!



For simplicity, we can decompose the extinction factor $Q_{ext}$ as:



$Q_{ext} = Q_{scattering} + Q_{absorption}$



where $Q_{scattering}$ is in turn composed of many contributions.



As the OP said, blue light generally suffers higher extinction than other frequencies, which reddens the observed light. This is due to the dependence of the cross-section on frequency: if the light has a wavelength comparable to the size of the dust grains (high frequency), it is absorbed more effectively, while in the opposite case it can pass through undisturbed.
This is the reason why obscured fields are observed in the radio (especially in our galaxy) or in the infrared (extragalactic).



So, the answer to your question, in space, is: Rayleigh scattering is not the main cause of extinction/reddening; you have to invoke absorption/scattering by dust and gas.
For comparison, here are some illustrative scalings of the extinction efficiency in different contexts:



$\text{Extinction} \sim \lambda^{-1}$


$\text{Thomson} \sim \lambda^{0}$


$\text{Rayleigh} \sim \lambda^{-4}$
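
To see what these scalings mean in practice, here is a tiny Python sketch; the two wavelengths are illustrative optical values I chose, not numbers from this answer.

    # How much more blue light (~440 nm) is attenuated than red (~700 nm)
    # under each of the wavelength scalings listed above.
    blue_nm, red_nm = 440.0, 700.0

    for name, exponent in [("interstellar extinction ~ lambda^-1", -1),
                           ("Thomson ~ lambda^0", 0),
                           ("Rayleigh ~ lambda^-4", -4)]:
        ratio = (blue_nm / red_nm) ** exponent
        print(f"{name}: blue/red ratio = {ratio:.2f}")
    # Rayleigh gives ~6.4 versus ~1.6 for the lambda^-1 law: far steeper reddening.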



You would basically need an impressive amount of gas to account for the extinction by Rayleigh scattering alone.
Instead, Mie scattering is more important in this context. Here is a figure to quickly grasp the concept (from here):



[figure: Mie vs. Rayleigh scattering regimes]



And here is a very popular plot of extinction versus frequency, from here:



[figure: extinction curve as a function of frequency]



Of course, if you observe from the ground, you should take care of Rayleigh scattering as well.



Other references:
Lecture 1
Lecture 2

Saturday 20 February 2010

how to measure temperature of the distant star

To measure the surface temperature of a star, one uses its black-body spectrum. You would have to obtain the spectrum of the light from that star, and then by locating its peak you could estimate the star's temperature.



I really don't know which instruments are used to obtain the spectrum of a distant star, but you could use a tool like this one for the final conversion: http://astro.unl.edu/naap/blackbody/animations/blackbody.html
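
The conversion that tool performs is essentially Wien's displacement law, $\lambda_{\rm peak}\,T = b$. A minimal Python sketch (the 502 nm peak is an illustrative value roughly appropriate for the Sun):

    # Wien's displacement law: lambda_peak * T = b, so T = b / lambda_peak.
    WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins

    def temperature_from_peak(lambda_peak_m):
        """Black-body temperature implied by the wavelength of peak emission."""
        return WIEN_B / lambda_peak_m

    print(temperature_from_peak(502e-9))  # ~5770 K, roughly the Sun's surface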

Friday 19 February 2010

Can we reconstruct positive weight invariants in algebraic topology using algebraic geometry?

I can't really say that I understand what a weight is, but the qualitative distinction between weight zero and positive weight has come up a couple times in MathOverflow questions:



  1. The étale fundamental group of a pointed connected complex scheme has a canonical map to the profinite completion of the topological fundamental group, and for regular varieties this seems to be an isomorphism. However, in the case of a nodal rational curve (see this question), one finds that the étale fundamental group is not profinite, and has an honest isomorphism with the topological fundamental group. Similarly, the degree 1 étale cohomology of the nodal curve with coefficients in $\mathbb{Z}$ is just $\mathbb{Z}$, as expected from topology, whereas one typically expects étale cohomology with torsion-free coefficients to break badly in positive degree. Emerton explained in this blog comment that the good behavior of étale cohomology and the étale fundamental group in these cases is due to the fact that the contribution resides in motivic weight zero, and the singularity is responsible for promoting it to cohomological degree 1.

  2. Peter McNamara asked this question about how well formal loops detect topological loops, and Bhargav suggested in a rather fantastic answer that the formal loop functor only detects weight zero loops (arising from removing a divisor). In particular, he pointed out that maps from $\operatorname{Spec}\mathbb{C}((t))$ only detect the part of the fundamental group of a smooth complex curve of positive genus that comes from the missing points.

I have a pre-question, namely: how does one tell the weight of a geometric structure, such as a contribution to cohomology, or the fact that removing a divisor yields a weight zero loop?



My main question is: are there algebraic (e.g., not using the complex topology) tools that always yield the correct invariants in positive weight, such as cohomology with coefficients in $\mathbb{Z}$ and the fundamental group of a pointed connected complex scheme?



I've heard a claim that motivic cohomology has a Betti realization that yields the right cohomology, but I don't know enough about that to understand how. Any hints/references?



With regard to the second example above, I've seen some other types of loops in algebraic geometry, but I don't really know enough to assess them well. First, there are derived loops, which you get by generalizing to Top-valued functors on schemes, defining $S^1$ to be the sheaf associated to the constant circle-valued functor (in some derived-étale topology), and considering the topological space $X(S^1)$ or the output of a Hom functor. As far as I can tell, derived loops are only good at detecting infinitesimal things (e.g., for $E$ an elliptic curve, $LE$ is just $E \times \operatorname{Spec}\operatorname{Sym}\mathbb{C}[-1]$, which has the same complex points as $E$). Second, there is also some kind of formal desuspension operation in stable motivic homotopy that I don't understand at all. One kind of loop has something to do with gluing 0 to 1 in the affine line, and the other involves the line minus a point. I'm having some trouble seeing a good fundamental group come out of either of these constructions, but perhaps there is some miracle that pops out of all of the localizing.

Wednesday 17 February 2010

reference request - Source needed (at final-year undergrad level) for the double cover of SO(3) by SU(2)

This is a bit of an ill-defined question, and I feel I should have been able to resolve it by combining Google with a few library trips, but I'm having difficulty narrowing down the search results to a list I can actually go through practically. Apologies if the question seems too vague or not sufficiently thought through.



What I'm after is a section of a book or published article which could be used by a 3rd-year undergraduate as a source for the fact that the action of SU(2) on the Riemann sphere by Möbius transformations gives rise to a double cover of SO(3). It doesn't need to be too precise about what exactly is meant by a double cover; but I would like something which makes it clear that we are somehow slicing a 3-sphere into 1-spheres (a.k.a. circles) in an unusual way, without saying "let $E$ be a fibre bundle..." or "consider the exact sequence..." In particular, anything that assumes the student has a proper background in algebraic topology or differential geometry is probably at too advanced/sophisticated a level.



Of course, one is tempted to just write down the map and look at some of its properties: but for the present purposes it's important that I can direct the student to a citable source that is reasonably self-contained (at least when it comes to this particular result). Thus although the wikipedia entry, for, say, "Hopf fibration" is along the desired lines, I really need something more "official-looking". For similar reasons, I don't think I can just explain things to the student in person; that wouldn't be correct, whereas "pointing the student to a book" would be.



Anyway: I thought that on MO there might well be people who've had similar ideas/experiences either as teachers or students, and who had therefore come across a handy section of book which could be used. Any suggestions?

fa.functional analysis - Categorical duals in Banach spaces

My suspicion is "no", because if I recall correctly the map $I \to V \otimes V^*$ naturally lands in the injective tensor product, not the projective tensor product, and it is the latter which appears as the "correct" tensor product for the SMC category of Banach spaces and linear contractions.



In the toy example given, $V\oplus V$ with the sup norm is the same as continuous maps from a 2-point set to $V$, equipped with the sup norm, and I'm pretty sure that this is indeed isometrically linearly isomorphic to ${\mathbb R}^2 \check{\otimes} V$, i.e. the injective tensor product.



EDIT: as Reid points out, my remarks above assume without justification that the inj. t.p. does differ from the proj. t.p. in the specific case being considered. I think this is indeed the case. Take $V$ to be ${\mathbb R}^2$ with the usual Euclidean norm. The projective tensor product of $V$ with $V^*$ can be identified with $M_2({\mathbb R})$ equipped with the trace-class norm; the injective tensor product would lead to the 'same' underlying vector space, equipped with the operator norm. The $2 \times 2$ identity matrix has trace-class norm 2 and operator norm 1, so the two norms are genuinely different.
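
That last claim is easy to check numerically; a minimal numpy sketch (ord='nuc' is the trace-class norm, ord=2 the operator norm):

    import numpy as np

    I2 = np.eye(2)
    print(np.linalg.norm(I2, 'nuc'))  # trace-class (projective) norm: 2.0
    print(np.linalg.norm(I2, 2))      # operator (injective) norm: 1.0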



My answer is still not as clear as it should be, because due to a sluggish and temperamental internet connection I'm having trouble looking up just what the axioms for categorical duals in an SMC are. But if I recall correctly, the natural map $I \to V \otimes V^*$ should be given by multiplying a scalar by the vector $e_1\otimes e_1 + e_2\otimes e_2$, where $e_1,e_2$ is an o.n. basis of ${\mathbb R}^2$ -- and that vector does not have norm 1 in the proj. t.p., although it does have norm 1 in the inj. t.p.

modular forms - Galois representations attached to newforms

Suppose that $f$ is a weight $k$ newform for $\Gamma_1(N)$ with attached $p$-adic Galois representation $\rho_f$. Denote by $\rho_{f,p}$ the restriction of $\rho_f$ to a decomposition group at $p$. When is $\rho_{f,p}$ semistable (as a representation of $\mathrm{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p)$)?



To make things really concrete, I'm happy to assume that $k=2$ and that the $q$-expansion of $f$ lies in $\mathbf{Z}[[q]]$.



Certainly if $N$ is prime to $p$ then $\rho_{f,p}$ is in fact crystalline, while if $p$ divides $N$ exactly once then $\rho_{f,p}$ is semistable (just thinking about the Shimura construction in weight 2 here, and the corresponding reduction properties of $X_1(N)$ over $\mathbf{Q}$ at $p$). For $N$ divisible by higher powers of $p$, we know that these representations are de Rham, hence potentially semistable. Can we say more? For example, are there conditions on "numerical data" attached to $f$ (e.g. slope, $p$-adic valuation of $N$, etc.) which guarantee semistability or crystallinity over a specific extension? Can we bound the degree and ramification of the minimal extension over which $\rho_{f,p}$ becomes semistable in terms of numerical data attached to $f$? Can it happen that $N$ is highly divisible by $p$ and yet $\rho_{f,p}$ is semistable over $\mathbf{Q}_p$?



I feel like there is probably a local-Langlands way of thinking about or rephrasing this question, which may be of use...



As a possible example of the sort of thing I have in mind: if $N$ is divisible by $p$ and $f$ is ordinary at $p$, then $\rho_{f,p}$ becomes semistable over an abelian extension of $\mathbf{Q}_p$, and even becomes crystalline over such an extension provided that the Hecke eigenvalues of $f$ for the action of $\mu_{p-1}\subseteq (\mathbf{Z}/N\mathbf{Z})^{\times}$ via the diamond operators are not all 1.

co.combinatorics - Does War have infinite expected length?

Dear Joel David,
I will try to explain it, though I should note that the article is quite elementary and written in readable English, with many figures. Still, I will try.
I will make a list of statements, and then you can point out the number of any unclear one:



  1. By our assumption (players have no strategy and no fixed rule for how they return cards), the game is a Markov chain.


  2. An absorbing (final) state is a state where you stay forever :)
    For us it means the end of the game, i.e. the state where one of the players has got all the cards.


3A. In a finite Markov chain, starting from an arbitrary initial state, you are absorbed with probability ONE if and only if from each vertex of the Markov chain graph there is a path to an absorbing state.



3B. So we have to prove that for the graph of our game of War there exists no initial state from which the players have no chance of reaching the end.



  4. To prove this, we first consider a simplification: the game with cards {1,...,n}, i.e. where every value occurs only once.

We call a vertex attaining if it has a terminal state among its descendants, and wandering otherwise. It is obvious that a descendant of a wandering vertex is again wandering, and an ancestor of an attaining vertex is again attaining.
For an arbitrary oriented graph it is possible that an attaining vertex has wandering vertices among its descendants. We show that for our graph G this is not so. For that, we need to understand some properties of the graph G.



LEMMA 1.
A: If the state is such that one of the players has only one card in his hand, then this state has exactly one ancestor.



B: If both players have at least two cards, then this state has exactly two ancestors.



LEMMA 2. For the graph of the game it holds that a descendant of an attaining vertex is again an attaining vertex.
(Page 5 of the article.)



Lemma 3. The states in which one of the players has only one card are attaining. (Page 6.)



Lemma 4. Every vertex has an ancestor that corresponds to a state in which one of the players has only one card.



Therefore, we have shown that each vertex has an ancestor corresponding to a state in which one of the players has exactly one card. This state is attaining by Lemma 3. By Lemma 2, descendants of attaining vertices are again attaining; therefore every vertex, in particular the initial state, is attaining, and we have proved:



Theorem: Graph G does not have any wandering vertices.




Now, how to apply this to the standard GAME:
We use the following obvious fact: if a subgraph of an oriented graph does not have wandering vertices, then the original graph does not have any wandering vertices either.



Now the proof is similar.
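
To illustrate the theorem empirically, here is a small Monte Carlo sketch of the simplified game: distinct cards $1,\dots,n$, and the crucial assumption that won cards are returned to the bottom in random order. The function names are mine, not the article's; the point is just that the empirical mean game length is finite, as absorption with probability one predicts.

    import random

    def war_length(n, rng, max_steps=10**6):
        """Play simplified War with distinct cards 1..n; return the number of rounds."""
        deck = list(range(1, n + 1))
        rng.shuffle(deck)
        a, b = deck[: n // 2], deck[n // 2 :]
        steps = 0
        while a and b and steps < max_steps:
            x, y = a.pop(0), b.pop(0)
            pot = [x, y]
            rng.shuffle(pot)                 # random return order: the key assumption
            (a if x > y else b).extend(pot)  # higher card wins; no ties with distinct values
            steps += 1
        return steps

    rng = random.Random(0)
    lengths = [war_length(10, rng) for _ in range(2000)]
    print(sum(lengths) / len(lengths))       # finite empirical mean game length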



It may be better just to read the article, I am sorry.
And I want to note once more that the question of strategy is never discussed.



[Added by J.O'Rourke:]
The paper has appeared: "On Finiteness in the Card Game of War,"
Evgeny Lakshtanov and Vera Roshchina,
The American Mathematical Monthly,
Vol. 119, No. 4 (April 2012) (pp. 318-323).
JSTOR link.

topology of infinite union of hyperplanes

Hi all:



I am working in functional analysis. I have encountered a topology problem in my study of the spectra of certain operators, and it has bothered me for quite some time. Any ideas or references are greatly appreciated.



Suppose $M$ is an infinite (possibly uncountable) union of complex hyperplanes in $\mathbb{C}^n$. To be specific, we write $M=\bigcup H_a$ where $H_a =\{z\in \mathbb{C}^n : a\cdot z=0\}$.



If $M$ is a finite union, then the de Rham cohomology (with complex coefficients) of $M^c$ is generated by the 1-forms $a\cdot dz/(a\cdot z)$. This is a well-known theorem. My question is whether there is a similar theorem for an infinite union of hyperplanes. We can assume $M^c$ is open and nice; in particular we assume the first de Rham cohomology $H^1(M^c, \mathbb{C})$ is finite dimensional. Then is $H^1(M^c, \mathbb{C})$ spanned by the 1-forms $a\cdot dz/(a\cdot z)$?



Thanks a lot!



Ron

Do black holes have energy?

An isolated black hole is a vacuum solution of general relativity, so in a very direct sense it does not contain any energy anywhere in spacetime. But perhaps somewhat counter-intuitively, that does not imply that such a black hole has no energy.



Defining the total amount of energy is usually very problematic in general relativity, but in some special cases it is possible. In particular, the usual black hole solutions are all asymptotically flat, i.e., spacetime is just the usual flat Minkowski space far away from the black hole.



Here (or in general when we have a prescribed asymptotic form of the spacetime), we can calculate the total energy-momentum by essentially measuring the gravitational field of the black hole at infinity. The energy is just one component of the energy-momentum.



There are actually two relevant kinds of 'infinity' here: spatial infinity and null (lightlike) infinity, depending on whether we are 'far away' from the black hole in a spacelike or a lightlike direction. (There's also timelike infinity, but that just corresponds to waiting an arbitrarily long time, so it's not relevant here.) The two different infinities beget different definitions of energy-momentum, giving the ADM energy and the Bondi energy, respectively. In a vacuum, the intuitive difference between the two is that the Bondi energy excludes gravitational waves.



So the short answer is 'yes', with the caveat that in a more complicated situation, where we can't attribute everything to the black hole itself, the answer to how much energy is due to the black hole may be ambiguous or ill-defined.



Note that the ADM and Bondi energy-momenta also define their corresponding measures of mass, as the norms of those energy-momenta ($m^2 = E^2-p^2$), but for a black hole we can also define mass more operationally in terms of orbits around the black hole. There are also other alternatives for addressing mass specifically.

Tuesday 16 February 2010

ag.algebraic geometry - Curves on elliptic ruled surfaces?

To find higher genus curves without using a specific embedding $S \subset \mathbb{P}^n$, it could help to think first about the case when your surface is actually a product $S=\mathbb{P}^1 \times E$. Let $C$ be a curve which admits two branched covers, $f\colon C \to E$ and $g \colon C \to \mathbb{P}^1$. Then the product $f \times g \colon C \to S$ maps into the surface $S$. If the branch points of $f$ and $g$ are different, then $f \times g$ will even be an embedding.



In general, let $V \to E$ be your rank-two vector bundle, so $S=\mathbb{P}(V)$. Given a branched cover $f \colon C \to E$, you pull back $V$ to a bundle $V' \to C$. Now every time you have a line sub-bundle $L$ of $V'\to C$ you get a section of $\mathbb{P}(V')$, which plays the role of $g$ in the first paragraph. It can be combined with $f \colon C \to E$ to give a map $C \to S$. Depending on how much you know about $E$ and $V$, hopefully this should help you find plenty of explicit curves in $S$.

Monday 15 February 2010

Can a topos ever be an abelian category?

No. In fact no nontrivial cartesian closed category can have a zero object $0$ (one which is both initial and final), as then for any $X$, $0 = 0 \times X = X$. (The first equality uses the fact that $- \times X$ commutes with colimits and in particular the empty colimit, and the second holds because $0$ is also the final object.)

Sunday 14 February 2010

observation - How does angular resolution of a telescope translate to its parallax precision?

We can often read in the scientific and also more casual literature about the angular resolution of various telescopes and other optical equipment, be it ground-based or onboard space probes. These articles often list the angular resolution, in other words the ability to resolve or distinguish small, distant objects, quoted in today's digital era mostly on a per-sensor-pixel basis.



[figure: parallax diagram]

                Finding a star's distance from its parallax. The trigonometric parallax method determines the distance to a star by measuring
                its slight shift in apparent position as seen from opposite ends of Earth's orbit. (Source: Measuring the Universe)



What I'm interested in is this: is the precision of parallax measurements, and with it our ability to determine the distance of observed objects, directly analogous to the angular resolution mentioned above, and how could it be calculated from the angular resolution of a telescope alone, if we assume that both ground-based and space observatories have more or less the same perihelion-to-aphelion baseline (i.e. the space observatory is in Earth's orbit)?
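
For what it's worth, the first-order relationship between parallax precision and distance precision is easy to write down; here is a minimal Python sketch with made-up numbers (and note, as a caveat to the premise, that real astrometry typically centroids to a small fraction of a pixel, so per-pixel angular resolution is only a crude proxy for parallax precision):

    def distance_pc(parallax_arcsec):
        """Distance in parsecs from an annual parallax in arcseconds: d = 1/p."""
        return 1.0 / parallax_arcsec

    def distance_sigma_pc(parallax_arcsec, sigma_p_arcsec):
        """First-order error propagation through d = 1/p: sigma_d = sigma_p / p^2."""
        return sigma_p_arcsec / parallax_arcsec**2

    p, sigma_p = 0.010, 0.001             # a 10 mas parallax measured to 1 mas
    print(distance_pc(p))                 # 100 pc
    print(distance_sigma_pc(p, sigma_p))  # +/- 10 pc, i.e. a 10% distance error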

Saturday 13 February 2010

mathematical writing - What is the best graph editor to use in your articles?

I already had about 30 pages of graphs typeset with xymatrix for my thesis before discovering TikZ, but was so impressed by it that I was happy to rewrite them all. As well as (imho) looking better, it gave me cross-platform compatibility: xypic seems to need pstricks, so on the Mac with TeXShop (which uses pdflatex, I assume) the old graphs couldn't even be rendered.



Its ability to construct graphs iteratively can also be a massive timesaver: for instance, I wanted a bunch of otherwise identical rectangles at various positions, so with TikZ I could just loop over a list of their first coordinates rather than having to tediously cut, paste and modify an appropriate number of copies of the command for a rectangle. Particularly handy when I then decided they all needed to be slightly wider!



There's a gallery of TikZ examples here, to give you some idea of what it's capable of (and with the relevant source code; I did find the manual a bit hard to understand and learnt mostly by examples or trial and error).



The vector graphics package Inkscape (which I used to use for drawing more complicated graphs for inclusion as EPS images) also apparently has a plugin to export as TikZ, although I haven't tried that out.

pr.probability - Probability of one binomial variable being greater than another.

Edit: I've filled in a few more details.



The Hoeffding bound from expressing $Y-X$ as the sum of $n$ differences between Bernoulli random variables $B_q(i)-B_p(i)$ is



$$\mathrm{Prob}(Y-X \ge 0) = \mathrm{Prob}\big(Y-X + n(p-q) \ge n(p-q)\big) \le \exp\bigg(-\frac{2n^2 (p-q)^2}{4n}\bigg)$$



$$\mathrm{Prob}(Y-X \ge 0) \le \exp\bigg(-\frac{(p-q)^2}{2}\,n\bigg)$$



I see three reasons you might be unhappy with this.



  • The Hoeffding bound just isn't sharp. It's based on a Markov bound, and that is generally far from sharp.

  • The Hoeffding bound is even worse than usual on this type of random variable.

  • The amount by which the Hoeffding bound is not sharp is worse when $p$ and $q$ are close to $0$ or $1$ than when they are close to $\frac12$. The bound depends on $p-q$ but not on how extreme $p$ is.

I think you might address some of these by going back to the proof of Hoeffding's estimate, or the Bernstein inequalities, to get another estimate which fits this family of variables better.



For example, if $p=0.6$, $q=0.5$, or $p=0.9$, $q=0.8$, and you want to know when the probability is at most $10^{-6}$, the Hoeffding inequality tells you this is achieved with $n\ge 2764$.



For comparison, the actual minimal values of $n$ required are $1123$ and $569$, respectively, by brute force summation.
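
Both sets of numbers are straightforward to reproduce; here is a minimal Python/scipy sketch (the brute-force search simply scans $n$ upward, which is slow but transparent, and should recover the Hoeffding value $2764$ as well as the exact minimal $n$ of $1123$ and $569$ quoted above):

    import math
    import numpy as np
    from scipy.stats import binom

    def hoeffding_n(p, q, eps=1e-6):
        """Smallest n with exp(-n (p-q)^2 / 2) <= eps."""
        return math.ceil(2 * math.log(1 / eps) / (p - q) ** 2)

    def prob_Y_ge_X(n, p, q):
        """Exact P(Y >= X) for X ~ Bin(n, p), Y ~ Bin(n, q), by summation over Y."""
        k = np.arange(n + 1)
        return float(np.sum(binom.pmf(k, n, q) * binom.cdf(k, n, p)))

    def minimal_n(p, q, eps=1e-6):
        n = 1
        while prob_Y_ge_X(n, p, q) > eps:
            n += 1
        return n

    for p, q in [(0.6, 0.5), (0.9, 0.8)]:
        print(p, q, hoeffding_n(p, q), minimal_n(p, q))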



One version of the Berry-Esseen theorem is that the Gaussian approximation to a cumulative distribution function is off by at most



$$0.71 \frac{\rho}{\sigma^3 \sqrt n}$$
where $\rho/\sigma^3$ is an easily computed function of the distribution which is not far from 1 for the distributions of interest. This only drops as $n^{-1/2}$, which is unacceptably slow for the purpose of getting a sharp estimate on the tail. At $n=2764$, the error estimate from Berry-Esseen would be about $0.02$. While you get effective estimates for the rate of convergence, those estimates are not sharp near the tails, so the Berry-Esseen theorem gives you far worse estimates than the Hoeffding inequality.



Instead of trying to fix Hoeffding's bound, another alternative would be to express $Y-X$ as a sum of a (binomial) random number of $\pm 1$s by looking at the nonzero terms of $\sum (B_q(i)-B_p(i))$. You don't need a great lower bound on the number of nonzero terms, and then you can use a sharper estimate on the tail of a binomial distribution.



The probability that $B_q(i)-B_p(i) \ne 0$ is $p(1-q) + q(1-p) = t$. For simplicity, let's assume for the moment that there are $nt$ nonzero terms and that this is odd. The conditional probability that $B_q(i)-B_p(i) = +1$ is $w=\frac{q(1-p)}{p(1-q)+q(1-p)}$.



The Chernoff bound on the probability that the sum is positive is $\exp(-2(w-\frac 12)^2 tn)$. The resulting estimate



$$\exp\left(-2\bigg(\frac{q(1-p)}{p(1-q)+q(1-p)} - \frac 12\bigg)^2 \big(p(1-q) + q(1-p)\big)\, n\right)$$



is not rigorous as it stands: we need to adjust $n$ by a factor of $1+o(1)$, and we can compute the adjustment with another Chernoff bound.



For $p=0.6, q=0.5$, we get $n \ge 1382$. For $p=0.9, q=0.8$, we get $n \ge 719$.



The Chernoff bound isn't particularly sharp. Comparison with a geometric series with ratio $\frac{w}{1-w}$ gives that the probability that there are more $+1$s than $-1$s is at most



$${nt \choose nt/2}\, w^{nt/2} (1-w)^{nt/2}\, \frac{1-w}{1-2w}$$



This gives us nonrigorous bounds of $nt > 564.4$, $n \ge 1129$ for $p=0.6, q=0.5$, and $nt > 145.97$, $n \ge 562$ for $p=0.9, q=0.8$. Again, these need to be adjusted by a factor of $1+o(1)$ to get a rigorous estimate (determine $n$ so that there are at least $565$ or $146$ nonzero terms with high probability, respectively), so it's not a contradiction that the actual first acceptable $n$ was $569$, greater than the estimate of $562$.



I haven't gone through all of the details, but this shows that the technique I described gets you much closer to the correct values of $n$ than the Hoeffding bound.

Friday 12 February 2010

rt.representation theory - representation theoretic interpretation of Jack polynomials

Monomial symmetric polynomials in $n$ variables $x_1, \ldots, x_n$ form a natural basis of the space $\mathcal{S}_n$ of symmetric polynomials in $n$ variables and are defined by additive symmetrization of the function $x^{\lambda} = x_1^{\lambda_1} x_2^{\lambda_2} \cdots x_n^{\lambda_n}$. Here $\lambda$ is a sequence of $n$ nonnegative integers, arranged in non-increasing order, and hence can also be viewed as a partition of some integer with number of parts $l(\lambda) \le n$.



Power sum polynomials $p_\lambda$ in $n$ variables also form a basis for $\mathcal{S}_n$, and are defined as $p_\lambda = \prod_{i=1}^{l(\lambda)} p_{\lambda_i}$, where $p_r = \sum_{i=1}^n x_i^r$.



Schur functions $s_\lambda$ (polynomials) form a basis of the space of symmetric polynomials, indexed by partitions $\lambda$ with at most $n$ parts, and are characterized uniquely by two properties:



  1. $\langle s_\lambda, s_\mu \rangle = 0$ when $\lambda \neq \mu$, where the inner product is defined on the power sum basis by $\langle p_\lambda, p_\mu \rangle = \delta_{\lambda,\mu} z_\lambda$, and $z_\lambda = \prod_{i=1}^n i^{\alpha_i} \alpha_i!$, where $\alpha_i$ is the number of parts in $\lambda$ whose lengths equal $i$. Notice $n!/z_\lambda$ is the size of the conjugacy class in the symmetric group $S_{\sum \lambda_i}$ whose cycle structure is given precisely by $\lambda$ (see the short check after this list).


  2. If one writes $s_\lambda$ as a linear combination of the $m_\mu$'s, then the $m_\lambda$ coefficient is $1$ and the $m_\mu$ coefficients are all $0$ if $\mu > \lambda$, meaning the partial sums inequalities $\sum_{i=1}^k \mu_i \ge \sum_{i=1}^k \lambda_i$ hold for all $k$ and at least one is strict. Thus one can say the transition matrix from the Schur to the monomial polynomial basis is upper triangular with $1$'s on the diagonal.
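
As a short check on the formula for $z_\lambda$, here is a self-contained Python sketch that computes $z_\lambda$ and verifies by brute force over $S_n$ that $n!/z_\lambda$ is the size of the conjugacy class with cycle type $\lambda$ (here for $\lambda = (2,1)$, the transpositions in $S_3$):

    from itertools import permutations
    from math import factorial, prod

    def z(lam):
        """z_lambda = prod_i i^{a_i} * a_i!, with a_i the number of parts equal to i."""
        return prod(i ** lam.count(i) * factorial(lam.count(i)) for i in set(lam))

    def cycle_type(perm):
        """Cycle type of a permutation of {0,...,n-1}, as a partition."""
        seen, parts = set(), []
        for start in range(len(perm)):
            if start in seen:
                continue
            j, length = start, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            parts.append(length)
        return tuple(sorted(parts, reverse=True))

    lam, n = (2, 1), 3
    class_size = sum(1 for p in permutations(range(n)) if cycle_type(p) == lam)
    print(z(lam), factorial(n) // z(lam), class_size)  # 2 3 3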


Jack polynomials generalize Schur polynomials in the theory of symmetric functions by replacing the inner product in the first characterizing condition above with $\langle p_\lambda, p_\mu \rangle = \delta_{\lambda, \mu}\, \alpha^{l(\lambda)} z_{\lambda}$. The second condition remains the same. They can be thought of as an exponential tilting of the Schur polynomials, and in fact they are intimately connected with the Ewens sampling distribution with parameter $\alpha^{-1}$, a 1-parameter family of probability measures on $S_n$, or on the set of partitions of $n$, that generalizes the uniform measure and the induced measure on partitions, respectively.



It turns out that the theory of Schur polynomials has connections with the classical representation theory of the symmetric group $S_n$. For instance, the irreducible characters of $S_n$ are related to the change-of-basis coefficients from Schur polynomials to power sum polynomials in the following way:



if we write $s_\lambda = \sum_{\mu} c_{\lambda,\mu} p_\mu$, then
$$\chi_\lambda(\mu) = c_{\lambda,\mu}\, z_\mu.$$



These are eigenfunctions of the so-called random transposition walk on $S_n$, when viewed as a walk on the space of partitions. The eigenfunctions of the actual random transposition walk on $S_n$ are proportional to the diagonal elements of $\rho$, as $\rho$ ranges over all irreducible representations of $S_n$.



The characters $\chi_\lambda$ admit a natural generalization in the Jack polynomial setting: simply take the transition coefficients from the Jack polynomials to the power sum polynomials. These, when properly normalized, indeed give the eigenfunctions for the so-called metropolized random transposition walk that converges to the Ewens sampling distribution, which is an exponentially tilted 1-parameter family generalizing the uniform measure on $S_n$.



My question is: what is the analogue, in the Jack case, of the diagonal entries of the representations $\rho$? Certainly they will be functions on $S_n$.

Difference between quasar and Active Galactic Nuclei?

All quasars are AGN, but not all AGN are quasars.



AGN is a terminology that came later than "quasar". Quasar is the term applied at the beginning, when the first objects of this type were discovered. They were radio-loud and point-like (the so-called quasi-stellar radio sources). This characterization still holds nowadays. Another property is that quasars are cosmological, that is, they are objects in the distant universe (redshift $z > 1$).



AGN is the most general term we can use to refer to active galaxies; it includes all of them (Seyfert galaxies, quasars, blazars, etc.). In the end, it seems that the difference among the various types of AGN comes down to a mix of angle of sight and epoch of observation.

nt.number theory - Gödel's Incompleteness Theorem and the complexity of arithmetic

Yes, this line of thought is perfectly fine.



A set is decidable if and only if it has complexity $\Delta_1$ in the arithmetic hierarchy, which provides a way to measure the complexity of a definable set in terms of the complexity of its defining formulas. In particular, a set is decidable when both it and its complement can be characterized by an existential statement $\exists n\, \varphi(x,n)$, where $\varphi$ has only bounded quantifiers.



Thus, if you have a mathematical structure whose set of truths exceeds this level of complexity, then the theory cannot be decidable.



To show that the true theory of arithmetic has this level of complexity amounts to showing that the arithmetic hierarchy does not collapse. For every $n$, there are sets of complexity $\Sigma_n$ not arising earlier in the hierarchy. This follows inductively, starting with a universal $\Sigma_1$ set.



Tarski's theorem on the non-definability of truth goes somewhat beyond the statement you quote, since he shows that the collection of true statements of arithmetic is not only undecidable, but is not even definable: it does not appear at any finite level of the arithmetic hierarchy.



Finally, it may be worth remarking on the fact that there are two distinct uses of the word undecidable in this context. On the one hand, an assertion $\sigma$ is not decided by a theory $T$ if $T$ neither proves nor refutes $\sigma$. On the other hand, a set of numbers (or strings, or statements, etc.) is undecidable if there is no Turing machine program that correctly computes membership in the set. The connection between the two notions is that if a (computably axiomatizable) theory $T$ is complete, then its set of theorems is decidable, since given any statement $\sigma$, we can search for a proof of $\sigma$ or a proof of $\neg\sigma$, and eventually we will find one or the other. Another way to say this is that every computable axiomatization of arithmetic must have an undecidable sentence, for otherwise arithmetic truth would be decidable, which is impossible by the halting problem (or because the arithmetic hierarchy does not collapse, or any number of other ways).
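
The proof-search argument in the last paragraph is essentially an algorithm. Here is a toy Python sketch; the miniature "theory" of parity facts is my own stand-in for a complete computably axiomatizable theory, chosen only so the code runs:

    from itertools import count

    def decide(sentence, negate, theorems):
        """Decide a sentence against a complete theory by enumerating theorems
        until the sentence or its negation appears; completeness guarantees
        the search terminates."""
        neg = negate(sentence)
        for theorem in theorems():
            if theorem == sentence:
                return True
            if theorem == neg:
                return False

    # Toy complete theory: all true sentences ('even', n) or ('odd', n).
    def theorems():
        for n in count():
            yield ('even', n) if n % 2 == 0 else ('odd', n)

    def negate(sentence):
        kind, n = sentence
        return ('odd' if kind == 'even' else 'even', n)

    print(decide(('even', 4), negate, theorems))  # True
    print(decide(('even', 7), negate, theorems))  # False: ('odd', 7) is a theorem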

Wednesday 10 February 2010

orbit - Can General Relativity indicate phase-dependent variations in planetary orbital acceleration?

Since I don't have Walter's book, I'm uncertain as to the context of the derivation of the equation you quote. Therefore, I've simply re-derived it here; apologies if there's some repetition of things you already know, but perhaps it'll be useful for anyone else reading this regardless.



Constants of Motion



The Schwarzschild solution is the unique nontrivial spherically symmetric vacuum solution of general relativity. In the Schwarzschild coordinate chart and units of $G = c = 1$, the metric takes the form
$$\mathrm{d}s^2 = -\left(1-\frac{2M}{r}\right)\mathrm{d}t^2 + \left(1-\frac{2M}{r}\right)^{-1}\mathrm{d}r^2 + r^2\left(\mathrm{d}\theta^2 + \sin^2\theta\,\mathrm{d}\phi^2\right)\text{,}$$
and one can immediately note that the metric coefficients are completely independent of $t$ and $\phi$, which implies that $\partial_t$ and $\partial_\phi$ are Killing vector fields. They are important here because, along with generating symmetries of the geometry, they also produce conserved orbital quantities in the following way: given an orbit with four-velocity $u^\mu = (\dot{t},\dot{r},\dot{\theta},\dot{\phi})$, the inner product with a Killing vector field is conserved:
$$\epsilon = -\langle\partial_t,u\rangle = \left(1-\frac{2M}{r}\right)\frac{\mathrm{d}t}{\mathrm{d}\tau}\text{,}$$
$$h = \langle\partial_\phi,u\rangle = r^2\sin^2\theta\,\frac{\mathrm{d}\phi}{\mathrm{d}\tau}\text{.}$$
The overdot indicates differentiation with respect to any affine parameter of the orbit, which for the timelike geodesics appropriate for massive particles we can take without loss of generality to be the proper time $\tau$. An alternative way to find these constants of motion is to integrate the $t$ and $\phi$ components of the geodesic equation, but in this way they can be read off immediately from the metric. These are the specific energy and specific angular momentum of the orbit, respectively. Also note that the coordinates are analogues of the spherical coordinates for Euclidean space, where $\theta$ is the zenith angle while $\phi$ is the azimuth; if we take the orbital plane to be the equatorial plane ($\theta = \pi/2$), then $\phi$ represents the true anomaly.



Effective Potential



Substituting the above constants of motion into the timelike worldline condition $\langle u,u\rangle \equiv g_{\mu\nu} u^\mu u^\nu = -1$, i.e.,
$$-\left(1-\frac{2M}{r}\right)\dot{t}^2 + \left(1-\frac{2M}{r}\right)^{-1}\dot{r}^2 + r^2\dot{\phi}^2 = -1\text{,}$$
one can immediately derive the effective gravitational potential:
$$\frac{1}{2}(\epsilon^2-1) = \frac{1}{2}\dot{r}^2 + \underbrace{\left[-\frac{M}{r}+\frac{h^2}{2r^2} - \frac{Mh^2}{r^3}\right]}_{V_\text{eff}}\text{,}$$
or, if one insists on a formal comparison with the Newtonian effective potential ($L\equiv mh$),
$$E = \underbrace{\frac{1}{2}m\dot{r}^2 + \frac{L^2}{2mr^2} - \frac{GMm}{r}}_{\text{Newtonian form}} - \frac{GML^2}{mr^3c^2}\text{.}$$



Orbit Equation



Differentiation of the above effective potential gives
$$\ddot{r} + \frac{M}{r^2} - \frac{h^2}{r^3} + \frac{3Mh^2}{r^4} = 0\text{.}$$
In terms of $u \equiv 1/r$, with prime denoting differentiation with respect to $\phi$,
$$u'' = \frac{\mathrm{d}\tau}{\mathrm{d}\phi}\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\frac{\mathrm{d}\tau}{\mathrm{d}\phi}\dot{u}\right) = \frac{r^2}{h}\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\frac{r^2}{h}\left(-r^{-2}\dot{r}\right)\right) = -\frac{\ddot{r}r^2}{h^2}\text{,}$$
and this gives, after multiplication through by $-r^2/h^2$,
$$u'' + u = \frac{M}{h^2} + 3Mu^2\text{.}$$
However, there is really no need to consider a second-order equation at any point; there's a simpler first-order one in terms of $V \equiv V_\text{eff} - h^2/2r^2$, the effective potential sans the centrifugal term:
$$\begin{eqnarray*}
\frac{2}{h^2}\left[\frac{E}{m}-V\right] &=& \frac{\dot{r}^2}{h^2} + \frac{1}{r^2}
\\&=& \frac{1}{r^4}\left[\frac{\mathrm{d}r}{\mathrm{d}\phi}\right]^2 + u^2
\\&=& (u')^2 + u^2\text{.}
\end{eqnarray*}$$
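
As a numerical illustration, here is a small Python sketch (the orbital parameters are arbitrary test values I chose) that integrates the second-order equation $u'' + u = M/h^2 + 3Mu^2$ and compares the perihelion advance per orbit against the standard first-order estimate $6\pi M^2/h^2$:

    import numpy as np
    from scipy.integrate import solve_ivp

    M, a, e = 1.0, 100.0, 0.1   # geometrized units G = c = 1; a ~ 100M, mild eccentricity
    h2 = M * a * (1 - e**2)     # Newtonian h^2 for this (a, e); fine for a test orbit

    def rhs(phi, y):
        """y = (u, u'); the orbit equation u'' + u = M/h^2 + 3 M u^2."""
        u, up = y
        return [up, M / h2 + 3 * M * u**2 - u]

    u0 = 1.0 / (a * (1 - e))    # start at perihelion: u maximal, u' = 0
    sol = solve_ivp(rhs, (0, 4 * np.pi), [u0, 0.0],
                    dense_output=True, rtol=1e-10, atol=1e-12)

    phi = np.linspace(0, 4 * np.pi, 400_001)
    u = sol.sol(phi)[0]
    peaks = np.where((u[1:-1] > u[:-2]) & (u[1:-1] > u[2:]))[0] + 1  # interior maxima

    print(phi[peaks[0]] - 2 * np.pi)  # numerical perihelion advance per orbit
    print(6 * np.pi * M**2 / h2)      # first-order estimate, ~0.19 rad here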




Walter derived an approximate relationship assuming a circular orbit. Goldstein focused on deriving an orbit-averaged expression for perihelion precession. On re-examining these texts it seems to me that GR provides more than just an orbit-averaged approximation.
...
Walter presents the following equation for a GR orbit (Schwarzschild model):
$$u''_\theta + u_\theta = \frac{\mu}{h^2} + \frac{3\mu}{c^2}\,u_\theta^2$$




One can immediately see that Walter's equation is the second-order equation above, just in ordinary units rather than $G = c = 1$. I don't know what Walter's argument is (I'm willing to bet the approximation comes from substituting a circular-orbit value for $L^2$ or $h^2$ somewhere, though), but that particular relationship holds exactly for massive test particles in Schwarzschild spacetime. It does not even have to be a bound orbit, although of course if one is interested in precession specifically, it would have to be at least bound for precession to make sense. Lightlike geodesics are described by nearly the same equation, just without the $M/h^2$ term.



Furthermore, we can also restate it as
$$u'' + u = \frac{M}{h^2}\left[1 + 3\frac{h^2}{r^2}\right] \leadsto \frac{\mu}{h^2}\left[1+3\frac{h^2}{r^2 c^2}\right]\text{,}$$
which after substitution of $V_t\equiv r\dot{\phi} = h/r$ is what you have.



Conclusion




... so an alternative, more palatable, distance-dependent form of the ratio of accelerations ... would be:
$$1\;\text{to}\;\frac{3h^2}{c^2\,r_\theta^2}\text{.}$$
The GR/Schwarzschild equations relate to proper time and Schwarzschild radial distance, not their Newtonian equivalents, so strictly the ratio of accelerations is still an approximation.



Is this analysis valid, or have I missed something?




It is mostly valid, but I would like to caution you on several points regarding the way you frame the problem and interpret the result, although you are likely already aware of some of them:



  1. The Schwarzschild time coordinate $t$ is quite different from the proper time $\tau$. The former is a special coordinate in which the Schwarzschild geometry is time-independent. It defines the worldlines of a family of observers that are stationary with respect to the geometry, and its scaling matches that of a stationary observer at infinity. On the other hand, proper time is simply the time measured along some particular worldline; in this context, by the orbiting test particle.

  2. The Schwarzschild radial coordinate $r$ is not a radial distance. It could be called an areal radius in the sense that it is chosen to make a sphere of constant $r$ have area exactly $4\pi r^2$, but usually it is simply called the Schwarzschild radial coordinate. In the Schwarzschild coordinate chart, the radial distance between Schwarzschild radial coordinates $r = r_0$ and $r = r_1$ would be given by
    $$D = \int_{r_0}^{r_1}\frac{\mathrm{d}r}{\sqrt{1-\frac{2GM}{rc^2}}}\text{,}$$
    and would be the distance one would measure if one slowly crawled along the radial direction from $r = r_0$ to $r = r_1$ with some ideal meter-stick, in the limit of zero speed. Of course, $r$ could serve as an approximation to the radial distance in appropriate contexts, but the point is that not only does $r$ fail to be the Newtonian radial distance, it's not actually the 'Schwarzschild radial distance' either.


  3. Acceleration is a bit of a loaded word here. If we mean the second derivative of our radial coordinate with respect to proper time, then no, $\ddot{r}_\text{GTR}/\ddot{r}_\text{Newtonian}$ does not simplify quite that nicely, but you can calculate it from the above anyway. On the other hand, if we mean the second derivative of the inverse radial coordinate with respect to the azimuthal angle, then yes, the above is correct.


But then it doesn't really make sense to actually call it 'acceleration', does it? This explains (if your previous question was accurate in this phrasing) why Walter uses the vaguer term 'effects' when talking about the above ratio.



Instead (once again using the intentional conflation between $r$, $\tau$ and their Newtonian counterparts as an approximation or analogy), it would probably be better to simply think of the Schwarzschild geometry as introducing a new term in the potential that is analogous to a quadrupole moment, which would also put a $\propto 1/r^3$ term into the potential, with the corresponding Newtonian equation being
$$(u')^2 + u^2 = \frac{2}{h^2}\left(\frac{E}{m} - \Phi(u)\right)\text{.}$$
Both the effective potential and the first-order equation in $u$ provide a much more straightforward analogy between the Newtonian and Schwarzschild cases.



This is actually pretty interesting: if one assumes that the Sun does indeed have a quadrupole moment, e.g., caused by solar oblateness, then one can easily account for the perihelion advance of Mercury. However, because this is simply an analogy, blaming Mercury's behavior on an actual quadrupole would simultaneously mess up the behavior of the other planets (since the new term depends on orbital angular momentum) and be even more inconsistent for orbits outside the equatorial plane (since actual oblateness would make the quadrupole term depend on the zenith angle, whereas GTR's does not).



It is also possible to think of the Schwarzschild geometry itself as a scalar field, which we can similarly decompose into spherical harmonic components. Naturally, like most of the above, this peculiarity is specific to the niceness of the spherically symmetric vacuum.

Tuesday 9 February 2010

set theory - Non Lebesgue measurable subsets with "large" outer measure

Yes, I believe so. Subsets of a null set $A$ (i.e., $m(A)=0$) are not necessarily measurable, but will obviously still have outer measure $0$. So given any measurable set $E$, you "should" (i.e., I think so, but am not sure) be able to find a non-measurable subset $S$ of a null set $A$ inside $E$, and remove it. Since $m(E)=m(E-S)+m(S)$ and $m(S)$ is undefined, $m(E-S)$ must be undefined as well, while we still have $m^{\ast}(E)=m^{\ast}(E-S)+m^{\ast}(S)=m^{\ast}(E-S)$.

black hole - Accretion disks - why are they disk-shaped, rather than spherical?

There are two different effects here, and they're both related to viscous forces in the accreting matter.



First, if the infalling matter has some nonzero angular momentum vector $\mathbf{L}$, then consider the plane perpendicular to this vector. Due to conservation of $\mathbf{L}$, we can't get rid of the rotation in this plane, but the component of the angular momentum parallel to the plane is zero. As the accreting matter heats up and radiates away energy, that energy is taken from the gravitational potential, which is lowered by bringing the matter particles closer together. Thus, such interactions tend to flatten the matter into the plane orthogonal to its angular momentum.



But there's something more that happens for rotating black holes: a combination of the hole's gravitomagnetic field and viscous forces in the matter drives the matter to the plane of the black hole's rotation, regardless of its initial angular momentum. This is called the Bardeen-Petterson effect [1]:




In an accretion disk, viscosity causes each ring of gas to spiral inward toward the hole with a radial speed $|u^r|\ll\text{(orbital velocity)} = (M/r)^{1/2}$. If this were the only effect of viscosity, the warping of the disk would consist of a differential precession. Successive rings, each of smaller radius than the preceding, would have interacted longer with the [gravitomagnetic] field and thus would have precessed through successively larger angles... . As a result the disk would be radially twisted, but it would not be driven into the equatorial plane. However, the precession changes the disk's shear $\sigma_{jk}$ and thus changes the viscous forces; and these altered forces drive the disk toward the equatorial plane of the black hole.




This is only significant closer to the black hole, at $r\lesssim 100M$. At larger radii, the disk shape is instead set by the direction of the angular momentum of the matter.




[1] Black Holes: The Membrane Paradigm, ed. by Kip S. Thorne, Richard H. Price, and Douglas A. Macdonald.

Sunday 7 February 2010

books - Free, high quality mathematical writing online?

I often use the internet to find resources for learning new mathematics and due to an explosion in online activity, there is always plenty to find. Many of these turn out to be somewhat unreadable because of writing quality, organization or presentation.



I recently found out that "The Elements of Statistical Learning' by Hastie, Tibshirani and Friedman was available free online: http://www-stat.stanford.edu/~tibs/ElemStatLearn/ . It is a really well written book at a high technical level. Moreover, this is the second edition which means the book has already gone through quite a few levels of editing.



I was quite amazed to see a resource like this available free online.



Now, my question is, are there more resources like this? Are there free mathematics books that have it all: well-written, well-illustrated, properly typeset and so on?



Now, on the one hand, I have been saying 'book' but I am sure that good mathematical writing online is not limited to just books. On the other hand, I definitely don't mean the typical journal article. It's hard to come up with good criteria on this score, but I am talking about writing that is reasonably lengthy, addresses several topics and whose purpose is essentially pedagogical.



If so, I'd love to hear about them. Please suggest just one resource per comment so we can vote them up and provide a link!

nt.number theory - Definition and meaning of the conductor of an elliptic curve

Let me complete the story a little bit. The conductor measures the ramification of the Galois group of the local field acting on the Tate module of the elliptic curve. The formal definition is given in Serre's book, as Jordan said, in Buhler's text in the link given by Rob, and also in Silverman's second volume on elliptic curves.



The conductor is a non-negative integer involved in the $L$-function of $E$. Its $p$-part $f_p$ vanishes iff the Tate module is unramified (the inertia group of ${\mathbb Q}_p$ acts trivially). Otherwise it has two parts: a tame one, which depends solely on the reduction type of the Néron model of $E$ (I think this is proved in Serre-Tate's paper "Good reduction of abelian varieties").



The second part (the wild one) of the conductor is the Swan conductor. It is the biggest headache. It vanishes if and only if the $p$-Sylow subgroup acts trivially on the Tate module. In very simple cases, it can be computed directly. In general, it is related to the invariants of $E$ given by Tate's algorithm: the conductor exponent $f_p$ is given by Ogg's formula:
$$f_p=\nu_p(\Delta) - n + 1$$
where $\Delta$ is the discriminant of a minimal Weierstrass equation of $E$, and $n$ is the number of geometric irreducible components of the fiber at $p$ of the minimal regular projective model of $E$ over $\mathbb Z$ (the fiber at $p$ is a projective, possibly reducible, curve over $\mathbb F_p$, and $n$ is computed over the algebraic closure of $\mathbb F_p$). In Buhler's text, "geometric" is missing.



Tate's algorithm gives $\Delta$ and $n$, and a computer can find them very quickly. So everybody is happy.
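For instance, here is a minimal sketch in Sage (assuming a Sage installation; the curve 11a1 is just a convenient example):

    # Sage session; in plain Python one would first need: from sage.all import *
    E = EllipticCurve([0, -1, 1, -10, -20])  # the curve 11a1
    print(E.conductor())          # 11
    print(E.discriminant())       # -161051 = -11^5, so nu_11(Delta) = 5
    print(E.kodaira_symbol(11))   # I5: 5 geometric components, f_11 = 5 - 5 + 1 = 1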



But Ogg's formula, stated in his paper from the late 1960s, was not fully proved. He checked the equality by a case-by-case analysis. In residue characteristic 2, he said ''for the sake of simplicity, we will work in equal characteristic''! We know that equal characteristic is a kind of limit of mixed characteristic (when the absolute ramification index tends to infinity); of course, this hypothesis simplifies the computation a lot, but it does not give any clue for the mixed characteristic case (e.g. $\mathbb Q_p$). While this formula was widely used in computer programs, and often even used as a definition of the conductor (!), some people were aware of the incompleteness of the proof. For example, Serre said so in seminars. This was also pointed out in the paper of Lockhart, Rosen and Silverman bounding conductors of abelian varieties (J. Alg. Geometry).



This situation was repaired in 1988 in a masterful paper of Takeshi Saito. Let $R$ be a d.v.r. with perfect residue field, let $C$ be a projective, smooth and geometrically connected curve of positive genus over the field of fractions of $R$, and let $X$ be the minimal regular projective model of $C$ over $R$. One defines the Artin conductor ${\rm Art}(X/R)$, which turns out to be $f+n-1$ with the same meaning as above ($f$ is the conductor associated to the Jacobian of $C$). Saito proved that
$${\rm Art}(X/R)=\nu(\Delta)$$
where $\Delta\in R$ is the ''discriminant'' of $X$, which measures the defect of a functorial isomorphism involving powers of the relative dualizing sheaf of $X/R$. When $C$ is an elliptic curve, one can prove that $\Delta$ is actually the discriminant of a minimal Weierstrass equation over $R$, and the trick is done! This paper of Saito was apparently not well known among number theorists. Some more details are given in a text (in French).



So Ogg's formula should be called the Ogg-Saito formula, as some people indeed do.

Saturday 6 February 2010

ag.algebraic geometry - GAGA and Chern classes

My question is as follows.



Do the Chern classes as defined by Grothendieck for smooth projective varieties coincide with the Chern classes as defined with the aid of invariant polynomials and connections on complex vector bundles (when the ground field is $\mathbf{C}$)?



I suppose GAGA is involved here. Could anybody give me a reference where this is shown as detailed as possible? Or is the above not true?



Some background on my question:



Let $X$ be a smooth projective variety over an algebraically closed field $k$. For any integer $r$, let $A^r X$ be the group of cycles of codimension $r$ modulo rational equivalence. Let $AX=\bigoplus A^r X$ be the Chow ring.



Grothendieck proved the following theorem on Chern classes.



There is a unique "theory of Chern classes", which assigns to each locally free coherent sheaf $\mathcal{E}$ on $X$ an $i$-th Chern class $c_i(\mathcal{E})\in A^i(X)$ and satisfies the following properties:



C0. It holds that $c_0(\mathcal{E}) = 1$.



C1. For an invertible sheaf $\mathcal{O}_X(D)$ on $X$, we have that $c_1(\mathcal{O}_X(D)) = [D]$ in $A^1(X)$.



C2. For a morphism of smooth quasi-projective varieties $f:X\longrightarrow Y$ and any positive integer $i$, we have that $f^\ast(c_i(\mathcal{E})) = c_i(f^\ast(\mathcal{E}))$.



C3. If $$0\longrightarrow \mathcal{E}^\prime \longrightarrow \mathcal{E} \longrightarrow \mathcal{E}^{\prime\prime} \longrightarrow 0$$ is an exact sequence of vector bundles on $X$, then $c_t(\mathcal{E}) = c_t(\mathcal{E}^\prime)c_t(\mathcal{E}^{\prime\prime})$ in $A(X)[t]$.



So that's how it works in algebraic geometry. Now let me sketch the complex analytic case.



Let $E\longrightarrow X$ be a complex vector bundle. We are going to associate certain cohomology classes in $H^{\mathrm{even}}(X)$ to $E$. The outline of this construction is as follows.



Step 1. We choose a connection $\nabla^E$ on $E$;



Step 2. We construct closed even-graded differential forms with the aid of $\nabla^E$;



Step 3. We show that the cohomology classes of these differential forms are independent of $\nabla^E$.



Let us sketch this construction. Let $k= \textrm{rank}(E)$. Let us fix an invariant polynomial $P$ on $\mathfrak{gl}_k(\mathbf{C})$, i.e. $P$ is invariant under conjugation by $\textrm{GL}_k(\mathbf{C})$.



Let us fix a connection $\nabla^E$ on $E$. We denote its curvature by $R^E = (\nabla^E)^2$. One shows that $$R^E \in \mathcal{C}^\infty(X,\Lambda^2(T^\ast X)\otimes \textrm{End}(E)).$$ That is, $R^E$ is a $2$-form on $X$ with values in $\textrm{End}(E)$. Define $$P(E,\nabla^E) = P(-R^E/{2i\pi}).$$ (This is well-defined.)



The Chern-Weil theorem now says that:



The even-graded form $P(E,\nabla^E)$ is a smooth complex differential form which is closed. The cohomology class of $P(E,\nabla^E)$ is independent of the chosen connection $\nabla^E$ on $E$.



Choosing $P$ suitably, we get the Chern classes of $E$ (by definition). These are cohomology classes. In order to show the equivalence of these "theories", one is forced to take the leap from the Chow ring to the cohomology ring.



How does one choose $P$? You just take $P(B) = \det(1+B)$ for a matrix $B$.
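Concretely (a standard expansion, spelled out here for convenience), writing $\det(1+B)$ in terms of traces gives
$$\det(1+B) = 1 + \mathrm{tr}(B) + \tfrac{1}{2}\big((\mathrm{tr}\, B)^2 - \mathrm{tr}(B^2)\big) + \cdots,$$
so with $B = -R^E/{2i\pi}$ the degree-$2$ component is the first Chern form $-\tfrac{1}{2i\pi}\mathrm{tr}(R^E)$, the degree-$4$ component is the second Chern form, and so on.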



Motivation: If one shows the equivalence of these two theories one gets "two ways" of "computing" the Chern character.

Does gravitational lensing cause a black-hole to be the main 'source' of light in a given area?

So the question is: can black holes really be extraordinarily bright due to lensing of background objects?



Let's first also specify that these black holes must certainly be super massive. If not, we're talking about stellar mass black holes (or intermediate mass black holes), and the lensing signal would be much weaker. More mass = more lensing, generally speaking (there are some exceptions, e.g. microlensing, where the lensing is done by stars in our own Milky Way galaxy and the signal is enhanced because the source and lens are aligned nearly perfectly).



This is an important statement to make because virtually every galaxy has been identified with a super massive black hole at its center. The reverse is also true - every super massive black hole is associated with a galaxy. It's also important to note that, realistically speaking, galaxies have total masses of about $\sim 10^{11} - 10^{12} M_{\odot}$, whereas super massive black holes typically have masses of millions to hundreds of millions (maybe even billions) of solar masses. This is a smallish fraction of the total mass. This means that most of the lensing will be done by the galactic halo and not the super massive black hole.



If we modeled the center of the lens galaxy as a point-lens, and the galactic halo as a singular isothermal sphere, the relevant question I would ask is what are their Einstein radii (which is a measure of how effective or efficient they are at lensing) individually, and what are they in combination? Essentially, how much does the existence of a central super massive black hole matter to the system as a whole. Strong lensing features (arcs, rings, or multiple images) generally occur at around the Einstein radius of the object.
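To put rough numbers on this, here is a back-of-the-envelope sketch in Python (the distances, black hole mass, and velocity dispersion below are illustrative assumptions, not measurements of any real system):

    import math

    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m/s
    M_sun = 1.989e30   # kg
    Mpc = 3.086e22     # m

    D_d, D_s = 1000 * Mpc, 2000 * Mpc  # lens and source distances
    D_ds = D_s - D_d                   # crude, but fine for an estimate

    # Point-mass lens: theta_E = sqrt(4GM/c^2 * D_ds / (D_d * D_s))
    M_bh = 1e8 * M_sun
    theta_pm = math.sqrt(4 * G * M_bh / c**2 * D_ds / (D_d * D_s))

    # Singular isothermal sphere: theta_E = 4*pi*(sigma/c)^2 * D_ds / D_s
    sigma = 200e3  # m/s, a typical dispersion for a massive galaxy
    theta_sis = 4 * math.pi * (sigma / c)**2 * D_ds / D_s

    arcsec = math.pi / (180 * 3600)
    print(theta_pm / arcsec, theta_sis / arcsec)  # ~0.02" vs ~0.6"

Even with a generous black hole mass, the point-mass Einstein radius comes out more than an order of magnitude smaller than the halo's, which supports the guess below.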



My best guess:



Quite honestly I don't see the central super massive black hole mattering all that much when it comes to lensing. Many of these mass profile models for the lens galaxy halo are singular, or rise very rapidly to a central core. Furthermore, I've never really heard of a situation where a lone super massive black hole (not associated with a galaxy - call it a 'rogue' smbh if you will) has been found floating around in space to do this sort of lensing. They generally hide at the centers of galaxies, or show themselves only if they're actively accreting material. Correct me if one has been found (maybe it would come from a merging of two smbh's where one is kicked out of the system).

Friday 5 February 2010

interstellar travel - Voyager spacecrafts

Sort of. But not the same system.



Here's a diagram of the directions the two Voyager probes (and a couple of Pioneers) are traveling:



[Diagram: directions of the Voyager and Pioneer probes]



They're also not traveling in a flat plane, as this page says:




Voyager 1 has crossed into the heliosheath and is leaving the solar system, rising above the ecliptic plane at an angle of about 35 degrees at a rate of about 520 million kilometers (about 320 million miles) a year. (Voyager 1 entered interstellar space on August 25, 2012.) Voyager 2 is also headed out of the solar system, diving below the ecliptic plane at an angle of about 48 degrees and a rate of about 470 million kilometers (about 290 million miles) a year.




I was able to find some good information here. Voyager 1 is headed towards Gliese 445. However, it will still be 1.6 light-years away at its closest approach - not exactly close by! It will make that closest approach to the star, currently 17.6 light-years away, in about 40,000 years.
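As a quick sanity check on those figures (rough arithmetic, assuming constant speed):

    ly_km = 9.461e12             # kilometers per light-year
    v = 5.2e8                    # Voyager 1's ~520 million km per year
    print(40_000 * v / ly_km)    # ~2.2 light-years covered in 40,000 years

So the probe itself only covers about 2 light-years in that time; the encounter works out because Gliese 445 is itself moving toward the Sun and closes most of the remaining distance.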



Voyager 2 will go by Ross 248, a red dwarf currently about 10 light-years away; the craft will take roughly 40,000 years to make that pass. Some 256,000 years later, Voyager 2 will come within 4.3 light-years of Sirius.



The probes will have absolutely no power by that time.

Thursday 4 February 2010

Category Theory Construction

For any topological space X, let Pa(X) denote the category with objects |Pa(X)| = X and with morphisms from x to y in Pa(X)(x,y) given by continuous maps A from [0,r] to X with A(0) = x and A(r) = y, where r is a non-negative real number (possibly r = 0). Call r the duration of the path A. The identity morphism at x is the map from [0,0] to X with value x, and the composition of A from x to y with duration r and B from y to z with duration s is the naturally defined map A;B from [0,r+s] to X.
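In code, composition just concatenates parametrizations; here is a minimal Python sketch (representing a path as a (duration, function) pair is my own convention, purely for illustration):

    def compose(A, B):
        # A and B are (duration, map) pairs; A;B runs A first, then B.
        r, f = A   # f is defined on [0, r]
        s, g = B   # g is defined on [0, s]
        assert f(r) == g(0), "endpoint of A must equal start point of B"
        return (r + s, lambda t: f(t) if t <= r else g(t - r))

    # the identity at a point x is the constant path of duration 0:
    identity = lambda x: (0, lambda t: x)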



The homsets Pa(X)(x,y) are "stratified" into disjoint sets of paths with the same duration. Construct a new category strictly containing Pa(X) as follows. Let G be the graph with vertices |G|=X and for objects x, y in G let the set of edges G(x,y) be the disjoint union of the underlying sets of the commutative free monoids generated by paths of the same duration. Let || denote the "parallelism" operation so that if A and B have the same duration then A||B = B||A denotes the freely generated parallelism of A and B. Define the duration of A||B to be the same duration as A and B.



Form the quotient Par(X) of the free category generated by G by the congruence which both restores the composition of Pa(X), and forces the exchange law (A||B);(C||D)=(A;C)||(B;D).



My question is whether this construction is already identified and named, perhaps as part of a more general discussion, including a universal property and an adjoint pair of functors, etc.

dg.differential geometry - Is it possible to capture a sphere in a knot?

Well, here's a proof that the symmetric tetrahedron is not a local minimum - in fact, I claim that it's basically a local maximum!



Fix the positions of three of the vertices $v_1$, $v_2$, $v_3$ of the symmetric tetrahedron. First, let's find the point $p$ on the sphere for which the sum of the lengths of geodesics connecting $p$ to $v_1$, $v_2$, $v_3$ is minimal. By Torricelli's Theorem, this point must either be one of $v_1$, $v_2$, $v_3$, or a point where all edges leaving it meet at $120$ degrees. Thus, $p$ is one of $v_1$, $v_2$, $v_3$, the fourth vertex of the symmetric tetrahedron, or the antipodal point to one of the four points I already mentioned. By direct calculation, we see that $p$ is the antipodal point to the fourth vertex of the symmetric tetrahedron.



Now if we move the fourth vertex to any point $q$ nearby, and draw three great semicircles connecting it to the antipodal point $q'$ and passing through the other three vertices of the symmetric tetrahedron, we see that the sum of the lengths from $q$ is $3\pi$ minus the sum of the lengths from $q'$. Since the sum of the lengths from $q'$ is at least the sum of the lengths from $p$, the sum of the lengths from $q$ is at most the sum of the lengths from the top vertex of the symmetric tetrahedron. Thus, we can basically move the top vertex anywhere we like without increasing the total length of the string.



The next thing you will probably be tempted to try is the symmetric cube (using the same trick to handle the vertices of degree three). In this case, each vertex actually is at a strict local minimum if you hold the other vertices fixed. However, I'm pretty certain that it's possible to move all four vertices on the top face simultaneously either to the top of the sphere or to the equator of the sphere without increasing the total length during the process.



Edit: Here's a proof that we can move the vertices on the top face of the cube all upwards or downwards without increasing the total length. Let $x$ be the angle that the line connecting a vertex on the top face to the center of the sphere makes with the plane through the equator of the sphere. We are going to calculate the total length of all string above the equator as a function of $x$: it is $4x + 4(\mbox{angle between adjacent vertices})$. The dot product of the vectors corresponding to adjacent vertices is $\sin^2(x) = \frac{1-\cos(2x)}{2}$. Letting $a = \cos(2x)$, we see that the total length above the equator is $2\arccos(a) + 4\arccos(\frac{1-a}{2})$. Thus, since $\arccos$ is a concave function for $a$ between $0$ and $1$, that total length achieves its maximum when $a = \frac{1-a}{2}$, i.e. $\cos(2x) = a = \frac{1}{3}$, which is the initial angle we started out with on the cube.
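One can sanity-check that maximum numerically; a quick sketch using the formula above:

    import numpy as np

    # total string length above the equator as a function of a = cos(2x)
    a = np.linspace(0.0, 1.0, 100001)
    L = 2 * np.arccos(a) + 4 * np.arccos((1 - a) / 2)
    print(a[np.argmax(L)])  # ~0.3333, i.e. cos(2x) = 1/3, as claimed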

ag.algebraic geometry - Do pushouts along universal homeomorphisms exist?

References and background on universal homeomorphisms



Definition [EGA I (2d ed.) 3.8.1]. A morphism $f:V\to U$ is a universal homeomorphism if for any morphism $U'\to U$, the pullback $V\times_U U'\to U'$ is a homeomorphism.



Theorem [EGA IV 18.12.11]. A morphism is a universal homeomorphism if and only if it is surjective, integral, and radicial.



Theorem ["Topological invariance of the étale topos," SGA I Exp IX, 4.10 and SGA IV Exp. VIII, 1.1] If $f:Vto U$ is a universal homeomorphism, then the induced morphism $f:V_{textrm{ét}}to U_{textrm{ét}}$ of the small étale topoi is an equivalence.



General examples. Any nilimmersion, any purely inseparable field extension (or any base change thereby), the geometric Frobenius of an $\mathbf{F}_p$-scheme [SGA V Exp. XIV=XV, § 1, No. 2, Pr. 2(a)].



Theorem. Suppose $X$ a reduced scheme with finitely many irreducible components. Denote by $X'$ its normalization. Then the natural morphism $X'to X$ is a universal homeomorphism if and only if $X$ is geometrically unibranch.



Specific example. Suppose $k$ an algebraically closed field of characteristic $2$. Consider the subring $k[x^2,xy,y]\subset k[x,y]$. The induced morphism



$$\mathrm{Spec}\,k[x,y]\to\mathrm{Spec}\,k[x^2,xy,y]$$



is a universal homeomorphism.



Question




Do pushouts along universal homeomorphisms exist in the category of schemes?




In more detail. Suppose $f:V\to U$ a universal homeomorphism, and suppose $p:V\to W$ a morphism. Everything here is a scheme; I can assume $W$ quasicompact and quasiseparated, but I have no control over the map $V\to W$. Now of course I can construct the pushout $P$ of $V\to U$ along $V\to W$ as a locally ringed space with no trouble (just take the underlying space of $W$ along with the fiber product $O_W \times_{p_{\star}O_V}p_{\star}O_U$), but I can't show that $P$ is a scheme. Is it?



Thoughts



Of course the key point here is that $f$ is a universal homeomorphism, not just some run-of-the-mill morphism. So one can try to treat the cases where $f$ is schematically dominant or a nilimmersion separately.



Update



If $f$ is a nilimmersion, then I now see how to prove this completely. I still have no idea how to proceed in the schematically dominant case.



[EDIT: I removed the additional question.]

Wednesday 3 February 2010

ct.category theory - What's the "correct" smooth structure on the category of manifolds?

As will become clear, this is in some sense a follow-up to my earlier question Why should I prefer bundles to (surjective) submersions?. As with that one, I hope that it's not too open-ended or discussion-y. If y'all feel it is too discussion-y, I will happily close it.



Let $\rm Man$ be the category of smooth (finite-dimensional) manifolds. I can think of (at least) two natural "smooth structures" on $\rm Man$, which I will outline. My question is whether one of these is the "right" one, or if there is a better one.



I should mention first of all that there are many subtly different definitions of "smooth structure" — see e.g. n-Lab: smooth space and n-Lab: generalized smooth space and the many references therein — and I don't know enough to know which to prefer. Moreover, I haven't checked that my proposals match any of those definitions.
In any case, the definition of "smooth structure" that I'm happiest with is one where I only have to tell you what all the smooth curves are (and these should satisfy some compatibility condition). So that's what I'll do, but I'm not sure if they do satisfy the compatibility conditions. Without further ado, here are two proposals:



  1. A smooth curve in $\rm Man$ is a fiber bundle $P \to \mathbb R$.

  2. A smooth curve in $\rm Man$ is a submersion $Y \to \mathbb R$.

Then given a manifold $M$, we can make it into a category by declaring that it has only identity morphisms. Then I believe that the smooth functors $M \to {\rm Man}$ under definition 1 are precisely the fiber bundles over $M$, whereas in definition 2 they are precisely the submersions over $M$.



(Each of these claims requires checking. In the first case, it's clear that bundles pull back, so all bundles are smooth functors, and so it suffices to check that if a surjective submersion to the disk is trivializable over any curve, then it is trivializable. In the second case, it's clear that if a smooth map restricts to a submersion over each curve, then it is a submersion, so any smooth functor is a submersion, and so one must check that submersions pull back along curves.)



I can see arguments in support of either of these. On the one hand, bundles are cool, so it would be nice if they were simply "smooth functors". On the other hand, we should not ask for smooth functions (i.e. 0-functors) to be necessarily "locally trivializable", as then they'd necessarily be constant. Maybe the correct answer is definition 2, and that bundles are "locally constant smooth functors", or something.



Anyway, thoughts? Or am I missing some other good definition?



Addendum



In the comments, folks have asked for applications, which is very reasonable. The answer is that I would really like to have a good grasp of words like "smooth functor", at least in the special case of "smooth functor to $\rm Man$". Of course, Waldorf and Schreiber have explained these words in certain cases in terms of local gluing data (charts), but I expect that a more universal definition would come directly from a good notion of "smooth structure" on a category.



Here's an example. Once we have a smooth structure on $\rm Man$, we can presumably talk about smooth structures on subcategories, like the category of $G$-torsors for $G$ your favorite group. Indeed, for the two definitions above, I think the natural smooth structures on $G\text{-tor}$ coincide: either we want fiber bundles where all the fibers are $G$-torsors, or submersions where all the fibers are $G$-torsors, and in either case we should expect that the $G$ action is smooth. So then we could say something like: "A principal $G$-bundle on $M$ is (i.e. there is a natural equivalence of categories) a smooth functor $M \to G\text{-tor}$", where $M \rightrightarrows M$ is the (smooth) category whose objects are $M$ and with only trivial morphisms. (Any category object internal to $\rm Man$ automatically has a smooth structure.) And if I understood the path groupoid mod thin homotopy $\mathcal P^1(M) \rightrightarrows M$ as a smooth category, then I would hope that the smooth functors $\mathcal P^1(M) \to G\text{-tor}$ would be the same as principal $G$-bundles on $M$ with connection. Functors from the groupoid of paths mod "thick" homotopy should of course be bundles with flat connections. Again, Schreiber and Waldorf have already defined these things categorically, but their definition is reasonably long, because they don't have smooth structures on $\rm Man$ that are strong enough to let them take advantage of general smooth-space yoga.



Here's another example. When I draw a bordism between manifolds, what am I actually drawing? I would like to say that I'm drawing something close to a "smooth map $[0,1] \to {\rm Man}$". It's not quite that, by my definitions — if you look at the pair of pants, for instance, at the "crotch" it is not a submersion to the interval. So I guess there's at least one more possible definition of "smooth curve in $\rm Man$":



  • A smooth curve in $\rm Man$ is a smooth map $X \to \mathbb R$.

But this, I think, won't be as friendly a definition as those above: I bet that it does not satisfy the compatibility axioms that your favorite notion of "smooth space" demands.

Tuesday 2 February 2010

astrophotography - Why does this twilight sky flat field have a grid of dark pixels?

The image below is a small region of a twilight flat field. A grid of darker pixels, with cells measuring 728×36 pixels, can be seen.
[Image: flat field with a dark grid artifact]



The camera used was the SBIG STL-6303 according to the FITS headers, which uses a Kodak KAF-6303E CCD according to the online data sheet.



The CCD dimensions, these cell dimensions, and all the binning options are all perfectly divisible, so I'm assuming this artifact is due to the structure of the CCD.



I plotted the mean pixel value of each row to make sure this wasn't an artifact from the image viewing software. The plot clearly shows a drop every 36 pixels.
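For anyone wanting to reproduce that diagnostic, here is a minimal sketch (the filename is a placeholder; assumes astropy and matplotlib are available):

    import matplotlib.pyplot as plt
    from astropy.io import fits

    data = fits.getdata("twilight_flat.fit").astype(float)
    row_means = data.mean(axis=1)   # mean pixel value of each row
    plt.plot(row_means)
    plt.xlabel("row")
    plt.ylabel("mean pixel value")
    plt.show()                      # dips every 36 rows reveal the grid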



So, I'm hoping for some confirmation: is this decreased sensitivity due to the structure of the CCD?



If so, what about the CCD structure actually causes this?

expansion - What is CMB radiation doing to the universe?


It is said that due to the CMB radiation our universe has been expanding?




No, it's the other way round. The CMB is the result of the expansion of space. In the initial stages after the birth of the universe, there was hot plasma everywhere and light could not travel freely; the universe was opaque at that time. But slowly, as the universe cooled and atoms began to form, light could travel farther. This light got redshifted by the expansion of space and can now be detected as the CMB radiation. It's in the microwave region of the spectrum, at a temperature just above absolute zero (about 2.7 K).
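Quantitatively (standard numbers, not part of the original answer): the CMB temperature scales with redshift as
$$T(z) = T_0\,(1+z), \qquad T_0 \approx 2.7\ \mathrm{K},$$
so light emitted at decoupling, around $z \approx 1100$ when the plasma was at roughly $3000\ \mathrm{K}$, is observed today at about $2.7\ \mathrm{K}$.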




What do we mean by Dark Energy?




After Hubble's discovery, we came to the conclusion that the Universe is expanding. We expected the expansion to slow down as time progresses, but what we observed was that the expansion is accelerating, and it is still doing so. Physicists and cosmologists did not have any explanation for this, and so its cause was named "dark energy": dark in the sense that it is unknown.

Monday 1 February 2010

soft question - Using LaTeX, how can I restate a theorem, with the same theorem number, later in a paper?

I am writing a paper, and would like to state the main results in the Introduction. Then, when I come to prove the main results in later sections, I would like to restate the theorem before proceeding with the proof. (We could debate about whether that is good style, but it is what my co-author and I want to do in this instance.) I would prefer to restate the theorem with the original theorem number - so "Theorem 1.1" again, rather than "Theorem 4.1," when restated. Does anyone have a good solution for how to do this in LaTeX?



I have done this a couple times before, but never with an elegant solution. The best solution I have come up with is to create a different theoremstyle for main results, and call the main results "Theorem A", etc, in the introduction; and then use yet another theoremstyle to reproduce the results, again "Theorem A" later in the paper. This method works so long as you prove the results in the order that you discuss them in the introduction, but is inelegant.



I also realize that hard-core TeX users might tell me just to use plain TeX, where you have much more control. That would also take more time and energy than I can spare to finish this paper!




What is a good way to re-use Theorem numbers, to repeat a theorem, using LaTeX?



More technically, how can I get a Theorem to use a ref for the number, rather than referring automatically to a counter?



This was closed, but has been answered with two different solutions on tex.stackexchange.com:



Scott Morrison's question, with a couple solutions: http://tex.stackexchange.com/questions/422/how-do-i-repeat-a-theorem-number



My question, with a different answer: http://tex.stackexchange.com/questions/1688/using-latex-how-can-i-restate-a-theorem-with-the-same-theorem-number-later-in
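For convenience, here is the gist of one of the linked approaches (a sketch using amsthm; the environment and label names are illustrative):

    \usepackage{amsthm}
    \newtheorem{thm}{Theorem}[section]
    % an unnumbered style whose "number" is supplied by hand via \ref
    \newtheorem*{rethm}{Theorem \ref{thm:main}}

    % In the introduction:
    \begin{thm}\label{thm:main}
    Every widget is a gadget.
    \end{thm}

    % Later, before the proof, restated with the original number:
    \begin{rethm}
    Every widget is a gadget.
    \end{rethm}

The thmtools package's restatable environment automates essentially this pattern; see the linked answers for the details.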