Tuesday, 31 July 2012

expansion - Is dark energy evenly spaced throughout Universe?

If dark energy varied by location, then plots of Type Ia supernova brightness vs. redshift should vary depending on which direction you look in the sky. AFAIK, that's not the case. For example, although sky coordinates are not accounted for, there's not a lot of scatter in this plot:







In the relationship between the distance and redshift of Type 1a supernovae, the data (points) agree with the equation in which light propagates through the expanding universe on the least-time path (solid line). Image credit: Annila. ©2011 Royal Astronomical Society




Read more at: http://phys.org/news/2011-10-supernovae-universe-expansion-understood-dark.html#jCp

computer science - post correspondence problem

As Tsuyoshi said, it doesn’t make sense to search for an undecidable instance of a problem. It’s only the problem itself that can be undecidable.



In particular, for every instance of PCP (or any other problem for that matter) there trivially exists an algorithm that gives the correct answer for that particular instance. If we’re dealing with the decision version of the problem, it’s either the algorithm that always answers “yes”, or the algorithm that always answers “no” (granted, this is not a constructive proof).



On the other hand, you might find specific instances of PCP without a known answer, for example by exploiting any open problem of mathematics and the fact that the halting problem reduces to PCP, say via a many-one reduction R.



Consider the Turing machine M that searches for a proof of the Riemann hypothesis by enumerating all proofs, and halts when it finds it. If RH is provable, this machine will halt in a finite amount of time, otherwise it will run forever. You can use the reduction from the halting problem to construct a PCP instance R(M) = x. Now, by deciding whether x is a positive or negative instance of PCP, you also decide RH. But that’s an open problem, and so the status of x also is.
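To make the construction concrete, here is a small sketch (my own illustration, not part of the original answer) of the machine M; `is_valid_proof_of_RH` is a hypothetical placeholder for a proof checker in some fixed formal system:

```python
from itertools import count, product

def all_strings(alphabet="01"):
    """Enumerate every finite string over the alphabet, shortest first."""
    for n in count(0):
        for s in product(alphabet, repeat=n):
            yield "".join(s)

def is_valid_proof_of_RH(candidate):
    """Hypothetical placeholder: a real checker would verify `candidate` against
    the axioms and inference rules of some fixed formal system."""
    raise NotImplementedError

def M():
    """Halts (returning a proof) iff the Riemann hypothesis is provable."""
    for candidate in all_strings():
        if is_valid_proof_of_RH(candidate):
            return candidate
```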

Monday, 30 July 2012

ag.algebraic geometry - Flatness of modules via Tor

As far as I understand, this is false. Here is an example (familiar to $D$-module people):
$A=k[x,y]$; $M=k[a,b]$ on which $x$ (resp. $y$) acts as $\frac{d}{da}$ (resp. $\frac{d}{db}$).
Since the action of both $x$ and $y$ is locally nilpotent, $M$ is supported at the origin of
$\operatorname{Spec}(A)$. Therefore, the only non-zero Tor's of the kind you consider are $\operatorname{Tor}_i(M,k)$, where both $x$ and $y$ act on $k$ by zero. These Tor's are easy to compute (they amount to computing de Rham cohomology of the affine plane with coordinates $a$ and $b$), and they are non-zero precisely when $i=2$. (Essentially, the calculation repeats the proof of Kashiwara's Lemma.)
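For concreteness, here is a minimal sketch of that computation (my addition, not in the original answer; $k$ is assumed to have characteristic $0$). Using the Koszul resolution of $k$ over $A=k[x,y]$,

$$
0 \to A \xrightarrow{\ (-y,\ x)\ } A^{\oplus 2} \xrightarrow{\ (x,\ y)\ } A \to k \to 0,
$$

the groups $\operatorname{Tor}^A_\bullet(M,k)$ are computed by the complex

$$
0 \to M \xrightarrow{\ m \mapsto \left(-\frac{dm}{db},\ \frac{dm}{da}\right)\ } M^{\oplus 2} \xrightarrow{\ (m_1,m_2) \mapsto \frac{dm_1}{da} + \frac{dm_2}{db}\ } M \to 0,
$$

which, up to regrading, is the algebraic de Rham complex of the plane with coordinates $a,b$. Its cohomology is $k$ in degree $0$ and zero elsewhere, so $\operatorname{Tor}_2(M,k)=k$ while $\operatorname{Tor}_0=\operatorname{Tor}_1=0$.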

complex geometry - question about kahler cone of a compact kahler manifold

Hi to all!



I'm studying complex geometry from Huybrechts' book "Complex Geometry"
and I have problems with an exercise; can anyone please help me?



I define the Kähler cone of a compact Kähler manifold $X$ as the set



$K_X \subseteq H^{(1,1)}(X)\cap H^2(X,\mathbb{R})$
of Kähler classes. I have to prove that $K_X$ doesn't contain any line
of the form $\alpha + t \beta$ with $\alpha , \beta\in H^{(1,1)}(X)\cap H^2(X,\mathbb{R})$
and $\beta\neq 0$ (I identify classes with representatives).



This is what I thought: I know that a form $\omega \in H^{(1,1)}(X)\cap H^2(X,\mathbb{R})$
that is positive definite (locally of the form $\frac{i}{2}\sum_{i,j} h_{ij}(x)\,dz^i\wedge d\overline{z}^{j}$ where $(h_{ij}(x))$ is a positive definite hermitian matrix $\forall x\in X$) is the Kähler form associated to a Kähler structure. Supposing $\alpha$ is a Kähler class, I want to show that there is a $t\in\mathbb{R}$ such that $\alpha + t \beta$ is not a Kähler class. Since $\beta\neq0$ I can find a $t\in\mathbb{R}$ such that $\alpha + t \beta$ is not positive definite any more; now I want to prove that there is no form $\omega \in H^{(1,1)}(X)\cap H^2(X,\mathbb{R})$ such that $\omega=d\lambda$ with $\lambda$ a real 1-form and $\omega=\overline{\partial}\mu$ with $\mu$ a complex (1,0)-form (what I'd like to prove is: correcting representatives of cohomology classes with an exact form, I don't get a Kähler class). From the $\partial\overline{\partial}$-lemma and a little work I know that $\omega=i\partial\overline{\partial}f$ with $f$ a real function. And now (and here I can't go on) I want to prove that I can't have a function $f$ such that $\alpha + t \beta+i\partial\overline{\partial}f$ is positive definite.



Please, if I made mistakes, or you know how to go on, or another way to solve this, tell me.



Thank you in advance.

nt.number theory - Proof of no rational point on Selmer's Curve 3x^3+4y^3+5z^3=0

The "standard" technique for killing the Hasse priniciple for elliptic curves is to show that the Tate-Shafarevich group has a copy of (Z/mZ)^2 for some m - see chapter X in Silverman's the arithmetic of Eliptic curves, both for the theory and examples. All the examples which Silverman presents ar with m = 2. Selmers example requires m = 3, which requires (much) more computations. Poonen has an example
on his web page of a family of elliptic curves violating the Hasse principle, and containing Selmers example, but you'd have to dive through a labirinth of references.

Sunday, 29 July 2012

co.combinatorics - Is there a combinatorial reason that the (-1)st Catalan number is -1/2?

The $n$th Catalan number can be written in terms of factorials as
$$ C_n = \frac{(2n)!}{(n+1)!\, n!}. $$
We can rewrite this in terms of gamma functions to define the Catalan numbers for complex $z$:
$$ C(z) = \frac{\Gamma(2z+1)}{\Gamma(z+2)\, \Gamma(z+1)}. $$
This function is analytic except where $2z+1$, $z+2$, or $z+1$ is a nonpositive integer -- that is, at $z = -1/2, -1, -3/2, -2, \ldots$.



At $z = -2, -3, -4, \ldots$, the numerator of the expression for $C(z)$ has a pole of order 1, but the denominator has a pole of order $2$, so $\lim_{z \to n} C(z) = 0$.



At $z = -1/2, -3/2, -5/2, \ldots$, the denominator is just some real number and the numerator has a pole of order 1, so $C(z)$ has a pole of order $1$.



But at $z = -1$:
- $\Gamma(2z+1)$ has a pole of order $1$ with residue $-1/2$ (as a function of $z$);
- $\Gamma(z+2) = 1$;
- $\Gamma(z+1)$ has a pole of order $1$ with residue $1$.
Therefore $\lim_{z \to -1} C(z) = -1/2$, so we might say that the $-1$st Catalan number is $-1/2$.
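As a quick sanity check (my addition, not part of the original question), one can evaluate $C(z)$ numerically near $z=-1$:

```python
from scipy.special import gamma

def C(z):
    # The Catalan function extended via the Gamma function, as defined above.
    return gamma(2 * z + 1) / (gamma(z + 2) * gamma(z + 1))

for eps in (1e-3, 1e-5, 1e-7):
    print(C(-1 + eps))   # approaches -0.5
```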



Is there an interpretation of this fact in terms of any of the countless combinatorial objects counted by the Catalan numbers?

amateur observing - Can you see city lights on the Moon from Earth?

This is the opposite of another question. That question is about whether you could see cities on Earth if you were standing on the Moon.



Let's say there are cities on the Moon and you're standing on the Earth on a clear night. Could you see the city lights?



If you're looking at Earth cities from the Moon, your line of sight is not affected by atmospheric turbulence because the Moon is airless, so you won't have to put up with the turbulence that makes stars look blurry or twinkly from the Earth. But if you are on Earth, then you have to look through the turbulent atmosphere to see faint lights.



But if you're on the Earth, any city lights on the Moon might look blurry or twinkly because of our atmospheric turbulence. If it helps, imagine yourself at the top of a mountain so there is less atmosphere to look through and hence less turbulence.



In this picture of the Moon (and Venus in the background), imagine cities on the dark part of the Moon's surface facing Earth. Those city lights should be easier to see than cities within the lit crescent.




dg.differential geometry - Two discs with no parallel tangent planes

If I understood your question correctly, I think that the answer is no. In fact, even more is true: if you choose any identification $\varphi \colon \Sigma_1 \to \Sigma_2$ in such a way that the boundaries are compatibly identified, then for any embedding of $\Sigma_1$ and $\Sigma_2$ there is a point $p$ of $\Sigma_1$ such that the tangent plane to $\Sigma_1$ at $p$ is the same as the tangent plane to $\Sigma_2$ at $\varphi(p)$. To see this, let $\mathbb{R}P^2$ denote the real projective plane, the space parameterizing linear subspaces of dimension one in $\mathbb{R}^3$. Suppose by contradiction that you found embeddings with the mentioned property. Let $\gamma \colon \Sigma_1 \cup \Sigma_2 \to \mathbb{R}P^2$ be the function defined by sending $p$ to the linear subspace spanned by $N_p \wedge N_{\varphi(p)}$, where $N_p$ is the normal direction to the image of $p$ and similarly for $N_{\varphi(p)}$, and the wedge product is the usual wedge product in $\mathbb{R}^3$. The function $\gamma$ is well-defined, since the normal directions at $p$ and $\varphi(p)$ are never parallel. But observe that $\gamma(p)$ is always a direction contained in the tangent plane at $p$. Thus, $\gamma$ determines a global vector field on the image of $\Sigma_1 \cup \Sigma_2$ that is never zero. This is obviously impossible!



Note that the above argument is essentially the one used in the Borsuk-Ulam Theorem (http://en.wikipedia.org/wiki/Borsuk–Ulam_theorem).

Friday, 27 July 2012

soft question - Can you prove equivalence without being able to calculate it?

I give you this example, from the upper edge of the propositional calculus hierarchy:



cardinal equivalence:
For each boolean formula, |quantifications| = |assignments|.



Abstractly, the linear induction on n variables is provable using all the basics: True, Nil, union, intersection, +, =, zero, and +1, upon the number of variables. Set cardinality is the principal primitive operation, used in every part of the proof.
The key final lines look roughly like:
|Qa union Qb| + |Qa intersect Qb| = |Qa| + |Qb|,
and so then |Q| = |Qa| + |Qb| = |Pa| + |Pb| = |P|.



The base case on zero variables has two parts, the True case, and the Nil case. The induction proceeds by substituting (True, Nil) for the first variable, obtaining two smaller formulas, called Pa, and Pb. The hypothesis |Q| = |P| applies to the two smaller formulas.
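Here is a quick computational check of the identity (my own sketch, not part of the original post): it enumerates every boolean function on a few variables, counts its satisfying assignments, and counts the quantifier prefixes (one "forall" or "exists" per variable, in order) under which the quantified formula is true.

```python
from itertools import product

def count_sat(f, n):
    """Number of satisfying assignments of a boolean function f on n variables."""
    return sum(1 for bits in product([False, True], repeat=n) if f(bits))

def qbf_true(f, n, quantifiers, partial=()):
    """Is f true under the given prefix of quantifiers ('A' or 'E'), one per variable?"""
    if len(partial) == n:
        return f(partial)
    branches = [qbf_true(f, n, quantifiers, partial + (b,)) for b in (False, True)]
    return all(branches) if quantifiers[len(partial)] == 'A' else any(branches)

def count_valid_quantifications(f, n):
    return sum(1 for qs in product('AE', repeat=n) if qbf_true(f, n, qs))

n = 3
rows = list(product([False, True], repeat=n))
# Loop over all 2^(2^n) boolean functions on n variables, given by their truth tables.
for table in product([False, True], repeat=2 ** n):
    f = lambda bits, t=dict(zip(rows, table)): t[tuple(bits)]
    assert count_valid_quantifications(f, n) == count_sat(f, n)
print(f"|valid quantifications| = |satisfying assignments| for all {2**2**n} functions on {n} variables")
```

The induction described above is visible here: splitting on the first variable, the exists-prefixes count the union and the forall-prefixes count the intersection.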



Okay, so. The equivalence is clearly a fundamental identity.
However.



However, a one-to-one mapping between satisfying assignments and valid quantifications is generally out of the question, as far as I know (else PSpace = NP would nearly follow). Quantifications refer to subsets, subsets of subsets, ..., with arbitrary alternations admitting deeply rich subset structures of the original set of satisfying assignments. So quantifications are "encodable" by something the same size as an assignment,
but the one-to-one map you are asking for is unknown, in general.
I suppose that's different from saying "it does not exist".



And it does exist for the special case of monotone boolean formulas. The map between assignments and valid quantifications is straightforward, and is the most obvious linear map any amateur could attempt to construct between valid quantifications and satisfying assignments. I suggest that the 2CNF-to-2QBF (P to Q) solution map could be studied further, for details on how to construct such a map, or else why it would fail in general. I have not gotten around to that yet.



In general, counting assignments is in a class by itself, called #P. An amateur may presume a class called #Q, too; a professional would give details.



+++



By the way, I am still asking: is this identity known by any other name?
I tried calling it #P=#Q twelve years ago; few knew what #P was,
and #Q was too deep to describe in emails, way back then.

Thursday, 26 July 2012

lo.logic - term equality in algebraic theories

For algebraic theories how relevant is the underlying logic? Is it possible that two terms $s$ and $t$ can be shown to be equal with respect to one set of logical axioms but not necessarily so with another set of logical axioms? From my limited knowledge it seems that the logical axioms shouldn't matter much since playing with terms only requires substitution and reduction using the axioms of the theory.



My question is motivated by the syntactic category construction. I'm reading some notes on categorical logic and the syntactic category has a notion of morphism equality that depends on the theory but no mention is made of the underlying logic and I'm wondering why.



For example: Take the empty theory. The only terms are just variables, so the objects of the syntactic category will be contexts and the morphisms will be tuples of variables, e.g. $x_1:[x_1,x_2]\to[x_1]$, $x_2:[x_1,x_2]\to[x_1,x_2]$, $(x_1,x_2):[x_1,x_2]\to [x_1,x_2]$, etc. Now what can I claim about the arrows $x_1:[x_1,x_2]\to[x_1]$ and $x_2:[x_1,x_2]\to[x_1]$? Is it true that $x_1 = x_2$? If I assume the law of excluded middle then I should be able to claim something about $x_1 = x_2$, but if I don't assume the law of the excluded middle then it seems that I can't make a positive or negative claim about the status of $x_1 = x_2$. Since the theory doesn't imply $x_1 = x_2$, can I infer from this that $x_1 \neq x_2$? I'm probably over-thinking it.

ag.algebraic geometry - Flat locus of $S_{1}$-morphism

Hi, everybody.



Consider an ${\rm S}_{1}$-morphism $f:X\rightarrow S$ of reduced complex spaces. Assume that $f$ is open (universally open in algebraic geometry), equidimensional with pure $n$-dimensional fibers, and surjective. Let $U$ be the flat locus of $f$ (which is a dense open set).



Question: Is it true that $(X-U)\cap X_{s}$ is of codimension 2 in the fiber $X_{s}$?



Remark: We can refer to the Thm 15.2.2, p.226 and Prop 4.7.10 of [EGA].



Thank you very much...

fa.functional analysis - Reducing limits to a canonical form

I was having difficulty in understanding the difference between convergence in probability and almost sure convergence, so I decided to try to reduce them to some sort of canonical form.



  • Convergence in probability:
    $\lim_{n \to \infty} \Pr(|X_n-X| \ge e)=0$

  • Almost sure convergence:
    $\Pr(\lim_{n \to \infty} X_n=X)=1$

After playing around with the figures, I got the following results.



  • Convergence in probability: $\forall e, d, n>N(e,d): dif_x \ge e \text{ with } p < d$

  • Almost sure convergence: $(\forall e, n>N(e): dif_x < e) \text{ with } p=1$

  • Alternative form: $(\forall e, n>N(e): dif_x \ge e) \text{ with } p=0$

A few notes:



  1. Here $dif_x$ means how far the points at this location are from the limit

  2. $N(e,d)$ simply says that we can find a suitable value of $N$, depending on $e$ and $d$, so that this holds

  3. The differences between the two types seem more obvious in this form

So, my questions are:



  1. Is this correct?

  2. Have reductions into this kind of form been studied? If so, where can I learn more about this?
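As an aside (my own illustration, not part of the original question), the standard example separating the two notions is a sequence of independent $X_n \sim \text{Bernoulli}(1/n)$: it converges to $0$ in probability, since $\Pr(X_n = 1) = 1/n \to 0$, but not almost surely, since by Borel-Cantelli $X_n = 1$ happens infinitely often with probability $1$. A small simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_max = 2000, 5000
n = np.arange(1, n_max + 1)
# X_n ~ Bernoulli(1/n), independent across n and across sample paths.
X = rng.random((n_paths, n_max)) < 1.0 / n

# Convergence in probability to 0: P(|X_n - 0| >= e) = 1/n -> 0 for any 0 < e < 1.
for k in (10, 100, 1000, 5000):
    print(f"P(X_{k} = 1) ~ {X[:, k - 1].mean():.4f}   (exact: {1 / k:.4f})")

# Not almost sure convergence: sum(1/n) diverges and the X_n are independent, so by
# Borel-Cantelli X_n = 1 happens infinitely often with probability 1; the event
# "X_n = 0 for all n > N" has probability 0 for every N.  On a finite horizon we can
# only see a shadow of this: the chance of another 1 after time N is 1 - N/n_max.
for N in (100, 1000, 4000):
    p_hat = X[:, N:].any(axis=1).mean()
    print(f"P(some X_n = 1 with {N} < n <= {n_max}) ~ {p_hat:.3f}   (exact: {1 - N / n_max:.3f})")
```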

triangulated categories - Is K(R-Mod) compactly generated when R is an artin algebra?

The answer is in general no - $K(R\text{-}\mathrm{Mod})$ can fail to be well generated even when $R$ is artinian. As you mention $K(R\text{-}\mathrm{Mod})$ is compactly generated if $R$ is of finite representation type. It turns out that the converse holds. This is a result of Jan Šťovíček which occurs as Proposition 2.6 in this paper. The precise result is:



Proposition Let $R$ be a ring. The following are equivalent:



(i) $K(R\text{-}\mathrm{Mod})$ is well generated;



(ii) $K(R\text{-}\mathrm{Mod})$ is compactly generated;



(iii) $R$ is left pure semisimple.



In particular, when $R$ is artinian this occurs precisely when $R$ has finite representation type.

Wednesday, 25 July 2012

higher category theory - What are Jacob Lurie's key insights?

I think one of the key insights underlying derived algebraic geometry and Lurie's treatment of elliptic cohomology is taking some ideas of Grothendieck really seriously. Two manifestations:



1) One of the points of the scheme theory initiated by Grothendieck is the following: if one takes the intersection of two varieties just on the point-set level, one loses information. One has to add the possibility of nilpotents (somewhat higher information) to preserve the information of intersection multiplicities and get the "right" notion of a fiber product. Now one of the points of derived algebraic geometry (as explained very lucidly in the introduction to DAG V) is that for homological purposes this is not really the right fiber product - you need to take some kind of homotopy fiber product. This is because one still loses information by taking quotients - one should add isomorphisms instead and view it on a categorical level. Thus, you can take a meaningful intersection of a point with itself, for example.
This is perhaps an instance where the homological revolution, which went to pure mathematics last century, benefits from a second wave, a homotopical revolution - if I am allowed to overstate this a bit.



2) Another insight of Grothendieck and his school was how important it is to represent functors in algebraic geometry - regardless of what you want at the end. [as Mazur reports, Hendrik Lenstra was once sure that he did want to solve Diophantine equations and did not want to represent functors - and later he was amused that he represented functors to solve Diophantine equations.] And this is Lurie's approach to elliptic cohomology and tmf: Hopkins and Miller showed the existence of a certain sheaf of $E_\infty$-ring spectra on the moduli stack of elliptic curves. Lurie showed that this represents a derived moduli problem (of oriented derived elliptic curves).



Also his solution of the cobordism hypothesis has a certain flavor of Grothendieck: you have to put things in a quite general framework to see the essence. This philosophy also shines quite clearly through his DAG, I think.



Besides, I do not think there is a single key insight in Higher Topos Theory besides the feeling that infinity-categories are important and that you can find analogues to most of classical category theory in quasi-categories. Then there are lots of little (but every single one amazing) insights into how this transformation from classical to infinity-category theory works.

ct.category theory - What is the relationship between t-structure and Torsion pair?

The two notions are related in the sense that they share a common generalization, namely the notion of torsion pair on a pre-triangulated category (this term has at least two meanings, here we mean a category which has compatible left and right triangulations - it covers several cases including triangulated categories and quasi-abelian categories). The reference for this material is



A. Beligiannis and I. Reiten: ''Homological and Homotopical Aspects of Torsion Theories''



which is available from Beligiannis' homepage. In fact one can take the analogy further and consider the analogy between TTF-triples on an abelian category and recollement of triangulated categories.



There is also another connection given by tilting theory. Suppose that $(\mathcal{T},\mathcal{F})$ is a torsion pair on an abelian category $\mathbf{A}$. Then we can obtain a t-structure on $D= D^b(\mathbf{A})$ by setting
$D^{\leq 0} = \{ X\in D \; \vert \; H^i(X)=0 \; \text{for} \; i>0,\ H^0(X)\in \mathcal{T} \}$
and
$D^{\geq 0} = \{ X\in D \; \vert \; H^i(X)=0 \; \text{for} \; i<-1,\ H^{-1}(X)\in \mathcal{F} \}$.
For more information on this (in particular for some characterizations of when the derived category of the heart obtained from this t-structure is equivalent to $D$) one can see "Tilting in Abelian categories and quasitilted algebras" by Dieter Happel, Idun Reiten, Sverre O. Smalø.



I hope that at least goes some of the way toward answering (1) and (2).



As far as (3) is concerned I am not completely sure what to say. Certainly one can reconstruct a quasi-compact quasi-separated scheme from its derived category using the tensor structure, and if the scheme is particularly nice one can use the Serre functor. I am not aware of (or have forgotten if I knew) a way of reconstructing a scheme via t-structures (I guess one can use strictly localizing subcategories which are particularly nice t-structures or take the heart of the standard one). One certainly can't just look at all t-structures - even for $D(\mathbb{Z})$ there is a proper class of t-structures.
In the abelian case the closest thing I can think of is taking the spectrum of indecomposable injectives. This is not directly torsion theoretic but it is true that injectives control hereditary torsion theories in the sense that every hereditary torsion theory in a Grothendieck abelian category has as its torsion class the left orthogonal to some injective object.



A particularly nice special case when one can really make the connection precise is the following (due to Krause). Suppose that $mathbf{A}$ is a locally coherent Grothendieck abelian category i.e., it is a Grothendieck abelian category with a generating set of finitely presented objects and the finitely presented objects form an abelian subcategory. Then one can topologize the spectrum of indecomposable injectives in such a way that there is a bijection between hereditary torsion theories of finite type (those for which the right adjoint to the inclusion also commutes with filtered colimits) and closed subsets of the spectrum.



One last thought for the moment - although one can think of t-structures and torsion theories on abelian categories as common specializations of one more general definition the analogy can be misleading. However, there is a reasonably good analogy between hereditary torsion theories of finite type and smashing subcategories which can be made precise (again this is due to Krause). The heart of this is that every smashing subcategory of a compactly generated triangulated category is generated by an ideal of maps between compact objects. Corresponding to such an ideal there is a hereditary torsion theory of finite type in the category of additive presheaves of abelian groups on the compact objects. Something you may find particularly interesting about this (I certainly do) is that it links the theory of smashing subcategories (and the telescope conjecture) to the spectrum of indecomposable injectives in a nice abelian category.

Can any stars ever form supermassive black holes?

As you say, making black holes quickly in the early Universe is a major unsolved problem in astrophysics. There are various hypotheses, of which two roughly correspond to supermassive stars. All basically involve trying to give the black hole a head start in mass. There isn't really enough time to grow a $100\,M_\odot$ black hole to $10^9\,M_\odot$, so the idea is to rather get something more massive than a few $\times 1000\,M_\odot$. The most recent review I know of offhand is probably Volonteri (2010), but I'm not up to date on the literature.
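To see why time is the issue, here is a rough back-of-envelope sketch of my own (not from the original answer; it assumes Eddington-limited growth with roughly 10% radiative efficiency, i.e. a Salpeter e-folding time of about 45 Myr):

```python
import math

# Assumption: continuous Eddington-limited accretion with ~10% radiative
# efficiency, giving a Salpeter e-folding time of roughly 45 Myr.
t_efold_myr = 45.0
target = 1e9  # solar masses

for seed in (1e2, 1e3, 1e4):
    efolds = math.log(target / seed)
    print(f"seed {seed:8.0f} Msun -> 1e9 Msun: ~{efolds * t_efold_myr:5.0f} Myr "
          f"of uninterrupted Eddington accretion")
```

A $100\,M_\odot$ seed needs on the order of 700 Myr of uninterrupted accretion, which is uncomfortably close to the age of the Universe at the redshifts where luminous quasars are already seen; a more massive seed buys valuable slack.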



Basically, imagine a few hundred thousand solar masses of gas collapsing into a primordial galaxy. As the gas collapses, it potentially fragments, depending on whether or not it can cool efficiently. If so, the fragments can presumably form stars, but in a very dense cluster. Ultimately, either (a) the massive stars in the cluster collapse into black holes that then merge into a larger black hole, which can subsequently accrete its way to supermassiveness (supermass?); or (b) the individual stars first merge in the centre, creating a star of perhaps some thousands of solar masses, which would collapse into a massive black hole that could grow.



If the primordial galactic cloud doesn't fragment, we expect a sort of monolithic collapse. Somewhere in the middle, gas will start to reach hydrostatic equilibrium: a protostar forms. But it's a protostar that's potentially accreting several solar masses of material per year (or even faster). So what happens next depends on whether that rapid infall of material has time to reach local thermal equilibrium with the protostar, or if it just piles up on the outside.



If the former, then the protostar can become very large: thousands or tens of thousands of solar masses, which, like the supermerger product above, presumably leaves a massive black hole in the end. If the small protostar evolves independently of the infalling gas, it's probably also big enough to leave a black hole, just a much smaller one: tens, maybe hundreds of solar masses. But it's a black hole embedded in this enormous cloud of infalling gas, which potentially settles into an envelope around the black hole. This structure has been dubbed a "quasi-star", and the black hole inside can grow very rapidly in this cocoon. Eventually, the envelope will evaporate/disperse, leaving the now massive black hole to continue to accrete its way to supermassiveness.



Note that these formation mechanisms are expected to be particular to the early Universe. Once you add even a small amount of metals to the gas, then star formation is expected to look much more like the "modern" Universe. In fact, star formation in these scenarios is still far from settled. The main reason is that you need to follow how the gas evolves from the scale of the protogalaxy all the way down to the protostars inside. This is a range of scales something like $1\,\mathrm{AU}/10{,}000\,\mathrm{ly}\approx10^{-9}$, which is numerically very difficult.



And, finally, to answer the question directly: in neither of the supermassive star options do the stars really collapse into supermassive black holes. SMBHs are so big that they can only have grown so large through accretion. The supermassive stars would collapse into the SMBHs' progenitors (or "seeds"), which would subsequently grow to such large masses.

Tuesday, 24 July 2012

nt.number theory - How do you calculate the group scheme of E[p] for an elliptic curve E in characteristic p?

Dear Maxmoo,



Just to offer a slightly different perspective than that given by Kevin and Brian:



While their advice is certainly correct, when I was learning this I also found it very
helpful to make a couple of "bare hands" computations, as a kind of reality check.



For this, begin with an elliptic curve in char. $2$, in fact with two, of the form:



$$y^2 + y = x^3$$



and



$$y^2 + x y = x^3 + x $$



One of these is supersingular, the other ordinary. (I won't tell you which here!)



Now try computing the $2$-torsion concretely, using lines passing through three points
and so on.



Remember that in the end you are looking for a degree $4$ equation (you may need to change
variables to see the point at infinity; this won't show up in the affine equations I've
given you). By general theory, you know this equation won't be separable: non-reduced
group scheme structure will show up concretely as inseparability in this polynomial.



In one case (the s.s. case) it will be entirely inseparable; in the other (ordinary) case
it will have inseparability degree $2$ (so "half" inseparable, "half" separable).



Once you've done the case of char. $2$, you might want to try char. $3$ as well (since
computing the equation for the 3-torsion is also just about in reach by hand).



The reason I suggest this is that I remember, when I was learning this stuff, that all
these group schemes (especially the non-reduced ones) seemed fairly ephemeral, but after
I had made these kind of explicit computations, I had a much more concrete mental model
for what the general theory was talking about, which gave me a lot more confidence in
reading and making arguments about these kinds of things.



Best wishes,



Matt

gravity - Gravitation - Pulling or Pushing force?

I'm guessing that this misunderstanding is a result of the oft-used rubber sheet analogy. The rubber sheet analogy says that, according to general relativity, mass curves space-time like a heavy bowling ball on a near-taut blanket (or rubber sheet) curves the blanket/sheet. This resulting curve makes other bits of matter/energy move in different ways. I'm guessing that this is your confusion.



The rubber sheet analogy fails massively in one area: any demonstration of it involves gravity on Earth. If I use a bowling ball to deform a sheet, and then roll a golf ball along the sheet nearby, the golf ball will move a bit towards the bowling ball because of the force of gravity around me - not "gravity" in the simulation. It thus makes it seem like gravity pulls the golf ball "down" because the bowling ball pulls the rubber sheet "down". This is the result of using a two-dimensional analogy for a three-dimensional universe.



The point is, there is no "pushing" going on in the general relativistic model of gravity. Gravity is attractive (so long as the strong energy condition holds for the object in question), just like Newton postulated.

Monday, 23 July 2012

What could be the utmost lowest temperature in the universe/multiverse?

What temperature means...



Temperature is a measure of the energy of particles. The higher the temperature, the more energized the particles. The more energy particles have, the faster they move around; it is the particles' kinetic energy that is rising. As it rises, the particles begin using up more space: moving particles need more room. In a closed system this can be measured as pressure, the stress of thermal expansion on the system. In an open system the matter will expand freely. As the particles increase their speed they also move more erratically, so the entropy, or measure of disorder, will also increase.



The coldest...



Now, understanding all that, what would the lowest temperature, or energy state, be in the universe? The answer is a state of no energy: 0 K, or absolute zero on the Kelvin scale. It is −273.15° on the Celsius scale and −459.67° on the Fahrenheit scale. At this temperature, which cannot be reached by thermodynamic means alone, the particles are completely still and the entropy drops to 0.



Temperature reference points...



  • The surface temperature of the sun is 5,778 K.

  • Water boils at 373 K.

  • Water freezes at 273.15 K.

  • The moon’s darkest craters that never receive sunlight are 33 K.

  • The cosmic microwave background fluctuates around 2.7 K.

  • The Boomerang Nebula, the coolest natural place currently
    known in the universe, measures 1 K.

  • Absolute zero is 0 K.

terminology - What is the term for astronomical objects outside the solar system that are smaller than dwarf planets?

I have to disagree with the accepted answer. The terms Planets, moons, asteroids, planetesimals are generic and are not implied to mean objects in the Solar system. So, these terms are completely correct when dealing with the generic objects.



When specifically emphasizing the fact that a certain object is not in the Solar system, then you may want to add 'exo' for disambiguation, but this should be reserved for this purpose only.



A planet is a planet no matter where. Same with a house, which is a house no matter whether it's in your home town or elsewhere on the Earth.




Note in edit: My answer is based on the practice amongst professional astronomers and not on what some dictionaries say. I reckon the latter are somewhat behind the times, from when all known planets, asteroids etc were those orbiting the Sun. Wikipedia is a great resource, but anybody can change its contents and, not surprisingly, some of its pages are quite biased.

Sunday, 22 July 2012

ag.algebraic geometry - Is every curve birational to a smooth affine plane curve?

Yes. Here is a proof.



It is classical that every curve is birational to a smooth one which in turn is birational to a closed curve $X$ in $\mathbb{C}^2$ with at most double points. Now my strategy is to choose coordinates such that, by an automorphism of $\mathbb{C}^2$, all the singular points lie on the $y$-axis avoiding the origin. Now the map $(x,y)\rightarrow(x,xy)$ from $\mathbb{C}^2$ to itself will do the trick of embedding the smooth part of $X$ in a closed manner. Below are the details.



The only thing we need to show is that the smooth locus of a closed curve $X\subset\mathbb{C}^2$ with only double points can again be embedded in the plane as a closed curve.



Step 1. Let $S$ be the set of singular points of $X$. Choose coordinates on $\mathbb{C}^2$ such that the projection of $X$ onto both the axes gives embeddings of $S$. Call the projection of $S$ on the $y$-axis $S'$. By sliding the $x$-axis a little bit we can make sure that $S'$ doesn't contain the origin of the plane. Now I claim that there is an automorphism of $\mathbb{C}^2$ which takes $S$ to $S'$. This is easy to construct by a Chinese remainder kind of argument: there is an isomorphism of the coordinate rings of $S$ and $S'$ and we need to lift this to an isomorphism of $\mathbb{C}[x,y]$. I will illustrate with an example where $\#S=3$. Let $(a_i,b_i)$ be the points in $S$. Then there exists a function $h(y)$ such that $h|_{S'}=x|_{S}$ as functions restricted to the sets $S'$ and $S$. Here is one recipe: $h(y)=c_1(\frac{y}{b_2}-b_2)(\frac{y}{b_3}-b_3)(y-b_1+1)+\dots$ where $c_1=a_1(\frac{b_1}{b_2}-b_2)^{-1}(\frac{b_1}{b_3}-b_3)^{-1}$ etc.



Look at the map $\phi:(x,y)\rightarrow(x-h(y),y)$ on $\mathbb{C}^2$. It is clearly an automorphism and takes the set $S$ to $S'$.
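As a quick illustration of Step 1 (my own sketch, not from the answer; the sample points and the use of ordinary Lagrange interpolation for $h$ are my choices), one can check with sympy that $\phi$ really moves the chosen points onto the $y$-axis:

```python
import sympy as sp

y = sp.symbols('y')

# Hypothetical set of double points S = {(a_i, b_i)}, with distinct nonzero b_i.
S = [(2, 1), (-1, 3), (5, -2)]

# Any polynomial h with h(b_i) = a_i works; plain Lagrange interpolation is one
# choice (the answer gives another explicit recipe).
h = sp.interpolate([(b, a) for (a, b) in S], y)

phi = lambda x0, y0: (sp.simplify(x0 - h.subs(y, y0)), y0)  # (x, y) -> (x - h(y), y)
print([phi(a, b) for (a, b) in S])  # every image has x-coordinate 0, i.e. lies on the y-axis
```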



Step 2. Now consider the map $\psi:(x,y)\rightarrow(x,xy)$ from the affine plane to itself. It is an easy check that $\psi^{-1}\circ\phi:X-S\rightarrow\mathbb{C}^2$ is a closed embedding.

dg.differential geometry - A comprehensive functor of points approach for manifolds

Here are two things that I think are relevant to the question.



First, I want to support Andrew's suggestion #5: synthetic differential geometry. This definitely constitutes a "yes" to your question




is there any sort of way to attack differential geometry with abstract nonsense?




--- assuming the usual interpretation of "abstract nonsense". It's also a "yes" to your question




Can we describe it as some subcategory of some nice grothendieck topos?




--- assuming that "it" is the category of manifolds and smooth maps. Indeed, you can make it a full subcategory.



Anders Kock has two nice books on synthetic differential geometry. There's also "A Primer of Infinitesimal Analysis" by John Bell, written for a much less sophisticated audience. And there's a brief chapter about it in Colin McLarty's book "Elementary Categories, Elementary Toposes", section 23.3 of which contains an outline of how to embed the category of manifolds into a Grothendieck topos.



Second, it's almost a categorical triviality that there is a full embedding of Mfd into the category Set${}^{U^{op}}$, where $U$ is the category of open subsets of Euclidean space and smooth embeddings between them.



The point is this: $U$ can be regarded as a subcategory of Mfd, and then every object of Mfd is a colimit of objects of $U$. This says, in casual language, that $U$ is a dense subcategory of Mfd. But by a standard result about density, this is equivalent to the statement that the canonical functor Mfd$\to$Set${}^{U^{op}}$ is full and faithful. So, Mfd is equivalent to a full subcategory of Set${}^{U^{op}}$.



There's a more relaxed explanation of that in section 10.2 of my book Higher Operads, Higher Categories, though I'm sure the observation isn't original to me.

ag.algebraic geometry - a question about Gromov-Witten invariant

To further qualify Charles's yes: that these moduli spaces are orbifolds instead of manifolds does result in rational numbers, but this is quite natural and not much of a problem. The orbifolds here arise because we're counting things that have automorphisms (here, for instance, the map from P^1 to P^1 given by the polynomial z^d has Z_d as its automorphisms: we can multiply a point in P^1 by a dth root of unity and not change where it maps to). Whenever you count things with automorphisms it's quite natural to count each thing weighted by 1/(the size of its automorphism group), or to rigidify the things we're counting by adding some kind of extra structure so they no longer have automorphisms.



As an example: Cayley's formula that there are n^(n-2) trees on n labeled vertices - the labeling of the vertices guarantees that the objects we're counting do not have automorphisms, and we get an integer - we've rigidified the problem. If we wanted to count the number of trees on n unlabeled vertices, the problem is much more difficult. However, if we weight each such tree by the inverse of its automorphism group, then the problem has a nice answer again: it's simply n^(n-2)/n!. My point is: the rationality is not the ugly part of what's going on.



The ugly part is that these moduli spaces of maps are not even orbifolds: they have much worse singularities, and can have different components of different dimension. From deformation theory, we expect these moduli spaces to have a certain dimension. To get a finite number, we put conditions on the map that cut this dimension down until it's zero. Geometrically, you should think of each of these conditions as a cycle on the moduli space, and we want to intersect them. Doing this intersection naively doesn't work when the space is singular, and furthermore the moduli space might be smooth but have a dimension different than what we were expecting. But a lot of hard work shows that these spaces have a "virtual fundamental class" of the dimension that we expect, and using this we can proceed as above to get a number. But in doing this, we've lost the sense in which we're counting something.



But it strikes me that perhaps that's not necessarily what the questioner was after; most typically this is done for smooth, projective varieties over C, but somehow the part that really matters is the symplectic structure: Gromov-Witten invariants can be defined for any symplectic manifold - they will all have almost complex structures J that "play nicely" with the symplectic form omega, and we're "counting" these maps. Or: all this works for orbifolds (which are really smooth objects), but not singular spaces.



The over $\mathbb{C}$ bit is pretty necessary, I think - people have looked a little at doing this in positive characteristic, but one big problem is that the orbifold stuff, which I was just telling you isn't really a problem, can be a big problem in positive characteristic if the orders of your automorphisms aren't coprime with the characteristic.

Saturday, 21 July 2012

ag.algebraic geometry - Is there a good notion of `Separated Stack'?

A scheme is separated if the diagonal inclusion $X \to X \times X$ is a closed immersion. I want to know if there is a good generalization of `separated' for algebraic stacks?



My usual stack reference, Anton Gerashchenko's stack notes, doesn't seem to provide an answer.



In a previous MO question several related notions came up. The most similar is quasi-separated where you require the diagonal to be quasi-compact. You can check wikipedia for some relevant algebraic geometry terminology. How does this compare to separatedness?



The main obstacle that I can see in defining separated for stacks is that the property of a map of schemes $X to Y$ being separated does not appear to be local in the target. Since maps between affines are separated, it seems that every map of schemes is locally separated. This means that we shouldn't expect the usual trick of replacing an algebraic stack by a scheme which covers it to work very well.

Friday, 20 July 2012

co.combinatorics - Combinatorial sequences whose ratios $a_{n+1}/a_{n}$ are integers.

Let $a_n$ be the largest power of 2 that divides $R_n$, the number of reduced Latin squares of order $n$. We know the value of $a_n$ for $n\leq 11$ (see this for example). The sequence begins $(1,1,1,2^2,2^3,2^6,2^{10},2^{17},2^{21},2^{28},2^{35},\ldots)$ for $n\geq 1$.



I wouldn't conjecture that $a_{n+1}/a_n$ is always an integer (although, it seems plausible). However, we do know that $a_{n+1}/a_n$ is an integer for $1\leq n\leq 10$.

at.algebraic topology - Infinity de Rham quasi-isomorphism

Yes.



Here is one way to see it:



before passing to dg-algebras, let's look at cosimplicial algebras and then later apply the normalized cochain (Moore) complex functor.



Work in a smooth (oo,1)-topos, modeled by simplicial presheaves on a site of smooth loci. In there, we have the following for every manifold $X$.



There is a canonical injection $X^{(\Delta^\bullet_{inf})} \to X^{\Delta^\bullet_{Diff}}$. We may take degreewise (internally, i.e. smoothly) functions on these, to get the cosimplicial algebras $[X^{\Delta^\bullet_{inf}},R]$ and $[X^{\Delta^\bullet_{Diff}},R]$.



The normalized cochain complex of chains on $[X^{\Delta^\bullet_{Diff}},R]$ is the complex of smooth singular cochains.



The normalized cochain complex of chains on $[X^{\Delta^\bullet_{inf}},R]$ turns out to be, by some propositions of Anders Kock, the de Rham algebra, as discussed a bit at differential forms in synthetic differential geometry.



Therefore under the ordinary Dold-Kan correspondence we have a canonical morphism



$$
N^\bullet\big([X^{\Delta^\bullet_{Diff}},R] \to [X^{\Delta^\bullet_{inf}},R]\big)
=
\Big( C^\bullet_{smooth}(X) \to \Omega_{dR}^\bullet(X) \Big)
$$



which is an equivalence of cochain complexes. But there is a refinement of the Dold-Kan correspondence the monoidal Dold-Kan correspondence. And this says that this functor is also a weak equivalence of oo-monoid objects.

soft question - Which math paper maximizes the ratio (importance)/(length)?

Any of three papers dealing with primality and factoring that are between 7 and 13 pages:



First place: Rivest, R.; A. Shamir; L. Adleman (1978). "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems". Communications of the ACM 21 (2): 120–126.



Runner-up: P. W. Shor, Algorithms for quantum computation: Discrete logarithms and factoring, Proc. 35th Annual Symposium on Foundations of Computer Science (Shafi Goldwasser, ed.), IEEE Computer Society Press (1994), 124-134.



Honorable mention: Manindra Agrawal, Neeraj Kayal, Nitin Saxena, "PRIMES is in P", Annals of Mathematics 160 (2004), no. 2, pp. 781–793.

Thursday, 19 July 2012

nt.number theory - Proof of "if a^2 + b^2 = c^2 then a*b*c is divisible by 60"

There is a very friendly discussion of how to derive your parameterization geometrically in the first chapter of Rational Points on Elliptic Curves by Silverman and Tate. The method they use is the one discussed in the wikipedia article that Rob H. linked to, but I feel that the exposition in Silverman and Tate's book is considerably better.



The basic idea is as follows. Suppose that you are given integers $(x,y,z)$, not all zero, such that $x^2+y^2=z^2$. Then dividing by $z^2$ gives you rational numbers $(\frac{x}{z},\frac{y}{z})$ which solve the equation $x^2+y^2=1$. So we have a rational point on the unit circle. Conversely, if we are given a rational point on the unit circle, then we can clear denominators to get an integral solution to $x^2+y^2=z^2$. So the problem of parameterizing the integer solutions to your equation is equivalent to the parameterization of the rational points on the unit circle. Silverman and Tate spend the first chapter of their book explaining, in a very down to earth and readable way, how one goes about parameterizing the rational points on the circle. In fact, they do much more. They show how to parameterize the rational points on any conic having rational points (that is, on any equation $ax^2+bxy+cy^2+dx+ey+f=0$ having at least one rational solution $(x_0,y_0)$).
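For reference, here is the parameterization that comes out of that procedure (a standard fact; the details below are my addition rather than a quotation from Silverman and Tate). Projecting from the rational point $(-1,0)$, every other rational point of the unit circle has the form

$$
\left( \frac{1-t^2}{1+t^2},\ \frac{2t}{1+t^2} \right), \qquad t \in \mathbb{Q},
$$

and writing $t = n/m$ in lowest terms and clearing denominators gives the familiar triples

$$
x = m^2 - n^2, \qquad y = 2mn, \qquad z = m^2 + n^2,
$$

from which divisibility statements such as $60 \mid xyz$ can then be checked case by case.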

Wednesday, 18 July 2012

big list - What are some applications of other fields to mathematics?

I can think of at least three things that the question might mean, and it would probably help if Steve clarified which ones count for him!



(1) Other fields suggesting new questions for mathematicians to think about, or new conjectures for them to prove. Examples of that sort are ubiquitous, and account for a significant fraction of all of mathematics! (Archimedes, Newton, and Gauss all looked to physics for inspiration; many of the 20th-century greats looked to biology, economics, computer science, etc. Even for those mathematicians who take pride in taking as little inspiration as possible from the physical world, it's arguable how well they succeed at it.)



(2) Other fields helping the process of mathematical research. Computers are an obvious example, but I gather that this sort of application isn't what Steve has in mind.



(3) Other fields leading to new or better proofs, for theorems that mathematicians care about even independently of the other fields. This seems to me like the most interesting interpretation. But it raises an obvious question: if a field is leading to new proofs of important theorems, why shouldn't we call that field mathematics? One way out of this definitional morass is the following: normally, one thinks of mathematics as arranged in a tree, with logic and set theory at the root, "applied" fields like information theory or mathematical physics at the leaves, and everything else (algebra, analysis, geometry, topology) as trunks or branches. Definitions and results from the lower levels get used at the higher levels, but not vice versa. From this perspective, what the question is really asking for is examples of "unexpected inversions," where ideas from higher in the tree (and specifically, from the "applied" leaves) are used to prove theorems lower in the tree.



Such inversions certainly exist, and lots of people probably have favorite examples of them --- so it does seem like great fodder for a "big list" question. At the risk of violating Steve's "no theoretical computer science" rule, here are some of my personal favorites:



(i) Grover's quantum search algorithm immediately implies that Markov's inequality, that



$\max_{x \in [-1,1]} |p'(x)| \leq d^2 \max_{x \in [-1,1]} |p(x)|$



for all degree-d real polynomials p, is tight.
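As a quick numerical illustration of (i) (my addition, not part of the answer), the Chebyshev polynomial $T_d$ attains the $d^2$ bound:

```python
import numpy as np
from numpy.polynomial import Chebyshev

d = 7
Td = Chebyshev.basis(d)            # Chebyshev polynomial T_d; max |T_d| = 1 on [-1, 1]
x = np.linspace(-1, 1, 200001)     # includes the endpoints, where |T_d'| is largest
print(np.max(np.abs(Td(x))), np.max(np.abs(Td.deriv()(x))), d**2)  # ~1.0, ~49.0, 49
```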



(ii) Kolmogorov complexity is often useful for proving statements that have nothing to do with Turing machines or computability.



(iii) The quantum-mechanical rules for identical bosons immediately imply that |Per(U)|≤1 for every unitary matrix U.

Tuesday, 17 July 2012

homological algebra - A proof of the salamander lemma without Mitchell's embedding theorem?

The salamander lemma is a lemma in homological algebra from which a number of theorems quickly drop out, some of the more famous ones include the snake lemma, the five lemma, the sharp 3x3 lemma (generalized nine lemma), etc. However, the only proof I've ever seen of this lemma is by a diagram chase after reducing to R-mod by using Mitchell's embedding theorem. Is there an elementary proof of this lemma by universal properties in an abelian category (I don't know if we can weaken the requirements past an abelian category)?



If you haven't heard of the salamander lemma, here's the relevant paper.



And here's an article on it by our gracious administrator, Anton Geraschenko: Click!



Also, small side question, but does anyone know a good place to find some worked-out diagram-theoretic proofs that don't use Mitchell and prove everything by universal property? It's not that I have anything against doing it that way (it's certainly much faster), but I'd be interested to see some proofs done without it, just working from the axioms and universal properties.



PLEASE NOTE THE EDIT BELOW



EDIT: Jonathan Wise posted an edit to his answer where he provided a great proof for the original question (doesn't use any hint of elements!). I noticed that he's only gotten four votes for the answer, so I figured I'd just bring it to everyone's attention, since I didn't know that he'd even added this answer until yesterday. The problem is that he put his edit notice in the middle of the text without bolding it, so I missed it entirely (presumably, so did most other people).

ct.category theory - Finding the codomain of a monoid homomorphism

As has been stated, $M \rightarrow G$ factors via the group completion $\hat{M}$. Furthermore, since $G$ is abelian, the map $\hat{M} \rightarrow G$ factors via the abelianization $\hat{M}/[\hat{M},\hat{M}]$ of $\hat{M}$. Since the composite of the maps $M \rightarrow \hat{M} \rightarrow \hat{M}/[\hat{M},\hat{M}]$ is already known, $f$ is uniquely determined by the map $\hat{M}/[\hat{M},\hat{M}] \rightarrow G$, so we may assume $M$ to be an abelian group.



Since $f = 0$ gives us no information, we may assume that $f$ is onto. Suppose we have a $G$ so that $f$ factors via it, and let us write $f = pg$, where $g \colon M \rightarrow G$ and $p \colon G \rightarrow \{0,1\}$.



By $f = pg$ we must necessarily have $\ker(g) \subset \ker(f)$, so that $g$ is isomorphic to the canonical projection $M \rightarrow M/N$, where $N$ is a subgroup of $\ker(f)$. In addition, $|G|$ must be even. (If $f$ is onto, $p$ must take on the values $0$ and $1$ equally, since $|G| = 2|\ker(p)|$.)



I claim this is all $f$ tells you. Given $f$, we've just shown that we must have that $g$ is isomorphic to modding out by a subgroup of $\ker(f)$, and that $|G|$ is even.



Conversely, given any map $g \colon M \rightarrow G$, whose kernel is a subgroup of $\ker(f)$, and such that $|G|$ is even, then the image of $\ker(f)$ will be of index 2 in $G$, so we can just compose this map with the map that mods out the image of $\ker(f)$ in $G$.



Edit: Oops, $G \rightarrow \{0,1\}$ is only a set map! Well then, let me at least try to contribute to the discussion! In this case, we can still assume $M$ to be an abelian group. If you can recover $G$ (assuming $G = M$ is not allowed), then certainly the kernel of $M \rightarrow G$ cannot have any subgroups, hence must be a cyclic group of order $p$. In the case where $M = \mathbb{Z}_+$, then $\hat{M} = \mathbb{Q}_+$ which is torsion free, so no maps $f$ exist which allow you to recover $G$ entirely, unless we allow $G = \hat{M}$.

Finite-dimensional subalgebras of $C^\star$-algebras

Let $A$ be a unital $C^\star$-algebra and let $a_1,\dots,a_n$ be a finite list of normal elements in $A$ which (together with their adjoints) generate a norm-dense $\star$-subalgebra $B \subset A$. Clearly, if $A$ is finite-dimensional, then every element in $A$ (and hence $B$) has finite spectrum. I am asking for the converse.




Question: Assume that each element of $B$ has finite spectrum. Is it true that $A$ has to be finite-dimensional?




The existence of finitely-generated infinite torsion groups shows that this might be a highly non-trivial problem. In this case one would consider the reduced group $C^\star$-algebra and note that all monomials in the generators of the group and their inverses (which are equal to the adjoints of the generators) would have finite spectrum. However, the generated algebra would still be infinite-dimensional. I do not know of any simpler way to come up with such an example. In this case it is conceivable that the random-walk operator associated with the generating set (which is an element in the real group ring) has infinite spectrum, even though I did not prove this.



Maybe there is also a need to consider the spectra of elements in matrices over $B$ (which are of course finite if $A$ is finite-dimensional). In view of this, I am not only asking for an answer to the question but also for the right question (or a better one) if the answer to the original question is negative.



A stronger assumption would be to assume that $A$ itself consists only of elements with finite spectrum. This case seems much easier to approach and the answer seems to be positive. In fact every infinite-dimensional $C^\star$-algebra should contain an element with infinite spectrum.



Just to get started, a more concrete instance of the question above is:




Question: Let $p_1, p_2$ and $p_3$ be three projections in a $C^*$-algebra with the property that every (non-commutative) polynomial in $p_1, p_2$ and $p_3$ has finite spectrum. Is it true that the projections generate a finite-dimensional algebra?


soft question - Fundamental Examples

It is not unusual that a single example or a very few shape an entire mathematical discipline. Can you give examples for such examples? (One example, or few, per post, please)



I'd love to learn about further basic or central examples and I think such examples serve as good invitations to various areas. (Which is why a bounty was offered.)




Related MO questions: What-are-your-favorite-instructional-counterexamples,
Cannonical examples of algebraic structures, Counterexamples-in-algebra, individual-mathematical-objects-whose-study-amounts-to-a-subdiscipline, most-intricate-and-most-beautiful-structures-in-mathematics, counterexamples-in-algebraic-topology, algebraic-geometry-examples, what-could-be-some-potentially-useful-mathematical-databases, what-is-your-favorite-strange-function; Examples of eventual counterexamples ;




To make this question and the various examples a more useful source there is a designated answer to point out connections between the various examples we collected.




In order to make it a more useful source, I list all the answers in categories, and added (for most) a date and (for 2/5) a link to the answer which often offers more details. (~year means approximate year, *year means a year when an older example becomes central in view of some discovery, year? means that I am not sure if this is the correct year and ? means that I do not know the date. Please edit and correct.) Of course, if you see some important example missing, add it!



Logic and foundations: $\aleph_\omega$ (~1890), Russell's paradox (1901),
Halting problem (1936), Goedel constructible universe L (1938), McKinsey formula in modal logic (~1941), 3SAT (*1970), The theory of Algebraically closed fields (ACF) (?),



Physics: Brachistochrone problem (1696), Ising model (1925), The harmonic oscillator,(?) Dirac's delta function (1927), Heisenberg model of 1-D chain of spin 1/2 atoms, (~1928), Feynman path integral (1948),



Real and Complex Analysis: Harmonic series (14th Cen.) {and Riemann zeta function (1859)}, the Gamma function (1720), li(x), The elliptic integral that launched Riemann surfaces (*1854?), Chebyshev polynomials (?1854) punctured open set in C^n (Hartog's theorem *1906 ?)



Partial differential equations: Laplace equation (1773), the heat equation, wave equation, Navier-Stokes equation (1822), KdV equations (1877),



Functional analysis: Unilateral shift, The spaces $\ell_p$, $L_p$ and $C(k)$, Tsirelson spaces (1974), Cuntz algebra,



Algebra: Polynomials (ancient?), Z (ancient?) and Z/6Z (Middle Ages?), symmetric and alternating groups (*1832), Gaussian integers ($Z[\sqrt{-1}]$) (1832), $Z[\sqrt{-5}]$, $su_3$ ($su_2$), full matrix ring over a ring, $\operatorname{SL}_2(\mathbb{Z})$ and SU(2), quaternions (1843), p-adic numbers (1897), Young tableaux (1900) and Schur polynomials, cyclotomic fields, Hopf algebras (1941), Fischer-Griess monster (1973), Heisenberg group, ADE-classification (and Dynkin diagrams), Prufer p-groups,



Number Theory: conics and pythagorean triples (ancient), Fermat equation (1637), Riemann zeta function (1859) elliptic curves, transcendental numbers, Fermat hypersurfaces,



Probability: Normal distribution (1733), Brownian motion (1827), The percolation model (1957), The Gaussian Orthogonal Ensemble, the Gaussian Unitary Ensemble, and the Gaussian Symplectic Ensemble, SLE (1999),



Dynamics: Logistic map (1845?), Smale's horseshoe map(1960). Mandelbrot set (1978/80) (Julia set), cat map, (Anosov diffeomorphism)



Geometry: Platonic solids (ancient), the Euclidean ball (ancient), The configuration of 27 lines on a cubic surface, The configurations of Desargues and Pappus, construction of regular heptadecagon (*1796), Hyperbolic geometry (1830), Reuleaux triangle (19th century), Fano plane (early 20th century ??), cyclic polytopes (1902), Delaunay triangulation (1934) Leech lattice (1965), Penrose tiling (1974), noncommutative torus, cone of positive semidefinite matrices, the associahedron (1961)



Topology: Spheres, Figure-eight knot (ancient), trefoil knot (ancient?) (Borromean rings (ancient?)), the torus (ancient?), Mobius strip (1858), Cantor set (1883), Projective spaces (complex, real, quaternionic..), Poincare dodecahedral sphere (1904), Homotopy group of spheres, Alexander polynomial (1923), Hopf fibration (1931), The standard embedding of the torus in R^3 (*1934 in Morse theory), pseudo-arcs (1948), Discrete metric spaces, Sorgenfrey line, Complex projective space, the cotangent bundle (?), The Grassmannian variety, homotopy group of spheres (*1951), Milnor exotic spheres (1965)



Graph theory: The seven bridges of Koenigsberg (1735), Petersen Graph (1886), two edge-colorings of K_6 (Ramsey's theorem 1930), K_33 and K_5 (Kuratowski's theorem 1930), Tutte graph (1946), Margulis's expanders (1973) and Ramanujan graphs (1986),



Combinatorics: tic-tac-toe (ancient Egypt(?)) (The game of nim (ancient China(?))), Pascal's triangle (China and Europe 17th), Catalan numbers (18th century), (Fibonacci sequence (12th century; probably ancient), Kirkman's schoolgirl problem (1850), surreal numbers (1969), alternating sign matrices (1982)



Algorithms and Computer Science: Newton Raphson method (17th century), Turing machine (1937), RSA (1977), universal quantum computer (1985)



Social Science: Prisoner's dilemma (1950) (and also the chicken game, chain store game, and centipede game), the model of exchange economy, second price auction (1961)



Statistics: the Lady Tasting Tea (?1920), Agricultural Field Experiments (Randomized Block Design, Analysis of Variance) (?1920), Neyman-Pearson lemma (?1930), Decision Theory (?1940), the Likelihood Function (?1920), Bootstrapping (?1975)

Saturday, 14 July 2012

linear algebra - bounded homogeneous quartics

If $Q$ is a real homogeneous quartic on $\mathbb{R}^N$,



$Q(x) = \sum_{1 \le i,j,k,l \le N} Q_{ijkl}\, x_i x_j x_k x_l$



what is the condition on the (totally symmetric) coefficients $Q_{ijkl}$ for $Q$ to be bounded from below? I'm looking for the simplest expression in terms of $Q_{ijkl}$. Clearly, it is enough that $Q_{ijkl}$, considered as a map from the space of real symmetric matrices to itself, is positive semi-definite. But this condition is too strong, because $x_i x_j$ is a rank-1 real symmetric matrix, so in $Q(x)$ the map is only evaluated on rank-1 matrices, not on every real symmetric matrix.
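
For concreteness, here is a small numerical sketch (my own illustration, not part of the question) that flattens $Q_{ijkl}$ into an $N^2 \times N^2$ matrix and tests the sufficient, but too strong, positive semi-definiteness condition on an example quartic that is clearly bounded from below:

import numpy as np
from itertools import permutations

def symmetrise(Q):
    """Average over all 24 index permutations to get totally symmetric coefficients."""
    return sum(Q.transpose(p) for p in permutations(range(4))) / 24.0

def quartic(Q, x):
    """Evaluate Q(x) = sum_{ijkl} Q_{ijkl} x_i x_j x_k x_l."""
    return np.einsum('ijkl,i,j,k,l->', Q, x, x, x, x)

def psd_sufficient(Q, tol=1e-10):
    """Sufficient (but too strong) test: the N^2 x N^2 flattening of Q_{ijkl},
    acting on all symmetric matrices rather than only rank-1 ones, is PSD."""
    N = Q.shape[0]
    M = Q.reshape(N * N, N * N)
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

N = 3
I = np.eye(N)
Q = symmetrise(np.einsum('ij,kl->ijkl', I, I))   # Q(x) = (x.x)^2, bounded below
print(psd_sufficient(Q), quartic(Q, np.array([1.0, -2.0, 0.5])))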

fa.functional analysis - Direct integrals and fields of operators

Suppose we have a measure space $(X,\mu)$ and a measurable field of Hilbert spaces $H_x$ on it. We can form the direct integral $\mathcal{H} = \int H_x \, d\mu$, which is a Hilbert space.



Suppose now that I have a bounded operator $T$ on $\mathcal{H}$, about which I know that it is decomposable.



Do you know of any kind of a "formula" which will "compute" a measurable field of operators $T_x$ such that $\int T_x \, d\mu = T$?

soft question - Research Experience for Undergraduates: Summer Programs

Some time ago, I found this list of REU programs held in 2009.



The main aspects that characterize such programs are: (a) a great deal of lectures on specific topics; and, admittedly more importantly, (b) the chance to gain some hands-on experience with research projects.



I think that these programs are extremely interesting and are precious opportunities for undergraduates to gain a deeper understanding of specific mathematical topics as well as of the "work of the mathematical researcher".



One should note, however, that most of these programs (if not all of them) are not open to European citizens (or, at least, in general non-American applicants do not receive funding).




Q: So, I would really like to hear if you know of any similar programs. More specifically, I would like to know whether there are any such programs outside the U.S. (or any programs in the U.S. that also accept non-American applicants).





Remark 1: A similar question was asked on Mathematics.



Remark 2: Both questions have been updated in 2015. It would be nice to receive some answers which are up-to-date.

planet - Equations for coordinates of solar system objects

It depends a bit on how precise you would want to be. A very good discussion on how to calculate the orbits of solar system objects is given in the book by Jean Meeus, Astronomical Algorithms (1999), which is at an advanced amateur level. At professional level you have the Explanatory Supplement to the Astronomical Almanac by Urban and Seidelmann.



For precise calculations Meeus uses the VSOP theory from Bretagnon (1987) (wikipedia link). You can download all files needed for these calculations from VizieR. These files contain a lot of numerical terms that are needed in the equations. The resulting positions are however very precise.



For the highest accuracy, you can download the predicted positions from the Jet Propulsion Laboratory (JPL) Horizon system.



There are also quite a few software library projects that implement the equations from Meeus. This might be the best option. Then you do not have to implement the equations yourself. For C/C++ you have for instance LibNova.



For highest precision you have professional software libraries such as for instance the NOVAS libraries from the Naval Observatory for Fortran, C, or Python. But to use that correctly you will have to have a good understanding of celestial mechanics.
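
For positions without implementing the equations yourself, another option (my suggestion, not mentioned above) is the Python library astropy, which wraps either a built-in analytic ephemeris or the JPL ephemerides. A minimal sketch:

from astropy.time import Time
from astropy.coordinates import get_body, solar_system_ephemeris

# Geocentric RA/Dec of Mars at a given instant, using astropy's built-in
# (approximate) ephemeris; swap in "jpl" for higher precision (needs a download).
t = Time("2012-07-30 00:00:00")
with solar_system_ephemeris.set("builtin"):
    mars = get_body("mars", t)

print(mars.ra.deg, mars.dec.deg)   # right ascension and declination in degrees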

dg.differential geometry - When is a Riemannian manifold an open subset of a complete one?

This isn't an answer; it's a conjecture. Nice question.



Suppose that $M$ and $N$ are Riemannian manifolds, $M\subset N$ is an open subset, and $N$ is complete. Let's assume that $M$ is path connected, so that there is no funny business in defining the distance between $p, q \in M$ to be the infimum of the lengths of paths joining $p$ to $q$. Also let's assume that the path metric is bounded, so you don't have infinite ends.



There is a map from the metric space completion of $M$ into $N$ and its image will be the closure of $M$ in $N$. There is now a plethora
of obstructions to the embedding, derived from this map.



For instance:
Let $CI(\overline{M})$ be those continuous functions on the metric space completion of $M$ whose restriction to $M$ is smooth. Let $I$ be the ideal of all functions in $CI(\overline{M})$ that vanish at a point $p$ of the completion. It should be the case that $T=(I/I^2)^*$ is isomorphic to $\mathbb{R}^n$, where $n$ is the dimension of the manifold. Next, the metric tensor should extend to the completion, where you interpret it at a point at infinity as a tensor on $T$, and its coefficients should be elements of $CI(\overline{M})$. Next, you should be able to extend the Riemann curvature tensor appropriately as a map from the tensor square of $T$ to itself; the coefficients of the extension should also be in $CI(\overline{M})$, and they should satisfy all the restrictions that the Riemann curvature tensor of a smooth manifold satisfies.



Here is my conjecture : The condition above is necessary and sufficient. The reason is you should be able to build a candidate piece of the manifold $N$ with normal coordinates, and those normal coordinate patches should glue together coherently.

Friday, 13 July 2012

core - Could evaporating hot Jupiters have metallic hydrogen on their surfaces?

Metallic hydrogen is an odd substance. When you push hydrogen atoms very close together, their electrons can come free, and move around, instead of being tightly bound to the atomic nuclei. As this form of hydrogen would conduct electricity, it behaves like a metal. At least this is the theory. Nobody has been able to produce enough pressure to actually make any metallic hydrogen in the lab.



However, the pressure inside Jupiter should be high enough to form metallic hydrogen. In extrasolar planets there could be large amounts of metallic hydrogen too.



However as soon as you release the pressure from metallic hydrogen, it turns back to normal molecular hydrogen. So it could not exist on the surface of a "hot jupiter", even one from which the outer layers had been stripped away by the solar wind. The metallic hydrogen that had been in the interior would change back into molecular hydrogen as it approached the surface.

Thursday, 12 July 2012

co.combinatorics - Algorithmic aspects of maximizing a convex function over a convex set

Motivation



The problem I am facing can be considered a variant of the standard set packing problem. However, instead of being given a list of sets, I am given a function $\nu : 2^N \rightarrow \{0,1\}$ and want to find a partition $P$ of $N$ that maximizes $g(P) = \sum_{S \in P} \nu(S)$. This can be shown to require somewhere between $O(2^{|N|})$ and $O(3^{|N|})$ operations.



The above problem can (almost) be reduced to the problem of finding a partition $P_3$ of $N$ into three sets that maximizes $g(P_3)$. However, there are still $O(3^{|N|})$ partitions of $N$ into three sets.



Let's say we construct such a 3-partition $(S_1,S_2,S_3)$ as follows: for each element $i \in N$, we add $i$ to the first set with probability $x_i$, to the second set with probability $y_i$, and to the third with probability $z_i$, where $x_i+y_i+z_i = 1$ and $0 \leq x_i, y_i, z_i$.



It can be shown that the expected value of $g$ under such a probability distribution $(x,y,z)$ over the 3-partitions of $N$ is $f(x,y,z) = E[g(P)] = \sum_{C \subseteq N} \nu(C)\left[\prod_{i \in C} x_i \prod_{i \notin C} (y_i+z_i) + \prod_{i \in C} y_i \prod_{i \notin C} (x_i+z_i) + \prod_{i \in C} z_i \prod_{i \notin C} (x_i+y_i)\right]$.
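
As a sanity check on this formula, here is a brute-force sketch (my own illustration; the set function $\nu$, written nu below, is a hypothetical placeholder) that evaluates $f(x,y,z)$ by summing directly over all subsets $C \subseteq N$:

from itertools import chain, combinations

def expected_value(nu, N, x, y, z):
    """Compute f(x,y,z) = E[g(P)] by brute-force summation over all C subset of N.
    x, y, z map each element i of N to its assignment probability."""
    def prod(vals):
        p = 1.0
        for v in vals:
            p *= v
        return p
    total = 0.0
    all_subsets = chain.from_iterable(combinations(N, r) for r in range(len(N) + 1))
    for C in all_subsets:
        Cset = set(C)
        rest = [i for i in N if i not in Cset]
        total += nu(Cset) * (
            prod(x[i] for i in Cset) * prod(y[i] + z[i] for i in rest)
            + prod(y[i] for i in Cset) * prod(x[i] + z[i] for i in rest)
            + prod(z[i] for i in Cset) * prod(x[i] + y[i] for i in rest))
    return total

# Hypothetical example: nu(S) = 1 iff |S| = 2, uniform probabilities 1/3.
N = [0, 1, 2, 3]
nu = lambda S: 1.0 if len(S) == 2 else 0.0
x = {i: 1/3 for i in N}; y = {i: 1/3 for i in N}; z = {i: 1/3 for i in N}
print(expected_value(nu, N, x, y, z))

This takes $O(2^{|N|})$ evaluations of $\nu$, so it only checks the formula; it says nothing about the interior-point idea below.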



I am considering the situation in which we relax the constraint $x_i+y_i+z_i = 1$ to $x_i+y_i+z_i \leq 1$ and then use techniques akin to interior point methods from standard convex programming.



This relaxation clearly does not change the maximum of $f(x,y,z)$, and with it in place $f(x,y,z)$ can be shown to be convex over our feasible set $0 \leq x_i,y_i,z_i$ and $x_i+y_i+z_i \leq 1$ for $i \in N$.




Given the above, my question is: Is there any general theory for maximizing convex functions over convex compact sets? (Apart from the fact that the maximum must be attained on the boundary?)



First time poster, so my apologies if I have tagged this inappropriately.
I know of much work in convex programming (minimizing convex functions over convex sets) but haven't been able to find similar work for maximization.

galaxy - How long until we cannot see any stars from other galaxies?

First of all, we can't see individual stars from other galaxies (with a few exceptions: Cepheid variable stars, for example, are regularly used to determine distances to nearby galaxies). As it currently stands, we can only see stars from our own Milky Way (in which I'm including the Large and Small Magellanic Clouds). Type Ia supernovae aside, it might be possible to see Cepheid variables in the Andromeda galaxy, but anything farther is almost certainly not possible.



The relevant number when it comes to the distance one could possibly see in the universe is the comoving distance to the cosmological horizon. This horizon defines the boundary between what can be seen and what cannot be seen, simply because the universe is not old enough for particles to have traveled that far (yes, even at the speed of light; in fact, if we're talking about what we can observe, the photon is the particle we care about).



The definition of the comoving distance is:



$$\chi = \frac{c}{H_{0}} \int_{z=0}^{z=z_{Hor}} \frac{dz'}{E(z')}$$



where the function $E(z')$ contains your choice of cosmology (the following is for flat universes only):



$$E(z') = \sqrt{ \Omega_{\gamma} (1+z)^{4} + \Omega_{m} (1+z)^{3} + \Omega_{\Lambda} }$$



This distance can be extrapolated out into the future (given your choice of cosmology), and ultimately will tell you what the distance to the cosmological event horizon will asymptotically reach. This is the distance beyond which no object could ever come into causal contact with you.
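
A minimal numerical sketch of the comoving-distance integral (my own illustration, assuming flat $\Lambda$CDM with fiducial values $H_0 = 70$ km/s/Mpc, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$, and negligible radiation):

import numpy as np
from scipy.integrate import quad

H0 = 70.0                                   # km/s/Mpc
c = 299792.458                              # km/s
Omega_m, Omega_L, Omega_r = 0.3, 0.7, 0.0   # flat LCDM, radiation neglected

def E(z):
    return np.sqrt(Omega_r * (1 + z)**4 + Omega_m * (1 + z)**3 + Omega_L)

def comoving_distance(z_max):
    """chi = (c/H0) * integral_0^{z_max} dz / E(z), in Mpc."""
    integral, _ = quad(lambda z: 1.0 / E(z), 0.0, z_max)
    return (c / H0) * integral

print(comoving_distance(1100))   # roughly the comoving distance to the CMB, ~14,000 Mpc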



Another simpler way to calculate the particle horizon is to calculate the conformal time that has passed from the beginning of the universe, and multiply it by the speed of light $c$, where the conformal time is defined in the following way:



$$\eta = \int_{0}^{t} \frac{dt'}{a(t')}$$



where $a(t')$ is the scale factor of the universe, and its relationship to time depends specifically on your choice of cosmology. Again, if the ultimate value of this horizon is desired, you would need to integrate this to infinity.



SUMMARY: I'm having a hard time finding the exact number for the comoving size of the cosmological event horizon, though I'll keep searching. If I can't find it, I suppose I'll have to do the calculation when I have some free time. Almost certainly it is larger than the size of the Local Group of galaxies, making our night sky largely safe from the fate of the universe. Anything farther than this horizon, however, will monotonically become both fainter and redder. I also think that it does asymptotically reach some value, rather than eventually increasing without bound or decreasing past some point in time. I'll have to get back to you on that.



Here's a nice review of distance measurements in cosmology.

ag.algebraic geometry - Upper bound on greatest prime of bad reduction for a plane curve

The primes that are "bad" in your sense will divide the number $Res_x(Res_y(f,frac{partial f}{partial x}), Res_y(f,frac{partial f}{partial y}))$. (If I interpreted damiano's comment correctly).



All that is left is to bound this number. So:



Let $M := \max(|a_{ij}|)$.



$\|\mathrm{Res}_y(f,\frac{\partial f}{\partial x})\|$ and $\|\mathrm{Res}_y(f,\frac{\partial f}{\partial y})\|$ are $< (2d)!\,M^{2d}$



$\Rightarrow \|\mathrm{Res}_x(\mathrm{Res}_y(f,\frac{\partial f}{\partial x}), \mathrm{Res}_y(f,\frac{\partial f}{\partial y}))\| < (2d^2)^{2d^2}\left((2d)!\,M^{2d}\right)^{2d^2} \ll (dM)^{4d^3+O(d^2)}$



So pick a random prime $p$ larger than this and then compute $\gcd(\mathrm{Res}_y(f,\frac{\partial f}{\partial x}), \mathrm{Res}_y(f,\frac{\partial f}{\partial y}))$ in $\mathbb{F}_p$. The complexity is $O(\mathrm{poly}(d)\times \mathrm{poly}(\log M))$. Is this better than Gröbner computations over $\mathbb{Q}$? I have no idea...
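
To make this concrete, here is a small SymPy sketch (my own illustration, for a hypothetical example curve) that computes the iterated resultant and factors it to read off the candidate bad primes:

from sympy import symbols, diff, resultant, factorint

x, y = symbols('x y')
f = y**2 - x**3 - 1                 # hypothetical example curve

Rx = resultant(f, diff(f, x), y)    # Res_y(f, df/dx)
Ry = resultant(f, diff(f, y), y)    # Res_y(f, df/dy)
R = resultant(Rx, Ry, x)            # an integer (up to sign)

print(R, factorint(abs(R)))         # primes dividing R are the candidate bad primes

For this curve ($y^2 = x^3 + 1$) the output should factor into powers of 2 and 3, consistent with the curve's bad reduction at 2 and 3.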

Tuesday, 10 July 2012

convex polytopes - Efficiently sampling points from an integer lattice.

Let $\mathcal{L} = \{x \in \mathbb{N}^n : \|x\|_1 \leq m\}$ denote the set of integer points in the positive orthant of the $\ell_1$ ball of radius $m$, where $m < n$. Let $w : \mathcal{L} \rightarrow \mathbb{R}^+$ denote an efficiently computable weighting function. Let $\mathcal{D}$ define a probability distribution over $\mathcal{L}$ that selects each element of $\mathcal{L}$ with probability proportional to its weight:
$$\Pr_\mathcal{D}[x] = \frac{w(x)}{\sum_{y\in\mathcal{L}}w(y)}$$



I would like to (approximately) sample from this distribution efficiently, where efficiently means in time polynomial in n, the dimension of the space. The algorithm may query the weight function $w$ a polynomial number of times. In general, this is hard, but I know two additional facts about the weighting function that I suspect make the problem tractable.



1) The weight function is convex: in particular, for any $C$, the set of points with weight at least $C$ lies inside some convex polytope.



2) The weight function is Lipschitz: for any $x,y \in \mathcal{L}$ with $\|x-y\|_1 \leq 1$, $|w(x) - w(y)| \leq \mathrm{poly}(n)$.



Is there a known method that would allow efficient sampling from this distribution?
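
For what it's worth, here is a brute-force sketch (my own illustration, with a hypothetical weight function) that materialises $\mathcal{D}$ exactly; it is exponential in $n$, so it only illustrates the target distribution, not the efficient sampler being asked for:

import random
from itertools import product

def lattice_points(n, m):
    """All x in N^n with ||x||_1 <= m."""
    return [x for x in product(range(m + 1), repeat=n) if sum(x) <= m]

def sample_exact(w, n, m, num_samples=5):
    points = lattice_points(n, m)
    weights = [w(x) for x in points]          # one call to w per lattice point
    return random.choices(points, weights=weights, k=num_samples)

# Hypothetical weight function, just for illustration.
w = lambda x: 2.0 ** (-sum(xi * xi for xi in x))
print(sample_exact(w, n=3, m=2))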

naked eye - Seconds of Arc and the Unaided Eye

I've been presented with this problem:




Say that Jupiter, with its diameter of 142,000 km, was located where Mars now orbits. What would be the angular size of (the newly-relocated) Jupiter during a close approach, when its distance would be 79,300,000 km? Would we be able to see Jupiter as a round object with our unaided eye, or only as a point of light?




The angular size is easy enough to calculate given $\frac{\text{Angular Size}}{206{,}000} = \frac{\text{Linear Size}}{\text{Distance}}$.
So $\frac{\text{Angular Size}}{206{,}000} = \frac{142000}{79300000}$, meaning Angular Size $\approx 369$ seconds of arc.
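
A quick check of that arithmetic (a minimal sketch; 206,000 is the rounded arcseconds-per-radian factor used above):

diameter_km = 142000
distance_km = 79300000

# small-angle formula: angular size in arcseconds = 206,000 * (diameter / distance)
arcsec = 206000 * diameter_km / distance_km
print(round(arcsec), round(arcsec / 60, 2))   # about 369 arcseconds, i.e. 6.15 arcminutes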



However, would the unaided eye be able to see this? Knowing that the eye can resolve about 1 minute of arc, surely it can detect 6.15 arcminutes? Any help appreciated, thanks!

Monday, 9 July 2012

What would we not know without spectroscopy?

What do we depend on spectroscopy to know about in astrophysics? Or, without spectroscopy, what would remain unknown to us? I want to know how important it is.



My wife is studying spectroscopy in chemistry, and we began to discuss it and I mentioned that spectroscopy gives us the composition of heavenly bodies, as well as their radial velocities. From this we can infer much about the origins of the universe, e.g. hydrogen-rich stars are older and the universe is expanding. Are there other ways we could know about these phenomena without spectroscopy? Are there other phenomena or conclusions that we could not make without it?

Sunday, 8 July 2012

big bang theory - Was the Universe expanding before the beginning of inflation?

This is likely unanswerable. Inflation was in part a resolution to a fine tuning problem: without it, it seemed we needed the early universe in a very specific and precisely balanced state to get to where it is today, and there was no solid scientific way to explain why things were so perfectly arranged (other than to simply assert that they were).



With inflation, the state of the universe before the inflationary epoch is fairly irrelevant. Mostly, all it needs is that the region that inflates into what we now know as the observable universe had enough time to achieve thermal equilibrium (which we need to explain why the universe looks pretty much the same in all directions, exactly as if regions well outside of light-speed communication had nonetheless achieved thermal equilibrium at some point). Curvature and such get "smeared out" by the inflation to give us the mostly flat and homogeneous universe we see today.



On a pedantic note, some researchers consider inflation and the big bang to be the same thing. At least in the sense that talking about "before inflation" is scientifically meaningless, so if the Big Bang is the (scientific) beginning then we might as well take it to be inflation.

Friday, 6 July 2012

at.algebraic topology - Cohomology classes annihilated by pullbacks

Here's my two cents although it's rather sketchy.



For any CW complex $X$, $H^3(X;\mathbb{Z})=[X,K(\mathbb{Z},3)]$, where $K(\mathbb{Z},3)$ comes equipped with a fibration $\mathbb{CP}^\infty\to P\to K(\mathbb{Z},3)$. The total space $P$ is contractible. Now suppose $X$ is a compact manifold of dimension $n$ which is $2$-connected and $H^3(X;\mathbb{Z})=\mathbb{Z}$. Then choosing a generator of $H^3(X;\mathbb{Z})$ corresponds to a (homotopy class of) map $f:X\to K(\mathbb{Z},3)$. The pullback bundle $f^\ast P\to X$ has the property that $H^3(f^\ast P;\mathbb{Z})=0$.



Since we need a finite-dimensional manifold, which $f^\ast P$ isn't, let $E$ denote the $(n+5)$-skeleton of $f^\ast P$. It is compact and locally looks like $X\times\mathbb{CP}^2$. I think(?) that $\pi:E\to X$ is a fibre bundle. Since $\pi_3$ is unchanged on $4$-skeleta or higher, it follows that $0=\pi_3(E)=\pi_3(f^\ast P)$, whence $H^3(E;\mathbb{Z})=0$.



Feel free to tweak the answer if need be.



Edit As pointed out by algori and Igor, the second paragraph doesn't give you a fibre bundle.

ag.algebraic geometry - Vanishing of Self-Ext groups of vector bundles

Let $E$ be a rank two vector bundle on $\mathbb{P}^n$. Assume that $\text{Ext}^1(E, E)=0$. Will $\text{Ext}^2(E, E)$ be zero? Why? Any geometric explanation (in terms of deformation theory?)?



Edit: As pointed out by Angelo, in the case $n=2$ the answer is no. However, I really want to know the case $n\geq 4$.

milky way - Difference in energy released in stellar mass black hole merger and supermassive black hole merger

Too long for a comment, but it's an incomplete answer.



The gravitational waves detected were from a stellar-mass binary black hole merger, sometimes abbreviated BBH for binary black hole.



The two black holes are thought to be about 36 and 29 solar masses with a final combined mass of about 62. So, roughly 5% of the black hole mass turned into gravitational wave energy. Source.



and from the Wikipedia article (which can be taken with a grain of salt):




As the orbiting black holes give off these waves, the orbit decays, and the orbital period decreases. This stage is called binary black hole inspiral. The black holes will merge once they are close enough. Once merged, the single hole settles down to a stable form, via a stage called ringdown, where any distortion in the shape is dissipated as more gravitational waves. In the final fraction of a second the black holes can reach extremely high velocity, and the gravitational wave amplitude reaches its peak.



The existence of stellar-mass binary black holes (and gravitational waves themselves) was finally confirmed when LIGO detected GW150914 (detected September 2015, announced February 2016), a distinctive gravitational wave signature of two merging stellar-mass black holes of around 30 solar masses each, occurring about 1.3 billion light years away. In its final moments of spiraling inward and merging, GW150914 released around 3 solar masses as gravitational energy, peaking at a rate of 3.6×10^49 watts, more than the combined power of all light radiated by all the stars in the observable universe put together. Supermassive binary black hole candidates have been found but, as yet, not categorically proven.




Supermassive BBHs are thought to have been observed (see here), and we may see a merger in our lifetime; I'm not sure how often they happen. But beyond that, I have no way of knowing how much more energetic a supermassive BBH merger would be, other than to say, I would think, a whole lot, as it would have to scale upwards by some formula.



In the case of Andromeda and the Milky Way, we're talking about Andromeda's roughly 100 million solar mass black hole and the Milky Way's 4-6 million solar mass one. Maybe someone here can estimate, but if the energy output is even 5%-10% of the smaller of the two, that's still a few hundred thousand solar masses of energy, an absolutely crazy energy output. Even if it's far smaller than a few hundred thousand solar masses, it's still enormous energy.
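
A rough back-of-the-envelope sketch of those numbers (my own illustration; the 5% radiated fraction is simply scaled from GW150914, not a prediction):

M_sun_kg = 1.989e30
c = 2.998e8                      # m/s

def radiated_energy_joules(solar_masses):
    """E = m c^2 for a given mass (in solar masses) converted to gravitational waves."""
    return solar_masses * M_sun_kg * c**2

# GW150914: roughly 3 solar masses radiated, about 5e47 J
print("GW150914: %.1e J" % radiated_energy_joules(3))

# Hypothetical Milky Way / Andromeda SMBH merger, if ~5% of the smaller
# (~4 million solar mass) hole were radiated: about 200,000 solar masses
print("SMBH merger (guess): %.1e J" % radiated_energy_joules(0.05 * 4e6))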



I'm also not sure if it's a dangerous form of energy output, as it basically squashes things back and forth maybe a couple of times before returning them to normal. I have no idea what the safe distance is from that kind of merger and gravitational wave creation. Interesting question though. Fun to think about.

Thursday, 5 July 2012

linear algebra - Uniqueness of dimension for topological vector spaces

Let $V$ be a complete Hausdorff locally convex topological vector space over the field $\mathbb{K}$.



Let $B$ be a subset of $V$ satisfying



.



Linearly Independent: For all functions $f$ in $\mathbb{K}^B$, if $\displaystyle\sum_{b \in B} f(b) \cdot b = 0$, then $f$ is identically zero.



Spanning Set: For all vectors $v$ in $V$, there exists a function $f$ in $\mathbb{K}^B$ such that $\displaystyle\sum_{b \in B} f(b) \cdot b = v$.



.



Let $C$ be another subset of $V$ satisfying the above conditions with $B$ replaced with $C$.



Does it follow that $|B| = |C|$?



.



(I know such 'bases' don't always exist, but when they do, do they give a unique dimension?)

ag.algebraic geometry - External tensor product of two (perverse) sheaves

So we have topological spaces $A,B$ and sheaves $F,G$ on $A,B$ of vector spaces over some fixed field $k$, and want to construct a sheaf $F \otimes_k G$ on the product space $A \times B$. You can write it down explicitly:



Let $W \subseteq A \times B$ be open. Then $(F \otimes_k G)(W)$ consists of those elements $s \in \prod_{(a,b) \in W} F_a \otimes_k G_b$ such that for all $(a,b) \in W$ there are open sets $a \in U \subseteq A$, $b \in V \subseteq B$ and $t \in F(U) \otimes_k G(V)$ such that $U \times V \subseteq W$ and for all $(c,d) \in U \times V$ we have $t_{c,d} = s_{c,d}$. Here $t \mapsto t_{c,d}$ denotes the canonical map $F(U) \otimes_k G(V) \to F_c \otimes_k G_d$.



Note that this obviously(!) yields a sheaf on $A \times B$. On stalks, there is a canonical map $(F \otimes_k G)_{a,b} \to F_a \otimes_k G_b$; a calculation shows that it is bijective. Remark that this agrees with the definition given by Strom Borman (the same universal property holds). But here you have a description of the sections of $F \otimes_k G$. In particular, you see that if $F$ and $G$ are the sheaves of $\mathbb{K}$-valued continuous functions on $A$ resp. $B$, then $F \otimes_\mathbb{K} G$ is a rather small subsheaf of the continuous functions on $A \times B$.



The whole thing makes more sense when we take $A,B$ to be two $S$-schemes (or, more generally, locally ringed spaces). Then we have the fibred product $A \times_S B$, which can be constructed as above (I've written this up here, in German). Here, the tensor product is the "right" sheaf.

Tuesday, 3 July 2012

co.combinatorics - Is every matching of the hypercube graph extensible to a Hamiltonian cycle

Let $Q_d$ be the hypercube graph of dimension $d$. It is a known fact (though not so trivial to prove) that given a perfect matching $M$ of $Q_d$ ($d\geq 2$), it is possible to find another perfect matching $N$ of $Q_d$ such that $M \cup N$ is a Hamiltonian cycle in $Q_d$.



The question now is: given a (not necessarily perfect) matching $M$ of $Q_d$ ($d\geq 2$), is it possible to find a set of edges $N$ such that $M \cup N$ is a Hamiltonian cycle in $Q_d$?



The statement is proven to be true for $d \in \{2,3,4\}$.
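
For small $d$ this can also be checked by brute force; here is a minimal sketch (my own, in Python) that verifies the statement for $d=3$ by enumerating all Hamiltonian cycles of $Q_3$ and all matchings:

from itertools import permutations, combinations

d = 3
V = list(range(2 ** d))
adjacent = lambda u, v: bin(u ^ v).count("1") == 1    # vertices differ in exactly one bit
E = [frozenset((u, v)) for u, v in combinations(V, 2) if adjacent(u, v)]

# Edge sets of all Hamiltonian cycles (vertex 0 fixed first to kill rotations).
cycles = []
for p in permutations(V[1:]):
    tour = (0,) + p
    if all(adjacent(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour))):
        cycles.append({frozenset((tour[i], tour[(i + 1) % len(tour)])) for i in range(len(tour))})

def is_matching(edges):
    seen = set()
    for e in edges:
        if e & seen:
            return False
        seen |= e
    return True

matchings = [set(c) for r in range(1, 2 ** (d - 1) + 1)
             for c in combinations(E, r) if is_matching(c)]

# Every matching of Q_3 should extend to a Hamiltonian cycle.
print(all(any(m <= cyc for cyc in cycles) for m in matchings))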

gr.group theory - Generators for congruence subgroups of SL_2

For positive integers $n$ and $L$, denote by $SL_n(\mathbb{Z},L)$ the level $L$ congruence subgroup of $SL_n(\mathbb{Z})$, i.e. the kernel of the homomorphism $SL_n(\mathbb{Z})\rightarrow SL_n(\mathbb{Z}/L\mathbb{Z})$.



For $n$ at least $3$, it is known that $SL_n(\mathbb{Z},L)$ is normally generated (as a subgroup of $SL_n(\mathbb{Z})$) by $L$th powers of elementary matrices. Indeed, this is essentially equivalent to the congruence subgroup problem for $SL_n(\mathbb{Z})$.



However, this fails for $SL_2(\mathbb{Z},L)$ since $SL_2(\mathbb{Z})$ does not have the congruence subgroup property.



Question: Is there a nice generating set for $SL_2(\mathbb{Z},L)$? I'm sure this is in the literature somewhere, but I have not been able to find it.

Monday, 2 July 2012

co.combinatorics - efficient way to count hamiltonian paths in a grid graph for a given pair of vertices

Here is Mathematica code that finds all the Hamiltonian paths between opposite corners of a $5 \times 5$ grid graph:



<< Combinatorica`;
n = 5;
G = GridGraph[n, n];
(* Add dangling edges to corners to force start/end vertices *)
Gplus = AddVertex[G, {0, 0}];
Gplus = AddVertex[Gplus, {n + 1, n + 1}];
Gplus = AddEdge[Gplus, {1, n^2 + 1}];
Gplus = AddEdge[Gplus, {n^2, n^2 + 2}];
ShowGraph[Gplus]
H = HamiltonianPath[Gplus, All];
Print["Number of paths=", Length[H]];
Print["Paths=", H];
Number of paths=208
Paths={{26,1,2,3,4,5,10,9,8,7,6,11,12,13,14,15,20,19,18,17,16,21,22,23,24,25,27}, [etc.]}


GridGraph


Addendum. Setting $n=7$ to compute the comparable number for a $7 \times 7$ grid returns 223,424 Hamiltonian paths between opposite corners. [5 hrs computation time on a 2.5GHz laptop.]
The first one returned is:
{50, 1, 2, 3, 4, 5, 6, 7, 14, 13, 12, 11, 10, 9, 8, 15, 16, 17, 18,
19, 20, 21, 28, 27, 26, 25, 24, 23, 22, 29, 30, 31, 32, 33, 34, 35,
42, 41, 40, 39, 38, 37, 36, 43, 44, 45, 46, 47, 48, 49, 51}
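
For comparison, here is a minimal backtracking sketch in Python (my own, independent of Combinatorica) that counts the same corner-to-corner Hamiltonian paths:

def count_corner_paths(n):
    """Count Hamiltonian paths between opposite corners of the n x n grid graph."""
    total = n * n
    target = total - 1                      # bottom-right corner
    visited = [False] * total

    def neighbours(v):
        r, c = divmod(v, n)
        if r > 0: yield v - n
        if r < n - 1: yield v + n
        if c > 0: yield v - 1
        if c < n - 1: yield v + 1

    def dfs(v, used):
        if used == total:
            return 1 if v == target else 0
        count = 0
        for w in neighbours(v):
            if not visited[w]:
                visited[w] = True
                count += dfs(w, used + 1)
                visited[w] = False
        return count

    visited[0] = True                       # start at the top-left corner
    return dfs(0, 1)

print(count_corner_paths(5))                # the Combinatorica run above reports 208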

Sunday, 1 July 2012

ag.algebraic geometry - Varieties cut by quadrics

As Pete already indicated, Mumford's theorem says that for any projective variety $X\subset \mathbb{P}^n$, its Veronese embedding $v_d(X)\subset \mathbb{P}^N$ is cut out by quadrics for $d\gg 0$. So the reasonable question is about a variety with a fixed projective embedding (such as Grassmannians in the Plücker embedding and not in some other random embedding).



For this latter more meaningful question, many "combinatorial" rational varieties, such as Grassmannians, Schubert varieties (as you pointed out), flag varieties, determinantal varieties, etc., are cut out by quadrics.



For the "non-combinatorial", non-rational varieties, the most classical result is Petri's theorem: a smooth non-hyperelliptic curve of genus $gge 4$ in its canonical embedding is cut out by quadrics, with the exceptions of trigonal curves and plane quintics.



There is a vast generalization of this property: $X\subset \mathbb{P}^n$ satisfies property $N_p$ if the first syzygy of its homogeneous ideal $I_X$ is a direct sum of $\mathcal{O}(2)$'s, the second syzygy is a direct sum of $\mathcal{O}(3)$'s, etc., and the $p$-th syzygy is a direct sum of $\mathcal{O}(p+1)$'s. In this language, "$X$ is cut out by quadrics" is equivalent to property $N_1$.



Green's conjecture (1984) is that a smooth nonhyperelliptic curve in its canonical embedding satisfies $N_p$ for $p=\mathrm{Cliff}(X)-1$, where $\mathrm{Cliff}(X)$ is the Clifford index of $X$. This has been proved for generic curves of any genus by Voisin (in characteristic 0; it is false in positive characteristic).



Another notable case: the ideal of $2\times 2$ minors of a $p\times q$ matrix has property $N_{p+q-3}$.

lo.logic - What is the manner of inconsistency of Girard's paradox in Martin Lof type theory

Girard's paradox constructs a non-normalizing proof of False. You could read Hurkens's "A simplification of Girard's paradox", or maybe Kevin Watkins's formalization in Twelf.



In general, these questions are not equivalent, though they often coincide. A "reasonable" type theory will, by inspection, have no normal proofs of False, and so normalization implies consistency. The inverse (non-normalization => proof of False) is much less obvious, and it is certainly possible to construct reasonable paraconsistent type theories, where non-termination is confined under a monad and does not result in a proof of False.