Sunday, 31 October 2010

ct.category theory - Why forgetful functors usually have LEFT adjoint?

Many standard examples of algebraic "forgetful" functors $U: C \to \mathrm{Set}$ have the following form:



  • $C$ is a presentable category, i.e., there is a small category $I$ and a collection $S$ of cones of $I$ such that $C$ is equivalent to the full subcategory of functors $I \to \mathrm{Set}$ consisting of those functors which send the cones of $S$ to limit diagrams in $\mathrm{Set}$;

  • $U$ is evaluation at an object $u \in I$.

For example, if $C$ is the category of monoids, take $I = \Delta^{\mathrm{op}}$ so that functors $I \to \mathrm{Set}$ are simplicial sets, and choose $S$ so that the objects of $C$ are those simplicial sets $X$ such that $X_0 = \ast$ and $X_{i+j} \to X_i \times X_j$ is an isomorphism (where this map is induced by the inclusions of the first $i+1$ and last $j+1$ elements of an ordered $(i+j+1)$-element set). The object $u$ is the two-element set $[1]$. (One actually needs only the full subcategory of $\Delta^{\mathrm{op}}$ on the objects $[0]$, $[1]$, $[2]$, $[3]$, and the cones involving these objects; expanding this gives a possibly more familiar presentation of the notion of monoid.)
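To make the last parenthetical concrete, here is a small Python sketch (entirely my own construction, not from the answer) that takes the nerve of a finite monoid, checks the cone (Segal) condition in degree 2, and recovers the multiplication from the inverse of the Segal map $X_2 \to X_1 \times X_1$ followed by the face map $d_1$:

```python
from itertools import product

# A small monoid to test with: ({0,1,2,3}, multiplication mod 4), identity 1.
M = [0, 1, 2, 3]
def op(a, b): return (a * b) % 4

# The nerve: X_n = M^n; the face map d_1 : X_2 -> X_1 multiplies the entries.
X2 = list(product(M, repeat=2))

# Segal map X_2 -> X_1 x X_1 (restriction to the first and last edges).
segal = {x: ((x[0],), (x[1],)) for x in X2}
# The cone condition says this is a bijection; for a nerve that is immediate.
assert len(set(segal.values())) == len(X2)

# Recover the multiplication: invert the Segal map, then apply d_1.
inv = {v: k for k, v in segal.items()}
def mult(a, b):
    x = inv[((a,), (b,))]     # the unique 2-simplex with edges a and b
    return op(x[0], x[1])     # d_1 of that simplex

# The degree-3 data gives associativity:
assert all(mult(mult(a, b), c) == mult(a, mult(b, c))
           for a, b, c in product(M, repeat=3))
print("recovered multiplication agrees and is associative")
```
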



In these cases (which include models of any essentially algebraic theory) the existence of a left adjoint is guaranteed by the theory of presentable categories. Indeed, the inclusion of $C$ into $\mathrm{Set}^I$ has a left adjoint, which we compose with the constant diagram functor $\mathrm{Set} \to \mathrm{Set}^I$ to obtain a left adjoint to $U$. See Adámek and Rosický, Locally presentable and accessible categories, for an excellent introduction to the subject.

gr.group theory - Why are abelian groups amenable?

Here is a simpler argument, combining 1--6 into one step.



Let $G$ be a countable abelian group generated by $x_1, x_2, \ldots$. Then a Følner sequence is given by taking $S_n$ to be the pyramid consisting of elements which can be written as



$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n \quad\text{with}\quad |a_1| \le n,\ |a_2| \le n-1,\ \ldots,\ |a_n| \le 1.$$



The invariant probability measure is then defined by $\mu(A) = \underset{\omega}{\lim}\, |A \cap S_n| / |S_n|$ (a limit along a fixed nonprincipal ultrafilter $\omega$) as usual.
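As a concrete sanity check (my own example, not part of the answer), one can compute the Følner ratio $|gS_n \bigtriangleup S_n| / |S_n|$ for the pyramid sets in $\mathbb{Z}^2$ with generators $x_1 = (1,0)$, $x_2 = (0,1)$:

```python
# Pyramid Følner sets in Z^2 with generators x1=(1,0), x2=(0,1):
# S_n = {a1*x1 + a2*x2 : |a1| <= n, |a2| <= n-1}.
def pyramid(n):
    return {(a1, a2) for a1 in range(-n, n + 1)
                     for a2 in range(-(n - 1), n)}

def folner_ratio(S, g):
    gS = {(x + g[0], y + g[1]) for (x, y) in S}
    return len(gS ^ S) / len(S)   # |gS △ S| / |S|

g = (1, 0)
ratios = [folner_ratio(pyramid(n), g) for n in (5, 10, 20, 40)]
print(ratios)  # tends to 0; here it equals 2/(2n+1)
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
```
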



A more natural way to phrase this argument is:



  1. The countable group $\mathbb{Z}^\infty$ is amenable.

  2. All countable abelian groups are amenable, because amenability descends to quotients.

But I would like to emphasize that there is really only one step here, because the proof for $\mathbb{Z}^\infty$ automatically applies to any countable abelian group. This two-step approach is easier to remember, though. (The ideas here are the same as in my other answer, but I think this formulation is much cleaner.)




2016 Edit: Here is an argument to see that $S_n$ is a Følner sequence. It is quite pleasant to think about precisely where commutativity comes into play.



Fix $g \in G$ and any finite subset $S \subset G$. We first analyze the size of the symmetric difference $gS \bigtriangleup S$. Consider the equivalence relation on $S$ generated by the relation $x \sim y$ if $y = x + g$ (which is itself neither reflexive, symmetric, nor transitive). We will call an equivalence class under this relation a "$g$-string". Every $g$-string consists of elements $x_1, \ldots, x_k \in S$ with $x_{j+1} = x_j + g$.



The first key observation is that $|gS \bigtriangleup S|$ is at most twice the number of $g$-strings. Indeed, if $z \in S$ belongs to $gS \bigtriangleup S$, then $z$ must be the "leftmost endpoint" of a $g$-string; if $z \notin S$ belongs to $gS \bigtriangleup S$, then $z - g$ must be the "rightmost endpoint" of a $g$-string; and each $g$-string has at most 2 such endpoints (it could have 1 if the endpoints coincide, or 0 if $g$ has finite order).
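This endpoint count is easy to test numerically; the sketch below (my own helper names, not from the answer) counts $g$-strings in finite subsets of $\mathbb{Z}$ as maximal arithmetic runs and checks the bound on random examples:

```python
import random

def num_g_strings(S, g):
    # A g-string's "leftmost endpoint" is an x in S with x - g not in S,
    # so counting leftmost endpoints counts the g-strings.
    return sum(1 for x in S if x - g not in S)

def sym_diff_size(S, g):
    gS = {x + g for x in S}
    return len(gS ^ S)

random.seed(0)
for _ in range(100):
    S = set(random.sample(range(-50, 50), 30))
    g = random.choice([1, 2, 3])
    # First key observation: |gS △ S| <= 2 * (number of g-strings).
    assert sym_diff_size(S, g) <= 2 * num_g_strings(S, g)
print("bound verified on random examples")
```

In $\mathbb{Z}$ (where every nonzero $g$ has infinite order) the bound is in fact an equality, since every string has two distinct endpoints.
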



Our goal is to prove for all $g \in G$ that $\frac{|gS_n \bigtriangleup S_n|}{|S_n|} \to 0$ as $n \to \infty$. Since $|abS \bigtriangleup S| \le |abS \bigtriangleup bS| + |bS \bigtriangleup S| = |aS \bigtriangleup S| + |bS \bigtriangleup S|$, it suffices to prove this for all $g_i$ in a generating set.



By the observation above, to prove that $\frac{|g_i S_n \bigtriangleup S_n|}{|S_n|} \to 0$, it suffices to prove that $\frac{\#\text{ of }g_i\text{-strings in }S_n}{|S_n|} \to 0$. Equivalently, we must prove that the reciprocal $\frac{|S_n|}{\#\text{ of }g_i\text{-strings in }S_n}$ diverges, or in other words that the average size of a $g_i$-string in $S_n$ diverges.



We now use the specific form of our sets $S_n = \{a_1 g_1 + \cdots + a_n g_n \,:\, |a_i| \le n - i\}$. For any $i$ and any $n$, set $k = n - i$ (so that $|a_i| \le k$ in $S_n$). The second key observation is that every $g_i$-string in $S_n$ has cardinality at least $2k+1$ unless $g_i$ has finite order. Indeed, given $x \in S_n$, write it as $x = a_1 g_1 + \cdots + a_i g_i + \cdots + a_n g_n$; then the elements $a_1 g_1 + \cdots + b g_i + \cdots + a_n g_n \in S_n$ for $b = -k, \ldots, -1, 0, 1, \ldots, k$ belong to a single $g_i$-string containing $x$. If $g_i$ does not have finite order, these $2k+1$ elements must be distinct. This shows that the minimum size of a $g_i$-string in $S_n$ is $2n - 2i + 1$, so for fixed $g_i$ the average size diverges as $n \to \infty$.



When $g_i$ has finite order $N$ this argument does not work (a $g_i$-string has maximum size $N$, so the average size cannot diverge). However, once $N < 2k+1$, the subset containing the $2k+1$ elements above is closed under translation by $g_i$. In other words, once $n \ge i + N/2$ the set $S_n$ is $g_i$-invariant, so $|g_i S_n \bigtriangleup S_n| = 0$.



I'm grateful to David Ullrich for pointing out that this claim is not obvious, since the quotient of a Følner sequence need not be a Følner sequence (Yves Cornulier gives an example here).

Saturday, 30 October 2010

arithmetic geometry - Integer points (very naive question)

To make sense of the notion of integer points, your scheme should be defined over $\mathbb{Z}$. What do we mean by that? Of course we should not ask for a structure map to $\mathrm{Spec}(\mathbb{Z})$, since every scheme has one such map. The right notion is the following.



Let $X$ be a scheme over $\mathbb{C}$; so by definition we have a structure map $X \to \mathrm{Spec}\,\mathbb{C}$. Then we say that $X$ is defined over $\mathbb{Z}$ if there exists a scheme $X_{\mathbb{Z}}$ over $\mathbb{Z}$ such that $X$ is the base change of $X_{\mathbb{Z}}$ to $\mathbb{C}$, i.e. $X \cong X_{\mathbb{Z}} \times_{\mathrm{Spec}\,\mathbb{Z}} \mathrm{Spec}\,\mathbb{C}$.



Now for such a scheme an integral point is a map $\mathrm{Spec}\,\mathbb{Z} \to X_{\mathbb{Z}}$ such that the composition with the structure map is the identity. Note that the same can be done for every ring $A$ in place of $\mathbb{Z}$.



With this definition, the line $x = 0$ is defined over $\mathbb{Z}$, but the line $x = \pi$ is not, basically because there is no way to generate its ideal with equations having integer coefficients. So your problem does not arise anymore.



EDIT: Abstractly of course the two lines are isomorphic over $\mathbb{C}$, so the line $r = \{x = \pi\}$ actually has a model over $\mathbb{Z}$. The problem is that this model is not compatible with the inclusion in $\mathbb{A}^2$; that is, there will be no map $r_{\mathbb{Z}} \to \mathbb{A}^2_{\mathbb{Z}}$ whose base change is the inclusion of $r$ into $\mathbb{A}^2$. In order to have this, you would have to ask that the ideal of $r$ in $\mathbb{A}^2$ be generated by polynomials with integer coefficients.



As for your second question, there can be different models, that is, nonisomorphic schemes over $\mathbb{Z}$ which become isomorphic after base change to $\mathbb{C}$. So before discussing the existence of integral points, you have to FIX a model, and the points will in general depend on the model.



For instance take the two conics $x^2 + y^2 = 2$ and $x^2 + y^2 = 3$. Both have an obvious choice of a model, given by the inclusion in $\mathbb{A}^2$; moreover they are isomorphic over $\mathbb{C}$. But the integral points on the first one are $(\pm 1, \pm 1)$, while the second has none.
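A brute-force search confirms this contrast (a sketch; the search bound is an arbitrary choice, though for these two equations any solution must in fact satisfy $|x|, |y| \le 1$):

```python
def integer_points(rhs, bound=100):
    """All (x, y) in Z^2 with x^2 + y^2 == rhs and |x|, |y| <= bound."""
    return sorted((x, y) for x in range(-bound, bound + 1)
                         for y in range(-bound, bound + 1)
                         if x * x + y * y == rhs)

print(integer_points(2))  # the four points (±1, ±1)
print(integer_points(3))  # no integral points at all
assert integer_points(2) == [(-1, -1), (-1, 1), (1, -1), (1, 1)]
assert integer_points(3) == []
```

(That $x^2 + y^2 = 3$ has no integer solutions also follows from reducing mod 4.)
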



Finally you consider the possibility that the structure over $\mathbb{C}$ is not relevant. This is false: the base change $X_{\mathbb{Z}} \times_{\mathrm{Spec}\,\mathbb{Z}} \mathrm{Spec}\,\mathbb{C}$ is endowed with a natural map to $\mathrm{Spec}\,\mathbb{C}$, and we ask for the isomorphism with $X$ to be over $\mathbb{C}$.

Friday, 29 October 2010

linear algebra - Statement of Lagrange's theorem on determinants (elementary question).

"Special case of a general theorem of Lagrange" doesn't sound right to me: Wikipedia writes that "Lagrange (1773) treated determinants of the second and third order. Lagrange was the first to apply determinants to questions of elimination theory; he proved many special cases of general identities.", so I think your original question is already more general than anything Lagrange has done.



Here are two simple generalizations of your original question:



(1) If



$$\left(\begin{array}{cccc} A_{1,1} & A_{1,2} & \dots & A_{1,n} \\ A_{2,1} & A_{2,2} & \dots & A_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,2} & \dots & A_{n,n} \end{array}\right)$$



is a block matrix with



$A_{i,j} = 0$ for every $i < j$, and
$A_{i,i}$ being a square matrix for every $i$,



then its determinant is $\det A_{1,1} \cdot \det A_{2,2} \cdot \dots \cdot \det A_{n,n}$.



The easiest proof (imho) uses the Leibniz formula for determinants, which reduces it to the following combinatorial fact: If a finite set $S$ is the union of some pairwise disjoint sets $S_1, S_2, \ldots, S_n$, and $\pi$ is a permutation of the set $S$, then either $\pi(S_i) = S_i$ for every $i$, or there exist $i < j$ such that $\pi$ maps at least one element of $S_j$ into $S_i$. This is an exercise in induction.
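Generalization (1) is easy to spot-check numerically; here is a NumPy sketch (the block sizes and random seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random square diagonal blocks of sizes 2, 3, 1.
sizes = [2, 3, 1]
blocks = [rng.standard_normal((s, s)) for s in sizes]

# Assemble a block lower-triangular matrix: A_{i,j} = 0 for i < j.
n = sum(sizes)
A = np.zeros((n, n))
offsets = np.cumsum([0] + sizes)
for i, Bi in enumerate(blocks):
    r = slice(offsets[i], offsets[i + 1])
    A[r, r] = Bi
    for j in range(i):              # arbitrary entries below the diagonal
        c = slice(offsets[j], offsets[j + 1])
        A[r, c] = rng.standard_normal((sizes[i], sizes[j]))

prod = np.prod([np.linalg.det(B) for B in blocks])
assert np.isclose(np.linalg.det(A), prod)
print("det(A) equals det(A_{1,1}) * det(A_{2,2}) * det(A_{3,3})")
```
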



(2) Another generalization: If $(U_i)_{i \in \mathbb{Z}}$ is an exact chain complex of finite-dimensional vector spaces, bounded from below and from above (i.e., the vector space $U_i$ is zero for all sufficiently large $i$ and for all sufficiently small $i$), and $(f_i)_{i \in \mathbb{Z}}$ is a chain homomorphism from $(U_i)_{i \in \mathbb{Z}}$ to $(U_i)_{i \in \mathbb{Z}}$, then



$$\prod_{i \in \mathbb{Z};\ i \text{ is even}} \det f_i = \prod_{i \in \mathbb{Z};\ i \text{ is odd}} \det f_i.$$
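This identity can also be spot-checked on the smallest interesting example, the exact complex $0 \to A \to A \oplus B \to B \to 0$ with chain endomorphism $(f, f \oplus g, g)$ (a NumPy sketch; the specific complex is my choice, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 2, 3
f = rng.standard_normal((a, a))   # endomorphism of U_2 = A  (even index)
g = rng.standard_normal((b, b))   # endomorphism of U_0 = B  (even index)

# U_1 = A ⊕ B with chain map f ⊕ g (odd index); with the inclusion and
# projection as differentials, 0 -> A -> A⊕B -> B -> 0 is exact and
# (f, f⊕g, g) commutes with them.
fg = np.block([[f, np.zeros((a, b))],
               [np.zeros((b, a)), g]])

even = np.linalg.det(f) * np.linalg.det(g)   # indices 2 and 0
odd = np.linalg.det(fg)                      # index 1
assert np.isclose(even, odd)
print("product over even degrees equals product over odd degrees")
```
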

Deformation theory and differential graded Lie algebras

I hope to write more on this later, but for now let me make some general assertions: there are general theorems to this effect. Let me give two references: arXiv:math/9812034, DG coalgebras as formal stacks, by Vladimir Hinich, and the survey article arXiv:math/0604504, Higher and derived stacks: a global overview, by Bertrand Toën (look at the very end, where Hinich's theorem and its generalizations are discussed).



The basic assertion, if you'd like, is the Koszul duality of the commutative and Lie operads in characteristic zero. In its simplest form it's a version of Lie's theorem: to any Lie algebra we can assign a formal group, and to every formal group we can assign a Lie algebra, and this gives an equivalence of categories. The general construction is the same: we replace Lie algebras by their homotopical analog, $L_\infty$ algebras or dg Lie algebras (the two notions are equivalent --- both are Lie algebras in a stable $(\infty,1)$-category). We can associate to such an object the space of solutions of the Maurer-Cartan equations -- this is basically the classifying space of its formal group (i.e. the formal group shifted by 1). Conversely, from any formal derived stack we can calculate its shifted tangent complex (or perhaps better to say, the Lie algebra of its loop space). These are equivalences of $\infty$-categories if you set everything up correctly. This is a form of Quillen's rational homotopy theory - we're passing from a simply connected space to the Lie algebra of its loop space (the Whitehead algebra of homotopy groups of $X$ with a shift) and back.



So basically this "philosophy", with a modern understanding, is just calculus or Lie theory: you can differentiate and exponentiate, and they are equivalences between commutative and Lie theories (note we're saying this geometrically, which means replacing commutative algebras by their opposite, i.e. appropriate spaces -- in this case formal stacks). Since any deformation/formal moduli problem, properly formulated, gives rise to a formal derived stack, it is gotten (again in characteristic zero) by exponentiating a Lie algebra.



Sorry to be so sketchy; I might try to expand later, but look in Toën's article for more (though I think it's formulated there as an open question, and I think it's not so open anymore).
Once you see things this way you can generalize them in various ways -- for example, replacing commutative geometry by noncommutative geometry, you replace Lie algebras by associative algebras (see arXiv:math/0605095 by Lunts and Orlov for this philosophy), or pass to geometry over any operad with an augmentation and its dual...

soft question - How do you become a good listener?

  • Prepare in advance.

You will probably get the most out of classes if you read the text ahead of time. This varies by the lecturer's style, but try at least skimming the material (or notes from the last class if there is no text). You don't have to understand 100% of the material before the lecture. Try to identify material that you don't know, and pay special attention to that during class. If you still don't understand, then try to ask at least one question during class, and if that doesn't satisfy you, ask the instructor after class. When you do understand the text, look for differences between the text and the instructor's presentation, both in material and emphasis. If the material is easy, ask yourself how you would present it if you were teaching the class.



When you attend a research lecture, try to do your homework ahead of time, too! Try to find an expository article, or read a few reviews and abstracts of papers in related areas so you know what people in that field find interesting, what is hard, what the key examples are, what techniques seem effective, and what the connections are with other areas. The first 5-15 minutes of the talk may be similar, and they are critical. Understanding the details of a technical talk does little good if you do not know the context.



  • Listen actively.

Keep a few examples in mind. How do the results compare with the basic examples? How much progress is there toward the examples people want to understand? What are the differences between the later examples and basic examples?



Try to understand where you are on the road map for the talk/course/the book which will eventually be written about the theory being developed.

Thursday, 28 October 2010

cv.complex variables - Contour integration problem from probability

Now, since you call $\operatorname{erfc}(1)$ "a closed form expression", I should confess I do not understand the rules of this game. What's the big difference between $\int_1^\infty e^{-x^2/2}\,dx$ and the original integral? Or do you ask if it is an elementary function of the parameter $c$?



If the latter, note that the function $J(c) = e^{c^2} \int_{-\infty}^{\infty} \frac{e^{-(x-c)^2}}{1+x^2}\,dx$ satisfies the equation $J'' + 4J = 4\sqrt{\pi}\, e^{c^2}$, which, if you try to solve it by the method of variation of parameters, leads to indefinite integrals like $\int e^{c^2} \cos 2c\,dc$. Those are not elementary, but not much worse than your $\operatorname{erfc}$.

linear algebra - Broken Symmetry

Your examples about vector spaces and differential geometry do not make any sense to me.
One does not need coordinates or bases to prove statements in linear algebra and differential geometry.
Personally, I always use coordinate- and basis-free proofs.
For me the reason to avoid coordinates and bases is that we lose geometric intuition whenever we use them.
See my manifesto on this matter here: When to pick a basis?



One way to make the definition of natural transformation more natural is to consider the category A
with exactly two objects and one non-trivial morphism between them.
Then the set of morphisms of an arbitrary category C is the set of functors Fun(A, C).
If we want the set of functors Fun(C, D) between two categories C and D to be a category,
then the set of morphisms of this category is Fun(A, Fun(C, D)).
But it is natural to assume that we have the standard adjunction Fun(A, Fun(C, D)) = Fun(A × C, D).
Unraveling the definition of functor from A × C to D yields precisely the usual axioms for natural transformations.
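A concrete instance of the naturality squares this unraveling produces: list reversal, viewed as a transformation from the list functor to itself (a Python sketch; the example is mine, not from the answer):

```python
# F = G = the list functor on sets: X -> lists over X, f -> map(f).
def fmap(f):
    return lambda xs: [f(x) for x in xs]

# Candidate natural transformation: componentwise list reversal.
def eta(xs):
    return list(reversed(xs))

# Naturality square: eta_Y ∘ F(f) == G(f) ∘ eta_X for every f : X -> Y.
f = lambda n: n * n
xs = [1, 2, 3, 4]
assert eta(fmap(f)(xs)) == fmap(f)(eta(xs))
print(eta(fmap(f)(xs)))  # [16, 9, 4, 1]
```

In the language above, the two sides of the assertion are exactly the two ways of traversing the nontrivial morphism of $A$ against a morphism of $C$ inside $\mathrm{Fun}(A \times C, D)$.
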

big list - Any reference on multilinear algebra

Dear mingming, here are three excellent books.



1) Tensor Spaces and Exterior Algebra by Takeo Yokonuma.
Translations of Mathematical Monographs, volume 108, AMS 1992



You can browse it in Google books here



2) Laurent Schwartz (yes, the Fields medalist of distributions fame) wrote a book, little-known even in France: Les Tenseurs, Hermann, 1998.
It is remarkably well written and contains a wealth of information not found, to my knowledge, in other books. The bad news: it is in French and not translated...



3) Finally there is an amazingly original free book by Sergei Winitzki, Linear Algebra via Exterior Products. Here is the link

Wednesday, 27 October 2010

foliations - Differential forms, PDE's and Élie Cartan

Robert Bryant is the reigning expert on this. An excellent book on the subject (later than the one mentioned) is:
Exterior Differential Systems and Euler-Lagrange Partial Differential Equations, Chicago Lectures in Mathematics (2003), University of Chicago Press (vii+213 pages, ISBN: 0-226-07794-2), by
R. Bryant, Phillip Griffiths and Dan Grossman.



I just recalled, Bryant has a very nice set of nine introductory lectures on the subject. It may be just what you are looking for! They are available online here:



http://www.math.duke.edu/~bryant/MSRI_Lectures.pdf

Tuesday, 26 October 2010

soft question - How much of scheme theory can you visualize?

Well, you asked 10 different questions, and I am not sure what you mean by "nonproper" ($\mathrm{Spec}\,A$ is not proper). But let's see.



A scheme is a very geometric object; with practice - or maybe just habit - one learns to visualize it quite well. If you already see geometrically $\mathrm{Spec}$ of a finitely generated algebra over a field $k$ (including algebras with nilpotents, which you visualize as "thickenings", including $k$ not algebraically closed, which you visualize as Galois orbits; you looked at these, right? these are important steps) then you are almost there. Add some other standard examples such as $\mathrm{Spec}(\mathbb{Z})$, a DVR, a double-headed snake (the first nonseparated scheme), and you already know plenty to start doing research.



Infinite-dimensional algebras? Well, I suppose it is just as hard or easy to imagine them as infinite-dimensional spaces.



The fiber product is a perfectly geometric notion as well, and fairly easy to visualize. You begin by looking at fiber products of sets and you progress from there through some standard examples. Isolating a fiber of a morphism is an important case. And then look at some examples where the residue fields of the scheme points change. Learn the simple way to compute the tensor product $A \otimes_R B$ by using generators and relations of $A$, and you will be up and running in no time.
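The generators-and-relations recipe is quick to try on a concrete fiber (my own example, not from the answer): for $A = \mathbb{Z}[x]/(x^2+1)$, the fiber of $\mathrm{Spec}\,A \to \mathrm{Spec}\,\mathbb{Z}$ over $(p)$ is $\mathrm{Spec}\,\mathbb{F}_p[x]/(x^2+1)$, so the points of the fiber are read off from how $x^2+1$ factors mod $p$:

```python
# Fiber of Spec Z[x]/(x^2+1) -> Spec Z over (p): count roots of x^2+1 mod p.
def roots_mod_p(p):
    return [x for x in range(p) if (x * x + 1) % p == 0]

for p in (2, 5, 7, 13):
    r = roots_mod_p(p)
    kind = ("one double point" if p == 2 else
            "two rational points" if r else
            "one point with residue field F_{p^2}")
    print(p, r, kind)
# p = 5, 13 (p ≡ 1 mod 4): x^2+1 splits, two points in the fiber;
# p = 7 (p ≡ 3 mod 4): irreducible, a single point with bigger residue field;
# p = 2: (x+1)^2, a nonreduced fiber -- a "thickened" point.
assert roots_mod_p(5) == [2, 3] and roots_mod_p(7) == []
```
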



As far as the balance of geometry vs algebra, I suppose that depends on a person and everybody is different. My advisor used to say that geometry comes first and then later algebra follows, and I tend to agree. I think you get nowhere without geometric intuition.



But if you are serious, at some point you will need a solid commutative algebra foundation. Fortunately these days there are plenty of nice books, starting with the very nice and elementary "Undergraduate commutative algebra" by Miles Reid.

Monday, 25 October 2010

The ring of algebraic integers of the number field generated by torsion points on an elliptic curve

[Comment: what follows is not really an answer, but rather a focusing of the question.]



In general, there is not such a nice description even of the number field $\mathbb{Q}(a,b)$ -- typically it will be some non-normal number field whose normal closure has Galois group $\operatorname{GL}_2(\mathbb{Z}/n\mathbb{Z})$, where $n$ is the order of the torsion point.



In order to maintain the analogy you mention above, you would do well to consider the special case of an elliptic curve with complex multiplication, say by the maximal order of an imaginary quadratic field $K = \mathbb{Q}(\sqrt{-N})$, necessarily of class number one since you want the elliptic curve to be defined over $\mathbb{Q}$. In this case, the field $K(P)$ will be -- up to a multiquadratic extension -- the anticyclotomic part of the $n$-ray class field of $K$.



And now it is a great question exactly what the rings of integers of these very nice number fields are. One might even venture to hope that they will be integrally generated by the $x$- and $y$-coordinates of these torsion points on CM elliptic curves (certainly there are well-known integrality properties for torsion points, although I'm afraid I'm blanking on an exact statement at the moment; I fear there may be some problems at 2...).



I'm looking forward to a real answer to this one!

Sunday, 24 October 2010

higher category theory - Computation of Joins of Simplicial Sets

It turns out that joins of simplicial sets are fairly easy to define, but hard to manage. In lots of cases, we'd like to compute what a join is (does it look like a horn? a boundary?) and identify it as such, so we can figure out when our morphisms from the join have certain nice properties like being anodyne, having lifting properties, and all of that wonderful stuff.



For example, consider the join $\Lambda^n_j \star \Delta^m$. The problem that I currently face is, I can't tell what this thing looks like from the definition.



Consider an even simpler case, $\Delta^n \star \partial\Delta^m$. From the definition, we get a very nasty description of this join, and I'm having trouble applying it and computing the join in terms of nicer simplicial sets.



I ask this because on p. 62 of Higher Topos Theory by Lurie, for example, he states that for some $0 < j \le n$
$$\Lambda^n_j \star \Delta^m \coprod_{\Lambda^n_j \star \partial\Delta^m} \Delta^n \star \partial\Delta^m$$

and says that we can identify this with the horn $\Lambda^{n+m+1}_j$. Unraveling the definitions seems to make it harder to understand, and I just don't see how this result was achieved. However, my aim here is to understand how the computation was actually carried out, since it is completely omitted.



For convenience, here is the definition of the join of $S$ and $S'$: for each object $J \in \Delta$,
$$(S \star S')(J) = \coprod_{J = I \cup I'} S(I) \times S'(I'),$$

where $\forall (i \in I \wedge i' \in I')\ i < i'$, which implies that $I$ and $I'$ are disjoint.



EDIT AFTER ANSWER: Both Reid and Greg provided good answers to the question, and I only accepted the one that I did because Greg commented more recently. So for anyone reading this at some point in the future, read both answers, as they are both good.

career - How Much Work Does it Take to be a Successful Mathematician?

Hi Everyone,



Famous anecdotes of G. H. Hardy relate that his work habits consisted of working no more than four hours a day in the morning and then reserving the rest of the day for cricket and tennis. Apparently his best ideas came to him when he wasn't "doing work." Poincaré also said that he solved problems after working on them intensely, getting stuck, and then letting his subconscious digest the problem. This is communicated in another anecdote where, right as he stepped on a bus, he had a profound insight in hyperbolic geometry.



I am less interested in hearing more of these anecdotes; rather, I am interested in what people consider an appropriate amount of time to spend on doing mathematics in a given day if one has career ambitions of eventually being a tenured mathematician at a university.



I imagine everyone has different work habits, but I'd like to hear them, and in particular I'd like to hear how the number of hours per day spent doing mathematics changes during different times in a person's career: undergrad, grad school, post doc and finally while climbing the faculty ladder. "Work" is meant to include working on problems, reading papers, math books, etcetera (I'll leave the question of whether or not answering questions on MO counts as work to you). Also, since teaching is considered an integral part of most mathematicians' careers, it might be good to track, but I am interested primarily in hours spent on learning the preliminaries for and directly doing research.



I ask this question in part because I have many colleagues and friends in computer science and physics, where pulling late nights or all-nighters is commonplace among grad students and even faculty. I wonder if the nature of mathematics is such that putting in such long hours is neither necessary nor sufficient for being "successful" or getting a post-doc/faculty job at a good university. In particular, does Malcolm Gladwell's 10,000-hour rule apply to mathematicians?



Happy Holidays!

Tuesday, 19 October 2010

rt.representation theory - Classification of representations of CCR algebras?

The question depends very much on the regularity that you demand. You have to decide before asking the question which operators are supposed to be self-adjoint or merely symmetric as unbounded operators, etc. Weyl solved the problem by exponentiating everything and looking at the resulting relations. This however gives rise to some unphysical representations.



Buchholz and Grundling give a new $C^*$-algebraic approach to the problem in arXiv:0705.1988, using the notion of the resolvent algebra. This settles the problem very nicely from a mathematical and physical perspective.

Is Fourier analysis a special case of representation theory or an analogue?

I would like to elaborate slightly on my comment. First of all, Fourier analysis has a very broad meaning. Fourier introduced it as a means to study the heat equation, and it certainly remains a major tool in the study of PDE. I'm not sure that people who use it in this way think of it in a particularly representation-theoretic manner.



Also, when one thinks of the Fourier transform as interchanging position space and frequency space, or (as in quantum mechanics) position space and momentum space, I don't think that a representation-theoretic viewpoint necessarily plays much of a role.



So, when one thinks about Fourier analysis from the point of view of group representation theory, this is just one part of Fourier analysis, perhaps the most foundational part, and it is probably most important when one wants to understand how to extend the basic statements regarding Fourier transforms or Fourier series from functions on $\mathbb{R}$ or $S^1$ to functions on other (locally compact, say) groups.



As I noted in my comment, the basic question is: how to decompose the regular representation of $G$ on the Hilbert space $L^2(G)$. When $G$ is locally compact abelian, this has a very satisfactory answer in terms of the Pontrjagin dual group $\widehat{G}$, as described in Dick Palais's answer: one has a Fourier transform relating $L^2(G)$ and $L^2(\widehat{G})$. A useful point to note is that $G$ is discrete/compact if and only if $\widehat{G}$ is compact/discrete. So $L^2(G)$ is always described as the Hilbert space direct integral of the characters of $G$ (which are the points of $\widehat{G}$) with respect to the Haar measure on $\widehat{G}$, but when $G$ is compact, so that $\widehat{G}$ is discrete, this just becomes a Hilbert space direct sum, which is more straightforward (thus the series of Fourier series are easier than the integrals of Fourier transforms).
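In the simplest case where $G = \mathbb{Z}/N$ is both compact and discrete, the direct sum is the finite Fourier transform, and the resulting Plancherel identity can be checked numerically (a NumPy sketch of my own; the normalization follows numpy.fft's convention):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# The characters of Z/N are n -> exp(2*pi*i*k*n/N); numpy.fft.fft pairs f
# against them (with a minus sign in the exponent).
f_hat = np.fft.fft(f)

# Plancherel/Parseval: ||f||^2 == (1/N) * ||f_hat||^2 in this normalization.
assert np.isclose(np.sum(np.abs(f) ** 2),
                  np.sum(np.abs(f_hat) ** 2) / N)
print("Plancherel identity holds on Z/64")
```
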



I will now elide Dick Palais's distinction between the Fourier case and the more general context of harmonic analysis, and move on to the non-abelian case. As Dick Palais also notes, when $G$ is compact, the Peter--Weyl theorem nicely generalizes the theory of Fourier series; one again describes $L^2(G)$ as a Hilbert space direct sum, not of characters, but of finite-dimensional representations, each appearing with multiplicity equal to its degree (i.e. its dimension). Note that the set over which one sums now is still discrete, but is not a group. And there is less homogeneity in the description: different irreducibles have different dimensions, and so contribute in different amounts (i.e. with different multiplicities) to the direct sum.



When $G$ is locally compact but neither compact nor abelian, the theory becomes more complex. One would like to describe $L^2(G)$ as a Hilbert space direct integral of matrix coefficients of irreducible unitary representations, and for this one has to find the correct measure (the so-called Plancherel measure) on the set $\widehat{G}$ of irreducible unitary representations. Since $\widehat{G}$ is now just a set, a priori there is no natural measure to choose (unlike in the abelian case, when $\widehat{G}$ is a locally compact group, and so has its Haar measure), and in general, as far as I understand, one doesn't have such a direct integral decomposition of $L^2(G)$ in a reasonable sense.



But in certain situations (when $G$ is of "Type I") there is such a decomposition, for a uniquely determined measure, the so-called Plancherel measure, on $\widehat{G}$. But this measure is not explicitly given. Basic examples of Type I locally compact groups are semi-simple real Lie groups, and also semi-simple $p$-adic Lie groups.



The major part of Harish-Chandra's work was devoted to explicitly describing the Plancherel measure for semi-simple real Lie groups. The most difficult part of the question is the existence of atoms (i.e. point masses) for the measure; these are irreducible unitary representations of $G$ that embed as subrepresentations of $L^2(G)$, and are known as "discrete series" representations. Harish-Chandra's description of the discrete series for all semi-simple real Lie groups is one of the major triumphs of 20th century representation theory (indeed, 20th century mathematics!).



For $p$-adic groups, Harish-Chandra reduced the problem to the determination of the discrete series, but the question of explicitly describing the discrete series in that case remains open.



One important thing that Harish-Chandra proved was that not all points of $\widehat{G}$ (when $G$ is a real or $p$-adic semisimple Lie group) are in the support of the Plancherel measure; only those which satisfy the technical condition of being "tempered". (So this is another difference from the abelian case, where Haar measure is supported uniformly over all of $\widehat{G}$.) Thus in explicitly describing the Plancherel measure, and hence giving an explicit form of Fourier analysis for any real semi-simple Lie group, he didn't have to classify all unitary representations of $G$.



Indeed, the classification of all such representations (i.e. the explicit description of $\widehat{G}$) remains an open problem for real semi-simple Lie groups (and even more so for $p$-adic semi-simple Lie groups, where even the discrete series are not yet classified).



This should give you some sense of the relationship between Fourier analysis in its representation-theoretic interpretation (i.e. the explicit description of $L^2(G)$ in terms of irreducibles) and the general classification of irreducible unitary representations of $G$. They are related questions, but are certainly not the same, and one can fully understand one without understanding the other.

Sunday, 17 October 2010

nt.number theory - Does a universal Frobenius map exist?

For any prime $p$, one has the Frobenius homomorphism $F_p$ defined on rings of characteristic $p$.



Is there any kind of object, say $U$, with a "universal Frobenius map" $F$ such that for any prime $p$ and any ring $R$ of characteristic $p$ we can view the Frobenius $F_p$ over $R$ as "the" base change of $F$ from $U$ to $R$?
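Whatever the universal object might be, the property that would have to base-change correctly is that $x \mapsto x^p$ is a ring homomorphism in characteristic $p$; its additivity (the "freshman's dream") can be verified computationally, e.g. in $\mathbb{F}_p[t]$ (a sketch of mine, representing polynomials as coefficient lists mod $p$):

```python
from itertools import zip_longest

def poly_add(f, g, p):
    return [(a + b) % p for a, b in zip_longest(f, g, fillvalue=0)]

def poly_mul(f, g, p):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def poly_pow(f, n, p):
    out = [1]
    for _ in range(n):
        out = poly_mul(out, f, p)
    return out

# Frobenius is additive in F_p[t]: (f + g)^p == f^p + g^p.
for p in (2, 3, 5):
    f, g = [1, 2, 0, 1], [3, 1, 4]        # arbitrary test polynomials
    lhs = poly_pow(poly_add(f, g, p), p, p)
    rhs = poly_add(poly_pow(f, p, p), poly_pow(g, p, p), p)
    # strip trailing zero coefficients before comparing
    strip = lambda h: h[:max((i + 1 for i, c in enumerate(h) if c), default=0)]
    assert strip(lhs) == strip(rhs)
print("Frobenius is a ring homomorphism in F_p[t] for p = 2, 3, 5")
```

(Multiplicativity of $x \mapsto x^p$ is automatic in a commutative ring; additivity is exactly where $p$ dividing the binomial coefficients enters.)
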



I have the following picture in mind: In some sense it should be possible to view the category of $\mathbb{Z}$-algebras as a sheaf of categories over $\mathrm{Spec}\,\mathbb{Z}$ such that the fibre over $\mathrm{Spec}\,\mathbb{F}_p$ is just the category of $\mathbb{F}_p$-algebras. A natural transformation $f$ of the identity functor on the category of $\mathbb{Z}$-algebras should restrict to a natural transformation $f_p$ of the identity functor on the category of $\mathbb{F}_p$-algebras. In this naive picture one cannot expect the existence of an $f$ such that $f_p$ is the Frobenius on $\mathbb{F}_p$-algebras for all primes $p$. But is there a way to make this picture work?



Another possible way to answer my question could be the following: Is there a classifying topos of, say, algebras with a Frobenius action? By this I mean the following: Is there a topos $E$ with a fixed ring object $R$ and an algebra $A$ over it and an $R$-linear endomorphism $f$ of $A$ such that for any other topos $E'$ with similar data $R'$, $A'$ there is a unique morphism of topoi $E' \to E$ that pulls back $R$, $A$ to $R'$, $A'$, and such that $f$ is pulled back to the Frobenius $f_p$ of $A'$ in case $R'$ is of prime characteristic.



(Feel free to modify my two pictures to make them work.)

Friday, 15 October 2010

dg.differential geometry - Changing coordinates so that one Riemannian metric matches another, up to second derivatives

Let $g$ and $\tilde{g}$ be two $C^2$-smooth Riemannian metrics defined on neighborhoods $U$ and $\tilde{U}$ of $0$ in $\mathbb{R}^2$, respectively. Suppose furthermore that the scalar curvature at the origin is $K$ under both metrics.



My question: Is there a coordinate transformation taking one metric to the other, such that they agree up to second derivatives at the origin? I.e., if $x: U \to \tilde{U}$ is the transformation, we have



$$g_{ij} = \tilde{g}_{ab}\, x^a_i x^b_j,$$



evaluating everything at $0$; there are similar equations for the first and second derivatives. Clearly this is false if the scalar curvatures aren't equal. I don't care what happens away from the origin.



In the excellent thread When is a Riemannian metric equivalent to the flat metric on $\mathbb{R}^n$?, Greg Kuperberg says:

If I remember correctly, there is a more general result due to somebody, that any two Riemannian manifolds are locally isometric if and only if their curvature tensors are locally the "same".

If "local isometry" means that the metrics are equal on a neighborhood of the origin, then the metrics I have in mind are not locally isometric, since the only information I have is that their curvatures match at one point.

Edit: I'm pretty sure that Deane answered my question, but let me clarify. Let $g_{ij}$ be some "reasonable" metric, e.g. a bump surface metric, and consider a point $p$ where the scalar curvature is $K$. Let $\tilde{g}_{ij}$ be an arbitrary metric on a neighborhood $U$ of the origin in $\mathbb{R}^2$, with scalar curvature $K$ at $0$.



Then the question becomes: does there exist a coordinate change on the bump surface such that the equation $\tilde{g}_{ij}(0) = g_{ab}(p)\, x^a_i x^b_j$ is satisfied, as well as the corresponding equations for the first and second derivatives? That is, there are 18 pieces of pertinent information



$(\ast)\quad \tilde{g}_{11}, \tilde{g}_{12}, \tilde{g}_{22};\; \tilde{g}_{11,1}, \tilde{g}_{12,1}, \tilde{g}_{22,1}, \tilde{g}_{11,2}, \tilde{g}_{12,2}, \tilde{g}_{22,2};\; \tilde{g}_{11,11}, \tilde{g}_{12,11}, \tilde{g}_{22,11}, \tilde{g}_{11,12}, \tilde{g}_{12,12}, \tilde{g}_{22,12}, \tilde{g}_{11,22}, \tilde{g}_{12,22}, \tilde{g}_{22,22}.$



I want to change coordinates on my nice surface such that the metric and its derivatives line up with $(\ast)$.

gn.general topology - Is the realization of a proper map of simplicial spaces proper ?

Let $f: X \rightarrow Y$ be a map of $m$-dimensional simplicial spaces (which means that all simplices above dimension $m$ are degenerate). Recall that $f$ is a natural transformation of functors from $\Delta$ to spaces. I want to call such a map proper if each $f_n : X_n \rightarrow Y_n$ is proper.



So the question is whether $f$ is proper if and only if $|f|$ is proper.



The finite dimensionality is required, as the following example shows:
Take $X$ to be any simplicial space with a finite, positive number of nondegenerate simplices in each dimension. Then the map $f: X \rightarrow \mathrm{pt}$ is proper (in the notation from above), but $|X|$ is not compact and hence $|f|$ is not proper.

Wednesday, 13 October 2010

rt.representation theory - Can you "Wedge" two representations?

Not an answer, rather an attempt to hijack the question...



Some time ago I have also been wondering how to wedge two vector spaces and came up with the following construction:



Let $f: U \to V$ and $g: U \to W$ be two vector space morphisms. We define the vector space $V \wedge_U W$ (of course, this depends not only on $U$, $V$ and $W$, but also on $f$ and $g$, but we silently leave these out of the notation, just as in the case of fibered products) as the quotient of the tensor product $V \otimes W$ by the subspace spanned by all tensors of the form $f(u) \otimes g(u)$ for $u \in U$.



This is functorial, but does anyone know any use for it? Any results about the structure of $V \wedge_U W$ as a representation, if $U$, $V$ and $W$ are representations? What does this $\wedge_U$ operation "look like" in the representation ring (for instance, the usual wedge operations look like the lambda operations $\lambda^1$, $\lambda^2$, ...)?



EDIT: In characteristic $\neq 2$, we have $V \wedge_U W = (V \otimes W) / \bigl(\bigl((f \otimes g) \circ (\mathrm{id} + \tau)\bigr)(U \otimes U)\bigr)$, where $\tau$ is the transposition of the two tensorands. But it is still interesting to find out what exactly is factored out in classical cases, e.g. in the representation theory of $S_n$.
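A quick numerical sketch of the construction (my own illustration; the helper `wedge_dim` is a name I made up, and it estimates the dimension of $V \wedge_U W$ by sampling random $u$): for $f = g = \mathrm{id}$ on $\mathbb{R}^n$, the span of the tensors $u \otimes u$ is the symmetric square, so the quotient recovers the usual exterior square $\Lambda^2 V$ of dimension $n(n-1)/2$:

```python
import numpy as np

def wedge_dim(f, g, n_samples=200, tol=1e-8, seed=0):
    """Estimate dim(V wedge_U W) = dim(V (x) W) - rank of the span of
    {f(u) (x) g(u) : u in U}, by sampling random vectors u."""
    rng = np.random.default_rng(seed)
    dim_u = f.shape[1]
    rows = []
    for _ in range(n_samples):
        u = rng.standard_normal(dim_u)
        rows.append(np.outer(f @ u, g @ u).ravel())  # vec of f(u) g(u)^T
    rank = np.linalg.matrix_rank(np.array(rows), tol=tol)
    return f.shape[0] * g.shape[0] - rank

# Sanity check: f = g = id on R^4, quotient should be Lambda^2(R^4), dim 6.
n = 4
I = np.eye(n)
assert wedge_dim(I, I) == n * (n - 1) // 2
```

The same routine applied to non-identity $f$, $g$ gives a feel for how the dimension of the quotient varies with the maps.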

Tuesday, 12 October 2010

it.information theory - Sanov's Theorem and Chernoff bound

You don't need to restrict yourself to finite $E$ -- you get it for any set $E$ of distributions. In hypothesis testing people often take $E$ to be some convex set of distributions.



[added after comment]:
I think the question is confusingly stated. The probability being calculated is not the probability of a hypothesis per se. Cover and Thomas simplify the notation a bit. Let $X^n = (X_1, \ldots, X_n)$ be drawn i.i.d. according to $Q$ on a finite set $\mathcal{X}$. Let $E$ be any set of distributions. Let $T_{X^n}$ be the empirical distribution of $X_1, \ldots, X_n$. Sanov's Theorem says:



$$\mathbb{P}(X^n : T_{X^n} \in E) \le (n+1)^{|\mathcal{X}|} \exp\Bigl(-n \cdot \min_{P \in E} D(P \| Q)\Bigr)$$



This bound holds for any $n$, but is only optimal asymptotically as $n \to \infty$. That is, the exponent won't be better than this divergence. For smaller $n$ you can get better bounds, especially if you can get rid of that polynomial factor. The looseness in Sanov's theorem as stated is mostly in this polynomial factor -- if $E$ is a singleton, then if you inspect the proof you see that you don't need that factor.



To take an example from coin tossing, suppose $Q$ corresponds to a coin with bias $1/2$ (a fair coin) and let $E = \{P : P(\text{heads}) \le 1/4\}$. This is of the form "sample mean is less than $1/4$" (if I understand your question). Your complaint would seem to be that, as stated, Sanov makes you account for coins with bias much less than $1/4$, when in fact you can "get away" with only considering $P(\text{heads}) = 1/4$.
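As a numerical sketch of this coin example (my own illustration; the function names are mine, and the minimum of $D(P\|Q)$ over $E$ is attained at the boundary point $P(\text{heads}) = 1/4$ by convexity), one can compare the Sanov bound with the exact binomial tail probability:

```python
import math

def kl(p, q):
    """KL divergence (nats) between Bernoulli(p) and Bernoulli(q)."""
    total = 0.0
    for a, b in ((p, q), (1 - p, 1 - q)):
        if a > 0:
            total += a * math.log(a / b)
    return total

def sanov_bound(n, thresh=0.25, q=0.5):
    """Sanov upper bound on P(fraction of heads <= thresh); |X| = 2."""
    return (n + 1) ** 2 * math.exp(-n * kl(thresh, q))

def exact_tail(n, thresh=0.25, q=0.5):
    """Exact P(Binomial(n, q) <= floor(n * thresh))."""
    k_max = math.floor(n * thresh)
    return sum(math.comb(n, k) * q ** k * (1 - q) ** (n - k)
               for k in range(k_max + 1))

# The bound is valid for every n, but the polynomial factor makes it loose
# for small n; the exponent kl(1/4, 1/2) ~ 0.1308 nats is what matters.
for n in (20, 50, 100):
    assert exact_tail(n) <= sanov_bound(n)
```

For $n = 100$ the bound is already below $1$ while still being orders of magnitude above the exact tail, which is exactly the polynomial-factor looseness discussed above.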



So if I understand your question now, the answer is: Sanov's theorem is loose because there is a union being taken over all distributions in $E$, but this looseness is mostly important for small $n$. The bound holds for all $n$, but for smaller $n$ you can get better bounds (e.g. Chernoff) by re-inspecting the proof of Sanov or through other methods. However, as $n \to \infty$ these bounds will not be better than Sanov's in terms of the exponent multiplying $n$.



Hope that answers it!

gn.general topology - Can topologies induce a metric? (revised)

This is a revised version of a question I already posted, but which patently was ill posed. Please give me another try.




For comparison's sake, the axioms of a metric:



Axiom A1: $(\forall x)\; d(x,x) = 0$



Axiom A2: $(\forall x, y)\; d(x,y) = 0 \rightarrow x = y$



Axiom A3: $(\forall x, y)\; d(x,y) = d(y,x)$



Axiom A4: $(\forall x, y, z)\; d(x,y) + d(y,z) \geq d(x,z)$




Let $T = (X, \mathcal{T})$ be a topological space, $B$ a base of $\mathcal{T}$, and $x, y, z \in X$.



Definition D0: $x$ is nearer to $y$ than to $z$ with respect to $B$ (written $N_B xyz$) iff $(\exists b \in B)\; x, y \in b \;\&\; z \notin b$, and $(\nexists b \in B)\; x, z \in b \;\&\; y \notin b$.



Definition D1: $B$ is pre-metric$_1$ iff $(\forall x, y)\; x \neq y \rightarrow N_B xxy$.



Definition D2: $B$ is pre-metric$_2$ iff $(\forall x, y, z)\; \bigl((z \neq x \;\&\; z \neq y) \rightarrow N_B xyz\bigr) \rightarrow x = y$.



Definition D3: $B$ is pre-metric$_3$ iff $(\forall x, y, z)\; N_B zyx \rightarrow (N_B yxz \rightarrow N_B xyz)$.




Definition: $T$ is pre-metric$_i$ iff $(\exists B)\; B$ is pre-metric$_i$ ($i = 1, 2, 3$).



Definition: $B$ is pre-metric iff $B$ is pre-metric$_1$, pre-metric$_2$ and pre-metric$_3$.



Definition: $T$ is pre-metric iff $(\exists B)\; B$ is pre-metric.



Remark: D1 is an analogue of axiom A1, D2 of axiom A2, D3 of axiom A3.



Remark: $T$ is pre-metric$_1$ iff $T$ is $T_1$ [not quite sure].



Remark: If $T$ is induced by a metric, then $T$ is pre-metric.
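To make the definitions concrete, here is a brute-force check of D1-D3 on a small example (my own illustration, not part of the question; the function names are mine): on a finite set, the base of all nonempty subsets -- a base for the discrete topology, which is induced by the discrete metric -- turns out to be pre-metric.

```python
from itertools import combinations, product

def nearer(B, x, y, z):
    """N_B x y z from Definition D0: x is nearer to y than to z w.r.t. base B."""
    witness = any(x in b and y in b and z not in b for b in B)
    no_counter = not any(x in b and z in b and y not in b for b in B)
    return witness and no_counter

def pre_metric1(B, X):
    # D1: for all x != y, N_B x x y
    return all(nearer(B, x, x, y) for x in X for y in X if x != y)

def pre_metric2(B, X):
    # D2: if N_B x y z holds for every third point z, then x = y
    return all(
        x == y
        for x in X for y in X
        if all(nearer(B, x, y, z) for z in X if z != x and z != y)
    )

def pre_metric3(B, X):
    # D3: N_B z y x -> (N_B y x z -> N_B x y z)
    return all(
        (not nearer(B, z, y, x)) or (not nearer(B, y, x, z)) or nearer(B, x, y, z)
        for x, y, z in product(X, repeat=3)
    )

X = {0, 1, 2, 3}
B = [set(c) for r in range(1, 5) for c in combinations(X, r)]  # all nonempty subsets
assert pre_metric1(B, X) and pre_metric2(B, X) and pre_metric3(B, X)
```

Note that the choice of base matters: for the same discrete topology, a smaller base (e.g. only the "balls" of the metric $|x - y|$ on $\{0, 1, 2, 3\}$) can fail pre-metric$_2$, which is why the definitions quantify existentially over bases.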




Question: Can a property pre-metric$_4$ be defined such that $T$ induces a metric iff $T$ is induced by a metric, with



Definition: $B$ is metric iff $B$ is pre-metric and pre-metric$_4$.



Definition: $T$ induces a metric iff $(\exists B)\; B$ is metric.



Remark: Property pre-metric$_4$ should be an analogue of A4 (the triangle inequality).



If provably no such property can be defined, does this shed light on the difference (an asymmetry) between topologies and metric spaces? ("It's the triangle inequality that cannot be captured topologically.")

Monday, 11 October 2010

co.combinatorics - Tournament formats

In light of the comments, I'll post an answer. It's probably not a homework question, but it's not exactly a research-level mathematics question either. The question was:




How many match ups must occur before everyone has sat out at least once or everybody has played everybody once?




Define the two teams $\{1,2,3,4,5,6\}$ and $\{a,b,c,d,e,f\}$. In a volleyball context finding a solution is easy (as Gerhard "Ask Me About System Design" Paseman pointed out) -- mathematically, it is a collection of $K_{4,4}$ subgraphs whose edges cover $K_{6,6}$, such as



{1,2,3,4} x {a,b,c,d}
{1,2,5,6} x {a,b,e,f}
{3,4,5,6} x {c,d,e,f}.


If we interpret the original question in a strict sense (taking the word "or" literally), then two rounds would suffice.



I had a different interpretation of the question (which is slightly less trivial) which could be of interest to a chess tournament organiser, for example (often chess teams consist of 4 players with 2 reserves). That is, finding a decomposition of $K_{6,6}$ into copies of $G \cong 4K_{1,1} + 4K_1$ such that each vertex is an isolated vertex in at least one copy of $G$. Since $G$ has $4$ edges and $K_{6,6}$ has $6^2 = 36$ edges, we deduce that we need $36/4 = 9$ rounds (i.e. components). Here's the solution I found (via a randomised algorithm using GAP):



match-up      byes
1a 3c 4e 6b 2 5 d f
1b 2e 4c 5d 3 6 a f
1c 3e 5a 6f 2 4 b d
1d 2c 3b 6e 4 5 a f
1e 2d 5f 6a 3 4 b c
1f 2a 4b 5e 3 6 c d
2b 3f 4a 6d 1 5 c e
2f 3a 4d 5c 1 6 b e
3d 4f 5b 6c 1 2 a e


I just noticed that the byes are balanced, that is each player has a bye in three rounds.
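The claimed properties of the schedule (each of the $36$ pairings occurs exactly once, and each player has a bye in exactly three of the nine rounds) can be verified mechanically; here is a short Python transcription of the table above:

```python
from itertools import product

# The nine rounds from the table, four matches each (number vs letter).
rounds = [
    ["1a", "3c", "4e", "6b"],
    ["1b", "2e", "4c", "5d"],
    ["1c", "3e", "5a", "6f"],
    ["1d", "2c", "3b", "6e"],
    ["1e", "2d", "5f", "6a"],
    ["1f", "2a", "4b", "5e"],
    ["2b", "3f", "4a", "6d"],
    ["2f", "3a", "4d", "5c"],
    ["3d", "4f", "5b", "6c"],
]

numbers, letters = "123456", "abcdef"
all_pairs = {n + l for n, l in product(numbers, letters)}  # edges of K_{6,6}
played = [m for rnd in rounds for m in rnd]

# Every one of the 36 pairings occurs exactly once: a decomposition of K_{6,6}.
assert sorted(played) == sorted(all_pairs)

# Each of the 12 players sits out exactly 3 of the 9 rounds.
for p in numbers + letters:
    byes = sum(1 for rnd in rounds if all(p not in m for m in rnd))
    assert byes == 3
```

This kind of check is a cheap way to validate the output of the randomised GAP search.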



Neither of the above interpretations uses the graph-theoretic notion of a tournament.

Tuesday, 5 October 2010

Reference for representation of Weyl group using r_alpha + c partial_alpha

Take $W = S_n$ for simplicity, though other Weyl groups work too. Let $r_i$ denote the $i$th simple reflection acting on $\mathbb{A}^n$, and let $\partial_i = \frac{1}{x_i - x_{i+1}}(\mathrm{Id} - r_i)$ denote the corresponding divided difference operator.



It's easy to show that the operators $r_i + c\,\partial_i$ satisfy the Coxeter relations. I know I saw this in a Lascoux article, but there are so many that I'm hoping MathOverflow can tell me which one, so I don't have to pore over the French, or can suggest some other canonical reference, the older the better.
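For what it's worth, the claimed relations are easy to confirm symbolically for $n = 3$ (my own sketch using SymPy, transcribing the definitions above; not a reference):

```python
import sympy as sp

x1, x2, x3, c = sp.symbols('x1 x2 x3 c')

def swap(f, a, b):
    """Simple reflection exchanging the variables a and b."""
    return f.subs({a: b, b: a}, simultaneous=True)

def make_T(a, b):
    """The operator r + c*d, where d = (Id - r)/(a - b) is the divided
    difference attached to the reflection r swapping a and b."""
    def T(f):
        rf = swap(f, a, b)
        return sp.together(rf + c * (f - rf) / (a - b))
    return T

T1, T2 = make_T(x1, x2), make_T(x2, x3)

f = x1**2 * x2 + 3 * x3 - x1 * x3**2      # arbitrary test polynomial
assert sp.simplify(T1(T1(f)) - f) == 0                  # (r_i + c d_i)^2 = Id
assert sp.simplify(T1(T2(T1(f))) - T2(T1(T2(f)))) == 0  # braid relation
```

The quadratic relation follows from $\partial_i^2 = 0$ and $r_i \partial_i + \partial_i r_i = 0$; the braid relation is the nontrivial check.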



Separately, I'd like to know if any author explicitly discusses these in the context of the Steinberg variety, where the c should be the equivariant cohomology parameter corresponding to dilation of the cotangent bundle, I guess.

Monday, 4 October 2010

gr.group theory - Faithful characters of finite groups

Here is a short proof of the weaker version of the statement from Question 1 (giving a polynomial with rational coefficients). Let's think of characters as functions on conjugacy classes. Then $\chi(1) = n = \dim(V)$, and $\chi(g)$ for $g \ne 1$ has absolute value smaller than $n$ (since the representation is faithful and the eigenvalues of $g$ in $V$ are roots of unity). In particular, $\chi(g) \ne n$. Now let $P$ be the interpolation polynomial such that $P(n) = |G|$ and $P(x) = 0$ for any other value $x$ of $\chi$. Then $P(\chi)$ is the regular character, and it's easy to see that $P$ has rational coefficients.



However, there seems to be a counterexample to the statement that $P$ can be chosen to have integer coefficients. Namely, take $G = A_5$, and $\chi$ the 5-dimensional character.
Its values are well known to be $5, 0, 1, -1$, so we can take $P_0 = (x^3 - x)/2$, and any other
polynomial which works will be of the form $P = P_0 Q$, where $Q$ is another polynomial (as $P$ must vanish at $0, 1, -1$). If $P$ has integer coefficients, then $Q/2 = P/(x^3 - x)$ must have integer coefficients, so the values of $Q$ at integers are even. On the other hand, we must have $Q(5) = 1$, a contradiction.
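The arithmetic behind this counterexample can be checked mechanically (my own sketch; the divisibility step uses that $x^3 - x$ is monic, so the cofactor of an integer-coefficient $P$ has integer coefficients):

```python
import sympy as sp

x = sp.symbols('x')
P0 = (x**3 - x) / 2

# P0 sends the character value 5 to |A_5| = 60 and kills the other values.
assert P0.subs(x, 5) == 60
assert all(P0.subs(x, v) == 0 for v in (0, 1, -1))

# Any integer-coefficient P vanishing at 0, 1, -1 factors as
# P = (x^3 - x) * R with R having integer coefficients, so
# P(5) = (5^3 - 5) * R(5) = 120 * R(5) -- which can never equal 60.
assert 5**3 - 5 == 120
assert all(120 * r != 60 for r in range(-10, 11))
```

So no integer-coefficient interpolation polynomial exists for this character, exactly as argued above.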

it.information theory - Using Fisher Information to bound KL divergence

Is it possible to use the Fisher information at $p$ to get a useful upper bound on $KL(q, p)$?



$KL(q, p)$ is known as the Kullback-Leibler divergence and is defined for discrete distributions over $k$ outcomes as follows:



$$KL(q, p) = \sum_{i=1}^{k} q_i \log \frac{q_i}{p_i}$$



The most obvious approach is to use the fact that $\frac{1}{2} x' I x$ is the second-order Taylor expansion of $KL(p + x, p)$, where $I$ is the Fisher information matrix evaluated at $p$, and try to use that as an upper bound (derivation of the expansion from Kullback's book: 1, 2, 3).



If $p(x, t)$ gives the probability of observation $x$ in a discrete distribution parameterized by the parameter vector $t$, the Fisher information matrix is defined as follows:



$$I_{ij}(t) = \sum_x p(x, t) \left(\frac{\partial}{\partial t_i} \log p(x, t)\right) \left(\frac{\partial}{\partial t_j} \log p(x, t)\right)$$



The sum is taken over all possible observations.
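To see the quadratic approximation concretely, here is a small numerical sketch (my own illustration; for a categorical distribution parameterized directly by its probability vector, the quadratic form $\frac{1}{2} x' I x$ works out to $\frac{1}{2}\sum_i x_i^2 / p_i$ for perturbations $x$ summing to zero):

```python
import numpy as np

def kl(q, p):
    """Discrete KL divergence in nats."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

def quad_form(x, p):
    """(1/2) x' I x for a categorical distribution: (1/2) sum_i x_i^2 / p_i."""
    return 0.5 * float(np.sum(np.asarray(x) ** 2 / np.asarray(p)))

p = np.array([0.2, 0.3, 0.5])
x = np.array([0.05, -0.02, -0.03])   # perturbation summing to 0

# As the perturbation shrinks, KL(p + e*x, p) / ((1/2)(e*x)' I (e*x)) -> 1,
# confirming that the quadratic form is the second-order Taylor expansion.
ratios = [kl(p + e * x, p) / quad_form(e * x, p) for e in (1.0, 0.1, 0.01)]
```

Whether the quadratic form is an upper bound (the question asked here) depends on where $q$ sits relative to $p$, matching the region behavior described below.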



Below is a visualization of sets of $k = 3$ multinomial distributions for some random $p$'s (marked as black dots) where this bound holds. From the plots it seems that this bound works for sets of distributions that are "between" $p$ and the "furthest" zero-entropy distribution.





Motivation: Sanov's theorem bounds the probability of some event in terms of the KL divergence of the most likely outcome... but KL divergence is unwieldy, and it would be nicer to have a simpler bound, especially if it can be easily expressed in terms of the parameters of the distribution we are working with.

Sunday, 3 October 2010

computational geometry - Generating random polygons from a given triangulation of points

Given a triangulation $T$ of a planar point set $S$, we would like to randomly generate a polygon (Hamiltonian cycle) $P$.



However, it has been proved that the Hamiltonian circuit problem on maximal planar graphs is NP-complete.



So, I suppose that uniformly random generation of such polygons is hard.



A polygon on $n$ points can be decomposed into $n - 2$ triangles. So the dual graph of a polygon is a tree on $n - 2$ nodes.



That implies that if we could count the induced trees of size $k = n - 2$ (where $n$ is the size of $S$) in the dual graph of a triangulation (which is a 3-connected cubic planar graph), we could count the Hamiltonian cycles of a maximal planar graph (a planar point-set triangulation). So counting the induced trees of size $k$ in a 3-connected cubic planar graph is also NP-complete.




So my question is: is there any approximation algorithm (e.g. Markov chain Monte Carlo) which deals with counting the Hamiltonian cycles of a maximal planar point-set triangulation, or the induced trees of size $k$ in a 3-connected cubic planar graph?
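For calibration (my own illustration, not an answer): exhaustive enumeration is feasible only for tiny instances, e.g. the octahedron $K_{2,2,2}$, the maximal planar graph on $6$ vertices:

```python
from itertools import permutations

# Octahedron = K_{2,2,2}: every vertex is adjacent to all others except
# its antipode.
antipode = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
adj = {v: {u for u in range(6) if u != v and u != antipode[v]}
       for v in range(6)}

def count_hamiltonian_cycles():
    """Count undirected Hamiltonian cycles by brute force: fix the start
    at vertex 0 and divide by 2 for the two traversal directions."""
    count = 0
    for perm in permutations(range(1, 6)):
        cycle = (0,) + perm
        if all(cycle[(i + 1) % 6] in adj[cycle[i]] for i in range(6)):
            count += 1
    return count // 2

print(count_hamiltonian_cycles())   # 16
```

The $5!$ permutations here are trivial, but the factorial growth is exactly why one would want an MCMC-style approximate counter for realistic triangulations.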


fa.functional analysis - Compact Convex sets and Extreme Points

[Just a historical remark.] AFAIK, the fact that the set of all extreme points of a compact convex
subset of $\mathbb{R}^2$ must be closed is due to the legendary American
mathematician G. Baley Price (1905-2006), in "On the
extreme points of convex sets", Duke Math. J., Volume
3, Number 1 (1937), 56-67 (page 62).

Saturday, 2 October 2010

ag.algebraic geometry - Computing stable reduction of finite covers of curves in practice

The general theory is described in various places, but I'll be following (sketchily) the description of this process appearing in section 1 of Bouw and Wewers' "Reduction of covers and Hurwitz spaces".



Background



Let $R$ be a complete DVR and $K$ its fraction field. Say we have a $G$-Galois map of (smooth projective) curves over $K$, $f: Y_K \rightarrow X_K$. Assume also that the order of $G$ is not divisible by the characteristic of the residue field of $R$. After replacing $K$ by a finite extension we may assume the ramification points are $K$-rational, and the smooth stably marked curve $(Y_K, D)$ (where $D$ is the ramification divisor) can be defined over $R$: $(Y_R, D_R)$. There is some variation between different papers as to what "stably marked curve" means, but I think I mean minimal semi-stable, which happens to be stable (am I wrong? correct me if I am). If we quotient $Y_R$ by the action of $G$ we should get a semi-stable curve, which we shall denote $X_R$. This may no longer be a minimal semi-stable model of $X_K$ (but it's definitely a semi-stable model of it).



If I understand the theory correctly, if we assume that $K$ is such that we have an $R$-model of $X_K$ which is semi-stable and such that the branch points specialize to different points, then it must be $X_R$ as constructed above.



Question



In order to understand this better, I wish to have some concrete computations under my belt. Let's try a simple yet interesting example:
Let $R := \mathbb{C}[[t]]$, $X_{\mathbb{C}((t))} := \mathbb{P}^1_{\mathbb{C}((t))}$ (with parameter $x$), and let $f$ and $Y_{\mathbb{C}((t))}$ be given affinely by $y^2 = x(x - t)$. (So $f$ is the projection to $x$, and $Y_{\mathbb{C}((t))}$ is a $\mathbb{P}^1_{\mathbb{C}((t))}$ with parameter $y/x$. In other words, the function field of $X$ is $\mathbb{C}((t))(x)$ and the function field of $Y$ is $\mathrm{Quot}\bigl(\mathbb{C}((t))[x,y]/(y^2 - x(x - t))\bigr)$, which, in turn, is equal to $\mathbb{C}((t))(y/x)$.)



If we let $X_{\mathbb{C}[[t]]} := \mathbb{P}^1_{\mathbb{C}[[t]]}$, then this is clearly a semi-stable curve, and the branch points (in $X_{\mathbb{C}((t))}$), which were $0$ and $t$, specialize to the same point. But I want to guarantee that this would be the quotient of the stably marked curve on top. According to the last paragraph in the background section, I would get this guarantee if the branch points (interpreted in $X_{\mathbb{C}[[t]]}$) were to specialize to different points. So instead choose $X_{\mathbb{C}[[t]]}$ to be the blow-up of $\mathbb{P}^1_{\mathbb{C}[[t]]}$ at $t = x = 0$. If we work affinely, this would be $\mathbb{C}[[t]][x, z]/(xz - t)$. The question now is: how do I find the stable reduction upstairs, and the map between them? How do I finish this example?

mathematics education - How seriously should a graduate student take teaching evaluations?

Positions not associated with teaching (such as industrial or government labs) will very rarely care about your teaching. When I applied for these, I didn't even bother to list my teaching on the cv.



Teaching positions (such as at a community college) will probably care about it a lot more, since they want some proof that you can teach well. But I can't say much since I don't have experience with these.



Research universities are somewhere in between. In general, their main priority is the quality of your research. So for a standard tenure-track faculty position, they will likely focus on selecting an interesting (research-wise) colleague rather than the best teacher.



Of course, research universities need to teach too, and they do feel the pressure to teach well. Also, "research university" is not a uniform designation; different universities will have different priorities which may include more or less emphasis on teaching.



Generally, teaching works like this at a research university. The department (math, in your case) needs to teach some courses. These are service courses to other departments (such as "calculus 1 for biology students") and internal courses (e.g. "graduate group theory"). These need to be taught adequately. If the service courses are not taught well, other departments will complain and your dean will not like it. If the internal courses are not taught well, then your colleagues will have underprepared students to deal with, and they will not like it. So people will want to know that you can teach adequately. Generally, at a research university, I would take "adequately" to mean that you will not leave the students grossly underprepared. Whether they love your teaching or not is less of an issue. So, as long as you have some teaching experience, I would say you are OK.



Now, you don't have to list ALL teaching evaluations on your cv. If the evaluations are great, mention them. If not, you can omit them and just list the course. For example:



TEACHING



Fall 2008: Calculus 1



Spring 2009: Algebra (received 4.5 / 5 evaluation)



Fall 2009: Linear algebra



...



Also, I don't think good evaluations will affect your candidacy negatively. It's true that some people might interpret interest in teaching as lack of interest in research, but I don't think good evaluations are enough for that. If you teach a lot, if you publish papers on teaching, go to teaching conferences, etc. -- in that case, yes, people might be suspicious of whether you are interested in research at all (especially if you don't have an equally active research program). But I don't think that just having good evaluations will do you any harm.

advice - Curriculum vitae: including grants you've applied for, not received (or not yet received).

First, let's assume that you're applying for a position, where your research matters. If so, then the hiring committee wants to judge as best as it can whether you will do good research after you're hired. Also, let's assume that the hiring committee is not familiar with your specific area of research, so it is not able to judge directly the quality of your work by reading it.



Published and accepted papers, as well as grants awarded, are very useful for establishing the strength of your research ability. Other measures such as quality of journals and citation numbers can strengthen your case even further and are in fact quite important if you are applying to a strong department.



Submitted papers, preprints, and grant applications do not help in judging your research ability. But they do matter. In particular, they, along with the items above, show that you are committed not only to continuing your research but also documenting it in a way that your department, your university, and your peers can judge it properly. In short, it provides evidence that you're willing to and are continuing to "play the game". This stuff won't help you beat out someone who is viewed as a stronger mathematician than you, but it might help you beat out someone viewed as on the same level but does not demonstrate the same level of explicit effort.



You should provide all evidence of research activity, whether it represents something you've already accomplished or something you are still striving for.



And you should omit anything that a hiring committee might choose to interpret as a serious distraction to your research efforts.
