Sunday 31 May 2009

ac.commutative algebra - Support sets along a ring homomorphism.

Let $(R,m)$ and $(S,n)$ be commutative noetherian local rings, and let $f: R \rightarrow S$ be a local homomorphism (i.e., $f(m) \subseteq n$) with $S$ flat as an $R$-module. If $M$ is a finitely generated $R$-module, then what is the relation between $\operatorname{Supp}_S(M \otimes_R S)$ and $\operatorname{Supp}_R(M)$? Thanks in advance!

Wednesday 27 May 2009

ds.dynamical systems - Uniqueness in Composition of Polynomials

Your special case is right. More generally:



Let $f(x)=x+b$ with $b\neq 0$.
Let $g(x)=cx^2+dx+e$ with $c>0$, $d\in\mathbb R$ and $e\in\mathbb R$.



In fact, it is clear that every composition of $f$'s and $g$'s is a polynomial of positive degree and with positive leading coefficient (since $c>0$).



If we have a polynomial $P\in\mathbb R[X]$ which is a composition of $f$'s and $g$'s, we can always reconstruct the last step of the composition. Namely, we search for a nonnegative real $u$ such that $P-u=cQ^2+dQ+e$ for some polynomial $Q\in \mathbb R[X]$ of positive degree and with positive leading coefficient. If the last step was a $g$, then $u=0$ must work; if the last step was an $f$, but some $g$ occurred in the composition, then we must have a solution with $u\neq 0$ (in fact, if the last steps were $g$, $f$, $f$, ..., $f$ in this order, with $f$ occurring $k$ times, then $u$ must be $kb\neq 0$); if the composition consists of $f$'s only, then there is no solution (because $P$ must have degree $1$). The important thing is that the $u$, if it exists, is unique. Indeed, if there were two different $u$'s, then the two corresponding $Q$'s - let's call them $Q_1$ and $Q_2$ - would satisfy $(cQ_1^2+dQ_1+e)-(cQ_2^2+dQ_2+e)=w$ for some nonzero real $w$ (here, $w$ is the difference of the two $u$'s). This equation rewrites as $c(Q_1-Q_2)(Q_1+Q_2+1)=(c-d)Q_1-(c-d)Q_2+w$. Thus, (remembering that $c>0$) we conclude that



$\deg(Q_1-Q_2)+\deg(Q_1+Q_2+1)=\deg\bigl(c(Q_1-Q_2)(Q_1+Q_2+1)\bigr)$
$=\deg\bigl((c-d)Q_1-(c-d)Q_2+w\bigr)\leq\max\{\deg Q_1,\deg Q_2\}$.



But at least one of the two degrees $\deg(Q_1-Q_2)$ and $\deg(Q_1+Q_2+1)$ must actually be equal to $\max\{\deg Q_1,\deg Q_2\}$ (because $Q_1$ and $Q_2$ are linear combinations of $Q_1-Q_2$ and $Q_1+Q_2$), and thus the other one must be zero or $-\infty$ (the degree of the zero polynomial). In other words, one of the polynomials $Q_1-Q_2$ and $Q_1+Q_2+1$ is constant. But the polynomial $Q_1+Q_2+1$ cannot be constant (because $Q_1$ and $Q_2$ have positive degree and positive leading coefficients). Hence, the polynomial $Q_1-Q_2$ is constant.



So let $Q_1-Q_2=k$ for some $k\in\mathbb R$. Then, $(cQ_1^2+dQ_1+e)-(cQ_2^2+dQ_2+e)=w$ rewrites as $ck(Q_1+Q_2)+dk=w$ (since $Q_1-Q_2=k$). Hence, the polynomial $ck(Q_1+Q_2)$, being equal to the constant $w-dk$, must be constant, so that $ck=0$ (since the polynomial $Q_1+Q_2$ is not constant, because $Q_1$ and $Q_2$ are two polynomials with positive degree and positive leading coefficients). Since $c>0$, this yields $k=0$, and thus $Q_1-Q_2=k=0$, so that $Q_1=Q_2$, and therefore $0=(cQ_1^2+dQ_1+e)-(cQ_2^2+dQ_2+e)=w$, contradicting $w\neq 0$.
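As a concrete check of this uniqueness claim, here is a small brute-force sketch (my own, with arbitrarily chosen sample parameters $b=1$, $c=2$, $d=3$, $e=5$): distinct composition sequences of $f$'s and $g$'s always produce distinct polynomials, which is exactly what the reconstruction argument guarantees.

```python
# Sketch only: with b=1, c=2, d=3, e=5, verify that distinct composition
# sequences of f(x)=x+b and g(x)=c*x^2+d*x+e give distinct polynomials.
import itertools
import sympy as sp

x = sp.symbols('x')
b, c, d, e = 1, 2, 3, 5            # any b != 0 and c > 0 should do
f_expr = x + b
g_expr = c*x**2 + d*x + e

def compose(seq):
    """Apply the named maps left to right: seq = ('f', 'g') means g(f(x))."""
    expr = x
    for name in seq:
        step = f_expr if name == 'f' else g_expr
        expr = sp.expand(step.subs(x, expr))
    return tuple(sp.Poly(expr, x).all_coeffs())

seen = {}
for k in range(1, 6):
    for seq in itertools.product('fg', repeat=k):
        key = compose(seq)
        assert key not in seen, (seq, seen[key])   # no collision ever occurs
        seen[key] = seq
print(len(seen), "distinct polynomials from", sum(2**k for k in range(1, 6)), "sequences")
```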

What conditions can one place on a finitely generated periodic semigroup that will ensure that the semigroup is finite?

This question was studied extensively more than 20 years ago. Most results are about semigroups satisfying identities. The first such result is by Morse and Hedlund: there exists an infinite 3-generated (resp. infinite 2-generated) semigroup satisfying $x^2=0$ (resp. $x^3=0$). The latest results are in Sapir, M. V., Problems of Burnside type and the finite basis property in varieties of semigroups. (Russian) Izv. Akad. Nauk SSSR Ser. Mat. 51 (1987), no. 2, 319--340, 447; translation in Math. USSR-Izv. 30 (1988), no. 2, 295--314. Here is a statement from that paper: all finitely generated periodic semigroups with identity $u=v$ in $n$ variables are finite if and only if all periodic finitely generated groups with that identity are finite and the Zimin word $Z_{n+1}$ is not an isoterm for $u=v$ or $v=u$. Here Zimin words are defined by $Z_1=x_1, \dots, Z_{n+1}=Z_n x_{n+1} Z_n$. A word $Z$ is an isoterm for $u=v$ if for every substitution $\phi$, if $\phi(u)$ is a subword of $Z$, then $\phi(u)=\phi(v)$ (by a substitution I mean a map assigning words to variables). If a periodic semigroup has some algebraic structure properties, then its finiteness can sometimes be deduced from Shevrin's results (see, for example, Shevrin, L. N., Certain finiteness conditions in the theory of semigroups. (Russian) Izv. Akad. Nauk SSSR Ser. Mat. 29 (1965), 553--566, or Shevrin, L. N., A general theorem on semigroups with certain finiteness conditions. (Russian) Mat. Zametki 15 (1974), 925--935). For example, if the semigroup is a union of groups (i.e., every element $x$ satisfies $x=x^p$ for some $p=p(x)$), then the semigroup is finite if and only if all its subgroups are locally finite. There are many other results, but I do not have time to list them here.

reference request - A book for problems in Functional Analysis

I would like to add to the list my favorite book on Classical Functional Analysis:



Dunford, Nelson; Schwartz, Jacob T. Linear operators. Part I. General theory. With the assistance of William G. Bade and Robert G. Bartle. Reprint of the 1958 original. Wiley Classics Library. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1988. xiv+858 pp. ISBN: 0-471-60848-3



This book contains plenty of exercises, which allow one to check one's understanding, and much more.

Tuesday 26 May 2009

rt.representation theory - HOMFLY and homology; also superalgebras

I can perhaps say something about the second question. I apologise in advance for tooting my own horn, so to speak, as my knowledge derives from a paper I co-wrote years ago with Takashi Kimura and Arkady Vaintrob: http://arxiv.org/abs/q-alg/9602014 .



In the approach to Vassiliev invariants coming from Chern-Simons theory, one constructs a "universal weight system" from some Lie-algebraic data. In the original work of Bar-Natan and of Kontsevich, the data consists of a metric Lie algebra --- i.e., one possessing an ad-invariant inner product --- and a module. However this extends to metric Lie superalgebras and more generally to metric Yang-Baxter Lie algebras, thanks to work of Arkady Vaintrob.



gl(1|1) is a metric Lie superalgebra and in the paper cited above we worked out its universal weight system. It has two parameters (corresponding to the generators of the centre of the universal enveloping algebra) and setting them to specific values one recovers the weight system of the Alexander-Conway polynomial.



We also show en passant that one can obtain the same universal weight system from a solvable four-dimensional metric Lie algebra, so that in fact one does not actually need Lie superalgebras at all in this case.

Monday 25 May 2009

soft question - Origins of Mathematical Symbols/Names

I'm not sure how relevant this is outside of Ireland, but while doing basic mechanics, if you ever see acceleration denoted as $f$, as it is in the "log tables" here, as in $v=u+ft$, the $f$ in this case stands for the Latin for acceleration, festinatio (with festino meaning "I hurry", so festinatio would very roughly and more literally translate as "hurriedness"), which is funny because adcelero is the Latin for "I speed up" which looks a lot more like acceleration.



Similarly, displacement denoted by $s$ as in $s=ut+\frac12 at^2$ is from the Latin for displacement, summoveo (with moveo meaning "I move [something]").



And, of course, $v$ comes from velocitas, the Latin for speed. I can imagine $u$ being used for velocity as well, since the Romans actually pronounced "v" as "u", so the two are pretty much interchangeable.

fourier analysis - What is a rigorous statement for "linear time-invariant systems can be represented as convolutions"?

An abstract result in Banach algebra theory, known as Wendel's Theorem, tells us that the multiplier algebra of $L^1(G)$ is $M(G)$, the measure algebra, for any locally compact group $G$.



So, if $G=\mathbb{R}$, the reals, this says that if $T: L^1(\mathbb{R}) \rightarrow L^1(\mathbb{R})$ is a bounded linear map which commutes with translations, then there is some measure $\mu$ on $\mathbb{R}$ such that $T(f) = \mu * f$ for all $f \in L^1(\mathbb{R})$. (And maybe this special case was known before Wendel?)



I don't know much about distributions, but this general area falls into the theory of "Multipliers" I believe.

Saturday 23 May 2009

Flatness in Algebraic Geometry vs. Fibration in Topology

A surjective flat (i.e., faithfully flat) map with smooth fibres is in fact a smooth morphism, and hence induces a submersion on the underlying manifolds obtained by passing
to complex points. Since the fibres are projective, it is furthermore proper (in the sense of algebraic geometry) [see the note added at the end; this is not a logical deduction from the given condition on the fibres, but nevertheless seems to be a reasonable reinterpretation of that condition], and hence proper (in the sense of topology). A theorem
of Ehresmann states that any proper submersion of smooth manifolds is a fibre bundle.
In particular, it is a fibration in the sense of homotopy theory, and the fibres are
diffeomorphic (thus also homeomorphic, homotopic, ... ).



Note: Your specific question is really about smooth morphisms (these are flat morphisms
with smooth fibres, although there are other definitions too, which are equivalent
under mild hypotheses on the schemes involved, and in particular, are equivalent for maps
of varieties over a field). One point about the notion of flat map is that it allows one
to consider cases in which the fibres over certain points degenerate, but still vary continuously (in some sense). It may well be a special feature of algebraic geometry
(and closely related theories such as complex analytic geometry) that one can have such a reasonable notion, a feature related to the fact that one can
work in a reasonable manner with singular spaces in algebraic geometry, because the singularities are so mild compared to what can occur in (say) differential topology.



[Added: I should add that I took a slight liberty with the question, in that I interpreted the condition that the fibres are projective more strongly than is literally justified, in so far as I replaced it with the condition that the map is proper. As is implicit in Chris Schommer-Pries's comment below, we can find non-proper smooth surjections whose fibres are projective varieties: e.g. if, as in his example, we consider the covering of $\mathbb P^1$ by two copies of $\mathbb A^1$ in the usual way, then the fibres consist of either one or two points (one point for $0$ and the point at $\infty$, two points for all the others), and any finite set of points is certainly a projective variety.



Nevertheless, my interpretation of the question seems to have been helpful; hopefully, with the addition of this remark, it is not too misleading.]

oc.optimization control - scalable binary linear optimization

Has anyone encountered scalable solutions to a binary linear optimization problem of the form:



$$\min \sum_{i=1}^n x_i$$
$$\text{s.t. } x_i \in \{0,1\}, \quad Ax=b,$$



where $x=(x_1,x_2,\dots,x_n)^t$, $b=(b_1, b_2,\dots,b_m)^t$, each $b_i$ is a positive integer, and $A$ is a very sparse matrix with entries $0$ or $1$.
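For what it is worth, here is a minimal sketch (my own, with made-up toy data, and not a claim about scalability) of how this formulation can be handed to an off-the-shelf MILP solver through SciPy's HiGHS interface; $A$ and $b$ are assumed to be given as a sparse 0/1 matrix and an integer vector.

```python
# Sketch with toy data: min sum(x) subject to Ax = b, x binary, via SciPy's MILP wrapper.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
n, m = 200, 50                                              # toy sizes
A = csr_matrix((rng.random((m, n)) < 0.05).astype(float))   # sparse 0/1 constraint matrix
b = A @ rng.integers(0, 2, size=n)                          # plant a feasible binary solution

res = milp(
    c=np.ones(n),                             # objective: sum_i x_i
    constraints=LinearConstraint(A, b, b),    # equality constraints Ax = b
    integrality=np.ones(n),                   # every variable is integer
    bounds=Bounds(0, 1),                      # 0 <= x_i <= 1, hence binary
)
print(res.status, res.fun)                    # status 0 means an optimum was found
```

For genuinely large instances one would hand the same model to a dedicated solver (HiGHS directly, or a commercial MILP solver), but the formulation is identical.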



By scalable I mean solutions that handle large values of $n$ and many constraints (large $m$).



Thank you!

Friday 22 May 2009

soft question - Demystifying complex numbers

At the end of this month I start teaching complex analysis to
2nd year undergraduates, mostly from engineering but some from
science and maths. The main applications for them in future
studies are contour integrals and Laplace transform, but of course
this should be a "real" complex analysis course which I could later
refer to in honours courses. I am now confident (after this discussion, especially after Gauss's complaints given in Keith's comment) that the name "complex" is quite discouraging to average students.



Why do we need to study numbers which do not belong to the real world?



Of course, we all know that this thesis is wrong, and I have in mind some examples where the use of functions of a complex variable simplifies the solution considerably (I give two below). The drawback of all of them is that they already assume some knowledge from the students.



So I would be really happy to learn of elementary examples which may convince students of the usefulness of complex numbers and of functions of a complex variable. As this question runs in community wiki mode, I would be glad to see one example per answer.



Thank you in advance!



Here come the two promised examples. I was reminded of the second one by several answers and comments about relations with trigonometric functions (but also by the notification "The bounty on your question Trigonometry related to Rogers--Ramanujan identities expires within three days"; it seems to be harder than I expect).



Example 1.
Find the Fourier expansion of the (unbounded) periodic function
$$
f(x)=\ln\Bigl|\sin\frac x2\Bigr|.
$$



Solution.
The function $f(x)$ is periodic with period $2\pi$ and has (logarithmic) singularities at the
points $2\pi k$, $k\in\mathbb Z$.



Consider the function on the interval $x\in[\varepsilon,2\pi-\varepsilon]$.
The series
$$
\sum_{n=1}^\infty\frac{z^n}n, \qquad z=e^{ix},
$$
converges for all values of $x$ in this interval.
Since
$$
\Bigl|\sin\frac x2\Bigr|=\sqrt{\frac{1-\cos x}2}
$$
and $\operatorname{Re}\ln w=\ln|w|$, where we choose $w=\frac12(1-z)$,
we deduce that
$$
\operatorname{Re}\Bigl(\ln\frac{1-z}2\Bigr)=\ln\sqrt{\frac{1-\cos x}2}
=\ln\Bigl|\sin\frac x2\Bigr|.
$$
Thus,
$$
\ln\Bigl|\sin\frac x2\Bigr|
=-\ln2-\operatorname{Re}\sum_{n=1}^\infty\frac{z^n}n
=-\ln2-\sum_{n=1}^\infty\frac{\cos nx}n.
$$
As $\varepsilon>0$ can be taken arbitrarily small,
the result remains valid for all $x\ne2\pi k$.
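A quick numerical sanity check of the expansion (my own throwaway script, with an arbitrary truncation point $N$):

```python
# Compare ln|sin(x/2)| with the truncated series -ln 2 - sum_{n<=N} cos(n x)/n.
import numpy as np

xs = np.array([0.5, 1.0, 2.0, 3.0])       # sample points away from the singularities
N = 100000
n = np.arange(1, N + 1)
partial = -np.log(2) - (np.cos(np.outer(xs, n)) / n).sum(axis=1)
exact = np.log(np.abs(np.sin(xs / 2)))
print(np.max(np.abs(partial - exact)))    # small, and shrinking as N grows
```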



Example 2.
Let $p$ be an odd prime number.
For an integer $a$ relatively prime to $p$,
the Legendre symbol $\bigl(\frac ap\bigr)$ is $+1$ or $-1$
depending on whether the congruence
$x^2\equiv a\pmod{p}$ is solvable or not.
One elementary consequence of Fermat's little theorem is
$$
\biggl(\frac ap\biggr)\equiv a^{(p-1)/2}\pmod p.
\qquad\qquad\qquad (*)
$$
Show that
$$
\biggl(\frac2p\biggr)=(-1)^{(p^2-1)/8}.
$$



Solution.
In the ring $\mathbb Z+\mathbb Z i=\mathbb Z[i]$, the binomial formula implies
$$
(1+i)^p\equiv1+i^p\pmod p.
$$
On the other hand,
$$
(1+i)^p
=\bigl(\sqrt2\,e^{\pi i/4}\bigr)^p
=2^{p/2}\biggl(\cos\frac{\pi p}4+i\sin\frac{\pi p}4\biggr)
$$
and
$$
1+i^p
=1+(e^{\pi i/2})^p
=1+\cos\frac{\pi p}2+i\sin\frac{\pi p}2
=1+i\sin\frac{\pi p}2.
$$
Comparing the real parts implies that
$$
2^{p/2}\cos\frac{\pi p}4\equiv1\pmod p,
$$
hence from $\sqrt2\cos(\pi p/4)\in\{\pm1\}$ we conclude that
$$
2^{(p-1)/2}\equiv\sqrt2\cos\frac{\pi p}4\pmod p.
$$
It remains to apply ($*$):
$$
\biggl(\frac2p\biggr)
\equiv2^{(p-1)/2}
\equiv\sqrt2\cos\frac{\pi p}4
=\begin{cases}
1 & \text{if } p\equiv\pm1\pmod 8, \\
-1 & \text{if } p\equiv\pm3\pmod 8,
\end{cases}
$$
which is exactly the required formula.
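For the skeptical reader, the formula is easy to spot-check numerically, computing the left-hand side via ($*$) (a throwaway check of mine):

```python
# Check (2/p) = (-1)^((p^2-1)/8) for odd primes p < 200, using (*) for the left side.
from sympy import primerange

for p in primerange(3, 200):
    power = pow(2, (p - 1) // 2, p)          # 2^((p-1)/2) mod p
    legendre = 1 if power == 1 else -1       # the residue p-1 represents -1
    assert legendre == (-1) ** ((p * p - 1) // 8), p
print("verified for all odd primes below 200")
```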

rt.representation theory - the affine coordinate ring of orbit closures in the ordinary nilpotent cone

For your first question, I take it that you are interested in orbit closures of nilpotent $n \times n$ matrices. I don't know anything about nilpotent orbits for other Lie algebras, but some stuff is in the references below.



As to your first question, it depends on what you want. Do you want an ideal of polynomials vanishing on these orbits? Or do you want the radical of this ideal? The first case can be done without too much work, but the second question is difficult (as far as I know).



We need a way to index these orbits. They are of course indexed by partitions like you say, given by the sizes of the Jordan blocks. One way is to say that the orbit corresponding to the partition $\lambda$ is given by the conditions $\lbrace A \mid \dim\ker A^i = \lambda_1 + \cdots + \lambda_i\rbrace$, in which case an ideal which set-theoretically defines the orbit closure is given by the equations



$\dim \ker A^i \ge \lambda_1 + \cdots + \lambda_i$



To get explicit equations, let $A$ be a generic matrix of variables $x_{i,j}$. The inequality above is the same as saying that



$\operatorname{rank} A^i \le n - (\lambda_1 + \cdots + \lambda_i)$,



which we can write as polynomial equations by requiring that the $(N_i+1) \times (N_i+1)$ minors of $A^i$ are all $0$, where $N_i = n-(\lambda_1 + \cdots + \lambda_i)$.
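As a toy illustration (my own, not taken from the references below), here is a sketch that writes down exactly these determinantal equations for a generic matrix and a given partition:

```python
# Sketch: set-theoretic equations for the closure of the nilpotent orbit of type lam.
from itertools import combinations
from sympy import Matrix, eye, symbols

def orbit_closure_equations(lam):
    n = sum(lam)
    A = Matrix(n, n, list(symbols(f'x0:{n*n}')))   # generic matrix of variables
    eqs, P, partial = [], eye(n), 0
    for part in lam:
        P = P * A                                  # P is now the next power A^i
        partial += part
        N = n - partial                            # on the closure, rank A^i <= N
        for rows in combinations(range(n), N + 1):
            for cols in combinations(range(n), N + 1):
                eqs.append(P[list(rows), list(cols)].det())
    return eqs

# partition (2,1) of n = 3: the 2x2 minors of A together with all entries of A^2
print(len(orbit_closure_equations((2, 1))))        # 9 + 9 = 18 equations
```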



As for finding the radical of this ideal, one possible reference for this stuff is Chapter 8 of Jerzy Weyman's book Cohomology of Vector Bundles and Syzygies. Some of this is based on the material in a paper in which he calculates the radical of the ideal for certain nilpotent orbits (and proves some results about them in general): http://arxiv.org/abs/math/0006232 . What you're interested in should be in the paper, though it's heavy on sheaf cohomology (hopefully you like that). For algebraic properties of these coordinate rings like normality, Gorensteinness, rational singularities, see the book.



EDIT: I should have linked to the paper that David mentions, since it contains more complete results than the one I included. However, the calculation of generators for the radical ideal (but not minimal ones) is done in Section 8.2 of the book, so I still recommend looking at it. In particular, all of the relevant techniques are treated from scratch in the earlier chapters; Chapter 5 is crucial.

Thursday 21 May 2009

pr.probability - Quantum analogue of Wiener process

The Wiener process (say, on $\mathbb{R}$) can be thought of as a scaling limit of a classical, discrete random walk. On the other hand, one can define and study quantum random walks, where the underlying stochastic process is governed by a unitary transform + measurement (for an excellent introduction, see http://arxiv.org/abs/quant-ph/0303081).



My question is - do quantum random walks have a reasonable continuous limit, something which would give a quantum analogue of the Wiener process?

Wednesday 20 May 2009

computational complexity - What techniques exist to show that a problem is not NP-complete?

There are several results along these lines known, all of which use one technique: You show that if the problem is NP-complete, then some very strongly believed complexity hypothesis fails. In the following explanations, an asterisk* means "unless an extremely strongly-believed complexity hypothesis (e.g., P $\neq$ NP) fails".



Factoring is known to be not NP-complete.* One has to be slightly careful with Factoring for a technical reason: in its most natural version ("Given a number, factor it") it is not a "decision problem". A standard decision problem version is: "Given n, L, and U, is there a prime factor of n between L and U?" This is easily seen to be in NP -- a witness is just a factor of n between L and U. On the other hand, this problem is not* NP-complete because it is also in coNP: there is a witness that proves n has no prime factor between L and U, namely a prime factorization of n. So if Factoring were NP-complete then all problems in NP would be in coNP; i.e., NP=coNP. So the asterisk in this paragraph refers to the assumption NP $\neq$ coNP, which is extremely strongly believed.



The Graph-Isomorphism problem is a more interesting example. Telling if two given graphs are isomorphic is obviously in NP (the witness is the isomorphism). But in addition, Graph-Nonisomorphism is "almost" in NP as well. Specifically, it is in the class AM, which is essentially "randomized NP". There is a constant-round randomized "interactive proof" that two graphs are not isomorphic. (Basically, if you put the two graphs behind your back, randomly relabel them, then show them to a prover and the prover can always tell you which is which, then you become convinced the graphs must be non-isomorphic.) From this it follows that Graph-Isomorphism is not NP-complete.* Because if it were, Graph-Nonisomorphism would be coNP-complete, and then all coNP problems would be in AM. And this is known to imply that "the polynomial time hierarchy collapses" (the Boppana-Hastad-Zachos Theorem), which is very widely believed not to happen.



(By the way, I assumed you were mostly interested in problems that aren't NP-complete because they are (presumably) too easy. In the other direction, there are also problems that shouldn't be NP-complete because they are too hard; e.g., $\Sigma_2$-complete problems like, "Given a Disjunctive Normal Form formula $\phi$ and a number $s$, is there an equivalent DNF formula of size at most $s$?")

Monday 18 May 2009

gn.general topology - Confusion over a point in basic category theory

This is not essentially different from what everyone else has said, but for some reason I feel like saying it anyway.



The number of times that a given topological space appears in the category of topological spaces is exactly once. That's the definition of the class of objects of Top.



As for isomorphism classes: there is exactly one object in Top which is isomorphic to the empty set $\varnothing$ with its unique topology: namely $\varnothing$. For any nonempty topological space $X$, the subclass of Top consisting of spaces $X'$ which are homeomorphic to $X$ forms a proper class: i.e., they are more numerous than any set. This is true because the class of all sets is a proper class, and if $(X,\tau)$ is any nonempty topological space and $S$ is any set, then $X \times \{S\}$ is a set which is different from $X$ but in bijection with it: $x \mapsto (x,S)$, and the image of $\tau$ under this bijection gives a topological space which is homeomorphic to $(X,\tau)$ but with a different underlying set.



So, except for one, the homeomorphism classes in Top are "unsetically" huge. Nevertheless, the modern perspective is that this is fine (or, perhaps, harmlessly irrelevant) whereas it is a bad idea to work with the homeomorphism class of the space instead of the space itself. One way to think about this is that a given topological space $X$ is a relatively simple object, but the class of all topological spaces homeomorphic to $X$ is a ridiculously complicated object. (This has not always been the received wisdom: notably Bertrand Russell's definition of the number $2$ was the class of all sets which can be put in bijection with, say, $\{\emptyset, \{\emptyset\}\}$.) From a less philosophical perspective, one wants to speak about sets of maps between two topological spaces $X$ and $Y$, and if $X$ and $Y$ are only well-defined up to a homeomorphism, these sets are themselves not so well defined. As soon as one studies commutative diagrams of morphisms of objects in a category (or more generally, functorial constructions), one recognizes that the concept of "topological space up to a homeomorphism" is a painfully awkward one.



It is also simply against the spirit of category theory to pass to isomorphism classes: many would say that this overemphasizes the somewhat quaintly philosophical notion of equality of objects. Instead of saying "the topological space $X$ is equal to the topological space $Y$", many mathematicians now think it is both simpler and more useful to say "$\Phi: X \rightarrow Y$ is a homeomorphism". For more on this point, I highly recommend Barry Mazur's article When is one thing equal to some other thing?

Sunday 17 May 2009

graph theory - min cut and max cut

If you try to take the complement of the edge set to translate a max cut problem, you end up not minimizing cuts, but cuts minus a correction factor that depends on the size of the subsets in the partition (namely, it is the product of the sizes). This changes the nature of the problem considerably - if you have a dense graph, min cut will tend to separate a small chunk, while the corrected version will not.
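Concretely, for any bipartition $(S,T)$ of the vertex set, the complement graph's cut size equals $|S|\,|T|$ minus the original cut size, which is the correction factor mentioned above; a quick check of the identity on a random graph (my own sketch):

```python
# Verify: cut_complement(S, T) = |S|*|T| - cut_G(S, T) for a random graph and bipartition.
import itertools
import random

n = 7
all_pairs = {frozenset(e) for e in itertools.combinations(range(n), 2)}
edges = {e for e in all_pairs if random.random() < 0.4}
S = set(random.sample(range(n), 3))

def cut(edge_set):
    return sum(1 for e in edge_set if len(e & S) == 1)   # edges with one endpoint in S

complement = all_pairs - edges
assert cut(complement) == len(S) * (n - len(S)) - cut(edges)
print(cut(edges), cut(complement))
```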



Edit: A previous version complained about a difference in input type, hence David's comment.

Wednesday 13 May 2009

triangulated categories - Motivation for Cosuspended Category Axioms

Today I was wondering about the axioms given by Bernhard Keller for Cosuspended Categories.



The axioms of a triangle feel very much like exactness, but not quite. The last axiom about the large commutative diagram is particularly quizzical. While I am OK with understanding these axioms, I was hoping to ask two questions about them.




1) What was the classical motivation for these axioms? Was there a particular example in mind to conform to?




and




2) Is there a modern motivating example for these axioms that differs from the classical?




I understand these things much better when I have specific examples to keep in mind, and since I am learning these in a general context, right now that is lacking. I was hoping you all could fill me in.



Thanks in advance!

Monday 11 May 2009

graph theory - Is there a matrix whose permanent counts 3-colorings?

Actually, I suppose that the answer is technically "yes," since computing the permanent is #P-complete, but that's not very satisfying. So here's what I mean:



Kirchhoff's theorem says that if you take the Laplacian matrix of a graph, and chop off a row and a column, then the determinant of the resulting matrix is equal to the number of spanning trees of the graph. It would be nice to have some analogue of this for other points of the Tutte polynomial, but this is in general too much to ask: the determinant can be computed in polynomial time, but problems such as counting 3-colorings are #P-hard.
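For concreteness, the determinantal side of Kirchhoff's theorem fits in a few lines (my own toy example on the complete graph $K_4$, which has $4^{4-2}=16$ spanning trees by Cayley's formula):

```python
# Kirchhoff's theorem: #spanning trees of K_4 = det of the Laplacian with one row and column removed.
import numpy as np

n = 4
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K_4
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
print(round(np.linalg.det(L[1:, 1:])))   # prints 16
```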



However, if we use the permanent instead of the determinant, we don't run into these complexity-theoretic issues, at least. So, given a graph $G$ on $n$ vertices, can we construct a matrix of size around $n \times n$ whose permanent is the number of 3-colorings of $G$?



(The secret underlying motivation here is a vague personal hope that we can extend the analogy between the Laplacian matrix and the Laplacian operator [no, the naming isn't a total coincidence] to analogies between other matrices and general elliptic operators, and then prove some sort of "index theorem," which could [even more speculatively, here] help us understand why graph isomorphism is hard, prove or construct a counterexample to the reconstruction conjecture, prove the Riemann hypothesis, and achieve world peace forever.)

reference request - Infinite electrical networks and possible connections with LERW

I've been exposed to various problems involving infinite circuits but never seen an extensive treatment on the subject. The main problem I am referring to is




Given a lattice $L$, we turn it into a circuit by placing a unit resistance on each edge. We would like to calculate the effective resistance between two points in the lattice (or an asymptotic value when the distance between the points gets large).




I know of an approach to solving the above introduced by Venezian; it involves superposition of potentials. Another approach I've heard of involves lattice Green functions (I would like to read more about this). My first request is for a survey/article that treats these kinds of problems (for the lattices $\mathbb{Z}^n$, honeycomb, triangular, etc.) and lists the main approaches/results in the field.



My second question (that is hopefully answered by the request above) is the following:



I noticed similarities between the transition probabilities of a loop-erased random walk and the above-mentioned effective resistances in $\mathbb{Z}^2$. Is there an actual relation between the two? (I apologize if this is obvious.)

Sunday 10 May 2009

algorithms - Finding unknown integer-valued polynomials using inequalities

I've come across this interesting inequalities problem recently, which seemed straightforward at first glance but has proven interesting enough to ask about here.



Suppose you are given the degree of an unknown polynomial, and are told that all the coefficients are integers, all within $\min \le a_k \le \max$. Also, you have access to an oracle who will evaluate $p(q)$, the unknown polynomial, compare it to your guess $g_q$, where $q$ is the number of previous guesses you have made, and determine whether your guess was less than, greater than, or exactly equal to the unknown polynomial's value.



What is the optimal method of choosing guesses, given previous results, in order to make a correct guess as soon as possible in the worst case?



If the degree is $d$ and the coefficients are bounded by $[\min, \max]$, then it would seem the best possible method would require slightly less than $\frac{d \log (\max - \min + 1)}{\log 2}$ guesses, by binary search. It's slightly less because with an answer of ">" from the oracle, you exclude both the values less than and those equal to the guess, which could be more than 50% of the possibilities.



If the median of the evaluated values at $q$ of all of the remaining possible polynomials can be found, then guessing that value would be guaranteed to eliminate at least half of the possibilities. But is there any efficient way to find the median of a function of a set of polynomials that are only identified by inequalities?



For the first guess, $q$ is zero and therefore all but the constant term of the polynomial are irrelevant. It makes sense then to take $g_0$ to be $\frac{\min + \max}{2}$. But after that, the best way of finding the median quickly seems elusive.



As an example, take $\min = -100$, $\max = 100$ and $d = 1$. $\frac{-100 + 100}{2} = 0$, so $g_0$ should be $0$. If the oracle returns ">" then we know $0 < a_0 \le 100$, i.e., $1 \le a_0 \le 100$, since we are dealing with integer coefficients.
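For tiny instances, the bookkeeping can be done completely naively; the following brute-force sketch (mine) carries out the elimination step for the example above and confirms the count of surviving candidates:

```python
# Brute-force elimination step: filter candidate coefficient vectors after an oracle answer.
from itertools import product

def value(coeffs, q):
    return sum(c * q**k for k, c in enumerate(coeffs))

def filter_candidates(candidates, q, guess, answer):
    """answer is '<', '>' or '=', describing p(q) relative to the guess."""
    if answer == '<':
        return [c for c in candidates if value(c, q) < guess]
    if answer == '>':
        return [c for c in candidates if value(c, q) > guess]
    return [c for c in candidates if value(c, q) == guess]

# all (a_0, a_1) with integer coefficients in [-100, 100], i.e. d = 1
candidates = list(product(range(-100, 101), repeat=2))
remaining = filter_candidates(candidates, 0, 0, '>')
print(len(candidates), len(remaining))    # 40401 before, 20100 after (1 <= a_0 <= 100)
```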



An ideal method would include an efficient way to find the median and to characterize the set of possibilities given the previous answers from the oracle. But medians can be hard to calculate, so an efficient method to calculate the mean of $p(q)$ over the set of possibilities would be close enough if no efficient method exists for medians.

Saturday 9 May 2009

nt.number theory - Supersingular elliptic curves and their "functorial" structure over F_p^2

(EDIT: I've rewritten my argument in terms of the inverse functor, i.e., base extension, since it is clearer and more natural this way.)



Much of what is below is simply a reorganization of what Robin Chapman wrote.




Theorem: For each prime $p$, the base extension functor from the category $\mathcal{C}_{-p}$ of elliptic curves over $\mathbf{F}_{p^2}$ on which the $p^2$-Frobenius endomorphism acts as $-p$ to the category of supersingular elliptic curves over $\overline{\mathbf{F}}_p$ is an equivalence of categories.




Proof: To show that the functor is an equivalence of categories, it suffices to show that the functor is full, faithful, and essentially surjective. It is faithful (trivially), and full (because homomorphisms between base extensions of elliptic curves in $\mathcal{C}_{-p}$ automatically respect the Frobenius on each side). Essential surjectivity follows from Lemma 3.2.1 in



Baker, González-Jiménez, González, Poonen, "Finiteness theorems for modular curves of genus at least 2", Amer. J. Math. 127 (2005), 1325–1387,



which is proved by constructing a model for one curve and getting models for the others via separable isogenies. $\square$




The same holds for the category $\mathcal{C}_p$ defined analogously, but with Frobenius acting as $+p$.
Here are two approaches to proving essential surjectivity for $\mathcal{C}_p$:



1) If $G:=\operatorname{Gal}(\overline{\mathbf{F}}_p/\mathbf{F}_{p^2})$ and $E$ is an elliptic curve over $\mathbf{F}_{p^2}$, and $\overline{E}$ is its base extension to $\overline{\mathbf{F}}_{p^2}$, then the image of the nontrivial element under $H^1(G,\{\pm 1\}) \to H^1(G,\operatorname{Aut} \overline{E})$ gives the quadratic twist of $E$ (even when $p$ is $2$ or $3$, and even when $j$ is $0$ or $1728$). Applying this to each $E$ with Frobenius $-p$ gives the corresponding elliptic curve with Frobenius $+p$.



2) Use Honda-Tate theory (actually, it goes back to Deuring in this case) to find one supersingular elliptic curve over $\mathbf{F}_{p^2}$ with Frobenius $+p$, and then repeat the proof of Lemma 3.2.1 to construct the models of all other supersingular elliptic curves via separable isogenies.

Friday 8 May 2009

nt.number theory - Consequences of the Riemann hypothesis

I gave a talk on this topic a few months ago, so I assembled a list then which could be appreciated by a general mathematical audience. I'll reproduce it here.



Let's start with three applications of RH for the Riemann zeta-function only.



a) Sharp estimates on the remainder term in the prime number theorem: $\pi(x) = {\text{Li}}(x) + O(\sqrt{x}\log x)$, where ${\text{Li}}(x)$ is the logarithmic integral (the integral from 2 to $x$ of $1/\log t$).



b) Comparing $\pi(x)$ and ${\text{Li}}(x)$. All the numerical data shows $\pi(x) < {\text{Li}}(x)$, and Gauss thought this was always true, but in 1914 Littlewood used the Riemann hypothesis to show the inequality reverses infinitely often. In 1933, Skewes used RH to show the inequality reverses for some $x$ below $10^{10^{10^{34}}}$. In 1955 Skewes unconditionally (no need for RH) showed the inequality reverses for some $x$ below $10^{10^{10^{963}}}$. Maybe this was the first example where something was proved first assuming RH and later proved unconditionally.



c) Gaps between primes. In 1919, Cramér showed RH implies $p_{k+1} - p_k = O(\sqrt{p_k}\log p_k)$, where $p_k$ is the $k$th prime. (A conjecture of Legendre is that there's always a prime between $n^2$ and $(n+1)^2$ -- in fact there should be a lot of them -- and this would imply $p_{k+1} - p_k = O(\sqrt{p_k})$. This is better than Cramér's result, so it lies deeper than a consequence of RH. Cramér also conjectured that the gap is really $O((\log p_k)^2)$.)



Now let's move on to applications involving more zeta and $L$-functions than just the Riemann zeta-function. Note that typically we will need to assume GRH for infinitely many such functions to say anything.



d) Chebyshev's conjecture. In 1853, Chebyshev tabulated the primes
which are $1 \bmod 4$ and $3 \bmod 4$ and noticed there are always at least as many $3 \bmod 4$ primes up to $x$ as $1 \bmod 4$ primes. He conjectured this was always true and also gave an analytic sense in which there are more $3 \bmod 4$ primes:
$$
\lim_{x \rightarrow 1^{-}} \sum_{p \not= 2} (-1)^{(p+1)/2}x^p = \infty.
$$
Here the sum runs over odd primes $p$. In 1917, Hardy-Littlewood and Landau (independently) showed this second conjecture of Chebyshev's is equivalent to
GRH for the $L$-function of the nontrivial character mod 4. (In 1994, Rubinstein and Sarnak used simplicity and linear independence hypotheses on zeros of $L$-functions to say something about Chebyshev's first conjecture, but as the posted question asked only about consequences of RH and GRH, I leave the matter there and move on.)



e) The Goldbach conjecture (1742). The "even" version says all even integers $n \geq 4$ are a sum of 2 primes, while the "odd" version says all odd integers $n \geq 7$ are a sum of 3 primes. For most mathematicians, the Goldbach conjecture is understood to mean the even version, and obviously the even version implies the odd version. There has been progress on the odd version if we assume GRH. In 1923, assuming all Dirichlet $L$-functions are nonzero in a right half-plane ${\text{Re}}(s) \geq 3/4 - \varepsilon$, where $\varepsilon$ is fixed (independent of the $L$-function), Hardy and Littlewood showed the odd Goldbach conjecture is true for all sufficiently large odd $n$. In 1937, Vinogradov proved the same result unconditionally, so he was able to remove GRH as a hypothesis. In 1997, Deshouillers, Effinger, te Riele, and Zinoviev showed the odd Goldbach conjecture is true for all odd $n \geq 7$ assuming GRH. That is, the odd Goldbach conjecture is completely settled if GRH is true.



Update: This is now an obsolete application of GRH since the odd Goldbach Conjecture was unconditionally proved by Harald Helfgott in 2013.



f) Polynomial-time primality tests. By results of Ankeny (1952) and Montgomery (1971), if GRH is true for all Dirichlet $L$-functions then the first nonmember of a proper subgroup of any unit group $({\mathbf Z}/m{\mathbf Z})^\times$ is $O((\log m)^2)$, where the $O$-constant is independent of $m$. In 1985, Bach showed under GRH that you can take the constant to be 2. That is, each proper subgroup of $({\mathbf Z}/m{\mathbf Z})^\times$ does not contain some integer from 1 to $2(\log m)^2$. Put differently, if a subgroup contains all positive integers below $2(\log m)^2$ then the subgroup is the whole unit group mod $m$. (If instead we knew all Dirichlet $L$-functions have no nontrivial zeros on ${\text{Re}}(s) > 1 - \varepsilon$ then the first nonmember of any proper subgroup is $O((\log m)^{1/\varepsilon})$. Set $\varepsilon = 1/2$ to get the previous result I stated using GRH.) In 1976, Gary Miller used such results to show on GRH for all Dirichlet $L$-functions that there is a polynomial-time primality test. (It involved deciding if a subgroup of units is proper or not.) Shortly afterwards Solovay and Strassen described a different test along these lines using Jacobi symbols which only involved subgroups containing $-1$, so their test would "only" need GRH for Dirichlet $L$-functions of even characters in order to be a polynomial-time primality test. (Solovay and Strassen described their test only as a probabilistic test.)
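To make f) concrete, here is a sketch (my own transcription, not Miller's original presentation) of the GRH-conditional deterministic test: declare $n$ prime exactly when no base $a \le 2(\log n)^2$ is a Miller witness for compositeness.

```python
# GRH-conditional deterministic Miller test: try every base up to 2*(ln n)^2.
import math

def strong_probable_prime(n, a):
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False                            # a witnesses that n is composite

def miller_grh(n):
    if n < 2:
        return False
    for q in (2, 3, 5, 7):
        if n % q == 0:
            return n == q
    bound = min(n - 2, int(2 * math.log(n) ** 2))
    return all(strong_probable_prime(n, a) for a in range(2, bound + 1))

print([n for n in range(2, 60) if miller_grh(n)])   # the primes below 60
```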



In 2002 Agrawal, Kayal, and Saxena gave an unconditional polynomial-time primality test. This is a nice example showing how GRH guides mathematicians in the direction of what should be true and then you hope to find a proof of those results by other (unconditional) methods.



g) Euclidean rings of integers. In 1973, Weinberger showed that if GRH is true for Dedekind zeta-functions then any number field with an infinite unit group (so ignoring the rationals and imaginary quadratic fields) is Euclidean if it has class number 1. As a special case, in concrete terms, if $d$ is a positive integer which is not a perfect square then the ring ${\mathbf Z}[\sqrt{d}]$ is a unique factorization domain only if it is Euclidean. There has been progress in the direction of unconditional proofs that class number 1 implies Euclidean by Ram Murty and others, but as a striking special case let's consider ${\mathbf Z}[\sqrt{14}]$. It has class number 1 (which must have been known to Gauss in the early 19th century, in the language of quadratic forms), so it should be Euclidean. This particular real quadratic ring was first proved to be Euclidean only in 2004 (by M. Harper). So this is a ring which was known to have unique factorization for over 100 years before it was proved to be Euclidean.



h) Artin's primitive root conjecture. In 1927, Artin conjectured that any integer
$a$ which is not $\pm 1$ or a perfect square is a generator of $({\mathbf Z}/p{\mathbf Z})^\times$ for infinitely many $p$, in fact for a positive proportion of such $p$.
As a special case, taking $a = 10$, this says for primes $p$ the unit fraction $1/p$ has decimal period $p-1$ for a positive proportion of $p$. (For any prime $p$, the decimal period for $1/p$ is a factor of $p-1$, so this special case is saying the largest possible choice is realized infinitely often in a precise sense; a weaker version of this special case goes back to Gauss.) In 1967, Hooley showed Artin's conjecture follows from GRH. In 1984, R. Murty and Gupta showed unconditionally that the conjecture is true for infinitely many $a$, but the proof couldn't pin down a specific $a$ for which it is true, and in 1986 Heath-Brown showed the conjecture is true for all prime values of $a$ with at most two exceptions (and surely there are no exceptions). No definite $a$ is known for which Artin's conjecture is unconditionally true.



i) First prime in an arithmetic progression. If $\gcd(a,m) = 1$ then there are infinitely many primes $p \equiv a \bmod m$. When does the first one appear, as a function of $m$? In 1934, assuming GRH, Chowla showed the first prime $p \equiv a \bmod m$ is $O(m^2(\log m)^2)$. In 1944, Linnik unconditionally showed the bound is $O(m^L)$ for some universal exponent $L$. The latest unconditional choice for $L$ (Xylouris, 2009) is $L = 5.2$.



j) Gauss' class number problem. Gauss (1801) conjectured in the language of quadratic forms that there are only finitely many imaginary quadratic fields with class number 1. (He actually conjectured more precisely that the 9 known examples are the only ones, but for what I want to say the weaker finiteness statement is simpler.) In 1913, Gronwall showed this is true if the $L$-functions of all imaginary quadratic Dirichlet characters have no zeros in some common strip $1- \varepsilon < {\text{Re}}(s) < 1$. That is weaker than GRH (we only care about $L$-functions of a restricted collection of characters), but it is still an unproved condition. In 1933, Deuring and Mordell showed Gauss' conjecture is true if the ordinary RH (for the Riemann zeta-function) is false, and then in 1934 Heilbronn showed Gauss' conjecture is true if GRH is false for some Dirichlet $L$-function of an imaginary quadratic character. Since Gronwall proved Gauss' conjecture is true when GRH is true for the Riemann zeta-function and the Dirichlet $L$-functions of all imaginary quadratic Dirichlet characters, and Deuring--Mordell--Heilbronn proved Gauss' conjecture is true when GRH is false for at least one of those functions, Gauss' conjecture is true by baby logic. In 1935, Siegel proved Gauss' conjecture is true unconditionally, and in the 1950s and 1960s Baker, Heegner, and Stark gave separate unconditional proofs of Gauss' precise "only 9" conjecture.



k) Missing values of a quadratic form. Lagrange (1772) showed every positive integer is a sum of four squares. However, not every integer is a sum of three squares: $x^2 + y^2 + z^2$ misses all $n \equiv 7 \bmod 8$. Legendre (1798) showed a positive integer is a sum of three squares iff it is not of the form $4^a(8k+7)$. This can be phrased as a local-global problem: $x^2 + y^2 + z^2 = n$ is solvable in integers iff the congruence $x^2 + y^2 + z^2 \equiv n \bmod m$ is solvable for all $m$. More generally, the same local-global phenomenon applies to the three-variable quadratic form $x^2 + y^2 + cz^2$ for all integers $c$ from 2 to 10 except $c = 7$ and $c = 10$. What happens for these two special values? Ramanujan looked at $c = 10$. He found 16 values of $n$ for which there is local solvability (that is, we can solve $x^2 + y^2 + 10z^2 \equiv n \bmod m$ for all $m$) but not global solvability (no integral solution for $x^2 + y^2 + 10z^2 = n$). Two additional values of $n$ were found later, and in 1990 Duke and Schulze-Pillot showed that local solvability implies global solvability except for (ineffectively) finitely many positive integers $n$. In 1997, Ono and Soundararajan showed that, under GRH, the 18 known exceptions are the only ones.
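The three-square criterion quoted at the start of k) is easy to spot-check by brute force (a throwaway script of mine):

```python
# Check Legendre's criterion: n is a sum of three squares iff n is not of the form 4^a(8k+7).
def excluded(n):
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7

def sum_of_three_squares(n):
    r = int(n**0.5)
    return any(x*x + y*y + z*z == n
               for x in range(r + 1)
               for y in range(x, r + 1)
               for z in range(y, r + 1))

assert all(sum_of_three_squares(n) != excluded(n) for n in range(1, 300))
print("Legendre's criterion checked for n < 300")
```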



l) Euler's convenient numbers. Euler called an integer $n \geq 1$ convenient if any odd integer greater than 1 that has a unique representation as $x^2 + ny^2$ in positive integers $x$ and $y$, and which moreover has $(x,ny) = 1$, is a prime number. (These numbers were convenient for Euler to use to prove that certain numbers that were large in his day, like $67579 = 229^2 + 2\cdot 87^2$, are prime.) Euler found 65 convenient numbers below 10000 (the last one being 1848). In 1934, Chowla showed there are finitely many convenient numbers. In 1973, Weinberger showed there is at most one convenient number not in Euler's list, and if the $L$-functions of all quadratic Dirichlet characters satisfy GRH then Euler's list is complete. What he needed from GRH is the lack of any real zeros in the interval $(53/54,1)$.

Thursday 7 May 2009

at.algebraic topology - What is so "spectral" about spectral sequences?

I'm not a specialist, but from browsing the French literature it appears that the interpretation mentioned is correct: Leray was motivated by studying general invariants of spaces and continuous functions, this from his interest in Mechanics and PDEs, see the quote on page 6 of this (Leray was in fact chair of Mechanics at the Académie after WWII, and also chair of ODEs and Functional Equations at Collège de France). For instance in this document (warning: 60MB) are his publication list and some scanned notes from pre-WWII meetings with Bourbaki where Leray is listened to precisely about spectral theory matters. Also, Leray learned from Élie Cartan a lot about Lie groups and representation theory, and knew its relationship to quantum mechanics (i.e. again the idea of invariants).



The first paper of Leray on the topic of spectral sequences in which the word "spectral" really appears is the comprehensive one published in 1950 (here is its Zentralblatt review), so the paper was circulating earlier. Apparently a first note in CRAS by Leray dates from 1945, then in 1947 Koszul generalized the idea, but still without the word "spectral", I think. These were treating cohomology stuff. On the other hand, Serre's CRAS note, which predates his thesis, appeared in 1950, and it treats homology stuff. For cohomology matters, I've seen in early papers anything from "Leray spectral sequence", to "Leray-Koszul", to "Leray-Koszul-Cartan" (since Cartan had a seminar on those things).

Tuesday 5 May 2009

cohomology of moduli spaces

Let me tell you what I know about the cohomology of congruence subgroups of $\mathrm{Sp}_{2g}(\mathbb{Z})$. As far as cohomology with rational coefficients goes, this was determined by Borel. In the limit as $g \to \infty$, it is isomorphic to a polynomial algebra with generators in degrees $4k+2$. See his paper



A. Borel, Stable real cohomology of arithmetic groups, Ann. Sci. École Norm. Sup.
(4) 7 (1974), 235–272 (1975).



I don't know of many integral calculations. I calculated $H_1$ of the level $L$ congruence subgroups for $L$ odd and $g$ at least 3 in my paper "The abelianization of the level L mapping class group", which is available on my webpage (click my name for a link). This was also determined independently by Perron (unpublished) and M. Sato. Sato's paper is "The abelianization of the level 2 mapping class group", and is available on the arXiv. He also works out $H_1$ for $L$ even.



Another paper with information on $H^2$ is my paper "The Picard group of the moduli space of curves with level structures", which is also available on my webpage.



As a remark, both of my papers mentioned above are really papers about the mapping class group and the moduli space of curves, but I ended up proving results about PPAVs and $\mathrm{Sp}_{2g}(\mathbb{Z})$ along the way.

Monday 4 May 2009

nt.number theory - CM of elliptic curves

Allow me to say something which is not so much an answer to this question as to a (very natural) question that I sense is coming in the future.



There are two possible pitfalls in the definition of "has complex multiplication" for abelian varieties over an arbitrary ground field $F$.



1) Let's first talk only about complex abelian varieties. There is a serious discrepancy between the terminology "has complex multiplication" as used in the classical literature (up to and including, say, some of Shimura's papers in the 1960s) and the way it is almost invariably used today.



Classically, an abelian variety over the complex numbers was said to "have complex multiplications" whenever its endomorphism ring was strictly larger than $\mathbb{Z}$. This is never the generic case, but starting in dimension $2$ it allows abelian surfaces with multiplication by an order in a real quadratic field (parameterized by Hilbert modular surfaces), by an order in an indefinite rational quaternion algebra (parameterized by Shimura curves), and so forth.



Nowadays one means something much more restrictive by "complex multiplication": one wants the endomorphism ring to be an order in a $\mathbb{Q}$-algebra whose dimension is large compared to the dimension, say $g$, of $A$. In the case of an abelian variety without nontrivial abelian subvarieties ("simple"), CM means precisely that the endomorphism ring is an order in a number field of degree $2g$: it then follows from this (not so obviously) that the number field is totally complex and has an index $2$ totally real subfield. For nonsimple abelian varieties, the generally agreed upon definition is that $\operatorname{End}(A)$ should contain an order in a commutative semisimple $\mathbb{Q}$-algebra of dimension $2g$. Such things do not vary in moduli, even in higher dimensions.



2) If you are interested in abelian varieties over an arbitrary ground field $F$ (let me concentrate on the case of characteristic 0), then you need to distinguish between the ring of endomorphisms which are rationally defined over $F$ and the ring of endomorphisms which are defined over an algebraic closure of $F$ (hence, it turns out, over any algebraically closed field containing $F$, i.e., one cannot acquire endomorphisms by passing from $\overline{\mathbb{Q}}$ to $\mathbb{C}$). If one does not specify, "having CM" means having CM over the algebraic closure.



As in the responses above, it is very often the case that you have to extend the ground field slightly in order to get all the endomorphisms rational over the ground field. E.g. if $E_{/\mathbb{C}}$ is any elliptic curve, it can be minimally defined over $\mathbb{Q}(j(E))$. If $E$ has complex multiplication (over $\mathbb{C}$), then $j(E)$ is a real [not necessarily totally real] algebraic number, from which it follows that $\operatorname{End}_{\mathbb{Q}(j(E))}(E) = \mathbb{Z}$.



In particular, the $\ell$-adic Galois representation of a "CM abelian variety" [using standard conventions as in 1) and 2) above] need not have abelian image, quite. It has "almost abelian image", meaning that once you make a finite extension of the ground field to make all the endomorphisms rational, then the image will be abelian. (Both M. Emerton's and my arguments in the one-dimensional case extend easily to this case, although I think his is more insightful.)



What I have said carries over to fields of positive characteristic, but the general picture of endomorphism algebras looks different, especially over finite fields: every abelian variety over a finite field has complex multiplication in the above sense.

Sunday 3 May 2009

mg.metric geometry - Distance of a barycentric coordinate from a triangle vertex

Following from Benoît Kloeckner's comment above,



Place the point $A=(0,0)$ at the origin, $B=(c,0)$ on the $x$-axis so that $|AB|=c$, and $C=(x,y)$,
where we now want to satisfy $|AC|=b$ and $|BC|=a$.



Simple application of the Pythagorean theorem leads to



$x^2+y^2 = b^2$
and $(x-c)^2+y^2 = a^2$



as the two constraints to be applied.



Expanding and subtracting the two equations:



$x^2-2cx+c^2+y^2=a^2$ and
$x^2 + y^2 =b^2$



$2cx-c^2=b^2-a^2$



$2cx = (b^2-a^2+c^2)$



$x = \frac{b^2-a^2+c^2}{2c}$



Now you can define $y$ in terms of $x$: $y = \sqrt{b^2 - x^2}$.



Simply scale the points $\vec{A}=(0,0)$, $\vec{B}=(c,0)$, and $\vec{C}=(x,y)$ by their respective barycentric coordinates $(u,v,w)$ to get $D=(x_D,y_D)$ as a function of $a,b,c,u,v,w$, then apply the Pythagorean theorem again to get $d = |\vec{D}| = \sqrt{x_D^2 + y_D^2}$. This last step shouldn't need to be spelled out for you, but $\vec{D}=u\vec{A}+v\vec{B}+w\vec{C}$.
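Putting the recipe together, here is a direct numerical transcription (mine), assuming normalized barycentric coordinates $u+v+w=1$:

```python
# Distance from vertex A to the point with barycentric coordinates (u, v, w), u + v + w = 1.
import math

def distance_from_A(a, b, c, u, v, w):
    A = (0.0, 0.0)                               # A at the origin
    B = (c, 0.0)                                 # B on the x-axis, |AB| = c
    xC = (b**2 - a**2 + c**2) / (2*c)            # the formula derived above
    C = (xC, math.sqrt(b**2 - xC**2))            # y-coordinate from |AC| = b
    xD = u*A[0] + v*B[0] + w*C[0]
    yD = u*A[1] + v*B[1] + w*C[1]
    return math.hypot(xD, yD)

# e.g. the centroid of the 3-4-5 right triangle (a = |BC|, b = |AC|, c = |AB|)
print(distance_from_A(3, 4, 5, 1/3, 1/3, 1/3))   # about 2.848
```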

simplicial stuff - Homotopy colimits/limits using model categories

A homotopy (limit or) colimit of a diagram $D$ of topological spaces can be explicitly described as a geometric realization of a simplicial replacement for $D$.



However, a homotopy colimit can also be described as a derived functor of colimit. A model category structure can be placed on the category $\mathrm{Top}^I$, where $I$ is a small index category, where weak equivalences and fibrations are objectwise, so that $\mathrm{colim} : \mathrm{Top}^I \leftrightarrow \mathrm{Top} : c$ form a Quillen pair, where $c$ is the diagonal functor taking an object $A$ to the constant diagram at $A$. Then the homotopy colimit can be described as a derived functor of $\mathrm{colim}$: take a cofibrant replacement $QD$ of a diagram $D$, then compute $\mathrm{colim}(QD) = \mathrm{hocolim}(D)$. It turns out that any two cofibrant replacements will give weakly equivalent homotopy colimits. As such, you would suspect that this choice is not really important.



This leaves two questions: firstly, is it necessary in most cases to construct homotopy colimits explicitly, or are the properties of the homotopy colimit as a homotopical functor enough? Secondly, do any problems arise from the fact that the homotopy colimit is well-defined only up to weak equivalence (through the derived functor angle)? Do cases ever arise where a more canonical definition is required?



Context: I am reading through Goodwillie's "Calculus II: Analytic Functors." There the explicit simplicial construction is used, and in particular it is needed that certain maps from holim(D) are fibrations (Definition 1.1a, for example). However, being a fibration is not invariant under weak equivalence. Does this reflect that properties of this particular choice of holim are needed, or that the paper itself is too rigid? Can these arguments be made with a non-canonical choice of holim?



I apologize ahead of time for the vague question: I've been trying to read up in this subject area for a few months now, and this has been a stumbling block.

nt.number theory - Power series $f$ such that $f(\zeta_{p^n}-1)=1/p^n$ for almost all $n \geq 0$

As Brian pointed out, the principal parts problem always has a solution on the open unit disk, and Lazard's 1962 article "Les zéros des fonctions analytiques d’une variable sur un corps valué complet" gives a nice proof which is also rather explicit.



Your problem has additional symmetries so one can be a bit more explicit, as follows.



Let $H$ be the set of power series holomorphic on the open unit disk, let $\varphi: H \to H$ be the map defined by $\varphi(f)(X)=f((1+X)^p-1)$ and let $d$ be defined by $d(f)(X)=(1+X)\,df/dX$. Note that $d\varphi=p\varphi d$. You can check that if you have a function $f$ such that $\varphi(f)-pf = (1+X/2)\log(1+X)/X$, then $f(0) \neq 0$ and $f(\zeta_{p^n}-1) = p^{-n} f(0)$, so this answers your problem.



Therefore you need to be able to solve an equation of the form $\varphi(f)-pf = g$. Applying $d$ to both sides gives $\varphi(df)-df=dg/p$. Now you can solve an equation of the form $\varphi(a)-a=b$ if $b \in X \cdot H$ by writing $a = -\sum_{n \geq 0} \varphi^n(b)$, which should converge in $H$ for its Fréchet topology. We have $d((1+X/2)\log(1+X)/X) \in X \cdot H$ (this is what the $(1+X/2)$ was put in for), and once you know $df$, you get $f$ by integrating and adjusting the constant.

Saturday 2 May 2009

ac.commutative algebra - product of all F_p, p prime

The answer is Yes, and this is the ultraproduct construction. Let U be any nonprincipal ultrafilter on the set of primes. This is simply the dual filter to a maximal ideal on the set of primes, containing all finite sets of primes. (In other words, U contains the Fréchet filter.)



The quotient R/U is a field of characteristic 0.



The ultraproduct construction is completely general, and has nothing to do with rings or fields. If $M_i$ for $i \in I$ is any collection of first order structures, and $U$ is an ultrafilter on the subsets of $I$, then we may form the ultraproduct $\prod M_i/U$, which is the set of equivalence classes under the relation $f \equiv g$ iff $\{ i \in I \mid f(i) = g(i) \} \in U$. Similarly, the structure is imposed on the ultraproduct coordinate-wise, and this is well-defined. The most important theorem is Łoś's theorem, which says that $\prod M_i/U$ satisfies a first order statement $\phi([f])$ if and only if $\{ i \in I \mid M_i \text{ satisfies } \phi(f(i)) \} \in U$.



In your case, since every F_p is a field, the ultraproduct is also a field. And since the set of primes p bigger than any fixed n is in U, the ultraproduct will have 1+...+1 (n times) not equal to 0, for any fixed n > 0. That is, the ultraproduct will have characteristic 0.




Edit: I confess I initially missed the part about containing the algebraic numbers, and so there is more to be done, as Kevin points out. What Łoś's theorem gives you is that something will be true in R/U just in case the set of p for which F_p has the property is in the ultrafilter U.



What you need to know is that for any finite list of equations, there is an infinite set of primes p for which the equations have a solution in F_p. This property is equivalent to asking whether every finite list of equations over Z is true in at least one F_p, since one can always add one more equation so as to exclude any particular F_p. Is this true? (I wasn't sure.)



But according to what Kevin says in the comments below, it is true, and this is precisely what you need for the construction to go through. You can form a filter containing those sets, which would form a descending sequence of subsets of primes, and then extend this to an ultrafilter. In this case, any particular equation would have a solution in F_p for a set of p in U, and so the ultraproduct R/U would have a solution. In this case, the ultraproduct will contain the algebraic numbers.

at.algebraic topology - categorical homotopy colimits

I'll work in CW-complexes.



Consider the diagram $S^1_+ \leftarrow *_+ \rightarrow S^1_+$, where $X_+$ denotes $X$ with a disjoint basepoint added. Homotopy classes of maps $*_+ \to Y$ are path components of $Y$, and homotopy classes of maps $S^1_+ \to Y$ are a choice of path component together with a conjugacy class of elements of $\pi_1$ of that path component. Therefore, if this diagram has a pushout in the homotopy category of based spaces, it (co?)represents the functor sending $Y$ to a choice of path component and a pair of conjugacy classes in the same path component.



Suppose we had a representing object $X$. Considering $[X,S^0]$, we find $X$ has only two path components. So $X = X_0 \coprod X_1$ where $X_0$ is the basepoint component. Each component can, up to homotopy equivalence, be constructed as a CW-complex with one zero-cell, some family of 1-cells, and some family of 2-cells.



Consider $[X, K(\pi,1)_+]$ for $\pi$ a group. A cell description of $X$ gives rise to a description of this functor: an element of $[X, K(\pi,1)_+]$ is either trivial (if $X_1$ maps to the basepoint) or is a conjugacy class of homomorphism $\pi_1(X_1) \to \pi$, because the maps on $X_1$ are not restricted to basepoint-preserving homotopies.



So it suffices to show that there are no groups $G$ such that conjugacy classes of homomorphisms $G \to \pi$ are in natural bijective correspondence with pairs of conjugacy classes of elements of $\pi$.



EDIT: Fixed up the following argument.



Given such a group $G$, the identity map determines a pair of conjugacy classes $[x], [y]$ in $G$, and choosing any representatives $x$ and $y$ determines a group homomorphism $F_2 \to G$. Conversely, the conjugacy classes of the generators of $F_2$ determine a map $G \to F_2$ splitting this map up to conjugacy. This would imply that the natural transformation sending simultaneous conjugacy classes of pairs to pairs of conjugacy classes is an inclusion, which is false: e.g. in the symmetric group on 3 letters there are 9 pairs of conjugacy classes and (assuming I counted correctly) 11 simultaneous conjugacy classes of pairs.
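The final count is easy to confirm by machine (a throwaway script of mine, with permutations of $\{0,1,2\}$ stored as tuples):

```python
# Count simultaneous conjugacy classes of pairs in S_3: there are 11 of them,
# versus 9 = 3*3 pairs of conjugacy classes, so the comparison map is not injective.
from itertools import permutations, product

def compose(p, q):                   # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
pairs = set(product(G, G))
orbits = 0
while pairs:
    x, y = next(iter(pairs))
    orbit = {(compose(compose(g, x), inverse(g)),
              compose(compose(g, y), inverse(g))) for g in G}
    pairs -= orbit
    orbits += 1
print(orbits)                        # prints 11
```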

Friday 1 May 2009

fa.functional analysis - convergence rate in Wiener's approximation theorem

Wiener has the following fantastic results about approximations using translation families:



Given a function $h: \mathbb{R} \to \mathbb{R}$, the set $\{\sum a_i h(\cdot - x_i): a_i, x_i \in \mathbb{R}\}$ is



i) dense in $L^1(\mathbb{R})$ if and only if the Fourier transform of $h$ has no zeros;
ii) dense in $L^2(\mathbb{R})$ if and only if the set of zeros of the Fourier transform of $h$ has zero Lebesgue measure.



After this, a further step is naturally about the speed of convergence, i.e., how fast the error vanishes with respect to the number of translates. Now let us focus on the $L^1$ case and take $h = \varphi$ to be the standard normal density, whose Fourier transform does not vanish on the real line. Given a function $f \in L^1(\mathbb{R})$, the error of the optimal $m$-term approximation is



$\mathop{\inf}\limits_{a_i, x_i \in \mathbb{R}} \left\|f - \sum_{i=1}^m a_i h(\cdot - x_i)\right\|_1$.



My question is whether there is any way to lower bound this quantity. Of course, there won't be any meaningful conclusion without assumptions on $f$ (e.g., if $f$ is a finite mixture of translates of $\varphi$, then it is trivial). So let us consider $f = g * \varphi$ for some smooth $g$ (e.g., $g = \varphi$), where $*$ denotes convolution; that is, $f$ is an "infinite" mixture of translates of $\varphi$. Any idea would be greatly appreciated. If things are easier using $L^2$, $L^{\infty}$ or some other distance, that would also be helpful.



For the upper bound, there has been a lot of work; the speed of convergence can be $O(m^{-2})$ or even exponential in $m$. For the lower bound, most works consider a minimax setup: for $f$ belonging to a given class of functions, the worst-case convergence rate can never be faster than $O(m^{-2})$. But for a given $f$, there seems to be no known result.