Tuesday, 31 August 2010

nt.number theory - additive structure in a small multiplicative group of a finite field?

Probably not, assuming $p$ is fixed and $n$ is large enough.
Have a look at section 5 in my paper A49 in: http://www.integers-ejcnt.org/vol7.html (for some reason the journal doesn't allow direct links to papers, even though it is free access).



In the notation there, let $R(x)=x^{n-1}+1$. Note that, as a
consequence of your hypothesis 3), $\beta^{n-1}+1 \in \langle\beta\rangle$, which
implies that the order of $\beta^{n-1}+1$ is at most that of $\beta$.
This will give an upper bound for $N$ in terms of $n$, using your
hypothesis 2). I haven't done the
calculation, so I don't know if this upper bound contradicts your hypothesis 1). Note
that the bounds that I get are probably much weaker than the truth; see, e.g.,
the conjecture of Poonen discussed in the paper.

Are all Hamiltonian planar graphs 4-colorable? Does this imply all planar graphs are 4-colorable?

Planar graphs with a Hamiltonian loop connecting all faces do not necessarily have a Hamiltonian cycle on their edges, which would make a 3-edge coloring, and thus a 4-face coloring, easy. However, they have a lot of great structure. If you split the graph along the face Hamiltonian, using it like an equator, the edges in the northern or southern hemisphere form trees. Thus the problem of coloring Hamiltonian planar graphs with n faces reduces to finding a mutual edge 3-coloring of any two n-trees.



Regarding the Hamiltonian graphs as compositions of trees makes them easily counted, and gives a relation between graphs by relating their trees.



Between trees of the same size: Any tree of size n may be converted into another of size n by iterated diamond switches (DS) of internal branches. I have proven that any n-tree can be converted into any other n-tree in fewer than 2n DS. Nicely, if two trees are within n DS of each other they have a mutual coloring.



Between trees of different sizes: Larger and smaller Hamiltonian graphs can be made by adding or subtracting branches that cross the equator, preserving the Hamiltonian loop. So inductive proofs are encouraged.



Also, any tree of n roots has exactly 2^(n-3) colorings, which can be arranged on a hypercube, so you could prove all Hamiltonian graphs with n faces are colorable by proving that any two color-hypercubes of n-trees intersect.



Hamiltonian planar graphs are a good restricted, simpler class of graphs to try to prove colorable, and they are interesting in their own right. They are even more important if their colorability implies the colorability of all planar graphs. Is that so?

Sunday, 29 August 2010

mg.metric geometry - What is the max number of points in R^3, interconnected by generic curves?

Matt's answer is correct, but at an even simpler level: If you take two generic line segments in a compact subset of R^2, they'll intersect with positive probability. If you take two generic line segments in R^3, they'll intersect with probability 0. This isn't a proof by any means, but it's the simplest conceptual reason I know of. If instead of edges we wanted surfaces, we'd have to go up to dimension 5.
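
To spell out the dimension count behind this heuristic (a back-of-the-envelope remark I am adding, not part of the original answer): two generic submanifolds of dimensions $k$ and $l$ in $\mathbb{R}^n$ meet in expected dimension
$$ k + l - n, $$
so they generically miss each other exactly when this number is negative. For curves, $1 + 1 - n < 0$ forces $n \geq 3$; for surfaces, $2 + 2 - n < 0$ forces $n \geq 5$, which is the dimension quoted above.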



::sigh:: Okay, here's a constructive example of an infinite set of points such that no straight line segments between any two of them intersect. Take any two real numbers, say 2 and $\pi$, that are algebraically independent. Then I claim that the set of points $(n, 2^n, \pi^n)$ is such a set.



Why? Suppose the line segment between the points with $x = a$ and $x = b$ intersected the line segment between $x = c$ and $x = d$. Parameterize the line segments so that the equations:



$(a, 2^a, \pi^a) + \lambda (b-a, 2^b - 2^a, \pi^b - \pi^a)$



$(c, 2^c, \pi^c) + \gamma (d-c, 2^d - 2^c, \pi^d - \pi^c)$



give us the same point for some choice of the variables.



Looking at the first two components tells us that $\lambda, \gamma$ are both rational. But then the third component gives us a polynomial with rational coefficients that has a root equal to $\pi$, which is impossible since $\pi$ is transcendental. So none of these line segments can intersect.
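
As a purely numerical sanity check of this construction (my own sketch, not part of the original argument), one can brute-force the minimum distance between the segments joining the first few such points; it stays well away from zero for every pair of segments with distinct endpoints:

import numpy as np
from itertools import combinations

def point(n):
    # the points (n, 2^n, pi^n) from the construction above
    return np.array([float(n), 2.0 ** n, np.pi ** n])

def min_segment_distance(p1, p2, p3, p4, steps=200):
    # brute-force the minimum distance between segments [p1, p2] and [p3, p4]
    # over a grid of the two parameters lambda, gamma in [0, 1]
    t = np.linspace(0.0, 1.0, steps)
    a = p1[None, :] + t[:, None] * (p2 - p1)
    b = p3[None, :] + t[:, None] * (p4 - p3)
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min()

pts = {n: point(n) for n in range(1, 6)}
segments = list(combinations(pts, 2))
for (a1, a2), (b1, b2) in combinations(segments, 2):
    if {a1, a2} & {b1, b2}:
        continue                      # segments sharing an endpoint meet there
    d = min_segment_distance(pts[a1], pts[a2], pts[b1], pts[b2])
    print((a1, a2), (b1, b2), round(d, 6))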

Saturday, 28 August 2010

type theory - What is a semigroup or, what do I do with that associativity proof?

Mathematically, I know what a semigroup is: It is a set S along with an associative binary operation $* : S \times S \rightarrow S$. So far, so good.



From a computational perspective, one can represent a semigroup as the tuple $\left< S, * \right>$, or my preference, as a record { S: type; $* : S \times S \rightarrow S$ } which is dependently typed. So, for example, {S = $\mathbb{N}$; $* = +$} is a semigroup [assuming I have the naturals with addition already built].



Well, actually, it's not -- who says that I defined $+$ properly? Maybe I made a mistake and the $+$ that I used for $\mathbb{N}$ isn't associative. So I sure shouldn't be able to build {S = $\mathbb{N}$; $* = +$} and have it have 'type' semigroup.



So is the proof of associativity part of the type 'semigroup' (as it is in Coq, for example), or is it part of the input to the 'constructor' for the semigroup type? [The constructor is allowed to then forget the proof, at least for computational purposes.]
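
To make the two designs concrete, here is a rough Python sketch of my own (Python has no dependent types, so a runtime check over a finite sample stands in for the associativity proof, and all names are illustrative). In the first style the evidence is a field of the record; in the second it is demanded by the constructor, which may then forget it:

from dataclasses import dataclass
from typing import Callable, Generic, TypeVar
from itertools import product

T = TypeVar("T")

@dataclass(frozen=True)
class Semigroup(Generic[T]):
    # Style 1: the evidence of associativity is part of the record itself.
    op: Callable[[T, T], T]
    assoc_witness: Callable[[T, T, T], bool]   # stand-in for a proof object

def make_semigroup(op: Callable[[T, T], T], sample) -> "Semigroup[T]":
    # Style 2: the constructor demands the evidence (here: a finite check),
    # and could then discard it for computational purposes.
    for a, b, c in product(sample, repeat=3):
        assert op(op(a, b), c) == op(a, op(b, c)), "op is not associative"
    return Semigroup(op=op,
                     assoc_witness=lambda a, b, c:
                         op(op(a, b), c) == op(a, op(b, c)))

nat_plus = make_semigroup(lambda x, y: x + y, sample=range(5))
print(nat_plus.op(2, 3))   # 5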



One wrinkle here is that while the proof of associativity doesn't seem to say much, for the term algebra over the semigroup, it does induce two 'associator' functions which can perform the rewrite of $\lceil a * (b * c) \rceil \leftrightarrow \lceil (a * b) * c\rceil$ (where I use the $\lceil \cdot \rceil$ brackets to denote working over terms since semantically $a*(b*c)$ and $(a*b)*c$ denote the same thing so there is nothing to do). That associator is really quite useful [and much more so when you go to things like rings and fields, where the induced rewrites on the term algebra give rise to what can rightly be called a 'simplifier']. So the proof is quite useful in that sense.



The question boils down to: where does the 'proof' that the binary operation is associative really belong in the theory of a single semigroup, where by 'theory' here I mean both the semantic theory and the induced equational theory of the term algebra? Once I have established that $*$ is associative, can I really throw away the proof as it is not going to be used again?



(I would really want to also ask why it is that in classical mathematics proofs are crucial, yet somehow not so important that they are included in the definitions of standard objects, which omit them. Asking that would likely be ruled off-limits for MO as being too 'philosophical'...)

Friday, 27 August 2010

ag.algebraic geometry - Riemann-Roch and Grothendieck duality: general case of Fulton's example 18.3.19

Fulton's "Intersection theory" book contains the following fact (example 18.3.19):



Let $X$ be a Cohen-Macaulay scheme over a field. Assume $X$ can be imbedded in a smooth scheme (so it has a dualizing sheaf $\omega_X$) and is of dimension $n$. If $E$ is a locally free coherent sheaf on $X$, then:
$$\tau_k(E) = (-1)^{n-k}\tau_k(E^{\vee}\otimes \omega_X) \qquad (*)$$
in $A_k(X)_{\mathbb Q}$, the $k$-th Chow group of $X$ with rational coefficients. Here $\tau: K_0(X) \to A_*(X)_{\mathbb Q}$ is the generalized Riemann-Roch homomorphism.



The formula follows from a more general one for complexes with coherent cohomology (and without Cohen-Macaulayness):
$$ \sum (-1)^i\tau_k(\mathcal H^i(C^{\cdot})) = (-1)^k\sum(-1)^i\tau_k(\mathcal H^i(R\mathrm{Hom}(C^{\cdot},\omega^{\cdot}_X))) \qquad (**)$$



In a proof I would like to use (*) in a more general setting:




Does anyone know a reference for (**) or (*) when $X$ is imbeddable in a regular scheme, not necessarily over a field (I am willing to assume $X$ is finite over some complete regular local ring)?




The original source of (**) (Fulton-MacPherson, "Categorical framework for study of singular spaces") hints that a generalization is possible, then refers to Deligne's appendix to Hartshorne's "Residues and Duality"! We all know that fleshing out the details there is non-trivial, however.

Wednesday, 25 August 2010

mathematics education - How do you motivate a precise definition to a student without much proof experience?

I once asked my honours real analysis class to define the concept of an integer to a hypothetical bright young kid who was already perfectly familiar with the natural numbers and the operations one could perform on them, but had not yet been exposed to negative numbers. The response was both enthusiastic and chaotic; I remember one student, for instance, giving a heuristic to explain why the product of two negative numbers was positive, which was interesting but not directly useful for the problem at hand.



Nevertheless, the question served its purpose; when I did then introduce a rigorous definition of the integers (as formal differences of natural numbers, quotiented by equivalence), the need for such a formal definition was made much clearer by the lack of an "obvious" way to do it by other means. And I think it also had a residual effect in motivating the fancier epsilon-delta definitions that arose later in the course.
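
For what it's worth, that formal definition is short enough to sketch in code (my own illustration, not from the original answer): an integer is an equivalence class of pairs $(a,b)$ of naturals under $(a,b) \sim (c,d)$ iff $a+d=b+c$, thought of as the formal difference $a-b$; the student's heuristic that a product of two negatives is positive becomes a one-line computation:

class Int:
    """An integer as a formal difference a - b of natural numbers."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __eq__(self, other):          # (a,b) ~ (c,d)  iff  a + d = b + c
        return self.a + other.b == self.b + other.a
    def __add__(self, other):         # (a,b) + (c,d) = (a+c, b+d)
        return Int(self.a + other.a, self.b + other.b)
    def __mul__(self, other):         # (a,b)(c,d) = (ac+bd, ad+bc)
        return Int(self.a * other.a + self.b * other.b,
                   self.a * other.b + self.b * other.a)

print(Int(0, 2) * Int(0, 3) == Int(6, 0))   # True: (-2)(-3) = 6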



Another example I have seen, at the early high school level, is to challenge students to come up with a watertight definition of a rectangle. This is remarkably difficult to do for students without training in higher mathematics; not only does one have to deal with degenerate cases (e.g. line segments), but often crucial properties (e.g. that the four sides of a rectangle have to be connected at the vertices) are omitted. One can also get into interesting debates, such as whether a square should be considered a rectangle.

Tuesday, 24 August 2010

set theory - closure of separative quotients

Does there exist a partial order, nontrivial for forcing, that is countably closed, but whose separative quotient is not countably closed? Supposing the answer is yes, then is there a partial order, nontrivial for forcing, that is countably closed, but is not forcing equivalent to any countably closed separative partial order?



For those of you unfamiliar with the separative quotient of a partial order, it is defined as follows. Two elements of a partial order are compatible iff there is some element below both of them. We form the separative quotient of a partial order by taking equivalence classes: x is equivalent to y when x and y are compatible with the exact same things. We then define a new partial order for the separative quotient -- $x \leq y$ iff everything compatible with x is compatible with y.
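
For a finite partial order this definition can be carried out mechanically; here is a small sketch of my own (the function and the example are illustrative, not from the original question):

def separative_quotient(elements, leq):
    """elements: finite list; leq(x, y): the partial order relation.
    Returns the equivalence classes and the quotient order on them."""
    def compatible(x, y):              # some z lies below both x and y
        return any(leq(z, x) and leq(z, y) for z in elements)

    def same_class(x, y):              # compatible with exactly the same things
        return all(compatible(x, z) == compatible(y, z) for z in elements)

    classes = []
    for x in elements:
        for c in classes:
            if same_class(x, c[0]):
                c.append(x)
                break
        else:
            classes.append([x])

    def q_leq(cx, cy):                 # [x] <= [y] iff everything compatible
        x, y = cx[0], cy[0]            # with x is compatible with y
        return all(not compatible(z, x) or compatible(z, y) for z in elements)

    return classes, q_leq

# Example: a 3-element chain 1 <= 2 <= 3.  Every two elements are compatible,
# so the separative quotient collapses to a single class.
classes, q_leq = separative_quotient([1, 2, 3], lambda x, y: x <= y)
print(classes)    # [[1, 2, 3]]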



A partial order is said to be separative if whenever $x \nleq y$, there is $z \leq x$ such that z is incompatible with y. The separative quotient of any partial order is separative.



Some of the ways, order-theoretically speaking, that two partial orders can be forcing equivalent are



(1) They are isomorphic, or more generally,
(2) A dense subset of one of them is isomorphic to a dense subset of the other.

Sunday, 22 August 2010

reference request - Why is a smooth weak solution strong for stationary linear Stokes problem with zero-traction boundary condition?

Can anyone provide me with a reference giving details on how smooth generalized solutions of the stationary linear Stokes problem can be shown to be classical solutions when a zero-traction boundary condition is present? That is, given a smooth generalized solution of



$-\nu \bigtriangleup v + \bigtriangledown q = f$ on $\Omega \subset \mathbb{R}^3$



$\bigtriangledown \cdot v = 0$ on $\Omega$



$S(v,q) = 0$ on $\partial \Omega$ where $S_i(v,q) = q n_i - \nu \sum_{j=1}^3 (\partial_i v_j + \partial_j v_i)n_j$ for $i=1,2,3$



how can it be shown that the zero-traction boundary condition is met? It's not difficult to show that the first two equations are satisfied on $Omega$ and using the relevant Green's formula one can obtain



$\int_{\partial \Omega} S(v,q) \cdot \phi = 0$



for all solenoidal $\phi \in H^1$. However, I can't quite figure out why this necessarily leads to $S(v,q)=0$.

Saturday, 21 August 2010

set theory - maximum with respect to inclusion of a function whose outputs are sets

If $W$ admits a periodic point, then, of course, $S(x)$ is constant along its orbit. But, on the contrary, whatever nice and well-structured $A$ and $W$ you have, if $W$ has no periodic points, there is always the bad map $S(x):=$ the negative invariant set generated by $x$, that is $\cup_{k\in\mathbb{N}}W^{-k}(x)$, and this map fails to satisfy the thesis while $S(x)\subset S(W(x))$ holds for all $x$.



This is to convince you that some assumption on the map $S(x)$ is in order. Here, a mild and natural assumption, to be coupled with the compactness of $A\neq\emptyset$, is weak upper semicontinuity, that is,
$$ S:A\to 2^X$$
is continuous w.r.t. the product topology on $2^X$ where the two-point space $2:=\{0,1\}$ is endowed with the left-order topology (whose only proper open subset is $\{0\}$). As a consequence, the continuous image of $A$ is a compact subset of $2^X$, therefore it has a maximal element with respect to inclusion. Due to your assumption on $W$, the equality necessarily holds for any maximal set $S(x)$, proving your thesis (incidentally, note that no further assumption on $W$ is needed).



Rmk 1. The upper semicontinuity of $S$ introduced above may be equivalently stated as:



  • $\operatorname{graph}(S):=\{(a,x)\in A\times X : a\in S(x)\}$ is closed in $A\times X$, where $X$ has the discrete topology;


  • $S^* :X\to 2^A$ is a closed map, that is, for any $x\in X$ the set $S^* (x):=\{a\in A\,:\, x\in S(a)\}$ is a closed subset of $A$;


  • for any $a\in A$ (denoting by $\mathcal{N}_a$ the family of nbd's of $a$), there holds
    $$\limsup_{b\to a} S(b) := \cup_{U\in\mathcal{N}_a} \cap_{b\in U} S(b) \subset S(a).$$


Rmk 2. The fact that a compact subset $K$ of $2^X$, where $2$ has the left-order topology, admits a maximal element is, of course, a consequence of Zorn's lemma. Indeed, if $\Gamma$ is an infinite chain in $K$, it has a limit point in $K$, which turns out to be an upper bound of $\Gamma$.

Friday, 20 August 2010

rt.representation theory - Polytopes related to the conjugation action of a Lie group on multiple copies of itself?

Let G be a finite dimensional real Lie group. As I understand it, the quotient space of G acting on itself by conjugation is a well studied polytope which can be identified with the fundamental alcove of G. It has all kinds of uses and consequences for the representation theory of G.




Is there a similar interpretation for the diagonal conjugation action of G on $G^n$? Have these spaces been studied? and if so, what sort of applications or uses do they have?




I'm not a representation theorist, so I apologize if my question is well-known or naive. I'm hoping that these or similar spaces will have interesting combinatorial/representation theoretic properties, analogous to those of the space $G/G$.

dg.differential geometry - A question about a one-form on Riemannian manifold

Assuming the dimension of $M$ is at least 2 (otherwise it's false), you can do the following. Let $p_1,p_2,\dots$ be isolated points where $X$ does not vanish but where you want $\omega$ to vanish. In a neighborhood $U_i$ of each $p_i$, there are coordinates $(x^1,\dots,x^n)$ centered at $p_i$ on which $X$ has the coordinate representation $X = \partial/\partial x^1$. In each $U_i$, let $\omega_i = dx^2 + |x|^2 dx^1$. Then let $U_0$ be the complement of $\{p_1,p_2,\dots\}$, and let $\omega_0=X^\flat$ (the 1-form dual to $X$ via the metric). Let $\{\phi_0,\phi_i\}$ be a partition of unity subordinate to the cover $\{U_0,U_i\}$, and let $\omega = \sum_{i\ge 0}\phi_i\omega_i$. The fact that $\omega_i(X)>0$ at points other than $p_i$ and zeros of $X$ ensures that $\omega(X)$ vanishes only at such points.

How do you relate the number of independent vector fields on spheres and Bott Periodicity for real K-Theory?

The theory of Clifford algebras gives us an explicit lower bound for the number of linearly independent vector fields on the $n$-sphere, and Adams proved that this is actually always the best possible: there are never more linearly independent vector fields.
More precisely, this gives the following number: if $n+1 = 16^a 2^b c$ with $c$ odd, $0 \leqslant b \leqslant 3$, we get $\rho(n) = 2^b + 8a$ and there are exactly $\rho(n) - 1$ linearly independent vector fields on $S^n$. This lower bound comes by construction of vector fields from Clifford module structures on $\mathbb{R}^{n+1}$, and figuring these out isn't too hard; it follows from the classification of real Clifford algebras with negative definite quadratic form. This is detailed for example in Fibre Bundles by Husemöller; the material comes from the paper Clifford Modules by Atiyah, Bott, Shapiro. This classification hinges on a particular mod 8 periodicity for real Clifford algebras.
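
A few lines of code make the recipe concrete (my own sketch): write $n+1 = 16^a 2^b c$ with $c$ odd and $0 \leqslant b \leqslant 3$, return $\rho(n) = 2^b + 8a$, and the sphere $S^n$ then carries exactly $\rho(n)-1$ independent vector fields:

def rho(n):
    # Radon-Hurwitz number: write n+1 = 16^a * 2^b * c with c odd, 0 <= b <= 3.
    m = n + 1
    twos = 0
    while m % 2 == 0:
        m //= 2
        twos += 1
    a, b = divmod(twos, 4)         # twos = 4a + b with 0 <= b <= 3
    return 2 ** b + 8 * a

for n in [0, 1, 2, 3, 7, 8, 15, 16]:
    print(n, rho(n) - 1)           # number of independent vector fields on S^n
# e.g. S^1 -> 1, S^3 -> 3, S^7 -> 7, S^15 -> 8 (the first time the count drops below n)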



Question: How does this description of vector fields on spheres relate to Bott Periodicity in the real case (either for real $K$-Theory, in the form $KO^{n+8} \cong KO^{n}$, or for the homotopy groups of the infinite orthogonal group, $\pi_{n+8}(O) \cong \pi_n(O)$)?



In particular, I'm inclined to think there should be a rather direct relationship: after all, $K$-theory is talking about vector bundles, sections of which are vector fields! Surely the formula for the number of vector fields on spheres should have a concrete interpretation in terms of $K$-theory? The (underlying) mod $8$ periodicities must be linked!
In addition, the result of periodicity mod 8 for Clifford algebras is also often called Bott Periodicity; what is the deeper relationship here? This other post mentions that the periodicity for Clifford algebras relates to the periodicity for complex K-Theory and so it mentions BU and not BO.

soft question - Which mathematicians have influenced you the most?

Richard Courant. Several years before I started studying mathematics in earnest, I spent a summer working through his calculus texts. Only recently, on re-reading them, have I come to realize how much my understanding of calculus, linear algebra, and, more generally, of the unity of all mathematics and, to use Hilbert's words, the importance of "finding that special case which contains all the germs of generality," have been directly inspired by Courant's writings.



From the preface to the first German edition of his Differential and Integral Calculus:




My aim is to exhibit the close connexion between analysis and its applications and, without loss of rigour and precision, to give due credit to intuition as the source of mathematical truth. The presentation of analysis as a closed system of truths without reference to their origin and purpose has, it is true, an aesthetic charm and satisfies a deep philosophical need. But the attitude of those who consider analysis solely as an abstractly logical, introverted science is not only highly unsuitable for beginners but endangers the future of the subject; for to pursue mathematical analysis while at the same time turning one's back on its applications and on intuition is to condemn it to hopeless atrophy. To me it seems extremely important that the student should be warned from the very beginning against a smug and presumptuous purism; this is not the least of my purposes in writing this book.




Another example: while not a "linear algebra book" per se, I have yet to find a better introduction to "abstract linear algebra" than the first volume of Courant's Methods of Mathematical Physics ("Courant-Hilbert"; so named because much of the material was drawn from Hilbert's lectures and writings on the subject). His one-line explanation of "abstract finite-dimensional vector spaces" is classic: "for n > 3, geometrical visualization is no longer possible but geometrical terminology remains suitable."



Lest one be misled into thinking Courant saw "abstract" vector spaces as "$\mathbb{R}^n$ in a cheap tuxedo," he introduces function spaces in the second chapter ("series expansions of arbitrary functions"), and most of the book is about quadratic eigenvalue problems, or, as Courant saw it, "the problem of transforming a quadratic form in infinitely many variables to principal axes."



As a final example: Courant's expository What is Mathematics? is perhaps best described as an unparalleled collection of articles carefully crafted to serve as an object at which one can point and say "this is." Moreover, while written as a "popularization," its introduction to constrained extrema problems is, without question, a far, far better introduction than any textbook I've ever seen.



I should also mention Felix Klein, not only because Klein's views on "calculus reform" so clearly influenced both the style and substance of Courant's texts, but since a number of Klein's lectures have had an equally significant influence on my own perspective. For those unfamiliar with the breadth of Klein's interests, I'm tempted to say "his Erlangen lecture, least of all" (not that there's anything wrong with it).



Lest my comments be mistaken for a sort of wistful "remembrance of things past," I'd easily place Terence Tao's writings on par with Courant's, for many of the same reasons: clear and concise without being terse, straightforward yet not oversimplified, and, most importantly, animated by a sort of — je ne sais quoi — whatever it is, it seems to involve, in roughly equal proportions: mastery of one's own craft, a genuine desire to pass it on, and the considerable expository skills required to actually do so.



Finally, I can't help but mention Richard Feynman in this context, and to plug his Nobel lecture in particular. While not a mathematician per se, Feynman surely ranks among the twentieth century's best examples of a "mathematical physicist" in the finest sense of the term, not merely satisfied by a purely mathematical "interpretation" of physical phenomena, but surprised, excited, and, dare I say, delighted by the prospect! Moreover, he was equally excited about mathematics in general, see, e.g., the "algebra" chapter in the Feynman Lectures on Physics.

Thursday, 19 August 2010

measure theory - How to show that x-y is Lebesgue-Lebesgue measurable

Nicolo is asking about functions where the inverse image
of a Lebesgue measurable set is Lebesgue measurable. This
is stronger than the usual definition of measurability
where it is required only the inverse image of each Borel
set must be Lebesgue measurable. Continuous functions need not
be measurable by this stronger criterion. If $B$ has zero
Lebesgue measure and $A=f^{-1}(B)$ has nonzero measure then each
subset of $B$ is Lebesgue measurable but its inverse image may
be non-measurable. A simple example is given by $f:x\mapsto (x,0)$
from $\mathbb{R}$ to $\mathbb{R}^2$. Taking $A$ to be a
non-measurable subset of $\mathbb{R}$ and $B=f(A)$ we see this
$f$ is not Lebesgue-Lebesgue measurable. More interesting examples
occur on the real line when there are continuous homeomorphisms
from $\mathbb{R}$ to itself taking Cantor sets of positive measure
to Cantor sets of zero measure.



To return to Nicolo's example. Each surjective linear map
from $\mathbb{R}^m$ to $\mathbb{R}^n$ is Lebesgue-Lebesgue measurable
as it can be decomposed as a composition of linear bijections
and the projection map $\mathbb{R}^m\to\mathbb{R}^n$ mapping onto
the first $n$ coordinates (both these types of maps can be seen to
be Lebesgue-Lebesgue measurable). By definition, the class of
Lebesgue-Lebesgue measurable maps is closed under composition
(unlike the class of Lebesgue-measurable maps!).

Wednesday, 18 August 2010

ct.category theory - What is the proper name for "compact closed" multiplicative intuitionistic linear logic?

Compact closed categories are models of classical linear logic when tensor and par collapse.



As an aside, I'm not sure that the particular resource interpretation you're suggesting genuinely works, since linear logic offers a unified and very subtle view of action and resource. If you want a pure resource interpretation of logic, you may need to look at bunched implications (i.e., at categories which simultaneously have a monoidal and cartesian closed structure).



James Brotherston has investigated a version of this logic which is directly inspired by the debt/credit view, called "classical BI", both model-theoretically and proof-theoretically (though not yet categorically).

st.statistics - Generalizing the wilson score confidence interval to other distributions

This article describes the 'Wilson score confidence interval', and describes how to use it to derive the lower bound on the nth percentile confidence interval for determining sorting criteria for thumbs-up/thumbs-down type ratings in a ratings system.



How can this be generalized to a ratings system that doesn't form a binomial distribution? Specifically, how can this be determined when each rating is a real number between 0 and 1, or when each rating is one of a set of discrete ratings (e.g., 1, 2, 3, 4, or 5)?



With a normal distribution, it appears that this could be simply the average minus some number of standard deviations - but to the best of my knowledge, it's not a true normal distribution, since it's limited to the range (0, 1).
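
For reference, here is the binomial case together with one crude ad hoc generalization (a sketch of my own, not a canonical answer to the question): the Wilson lower bound for thumbs-up/down counts, and a "mean minus z standard errors" analogue for ratings that are real numbers in [0, 1], which simply ignores the boundedness issue raised above:

import math

def wilson_lower_bound(pos, n, z=1.96):
    # Lower bound of the Wilson score interval for a binomial proportion.
    if n == 0:
        return 0.0
    phat = pos / n
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return (centre - margin) / denom

def normal_lower_bound(ratings, z=1.96):
    # Crude analogue for ratings in [0, 1]: sample mean minus z standard errors.
    n = len(ratings)
    if n == 0:
        return 0.0
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / max(n - 1, 1)
    return max(0.0, mean - z * math.sqrt(var / n))

print(wilson_lower_bound(90, 100))                 # roughly 0.83
print(normal_lower_bound([0.9] * 50 + [0.4] * 10))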

Tuesday, 17 August 2010

fa.functional analysis - Regular borel measures on metric spaces

When teaching Measure Theory last year, I convinced myself that a finite measure defined on the Borel subsets of a (compact; separable complete?) metric space was automatically regular. I used the Borel Hierarchy and some transfinite induction. But, typically, I've lost the details.



So: is this true? Are related questions true? What are some good sources for this sort of questions? As motivation, a student pointed me to http://en.wikipedia.org/wiki/Lp_space#Dense_subspaces where it's claimed (without reference) that (up to a slight change of definition) the result is true for finite Borel measures on any metric space.



(I'm normally only interested in Locally Compact Hausdorff spaces, for which, e.g. Rudin's "Real and Complex Analysis" answers such questions to my satisfaction. But here I'm asking more about metric spaces).



To clarify, some definitions (thanks Bill!):



  • I guess by "Borel" I mean: the sigma-algebra generated by the open sets.

  • A measure $\mu$ is "outer regular" if $\mu(B) = \inf\{\mu(U) : B\subseteq U,\ U \text{ open}\}$ for any Borel B.

  • A measure $\mu$ is "inner regular" if $\mu(B) = \sup\{\mu(K) : B\supseteq K,\ K \text{ compact}\}$ for any Borel B.

  • A measure $\mu$ is "Radon" if it's inner regular and locally finite (that is, all points have a neighbourhood of finite measure).

So I don't think I'm quite interested in Radon measures (well, I am, but that doesn't completely answer my question): in particular, the original link to Wikipedia (about L^p spaces) seems to claim that any finite Borel measure on a metric space is automatically outer regular, and inner regular in the weaker sense with K being only closed.

differential equations - Dropping three bodies

Consider the usual three-body problem with Newtonian
$1/r^2$ force between masses. Let the three masses start off at rest,
and not collinear. Then they will become collinear a finite time later by a theorem
I proved some time ago. (See the papers "Infinitely Many Syzygies"
and "The zero angular momentum three-body problem: all but one solution has syzygies"
available on my web site or the arXivs.) Let $t_c$ denote the first such time.



Write $r_{ij} (t)$ for the distance between mass
$i$ and mass $j$ at time $t$.



Question 1. For general masses $m_i >0$, is it true that the "moment of inertia"
$I = m_1 m_2 r_{12}^2 + m_2 m_3 r_{23}^2 + m_1 m_3 r_{13}^2$
monotonically decreases over the interval $(0, t_c)$?



Question 2. If the masses are all equal and if the initial side-lengths
satisfy $0 < r_{12}(0) < r_{23}(0) < r_{13}(0)$,
is it true that these inequalities remain in force: $0 < r_{12} (t) < r_{23} (t) < r_{13}(t)$
for $0 < t < t_c$? In other words: if the triangle starts off as scalene (not isosceles, and having nonzero area) does it remain scalene up to collinearity?



Motivation: The space of collinear triangles, consisting of triangles of zero area,
acts like a global Poincare section for the zero-angular momentum, negative energy
three-body problem. To obtain some understanding of the return map from this space to itself the
"brake orbits"-- those solutions for which all velocities vanish at some instant -- seem to play an organizing role.
Answering either questions would yield useful information about brake orbits.



Aside: I suspect that if the answers to either question is yes for the standard $1/r^2$ force, then it is also yes for any attractive "power law" $1/r^a$ force between masses, any $a > 0$.




added, Sept 20, 2010. The bounty is for an answer to either question 1 or 2.



I've made partial progress toward 2 using variational methods
(direct method of the calculus of variations). I can prove that if a syzygy
is chosen anywhere in a neighborhood of binary collision (so $r_{12}(t_c) = \delta$, small, $r_{23}(t_c) = r_{13}(t_c) + \delta$)
then there exists a brake orbit solution
arc ending in this syzygy and satisfying the inequality of question 2.
The proof suggests, but does not prove, that the result holds locally near
isosceles, meaning for brake initial conditions
in a neighborhood of isosceles brake initial conditions (so
$r_{13}(0) = r_{12}(0) + \epsilon$). If I had uniqueness [modulo rotation and reflection] of brake orbits with specified syzygy endpoints, then my proof would yield a proof of this local version of the alleged theorem.
Unfortunately, my proof does not exclude the possibility of more than one orbit ending in the chosen syzygy, one of which violates the inequality.
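
Both questions also invite a quick numerical experiment. The following is my own sketch (unit masses, $G=1$, a crude velocity-Verlet integrator, illustrative initial data); it proves nothing, but it makes it easy to watch $I(t)$ and the ordering of the side lengths up to the first syzygy:

import numpy as np

# Unit masses dropped from rest at a scalene triangle with r12 < r23 < r13.
pos = np.array([[0.0, 0.0], [0.9, 0.0], [1.2, 1.3]])
vel = np.zeros_like(pos)

def accel(p):
    a = np.zeros_like(p)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = p[j] - p[i]
                a[i] += d / np.linalg.norm(d) ** 3     # Newtonian 1/r^2 attraction
    return a

def sides(p):
    return (np.linalg.norm(p[0] - p[1]),               # r12
            np.linalg.norm(p[1] - p[2]),               # r23
            np.linalg.norm(p[0] - p[2]))               # r13

def signed_area(p):                                    # vanishes exactly at syzygy
    u, v = p[1] - p[0], p[2] - p[0]
    return 0.5 * (u[0] * v[1] - u[1] * v[0])

dt, t, a = 1e-3, 0.0, accel(pos)
prev_I = sum(r * r for r in sides(pos))                # equal-mass "moment of inertia"
while signed_area(pos) > 1e-6 and t < 20.0:
    vel += 0.5 * dt * a                                # velocity-Verlet step
    pos += dt * vel
    a = accel(pos)
    vel += 0.5 * dt * a
    t += dt
    r12, r23, r13 = sides(pos)
    I = r12 * r12 + r23 * r23 + r13 * r13
    if I > prev_I + 1e-9:
        print("I increased at t =", round(t, 4))           # would bear on Question 1
    if not (r12 < r23 < r13):
        print("side ordering broke at t =", round(t, 4))   # would bear on Question 2
    prev_I = I
print("first syzygy reached near t =", round(t, 3))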

Sunday, 15 August 2010

ra.rings and algebras - Problems concerning R and R[x]

A few questions that are formally related, but quite different in nature:



From now on, let R denote a ring.



  1. If R is a UFD, is R[x] also a UFD?


  2. If R is Noetherian, is R[x] also Noetherian?


  3. If R is a PID, is R[x] also a PID?


  4. If R is an Artin ring, is R[x] also an Artin ring?



For 1, we all know it's Gauss's lemma.



For 2, we all know it's Hilbert's basis theorem.



For 3, we all know that in Z[x], the ideal (2,x) is not a principal ideal, so the answer is negative.



But what about 4?

rt.representation theory - Occurrence of the trivial representation in restrictions of Lie group representations

I would like to add to the answers by Ben and Allen. First if we extend the question to include all multiplicities and not just the multiplicity of the trivial representation then there are a number of special cases that are of interest:



1. Take $H$ to be the trivial group; then the question asks for the dimension of a representation.
2. Take $H$ to be a maximal torus; then we are asking for the character of a representation.
3. Take $G=H\times H$ and $H$ the diagonal subgroup. Then we are asking for tensor product multiplicities.
4. For $V$ a representation of $K$, take $G=SL(V)$ and $H=K$. Then we are calculating plethysms.



A paper that discusses this which gives a formula for branching rules is:



MR1120029 (92f:22022) Cohen, Arjeh M. ; Ruitenburg, G. C. M. Generating functions and Lie groups.
Computational aspects of Lie group representations and related topics
(Amsterdam, 1990),
19--28, CWI Tract, 84, Math. Centrum, Centrum Wisk. Inform., Amsterdam, 1991.



As I understand it both Ben and Allen agree that this is not a simple way of finding branching rules. The reason is that this involves a sum over the Weyl group.



If you take the special cases above then historically the first solutions to these problems were given by formulae involving a sum over the Weyl group. For some of these special cases there are solutions which don't involve cancelling terms. For example, LiE
calculates these without summing over the Weyl group. The LiE home page is
http://www-math.univ-poitiers.fr/~maavl/LiE/index.html
and the LiE manual does describe how these special cases are implemented.



However LiE treats each of these special cases separately. I think it is an interesting question whether there is an algorithm for finding branching rules which could be implemented in LiE and which does not involve a sum over the Weyl group.

Friday, 13 August 2010

ag.algebraic geometry - ring-valued points of locally ringed spaces

Of course, one should expect that the concept of ring-valued points is not well-behaved for locally ringed spaces (LRS). I want to see examples of this.



So consider $LRS \to Set^{Ring}, X \mapsto X(-)=Hom(Spec - , X)$. If $A$ is a local ring, whose maximal ideal is principal, and $\hat{A}$ its completion, and we regard local rings as locally ringed spaces whose underlying set is just one point, then $A \to \hat{A}$ induces a bijection $Hom(Spec R,A) \to Hom(Spec R,\hat{A})$ (I'll add the proof if you want). This shows that the functor is not full. But how can we see that it is not faithful?



For example, for local rings $A$, we have



$Hom_{LRS}(Spec R,A)=\{\phi \in Hom_{Ring}(A,R) : \phi(\mathfrak{m}_A) \subseteq rad(R)\}$.



If $f,g$ are local homomorphisms inducing the same maps $Hom_{LRS}(Spec -,B) \to Hom_{LRS}(Spec -,A)$, it seems that they don't have to be identical ...

nt.number theory - Perron-Frobenius "inverse eigenvalue problem"

The answer to a sharper question involving integers, rather than rationals, is affirmative.




Let $\lambda$ be a positive real algebraic integer that is greater in absolute value than all its Galois conjugates ("Perron number" or "PF number"). Then $\lambda$ is the Perron–Frobenius eigenvalue of a positive integer matrix.


(The converse statement is an integer version of the Perron–Frobenius theorem, and is easy to prove.)
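
A tiny numerical illustration of that easy converse direction (my own sketch): for a positive integer matrix the Perron–Frobenius eigenvalue strictly dominates every other eigenvalue in absolute value, and since the characteristic polynomial has integer coefficients, all Galois conjugates of that eigenvalue occur among the eigenvalues, so it is a Perron number:

import numpy as np

A = np.array([[1, 2],
              [1, 1]])                     # a positive integer matrix
eig = np.linalg.eigvals(A)
pf = max(eig, key=abs)
print(pf)                                  # 1 + sqrt(2) = 2.414..., the PF eigenvalue
print(sorted(abs(e) for e in eig))         # every other eigenvalue is strictly smaller
# The characteristic polynomial has integer coefficients, so every Galois
# conjugate of the PF eigenvalue appears among the eigenvalues above and is
# therefore strictly smaller in absolute value.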



In a slightly weaker form (aperiodic non-negative matrix), this is a theorem of Douglas Lind, from



The entropies of topological Markov shifts and a related class of algebraic integers.
Ergodic Theory Dynam. Systems 4 (1984), no. 2, 283--300 (MR)



I don't have a good reference for the strong form, but it was discussed at Thurston's seminar in 2008-2009. One interesting thing to note is that, while the proof can be made constructive, it is non-uniform: the size of the matrix can be arbitrarily large compared to the degree of $\lambda$.

ag.algebraic geometry - lisse sheaf on complex varieties

Dear Shenghao, If you really do mean a lisse sheaf on the etale site of $X$, then it doesn't make sense a priori to evaluate it on analytic open subsets of $X(\mathbb C)$, since these are not in the etale site of the algebraic variety $X$. However, $F$ corresponds to a representation of the (profinite) etale $\pi_1$ of $X$, which in turn is the profinite completion of the
topological $\pi_1$ of $X(\mathbb C)$. So there is a corresponding locally constant (in the analytic topology) $\mathbb Z_{\ell}$
sheaf on $X(\mathbb C)$, which, being locally constant, will be constant on sufficiently small analytic open subsets.

Thursday, 12 August 2010

mp.mathematical physics - Where does a math person go to learn statistical mechanics?


So, is there a good resource for statistical mechanics for the mathematically-minded?




If you are looking for a book, the real answer is "not really". As a mathematician masquerading as a physicist (more often than not of a statistical-physical flavor) I have looked long, hard, and often for such a thing. The books cited above are some of the best for what you want (I own or have read at least parts of many of them), but I would not say that any are really good for your purposes.



Many bemoan the lack of The Great Statistical Physics text (and many cite Landau and Lifshitz, or Feynman, or a few other standard references while wishing there was something better), and when it comes to mathematical versions people naturally look to Ruelle. But I would agree that the Minlos book (which I own) is better for an introduction than Ruelle (which I have looked at, but never wanted to buy).



Other useful books not mentioned above are Thompson's Mathematical Statistical Mechanics, Yeomans' Statistical Mechanics of Phase Transitions and Goldenfeld's Lectures On Phase Transitions And The Renormalization Group. None of them are really special, though if I had to recommend one book to you it would be one of these or maybe Minlos.



You might do better in relative terms with quantum statistical mechanics, where some operator algebraists have made some respectable stabs at mathematical treatments that still convey physics. But really that stuff is at a pretty high level (and deriving the KMS condition from the Gibbs postulate in the Heisenberg picture can be done in a few lines) so the benefit is probably marginal at best.

discrete geometry - Is it possible to dissect a disk into congruent pieces, so that a neighborhood of the origin is contained within a single piece?


Problem: is it possible to dissect the interior of a circle into a finite number of congruent pieces (mirror images are fine) such that some neighbourhood of the origin is contained in just one of the pieces?




It may be conceivable that there is some dissection into non-measurable sets that does this. So a possible additional constraint would be that the pieces are connected, or at least unions of connected sets.



A weaker statement, also unresolved : is it possible to dissect a circle into congruent pieces such that a union of some of the pieces is a connected neighbourhood of the origin that contains no points of the boundary of the circle?



This is doing the rounds amongst the grads in my department. So far no one has had anything particularly enlightening to say - a proof/counterexample of any of these statements, or any other partial result in the right direction would be much obliged!



Edit: Kevin Buzzard points out in the comments that this is listed as an open problem in Croft, Falconer, and Guy's Unsolved Problems in Geometry (see the bottom of page 87).

How to combine linear constraints on a matrix and its inverse?

Since the question suggests that the questioner is looking for an efficient algorithm for this problem, here is my attempt to answer the question from the complexity-theoretic perspective. Unfortunately, the answer is pretty negative.



The following problem, which is one of the possible formulations of the question, is NP-complete.



Given: N∈ℕ, finitely many linear constraints (equations or inequalities) over ℚ on variables aij and bij (1≤i,j≤N), and N×N rational matrices A and B satisfying AB=I and all the given linear constraints.
Question: Is there another pair (A, B) of N×N rational matrices that satisfy AB=I and all the given linear constraints?



A proof is by reduction from the following problem (called “Another Solution Problem (ASP) of SAT”):



Given: An instance φ of SAT and a satisfying assignment to φ.
Question: Is there another satisfying assignment to φ?



The ASP of SAT is known to be NP-complete [YS03].



Note: The following reduction is much simplified compared to the first version posted. See below for the first version, which proves a slightly stronger result.



We can construct a reduction from the ASP of SAT to the problem in question as follows. Given an instance of SAT with n variables x1,…,xn, let N=n and constrain A to be a diagonal matrix such that A=A−1; these are easily written as linear equality constraints on the elements of A and A−1. These constraints are equivalent to the condition that A is a diagonal matrix whose diagonal elements are ±1. Now encode a truth assignment to the n variables by such a matrix by letting aii=1 if xi is true and aii=−1 otherwise. Now it is easy to write down the constraints in SAT as linear inequalities.



With this encoding, the solutions to the given instance of SAT correspond one-to-one to the pairs (A, A−1) satisfying all the linear constraints. This establishes a reduction from the ASP of SAT to the problem in question, and therefore the problem in question is NP-complete.
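
To see how small the gadget is, here is a sketch of my own (illustrative names only) of the constraints this reduction produces: A is forced to be diagonal and equal to its own inverse, the diagonal entry a_ii = ±1 encodes the truth value of x_i, and a clause such as (x1 or not-x2 or x3) becomes the linear inequality a11 - a22 + a33 >= -1, i.e. at least one of its literals takes the value +1:

def encode_sat_as_linear_constraints(num_vars, clauses):
    # clauses in DIMACS style: [1, -2, 3] means (x1 or not-x2 or x3).
    constraints = []
    # Force A diagonal with A = B; together with AB = I this gives a_ii = +-1.
    for i in range(1, num_vars + 1):
        for j in range(1, num_vars + 1):
            if i != j:
                constraints.append(f"a[{i}][{j}] = 0")
                constraints.append(f"b[{i}][{j}] = 0")
        constraints.append(f"a[{i}][{i}] - b[{i}][{i}] = 0")
    # A clause is satisfied iff the sum of its literal values (+-a_ii) is
    # at least 2 - (number of literals), i.e. not all literals equal -1.
    for clause in clauses:
        terms = " ".join(("- " if lit < 0 else "+ ") + f"a[{abs(lit)}][{abs(lit)}]"
                         for lit in clause)
        constraints.append(f"{terms} >= {2 - len(clause)}")
    return constraints

for c in encode_sat_as_linear_constraints(3, [[1, -2, 3], [-1, 2]]):
    print(c)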



Remark. This reduction can be viewed as an ASP reduction from SAT to the problem of finding a pair (A, B) of matrices satisfying given linear constraints. For more about ASP reductions, see [UN96] and/or [YS03]. (The notion of ASP reductions was used in [UN96], where the authors treated it as a parsimonious reduction with a certain additional property. The term “ASP reduction” was introduced in [YS03].)




In fact, the problem remains NP-complete even if we allow only linear constraints on the variables aij and linear constraints on the variables bij (but not a linear constraint which uses both aij and bkl). The NP-completeness of this restricted problem can also be shown by reduction from the ASP of SAT.



The following lemma is a key to construct this version of a reduction.



Lemma. Let A be a real symmetric invertible matrix. Both A and A−1 are stochastic if and only if A is the permutation matrix of a permutation whose order is at most 2.



I guess that this lemma can be proved more elegantly, but anyway the following proof should be at least correct.



Proof. The “if” part is straightforward. To prove the “only if” part, assume that both A and A−1 are stochastic. Note the following properties of A:



  • Because A is symmetric, A is diagonalizable and all eigenvalues are real.

  • Because A is stochastic, all eigenvalues have modulus at most 1.

  • Because A−1 is stochastic, all eigenvalues have modulus at least 1.

Therefore, A is diagonalizable and all eigenvalues are ±1, and therefore A is an orthogonal matrix. Since both the 1-norm and the 2-norm of each row are equal to 1, all but one entry in each row are 0. Therefore, A is a permutation matrix, and the only symmetric permutation matrices are the permutation matrices of permutations whose order is at most 2. (end of proof of Lemma 1)



It is easy to write down linear constraints which enforce A to be symmetric and both A and A−1 to be stochastic. In addition, write down linear constraints which enforce A to be block diagonal with 2×2 blocks. Given an instance of SAT with n variables x1,…,xn, we encode a truth assignment by a 2n×2n matrix which is block diagonal with 2×2 blocks so that the first block is $\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}$ if x1 is true and the first block is $\begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}$ if x1 is false and so on.



Now that a truth assignment can be encoded as a matrix, the rest is the same: just verify that it is easy to write down the constraints in SAT as linear inequalities and that there is one-to-one correspondence between the solutions to a SAT instance and the pairs (A, A−1) of matrices satisfying the linear constraints.




References



[UN96] Nobuhisa Ueda and Tadaaki Nagao. NP-completeness results for NONOGRAM via parsimonious reductions. Technical Report TR96-0008, Department of Computer Science, Tokyo Institute of Technology, May 1996.



[YS03] Takayuki Yato and Takahiro Seta. Complexity and completeness of finding another solution and its application to puzzles. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E86-A(5):1052–1060, May 2003.

Tuesday, 10 August 2010

at.algebraic topology - Realizing complexes with bases as cellular complexes

Here is a sketch of an argument to show that all based chain complexes are realizable. (This might end up being pretty similar to Tyler's argument.)



First one gives an algebraic argument that by a change of basis the chain complex can be put in a standard "diagonal" form. Moreover, the change of basis can be achieved by a sequence of elementary operations, as in linear algebra, but now over the integers rather than a field, using the fact that the group $GL(n,Z)$ is generated by elementary matrices, including signed permutations. The most important of the elementary operations is to add plus or minus one basis element to another. Doing such an operation in $C_i$ changes the boundary maps to and from $C_i$ by multiplication by an elementary matrix and its inverse.
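
The algebraic step, diagonalizing each boundary matrix by integer row and column operations, is just the Smith normal form. Assuming a recent SymPy (which exposes smith_normal_form), a sketch of that computation on a sample boundary matrix looks as follows (my own illustration, not part of the answer):

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# A sample integer boundary matrix of a based chain complex; integer row and
# column operations (elementary matrices in GL(n, Z)) bring it to diagonal form.
boundary = Matrix([[2, 4, 4],
                   [-6, 6, 12],
                   [10, 4, 16]])
print(smith_normal_form(boundary, domain=ZZ))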



The "diagonalized" chain complex can easily be realized geometrically, so it remains to see that the elementary basis change operations can be realized geometrically. In the special case of top-dimensional cells, one can slide a part of one such cell over another to achieve the elementary operation of adding plus or minus one column of the outgoing boundary matrix to another. For lower-dimensional cells one wants to do the same thing and then extend the deformation over the higher cells. It should be possible to do this directly without great difficulty. The slide gives a way of attaching a product ${cell}times I$, and this product deformation retracts onto either end, so one can use the deformation retraction to change how the higher-dimensional cells attach.



The argument should work for 1-cells as well as for higher-dimensional cells, so it shouldn't be necessary to assume that $C_1$ is trivial.



An alternative approach would be to temporarily thicken the cell complex into a handle structure on a smooth compact manifold-with-boundary of sufficiently large dimension, with one i-handle for each i-cell. Sliding an i-cell then corresponds to sliding an i-handle, and there is well-established machinery on how to do this sort of thing, as one sees in the proof of the h-cobordism theorem for example. Or one can use the language of morse functions and gradient-like vector fields as in Milnor's book on the h-cobordism theorem. Either way, after all the elementary basis changes have been realized by handle slides, one can collapse the handles back down to their core cells to get the desired based cellular chain complex.



There are plenty of details to fill in here in either the cell or handle approach. I don't recall seeing this result in the classical literature, but I wouldn't be too surprised if it were there somewhere, maybe in some paper or book on J. H. C. Whitehead's simple homotopy theory where elementary row and column operations play a big role.

Monday, 9 August 2010

ag.algebraic geometry - What is the Euler characteristic of a Hilbert scheme of points of a singular algebraic curve?

Let $X$ be a smooth curve of genus $g$ and $S^nX$ its $n$-th symmetric product (that is, the quotient of $X \times \cdots \times X$ by the symmetric group $S_n$). There is a well known, cool formula computing the Euler characteristic of all these symmetric products:



$$\sum_{d \geq 0} \chi \left(X^{[d]} \right)q^d = (1-q)^{-\chi(X)}$$



It is known that $S^nX \cong X^{[n]}$, the Hilbert scheme of 0-subschemes of length n over $X$. Hence, the previous formula also computes the Euler characteristic of these spaces.
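
For a smooth curve the formula is easy to unwind explicitly; the following small script (my own sketch) expands $(1-q)^{-\chi(X)}$ with $\chi(X)=2-2g$ and reads off $\chi(S^dX)$:

from fractions import Fraction

def sym_prod_euler_chars(chi, max_d):
    # Coefficients of (1-q)^(-chi), i.e. chi(S^d X) for a smooth curve X
    # with chi(X) = chi = 2 - 2g, read off from the formula above.
    out = []
    for d in range(max_d + 1):
        c = Fraction(1)
        for k in range(1, d + 1):
            c *= Fraction(chi + k - 1, k)    # generalized binomial coefficient
        out.append(int(c))
    return out

print(sym_prod_euler_chars(2, 5))    # g = 0: [1, 2, 3, 4, 5, 6], i.e. chi(P^d) = d + 1
print(sym_prod_euler_chars(0, 5))    # g = 1: [1, 0, 0, 0, 0, 0]
print(sym_prod_euler_chars(-2, 5))   # g = 2: [1, -2, 1, 0, 0, 0]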



What about singular curves? More precisely, if $X$ is a singular complex algebraic curve, do you know how to compute the Euler characteristic of its symmetric powers $S^nX$? More importantly: what is the Euler characteristic of $X^{[n]}$, the Hilbert scheme of 0-schemes of length n over $X$?



I guess it is too much to hope for a formula as neat as the one given for the smooth case. Examples, formulas for a few cases or general behaviour (e.g. if for large n, $\chi\left(X^{[n]}\right) = 0$) are all very welcome!

linear algebra - Why are tensors a generalization of scalars, vectors, and matrices?

I have heard it said that tensor products are the hardest thing in mathematics. Of course that's not really true, but certainly a fluent understanding of how to work with tensor products is one of the dividing lines in your education from basic to advanced mathematics.



Disclaimer: What I will discuss here are tensor products in the sense of linear algebra, so only tensor products of individual vector spaces rather than tensor fields (which is what the physicists mean by tensor product).



For a long time I could not understand how the physicists could work with tensors by thinking about them as "quantities that transform in a certain way under a change in coordinates". The only way I could come to terms with them is by their characterization as something that satisfies a universal mapping property. Do not think about what tensors (elements of a tensor product space) are but rather what the whole construction of a tensor product space can do for you. It's sort of like quotient groups (only harder), where if you focus all your energy on trying to understand cosets you kind of miss the point of quotient groups. What makes tensor product spaces harder to come to terms with than quotient groups is that most elements of a tensor product space are not elementary
tensors $v \otimes w$ but only sums of these things.



The whole (mathematical) point of tensor products of vector spaces is to linearize bilinear maps. A bilinear map is a function $V \times W \rightarrow U$ among $F$-vector spaces $V, W$, and $U$ which is linear in each coordinate when the other one is kept fixed. There are tons of bilinear maps in mathematics, and if we can turn them into linear maps then we can use constructions related to linear maps on them. The tensor product $V \otimes_F W$ of two $F$-vector spaces provides the most extreme space, so to speak, which is a domain for the linearization of all bilinear maps out of $V \times W$ into all vector spaces (over $F$). It is a particular vector space together with a particular bilinear map $V \times W \rightarrow V \otimes_F W$ such that any bilinear map out of $V \times W$ into any vector space naturally (!) gets converted into a linear map out of this new space $V \otimes_F W$. Some notes I wrote on tensor products for an algebra course are at http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf, and in it I address questions like "what does $v \otimes w$ mean?" and "what does it mean to say
$$
v_1 \otimes w_1 + \cdots + v_k \otimes w_k = v_1' \otimes w_1' + \cdots + v_k' \otimes w_k'?"
$$
Right from the start I allow tensor products of modules over a ring, not just vector spaces over a field. There are some aspects of tensor products which appear in the wider module context that don't show up for vector spaces (particularly since modules need not have bases). So you might want to skip over, say, tensor products involving $\mathbf Z/m\mathbf Z$ over $\mathbf Z$ on a first pass if you don't know about modules.



As for the question of how tensor products generalize scalars, vector spaces, and matrices, this comes from the natural (!) isomorphisms
$$
F \otimes_F F \cong F, \quad F \otimes_F V \cong V, \quad V \otimes_F V^* \cong {\rm Hom}_F(V,V).
$$
On the left side of each isomorphism is a tensor product of $F$-vector spaces, and on the right side are spaces of scalars, vectors, and matrices. In the link I wrote above, see Theorems 4.3, 4.5, and 5.9. You can also tensor two matrices as a particular example in a tensor product of two spaces of linear maps. Spaces of linear maps are vector spaces (with some extra structure to them), so they admit tensor products as well (with some extra features).
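
As a concrete computational echo of those isomorphisms (my own sketch using NumPy), an elementary tensor $v \otimes w$ in $V \otimes_F V^*$ corresponds to a rank-one matrix, a general element is a sum of such matrices, and the tensor product of two linear maps is the Kronecker product of their matrices:

import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])                 # an element of V*, written in coordinates

elementary = np.outer(v, w)               # the matrix of the elementary tensor v (x) w
print(np.linalg.matrix_rank(elementary))  # 1

# A general element of V (x) V* is a sum of elementary tensors; this sum has
# rank 2, so it is not a single outer product:
general = elementary + np.outer(np.array([0.0, 1.0]), np.array([1.0, 0.0]))
print(np.linalg.matrix_rank(general))     # 2

# Tensoring two linear maps corresponds to the Kronecker product of matrices:
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(np.kron(A, B).shape)                # (4, 4)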



Returning to the physicist's definition of tensors as quantities that transform by a rule, what they always forget to say is "transform by a multilinear rule". I discuss the transition between tensor products from the viewpoint of mathematicians and physicists in section 6 of a second set of notes at
http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod2.pdf.

Sunday, 8 August 2010

gr.group theory - orders of products of permutations

Let $p$ be a prime, $n\gg p$ not divisible by $p$ (say, $n>2^{2^p}$). Are there two permutations $a, b$ of the set $\{1,\dots,n\}$ which together act transitively on $\{1,2,\dots,n\}$ and such that all products $w(a,b)=a^{k_1}b^{l_1}a^{k_2}\cdots$ of length at most $n$ satisfy $w(a,b)^p=1$ (here $k_i,l_i\in \mathbb{Z}$)?



Update: Following the discussion below (especially questions of Sergey Ivanov), here is a group theory problem closely related to the one before.



Is there a torsion residually finite infinite finitely generated group $G$ such that $G/FC(G)$ is bounded torsion? Here $FC(G)$ is the FC-radical of $G$, that is, the (normal) subgroup of $G$ which is the union of all finite conjugacy classes of $G$.



For explanations of relevance of this question see below (keep in mind that the direct product of finite groups coincides with its FC-radical). Note that if we would ask $G$ to be bounded torsion itself, the question would be equivalent to the restricted Burnside problem and would have negative answer by Zelmanov.



If the answer to any of the two questions above is negative for some $p>665$, then there exists a non-residually finite hyperbolic group.

puzzle - Shortest Key for the Monte Carlo Lock of Smullyan

Edited in recognition of closed-mindedness.



My brute force search shows no keys shorter than 10. Here are the only keys of length 10 and 11 respectively:



RVLVQRVLVQ
VRLVQVRLVQ



VLRVQVLRVQQ
VLVRQVLVRQQ



Curiously, there are no keys of length 12.



The only word of length 7 that does not crash under iteration is RRQRRQQ, and it evolves unboundedly. There are 74 words of length 8 that grow to over 30 letters, I think none of them cycles, and there are two eventually cycling words, one you gave and the other its pair RQVRLVQQ. The first time an odd period greater than 1 appears is at length 12, these are the originating words (all end up with period 3):



RQQVLLRLVQQQ



RLQVLLRLVQQQ



RQLVLLLRVQQQ



RQVLLLVLRQQQ



RQLVLLLVRQQQ



RQLVLLLRVLQQ



RQVLLLVLRLQQ



RQLVLLLVRLQQ



RQLLVLLRLVQQ



RQVLLLVQQRQQ



RQVLLLVLQRQQ



RQLVLLLVQRQQ

Saturday, 7 August 2010

ag.algebraic geometry - Are all polynomial inequalities deducible from the trivial inequality?

One interpretation of the question is Hilbert's seventeenth problem, to characterize the polynomials on $\mathbb{R}^n$ that take non-negative values. The problem is motivated by the nice result, which is not very hard, that a non-negative polynomial in $\mathbb{R}[x]$ (one variable) is a sum of two squares. What is fun about this result is that it establishes an analogy between $\mathbb{C}[x]$, viewed as a quadratic extension by $i$ of the Euclidean domain $\mathbb{R}[x]$; and $\mathbb{Z}[i]$ (the Gaussian integers), viewed as a quadratic extension by $i$ of the Euclidean domain $\mathbb{Z}$. In this analogy, a real linear polynomial is like a prime that is 3 mod 4 that remains a Gaussian prime, while a quadratic irreducible polynomial is like a prime that is not 3 mod 4, which is then not a Gaussian prime. A non-zero integer $n \in \mathbb{Z}$ is a sum of two squares if and only if it is positive and each prime that is 3 mod 4 occurs evenly. Analogously, a polynomial $p \in \mathbb{R}[x]$ is a sum of two squares if and only if some value is positive and each real linear factor occurs evenly. And that is a way of saying that $p$ takes non-negative values.



In dimension 2 and higher, the result does not hold for sums of squares of polynomials. But as the Wikipedia page says, Artin showed that a non-negative polynomial (or rational function) in any number of variables is at least a sum of squares of rational functions.



In general, if $R[i]$ and $R$ are both unique factorization domains, then some of the primes in $R$ have two conjugate (or conjugate and associate) factors in $R[i]$, while other primes in $R$ are still primes in $R[i]$. This always leads to a characterization of elements of $R$ that are sums of two squares. This part actually does apply to the multivariate polynomial ring $R = \mathbb{R}[\vec{x}]$. What no longer holds is the inference that if $p \in R$ has non-negative values, then the non-splitting factors occur evenly. For instance, $x^2+y^2+1$ is a positive polynomial that remains irreducible over $\mathbb{C}$. It is a sum of 3 squares rather than 2 squares; of course you have to work harder to find a polynomial that is not a sum of squares at all.
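
For the record, the standard example of that last phenomenon (a classical fact due to Motzkin, not part of the original answer) is
$$ M(x,y) = x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1, $$
which is non-negative on all of $\mathbb{R}^2$ by the AM-GM inequality applied to $x^4y^2$, $x^2y^4$, and $1$, but is not a sum of squares of polynomials; by Artin's theorem it is, however, a sum of squares of rational functions.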

Classical Enumerative Geometry References

I want to start out by making this clear: I'm NOT looking for the modern proofs and rigorous statements of things.



What I am looking for are references for classical enumerative geometry, back before Hilbert's 15th Problem asked people to actually make it work as rigorous mathematics. Are there good references for the original (flawed!) arguments? I'd prefer perhaps something more recent than the original papers and books (many are hard to find, and even when I can, I tend to be a bit uncomfortable just handling 150 year old books if there's another option.)



More specifically, are there modern expositions of the original arguments by Schubert, Zeuthen and their contemporaries? And if not, are there translations or modern (20th century, say...) reprints of their work available, or are scanned copies available online (I couldn't find much, though I admit my German is awful enough that I might have missed them by not having the right search terms, so I'm hoping for English review papers or the like, though I'll deal with it if I need to.)

gr.group theory - "Kummerian" fields?

This is sort of a random, spur of the moment question, but here goes:



We define [with apologies to Conan the Barbarian] a field K to be $textbf{Kummerian}$ if there exists
an index set I, and functions $x: I rightarrow K, n: I rightarrow mathbb{Z}^+$ such that
the algebraic closure of K is equal to $K[(x(i)^{frac{1}{n(i)}})_{i in I}]$. More plainly, the algebraic closure is obtained by adjoining roots of elements of the ground field, not iteratively, but all at once.



Questions:



QI) Is there a classification of Kummerian fields?



QII) What about a classification of "Kummerian (topological) groups", i.e., the absolute Galois groups of Kummerian fields?



Here are some easy observations:



1) An algebraically closed or real-closed field is Kummerian. In particular, the groups of order 1 and 2 are Galois groups of Kummerian fields. By Artin-Schreier, these are the only finite absolute Galois groups, Kummerian or otherwise.



2) A finite field is Kummerian: the algebraic closure is obtained by adjoining roots of unity. Thus $hat{mathbb{Z}}$ is a Kummerian group.



3) An algebraic extension of a Kummerian field is Kummerian. Thus the class of Kummerian groups is closed under passage to closed subgroups. Combining with 2), this shows that any torsionfree procyclic group is Kummerian. On the other hand, the class of Kummerian groups is certainly not closed under passage to quotients, since $mathbb{Z}/3mathbb{Z}$ is not a Kummerian group.



4) A Kummerian group is metabelian: i.e., is an extension of one abelian group by another. This follows from Kummer theory, using the tower $overline{K} supset K^{operatorname{cyc}} supset K$, where $K^{operatorname{cyc}}$ is the extension obtained by adjoining all roots of unity.



In particular no local or global field (except $mathbb{R}$ and $mathbb{C}$) is Kummerian.



5) The field $mathbb{R}((t))$ is Kummerian. Its absolute Galois group is the profinite completion of the infinite dihedral group $langle x,y | x^2 = 1, xyx^{-1} = y^{-1} rangle$. In particular a Kummerian group need not be abelian.



Can anyone give a more interesting example?



ADDENDUM: In particular, it would be interesting to see a Kummerian group that does not have a finite index abelian subgroup, or to know that no such group exists.

Friday, 6 August 2010

set theory - How far is Lindelöf from compactness?

The answer is Yes.



Theorem. The following are equivalent for any Hausdorff
space $X$.



  1. $X$ is compact.


  2. $X^kappa$ is Lindelöf for any cardinal
    $kappa$.


  3. $X^{omega_1}$ is Lindelöf.


Proof. The forward implications are easy, using Tychonoff
for 1 implies 2, since if $X$ is compact, then
$X^kappa$ is compact and hence Lindelöf.



So suppose that we have a space $X$ that is not compact, but
$X^{omega_1}$ is Lindelöf. It
follows that $X$ is Lindelöf. Thus, there is a countable
cover having no finite subcover. From this, we may
construct a strictly increasing sequence of open sets
$U_0 subset U_1 subset dots subset U_n subset dots$
with the union $bigcuplbrace U_n ; | ; n in omega rbrace = X$.



For each $J subset omega_1$ of size $n$, let $U_J$ be
the set $lbrace s in X^{omega_1} ; | ; s(alpha) in U_n$ for each $alpha in J rbrace$. As the size of $J$ increases, the set $U_J$ allows more freedom on the
coordinates in $J$, but restricts more coordinates. If $J$ has
size $n$, let us call $U_J$ an open $n$-box, since it
restricts the sequences on $n$ coordinates. Let $F$ be the
family of all such $U_J$ for all finite $J subset omega_1$.



This $F$ is a cover of $X^{omega_1}$. To
see this, consider any point $s in X^{omega_1}$. For each $alpha in
omega_1$, there is some $n$ with $s(alpha) in
U_n$. Since $omega_1$ is uncountable,
there must be some value of $n$ that is repeated unboundedly
often, in particular, some $n$ occurs at least $n$ times. Let $J$
be the coordinates where this $n$ appears. Thus, $s$ is in
$U_J$. So $F$ is a cover.



Since $X^{omega_1}$ is Lindelöf,
there must be a countable subcover $F_0$. Let $J^*$ be
the union of all the finite $J$ that appear in the
$U_J$ in this subcover. So $J^*$ is a countable subset
of $omega_1$. Note that $J^*$ cannot be finite,
since then the sizes of the $J$ appearing in $F_0$
would be bounded and it could not cover
$X^{omega_1}$. We may rearrange indices
and assume without loss of generality that $J^*=omega$ is
the first $omega$ many coordinates. So $F_0$ is
really a cover of $X^omega$, by ignoring the
other coordinates.



But this is impossible. Define a sequence $s in
X^{omega_1}$ by choosing $s(n)$ to be
outside $U_{n+1}$, and otherwise arbitrary. Note that
$s$ is in $U_n$ in fewer than $n$ coordinates below
$omega$, and so $s$ is not in any $n$-box with $J subset omega$, since any such box has $n$ values in $U_n$.
Thus, $s$ is not in any set in $F_0$, so it is not a
cover. QED



In particular, to answer the question at the end, it suffices to take any uncountable $kappa$.

rt.representation theory - Questions about Quivers

The short answer is no. You just have to think of them as formal sums, in the same way that you can only think of elements of a group algebra as formal sums.



What you can do is think of the path category of a quiver, which is the category whose objects are the vertices of the quiver, and whose morphisms are paths in the quiver. A representation of the path algebra as an algebra is essentially just a functor from this path category to vector spaces.
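To make the functor picture concrete, here is a small sketch; the quiver, dimensions and matrices are invented for illustration and are not taken from the original answer. Vertices go to vector spaces, arrows go to matrices, and a path acts by composing the matrices along it.

```python
import numpy as np

quiver = {"a": (1, 2), "b": (2, 3)}   # arrow name -> (source vertex, target vertex)
dims = {1: 2, 2: 3, 3: 1}             # vertex -> dimension of the assigned vector space

rep = {                               # arrow -> matrix of shape (dims[target], dims[source])
    "a": np.arange(6.0).reshape(3, 2),
    "b": np.ones((1, 3)),
}

def act(path, v):
    """Apply a path (a list of composable arrows, applied left to right) to a
    vector v sitting at the source vertex of the first arrow."""
    for arrow in path:
        src, tgt = quiver[arrow]
        assert v.shape == (dims[src],), "path not composable with this vector"
        v = rep[arrow] @ v
    return v

# the path b after a goes from vertex 1 to vertex 3
print(act(["a", "b"], np.array([1.0, -1.0])))   # a 1-dimensional vector at vertex 3
```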

ct.category theory - What are the auto-equivalences of the category of groups?

Suppose $F:mathrm{Grp}tomathrm{Grp}$ is an equivalence. The object $mathbb{Z}inmathrm{Grp}$ is a minimal generator (it is a generator, and no proper quotient is also a generator), and this property must be preserved by equivalences. Since there is a unique minimal generator, we can fix an isomorphism $phi:mathbb Zto F(mathbb Z)$. Now $F$ must preserve arbitrary coproducts, so for all cardinals $kappa$, the isomorphism $phi$ induces an isomorphism $phi_kappa:L_kappato F(L_kappa)$, where $L_kappa$ is the free product of $kappa$ copies of $mathbb Z$. In particular, if $1$ is the trivial group, $phi_0:1to F(1)$ is an isomorphism.



Next pick a group $Ginmathrm{Grp}$, and consider a free presentation $L_1to L_0to Gto1$, that is, an exact sequence with the $L_i$ free. (For simplicity, we can take $L_0=L(G)$, the free group on the set underlying $G$, and $L_1$ to be the free group on the subset underlying the kernel of the obvious map $L_0to G$; this eliminates choices.) Since $F$ is an equivalence, we have another exact sequence $F(L_1)to F(L_0)to F(G)to F(1)$. Fixing bases for $L_1$ and $L_0$ we can use $phi$ to construct isomorphisms $L_ito F(L_i)$ for both $iin{0,1}$. Assuming we can prove the square commutes, one gets an isomorphism $phi_G:Gto F(G)$; this should not be hard, I guess.



The usual arguments then show that in this case the assignment $G mapsto phi_G$ is a natural isomorphism between the identity functor of $mathrm{Grp}$ and $F$.

Thursday, 5 August 2010

ag.algebraic geometry - Is the field of invariants $k(V)^G$ purely transcendental over $k$?

Reference: http://www.math.u-psud.fr/~colliot/mumbai04.pdf



Proposition 4.3. on page 18 in the above reference reads as follows:
Assume $k = overline{k}$. If $V$ is a finite dimensional vector space over $k$ and $G subset GL(V)$ is an (abstract) abelian group consisting of semisimple elements, then $k(V)^G$ is pure.



I would like to find an abelian group $G subset GL(V)$ such that $k(V)^G$ is not pure (if it exists it would need to be infinite due to Fischer's theorem, and not a connected solvable group according to Proposition 4.4).



Thanks.

Wednesday, 4 August 2010

complex geometry - question about torsion sheaf

Let's assume for simplicity that $M$ is a smooth, complex, projective variety.



The set of points where the coherent subsheaf $mathcal{F}$ is not locally free is a proper closed subset of $M$ (Hartshorne, Algebraic Geometry, Chapter II, ex. 5.8), so the stalk of $ker(det(j))$ at the generic point is zero, i.e. it is a torsion sheaf.



Moreover, you can say more. Indeed, since $mathcal{E}$ is locally free and $mathcal{E} /mathcal{F}$ is torsion-free, it follows that $mathcal{F}$ is a reflexive sheaf (Hartshorne, Stable Reflexive Sheaves, Theorem 1.1), so it is locally free except along a closed subset of codimension $geq 3$ (same reference, Corollary 1.4).



In particular, if $M$ is a curve or a surface then $ker(det(j))$ is zero.

Tuesday, 3 August 2010

mathematics education - Cool problems to impress students with group theory

Here is a striking application of a particular finite non-abelian group.



Explain to your students the issue of check digits as an error-detecting device on credit cards, automobile identification numbers, etc. Two common errors in communicating strings of numbers are a single-digit error (...372... --> ...382...) and an adjacent transposition error (...32... ---> ...23...). We want to design a check digit protocol in such a way that these two common errors are both detected (though not necessarily corrected: an error sign may flash in practice and the person is just prompted to enter the numbers all over again). The simplest check digit protocol uses modular arithmetic, as follows.



If we have an alphabet of m symbols and we agree that the strings of symbols we use all have n terms, say written as a_1a_2...a_n, then introduce a set of weights w_1,...,w_n and declare a string to be valid when



w_1a_1 + ... + w_na_n = 0 mod m.



In practice we take w_n = 1 or -1 and the unique choice of a_n that fits the congruence given all the other data is the check digit.



Theorem 1: All single digit errors are caught iff (w_i,m) = 1 for all i.



That means if a valid string -- one satisfying the above congruence -- has a single term changed then the result will not satisfy the congruence and thus the error is detected.



Theorem 2: All adjacent transposition errors are caught iff (w_{i+1}-w_i,m) = 1 for all i. (Wrap around when i = n.)



For example, say m = 10 (using the symbols 0,1,2,...,9). If all single digit errors are caught then each w_i has to be taken from {1,3,7,9}, but the difference of any two of these is even, so Theorem 2 won't apply.



The conclusion is that no check digit protocol of this form exists on Z/10 (for strings of length greater than 1) which detects all single digit errors and all adjacent transposition errors.
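By contrast, a prime modulus avoids the obstruction. Here is a sketch (my own, not part of the original answer) of the weighted protocol with m = 11 and weights 1, ..., 10, essentially the ISBN-10 choice mentioned further below, together with a brute-force check that both kinds of errors are caught on a sample string.

```python
from math import gcd

m = 11
w = list(range(1, 11))                      # weights w_1, ..., w_10

# Theorem 1's condition: every w_i is a unit mod m.
assert all(gcd(wi, m) == 1 for wi in w)
# Theorem 2's condition for adjacent positions: every difference w_{i+1} - w_i is a unit mod m.
assert all(gcd(w[i + 1] - w[i], m) == 1 for i in range(len(w) - 1))

def is_valid(digits):
    """Is w_1 a_1 + ... + w_n a_n = 0 (mod m)?"""
    return sum(wi * ai for wi, ai in zip(w, digits)) % m == 0

def append_check_digit(first_nine):
    # w_10 = 10 = -1 (mod 11), so the check digit is just the weighted sum of the rest;
    # the value 10 plays the role of ISBN's symbol 'X'.
    return list(first_nine) + [sum(wi * ai for wi, ai in zip(w, first_nine)) % m]

code = append_check_digit([0, 1, 2, 3, 4, 5, 6, 7, 8])
assert is_valid(code)

# every single-digit change and every adjacent transposition breaks the congruence
for i in range(len(code)):
    for d in range(10):
        if d != code[i]:
            assert not is_valid(code[:i] + [d] + code[i + 1:])
for i in range(len(code) - 1):
    if code[i] != code[i + 1]:
        assert not is_valid(code[:i] + [code[i + 1], code[i]] + code[i + 2:])
print("all simulated errors detected")
```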



Maybe you think we are just not being clever enough in our check digit protocol mod 10. For example, instead of those scaling operations a |---> w_i a, which are put together by addition, we could just define in some other way a set of permutations s_i of Z/m and declare a string a_1a_2...a_n to be valid when



s_1(a_1) + ... + s_n(a_n) = 0 mod m.



We can use this congruence to solve for a_n given everything else, so we can make check digits this way too.



Theorem 3. When m = 10, or more generally when m is even, there is an adjacent transposition error -- and in fact a transposition error in any two predetermined positions for some string -- that won't be caught.



The proof is a clever argument by contradiction, but I won't type up the details here.
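Not the proof, but here is a quick empirical illustration (my own) of what Theorem 3 asserts for m = 10: for randomly chosen permutations s_1, s_2 of Z/10, some transposition of the values in the first two positions always goes undetected.

```python
import random

random.seed(0)
m = 10
for _ in range(1000):
    s1 = random.sample(range(m), m)   # a random permutation of Z/10, as a lookup table
    s2 = random.sample(range(m), m)
    missed = [(x, y) for x in range(m) for y in range(x + 1, m)
              if (s1[x] + s2[y]) % m == (s1[y] + s2[x]) % m]
    # swapping such x, y in positions 1 and 2 leaves the checksum unchanged
    assert missed
print("every sampled pair of permutations misses some transposition")
```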



Since in practice we'd like to use 10 digits (or 26 letters -- still even) for codes, Theorem 3 is annoying. The book community with their ISBN code got around this by using m = 11 with a special check digit of X (a few years ago they switched to 13-digit ISBNs, whose check digit is computed mod 10 with alternating weights 1 and 3). It is natural to ask: is there some check digit protocol on 10 symbols which detects all single digit errors and all adjacent transposition errors?



Answer: Yes, using the group D_5 (non-abelian of order 10) in place of Z/10.



This was found by Verhoeff in 1969. It has hardly been adopted anywhere, due to inertia perhaps, even though its mechanism would in practice always be hidden in computer code, so the user wouldn't really need to know such brain-busting group theory as D_5.
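I won't try to reproduce Verhoeff's published tables here, but the shape of the protocol is easy to sketch; the code below is my own illustration, not Verhoeff's exact scheme. It codes the D_5 group law directly and finds the key ingredient, a permutation sigma with x*sigma(y) != y*sigma(x) whenever x != y (this is exactly what makes adjacent transpositions detectable), by a small backtracking search.

```python
def mul(a, b):
    """Multiplication in D_5.  Element k (0 <= k <= 9) encodes r^(k % 5) * f^(k // 5),
    where r is a rotation of order 5, f a reflection, and f r f = r^(-1)."""
    ra, fa = a % 5, a // 5
    rb, fb = b % 5, b // 5
    return (ra + (rb if fa == 0 else -rb)) % 5 + 5 * ((fa + fb) % 2)

def inv(a):
    return next(b for b in range(10) if mul(a, b) == 0)

def find_antisymmetric():
    """Backtracking search for a permutation sigma with x*sigma(y) != y*sigma(x)
    for all x != y; such permutations of D_5 exist (this is the heart of Verhoeff's scheme)."""
    sigma, used = [None] * 10, [False] * 10

    def extend(x):
        if x == 10:
            return True
        for v in range(10):
            if not used[v] and all(mul(x, sigma[y]) != mul(y, v) for y in range(x)):
                sigma[x], used[v] = v, True
                if extend(x + 1):
                    return True
                sigma[x], used[v] = None, False
        return False

    assert extend(0)
    return sigma

sigma = find_antisymmetric()

def s(i, a):
    """The permutation used in position i: sigma iterated i times."""
    for _ in range(i):
        a = sigma[a]
    return a

def is_valid(word):
    """A string a_1 ... a_n is declared valid when s_1(a_1) * ... * s_n(a_n) = e in D_5."""
    total = 0
    for i, a in enumerate(word, start=1):
        total = mul(total, s(i, a))
    return total == 0

def append_check_digit(word):
    total = 0
    for i, a in enumerate(word, start=1):
        total = mul(total, s(i, a))
    n = len(word) + 1
    return list(word) + [next(a for a in range(10) if s(n, a) == inv(total))]

code = append_check_digit([3, 1, 4, 1, 5, 9, 2, 6])
assert is_valid(code)
for i in range(len(code)):                      # all single-digit errors are caught
    for d in range(10):
        if d != code[i]:
            assert not is_valid(code[:i] + [d] + code[i + 1:])
for i in range(len(code) - 1):                  # all adjacent transpositions are caught
    if code[i] != code[i + 1]:
        assert not is_valid(code[:i] + [code[i + 1], code[i]] + code[i + 2:])
print("all simulated errors caught by the D_5 protocol")
```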



You can read about this by looking at



S. J. Winters, Error Detecting Schemes Using Dihedral Groups, The UMAP Journal 11 (1990), 299--308.



The only bad thing about this article by Winters is the funny use of the word scheme, e.g., the third section of the article is called (I am not making this up) "Dihedral Group Schemes". I recommend using the word "protocol" in place of "scheme" for this check digit business since it is more mathematically neutral.



By the way, Theorem 3 should not be construed as suggesting there is no method of using Z/10 to develop a check digit protocol which detects both of the two errors I'm discussing here. See, for example,



K. A. S. Abdel-Ghaffar, Detecting Substitutions and Transpositions of Characters, The Computer Journal 41 (1998) 270--277.



Section 3.4 is the part which applies to modulus 10. I have not read the paper in detail (since I'm personally not interested enough in it), but the end of the introduction is amusing. After describing what he will be able to do he says his method "is easier to understand compared to the construction based on dihedral groups". What the heck is so hard about dihedral groups? Sheesh.

Monday, 2 August 2010

rt.representation theory - On Category O in positive characteristic

Maybe I can answer the original question more directly, leaving aside the interesting recent geometric work discussed further in later posts like the Feb 10 one by Chuck: analogues of Beilinson-Bernstein localization on flag varieties and consequences for algebraic groups (Bezrukavnikov, Mirkovic, Rumynin).



The 1979 conference paper by Haboush may be hard to access and also hard to read in detail, but it raises some interesting questions especially about centers of certain hyperalgebras. I tried to give an overview in Math Reviews: MR582073 (82a:20049) 20G05 (14L40 17B40), Haboush, W. J., Central differential operators on split semisimple groups over fields of positive characteristic, Séminaire d'Algèbre Paul Dubreil et Marie-Paule Malliavin, 32ème année (Paris, 1979), pp. 35–85, Lecture Notes in Math., 795, Springer, Berlin, 1980.



The hyperalgebra here is the Hopf algebra dual of the algebra of regular functions on a simply connected semisimple algebraic group $G$ over an algebraically closed field of characteristic $p$, later treated in considerable depth by Jantzen in his 1987 Academic Press book Representations of Algebraic Groups (revised edition, AMS, 2003). After the paper by Haboush, for example, Donkin finished the determination of all blocks of the hyperalgebra.



While the irreducible (rational) representations are all finite dimensional and have dominant integral highest weights (Chevalley), the module category involves locally finite modules such as the infinite dimensional injective hulls (but no projective covers). The role of the finite Weyl group is now played by an affine Weyl group relative to $p$ (of Langlands dual type) with translations by $p$ times the root lattice. In fact, higher powers of $p$ make life even more complicated.



The older work of Curtis-Steinberg reduces the study of irreducibles to the finitely many "restricted" ones for the Lie algebra $mathfrak{g}$. For these and other small enough weights, Lusztig's 1979-80 conjectures provide the best hope for an analogue of the Kazhdan-Lusztig conjectures when $p>h$ (the Coxeter number). The recent work applies for $p$ "big enough": Andersen-Jantzen-Soergel, BMR, Fiebig.



Anyway, the hyperalgebra involves rational representations of $G$ including restricted representations of $mathfrak{g}$, while the usual enveloping algebra of the Lie algebra involves all its representations. But the irreducible ones are finite dimensional. I surveyed what was known then in a 1998 AMS Bulletin paper. Lusztig's 1997-1999 conjectures promised more insight into the non-restricted irreducibles and are now proved for large enough $p$ in a preprint by Bezrukavnikov-Mirkovic. This and their earlier work with Rumynin use a version of "differential operators" on a flag variety starting with the usual rather than divided-power (hyperalgebra) version of the universal enveloping algebra of $mathfrak{g}$.



To make a very long story shorter, Haboush was mainly looking for the center of the hyperalgebra (still an elusive beast unlike the classical enveloping algebra center, due to the influence of all powers of $p$). His weaker version of Verma modules may or may not lead further. But there is no likely analogue of the BGG category for the hyperalgebra in any case. That category depended too strongly on finiteness conditions and well-behaved central characters.



ADDED: It is a long story, but my current viewpoint is that the characteristic $p$ theory for both $G$ and $mathfrak{g}$ (intersecting in the crucial zone of restricted representations of $mathfrak{g}$) is essentially finite dimensional and requires deep geometry to resolve. True, the injective hulls of the simple $G$-modules with a highest weight are naturally defined and infinite dimensional (though locally finite), but the hope is that they will all be direct limits of finite dimensional
injective hulls for (the hyperalgebras of) Frobenius kernels relative to powers of $p$. Shown so far for $p geq 2h-2$ (Ballard, Jantzen, Donkin). In particular, the universal highest weight property of Verma modules in the BGG category (and others) is mostly replaced in characteristic $p$ by Weyl modules (a simple consequence of Kempf vanishing observed by me and codified by Jantzen). Then the problems begin, as Lusztig's conjectures have shown. The
Lie algebra case gets into other interesting territory for non-restricted modules.

Sunday, 1 August 2010

ag.algebraic geometry - Sheaves of Principal parts

The statement holds in general if $f : X to S$ is a morphism of locally ringed spaces. The fibred product of locally ringed spaces can be constructed explicitly without gluing constructions, and also restricts to the fibred product of schemes. See this article (German; shall I translate it?) for details. I will make use of the explicit description given there. Also I use stalks all over the place. Probably this is not the most elegant proof, but it works.



First we construct a homomorphism $mathcal{O}_X otimes_{f^{-1} mathcal{O}_S} mathcal{O}_X to Delta^{-1} mathcal{O}_{X times_S X}$. For that we compute the stalks at some point $x in X$ lying over $s in S$:



$(mathcal{O}_X otimes_{f^{-1} mathcal{O}_S} mathcal{O}_X)_x = mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x},$



$(Delta^{-1} mathcal{O}_{X times_S X})_x = mathcal{O}_{X times_S X,Delta(x)} = (mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x})_{mathfrak{q}}$,



where $mathfrak{q}$ is the kernel of the canonical homomorphism



$mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x} to kappa(x), a otimes b mapsto overline{ab}.$



Thus we get, at least, homomorphisms between the stalks (namely localizations). In order to get sheaf homomorphisms out of them, the following easy lemma is useful:



(*) Let $F,G$ be sheaves on a topological space $X$ and for every $x in X$ let $s_x : F_x to G_x$ be a homomorphism. Suppose that they fit together in the sense that for every open $U$, every section $f in F(U)$ and every $x in U$ there is some open neighborhood $x in W subseteq U$ and some section $g in G(W)$ such that $s_y$ maps $f_y$ to $g_y$ for all $y in W$. Then there is a sheaf homomorphism $s : F to G$ inducing the $s_x$ on stalks.



This can be applied in the above situation: Every section in a neighborhood of $x$ in $mathcal{O}_X otimes_{f^{-1} mathcal{O}_S} mathcal{O}_X$ is induced by an element in $mathcal{O}_X(U) otimes_{mathcal{O}_S(V)} mathcal{O}_X(U)$ for some neighborhoods $U$ of $x$ and $V$ of $s$ such that $U subseteq f^{-1}(V)$. This yields a section in $mathcal{O}_{X times_S X}$ on the basic-open subset $Omega(U,U,V;1)=U times_V U$ and thus a section of $Delta^{-1} mathcal{O}_{X times_S X}$ on $U$. It is easily seen that this construction yields the natural map on the stalks.



Thus we have a homomorphism $alpha : mathcal{O}_X otimes_{f^{-1} mathcal{O}_S} mathcal{O}_X to Delta^{-1} mathcal{O}_{X times_S X}$. Now let $J$ be the kernel of the multiplication map $mathcal{O}_X otimes_{f^{-1} mathcal{O}_S} mathcal{O}_X to mathcal{O}_X$ and $I$ be the kernel of the homomorphism $Delta^# : Delta^{-1} mathcal{O}_{X times_S X} to mathcal{O}_X$. Then for every $n geq 1$ our $alpha$ restricts to a homomorphism



$(mathcal{O}_X otimes_{f^{-1} mathcal{O}_S} mathcal{O}_X)/J^n to (Delta^{-1} mathcal{O}_{X times_S X})/I^n,$



which is given at $x in X$ by the natural map



$(mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x}) / mathfrak{p}^n to ((mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x}) / mathfrak{p}^n)_{mathfrak{q}}$,



where $mathfrak{p} subseteq mathfrak{q}$ is the kernel of the multiplication map $mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x} to mathcal{O}_{X,x}$.



We want to show that this map is an isomorphism, i.e. that the localization at $mathfrak{q}$ is not needed. For that it is enough to show that every element in $mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x}$, whose image in $mathcal{O}_{X,x}$ is invertible, is invertible modulo $mathfrak{p}^n$. Or in other words: Preimages of units are units with respect to the projection



$(mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x}) / mathfrak{p}^n to (mathcal{O}_{X,x} otimes_{mathcal{O}_{S,s}} mathcal{O}_{X,x}) / mathfrak{p}^1 cong mathcal{O}_{X,x}$.



However, this follows from the observation that the kernel $mathfrak{p}^1 / mathfrak{p}^n$ is nilpotent; cf. also this question.



I'm sure that there is also a proof which avoids stalks at all.



EDIT: So here is a direct construction of the homomorphism $mathcal{O}_X otimes_{f^{-1} mathcal{O}_S} mathcal{O}_X to Delta^{-1} mathcal{O}_{X times_S X}$:



Let $p_1,p_2$ be the projections $X times_S X to X$. Then we have for $i=1,2$ the homomorphism



$mathcal{O}_X to {p_i}_* mathcal{O}_{X times_S X} to {p_i}_* Delta_* Delta^{-1} mathcal{O}_{X times_S X} = (p_i Delta)_* Delta^{-1} mathcal{O}_{X times_S X} = Delta^{-1} mathcal{O}_{X times_S X}$,



and they commute over $f^{-1} mathcal{O}_S$. Thus we get the desired homomorphism. But I think stalks are convenient when we want to show that this is an isomorphism when modding out the ideals.