Friday 31 August 2012

earth - Asteroids in Lagrangian Points 4 & 5

There are asteroids "trapped" in Jupiter's Lagrange points 4 and 5, called the Trojans and the Greeks. Are there any asteroids in the Earth's L4 and L5? Have we seen asteroids in the Lagrange points of the Earth-Moon system? And why are only L4 and L5 stable, while L1, L2 and L3 require corrections to remain at those positions?

Thursday 30 August 2012

nt.number theory - Algorithms for Diophantine Systems

No. Given any set of Diophantine equations $f_1(z_1, \ldots, z_n) = \ldots = f_m(z_1, \ldots, z_n)=0$, we can rewrite it in terms of linear equations and quadratics. Create a new variable $w_{k_1 \cdots k_n}$ for each monomial $z_1^{k_1} \cdots z_n^{k_n}$ which occurs in the $f$'s, or which divides any monomial which occurs in the $f$'s. Turn each $f$ into a linear equation: for example, $x^3 y^2 + 7 x^2 y=5$ becomes $w_{32} + 7 w_{21} = 5$. Then create quadratic equations $z_i w_{k_1 \cdots k_i \cdots k_n} = w_{k_1 \cdots (k_i +1) \cdots k_n}$. For example, $x w_{22} = w_{32}$. This shows that the solvability of Diophantine equations is equivalent to that of Diophantine equations of degree $\leq 2$.
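A minimal sketch of this reduction in code (illustrative only; polynomials are represented as dicts mapping exponent tuples to coefficients, and the helper names are made up):

    from itertools import product

    def reduce_to_quadratic(polys, n):
        """Linearize Diophantine equations by introducing a variable w[k]
        for each monomial z_1^k_1 ... z_n^k_n and each of its divisors,
        plus the quadratic relations z_i * w[k] = w[k + e_i]."""
        monomials = set()
        for f in polys:
            for k in f:
                # every divisor of an occurring monomial also gets a w
                monomials.update(product(*[range(e + 1) for e in k]))
        # each equation becomes linear in the w's: sum of coeff * w[k] = 0
        linear = [list(f.items()) for f in polys]
        quadratic = []
        for k in monomials:
            for i in range(n):
                k_up = k[:i] + (k[i] + 1,) + k[i + 1:]
                if k_up in monomials:
                    quadratic.append((i, k, k_up))  # z_i * w[k] = w[k_up]
        return linear, quadratic

    # x^3 y^2 + 7 x^2 y - 5 = 0 becomes w(3,2) + 7 w(2,1) - 5 w(0,0) = 0,
    # where w(0,0) is the constant monomial 1
    lin, quad = reduce_to_quadratic([{(3, 2): 1, (2, 1): 7, (0, 0): -5}], 2)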



I'll also mention a very concrete case. The intersection of two quadrics in $\mathbb{P}^3$ is a genus $1$ curve. To my knowledge, no algorithm is known to test for the existence of rational points even in this case. (But my knowledge is not very large.)

ag.algebraic geometry - "Algebraic" topologies like the Zariski topology?

Interesting questions.
Actually, this is indeed related to work on defining a natural topology on categories, which is part of noncommutative algebraic geometry.



A. Rosenberg defined the left spectrum for a noncommutative ring in 1981 (see The left spectrum, the Levitzki radical, and noncommutative schemes), further generalized this spectrum to any abelian category (see Reconstruction of schemes), and proved the so-called Gabriel-Rosenberg reconstruction theorem, which led to the correct definition of a noncommutative scheme. I might have time to talk about this later. But for now, I shall just point out some papers, such as Spectra of noncommutative spaces.



In this paper, Rosenberg takes an abelian category as a "noncommutative space" and defines various spectra for different goals. (One remarkable application is to the representation theory of Lie algebras and quantum groups.)



One can define the spectrum not only for abelian categories; the notion also makes sense for a non-abelian category or a triangulated category. In the paper Spectra related with localizations, Rosenberg defined the spectrum directly related to localization of categories. Roughly speaking, the spectrum of a category is a family of topologizing subcategories (which by definition are closed under direct sums, subobjects and quotients; in particular, thick or Serre subcategories are topologizing) satisfying some additional conditions.



There is also another paper, Underlying spaces of noncommutative schemes, which investigates the underlying space of a noncommutative scheme or other noncommutative "space" in noncommutative algebraic geometry. If we want to save flat descent in general, we might lose the base change property. In this work, Rosenberg deals with the "quasi-topology" (which means dropping the base change property) and defines the associative spectrum of a category.
Moreover, for the goals of representation theory, he built a framework relating representation theory to the spectrum of an abelian category (in particular, a category of modules). Actually, in this language, irreducible representations are in one-to-one correspondence with the closed points of the spectrum; generic points of the spectrum also produce representations (not necessarily irreducible).



The most important part of this work is that it provides a completely categorical (algebro-geometric) way to do induction in an abelian category instead of the derived category. (I will explain this later if I have time.) This semester, Rosenberg gave us a lecture course using this framework to compute all the irreducible representations of the Weyl algebra, the enveloping algebra, quantized enveloping algebras, algebras of differential operators, $SL_2(\mathbb{R})$ and other algebraic groups, and related associative algebras. It works very efficiently. For example, computing the irreducible representations of $U(\mathfrak{sl}_3)$ is believed to be very complicated, but using this spectrum framework it becomes much simpler.



The general framework for these is contained in the paper Spectra, associated points and representation theory. If you want to see some concrete examples using this machine, you should look at Rosenberg's old book Noncommutative Algebraic Geometry And Representations Of Quantized Algebras.
There is another paper Spectra of `spaces' represented by abelian categories, providing the general theory for this machinery.



Furthermore, we can define the spectrum for an exact category, and even more generally for any Grothendieck site, and so for any category (because any category has a canonical Grothendieck pretopology). Rosenberg has recent work defining the spectrum for such categories -- Geometry of right exact `spaces' -- and the main motivation for this work is to provide a background for higher universal algebraic K-theory of right exact categories (a category with a family of strict epimorphisms can be taken as a one-sided exact category). Another important motivation is to study algebraic cycles on noncommutative schemes. (Warning: this paper is very abstract and hard to read. We will go through it in the lecture course this semester.)



All of these things will appear soon in his new book with Kontsevich (but I am not sure of the exact timing). If I have enough time to post, I will explain in more detail how the theory of the spectrum of abelian categories comes into representation theory, and how this picture is related to the derived picture of Beilinson-Bernstein and Deligne.
In fact, today we just learned Beck's theorem for Karoubian triangulated categories and will do the DG version of Beck's theorem later. Then he will introduce the spectrum for triangulated categories, and explain the noncommutative algebraic geometry behind the BBD machine and the connection with his abelian machine.

Tuesday 28 August 2012

co.combinatorics - String of integers puzzle

I apologize for not having the math background to put this question in a more formal way.
I'm looking to create a string of 796 letters (or integers) with certain properties.



Basically, the string is a variation on a De Bruijn sequence B(12,4), except that order and repetition within each n-length subsequence are disregarded;
i.e. ABBB, BABA, and BBBA are each equivalent to {AB}.
In other words, the main property of the string involves looking at consecutive groups of 4 letters within the larger string
(i.e. the 1st through 4th letters, the 2nd through 5th letters, the 3rd through 6th letters, etc.)
and then producing the set of letters that comprises each group (repetitions and order disregarded).



For example, in the string of 9 letters:



A B B A C E B C D



the first 4-letter group is: ABBA, which is comprised of the set {AB}
the second group is: BBAC, which is comprised of the set {ABC}
the third group is: BACE, which is comprised of the set {ABCE}
etc.



The goal is for every combination of 1-4 letters from a set of N letters to be represented once and only once among the resultant sets of the 4-letter groups of the original string.



For example, if a set of 5 letters {A, B, C, D, E} is being used,
then the possible 1-4 letter combinations are:
A, B, C, D, E,
AB, AC, AD, AE, BC, BD, BE, CD, CE, DE,
ABC, ABD, ABE, ACD, ACE, ADE, BCD, BCE, BDE, CDE,
ABCD, ABCE, ABDE, ACDE, BCDE



Here is a working example that uses a set of 5 letters {A, B, C, D, E}.



D D D D E C B B B B A E C C C C D A E E E E B D A A A A C B D D B



The 1st through 4th elements form the set: D
The 2nd through 5th elements form the set: DE
The 3rd through 6th elements form the set: CDE
The 4th through 7th elements form the set: BCDE
The 5th through 8th elements form the set: BCE
The 6th through 9th elements form the set: BC
The 7th through 10th elements form the set: B
etc.



I am hoping to find a working example of a string that uses 12 different letters (a total of 793 4-letter groups within a 796-letter string), starting (and if possible ending) with 4 of the same letter.



Here is a working solution for 7 letters:



AAAABCDBEAAACDECFAAADBFBACEAGAADEFBAGACDFBGCCCCDGEAFAGCBEEECGFFBFEGGGGFDEEEEFCBBBBGDCFFFFDAGBEGDDDDBE
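A quick script to verify the claimed property of this 7-letter example (a minimal sketch; the string is hardcoded, and the counts follow from $\binom{7}{1}+\binom{7}{2}+\binom{7}{3}+\binom{7}{4}=98$):

    from itertools import combinations

    s = ("AAAABCDBEAAACDECFAAADBFBACEAGAADEFBAGACDFBGCCCCDGEAFAGCBEEECGFF"
         "BFEGGGGFDEEEEFCBBBBGDCFFFFDAGBEGDDDDBE")

    # the set formed by each window of 4 consecutive letters
    windows = [frozenset(s[i:i + 4]) for i in range(len(s) - 3)]

    # every combination of 1-4 of the 7 letters, as sets
    targets = [frozenset(c) for r in range(1, 5)
               for c in combinations("ABCDEFG", r)]

    # each target set should occur exactly once among the 98 windows
    assert len(windows) == len(targets) == 98
    assert all(windows.count(t) == 1 for t in targets)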

zeta functions - Fractional powers of Dirichlet series?

For your first question: let $R$ be any commutative ring, and let $D(R)$ be the ring of formal Dirichlet series over $R$, i.e., the set of all functions $f: \mathbb{Z}^+ \rightarrow R$ under pointwise addition and convolution product.



Then the unit group of $D(R)$ is precisely the set of formal Dirichlet series $f$ such that
$f(1)$ is a unit in $R$.



As for your second question, it is indeed equivalent to asking whether $U(D(R))$ is $n$-divisible. Here, if we take $R = \mathbb{Z}$ as you asked, the answer is that for all $n \geq 2$, $U(D(\mathbb{Z}))$ is not $n$-divisible, and that even the Dirichlet series $\zeta(s)$ is not an $n$th power in $D(\mathbb{Z})$.



[Now, for some reason, I switch back to the classical notation, i.e., I replace the arithmetical function $f$ by its "Dirichlet generating series" $\sum_{n=1}^{\infty} \frac{f(n)}{n^s}$. It would have been simpler not to do this, but too late.]



Let
$f(s) = \sum_{n=1}^{\infty} \frac{a_n}{n^s}$ be any formal Dirichlet series, and suppose
that $g(s) = \sum_{n=1}^{\infty} \frac{b_n}{n^s}$ is a formal Dirichlet series such that
$g^2 = f$. Thus



$a_1 + \frac{a_2}{2^s} + \ldots = (b_1 + \frac{b_2}{2^s} + \ldots)(b_1 + \frac{b_2}{2^s} + \ldots)$



$= b_1^2 + \frac{2 b_1 b_2}{2^s} + \frac{2 b_1 b_3}{3^s} + \frac{2b_1 b_4 + b_2^2}{4^s} + \ldots$



(This multiplication is formal, i.e., it is true by definition.)



Thus $b_1 = \pm \sqrt{a_1}$. Suppose we take the plus sign, for simplicity. Then for all primes $p$,



$a_p = 2 b_1 b_p$, so



$b_p = \frac{a_p}{2 \sqrt{a_1}}$,



so we need $2 \sqrt{a_1}$ to divide $a_p$, so at least we need $a_p$ to be even for all primes $p$. Further conditions will come from the composite terms.



These same considerations show that if we replace the coefficient ring $\mathbb{Z}$ by
$\mathbb{Q}$ (or any coefficient field of characteristic $0$), then any formal Dirichlet series with $a_1 = 1$ is $n$-divisible for all positive integers $n$. In particular, you can write $\zeta(s)^{\frac{1}{n}}$ as a Dirichlet series with $\mathbb{Q}$-coefficients just by applying the above procedure and successively solving for the coefficients. Whether there is a nice formula for these coefficients is a question for a better combinatorialist than I to answer.



EDIT: Based on your comments below, I now understand that you are looking for a characterization of $U(D(\mathbb{Z}))$ as an abstract abelian group. I believe it is isomorphic to
$\lbrace \pm 1 \rbrace \times \prod_{i=1}^{\infty} \mathbb{Z}$. (Or, more transparently, to
the product of $\lbrace \pm 1 \rbrace$ with the product of infinitely many copies of $\prod_{i=1}^{\infty} \mathbb{Z}$, one for each prime number. But as abstract groups it amounts to the same thing.)

ag.algebraic geometry - what notions are "geometric" (characterized by geometric fibers)?

Sorry, the title might not be suggestive enough.



The question is about things like the following: A reductive group scheme is defined to be a (really nice) group scheme whose geometric fibers are reductive groups. So in some sense, "reductiveness" is some kind of "geometric" notion.



So what other properties of schemes can be checked only on geometric fibers? One example I know: a scheme over a field $k$ is projective iff it is projective after base change to $\bar{k}$. But can this be extended to an arbitrary base scheme?



In particular, is there any reference that collects such results? And WHY should this work? For example, WHY do geometrically reductive group schemes turn out to be the right generalization of reductive algebraic groups?



Sorry, this question might be a little too vague, and thank you in advance.

matrices - Properties of Graphs with an eigenvalue of -1 (adjacency matrix)?

There are a lot of graphs with this property. I will just introduce two classes that are very famous:



The friendship graph $F_n$, which is $K_1 \nabla nK_2$, where $\nabla$ denotes the join of two graphs. These graphs have the eigenvalue $-1$ with multiplicity $n-1$.



The second class is the complete graph $K_n$ with $1$, $2$, $3$ or $4$ edges removed. So this class in fact contains five different classes!

linear algebra - Field extension containing the eigenvectors of a Hermitian matrix

A special case is the following:



Pick:



An integer $n$ that is a square;



$$
H =F^{*} D F
$$



a matrix with $n$ rows and $n$ columns,



where



$D$ is a diagonal square matrix with $n$ rows and integer coefficients;
$F$ is the Fourier matrix with $n$ rows, defined by



$$
F = (1/\sqrt{n}) \, (s^{(i-1)(j-1)})
$$



where



$$
s = e^{-2 \pi i/n}
$$



$*$ means conjugate-transpose.



You get



$H$ is Hermitian with entries that are algebraic integers.



$H$ is also a circulant matrix.



Pick now:



$$
U =F^{*}
$$



so that



$$
Q(H) \subseteq Q(U) = Q(s)
$$



while



$$
Q(U,D) = Q(F^{*},D) = Q(s).
$$



Observe that
$$
Q(s)
$$
is the classical extension of $Q$ containing the $n$-th roots of unity,
so that it has degree



$$
\varphi(n)
$$



over $Q$, where $\varphi$ is Euler's totient function.



Thus,



The extension $Q(U,D)$ over $Q(H)$ has degree $d$ bounded above by $\varphi(n)$.



Observe that this degree $d$ is substantially smaller than $n!$ since



$$
d \leq \varphi(n) < n.
$$
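A small numerical check of this construction (a sketch using NumPy; the choices of $n$ and $D$ are arbitrary):

    import numpy as np

    n = 9                                   # a square integer
    s = np.exp(-2j * np.pi / n)
    i, j = np.indices((n, n))               # 0-based, i.e. (i-1), (j-1) above
    F = (s ** (i * j)) / np.sqrt(n)         # Fourier matrix
    D = np.diag(np.arange(1.0, n + 1))      # integer diagonal matrix
    H = F.conj().T @ D @ F                  # H = F* D F

    # H is Hermitian ...
    assert np.allclose(H, H.conj().T)
    # ... and circulant: shifting both indices by one leaves it unchanged
    assert np.allclose(H, np.roll(np.roll(H, 1, axis=0), 1, axis=1))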

Monday 27 August 2012

ra.rings and algebras - Rational exponential expressions

I don't have an answer to the one-variable decision
question that you asked.



But you did introduce multi-variable expressions, and there are some fascinating results on the multi-variable analogue of your domination problem. (Perhaps you know all this already...) That is, given two rational exponential expressions $f(x_1,\ldots,x_k)$ and $g(x_1,\ldots,x_k)$, the problem is to decide whether $f(n_1,\ldots,n_k) \leq g(n_1,\ldots,n_k)$ for all positive integers $(n_1,\ldots,n_k)$ except at most finitely often.



The answer is that this decision problem is undecidable,
and it is undecidable even for polynomial expressions.
There is no algorithm that will tell you when one
multi-variable (positive) polynomial expression eventually
dominates another.



The reason is that the decision problem for Diophantine equations is encodable into this domination problem. That work famously shows that the problem of deciding whether a given integer polynomial $p(x_1,\ldots,x_k)$ has a zero in the integers is undecidable. This is the famous MRDP solution to Hilbert's 10th problem.



It is easy to reduce the Diophantine problem to the domination problem, as follows. First, let us restrict to non-negative integers, for which the MRDP results still apply. Suppose we are given a polynomial expression $p(x_1,\ldots,x_k)$ over the integers, and want to decide if it has a solution in the natural numbers. This expression may involve some minus signs, which your expressions do not allow, but we will take care of that by moving all the minus signs to one side. Introduce a new variable $x_0$ and consider the domination problem:



  • Does $1 \leq (1+n_0)\,p(n_1,\ldots,n_k)^2$ hold for all natural numbers except finitely often?

We can expand the right-hand side, and move the negative signs to the left, to arrive at an instance of your domination problem, using only positive polynomials. Now, if $p(n_1,\ldots,n_k)$ is never $0$, then the answer to the stated domination problem is Yes, since the right-hand side will always be at least $1$ in this case. Conversely, if $p(n_1,\ldots,n_k) = 0$ has a solution, then we arrive at infinitely many violations of domination by using any choice of $n_0$. Thus, if we could decide the domination problem, then we could decide whether $p(n_1,\ldots,n_k) = 0$ has solutions in the natural numbers, which we cannot do by the MRDP theorem. QED



I think the best known result on the Diophantine side is that the problem remains undecidable with nine variables, and so the domination problem I described above is undecidable with ten variables. (Perhaps Bjorn Poonen will show up here and tell us a better answer?)



Of course, this doesn't answer the one-variable question
that you asked, and probably you know all this already.



My final remark is that if one can somehow represent the inverse pairing function, then one will get undecidability even in the one-variable case. That is, let $f(n,m) = (n+m)(n+m+1)/2 + m$ be one of the usual pairing functions, which is bijective between $\omega^2$ and $\omega$. Let $p$ be the function such that $p(f(n,m)) = n$, the projection of the pairs onto the first coordinate. If the expressions are enriched to allow $p$, then one can in effect work with several variables by coding them all via pairing into one variable, and in this case the domination problem in the one-variable case, for rational exponential expressions also allowing the function $p$, will be undecidable. It would seem speculative to suppose that $p$ is itself equivalent to a rational exponential expression, but do you know this?
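For concreteness, a minimal sketch of that pairing function and the projection $p$ (an illustrative inverse computed by integer arithmetic, not itself a rational exponential expression):

    from math import isqrt

    def pair(n, m):
        """Cantor pairing: a bijection from omega^2 to omega."""
        return (n + m) * (n + m + 1) // 2 + m

    def p(c):
        """Projection with p(pair(n, m)) == n."""
        t = (isqrt(8 * c + 1) - 1) // 2    # the diagonal t = n + m
        m = c - t * (t + 1) // 2
        return t - m

    assert all(p(pair(n, m)) == n for n in range(50) for m in range(50))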

oc.optimization control - Why is solving a MILP w/o an objective function so much faster?

MILP is an inherently difficult (NP-hard) problem, so the behavior of your solver will vary greatly based on which heuristics it uses, what branching strategy you use, and the formulation of the original problem.



It is possible that your MILP solver has a different default behavior for problems with an objective function and pure feasibility problems. Some heuristics (such as the feasibility pump) work well for a wide variety of feasibility problems, but may not necessarily be the most efficient for other classes of problems.



In short, efficiently solving MILPs is a black art that forms much of the basis of operations research.
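One thing worth trying is to make the feasibility structure explicit by giving the solver a constant objective, so it can stop at the first feasible point. A minimal sketch using the PuLP library (the tiny model here is made up):

    from pulp import LpProblem, LpVariable, LpMinimize, LpStatus

    prob = LpProblem("feasibility_only", LpMinimize)
    x = LpVariable("x", lowBound=0, cat="Integer")
    y = LpVariable("y", lowBound=0, cat="Integer")

    prob += 0 * x                 # constant objective: any feasible point is optimal
    prob += 3 * x + 2 * y >= 12   # the actual constraints of interest
    prob += x + 5 * y <= 20

    prob.solve()
    print(LpStatus[prob.status], x.value(), y.value())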

Sunday 26 August 2012

Could there be a brown dwarf in our solar system?

The Nemesis theory proposes that a low-mass star or brown dwarf in a highly elliptical orbit is a companion to our Sun, as a solution to the cyclical mass extinction problem (http://www.theatlantic.com/science/archive/2015/11/the-next-mass-extinction/413884/) and (http://www.space.com/22538-nemesis-star.html). Scientists noticed that some mass extinctions follow a seemingly cyclical pattern, so researchers proposed that a small companion star (or a brown dwarf) was disturbing comets or asteroids as its orbit periodically brought it near them, flinging them towards the Earth.



As interesting as this would be, WISE's infrared scans of the sky have revealed nothing, and there is currently no evidence supporting the theory. Furthermore, the orbit of any companion would have to be very large, and would therefore be very unstable, and the object would likely be detectable. The astrobiologist David Morrison stated of the theory: "(T)he Sun is not part of a binary star system. There has never been any evidence to suggest a companion. The idea has been disproved by several infrared sky surveys, most recently the WISE mission. If there were a brown dwarf companion, these sensitive infrared telescopes would have detected it."

ag.algebraic geometry - Are root stacks characterized by their divisor multiplicities?

Definitions/Background



Suppose $S$ is a scheme and $D \subseteq S$ is an irreducible effective Cartier divisor. Then $D$ induces a morphism from $S$ to the stack $[\mathbb{A}^1/\mathbb{G}_m]$ (a morphism to this stack is the data of a line bundle and a global section of the line bundle, modulo scaling). For a positive integer $k$, the root stack $\sqrt[k]{D/S}$ is defined as the fiber product



$\begin{matrix}
\sqrt[k]{D/S} & \longrightarrow & [\mathbb{A}^1/\mathbb{G}_m] \\
p\downarrow & & \downarrow \wedge k \\
S & \longrightarrow & [\mathbb{A}^1/\mathbb{G}_m]
\end{matrix}$



where the map $\wedge k: [\mathbb{A}^1/\mathbb{G}_m] \to [\mathbb{A}^1/\mathbb{G}_m]$ is induced by the maps $x \mapsto x^k$ (on $\mathbb{A}^1$) and $t \mapsto t^k$ (on $\mathbb{G}_m$). The morphism $p: \sqrt[k]{D/S} \to S$ is a coarse moduli space and is an isomorphism over $S \smallsetminus D$. Moreover, there is a divisor $D'$ on $\sqrt[k]{D/S}$ such that $p^*D$ is $kD'$.



The data of a morphism from $T$ to $\sqrt[k]{D/S}$ is equivalent to the data of a morphism $f: T \to S$ and a divisor $E$ on $T$ such that $f^*D = kE$.



The question




Suppose $\mathcal{X}$ is a DM stack, that $f: \mathcal{X} \to S$ is a coarse moduli space, that $f$ is an isomorphism over $S \smallsetminus D$, and that $f^*D = kE$ for an irreducible Cartier divisor $E$ on $\mathcal{X}$. Is the induced morphism $\mathcal{X} \to \sqrt[k]{D/S}$ an isomorphism?




I get the strong impression that the answer should be "yes", at least if additional conditions are placed on $\mathcal{X}$.



A counterexample



Here's a counterexample to show that some additional condition needs to be put on $\mathcal{X}$. Take $G$ to be $\mathbb{A}^1$ with a doubled origin, viewed as a group scheme over $\mathbb{A}^1$. Then $\mathcal{X}=[\mathbb{A}^1/G] \to \mathbb{A}^1$ is a coarse moduli space ("there's a $B(\mathbb{Z}/2)$ at the origin"). If we take $D \subseteq \mathbb{A}^1$ to be the origin, then the pullback to $\mathcal{X}$ is the closed $B(\mathbb{Z}/2)$ with multiplicity 1. Yet the induced morphism from $\mathcal{X}$ to $\sqrt[1]{D/\mathbb{A}^1} \cong \mathbb{A}^1$ is not an isomorphism.



In this case, $\mathcal{X}$ is a smooth DM stack, but has non-separated diagonal.

Saturday 25 August 2012

pr.probability - Is the infimum of the Ky Fan metric achieved?

Consider the probability space $(\Omega, {\cal B}, \lambda)$ where
$\Omega=(0,1)$, ${\cal B}$ is the Borel sets, and $\lambda$ is Lebesgue measure.



For random variables $W,Z$ on this space, we define the Ky Fan metric by



$$\alpha(W,Z) = \inf \lbrace \epsilon > 0: \lambda(|W-Z| \geq \epsilon) \leq \epsilon \rbrace.$$



Convergence in this metric coincides with convergence in probability.



Fix the random variable $X(\omega)=\omega$, so the law of $X$ is Lebesgue measure,
that is, ${\cal L}(X)=\lambda$.




Question: For any probability measure $\mu$ on $\mathbb{R}$, does there exist
a random variable $Y$ on $(\Omega, {\cal B}, \lambda)$ with law $\mu$ so that
$\alpha(X,Y) = \inf \lbrace \alpha(X,Z) : {\cal L}(Z) = \mu \rbrace$?




Notes:



  1. By Lemma 3.2 of Cortissoz,
    the infimum above is $d_P(\lambda,\mu)$:
    the Lévy-Prohorov distance between the two laws.


  2. The infimum is achieved if we are allowed to choose both random variables.
    That is, there exist $X_1$ and $Y_1$ on $(\Omega, {\cal B}, \lambda)$
    with ${\cal L}(X_1) = \lambda$, ${\cal L}(Y_1) = \mu$, and
    $\alpha(X_1,Y_1) = d_P(\lambda,\mu)$.
    But in my problem, I want to fix the random variable $X$.


  3. Why the result may be true: the
    space $L^0(\Omega, {\cal B}, \lambda)$ is huge. There
    are lots of random variables with law $\mu$. I can't think of any
    obstruction to finding such a random variable.


  4. Why the result may be false: the
    space $L^0(\Omega, {\cal B}, \lambda)$ is huge. A compactness
    argument seems hopeless to me. I can't think of any
    construction for finding such a random variable.


Friday 24 August 2012

stochastic calculus - Units in Ornstein-Uhlenbeck(OU) process

Take an OU process characterized by



X(0) = x

dX(t) = - a X(t) dt + b dW(t)


where $a, b > 0$. The parameter $a$ is usually interpreted as a dissipative term, and $b$ is a volatility term.



My question is this: What are the units of $a$ and $b$? Is it true that $a$ is $(\text{time})^{-1}$, and $b$ is unitless? Then how can one make sense of the variance, which approaches $b^2/(2a)$ as $t$ goes to infinity?
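A quick dimensional check (a sketch; writing $[X]$ for the units of $X$ and $T$ for time, and recalling $[dW] = T^{1/2}$ for Brownian increments):

$$[a X\, dt] = [X] \implies [a] = T^{-1}, \qquad [b\, dW] = [X] \implies [b] = [X]\, T^{-1/2},$$

$$\left[\frac{b^2}{2a}\right] = \frac{[X]^2\, T^{-1}}{T^{-1}} = [X]^2,$$

so the limiting variance carries the units of $X^2$, consistent with $b$ not being unitless.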



Thanks for your help.

Thursday 23 August 2012

combinatorial geometry - The Join of Simplicial Sets ~Finale~

Background




Let $X$ and $S$ be simplicial sets, i.e. presheaves on $\Delta$, the so-called topologist's simplex category, which is the category of finite nonempty ordinals with morphisms given by order-preserving maps.



How can we derive the structure of the face and degeneracy maps of the join from either of the two equivalent formulas for it below:



The Day Convolution, which extends the monoidal product to the presheaf category:



$$(X\star S)_{n}:=\int^{[c],[c^\prime] \in \Delta_a}X_{c}\times S_{c^\prime}\times \mathrm{Hom}_{\Delta_a}([n],[c]\boxplus[c^\prime])$$



where $\Delta_a$ is the augmented simplex category, and $\boxplus$ denotes the ordinal sum. The augmented simplex category is the category of all finite ordinals (note that this includes the empty ordinal, written $[-1]:=\emptyset$).



The join formula (for $J$ a finite nonempty linearly ordered set):



$$(X\star S)(J)=\coprod_{I\cup I'=J}X(I) \times S(I')$$
where $\forall (i \in I \text{ and } i' \in I')$, $i < i'$, which implies that $I$ and $I'$ are disjoint.



Then we would like to derive the following formulas for the face maps (and implicitly the degeneracy maps):




The $i$-th face map $d_i : (S\star T)_n \to (S\star T)_{n-1}$ is defined on $S_n$ and $T_n$ using the $i$-th face map on $S$ and $T$. Given $\sigma \in S_j \text{ and } \tau \in T_k$, we have:



$$d_i (\sigma, \tau) = (d_i \sigma,\tau) \text{ if } i \leq j,\ j \neq 0.$$
$$d_i (\sigma, \tau) = (\sigma,d_{i-j-1} \tau) \text{ if } i > j,\ k \neq 0.$$
$$d_0(\sigma, \tau) = \tau \in T_{n-1} \subseteq (S\star T)_{n-1} \text{ if } j = 0$$
$$d_n(\sigma, \tau) = \sigma \in S_{n-1} \subset (S\star T)_{n-1} \text{ if } k = 0$$




We note that the special cases at the bottom come directly from the inclusion of augmentation in the formula for the join.



Edit: Another note here: I got these formulas from a different source, so the indexing may be off by a factor of -1.
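To make the combinatorics concrete, here is a minimal sketch implementing these face maps for joins of standard simplices, with a $j$-simplex represented as a tuple of vertices and an $n$-simplex of the join as a pair $(\sigma, \tau)$ with $j + k + 1 = n$ (illustrative code, following the indexing conventions quoted above):

    def face(simplex, i):
        """i-th face of a simplex given as a tuple of vertices."""
        return simplex[:i] + simplex[i + 1:]

    def join_face(sigma, tau, i):
        """d_i on a pair (sigma, tau) in the join S * T."""
        j, k = len(sigma) - 1, len(tau) - 1
        n = j + k + 1
        if j == 0 and i == 0:
            return tau                         # d_0 lands in T_{n-1}
        if k == 0 and i == n:
            return sigma                       # d_n lands in S_{n-1}
        if i <= j:
            return (face(sigma, i), tau)       # act on the S factor
        return (sigma, face(tau, i - j - 1))   # act on the T factor

    # a 1-simplex joined with a 2-simplex is a 4-simplex of S * T
    print(join_face(("a0", "a1"), ("b0", "b1", "b2"), 3))
    # -> (('a0', 'a1'), ('b0', 'b2'))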



Question




How can we derive the concrete formulas for the face and degeneracy maps from the definition of the join? (I don't want a geometric explanation; there should be a precise algebraic or combinatorial reason why this is the case.)



Less importantly, how can we show that the two definitions of the join are in fact equivalent?



Edit:



Ideally, an answer would show how to induce one of the maps by a universal property.



Note also that in the second formula, we allow $I$ or $I'$ to be empty, and we extend the definition of a simplicial set to an augmented simplicial set such that $X([-1])=*$, i.e. the set with one element.



A further note about the first formula for the join: $\boxplus$ denotes the ordinal sum. That is, $[n]\boxplus [m]\cong [n+m+1]$. However, it is important to notice that there is no natural isomorphism $[n]\boxplus [m]\to [m]\boxplus [n]$. That is, there is no way to construct this morphism in a way that is natural in both coordinates of the bifunctor. This is important to note, because without it, it's not clear that the ordinal sum is asymmetrical.

homological algebra - Question about Ext

Perhaps the best way to think about this is as follows: pick your favorite injective resolution of $N$ and favorite projective resolution of $M$. Then $\mathrm{Ext}(M,N)$ is given by taking Hom between these complexes (NOT chain maps, just all maps of representations between the underlying modules), and putting a differential on those in the usual way.



Now, use the usual identification $\mathrm{Hom}(A,B)\cong \mathrm{Hom}(A\otimes B^*,1)$ on this complex. So you see, it's the same as if we had tensored the projective resolution of $M$ with the dual of the injective resolution of $N$, which is a projective resolution of $N^*$, and then taken Hom to $1$. Of course, the tensor product of two projective resolutions is a projective resolution of the tensor product, so we see this complex also computes $\mathrm{Hom}(N^*\otimes M,1)$.



It also follows by abstract nonsense in one line: isomorphic functors have isomorphic derived functors.

Wednesday 22 August 2012

big list - Looking for book with good general overview of math and its various branches

Mathematics: Its Content, Methods and Meaning is an excellent overview of the full body of mathematics. It is large (3 volumes), but comes in a paperback edition that includes all three.



The draw is that it is edited by three well-known Russian mathematicians (Aleksandrov, Kolmogorov, Lavrentev), who wrote some of the articles and solicited the rest from many other Russian luminaries. It was developed as a compendium able to communicate both the vibrancy and the importance of each of the areas of mathematics, so that science ministers in Russia could better understand mathematics as mathematicians do.



The translation into English is excellent.



The first article, a General View of Mathematics, is highly recommended from a philosophical, historical, and phenomenological point of view.

Tuesday 21 August 2012

What is an intuitive view of adjoints? (version 1: category theory)

I like Wikipedia's motivation for an adjoint functor as a formulaic solution to an optimization problem (though I'm biased, because I helped write it). In short, "adjoint" means most efficient and "functor" means formulaic solution.



Here's a digest version of the discussion to make this more precise:



An adjoint functor is a way of giving the most efficient solution to some optimization problem via a method which is formulaic ... For example, in ring theory, the most efficient way to turn a rng (like a ring with no identity) into a ring is to adjoin an element '1' to the rng, adjoin no unnecessary extra elements (we will need to have r+1 for each r in the ring, clearly), and impose no relations in the newly formed ring that are not forced by axioms. Moreover, this construction is formulaic in the sense that it works in essentially the same way for any rng.



The intuitive description of this construction as "most efficient" means "satisfies a universal property" (in this case an initial property), and that it is intuitively "formulaic" corresponds to it being functorial, making it an "adjoint" "functor".



In this asymmetric interpretation, the theorem (if you define adjoints via universal morphisms) that adjoint functors occur in pairs has the following intuitive meaning:



"The notion that F is the most efficient solution to the (optimization) problem posed by G is, in a certain rigorous sense, equivalent to the notion that G poses the most difficult problem which F solves."




Edit: I like the comment below emphasizing that an adjoint functor is a globally defined solution. If
$G: C \to D$, it may be true that terminal morphisms exist to some $C$'s but not all of them; when they always exist, this guarantees that they extend to define a unique functor $F: D \to C$ such that $F \dashv G$. This result could have the intuitive interpretation "globally defined solutions are always formulaic".



Compare this, for example, to the basic theorem in algebraic geometry that a global section (of the structure sheaf) of $\mathrm{Spec}(A)$ is always defined by a single element of $A$; the global sections functor is an adjoint functor representable by the formula $\mathrm{Hom}(-,\mathrm{Spec}(\mathbb{Z}))$, so this is actually directly related.

rt.representation theory - Twin categories in representation of Lie algebra

The equivalence (or something very close, I haven't checked carefully what is written) follows from Beilinson-Bernstein localization. The two categories can be realized, roughly speaking, as D-modules on $B\backslash G/N$ and $N\backslash G/B$, and the equivalence comes from the interchange of the two sides. Slightly more precisely, Beilinson-Bernstein tells us that (assuming we ignore singular infinitesimal characters, where things need to be slightly modified) if we want representations on which $Z$ acts semisimply, we look at twisted D-modules on $G/B$ with twisting given by (a lift from $\mathfrak{h}^*/W$ to $\mathfrak{h}^*$ of) the eigenvalues of the $Z$-action.
Equivalently, these are D-modules on $G/N$ which are weakly $H$-equivariant -- meaning locally constant along the fibers of $G/N \to G/B$, with $\mathfrak{h}$ acting with strictly prescribed semisimple monodromy -- i.e. we prescribe the monodromy along these fibers. If we want representations with a locally finite $Z$-action, we just look at D-modules on $G/N$ and ask for them to be locally constant along the fibers, but do not strictly prescribe monodromies.
Now the two conditions you give for representations correspond to asking for these D-modules to be $N$-equivariant in the strict case or $B$-equivariant in the locally finite case.
In any case the whole picture is symmetric under exchanging left and right, hence the equivalence.



(There are two other categories which are symmetric under the exchange of left and right -- if we impose the weak/locally finite conditions on both sides, we get the category of Harish-Chandra bimodules -- i.e. $(\mathfrak{g}+\mathfrak{g}, G)$-Harish-Chandra modules -- which correspond to representations of $G$ considered as a real Lie group. If we impose strict conditions on both sides, we get the Hecke category, which appears as intertwining functors acting on categories of representations and is the subject of Kazhdan-Lusztig theory. Category O, in these two forms, is some kind of intermediate form -- both of the above are monoidal categories, and Category O is a bimodule for them, with your involution exchanging the two actions.)



As for a reference, this is standard, but I don't know the proper reference. Similar things are discussed in the Beilinson-Ginzburg-Soergel JAMS paper on Koszul duality patterns in representation theory, and in the Beilinson-Ginzburg paper on wall-crossing functors (available on the arXiv). I presume Ben Webster will let us know.

Monday 20 August 2012

nt.number theory - Infinitely many prime numbers of the form $n^{2^k}+1$?

It took me a while to find this: http://www.pnas.org/content/94/4/1054.full



Anyway, it is by Friedlander and Iwaniec (1997). They proved that there are infinitely many primes of the form $x^2 + y^4$. They mention near the end that they do not have a proof for primes of the form $x^2 + y^6$ but would like one. So there is a way to go to settle $x^2 + 1$.



FYI, what I did (not remembering the title, authors, or anything but the result) was write a program to generate the primes $x^2 + y^4$ and put the first dozen into Sloane's sequence site search feature.
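A minimal sketch of such a program (illustrative code, with a naive trial-division primality test):

    def isprime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def friedlander_iwaniec(limit):
        """Primes of the form x^2 + y^4 below `limit`, sorted."""
        found = set()
        for y in range(1, int(limit ** 0.25) + 2):
            for x in range(1, int(limit ** 0.5) + 2):
                n = x * x + y ** 4
                if n < limit and isprime(n):
                    found.add(n)
        return sorted(found)

    print(friedlander_iwaniec(300))
    # -> [2, 5, 17, 37, 41, 97, 101, 137, 181, 197, 241, 257, 277, 281]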

ct.category theory - A canonical and categorical construction for geometric realization

As to "why is the unit interval the canonical interval?", there is an interesting universal property of the unit interval given in some observations of Freyd posted at the categories list, characterizing $[0, 1]$ as a terminal coalgebra of a suitable endofunctor on the category of posets with distinct top and bottom elements.



There are various ways of putting it, but for the purposes of this thread, I'll put it this way. Recall that the category of simplicial sets is the classifying topos for the (geometric) theory of intervals, where an interval is a totally ordered set (toset) with distinct top and bottom. (This really comes down to the observation that any interval in this sense is a filtered colimit of finite intervals -- the finitely presentable intervals -- which make up the category $\Delta^{op}$.) Now there is a join $X \vee Y$ on intervals $X$, $Y$ which identifies the top of $X$ with the bottom of $Y$, where the bottom of $X \vee Y$ is identified with the bottom of $X$ and the top of $X \vee Y$ with the top of $Y$. This gives a monoidal product $\vee$ on the category of intervals, hence we have an endofunctor $F(X) = X \vee X$. A coalgebra for the endofunctor $F$ is, by definition, an interval $X$ equipped with an interval map $X \to F(X)$. There is an evident category of coalgebras.



In particular, the unit interval $[0, 1]$ becomes a coalgebra if we identify $[0, 1] \vee [0, 1]$ with $[0, 2]$ and consider the multiplication-by-2 map $[0, 1] \to [0, 2]$ as giving the coalgebra structure.



Theorem: The interval $[0, 1]$ is terminal in the category of coalgebras.



Let's think about this. Given any coalgebra structure $f: X \to X \vee X$, any value $f(x)$ lands either in the "lower" half (the first $X$ in $X \vee X$), the "upper" half (the second $X$ in $X \vee X$), or at the precise spot between them. Thus, you could think of a coalgebra as an automaton where on input $x_0$ there is output of the form $(x_1, h_1)$, where $h_1$ is either upper or lower or between. By iteration, this generates a behavior stream $(x_n, h_n)$. Interpreting upper as 1 and lower as 0, the $h_n$ form a binary expansion to give a number between 0 and 1, and therefore we have an interval map $X \to [0, 1]$ which sends $x_0$ to that number. Of course, should we ever hit $(x_n, between)$, we have a choice to resolve it as either $(\mathrm{bottom}_X, \mathrm{upper})$ or $(\mathrm{top}_X, \mathrm{lower})$ and continue the stream, but these streams are identified, and this corresponds to the identification of binary expansions



$$.h_1 \ldots h_{n-1} 100000\ldots = .h_1 \ldots h_{n-1} 011111\ldots$$



as real numbers. In this way, we get a unique well-defined interval map $X \to [0, 1]$, so that $[0, 1]$ is the terminal coalgebra.
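As a sanity check, here is a minimal sketch of this behavior stream for the coalgebra $[0,1] \to [0,1] \vee [0,1]$ itself, i.e. the doubling map, reading off binary digits (illustrative code, resolving the midpoint upward):

    def digits(x, n):
        """First n binary digits of x in [0, 1] via the doubling coalgebra:
        lower half -> 0, upper half -> 1."""
        out = []
        for _ in range(n):
            if 2 * x < 1:            # f(x) lands in the lower copy
                out.append(0)
                x = 2 * x
            else:                    # f(x) lands in the upper copy
                out.append(1)
                x = 2 * x - 1
        return out

    print(digits(0.625, 8))    # -> [1, 0, 1, 0, 0, 0, 0, 0]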



(Side remark that the coalgebra structure is an isomorphism, as always with terminal coalgebras, and the isomorphism $[0, 1] \vee [0, 1] \to [0, 1]$ is connected with the interpretation of the Thompson group as a group of PL automorphisms $\phi$ of $[0, 1]$ that are monotonic increasing and with discontinuities at dyadic rationals.)

Sunday 19 August 2012

Good books on Arithmetic Functions ?

A MathSciNet search set to Books and with "arithmetic functions" entered into the "Anywhere" field yields 148 matches. Some of the more promising ones:




The theory of arithmetic functions.
Proceedings of the Conference at Western Michigan University, Kalamazoo, Mich., April 29--May 1, 1971. Edited by Anthony A. Gioia and Donald L. Goldsmith. Lecture Notes in Mathematics, Vol. 251. Springer-Verlag, Berlin-New York, 1972. v+287 pp.








Narkiewicz, Władysław Elementary and analytic theory of algebraic numbers. Monografie Matematyczne, Tom 57. PWN---Polish Scientific Publishers, Warsaw, 1974. 630 pp. (errata insert).








Babu, Gutti Jogesh
Probabilistic methods in the theory of arithmetic functions.
Macmillan Lectures in Mathematics, 2. Macmillan Co. of India, Ltd., New Delhi, 1978.








Elliott, P. D. T. A. Arithmetic functions and integer products. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 272. Springer-Verlag, New York, 1985.








Sivaramakrishnan, R. Classical theory of arithmetic functions. Monographs and Textbooks in Pure and Applied Mathematics, 126. Marcel Dekker, Inc., New York, 1989.








Schwarz, Wolfgang; Spilker, Jürgen Arithmetical functions. An introduction to elementary and analytic properties of arithmetic functions and to some of their almost-periodic properties. London Mathematical Society Lecture Note Series, 184. Cambridge University Press, Cambridge, 1994.








Tenenbaum, Gérald Introduction to analytic and probabilistic number theory. Translated from the second French edition (1995) by C. B. Thomas. Cambridge Studies in Advanced Mathematics, 46. Cambridge University Press, Cambridge, 1995.


Saturday 18 August 2012

Definition of congruence subgroup for non-matrix groups

Even though every linear algebraic group (understood to mean affine of finite type) can be embedded into ${\rm{GL}}_n$, if we change the embedding then the notion of "congruence subgroup" may change (in the sense of a "group of integral points defined by congruence conditions"). So a more flexible notion is that of an arithmetic subgroup, or an $S$-arithmetic subgroup for a non-empty finite set of places $S$ (containing the archimedean places), in which case we can make two equivalent definitions: a subgroup commensurable with the intersection of $G(k)$ with a compact open subgroup of the finite-adelic points, or a subgroup commensurable with the intersection of $G(k)$ with the $S$-integral points of some ${\rm{GL}}_n$ relative to a fixed closed subgroup inclusion of $G$ into ${\rm{GL}}_n$ (i.e., if $\mathcal{G}$ denotes the schematic closure in ${\rm{GL}}_n$ over $\mathcal{O}_{k,S}$ of $G$ relative to such an embedding over $k$, we impose commensurability with $\mathcal{G}(\mathcal{O}_{k,S})$).



Personally, I like to view the ${\rm{GL}}_n$-embedding as just a quick-and-dirty way to make examples of flat affine integral models (of finite type) over rings of integers (via schematic closure from the generic fiber). However, in some proofs it is really convenient to reduce to the case of ${\rm{GL}}_n$ and do a calculation there. It is not evident, when the base isn't a field or a PID, whether flat affine groups of finite type admit closed subgroup inclusions into some ${\rm{GL}}_n$ (if anyone can prove this even over the dual numbers, let me know!), so in the preceding definition it isn't evident (if $\mathcal{O}_{k,S}$ is not a PID) whether the $\mathcal{G}$'s obtained from such closures account for all flat affine models of finite type over the ring of $S$-integers. In other words, we really should be appreciative of ${\rm{GL}}_n$.

Friday 17 August 2012

arithmetic geometry - Existence of fine moduli space for curves and elliptic curves

Here is a thought on the first question. What you need to know (at least to get an algebraic space; I'll let others be more careful than I if you want a scheme) is how large n must be to ensure that an automorphism of a smooth genus g curve X which fixes n points must be the identity. Let G be the cyclic group generated by this automorphism: then the map X -> X/G is totally ramified at your n fixed points. So by Riemann-Hurwitz, g(X) [NO, 2g(X)-2, THANKS, BJORN] is at least -2|G| + n(|G|-1). If G is nontrivial, in other words, g is at least n-4 [NO, 2g+2, THANKS, BJORN]. So I think g+5 [NO, 2g+3, THANKS, BJORN] marked points should be enough. That this is necessary can be seen by taking g=2; on M_{2,6} you'll have a bunch of loci with an extra involution, parametrizing curves whose marked points are precisely the Weierstrass points.



[NO MORE LATE-NIGHT RIEMANN-HURWITZ: THANKS TO BJORN FOR CORRECTING THE ERRORS]

life - What earth organisms would survive if they arrived on Mars?

On the surface of Mars, probably none, since it's too dry or too cold, or both, for them to stay active.



Spores or other dormant forms could probably survive for centuries, until radiation gradually destroys the organic molecules necessary to get back into an active state.



But there are "Mars Special Regions" where it cannot be ruled out that either Earth microbes or potential Martian microbes - if any exist - would be able to spread.



Things might look quite different underground, especially in warm and wet zones, which may exist thanks to geothermal heat and ground water.
There, at least lithotrophic bacteria ("chemolithoautotrophs") could survive and spread.
Those could then form the basis of a food chain for other organisms.



Plants dependent on sunlight, or animals dependent on oxygen would hardly survive.



For a list of species, see table 2 on page 895 of the above-referenced paper. The following species and genera are mentioned in the table:
Psychromonas ingrahamii, Planococcus halocryophilus strain Or1, Paenisporoarcina sp. and Chryseobacterium, Rhodotorula glutinis (a yeast), Colwellia psychroerythraea, Nitrosomonas cryotolerans, and the lichen Pleopsidium chlorophanum.

orbital elements - Eclipse in sun-synchronous orbits

I am trying to figure out the eclipse periods of a satellite orbiting the Earth.
The satellite is placed in a circular sun-synchronous orbit at 800 km altitude. The LTAN (local time of ascending node) is 11:00 am.
First question: what exactly is LTAN? And how is it related to the other orbital parameters (like eccentricity, inclination, etc.)?
Second, how can I use it to compute the eclipse periods?

solar system - How would parallax affect an object at 200-1000 AU (for example, the 9th planet)?

We know from this question that an object large enough to detect is hypothesized to lie between 200 and 1000 AU away, known now as the 9th planet. It should move about 40 arcseconds per year. It seems to me that the parallax motion per year would be larger than this. Would it be easier to detect the 9th planet by its parallax than by its movement around the Sun?
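A rough check of that intuition (a sketch, using the small-angle approximation with a 1 AU baseline and a distance $d$ in AU):

$$\pi \approx \frac{206265''}{d}, \qquad \pi(200\ \mathrm{AU}) \approx 1000'', \qquad \pi(1000\ \mathrm{AU}) \approx 200'',$$

so the annual parallax sweep would indeed be far larger than the ~40''/year of orbital motion.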

Thursday 16 August 2012

the sun - Can a solar flare destroy every electronic item on earth?

Here's the how: very energetic charged particles interact with the Earth's magnetic field, and when they do so they emit electromagnetic radiation. If the energy of this radiation were high enough, then when it reached wires/conductors at the Earth's surface the opposite effect would take place: i.e. the EM radiation would result in a flow of charged particles (a current). Such currents, if strong enough, could damage very many electronic and electric devices.



As to the possibility: it seems unlikely that an event of such high energy would take place (i.e. one destroying all or nearly all electronic equipment), but it could be possible. We don't have sufficient data.

observation - What uncertainty does an error bar signify in astronomy?

The most common way to represent uncertainty is with symmetric error bars around a central point. These are in turn commonly interpreted as a 95 % confidence interval: the actual data point is the centre of a Gaussian, and the error bars span the region containing 95 % of its probability (roughly $\pm 2$ standard deviations).



This is only the statistical uncertainty, and it is often not explicitly stated. One also refers to measurement and discovery with different confidence intervals: discovery is commonly only claimed with 5-sigma confidence, i.e. if the measurement lies more than 5 standard deviations away from the theory or prediction, you've made a discovery.



Note, we are leaving out here systematic uncertainty and instrument bias, which can only increase the total uncertainty. Usually it is assumed that there is no correlation, so the two are combined in quadrature, i.e. the square root of the sum of their squares.
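For example, a minimal sketch of that combination (the numbers are made up):

    import math

    stat = 0.12    # statistical uncertainty
    syst = 0.05    # systematic uncertainty

    # uncorrelated uncertainties add in quadrature
    total = math.hypot(stat, syst)
    print(round(total, 3))    # -> 0.13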



Long story short - always ask what the error bars represent, especially if they look too "clean".

applications - Defining "average rank" when not every ranking covers the whole set

Here's a mathematical modeling problem I came across while working on a hobby project.



I have a website that presents each visitor with a list of movie titles. The user has to rank them from most to least favorite. After each visit, I want to create a cumulative ranking that takes into account each visitor's individual ranking. Normally I would just take the mean ordinal rank: e.g., if Person A rated "Avatar" 10th and Person B rated it 20th, its cumulative rank would be 15th. However, new movies will be added to the list as the website grows, so each person will have ranked only a subset of the full movie list.



Any thoughts on how I can define "average rank" when some rankings do not cover the whole set? My best idea so far is to model this as a directed graph, where nodes are movies and weighted edges are preferences (e.g. "10 people ranked 'Avatar' right above 'District 9'"), and then to find sinks and sources. How else could one go about this?
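One simple baseline (a sketch of a possible approach, distinct from the graph idea above): convert each position to a percentile within that visitor's own list, then average the percentiles, so partial rankings stay comparable. The data here is made up:

    from collections import defaultdict

    rankings = [
        ["Avatar", "District 9", "Up"],    # visitor 1, best to worst
        ["Avatar", "Up"],                  # visitor 2 ranked fewer movies
    ]

    scores = defaultdict(list)
    for ranking in rankings:
        n = len(ranking)
        for pos, movie in enumerate(ranking):
            # 0.0 = favourite, 1.0 = least favourite, for any list length
            scores[movie].append(pos / (n - 1) if n > 1 else 0.5)

    average = {m: sum(v) / len(v) for m, v in scores.items()}
    print(sorted(average, key=average.get))    # best first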



(Sorry if this question is too applied.)

Wednesday 15 August 2012

observation - What is an "arc" spectrum ?

An arc spectrum is one produced by a discharge lamp, where the discharge passes through an ionised gas (in the case of He-Ar, a mixture of helium and argon), which produces a predictable line emission spectrum.



They are often used to provide a calibration spectrum for spectrometers.

Covering maps of Riemann surfaces vs covering maps of $k$-algebraic curves

No, the property of having small neighbourhoods whose preimage is a disjoint union
of $n$ homeomorphic open sets does not hold in the Zariski topology (once $n > 1$, i.e.
the cover is non-trivial). The reason is
that non-empty Zariski open sets are always very big; in the case of a curve,
their complement is always just a finite number of points. In particular, two different
non-empty Zariski opens are never disjoint.



There are two ways that one rescues the situation: the first is to use the differential
topology view-point on covers: they are proper submersions between manifolds of the same dimension.
In the context of compact Riemann surfaces, both source and target have the same dimension,
and maps are automatically proper, so it is just the submersion property that is left
to think about. It is a property about how tangent spaces map, which can be translated into
the algebraic context (e.g. using the notion of Zariski tangent spaces).



So if $f: X \rightarrow Y$ is a regular morphism (regular morphism is the algebraic geometry terminology for an everywhere defined map given locally by rational functions) of projective curves, we can say that
$f$ is unramified at a point $p \in X$ if $f$ induces an isomorphism from the Zariski tangent
space of $X$ at $p$ to the Zariski tangent space of $Y$ at $f(p)$.



For historical reasons, if $f$ is unramified at every point in its domain, we say that
$f$ is etale (rather than a cover), but this corresponds precisely to the notion of a covering map when we pass to Riemann surfaces.



This leads to the more sophisticated rescue: one considers all the etale maps from (not necessarily projective or connected) curves $X$ to $Y$, and considers them as forming a topology on $Y$, the so-called etale topology of $Y$. This leads to many important notions
and results, since it allows one to transport many topological notions (in particular, fundamental groups and cohomology) to the algebraic context.

Tuesday 14 August 2012

Habitable Planet around Red Dwarf

Well, it's a relatively new field of study, so I don't think there are any exact answers, but I've read up on this a bit out of interest. Red dwarfs have two main disadvantages. One you've mentioned: they're relatively cold compared to our star, so the planet would need to be comparatively close and would therefore probably be tidally locked. The other problem is that red dwarfs tend to have very active sunspots compared to our Sun, so the planet would need a crazily strong magnetic field to protect it. Neither of those is a "deal breaker", but perhaps they make habitability less likely than around a Sun-like star.



A tidally locked planet wouldn't necessarily have a scorching hot side and a freezing cold side. Modeling suggests that heat would circulate. If the planet had extensive oceans on the star-facing side, for example, that could create cloud cover and prevent intense heating. You could also have the star-facing side far enough away to be temperate and the far side icy, or a livable section in a ring around the planet.



There's also the possibility that a tidally locked planet could have winds that we would find very difficult to adapt to.



I think you'd ideally have the sun side be the habitable side for photosynthesis.



On the flip side, if the star is too large, its life would probably be too short for the planet to cool down enough to be habitable for us. The lifetime of a star goes roughly as the inverse cube of its mass, so a star twice the mass of our Sun would only have a main sequence of 1.2-1.3 billion years. Much bigger than that, and there's not enough time for a planet to cool off and for the solar system to get past its heavy bombardment period.
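A quick check of that scaling (a sketch, taking the Sun's main-sequence lifetime as roughly 10 billion years and using the inverse-cube rule quoted above):

$$t_{\rm MS} \approx 10\ \mathrm{Gyr} \times \left(\frac{M}{M_\odot}\right)^{-3}, \qquad t_{\rm MS}(2 M_\odot) \approx \frac{10}{8} \approx 1.25\ \mathrm{Gyr}.$$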



So I'd give a red dwarf a better chance of having a habitable planet than a star 2.5 or more times the mass of our Sun. That, and red dwarfs are the most common stars in the galaxy, so I think there's a real probability that some red dwarfs have habitable planets.

How can we tell that the Milky Way is a spiral galaxy?

The clues we have to the shape of the Milky Way are:



1) When you look toward the galactic center with your eye, you see a long, thin strip. This suggests a disk seen edge-on, rather than an ellipsoid or another shape. We can also detect the bulge at the center. Since we see spiral galaxies which are disks with central bulges, this is a bit of a tipoff.



2) When we measure velocities of stars and gas in our galaxy, we see an overall rotational motion greater than random motions. This is another characteristic of a spiral.



3) The gas fraction, color, and dust content of our galaxy are spiral-like.



So, overall, it's a pretty convincing argument. Of course, we have to assume our galaxy is not completely unlike the other galaxies we see--but I suppose once a civilization has accepted that it does not occupy any special place in the universe, arguments about similarity seem sensible.

Monday 13 August 2012

ac.commutative algebra - Should Krull dimension be a cardinal?

The Krull dimension of not-necessarily-commutative rings, as defined by Gabriel and Rentschler, is an ordinal. See, for example, [John C. McConnell, James Christopher Robson, Lance W. Small, Noncommutative Noetherian Rings].



More generally, they define the deviation of a poset $A$ as follows. If $A$ does not have comparable elements, $\mathrm{dev}\;A=-\infty$; if $A$ has comparable elements but satisfies the d.c.c., then $\mathrm{dev}\;A=0$. In general, if $\alpha$ is an ordinal, we say that $\mathrm{dev}\;A=\alpha$ if (i) the deviation of $A$ is not an ordinal strictly less than $\alpha$, and (ii) in any descending sequence of elements in $A$, all but finitely many factors (i.e., the intervals of $A$ determined by the successive elements in the sequence) have deviation less than $\alpha$.



Then the Gabriel-Rentschler left Krull dimension $\mathcal{K}(R)$ of a ring $R$ is the deviation of the poset of left ideals of $R$. A poset does not necessarily have a deviation, but if $R$ is left noetherian, then $\mathcal{K}(R)$ is defined.



A few examples: if a ring is commutative noetherian (or, more generally, satisfies a polynomial identity), then its G-R Krull dimension coincides with the combinatorial dimension of its prime spectrum, so this definition extends the classical one when these dimensions are finite. A noncommutative example is the Weyl algebra $A_{n}(k)$: if $k$ has characteristic zero, then $\mathcal{K}(A_n(k))=n$, and if $k$ has positive characteristic, $\mathcal{K}(A_n(k))=2n$. The book by McConnell and Robson has lots of information and references.

Sunday 12 August 2012

gravity - Is there a theory / equation showing whether or not two passing bodies will go into orbit around each other?

If there are only two bodies, then they will never enter a mutual orbit. For two objects initially gravitationally unbound to become gravitationally bound, you must remove energy from the system. With only two bodies (that don't collide), this does not happen. They will accelerate toward each other, change directions according to how close they get, and then leave each other again with exactly the same total energy and momentum as before, though in general shared in some other ratio (for instance, if a small body encounters a large body, the smaller one will gain energy and leave with a larger velocity).
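In equations (a sketch; $\mu = m_1 m_2/(m_1+m_2)$ is the reduced mass, $v$ the relative speed and $r$ the separation): the total energy of the relative motion,

$$E = \tfrac{1}{2}\mu v^2 - \frac{G m_1 m_2}{r},$$

is conserved, and the pair is bound exactly when $E < 0$. An initially unbound encounter has $E = \tfrac{1}{2}\mu v_\infty^2 > 0$, so $E$ stays positive and the bodies must separate again.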



On the other hand, if you have three (or more) bodies, one may get slung out at high velocity, thus extracting energy from the other two, which can then go into orbit. But alas, there's no equation for this; the so-called N-body problem has no analytic solution, and must in general be solved numerically.

ag.algebraic geometry - Counting/constructing Toric Varieties

As far as I understand, the answer is no, and I will try to explain why, clarify the list of comments (I have too little reputation to comment there), and give you a partial answer. I hope I am not patronising you, since you may already know part of it.



First of all, as Torsten said, it depends on what you understand by classification. In this context a torus $T$ of dimension $r$ is always an algebraic variety isomorphic to $(\mathbb{C}^*)^r$ as a group. A complex algebraic variety $X$ of finite type is toric if there exists an embedding $\iota: (\mathbb{C}^\ast)^r \hookrightarrow X$ such that the image of $\iota$ is an open set whose Zariski closure is $X$ itself, and the usual multiplication in $T=\iota((\mathbb{C}^\ast)^r)$ extends to $X$ (i.e. $T$ acts on $X$).



Think about all toric varieties. It is hard to find a complete classification, i.e. to be able to give the coordinate ring of each affine patch, and the morphisms among the patches, for every toric variety.



However, when the toric varieties we consider are normal, there is a structure called the fan $\Sigma$, made out of cones. All cones live in $N_\mathbb{R}\cong N\otimes \mathbb{R}$, where $N\cong \mathbb{Z}^r$ is a lattice. A cone is generated by several vectors of the lattice (like a high-school cone, really), and a fan is a union of cones which mainly have to satisfy the condition that they do not overlap unless the overlap is a face of each cone (another cone of smaller dimension). There is a concept of morphism of fans, and hence we can speak of fans 'up to isomorphism' (elements of $\mathbf{SL}(n,\mathbb{Z})$). Given a lattice $N$, there is an associated torus $T_N=N\otimes (\mathbb{C}^*)$, isomorphic to the standard torus.



Then we have a 1:1 correspondence between separated normal toric varieties $X$ (which contain the torus $T_N$ as a subset) up to isomorphism and fans in $N_{\mathbb{R}}$ up to isomorphism. There are algorithms to compute the fan from the variety and the variety from the fan, and they are not difficult at all; you can easily learn them in chapter seven of the Mirror Symmetry book, available for free. Given any toric variety (even a non-normal one) we can compute its fan, but computing back the variety of this fan may not give us the original variety unless the original is normal. You can check this easily by computing the fan of $\mathbf{V}(x^2-y^3)$ (the torus embedding $t\mapsto (t^3,t^2)$), which is the same as the fan of $\mathbb{C}^1$, although the two varieties are not isomorphic (the former has a singularity at $(0,0)$). In fact, since there are only three non-isomorphic fans in dimension 1 (the trivial fan, the fan generated by $1\in \mathbb{Z}$, and the fan generated by $1$ and $-1$), there are only three normal toric varieties of dimension 1: the standard torus, the affine line, and the projective line.
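
To make the fan-to-variety direction concrete, recall the standard recipe (textbook material, not specific to this answer): each cone $\sigma$ of the fan gives an affine patch

$$ U_\sigma = \operatorname{Spec} \mathbb{C}[\sigma^\vee \cap M], \qquad M = \operatorname{Hom}(N,\mathbb{Z}), $$

and the patches are glued along common faces. For $N=\mathbb{Z}$, the cone generated by $1$ has $\sigma^\vee \cap M = \mathbb{Z}_{\ge 0}$, giving $\operatorname{Spec}\mathbb{C}[t] = \mathbb{C}$; the trivial cone $\{0\}$ gives $\operatorname{Spec}\mathbb{C}[t,t^{-1}] = \mathbb{C}^*$; and glueing the patches of the fan $\{0, \mathrm{Cone}(1), \mathrm{Cone}(-1)\}$ produces $\mathbb{P}^1$.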



The proof of this statement is not easy, and to be honest I have never seen it written down in full (I would appreciate any reference if someone has seen one), but I know more or less the reason, as it is explained in the book about to be published by Cox, Little and Schenck (partly available). This theorem is part of my first-year report, which is due by the end of September, so if you want me to send you a copy when it is finished, send me an e-mail.



So, yes, in the case of normal varieties there is some 'classification' via combinatorics, but in the non-normal case I doubt there is one (I have never worked with non-normal toric varieties anyway).



Become a toric fan!

black hole - Is radiation from neutron stars delayed by time dilation?

A typical neutron star of $1.5M_{\odot}$ is thought to have a radius of around 8-10 km. This is only a factor of 2 larger than the Schwarzschild radius for a similar mass black hole.



We know that more massive neutron stars do exist. The current record holder is around $2M_{\odot}$. Most equations of state (the adopted relationship between pressure and density) for dense nuclear matter suggest that more massive neutron stars are smaller and therefore must be even closer in radius to the Schwarzschild radius.



So the premise of your question is basically correct. It is certainly true that when you deal with neutron star spectra you have to apply significant general-relativistic corrections to measured temperatures, and the same corrections would need to be applied to any temporal variations.



Thus a time-variable signal from a neutron star surface will appear slower to an observer on Earth.



For the last part, I suspect that the scenario you propose is extremely unlikely. Rhoades & Ruffini (1974) first established that there must be a maximum mass for a neutron star under GR conditions, even if we allow the equation of state to harden to the point where the speed of sound equals the speed of light. This maximum mass is around $3.2M_{\odot}$. This sets an upper limit on the possible compactness, $GM/Rc^2 \leq 0.405$ (see p. 261 of Shapiro & Teukolsky, Black Holes, White Dwarfs and Neutron Stars), which in turn sets an upper limit on the possible gravitational redshift (and time dilation factor) of 2.29.
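
For reference, the 2.29 follows in one line from the Schwarzschild surface redshift formula (my arithmetic, not part of the original answer):

$$ 1+z = \left(1 - \frac{2GM}{Rc^2}\right)^{-1/2} = \left(1 - 2\times 0.405\right)^{-1/2} \approx 2.29. $$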



Beyond this point the neutron star is unstable and will collapse to become a black hole. In reality the limit is probably a bit tighter than that, because most proposed equations of state result in neutron stars becoming unstable at finite densities and at masses quite a bit lower than $3.2M_{\odot}$.



So I think the most time dilation you are ever going to see from a neutron star surface is a factor of $\sim 2$.

Saturday 11 August 2012

co.combinatorics - Are there analogues of Desargues and Pappus for block designs?

I passed on your question to John H. Conway. Here is his response: (NB. Everything following this line is from Conway and is written from his point of view. Of course, in the comments and elsewhere on the site, I am not Conway.)



I think it's wrong to focus on block designs in particular. This may not answer your question, but there are some interesting examples of theorems similar to Desargues's and Pappus's theorems. They aren't block designs, but they do have very nice symmetries.



I call these "presque partout propositions" (p.p.p. for short) from the French "almost all". This used to be used commonly instead of "almost everywhere" (so one would write "p.p." instead of "a.e."). The common theme of the propositions is that there is some underlying graph, where vertices represent some objects (say, lines or points) and the edges represent some relation (say, incidence). Then the theorems say that if you have all but one edge of a certain graph, then you have the last edge, too. Here are five such examples:



Desargues' theorem
Graph: the Desargues graph = the bipartite double cover of the Petersen graph
Vertices: represent either points or lines
Edges: incidence
Statement: If you have ten points and ten lines that are incident in all of the ways that the Desargues graph indicates except one incidence, then you have the last incidence as well. This can be seen to be equivalent to the usual statement of Desargues's theorem.



Pappus's theorem
Graph: the Pappus graph, a highly symmetric, bipartite, cubic graph on 18 vertices
Vertices: points or lines
Edges: incidence
Statement: Same as in Desargues's theorem.



"Right-angled hexagons theorem"
Graph: the Petersen graph itself
Vertices: lines in 3-space
Edges: the two lines intersect at right angles
Statement: Same as before, namely having all but one edge implies the existence of the last one. An equivalent version is the following: suppose you have a "right-angled hexagon" in 3-space, that is, six lines that cyclically meet at right angles. Suppose that they are otherwise in fairly generic position, e.g., opposite edges of the hexagon are skew lines. Then take these three pairs of opposite edges and draw their common perpendiculars (unique for a pair of skew lines). These three lines have a common perpendicular themselves.



Roger Penrose's "conic cube" theorem
Graph: the cube graph Q3
Vertices: conics in the plane
Edges: two conics that are doubly tangent
Statement: Same as before. Note that this theorem is not published anywhere.



Standard algebraic examples
Graph: this unfortunately isn't quite best seen as a graph
Statement: Conics that go through 8 common points go through a 9th common point. Quadric surfaces through 7 points go through an 8th (or whatever the right number is).



Anyway, I don't know of any more examples.



Also, I don't know what more theorems one could really have about coordinatization. I mean, after you have a field, what more could you want other than, say, its characteristic? (Incidentally, the best reference I know for the coordinatization theorems is H. F. Baker's book "Principles of Geometry".)



In any case, enjoy!

Friday 10 August 2012

Why can't different regions of the universe be the same as they have the same origin?

We don't really know what happened at the time $t=0$. It doesn't really make sense to say that everything "touched each other" at $t=0$, particularly if the Universe is infinite. But we can calculate the distance that a photon — and hence the maximum distance that any information — can travel in the expanding Universe in a given time. This calculation depends only on the Hubble parameter and the densities of the various components of the Universe, and whether you start your calculation at $t=0$ or $t=10^{-12}\,\mathrm{s}$ makes little difference after a second.



The answer is related to your previous question about the cosmic event horizon which was also about the particle horizon, i.e. the distance that light has been able to travel since the Big Bang. If you were present when the CMB was emitted, you could also calculate your particle horizon. It would of course be much smaller than today, since light at this time had only traveled for 380,000 years. If the Universe hadn't been expanding, this horizon would of course be 380,000 lightyears, but due to the expansion, it is quite a lot larger, roughly 850,000 lightyears.



This means that, at the time of the CMB, regions farther apart than 850 kly had not had the chance to be in causal contact.



We can also calculate how large an angle 850 kly at the CMB would subtend if observed today. It turns out to be 1.7°. That is, if the Universe had just been expanding like we observe its expansion today — i.e. depending only on the densities of the known components — patches on the CMB map separated by an angle $\theta > 1.7^\circ$ shouldn't look the same.



But they do.



Inflation solves this problem by saying that the Universe initially went through a much, much faster expansion, such that regions much, much farther apart than the 850 kly have been in causal contact. Still, even with inflation, the particle horizon is not infinite. So on scales much, much larger than the observable Universe, it may be inhomogeneous.
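
If you want to reproduce the 850 kly figure, here is a rough Python sketch of the horizon integral described above. The density parameters are assumed Planck-like values of mine, not numbers from this answer, so the output is only approximate.

    # Physical particle horizon at recombination: a * integral of c da'/(a'^2 H(a')).
    import numpy as np
    from scipy.integrate import quad

    H0 = 67.7e3 / 3.086e22                  # Hubble constant, s^-1
    Om_r, Om_m, Om_L = 9.2e-5, 0.31, 0.69   # radiation, matter, Lambda (assumed)
    c = 2.998e8                             # speed of light, m/s

    def H(a):
        """Hubble rate at scale factor a, flat FLRW."""
        return H0 * np.sqrt(Om_r / a**4 + Om_m / a**3 + Om_L)

    a_rec = 1.0 / 1090                      # scale factor at recombination
    chi, _ = quad(lambda a: c / (a**2 * H(a)), 1e-12, a_rec)  # comoving, metres
    print(chi * a_rec / 9.461e18, 'kly')    # physical distance; roughly 850 kly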

Thursday 9 August 2012

fa.functional analysis - A counter example to Hahn-Banach separation theorem of convex sets.

The Hahn-Banach theorem for a locally convex space $X$ says that for any disjoint pair of convex sets $A$, $B$ with $A$ closed and $B$ compact, there is a linear functional $l\in X^*$ separating $A$ and $B$. So it would be nice to have a counterexample where both $A$ and $B$ are closed, but not compact. As no one has posted such an example, I'll do that now, with $X$ a separable Hilbert space. In fact, as with fedja's example, there will be no separating linear functionals at all, not even noncontinuous ones.



Take $\mu$ to be the Lebesgue measure on the unit interval $[0,1]$ and $X = L^2(\mu)$. Then let:



  • $A$ be the set of $f \in L^2(\mu)$ with $f \ge 1$ almost everywhere.

  • $B$ be the one-dimensional subspace of functions $f \in L^2(\mu)$ of the form $f(x) = \lambda x$ for real $\lambda$.

These can't be separated by a linear function $l\colon X\to\mathbb{R}$. A similar argument to fedja's can be used here, although it necessarily makes use of the topology. Suppose that $l(f)\ge l(g)$ for all $f$ in $A$ and $g$ in $B$. Then $l$ is nonnegative on the set $A-B$ of $f \in L^2$ satisfying $f(x)\ge 1-\lambda x$ for some $\lambda$. For any $f\in L^2$ and for each $n\in\mathbb{N}$, choose $\lambda_n$ large enough that $\Vert(1-\lambda_n x+\vert f\vert)_+\Vert_2\le 4^{-n}$ and set $g=\sum_n 2^n(1-\lambda_n x+\vert f\vert)_+\in L^2$. This satisfies $\pm f+2^{-n}g\ge 1-\lambda_n x$, so $\pm l(f)+2^{-n}l(g)\ge 0$ and, therefore, $l$ vanishes everywhere.



If you prefer, you can create a similar example in $\ell^2$ by letting $A=\{x\in\ell^2\colon x_n\ge n^{-1}\}$ and $B$ be the one-dimensional subspace of $x\in\ell^2$ with $x_n=\lambda n^{-2}$ for real $\lambda$.



Note: A and B here are necessarily both unbounded sets, otherwise one would be weakly compact and the Hahn-Banach theorem would apply.

ag.algebraic geometry - Two-dimensional quotient singularities are rational: why?

I've read that quotient singularities (that is, spectra of invariant subrings of finite groups acting linearly on polynomial rings) have rational singularities. Is there an elementary proof of this fact in dimension two? I have one proof, but it uses big guns like Grothendieck spectral sequences etc.

pr.probability - connection between the Gaussian and the Cauchy distribution

Robin, a simple explanation for why the 2-dimensional Brownian motion started at $(0,1)$ and stopped when hitting the real line produces a Cauchy distribution is that Brownian motion is conformally invariant. Let $f\colon\Omega \rightarrow \Omega'$ be a conformal mapping and $B_{z,\Omega}(t)$ be a Brownian motion started at $z\in \Omega$ and stopped at the first time $T$ when it hits the boundary of $\Omega$. The conformal invariance of Brownian motion is the fact that $f(B_{z,\Omega}(t))$ for $t\in[0,T]$ has the same distribution as a Brownian motion in $\Omega'$ started at $f(z)$ and stopped when reaching the boundary of $\Omega'$ for the first time.



To connect this with the problem above of a Brownian motion started at $(0,1)$ and stopped when hitting the real line, just map the upper half-plane onto the unit disc in such a way that $(0,1)$ is mapped to the origin. A Brownian motion started from the center of the disc obviously hits the boundary at a uniformly distributed point $P'$. Thus, the angle of the line from the center of the disc to $P'$ with another fixed line through the center is uniformly distributed between $-\pi$ and $\pi$. Since the conformal map from the upper half-plane to the disc maps lines through $(0,1)$ to lines through the origin, conformal invariance of Brownian motion implies that the angle between the $y$-axis and the line from $(0,1)$ to the point $P$ where the Brownian motion hits the $x$-axis is also uniform; and since the tangent of a uniformly distributed angle has a Cauchy distribution, the hitting point $P$ is Cauchy distributed.

Wednesday 8 August 2012

ac.commutative algebra - To prove the Nullstellensatz, how can the general case of an arbitrary algebraically closed field be reduced to the easily-proved case of an uncountable algebraically closed field?

Well, this is the opposite of what you asked, but there is an easy reduction in the other direction. Namely, if the result is true for countable fields, then it is true for all fields. I can give two totally different proofs of this, both very soft, using elementary methods from logic. While we wait for a solution in the requested direction, let me describe these two proofs.



Proof 1. Suppose $k$ is any algebraically closed field, and $J$ is an ideal in the polynomial ring $k[x_1,\ldots,x_n]$. Consider the structure $(k[x_1,\ldots,x_n],k,J,+,\cdot)$, which is the polynomial ring $k[x_1,\ldots,x_n]$, together with a predicate for the field $k$ and for the ideal $J$. By the downward Löwenheim-Skolem theorem, there is a countable elementary substructure, which must have the form $(F[x_1,\ldots,x_n],F,I,+,\cdot)$, where $F$ is a countable subfield of $k$, and $I$ is a proper ideal in $F[x_1,\ldots,x_n]$. The "elementarity" part means that any statement expressible in this language that is true in the substructure is also true in the original structure. In particular, $I$ is a proper ideal in $F[x_1,\ldots,x_n]$ and $F$ is algebraically closed. Thus, by assumption, there are $a_1,\ldots,a_n$ in $F$ making all polynomials in $I$ zero simultaneously. This is a fact about $a_1,\ldots,a_n$ that is expressible in the smaller structure, and so it is also true in the larger structure. That is, every polynomial in $J$ is zero at $a_1,\ldots,a_n$, as desired.



Proof 2. The second proof is much quicker, for it falls right out of simple considerations in set theory. Suppose that we can prove (in ZFC) that the theorem holds for countable fields. Now suppose that $k$ is any field and that $J$ is a proper ideal in the ring $k[x_1,\ldots,x_n]$. If $V$ is the set-theoretic universe, let $V[G]$ be a forcing extension where $k$ has become countable. (It is a remarkable fact about forcing that any set at all can be made countable in a forcing extension.) We may consider $k$, $k[x_1,\ldots,x_n]$ and $J$ inside the forcing extension $V[G]$. Moving to the forcing extension does not affect any of our assumptions about $k$, $k[x_1,\ldots,x_n]$ or $J$, except that now, in the forcing extension, $k$ has become countable. Thus, by our assumption, there is a tuple $(a_1,\ldots,a_n)$ in $k^n$ making all polynomials in $J$ zero. This fact was true in $V[G]$, but since the elements of $k$ and $J$ are the same in $V$ and $V[G]$, and the evaluation of polynomials is the same, it follows that this same solution works back in $V$. So the theorem is true for $k$ in $V$, as desired.



But I know, it was the wrong reduction, since I am reducing from the uncountable to the countable, instead of from the countable to the uncountable, as you requested...



Nevertheless, I suppose that both of these arguments could be considered as alternative very soft short proofs of the uncountable case (assuming one has a proof of the countable case).

Tuesday 7 August 2012

ca.analysis and odes - A question regarding a claim of V. I. Arnold

Here is a problem which I heard Arnold give in an ODE lecture when I was an undergrad. Arnold indeed talked about Barrow, Newton and Hooke that day, and about how modern mathematicians cannot calculate quickly, whereas for Barrow this would have been a one-minute exercise. He then dared anybody in the audience to do it in 10 minutes and offered an immediate monetary reward, which was not collected. I admit that it took me more than 10 minutes to do it by computing Taylor series.



This is consistent with what Angelo is describing. But for all I know, this could have been a lucky guess on Faltings' part, even though he is well known to be very quick and razor sharp.



The problem was to find the limit



$$ \lim_{x\to 0} \frac
{ \sin(\tan x) - \tan(\sin x) }
{ \arcsin(\arctan x) - \arctan(\arcsin x) }
$$



The answer is the same for
$$ \lim_{x\to 0} \frac
{ f(x) - g(x) }
{ g^{-1}(x) - f^{-1}(x) }
$$
for any two functions $f,g$ analytic around $0$ with $f(0)=g(0)=0$ and $f'(0)=g'(0)=1$, which you can easily prove by looking at the power expansions of $f$ and $f^{-1}$ or, in the case of Barrow, by looking at the graph.




Here is a computation for the inverse functions. Suppose
$$
f(x) = x + a_2 x^2 + a_3 x^3 + \dots
\quad \text{and} \quad
f^{-1}(x) = x + A_2 x^2 + A_3 x^3 + \dots
$$



Computing recursively, one sees that for $n\ge2$ one has
$$ A_n = -a_n + P_n(a_2, \dotsc, a_{n-1}) $$
for some universal polynomial $P_n$.



Now, let
$$
g(x) = x + b_2 x^2 + b_3 x^3 + \dots
\quad \text{and} \quad
g^{-1}(x) = x + B_2 x^2 + B_3 x^3 + \dots
$$



and suppose that $b_i=a_i$ for $i\le n-1$ but $b_n\ne a_n$. Then by induction one has $B_i=A_i$ for $i\le n-1$, $A_n=-a_n+ P_n(a_2,\dotsc,a_{n-1})$ and $B_n=-b_n+ P_n(a_2,\dotsc,a_{n-1})$.



Thus, the power expansion for $f(x)-g(x)$ starts with $(a_n-b_n)x^n$, and the power expansion for $g^{-1}(x)-f^{-1}(x)$ starts with $(B_n-A_n)x^n = (a_n-b_n)x^n$. So the limit is 1.
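
For the skeptical, the whole computation is easy to confirm with a computer algebra system. A minimal sympy check (mine, not part of the original story; assumes sympy is installed):

    # Both series start with -x**7/30, so the ratio tends to 1.
    import sympy as sp

    x = sp.symbols('x')
    num = sp.sin(sp.tan(x)) - sp.tan(sp.sin(x))
    den = sp.asin(sp.atan(x)) - sp.atan(sp.asin(x))

    print(sp.series(num, x, 0, 8))    # -x**7/30 + O(x**8)
    print(sp.series(den, x, 0, 8))    # -x**7/30 + O(x**8)
    print(sp.limit(num / den, x, 0))  # 1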

Methods for "additive" problems in number theory

I'm currently studying with Melvyn Nathanson, who is considered one of the experts on additive number theory. His texts, Additive Number Theory: The Classical Bases and Additive Number Theory: Inverse Problems, are the standard introductions to the subject; both are published by Springer-Verlag.



I'd also look at his papers on the arXiv; he's written many on open problems in additive number theory. A full list can be found, with links to many of them in PDF for download, at:
http://front.math.ucdavis.edu/search?a=Nathanson%2C+Melvyn&t=&q=&c=&n=40&s=Listings



You'll also find the most recent version of an overview of open problems in both additive number theory and combinatorics that Melvyn's been working on for a few years at:



http://arxiv.org/PS_cache/arxiv/pdf/0807/0807.2073v1.pdf



I think you'll find the latter reference particularly pertinent to your questions.



The subject is very interesting, since it essentially involves all subsets of the integers whose members can be expressed as arithmetic progressions. This provides connections to number-theoretic questions in geometric group theory and analysis, and I'm currently investigating the role of topology in determining the structure of such "sumsets" of $\mathbb{Z}$.

homological algebra - Grothendieck's Tohoku Paper and Combinatorial Topology

Grothendieck's Tohoku paper was an attempt to set the foundations of algebraic topology on a uniform basis, essentially to describe a setting where one can do homological algebra in a way that makes sense. He did this by using the concept of abelian categories. Perhaps a better question to ask yourself is "Why are abelian categories a good idea?" In answering your question, I will do some major handwaving and sacrifice some rigor for the sake of clarity and brevity, but will try to place the Tohoku paper in context.



At the time, the state of the art in homological algebra was relatively primitive: Cartan and Eilenberg had developed derived functors only for categories of modules. There were some clear parallels with sheaf cohomology that could not be mere coincidence, and there was a lot of evidence that their techniques worked in more general settings. However, in order to generalize the methodology beyond modules, the category in question needs some notion of an exact sequence, which is trickier than it might seem. There were many solid attempts to do so, and the Tohoku paper was a giant step in the right direction.



In a nutshell, Grothendieck was motivated by the idea that $\mathrm{Sh}(X)$, the category of sheaves of abelian groups on a topological space $X$, is an abelian category with enough injectives, so that sheaf cohomology can be defined as a right-derived functor of the global sections functor. Running with this idea, he set up his famous axioms for what an abelian category should satisfy.



Using the framework given by these axioms, Grothendieck was able to generalize Cartan and Eilenberg's techniques on derived functors, introducing ideas like $\delta$-functors and $T$-acyclic objects in the process. He also introduced an important computational tool, now often called the Grothendieck spectral sequence. This turns out to generalize many of the then-known spectral sequences, providing indisputable evidence that abelian categories are the "right" setting in which to do homological algebra.



However, even with this powerful new context, many components were still missing. For instance, one couldn't chase diagrams in general abelian categories using the techniques of Tohoku by themselves, because the paper did not establish that the elements one wants to chase even exist. It wasn't until results like the Freyd-Mitchell embedding theorem that useful techniques like diagram chasing in abelian categories became well-defined. From then on, one had a relatively mature theory of homological algebra in the context of abelian categories, successfully generalizing the previous methods. In other words, we have "re-interpreted the basics of [algebraic] topology" by allowing ourselves to work with the more general concept of abelian categories.

Sunday 5 August 2012

star - What is the illuminance of Tau Ceti?

As part of my physics project, I investigated the relationship between a light bulb's illuminance and the distance from the measuring device to it.



Illuminance was measured in lux, and distance in metres.



I derived an inverse relationship. First of all, can I check if this is the relationship I was supposed to get? (I understand that there is an inverse square relationship between luminosity and distance, but since illuminance is measured in lumens/sq metre, then I think an inverse relationship is correct...)



Next, using this relationship, I need to extend my project so that I can find distances to stars using this relationship.



As long as two stars have similar mass/size/density, I can use this relationship to find the distance to the second star, provided the illuminances of both stars and the distance to the first star are known.



So I tried to do this with an example - the Sun and Tau Ceti, as they are reasonably similar.



I have values for the Sun's distance and illuminance: $1.5\times 10^8$ km, $1.2\times 10^5$ lux.
I am now trying to calculate the distance to Tau Ceti using this method... but nowhere can I find a value for its illuminance.



Please can someone help me with this? Is there a way to convert from apparent magnitude or luminosity to illuminance?



Thank you



EDIT: The method I want to use from the inverse relationship is:



$$d_2=d_1l_1/l_2$$



where $d_1$ and $l_1$ are the known distance and illuminance of the first star, whilst $d_2$ and $l_2$ are those of the second, with $d_2$ unknown.
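
Since no answer is attached here, one possible route (a standard conversion I'm supplying, not something stated in the question): apparent magnitude $m$ and illuminance $E$ in lux are related by $E = 10^{(-14.18-m)/2.5}$, using the commonly quoted zero point that $m=-14.18$ corresponds to 1 lux. A sketch:

    # Convert apparent magnitude to illuminance in lux; the -14.18 zero point
    # is an assumed convention, not a value given in the question.
    def magnitude_to_lux(m):
        return 10 ** ((-14.18 - m) / 2.5)

    print(magnitude_to_lux(-26.74))  # Sun: ~1.1e5 lux, close to the value above
    print(magnitude_to_lux(3.50))    # Tau Ceti (V ~ 3.50): ~8.5e-8 lux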

mathematics education - How to teach introductory statistic course to students with little math background?

A bit of background: a few years ago, I designed such a course, after noticing that many of our social science majors were ending up taking a precalculus course (spending much time learning trig), which was mostly useless for their later study. I created a case-study approach to probability and statistics for students with weaker mathematics backgrounds (i.e., most of my students have been terrified by math in the past).



I wonder how much time the OP has to prepare -- it took me, as a pure mathematician, a long time (and a bit of grant support) not only to learn the basic probability, but more importantly to change my mindset from pure to applied mathematics. The formulas in basic probability and statistics are nearly trivial for a professional mathematician; the real work is identifying sources of statistical bias, interpreting results correctly, and accepting the fact that no study is perfect. In statistical mechanics, you have an absurdly large sample of molecules behaving in a very well-controlled environment. In practical statistics, you have a smaller, usually "dirtier" sample; as a pure mathematician, it's sometimes hard to accept this.



I'd begin preparing yourself by looking at three books -- the classic "How to Lie with Statistics", the new classic "Freakonomics", and Edward Tufte's "Visual display of quantitative information" (and/or his other books). None of these is directly relevant to your course material, but they will give you many ideas for teaching, for caution in the application of statistics, and for good and bad aspects of visual display of statistical information.



Directly regarding your questions: I'm not familiar enough with textbooks to advise you on this one (I wrote my own notes). But I strongly disagree with your assumption that "less diseases and more gambling" will make your class more engaging. Most people don't care about gambling; this is supposed to be a useful class, not training for a poker team. Real statistical studies are extremely interesting, especially given their life-and-death importance. Your students should be able to answer questions like: what is the probability a person has HIV, given that their test result is "reactive"? How does the answer differ for populations in the U.S. vs. Mexico vs. South Africa? Diseases, discrimination, forensic testing, climate extremes, etc., are important issues to consider.
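
To make the HIV example concrete, here is a toy Bayes computation; the sensitivity, specificity, and prevalence numbers are purely illustrative, not real epidemiology.

    # P(HIV | reactive test) via Bayes' theorem, as a function of prevalence.
    def posterior(prevalence, sensitivity=0.997, specificity=0.985):
        p_reactive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
        return sensitivity * prevalence / p_reactive

    for prev in (0.003, 0.02, 0.18):  # low-, mid-, high-prevalence populations
        print(f"prevalence {prev:.1%}: P(HIV | reactive) = {posterior(prev):.1%}")

The point students should take away: the same test gives wildly different posteriors in different populations.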



You might find gambling more interesting than diseases, but a teacher of math for social science students has a responsibility to approach important questions, and not contrived examples. Take it seriously!



There are many activities that you can enjoy with students. You might play a version of the Monty Hall game, for one. There are many activities with coin-tosses (e.g., illustrating the central limit theorem). You can illustrate sample size effects, by randomly sampling students in the course. You can certainly find cases in the media and recent studies, and use them as jumping-off points for discussion: you can even find funny ones in magazines like Cosmo, or on CNN.com so that the students can practice picking apart statistical arguments and deceptive rhetoric. I often make students find an article in popular media (like NYTimes.com) that refers to a study, then track down the original study, and compare the media summary to the published study to analyze how statistics are used and misused. This can make great classroom discussion.



Finally, it might be personal taste, but I would place a heavy emphasis on probability, especially Bayesian probability. Otherwise, the course can become mechanical and reinforce a common malady: students will think of statistics as the process of collecting data, putting data through a set of software/formulas to compute correlation coefficients, p-values, standard deviations, etc.., interpreting these numbers as facts about nature, and being done. The Bayesian approach, I think, requires more thought in setting up a problem, and yields more applicable results. In particular, there are significant Bayesian criticisms of "null-hypothesis statistical testing" that is the centerpiece of many studies; especially the overreliance on p-values is disturbing to me, and you might want to include criticisms of such things.

differential equations - Elliptic regularity for the Neumann problem

In the case that you mentioned, we want to avoid this cut-off/difference-quotients approach, since it could be hard to prove that $\partial_{x_i} (\xi\, \partial_{x_j} u)$ is a valid test function.
In general, when working with regularity theory, another standard approach is to use an 'approximated problem'. The kind of approximated problem, of course, depends on the PDE.
For a Neumann-like problem I suggest the following approximation:



First observe that since $\int_\Omega f = 0$, we can easily construct a sequence $\{f_n\} \subset C^\infty(\Omega)$ such that $f_n \to f$ in $L^2(\Omega)$ and $\int_\Omega f_n=0$ for all $n \in \mathbb{N}$.
Then we consider $u_n \in C^\infty(\Omega)$ such that



$(*)\quad -\Delta u_n + \frac{1}{n} u_n = f_n \;\text{ in }\; \Omega$ and $\dfrac{\partial u_n}{\partial \nu}=0 \;\text{ on }\; \partial \Omega.$



The sequence $\{u_n\}$ can be obtained by means of Theorem 2.2.2.5 (p. 91) and Theorem 2.5.1.1 (p. 121) of Grisvard's book.
In fact you just need to apply a bootstrap argument to $-\Delta u_n = -\frac{1}{n} u_n + f_n$.



Notice that $\int_\Omega u_n = n\int_\Omega f_n = 0$.



Now, you use $u_n$ as your test functions and obtain the following estimate:



$(**)\quad \Vert \nabla u_n\Vert_{L^2}^2 \leq C(\Omega)\, \Vert f_n\Vert_{L^2}^2$ for all $n \in \mathbb{N}$
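
(A sketch of where $(**)$ comes from, and why I have written the constant $C(\Omega)$: test $(*)$ with $u_n$ itself and use the Poincaré-Wirtinger inequality, which applies since $\int_\Omega u_n = 0$:

$$ \int_\Omega \vert\nabla u_n\vert^2 + \frac{1}{n}\int_\Omega u_n^2 = \int_\Omega f_n u_n \le \Vert f_n\Vert_{L^2}\Vert u_n\Vert_{L^2} \le C_P\, \Vert f_n\Vert_{L^2}\Vert \nabla u_n\Vert_{L^2}, $$

so dropping the nonnegative $\frac{1}{n}$-term gives $\Vert\nabla u_n\Vert_{L^2} \le C_P \Vert f_n\Vert_{L^2}$, with $C_P$ the Poincaré constant of $\Omega$.)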



Now you use $-\Delta u_n$ as a test function in your PDE (observe that $-\Delta u_n$ is a valid test function; in any case we don't need to worry about this, since the approximated equation holds everywhere).



After integrating by parts, using $(**)$ and some standard manipulations of the boundary terms, you end up with



$\Vert D^2 u_n \Vert_{L^2}^2 \leq C(\partial \Omega)\, \Vert f_n\Vert_{L^2}^2$ for all $n \in \mathbb{N}$.
(For instance, see Grisvard's book, pp. 132-138, in particular eq. 3.1.1.5.)



The key point in the above estimate is to control the boundary terms in terms of the mean curvature of $\partial \Omega$.



Now, since $\int_\Omega u_n = 0$, we conclude that
$\Vert u_n \Vert_{H^2}^2 \leq C(\partial \Omega)\, \Vert f_n \Vert_{L^2}^2$



so that



$\Vert u_n \Vert_{H^2}^2 \leq C(\partial \Omega)\, \Vert f\Vert_{L^2}^2$, where $\Vert f\Vert_{L^2}$ is the $L^2$ norm of $f$.



In this way we obtain $u\in H^2$ such that $u_n \to u$ weakly in $H^2$ and strongly in $H^1$.



Observe that the latter convergence is sufficient to handle the term $\frac{1}{n}u_n$.
Then we can pass to the limit in equation $(*)$, so that $u$ is a strong solution of



$-\Delta u = f$ in $\Omega$



$\dfrac{\partial u}{\partial \nu}=0$ on $\partial \Omega$



with
$\Vert u\Vert_{H^2} \leq C(\partial \Omega)\, \Vert f\Vert_{L^2}$.

Wednesday 1 August 2012

gn.general topology - Terminology for topological base closed under intersection?

Is there an established or well justified terminology for a topological base that is closed under finitary intersections?



As motivation, recall these conditions on a collection of subsets of a given set:



  1. closed under finitary intersections and arbitrary unions,

  2. closed under finitary intersections,

  3. filtered downwards,

  4. arbitrary.

Anything that satisfies one condition satisfies any later condition; conversely, anything that satisfies one condition generates something that satisfies any earlier condition. I know names for (1,3,4): ‘topology’, 'topological base' (or ‘base for a topology’), and ‘topological subbase’ (at least when thought of in this context). So I'm asking for a name for (2). And one reason that this is interesting is that the obvious way to generate (3) from (4) already gives you (2), so it really does come up.

enumerative combinatorics - Which came first: the Fibonacci Numbers or the Golden Ratio?

The answer for either of these is "hundreds of millions of years" due to their emergence/use in biological development programs, the self-assembly of symmetrical viral capsids (the adenovirus for example), and maybe even protein structure. Because of their close relationship I'd be hard pressed to say which 'came first'.



If you google for it, you'll find plenty of books and papers. However, be extremely careful about examples without a well-explained functional role... there are an arbitrarily large number of coincidences out there if you're looking for them, and humans excel at numerology.