Saturday, 31 May 2008

An example of a Z-PBW algebra which is not a PBW algebra?

I have just rechecked the assertion in the Example at the bottom of page 97 and to the best of my understanding it is correct.



I can imagine one cause of possible confusion, namely, the variables are not listed in increasing order with respect to the ordering that makes them Z-PBW-generators. The order should be $x^{\sigma+1,\sigma}>y^{\sigma+1,\sigma}>z^{\sigma+1,\sigma}$, the order on monomials being the lexicographical order corresponding to this order of generators. It is presumed that the surviving monomials in the PBW-basis are those that cannot be expressed as linear combinations of any smaller ones (modulo the relations).



Specifically, the quadratic monomials that do not survive (do not belong to the set $S^{\sigma+1,\sigma-1}$) are $x^{\sigma+1,\sigma}y^{\sigma,\sigma-1}$ and $x^{\sigma+1,\sigma}z^{\sigma,\sigma-1}$. Given their form, it is immediately clear that the PBW condition is satisfied.



UPDATE. Well, another source of a possible confusion is a misprint in our formulas. It should be $b_{\sigma+1}=-1/c_\sigma$ and $c_{\sigma+1}=-1/(ab_\sigma)$, and not the other way around. To put it simply, the constants $b_\sigma$ and $c_\sigma$ are chosen in such a way that the terms $x^{\sigma+1,\sigma}x^{\sigma,\sigma-1}$ cancel out in each relation.

ag.algebraic geometry - What does primary decomposition of (sub) modules mean geometrically?

Visualizing embedded primes:
In P^2, a one dimensional scheme cannot have embedded points unless its ideal has more than one generator, by the unmixedness theorem of Macaulay. So imagine we have two polynomials that define a one dimensional scheme in P^2. We will imagine this scheme as a limit of zero dimensional schemes. First take two quadratic polynomials, one of which is a product of two linear factors, i.e. take one pair of lines meeting at p, and another irreducible conic. In general the irreducible conic C meets each of the lines twice, away from p. Thus the two quadratic polynomials define a zero dimensional scheme of 4 points.



Now hold fixed the two intersections of C with one of the lines L, and let the two intersections of C with the other line M approach p, i.e. let C become tangent to M at p. When this occurs, the conic C now contains three distinct points of L, hence C has become reducible and contains L. Now the scheme defined by intersecting L+M with C has become one dimensional, reducible, and consists set theoretically only of the line L. I claim the point p is an embedded point of the component L of the scheme defined by L+M and C.



This is easy algebraically, since the ideal of the given scheme is (xy, x(x-y)) = (x^2, xy), which is the intersection of the primary ideals (x) and (x^2, xy, y^2), with associated primes (x) and (x,y). Hence (x,y) is an embedded prime. I.e. the origin is an embedded point on the y axis for this scheme. This also helps explain the apparent failure of Bezout's theorem for this intersection of two conics, which apparently does not have degree 4.
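To spell out the intersection step (a quick check, not in the original answer):

$$(xy,\ x(x-y)) = (x^2,\ xy) = (x)\cap(x,y)^2,$$

since $x^2 = xy + x(x-y)$, and a product $xf$ lies in $(x,y)^2$ exactly when every term of $xf$ has degree at least 2, i.e. when $f \in (x,y)$.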



In general, in P^n, a scheme S with embedded subschemes must be defined by intersecting more hypersurfaces than the codimension of S. Thus such an S can always be viewed as a limit of lower dimensional schemes. It seems to me that embedded subschemes should arise when these lower dimensional schemes are reducible and some lower dimensional component comes to lie on a larger dimensional component of the limit. I do not know if this intuition is the only possibility, and since the world is wide, probably not.

Isomorphism of categories of rigged modules via completely bounded isomorphism of operator algebras

This question is a background for my previous question.



Suppose $A$ and $B$ are two algebras over $\mathbb{C}$ with sequences of norms $\lbrace|\cdot|_{\Xi,n}\rbrace$ on $M_n(\Xi)$, $\Xi\in\lbrace A, B\rbrace$, satisfying the conditions of the Blecher-Ruan-Sinclair theorem (so that, if I understand it right, we may construct concrete representations). Suppose also that $f\colon A \to B$ is a completely bounded map that has a completely bounded inverse $f^{-1}\colon B\to A$.




Can we somehow establish an isomorphism between the categories of rigged modules over $A$ and $B$? And if so, is there any good reference?



It can probably fit into the notion of (P)-context, but I can't reach the book right now to check all the conditions.

fields - What is the prime spectrum of a Cauchy series ring?

In this answer I will treat the case in which $|\cdot|$ is not discrete.



I first claim that $\mathfrak{m}_0$ is not the restriction of any proper ideal in
$k^{\infty}$. Indeed, choose $x \in k$ such that $0 < |x| < 1$. Then $(x^i)$
is an element of $\mathfrak{m}_0$ which is invertible in $k^{\infty}$ (with
inverse equal to $(x^{-i})$), and so $\mathfrak{m}_0$ generates the unit ideal
of $k^{\infty}$.



This doesn't contradict anything; the maximal ideals of $k^{\infty}$ pull back
to prime ideals in $\mathcal{C}(k)$ which are simply not maximal (as often happens
with maps of rings).



Furthermore, this pull-back is injective.



To see this, we first introduce some notation; namely, we
let $\mathfrak{m}_{\mathcal{U}}$ denote the prime ideal of $k^{\infty}$ corresponding to
the non-principal ultrafilter $\mathcal{U}$, and recall that $\mathfrak{m}_{\mathcal{U}}$ is defined as follows: an element $(x_i)$ lies in $\mathfrak{m}_{\mathcal{U}}$ if and only
if $\{i \mid x_i = 0\}$ lies in $\mathcal{U}$.



Now suppose that $\mathcal{U}_1$ and $\mathcal{U}_2$ are two distinct non-principal ultrafilters. Let $A$ be a set lying in $\mathcal{U}_1$, but not in $\mathcal{U}_2$.
Then $A^c$, the complement of $A$, lies in $\mathcal{U}_2$.
Choose $x \in k$ such that $0 < |x| < 1$, and let $x_i = x^i$ if $i \in A$ and
$x_i = 0$ if $i \notin A$. Then $(x_i)$ is an element of $\mathcal{C}(k)$,
in fact of $\mathfrak{m}_0$, and it lies in $\mathfrak{m}_{\mathcal{U}_2}$
but not in $\mathfrak{m}_{\mathcal{U}_1}$.



Thus $\mathfrak{m}_{\mathcal{U}_1}$ and $\mathfrak{m}_{\mathcal{U}_2}$ have distinct pull-backs.



So the map
Spec $k^{\infty} \rightarrow$ Spec $\mathcal{C}(k)$
is injective
and dominant (since it comes from an injective map of rings), but is not surjective.
Choosing the valuation $|\cdot|$ allows us to add to Spec $k^{\infty}$ (which is the
Stone-Cech compactification of $\mathbb{Z}_+$) an extra point dominating all the
other points at infinity (i.e. all the non-principal ultrafilters), because the valuation now gives
us a definitive way to compute limits (provided we begin with a Cauchy sequence).

physiology - How are long time periods measured in biological systems?

The short answer is: we do not know exactly, although we do have some insights.



I will take the example of puberty.



Although a clear definition of puberty is lacking, it is quite clear that it corresponds to a period where gonadal function starts.



This, in turn, is derived from the activation of the gonadotropin system, which consists of two main cell types:



  1. a small number of neurons located in the preoptic area of the hypothalamus (a nucleus at the base of the brain) called the GnRH neurons. GnRH is the Gonadotropin-releasing hormone, a small peptide that stimulates the production of gonadotropins from the pituitary.


  2. the gonadotrophs, a specialised group of cells in the pituitary (a gland located underneath the brain) which produce two hormones, called luteinizing hormone (LH) and follicle-stimulating hormone (FSH) which stimulate the gonads to produce various hormones, such as estrogen.


In mammals, the secretion of GnRH, and thus LH/FSH varies during the course of the menstrual/estral cycle. This cyclicity lasts several days (4-5 days in rodents, ~1 month in humans) and entrains the cyclical secretion of estradiol (E2) from the gonads.



Note that this is a sort of self-sustaining cycle, as cyclic levels of E2 will then allow for cyclic GnRH secretion and so on.



But, back to your question: how does the GnRH/LH/E2 system "wake up" at puberty?



The exact mechanism is still unknown, but recent work has found an important mediator, called kisspeptin, that is produced by two populations of neurons in the hypothalamus, called the kisspeptin neurons.



Kisspeptin has been shown to be a very potent activator of GnRH neurons and work in the mouse has shown that these neurons appear at the time of puberty, their number increasing dramatically between 25 and 31 days (puberty is at around 30 days in mice).



Postnatal Development of Kisspeptin Neurons in Mouse Hypothalamus; Sexual Dimorphism and Projections to Gonadotropin-Releasing Hormone Neurons - Clarkson and Herbison - Endocrinology, 2006



Similar work exists in the monkey:
Increased hypothalamic GPR54 signaling: A potential mechanism for initiation of puberty in primates - Shahab et al. - PNAS, 2005



In humans, mutations in either kisspeptin or its cognate receptor GPR54 result in disturbances of pubertal maturation, through either underactivation of the system (hypogonadotropic hypogonadism) or hyperactivation (precocious puberty).



So, now the question is shifted: why do kisspeptin neurons show up only at puberty? We don't know for sure, but it looks like increased levels of E2 could be important for this.



Again, we get into a self-sustaining cycle. Growth of the body generates an increase in E2 production (possibly due to increased volume of the gonads?), which, when over a certain level, permits the development of kisspeptin neurons, which then stimulate the GnRH neurons, resulting in increased LH and E2. We then have more E2, which makes the kisspeptin neurons grow even more, and so on.



Kisspeptin system maturation
From: Postnatal development of an estradiol-kisspeptin positive feedback mechanism implicated in puberty onset. - Clarkson et al. - Endocrinology, 2009

Friday, 30 May 2008

microbiology - Can the RNA in the HIV virus make viral enzymes without entering the nucleus?

I am afraid your question is really not clear. You are asking one thing in the title of your question, another in the question body and a third in your comments. If you are asking (as you did in your comment above) what would happen if a virus with no enzymes were to infect a cell, see below.



In the case of HIV (and other retroviruses) some of the most important enzymes contained in a normal viral particle are:



  • A reverse transcriptase (RT): this enzyme reverse-transcribes the viral RNA genome to cDNA.

  • An integrase: incorporates the cDNA produced by the RT into the host cell's DNA.

  • A ribonuclease: an enzyme that can degrade (cut up) RNA. In the case of HIV this is the same protein as the RT.

  • A protease: an enzyme that can degrade other proteins.

Viruses work by copying their genetic material (their DNA or RNA) into the host cell's genome and then hijacking the cell's replication machinery to make more copies of the virus. So, what would happen if these enzymes were absent?



  • Without an integrase, the virus would not be able to insert its reverse transcribed DNA into the cell's genome.


  • Without an RT, the virus would not be able to copy its RNA into DNA.


  • The ribonuclease activity is also needed for successful reverse transcription to double stranded DNA (see [1] for a review and an explanation of why).


  • The protease is necessary for the formation of new viral particles. Mutations in this enzyme result in aberrantly assembled virus particles with low infectivity [2].


What you really must remember is that viruses are extremely streamlined. Everything they contain is essential. In some ways, you could say that viruses are the most highly evolved species (assuming they are living species): they are the most highly specialized "life forms" we know of, and they have managed to get rid of all non-essential functions. The flip side of this is that everything that is left is essential; the virus cannot function without it.



So, to answer your question, a virus without any enzymes is, essentially, not a virus. It would not work and would not successfully infect cells.




References:



  1. Greg L. Beilhartz and Matthias Götte, HIV-1 Ribonuclease H: Structure, Catalytic Mechanism and Inhibitors, Viruses 2010, 2:900-926


  2. Bukrinskaya A. HIV-1 matrix protein: a mysterious regulator of the viral life cycle. Virus Res. 2007 124(1-2):1-11.


co.combinatorics - Describe a tree by junctions

I have n sectors, enumerated 0 to n-1 counterclockwise. The boundaries between these sectors are infinite branches (n of them).



These branches meet at certain points (junctions). Each junction is adjacent to a subset of the sectors (at least 3 of them).



By specifying which sectors my junctions are adjacent to, I can completely recover the tree.
This seems like something known, but I would like a reference for it.



The number of trees with n branches is given by
http://www.oeis.org/A001003
and this is quite easy to prove.
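As an aside, here is a quick sketch (not from the original post) for generating these counts from the standard OEIS recurrence; how the OEIS index matches "trees with n branches" should be checked against small cases, so treat the offset as an assumption.

```python
def a001003(n_terms):
    """First n_terms of OEIS A001003 (little Schroeder numbers)."""
    a = [1, 1]  # a(0) = a(1) = 1
    for n in range(2, n_terms):
        # OEIS recurrence: a(n) = (3(2n-1) a(n-1) - (n-2) a(n-2)) / (n+1)
        a.append((3 * (2 * n - 1) * a[n - 1] - (n - 2) * a[n - 2]) // (n + 1))
    return a[:n_terms]

print(a001003(8))  # [1, 1, 3, 11, 45, 197, 903, 4279]
```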



Furthermore, if I order the sectors in the description of the junctions, I can make this representation unique.



Example:
(0,1,2,3,4,5) represents the tree with only one vertex, and 6 branches connected to this junction.

Wednesday, 28 May 2008

neuroscience - What are the molecular mechanisms that make the turtle brain more resistant to hypoxia?

There is a bunch of literature on the topic. A good starting point is probably the short description with lots of references in this thesis (page 8), not to mention other articles that pop up in Google Scholar: 1, 2.



The mechanisms are multifaceted and principally involve decreases in oxygen and ATP demand: reduced neuronal activity, lower density of ion channels (but hyperpolarization of the membranes) and so on. Concerning blood flow: "Brain blood flow was continued or increased, and oxygen and creatine phosphate (PCr) stores offered some immediate protection. As PCr declined, turtle brain became increasingly reliant upon anaerobic glycolysis."

molecular biology - What is a simple protocol for staining cells in suspension?

You will get a lot of false-positives using the following method, and a real transfection of a fluorescent protein is always the way to go, because then you will prove that the transfection was really successful.



That said, you could try to use DAPI, followed by fluorescence microscopy or flow cytometry. DAPI is a very bright stain for DNA and cannot pass an intact cell membrane. If the cell membrane is disrupted, e.g. by necrosis, DAPI will enter the cell. I have never seen anyone trying to prove electroporation this way, but it would be worth a try. It could be that the time is too short for DAPI to enter the cell, so you'd need to tweak your setup.



I'd try the following: Prepare a cell solution with DAPI (something like 5 µM concentration should do). Split the solution into two electroporation cuvettes, perform the electroporation with one cuvette, and leave the second one untouched as a control. After the electroporation, analyse the samples IMMEDIATELY (that is, within a few minutes) using a flow cytometer (if possible).



As for the cells, you could use anything. I suggest eukaryotic cells because they are easier to visualize than bacteria. However, you would need a laminar flow hood for aseptic handling of the cells. In theory you could also use bacteria, which are easier to handle, but I'm not experienced with immunofluorescence in bacteria.

ag.algebraic geometry - The Infinitesimal topos in positive characteristic

This question was inspired by and is somewhat related to this question.



In his article "Crystals and the de Rham cohomology of schemes" in the collection "Dix exposes sur la cohomologie des schemas", Grothendieck defines the (small) infinitesimal site of an $S$-scheme $X$ using thickenings of usual opens. He then proceeds to prove that in characteristic $0$ the cohomology with coefficients in $\mathcal{O}_X$ computes the algebraic de Rham cohomology of the underlying scheme. This is remarkable, because the definition of the site does not use differential forms and it is not necessary for $X/S$ to be smooth.



This fails in positive characteristic, and as a remedy, Grothendieck suggests adding the additional data of divided power structures to the site, which he then calls the "crystalline site of $X/S$". This site then has good cohomological behaviour (e.g. if $X$ is liftable to characteristic $0$, then cohomology computed with the crystalline topos is what it "should be"). The theory of the crystalline topos was of course worked out very successfully by Pierre Berthelot.



My question is: Even though the infinitesimal site is in some sense not nicely behaved in positive characteristics, have people continued to study it in this context? What kind of results have been obtained, and has it still been useful? I'm particularly interested in results about $D$-modules in positive characteristic (i.e. crystals in the infinitesimal site if $X/S$ is smooth), but I am also curious to see in which other directions progress has been made.

Tuesday, 27 May 2008

epigenetics - How does geography affect morphological features of the human body

I've seen many times how a person born in one place goes to another country for a long time and then starts looking more like the people there, but I never found out how this works.
This report claims that second-generation Japanese born in the US are taller and heavier than those living in their native lands (there had been no intermingling of races).



  • To what extent do genetic/environmental factors control human morphology?

  • What factors apart from diet can bring about such morphological differences?

ct.category theory - How can I define the product of two ideals categorically?

Given a commutative ring $R$, there is a category whose objects are surjective ring homomorphisms $R \to S$ and whose morphisms are commutative triangles making two such surjections compatible, and the skeleton of this category is a partial order that can be identified with the lattice of ideals of $R$. Now, I have always been under the impression that anything one can say about ideals one can phrase in this purely arrow-theoretic language: most importantly, the intersection of ideals is the product in this category and the sum of ideals is the coproduct. (Since we're working in a partial order, product and coproduct are fancy ways to say supremum and infimum. The direction of the implied ordering on ideals may differ here from the one you're used to, but that's not important.)



However, Harry's made some comments recently that made me realize I don't know how to define the product of two ideals purely in terms of this category, that is, via a universal construction like the above. It would be really surprising to me if this were not possible, so maybe I'm missing something obvious. Does anyone know how to do this?

human biology - Why do genitals feel frozen when freefalling?

Probably because during free-fall the blood is forced upwards, physically, by inertia, so the genitals' blood supply becomes obstructed, reducing their temperature and thus leading to the sensation of them being frozen, as you described. You might also feel cooler all over your body due to the air flow against it. Moreover, since the genitals are not biologically favoured when your body's survival instincts for low-temperature environments kick in, your body would probably reduce the blood flow to the genitals and other extremities, favouring vital organs such as the heart, lungs and brain with a greater blood supply to keep them warm and working.

Monday, 26 May 2008

soft question - Most interesting mathematics mistake?

Cantor's been mentioned, but I think the lessons there should be different. First, the really big mistake was that of highly-reputed academics (including, I believe, Poincare, Kronecker and even Wittgenstein) who rejected his ideas. And (related) second, even in a wiki devoted to mistakes it seems somewhat carping to fault Cantor for failing to spot a subtlety without at the same time adequately crediting his genius.



Somewhat along the same lines, one might mention Fourier's difficulties in getting his ideas accepted.

Is cell senescence in culture comparable to that in vivo?

A cell is 'senescent' when it has permanently left the cell cycle. This can be caused by stresses, or by reaching the 'Hayflick limit' (the cell has reached its replicative lifespan, as defined by its telomeres).



Cells cultured in vitro can be used as models to study senescence (or sometimes 'ageing', although the distinction there is not necessarily within scope of this question) by growing the same population of cells for a long time (many passages). A common method to identify a senescent cell population is to use beta-galactosidase staining.



I was wondering to what degree senescent cells in culture (identified by beta-gal staining) are actually comparable to those you might expect in vivo. I ask because it seems to me that a terminally senescent cell may not actually survive long in an organism, so what we refer to as senescent cells in vivo are actually pre-senescent cells? I haven't got any basis for this, just a feeling. (Very scientific I know).

Sunday, 25 May 2008

senescence - Exercise causes number of cell divisions to approach Hayflick limit faster? And hence shorten life expectancy?

A world class athlete spends a lot of time performing intense exercise. Correct me if I'm wrong, but I assume that this intense exercise causes significant damage to the athlete's cells, but with proper nutrition, they are able to replenish these cells with stronger ones. This process of destroying and re-creating cells causes the number of cell divisions to approach the Hayflick limit faster, and hence shortens the athlete's life expectancy.



I also heard that caloric intake causes the number of cell divisions to approach the Hayflick limit faster. World class athletes generally consume substantially more calories than the average individual, which also contributes to a shorter life expectancy.



So for this reason, is the life expectancy of world class athletes generally shorter than the average individual? And what about casual athletes?



Edit
If world class athletes live longer lives than the average individual, why is this so, despite their performing activities that expedite the deterioration of telomeres and hence approach the Hayflick limit faster?

Friday, 23 May 2008

soft question - What should be offered in undergraduate mathematics that's currently not (or isn't usually)?

Personally, I think the answer to this question is largely going to depend on one's particularly interests (whether they lie in algebra, analysis, topology, or whatever). This can be seen from many of the previous posts.



That being said, I do think that more number theory would be a great addition to the undergraduate curriculum. Many students take an introductory number theory course (or skip it because they learned it all in high school) and then don't do any more. There are lots of great areas of number theory which don't require too much background. P-adics would be great (Gouvea even laments in his book that p-adics aren't taught earlier - so maybe such a course should use his book). One could teach a basic semester of algebraic number theory, or a course in elliptic curves (following Silverman and Tate, for example). Both of these require no more than a basic course in undergraduate algebra. You can probably find these courses at many top universities, but they usually aren't emphasized as much to undergraduates. The reason why I think that these would be good is because number theory is a particularly beautiful area of math, and by getting glimpses of modern number theory early on, students get to see how beautiful is the math that's ahead of them.
(Another possibility is to have a course on Ireland and Rosen's book A Classical Introduction to Modern Number Theory. Princeton had a junior seminar on this book, for example.)



I also think Riemann surfaces are a very beautiful topic which should be taught early on and aren't too complicated in their most basic form. For, you get to see the deep geometrical theory lying behind $e^{2\pi i}=1$ and the ambiguity of complex square roots which you learned about when you were younger. It shows the student that there can be very deep ideas lying behind a simple observation, and it shows the beauty and deep understanding that modern mathematics can lead you to.

What is the biology behind a skin "mole"?

A mole is simply a benign tumour, i.e. a proliferated cell growth that hasn't become cancerous. So moles are not dead cells; they are very much alive. The colour is caused by a high concentration of melanin, which is also responsible for normal darker skin.



Since moles are tumours, they can – but in most cases don’t – give rise to melanomas, malignant skin tumours, when they lose susceptibility to cell growth regulation and start invading surrounding tissue.



Scar tissue is entirely unrelated and is due to the regular regrowth of epithelial cells after injury forming a linear collagen structure (as opposed to the skin's normal collagen structure, which resembles a "woven" structure).

Thursday, 22 May 2008

cancer - What is a "tool strain"?

When a biologist is talking about a genetically engineered mouse strain which is a "tool strain", what does that mean? What is the exact definition of a tool strain? What is the difference between a tool strain and any other mouse strain?



Also, how are tool strains connected to recombinase techniques? Does using a recombinase automatically create a tool strain? Or are the properties "tool strain" and "recombinase containing strain" independent from each other?



If it is important, the context is mouse (and possibly other animal) strains used in cancer research.



It would help if you could keep the explanation high-level. I only have high-school biology knowledge and I am trying to make sense of the requirements for a software application for use by biologists.

cv.complex variables - Region and domains?

Standard definitions in geometric complex analysis are as follows:



A domain is a nonempty open connected set (just as in analysis in general).



A region is a set whose interior is a domain and which is contained in the closure of its interior.



For example, the union of the open unit disk with none, part, or all of its boundary (the unit circle) is a region.



The closed unit disk together with the interval $[1,2]$ on the real axis is not a region.

Wednesday, 21 May 2008

spherical geometry - Intersection of two rhumb line segments

Short Version:



How would one find the point of intersection of two rhumb line segments defined by two pairs of points on the globe? Assumptions such as a spherical Earth and following the shortest-path are A-OK.



Long Version:



Been crawling the web looking for resources. There are a few decent spherical geometry pages specific to GIS such as Ed Williams' AVSIG and this Movable Type site. Finding the intersections of line segments interpreted as great circle arcs is fairly trivial, and covered on those sites. Unfortunately this situation with rhumb line arcs is not.



Given two pairs of lat/lon locations, each pair defining a shortest-path rhumb line segment, how would you find the point of intersection?



It seems like it should be as simple as using the formula for projecting a destination location given a starting point, bearing and distance (formulas available on the Movable Type site): take that for both lines, set the lat/lons equal to each other, and solve for a distance. I haven't had much success deriving such a method, though.
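For what it's worth, here is one way to make that work (a sketch, not from the post): rhumb lines map to straight lines under the Mercator projection, so intersecting two rhumb segments reduces to a planar segment intersection. This assumes a spherical Earth and segments that avoid the poles and do not wrap across the antimeridian.

```python
import math

def to_mercator(lat, lon):
    # Under Mercator, constant-bearing (rhumb) tracks become straight lines.
    return math.radians(lon), math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))

def from_mercator(x, y):
    return math.degrees(2 * math.atan(math.exp(y)) - math.pi / 2), math.degrees(x)

def rhumb_intersection(p1, p2, p3, p4):
    """Intersection of rhumb segments p1-p2 and p3-p4 ((lat, lon) in degrees),
    or None if the segments do not cross."""
    (x1, y1), (x2, y2) = to_mercator(*p1), to_mercator(*p2)
    (x3, y3), (x4, y4) = to_mercator(*p3), to_mercator(*p4)
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None  # parallel (same-bearing) rhumb lines
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    s = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if not (0 <= t <= 1 and 0 <= s <= 1):
        return None  # the lines cross outside the segments
    return from_mercator(x1 + t * (x2 - x1), y1 + t * (y2 - y1))

print(rhumb_intersection((0, 0), (10, 10), (0, 10), (10, 0)))  # ~ (5.0, 5.0)
```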



This really seems like a solved problem so I'm hoping I'm just not looking in the right places!

co.combinatorics - Name of a particular conjugate permutation

This is the reverse-complement of $\pi$.



In one-line notation, the reverse of a permutation is what you get by writing it backwards and the complement of a permutation is what you get when you replace each entry $i$ by $n - i + 1$. (In other words, one of these operations is multiplication by $\chi$ on the right, the other on the left.) The reverse-complement is what you get by doing both of these operations, or equivalently by giving the permutation matrix a half-turn. (Together with inversion, these operations generate the dihedral group acting on each permutation matrix.)
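A minimal illustration in one-line notation (my code, not the answerer's):

```python
def reverse(p):
    """Write the permutation backwards."""
    return p[::-1]

def complement(p):
    """Replace each entry i by n - i + 1."""
    n = len(p)
    return [n - i + 1 for i in p]

def reverse_complement(p):
    """Both operations, i.e. a half-turn of the permutation matrix."""
    return complement(reverse(p))

p = [2, 1, 3, 4]
print(reverse(p))             # [4, 3, 1, 2]
print(complement(p))          # [3, 4, 2, 1]
print(reverse_complement(p))  # [1, 2, 4, 3]
```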

ds.dynamical systems - Does essentially minimal imply minimal?

Suppose X is a compact and totally disconnected space, and that phi is a homeomorphism of X.



We say a subset Z of X is phi-invariant if phi(Z) = Z. A phi-invariant set is minimal if it is closed, phi-invariant, nonempty, and minimal among all such sets with respect to inclusion. We say (X,phi) is minimal if X itself is a minimal set.



An orbit of x in X is the set {phi^n(x) : n an integer}



A system (X,phi) is minimal iff every orbit is dense.



Given (X,phi) as above and any point y in X, the system is "essentially minimal" if one of the following equivalent conditions holds:
1) For all x in X, y lies in the closure of { phi^n(x) : n >= 0, n an integer }.
2) For all x in X, y lies in the closure of { phi^n(x) : n < 0, n an integer }.
3) X contains a unique minimal set Y, and y is in Y.



If a system is minimal, then condition 3 is satisfied (setting Y := X), and the system is hence essentially minimal.



Does essential minimality imply minimality?

computational complexity - Characterize P^NP (a.k.a. Delta_2^p)

Here is another interesting characterization of $P^{NP}$. I found it as an undergrad but could not publish it; it turned out to be a "folklore" result. It is entertaining nevertheless, and you will learn something about $P^{NP}$ by reproving it for yourself. We will define a natural deterministic model of computation whose class of recognized languages will be $P^{NP}$.



A machine $M$ in our model is described as follows. We take a polynomial time Turing machine which, on an input of length $n$, is granted access to a bit counter that holds $n^k$ bits, for some fixed $k$. Initially the counter is all zeros. Along with the usual start, accept and reject states, the Turing machine has a special extra state called increment, with the following properties:



  • If the Turing machine enters the increment state, the counter is incremented by $1$, and the Turing machine resets to its initial starting configuration. That is, all the workspace used by the machine is reset to blanks, all tape heads move back to the beginning of the tapes, and the machine switches to its start state.

  • If the Turing machine reaches accept or reject, the entire process halts with this result.

  • If the counter reaches all-ones, the process rejects.

This is a natural way to characterize brute-force search for an NP solution: the counter represents the search space, and we run a specific polytime Turing machine that tests each counter value in turn until we decide to accept or reject.
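A toy simulation of the model (hypothetical code, just to fix ideas; `step` plays the role of the polytime Turing machine, rerun from a clean start for each counter value):

```python
def run(step, x, k=1):
    bits = len(x) ** k              # the counter holds n^k bits
    counter = 0
    while counter < 2 ** bits - 1:  # reaching the all-ones counter rejects
        verdict = step(x, counter)  # one polytime pass from the start state
        if verdict in ("accept", "reject"):
            return verdict
        counter += 1                # "increment": bump counter, reset machine
    return "reject"

# Example: brute-force search for a satisfying assignment of a small fixed
# formula, reading the counter's low bits as the candidate assignment.
def sat_step(x, counter):
    a, b, c = (counter >> 0) & 1, (counter >> 1) & 1, (counter >> 2) & 1
    return "accept" if (a or b) and (not a or c) else "increment"

print(run(sat_step, "abc"))  # accept
```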



Theorem: The class of all languages recognized by such $M$ is exactly $P^{NP}$.



Good luck with the proof. Here is a hint: one direction is easy, by Ryan O'Donnell's comment on complete languages for $P^{NP}$.

Tuesday, 20 May 2008

lo.logic - Predicative definition

A definition of an object X is called impredicative if it quantifies over a collection Y to which X itself belongs (or at least could belong). The classic example is the set occurring in Russell's paradox, defined by "the members of X are all sets s that are not members of themselves". This quantifies over all sets, including X itself.



But impredicative definitions occur (without paradox) in ordinary mathematics also. For example, one might define a real number r as the supremum of a set A that might have r itself among its members. Unraveling the definition of "supremum" we would find quantification over A (and indeed quantification over the set of all real numbers).



Russell proposed to eliminate the set-theoretic and logical paradoxes by eliminating impredicative definitions, and "Principia Mathematica" (by Russell and Whitehead) develops an elaborate mechanism for this. Unfortunately, too much of ordinary mathematics was unprovable in that system, so Russell and Whitehead found it necessary to add the so-called axiom of reducibility, whose principal effect is to counteract the predicativity-enforcing mechanism and make impredicative mathematics available again.

rt.representation theory - Are low dimensional modular representations of SL2(Fp) completely reducible?

The essential work in this direction was published from 1994 on by J.-P. Serre
and J.C. Jantzen, concerning both algebraic groups and related finite groups
of Lie type. Related papers by R. Guralnick and G.J. McNinch followed. There are uniform dimension bounds for complete reducibility, stricter in rank 1. For a finite group of simple type over a field of $q$ elements in characteristic $p$, Jantzen's upper bound is $p$ for rank at least 2 but $p-2$ in your case. The best I can do is list a few references:



MR1635685 (99g:20079) 20G05 (20G40)
Jantzen, Jens Carsten (1-OR)
Low-dimensional representations of reductive groups are semisimple.
Algebraic groups and Lie groups, 255–266, Austral. Math. Soc. Lect. Ser., 9, Cambridge Univ.
Press, Cambridge, 1997.



MR1753813 (2001k:20096)
McNinch, George J.(1-NDM)
Semisimple modules for finite groups of Lie type.
J. London Math. Soc. (2) 60 (1999), no. 3, 771--792.



MR1717357 (2000m:20018) 20C20
Guralnick, Robert M. (1-SCA)
Small representations are completely reducible.
J. Algebra 220 (1999), no. 2, 531–541.

Monday, 19 May 2008

human biology - The fundamental importance of R.E.M. Sleep. (Rapid Eye Movement)

Question:



I know that experiments have been conducted to determine the importance of R.E.M. sleep in our sleep cycle. It is particularly important for learning, information synthesis, and recovery from distress. Why else is R.E.M. sleep important? What experiments have been done, or observations made, to determine the neurological mechanisms underlying R.E.M. sleep? I know that we exhibit high frequency $\alpha$ waves, similar to the waves we experience during wakefulness.



Wiki:



During REM sleep, high levels of acetylcholine in the hippocampus suppress feedback from the hippocampus to the neocortex, and lower levels of acetylcholine and norepinephrine in the neocortex encourage the spread of associational activity within neocortical areas without control from the hippocampus. This is in contrast to waking consciousness, where higher levels of norepinephrine and acetylcholine inhibit recurrent connections in the neocortex. REM sleep through this process adds creativity by allowing "neocortical structures to reorganise associative hierarchies, in which information from the hippocampus would be reinterpreted in relation to previous semantic representations or nodes."



Do these reorganized neocortical hierarchies remain this way?



Just HOW integral is R.E.M. sleep to our brain development?

na.numerical analysis - Inverting a covariance matrix numerically stable

Cholesky sounds like a good option for the following reason: once you've done the Cholesky factorization to get $C=LL^T$, where $L$ is triangular, $x^TC^{-1}x = ||L^{-1}x||^2$, and $L^{-1}x$ is easy to compute because it's a triangular system. The downsides to this are that even if $C$ is sparse, $L$ is probably dense, and also that you do the same amount of work for all $C$ and $x$ while other methods may allow you to exploit some special structure and get good approximations to the solution with less work. For those reasons, you might also consider Krylov subspace methods for computing $C^{-1}x$, like conjugate gradients (since $C$ is symmetric and positive definite), especially if $C$ is sparse. $n=250$ isn't terribly large, but still large enough that Krylov subspace methods could pay off if $C$ is sufficiently sparse. (There might actually be special methods for computing $x^TC^{-1}x$ itself as opposed to $C^{-1}x$, but I don't know of any.)
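A minimal numerical sketch of the Cholesky route (assuming $C$ is symmetric positive definite; numpy/scipy only):

```python
import numpy as np
from scipy.linalg import solve_triangular

def quad_form(C, x):
    L = np.linalg.cholesky(C)               # C = L L^T with L lower triangular
    y = solve_triangular(L, x, lower=True)  # y = L^{-1} x, a triangular solve
    return y @ y                            # x^T C^{-1} x = ||L^{-1} x||^2

rng = np.random.default_rng(0)
A = rng.standard_normal((250, 250))
C = A @ A.T + 250 * np.eye(250)  # a well-conditioned SPD test matrix
x = rng.standard_normal(250)
print(np.isclose(quad_form(C, x), x @ np.linalg.solve(C, x)))  # True
```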



Edit: Since you care about stability, let me address that: Cholesky is pretty stable, as you note. Conjugate gradients is notoriously *un*stable, but it tends to work anyway, apparently.

neuroscience - How stable is in vivo whole cell patch clamping?

After talking to a few electrophysiologists, I found out that they have very diverse opinions.



It seems that a common way of stabilizing the movement of the brain with respect to the heartbeat is to press on the brain with some softish-rigid material like agar. Also, draining the CSF could help, but this cannot be done without sacrificing the animal.



The quality of the patch degrades over time, and it could be held for an hour or more (anecdotal).

Sunday, 18 May 2008

pr.probability - What m minimizes E(|m-X|^3) for a random variable X?

I assume you mean |m-X| as opposed to |m-EX|? Otherwise, |m-EX| is not a random variable, so E(|m-EX|^k) = |m-EX|^k, which is zero (and hence minimized) when m = EX -- i.e., the mean -- and that's probably not what you're asking.



After a bit of Googling around, it looks like you might be talking about the third absolute central moment E(|X-EX|^3), which is related to something called the Berry-Esseen inequality ... see here.

Saturday, 17 May 2008

ag.algebraic geometry - Degree of canonical bundle?

Take any curve at all, of any genus g, and any divisor of degree d > 2g. This embeds the curve into projective space with degree d, and a generic projection embeds it in P^3, still with degree d; this works for any d > 2g. So d and n determine almost nothing about the curve.


On the positive side, interestingly, the nice counterexample given for the original question, a rational cubic in P^3, although not determined by its degree, is completely determined by its degree and the fact that (unlike the plane cubic) it spans P^3. (Rational normal curves are about the only examples I can think of, spanning but not a complete intersection, where d,n do determine all the invariants.)



I guess you could give an inequality at least for the genus (i.e. h^1(O)) of curves in P^3, since a curve of degree d in P^3 projects to a plane curve of degree d-1, hence has genus bounded above by that of a general such plane curve. Indeed Castelnuovo has a famous such inequality.
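For the record, the resulting bound (my gloss, not in the original answer): projecting from a point of the curve yields a plane curve of degree $d-1$, whose arithmetic genus bounds the geometric genus of the original curve, giving

$$g \le \frac{(d-2)(d-3)}{2}.$$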

ag.algebraic geometry - products and smooth/étale/unramified morphisms

No. As an extreme example, suppose that $g$ is the identity (which is etale everywhere), and that $f$ is not etale at some point. Then the fibre product is just $f$ again.



But in fact, this is essentially the general case. If $g$ is etale (or smooth) at a point, then it is etale (resp. smooth) in a n.h. of that point, so we may replace $Z$ by the n.h. and so assume that
$g$ is etale everywhere. Then if $f$ is not etale (or smooth) at a point $y \in Y$, the product will not be etale in a n.h. of $y \times Z$.



(Imagine that $Y$ was e.g. a nodal curve with a node at $y$, and that $Z$ is a smooth
curve. (Here $X$ is Spec of the ground field.) Then $Y \times Z$ is the product of a nodal
curve and a smooth curve, which just looks like a cylinder over the nodal curve; it is
singular all along the "cylinder" over the node.)

singularity theory - Which presentations of (non)planar algebras give rise to knots?

Reidemeister's theorem states that the set of knots, modulo ambient isotopy, is isomorphic to the planar algebra generated by crossings, modulo Reidemeister moves. This planar algebra presentation is the starting point for much of quantum topology. Of course this set of generators and relations isn't unique. I'm interested in unknotting moves other than crossing changes, and I would like to ask



Is there another known "convenient" planar algebra presentation, generators modulo relations, which gives rise to knots? In particular, can I sensibly choose generators corresponding to resolutions of triple-points?


We can generalize in many ways. For example we can allow circuit algebras, which are non-planar, and obtain the set of virtual knots. I have the same question regarding such generalizations. Also



Is there a result that any presentation of a planar algebra giving rise to knots, other than the one given by crossings modulo Reidemeister moves, would necessarily be significantly harder to work with? I.e. is there some sort of non-trivial "optimality result" for the presentation "crossings mod Reidemeister moves"?

molecular biology - Why am I getting low transformation efficiency with DB3.1 E.coli cells?

I am making competent cells using DB3.1 E. coli cells. Even after following the exact protocol (Inoue method for ultracompetent cells) given in 'Sambrook and Russell', I am not getting a transformation efficiency of more than 10^4. I am using a 5.1 kb plasmid for checking transformation efficiency.
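In case it helps with comparing numbers, here is the standard bookkeeping for transformation efficiency (my sketch; the names and example figures are illustrative, not from the post):

```python
def transformation_efficiency(colonies, ng_dna, plated_ul, total_ul):
    """Colonies per microgram of plasmid DNA, corrected for the fraction
    of the recovery volume actually plated."""
    ug_dna_plated = (ng_dna / 1000.0) * (plated_ul / total_ul)
    return colonies / ug_dna_plated  # cfu per ug DNA

# e.g. 50 colonies from 1 ng of DNA, plating 100 ul of a 1000 ul recovery:
print(f"{transformation_efficiency(50, 1, 100, 1000):.1e} cfu/ug")  # 5.0e+05
```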



I will be thankful if any of you can share your experience in this experiment.

Friday, 16 May 2008

algebraic groups - Duflot-type theorem for Hopf algebras ?

In group cohomology Duflot's theorem states that the depth of the mod p cohomology ring of a finite group is greater than or equal to the p-rank of the center of a Sylow p-subgroup.



Is there a corresponding result for the cohomology of a finite dimensional cocommutative Hopf algebra (or, equivalently, a finite group scheme) ?



Any hint is appreciated.

gr.group theory - Is a quotient of a reductive group reductive?

It is important in answering this question that one can extend scalars to a perfect (e.g., algebraically closed) ground field, as was implicit in many of the other answers even if not said explicitly. Indeed, if $k$ is an imperfect field then it always happens that there exist many examples of pairs $(G,H)$ with $G$ a smooth connected affine $k$-group containing no nontrivial smooth connected unipotent normal $k$-subgroup and $H$ a smooth connected normal $k$-subgroup in $G$ such that $G/H$ contains a non-trivial smooth connected unipotent normal $k$-subgroup.
This can even happen when $G$ and $H$ are perfect (i.e., equal to their own derived groups), which is really disorienting if one is accustomed to working with reductive groups.



For a simple commutative example, let $k'/k$ be a purely inseparable extension of degree $p = \mathrm{char}(k)$ and let $G$ be the Weil restriction $\mathrm{Res}_{k'/k}(\mathbf{G}_m)$, which is just "$k'^{\times}$ viewed as a $k$-group". This contains a natural copy of $\mathbf{G}_m$, and since $k'/k$ is purely inseparable this $k$-subgroup $H$ is the unique maximal $k$-torus and the quotient $G/H$ is unipotent of dimension $p-1$ (over $\overline{k}$ it is a power of $\mathbf{G}_a$ via truncation of $\log(1+x)$ in degrees $< p$, as one sees using the structure of the $\overline{k}$-algebra $\overline{k} \otimes_k k'$). The main point then is that $G$ itself contains no nontrivial smooth connected unipotent $k$-subgroups, which is true because we are in characteristic $p > 0$ and $G$ is commutative with $G(k_s)[p] = {k'_s}^{\times}[p] = 1$! Note: the unipotent quotient $G/H$ is an example of a smooth connected unipotent $k$-group (even commutative and $p$-torsion) which contains no $\mathbf{G}_a$ as a $k$-subgroup (proof: commutative extensions of $\mathbf{G}_a$ by $\mathbf{G}_m$ split over any field, due to the structure of $\mathrm{Pic}(\mathbf{G}_a)$ and a small calculation); that is, $G/H$ is a "twisted form" of a (nonzero) vector group, which never happens over perfect fields.



Making examples with perfect $G$ and $H$ is less straightforward; see Example 1.6.4 in the book "Pseudo-reductive groups".



As for the suggestion to use Haboush's theorem (whose proof I have never read), I wonder if that is circular; it is hard to imagine getting very far into the theory of reductive groups (certainly to the point of proving Haboush's theorem) without needing to already know that reductivity is preserved under quotients (a fact that is far more elementary than Haboush's theorem, so at the very least it seems like killing a fly with a sledgehammer even if it is not circular).



Finally, since nobody else has mentioned it, look in any textbook on linear algebraic groups (Borel, Springer, etc.) for a proof of the affirmative answer to the original question. For example, 14.11 in Borel's book. Equally important in the theory is that for arbitrary smooth connected affine groups, formation of images also commutes with formation of maximal tori and especially (scheme-theoretic) torus centralizers; see corollaries to 11.14 in Borel's book.

neuroscience - Can parts of a human brain be asleep independently of each other, or vary in the times required for them to fall asleep?

I know that some birds and marine animals can continue complicated activity (swimming, flying?) while one hemisphere of their brain is asleep.



I'm interested in whether the human brain has some parts that can be asleep while others are awake. In other words, can a human brain be only partially asleep while experiencing insomnia or similar sleep disturbances?



If the human brain can have different parts "sleeping" independently of each other, is it possible that the times to "fall asleep" vary between these different parts of the brain?



I would appreciate research articles on the topic or just the names of brain regions that may exhibit behavior described above.



Update: I've taken a look at R&K's "Principles and Practice of Sleep Medicine", and it mentions the following parts as involved in sleep:



Medulla, preoptic area, hypothalamus, thalamus, entire neocortex involved in NREM.



Neurotransmitter systems: histaminergic, orexinergic, serotonergic, noradrenergic



Sleep factors: adenosine, interleukin-1 and other cytokines, prostaglandin D2, growth hormone releasing hormone, nitric oxide, all promote sleep in or around preoptic area.



This makes me hypothesize that drugs that modify the effects of these systems (e.g. caffeine affecting adenosine) could result in sleep-related disturbances in these systems, potentially causing them to fall asleep later than usual. But I'm looking for more info to figure out if this is true.

Thursday, 15 May 2008

reference request - Does this problem have a name? [Ducci Sequences]

Let $a_1, \ldots, a_n$ be real numbers. Consider the operation which replaces these numbers with $|a_1 - a_2|, |a_2 - a_3|, \ldots, |a_n - a_1|$, and iterate. Under the assumption that $a_i \in \mathbb{Z}$, the iteration is guaranteed to terminate with all of the numbers set to zero if and only if $n$ is a power of two. A friend of mine knows how to prove this, but wants to be able to reference a source where this problem (and/or its generalization to real numbers) is mentioned, and we can't figure out what search terms to use. Can anyone help us out?
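A quick sketch for experimenting with the iteration (my code; the power-of-two statement is the theorem in question, not something this verifies in general):

```python
def ducci_steps(a, max_steps=10_000):
    """Number of steps until the all-zero tuple, or None if we give up."""
    a = tuple(a)
    for step in range(max_steps):
        if all(v == 0 for v in a):
            return step
        a = tuple(abs(a[i] - a[(i + 1) % len(a)]) for i in range(len(a)))
    return None

print(ducci_steps([3, 1, 4, 1]))  # 4  (n = 4, a power of two: terminates)
print(ducci_steps([3, 1, 4]))     # None (n = 3: the iteration cycles)
```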



(If someone can figure out a better title for this question, that would also be appreciated.)

nt.number theory - Modular curves of genus zero and normal forms for elliptic curves

This is maybe the first question I actually need to know the answer to!



Let $N$ be a positive integer such that $\mathbb{H}/\Gamma(N)$ has genus zero. Then the function field of $\mathbb{H}/\Gamma(N)$ is generated by a single function. When $N = 2$, the cross-ratio $\lambda$ is such a function. A point of $\mathbb{H}/\Gamma(2)$ at which $\lambda = \lambda_0$ is precisely an elliptic curve in Legendre normal form



$$y^2 = x(x - 1)(x - \lambda_0)$$



where the points $(0, 0), (1, 0)$ constitute a choice of basis for the $2$-torsion. When $N = 3$, there is a modular function $\gamma$ such that a point of $\mathbb{H}/\Gamma(3)$ at which $\gamma = \gamma_0$ is precisely an elliptic curve in Hesse normal form



$$x^3 + y^3 + 1 + \gamma_0 xy = 0$$



where (I think) the points $(\omega, 0), (\omega^3, 0), (\omega^5, 0)$ (where $\omega$ is a primitive sixth root of unity) constitute a choice of basis for the $3$-torsion.



Question: Does this picture generalize? That is, for every $N$ above does there exist a normal form for elliptic curves which can be written in terms of a generator of the function field of $\mathbb{H}/\Gamma(N)$ and which "automatically" equips the $N$-torsion points with a basis? (I don't even know if this is possible when $N = 1$, where the Hauptmodul is the $j$-invariant.) If not, what's special about the cases where it is possible?

Wednesday, 14 May 2008

galois cohomology - Why aren't there more classifying spaces in number theory?

Much of modern algebraic number theory can be phrased in the framework of group cohomology. (Okay, this is a bit of a stretch -- much of the part of algebraic number theory that I'm interested in...). As examples, Cornell and Rosen develop basically all of genus theory from cohomological point of view, a significant chunk of class field theory is encoded as a very elegant statement about a cup product in the Tate cohomology of the formation module, and Neukirch-Schmidt-Wingberg's fantastic tome "Cohomology of Number Fields" convincingly shows that cohomology is the principal beacon we have to shine light on prescribed-ramification Galois groups.



Of course, we also know that group cohomology can be studied via topological methods via the (topological) group's classifying space. My question is:




Question: Why doesn't this actually happen?


More elaborately: I'm fairly well-acquainted with the "Galois cohomology for number theory" literature, and not once have I come across an argument that passes to the classifying space to use a slick topological trick for a cohomological argument or computation (though I'd love to be enlightened). On the other hand, there are, for example, things like Tyler's answer to my question



Coboundary Representations for Trivial Cup Products



which strikes me as saying that there may be plenty of opportunities to carry over interesting constructions and/or lines of reasoning from the topological side to the number-theoretic one.



Maybe the classifying spaces for gigantic profinite groups are too hideous to think about? (Though there's plenty of interesting Galois cohomology going on for finite Galois groups...). Or maybe I'm just ignorant to the history, and that indeed the topological viewpoint guided the development of group cohomology and was so fantastically successful at setting up a good theory (definition of differentials, cup/Massey products, spectral sequences, etc.) that the setup and proofs could be recast entirely without reference to the original topological arguments?



(Edit: This apparently is indeed the case. In a comment, Richard Borcherds gives the link http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.bams/1183537593 and JS Milne suggests MacLane 1978 (Origins of the cohomology of groups. Enseign. Math. (2) 24 (1978), no. 1-2, 1--29. MR0497280)., both of which look like good reads.)

fa.functional analysis - Compact Hausdorff and C^*-algebra "objects" in a category.

Question 1: If I understand you correctly, you're proposing that $\mathbb{C}$ should be a compact Hausdorff object in some category because it represents a functor from that category to the category CH of compact Hausdorff spaces (in something like the sense that the functor $Hom(-, \mathbb{C})$ into Set factors through the forgetful functor from CH to Set). But I don't see why this should be sufficient to make $\mathbb{C}$ a compact Hausdorff object.



That is, presumably, from the approach of functorial semantics, a compact Hausdorff object in a category C should be a product-preserving functor from L to C, where L is the dual of the Kleisli category for the ultrafilter monad on Set (that is, L is the Lawvere theory whose category of (Set-)models is the category of compact Hausdorff spaces). I can see how, more generally, for any Lawvere theory L and category C, every C-model of L (i.e., a product-preserving functor F from L to C) induces a representable functor Hom(-, F(1)) from C to Set which factors through the forgetful functor from Set-models of L to Set. But it's not obvious to me that the converse of this holds as well (that every representable functor from C to Set with this factorization property arises from some C-model of L).



Perhaps I'm missing something and your reasoning for $\mathbb{C}$ being a compact Hausdorff object is something more than this. Perhaps I'm hopelessly confused. But, tentatively, I think the answer to question 1 is "No" or at least "Not necessarily".



(Edit: As seen below, the correspondence does go both ways, so the last line is retracted, leaving the second-to-last line...)

gr.group theory - Finding a subnormal series with specified quotients and end group of specific depth (defect)

This can always be done:



Given nontrivial groups $A_i$ for $0 \le i < n$, there exists a group $G$ and a subnormal series $H = H_0 < \cdots < H_n = G$ such that $H_{i+1}/H_i \cong A_i$ for $0 \le i < n$, and such that no shorter subnormal series from $H$ to $G$ exists.



Here is my proof:



We can assume $n > 1$, and we induct on $n$. By the inductive hypothesis, let $W$ be a group with subnormal series $V = V_1 < \cdots < V_n$, such that $V_{i+1}/V_i \cong A_i$ for $1 \le i < n$, and such that there exists no shorter subnormal series for $V$ in $W$. Write $A = A_0$ and let $G$ be the wreath product of $A$ with $W$ corresponding to the action of $W$ on the right cosets of $V$. In other words, $G = BW$ is a semidirect product, where $B \triangleleft G$ and $B$ is the direct product of $|W:V|$ copies of $A$. Also, $W$ acts to permute these direct factors of $B$, and this action is permutation isomorphic to the action of $W$ on the cosets of $V$ in $W$. (In fact, we assume that we are given a specific bijection from the set of cosets of $V$ onto the set of direct factors of $B$.)



Now let $C$ be the product of all of the direct factors of $B$ that correspond to nontrivial cosets of $V$, and note that $\mathbf{N}_W(C) = V$. Let $H = H_0$ be the group $CV$, and for $i > 0$, let $H_i = BV_i$. It is easy to see that $H_0 < H_1 < \cdots < H_n = G$ is a subnormal series with factors $A_i$ as wanted. We must show that no shorter subnormal series for $H$ exists. Note that the subnormal depth of $H_1$ is exactly $n - 1$. (This can be seen by intersecting a subnormal series for $H_1$ in $G$ with $W$. This yields a subnormal series for $V$ in $W$.)



Suppose $H \triangleleft K$. We argue that $BK = BV$. Otherwise,
$BK > BV$, so $BK \cap W > V$. But $BK$ normalizes $C$ since $C = B \cap H$, and this contradicts the fact that $V$ is the full normalizer of $C$ in $W$. Now if $H = K_0 < K_1 < \cdots < K_m = G$ is a subnormal series for $H$, then $H_1 = BV = BK_1 \subseteq \cdots \subseteq BK_m = G$ is a subnormal series for $H_1$ with length at most $m-1$, and thus $m \ge n$, as wanted.

Tuesday, 13 May 2008

simplicial stuff - Motivation for the covariant model structure on SSet/S

The quick answer is that the covariant model structure on sSet/S is one way to build an infinity-category of infinity-copresheaves on S, when S is an infinity-category. In fact, I don't understand why the covariant model structure is introduced first rather than the contravariant one - it is the latter which is used to construct infinity-presheaves, which are of course important examples of infinity-topoi.



(I am using the terminology "infinity-presheaf" to mean a contravariant infinity-functor from S to the infinity-category of infinity-groupoids)



It would perhaps be best to recall what happens in the case of 2-categories:



Let C be a category. We can, on one hand, consider the bicategory of weak functors C^op->Gpd, where the target is the bicategory of groupoids. On the other hand, we can consider categories fibred in groupoids over C, that is a functor D->C which is a Grothendieck fibration in groupoids. Both of these objects, weak presheaves and fibred categories respectively, naturally form bicategories.



We have a 2-functor G:Gpd^{C^op}->Fib_Gpd(C) between these bicategories given by the "Grothendieck construction". It has a left 2-adjoint and together this adjoint pair forms an equivalence of bicategories.



Lurie proves the infinity-analogue of this statement. To do so, he needs to form an infinity category of "Grothendieck fibrations in groupoids", and an infinity category of "infinity presheaves" and show they are equivalent.



The infinity-version of a Grothendieck fibration in groupoids is what Lurie calls a "right fibration". In particular, C->D is a Grothendieck fibration in groupoids if and only if N(C)->N(D) is a right fibration. Also, the fibers of any right fibration are Kan complexes, hence infinity-groupoids.



Given a simplicial set S, the contravariant model structure on sSet/S is enriched in sSet_Quillen, so we can form the associated full simplicial category on fibrant and cofibrant objects. An object X->S is fibrant in this model structure if and only if it is a right fibration. Hence, the homotopy-coherent nerve of this simplicial category is the infinity-category of "Grothendieck fibrations in infinity-groupoids over S".



In 2.2.1, Lurie introduces a functor St:sSet/S->sSet^{C(S)^op}, where C is the left adjoint to the homotopy-coherent nerve; here we mean we are considering functors of simplicial categories (treating sSet as a simplicial category, since it is enriched over itself). Since we can identify sSet/S with Set^{(Delta/S)^op} and the functor is colimit preserving, by formal nonsense it has a right adjoint, which Lurie denotes by "Un". "Un" is the "infinity-Grothendieck construction".



We can now equip sSet^{C(S)^op} with the projective model structure, and then the adjoint pair (St,Un) forms a Quillen-equivalence.
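
Schematically (my summary of the preceding paragraphs, not notation from the book):

St : (sSet/S, contravariant model structure) <--> (sSet^{C(S)^op}, projective model structure) : Un

with St the left adjoint, and the pair forming a Quillen equivalence.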



Now, sSet^{C(S)^op} can be turned into an infinity-category by applying the same construction as before: treat it as a simplicial category, restrict to fibrant and cofibrant objects, and take the homotopy-coherent nerve. This infinity-category is the infinity-category of infinity-presheaves on S.



The Quillen equivalence (St,Un) turns into an adjunction between the infinity-category of right fibrations over S and the infinity-category of infinity-presheaves over S, and moreover is an equivalence of infinity-categories.



The upshot is, the contravariant model structure gives us another way of describing infinity-presheaves.



As a side note, to understand why the contravariant model structure is defined the way it is, you should look at how the functor St is defined: the model structure is essentially "designed" so that (St,Un) becomes a Quillen equivalence.

soft question - Famous mathematical quotes

"There are, therefore, no longer some problems solved and others unsolved, there are only problems more or less solved, according as this is accomplished by a series of more or less rapid convergence or regulated by a more or less harmonious law. Nevertheless an imperfect solution may happen to lead us towards a better one."



Henri Poincaré

senescence - How is it that the WI-38 cell line isolated by Hayflick in 1962 is still very much around and not affected by the 'Hayflick Limit'?

WI-38 is the cell line which led to the proposal of the Hayflick limit, and it is the classic example of cells which will only divide ~40 times. They are not exempt; they are the example that proved the rule.



Lots of cell lines are commonly available, but they have to be recultivated regularly from an animal/tissue source. If they can be frozen, new cells need to be extracted less often, without exceeding the limit of divisions. This allows the cell lines to be used widely for experiments despite the limitations on their division in culture.



Other cell lines, like HeLa and other tumor cells or stem cells, can, under the proper conditions, divide without limit and are easier to culture in large volumes, but they are often inappropriate for a given research project.

Monday, 12 May 2008

ct.category theory - Nerves of (braided or symmetric) monoidal categories

I'm looking for references on the structure which can be roughly described as follows: given a (braided or symmetric) monoidal category $C$, I want to consider a simplicial set $N(\mathbf{B}C)$ with a single vertex, an edge for every object of $C$, a triangle with edges $X,Y,Z$ for every morphism $\varphi:Z\to X\otimes Y$, a tetrahedron for every four triangles making up a commutative diagram involving the associator of $C$, higher coherences...



Any suggestions? Thanks!

Sunday, 11 May 2008

zoology - Why do the feet of the Gecko Lizard not stick to Teflon surfaces?

Gecko feet have been the focus of a lot of research over the past 10 years.



Their ability to walk up vertical surfaces, or even hang from ceilings, is attributed to tiny branching hairs growing from the pads of their feet. The hairs branch down to the scale of hundreds of nanometers, and it's believed that this gives them a large surface area on the molecular scale, producing a strong van der Waals adhesion force.



Micrograph of Gecko hairs from footpad



Teflon, being hydrocarbon chains covered with 'hard' electronegative fluorine atoms, doesn't provide the same attraction to the hairs, as van der Waals forces rely on the mutual polarizability of atoms in close contact. That is to say, electronically fluffy atoms can stick together more readily.

Saturday, 10 May 2008

human biology - Slow-oxidative fibres vs fast-glycotic fibres

This is well-explained at the Wikipedia page on skeletal striated muscle.




There are two principal ways to categorize muscle fibers: the type of
myosin (fast or slow) present, and the degree of oxidative
phosphorylation that the fiber undergoes. Skeletal muscle can thus be
broken down into two broad categories: Type I and Type II. Type I
fibers appear red due to the presence of the oxygen binding protein
myoglobin. These fibers are suited for endurance and are slow to
fatigue because they use oxidative metabolism to generate ATP. Type II
fibers are white due to the absence of myoglobin and a reliance on
glycolytic enzymes. These fibers are efficient for short bursts of
speed and power and use both oxidative metabolism and anaerobic
metabolism depending on the particular sub-type. These fibers are
quicker to fatigue.




In the terms used in the article, the Type II fibers rely on anaerobic, glycolytic metabolism, whereas the Type I fibers use oxidative metabolism which, of course, requires mitochondria for the TCA cycle and oxidative phosphorylation. The Type I fibers also contain myoglobin, which promotes rapid movement of O2 through the cytosol to the mitochondrial ATP synthase.

cv.complex variables - Universality of zeta- and L-functions

Voronin's Universality Theorem (for the Riemann zeta-function) according to Wikipedia: Let $U$ be a compact subset of the "critical half-strip" $\{s\in\mathbb{C}:\frac{1}{2}<\mathrm{Re}(s)<1\}$ with connected complement. Let $f:U\rightarrow\mathbb{C}$ be continuous and non-vanishing on $U$ and holomorphic on $U^{int}$. Then $\forall\varepsilon >0$ $\exists t=t(\varepsilon)$ $\forall s\in U: |\zeta(s+it)-f(s)|<\varepsilon$.



(Q1) Is this the accurate statement of Voronin's Universality Theorem? If so, are there any (recent) generalisations of this statement with respect to, say, the shape of $U$ or conditions on $f$? (If I am not mistaken, the theorem dates back to 1975.)



(Q2) Historically, were the Riemann zeta-function and Dirichlet L-functions the first examples of functions on the complex plane with such "universality"? Are there any examples of functions (on the complex plane) with such properties beyond the theory of zeta- and L-functions?



(Q3) Is there any known general argument why such functions (on $\mathbb{C}$) "must" exist, i.e. in the sense of a non-constructive proof of existence (with the Riemann zeta-function being considered as a proof of existence by construction)?



(Q4) Is anything known about the structure of the class of functions with such universality property, say, on some given strip in the complex plane?



(Q5) Are there similar examples when dealing with $C^r$-functions from some open subset of $\mathbb{R}^n$ into $\mathbb{R}^m$?



Thanks in advance and Happy New Year!

ag.algebraic geometry - What do intermediate Jacobians do?

On a smooth complex projective variety $X$ with $\dim X=n$, we have $n$ complex tori associated to it via $J^k(X)=F^kH^{2k-1}(X,\mathbb{C})/H^{2k-1}(X,\mathbb{Z})$ (assuming I've got all the indices right), called the $k$th intermediate Jacobian.



If $k=1$, we have $J^1(X)=H^{1,0}/H_1$, and so $J^1(X)\cong Jac(X)$ is an abelian variety (the bilinear form is a polarization because it has to be definite on each piece of the Hodge decomposition (I think)) and is in fact isomorphic as PPAVs to the Jacobian of the variety.



If $k=n$, we have $H^{2n-1,1}/H_{2n-1}$, which is also a PPAV, and is in fact the Albanese of $X$.



The ones in the middle, however, the "true" intermediate Jacobians, are generally only complex tori. One example of an application is that Clemens and Griffiths proved that cubic threefolds are unirational but not rational using $J^2(X)$ for $X$ a cubic threefold.



So, what information do the intermediate Jacobians contain? I've been told that we don't really know much about that, but what is known, beyond Clemens/Griffiths?

Friday, 9 May 2008

career - How important are publications for undergrads?

I have heard vastly conflicting statements about whether undergrads applying for PhD programs should have published already, or what level of research will be expected of them. Looking at the CVs of some of my school's professors, almost none of them seem to have publications from earlier than the second half of their graduate studies, meaning they spent most of the time before getting their PhD without any publications, or the publications they had weren't worth listing, in their eyes.



Obviously, I'm going to try to get the best experience I can as an undergrad, and I hope that means getting published research, but in every area I've dipped my toe in, from probability to dynamical systems to complexity theory, the sheer amount of additional knowledge I'd need to understand even an upper-level graduate text seems intimidating.



When did you first publish, and what sort of research experience (if it's something other than publishing an article) should an undergraduate aiming for a PhD have?



Disclaimer: I'm an undergrad in CS, pretty average or maybe above average in my progress so far, and I'd like to make a career in researching some of the theoretical (and obviously math-heavy) parts of computer science, rather than software or interface.

ho.history overview - Widely accepted mathematical results that were later shown wrong?

The Busemann-Petty problem (posed in 1956) has an interesting history. It asks the following question: if $K$ and $L$ are two origin-symmetric convex bodies in $\mathbb{R}^n$ such that the volume of each central hyperplane section of $K$ is less than the volume of the corresponding section of $L$:
$$Vol_{n-1}(K\cap \xi^\perp)\le Vol_{n-1}(L\cap \xi^\perp)\qquad\text{for all } \xi\in S^{n-1},$$
does it follow that the volume of $K$ is less than the volume of $L$: $Vol_n(K)\le Vol_n(L)$?



Many mathematicians' gut reaction to the question is that the answer must be yes, and Minkowski's uniqueness theorem provides some mathematical justification for such a belief: Minkowski's uniqueness theorem implies that an origin-symmetric star body in $\mathbb{R}^n$ is completely determined by the volumes of its central hyperplane sections, so these volumes of central hyperplane sections do contain a vast amount of information about the bodies. It was widely believed that the answer to the Busemann-Petty problem must be yes, even though the conjecture remained wide open.



Nevertheless, in 1975 everyone was caught off-guard when Larman and Rogers produced a counter-example showing that the assertion is false in $n \ge 12$ dimensions. Their counter-example was quite complicated, but in 1986, Keith Ball proved that the maximum hyperplane section of the unit cube is $\sqrt{2}$ regardless of the dimension, and a consequence of this is that the centered unit cube and a centered ball of suitable radius provide a counter-example when $n \ge 10$. Some time later Giannopoulos and Bourgain (independently) gave counter-examples for $n\ge 7$, and then Papadimitrakis and Gardner (independently) gave counter-examples for $n=5,6$.
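
To see why the cube/ball pair works exactly when $n \ge 10$, here is a quick numerical check (my sketch, not part of the original history): choose the centered ball whose central sections all have $(n-1)$-volume $\sqrt{2}$, the maximum possible cube section by Ball's theorem; a counter-example appears as soon as the ball's volume drops below $1$, the cube's volume.

    # Compare the volume of the ball whose central sections all have
    # (n-1)-volume sqrt(2) against the unit cube's volume of 1.
    from math import gamma, pi, sqrt

    def unit_ball_volume(n):
        return pi ** (n / 2) / gamma(n / 2 + 1)

    for n in range(8, 13):
        # radius making every central section of the ball have volume sqrt(2)
        r = (sqrt(2) / unit_ball_volume(n - 1)) ** (1 / (n - 1))
        print(n, unit_ball_volume(n) * r ** n)  # drops below 1 exactly at n = 10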



By 1992 only the three- and four-dimensional cases of the Busemann-Petty problem remained unsolved, since the problem is trivially true in two dimensions and by that point counter-examples had been found for all $n\ge 5$. Around this time theory had been developed connecting the problem with the notion of an "intersection body". Lutwak proved that if the body with smaller sections is an intersection body then the conclusion of the Busemann-Petty problem follows. Later work by Grinberg, Rivin, Gardner, and Zhang strengthened the connection and established that the Busemann-Petty problem has an affirmative answer in $\mathbb{R}^n$ iff every origin-symmetric convex body in $\mathbb{R}^n$ is an intersection body. But the question of whether a body is an intersection body is closely related to the positivity of the inverse spherical Radon transform. In 1994, Richard Gardner used geometric methods to invert the spherical Radon transform in three dimensions in such a way as to prove that the problem has an affirmative answer in three dimensions (which was surprising, since all of the results up to that point had been negative). Then in 1994, Gaoyong Zhang published a paper (in the Annals of Mathematics) which claimed to prove that the unit cube in $\mathbb{R}^4$ is not an intersection body and, as a consequence, that the problem has a negative answer in $n=4$.



For three years everyone believed the problem had been solved, but in 1997 Alexander Koldobsky (who was working on completely different problems) provided a new Fourier analytic approach to convex bodies and in particular established a very convenient Fourier analytic characterization of intersection bodies. Using his new characterization he showed that the unit cube in $\mathbb{R}^4$ is an intersection body, contradicting Zhang's earlier claim. It turned out that Zhang's paper was incorrect, and this re-opened the Busemann-Petty problem again.



After learning that Koldobsky's results contradicted his claims, Zhang quickly proved that in fact every origin-symmetric convex body in $\mathbb{R}^4$ is an intersection body and hence that the Busemann-Petty problem has an affirmative answer in $\mathbb{R}^4$---the opposite of what he had previously claimed. This later paper was also published in the Annals, and so Zhang may be perhaps the only person to have published in such a prestigious journal both that $P$ and that $\neg P$!

vision - Does retinal detachment happen more frequently at night?

In the case of a normal, healthy person, no, I don't think so. In fact, retinal detachment can happen at any time of the day. The main causes are diseases and illnesses, such as AIDS, diabetic retinopathy, and cancer, or trauma such as post-cataract surgery and being hit/kicked hard on the eyes.

evolution - Why do plants have green leaves and not red?

There are several parts to my answer.



First, evolution has selected the current system(s) over countless generations through natural selection. Natural selection depends on differences (major or minor) in the efficiency of various solutions (fitness) in the light (ho ho!) of the current environment. Here's where the solar energy spectrum is important, as well as local environmental variables such as light absorption by water etc., as pointed out by another responder. After all that, what you have is what you have, and that turns out to be (in the case of typical green plants) chlorophylls A and B and the "light" and "dark" reactions.



Second, how does this lead to green plants that appear green? Absorption of light is something that occurs at the atomic and molecular level and usually involves the energy state of particular electrons. The electrons in certain molecules are capable of moving from one energy level to another without leaving the atom or molecule. When energy of a certain level strikes the molecule, that energy is absorbed and one or more electrons move to a higher energy level in the molecule (conservation of energy). Those electrons with higher energy usually return to the "ground state" by emitting or transferring that energy. One way the energy can be emitted is as light, in a process called fluorescence. The second law of thermodynamics (which makes it impossible to have perpetual motion machines) leads to the emission of light of lower energy and longer wavelength. (N.b. wavelength (lambda) is inversely proportional to energy; long wavelength red light has less energy per photon than does short wavelength violet (ROYGBIV as seen in your ordinary rainbow).)
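
To put rough numbers on that inverse relationship (a quick sketch of mine, using E = hc/lambda):

    # A tiny numerical illustration (my addition): photon energy E = h*c/lambda,
    # so long-wavelength red photons carry less energy than short-wavelength
    # violet ones.
    h = 6.626e-34  # Planck's constant, J*s
    c = 2.998e8    # speed of light, m/s
    for name, lam in [("violet", 400e-9), ("green", 530e-9), ("red", 700e-9)]:
        print(f"{name}: {h * c / lam:.2e} J per photon")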



Anyway, chlorophylls A and B are complex organic molecules (C, H, O, N with a splash of Mg++) with a ring structure. You will find that a lot of organic molecules that absorb light (and fluoresce as well) have a ring structure in which electrons "resonate" by moving around the ring with ease. It is the resonance of the electrons that determines the absorption spectrum of a given molecule (among other things). Consult the Wikipedia article on chlorophyll for the absorption spectra of the two chlorophylls. You will note that they absorb best at short wavelengths (blue, indigo, violet) as well as at the long wavelengths (red, orange, yellow) but not in the green. Since they don't absorb the green wavelengths, this is what is left over, and this is what your eye perceives as the color of the leaf.



Finally, what happens to the energy from the solar spectrum that has been temporarily absorbed by the electrons of chlorophyll? Since it's not part of the original question, I'll keep this short (apologies to plant physiologists out there). In the "light dependent reaction", the energetic electrons get transferred through a number of intermediate molecules to eventually "split" water into oxygen and hydrogen and generate energy-rich molecules of ATP and NADPH. The ATP and NADPH are then used to power the "light independent reaction", which takes CO2 and combines it with other molecules to create glucose. Note that this is how you get glucose (at least eventually in some form, vegan or not) to eat and oxygen to breathe.



Take a look at what happens when you artificially uncouple the chlorophylls from the transfer system that leads to glucose synthesis. http://en.wikipedia.org/wiki/Chlorophyll_fluorescence Notice the color of the fluorescence under UV light!



Alternatives? Look at photosynthetic bacteria.

Thursday, 8 May 2008

mp.mathematical physics - The Quantum Operations On The Bipartite Systems

Given two distinct and noninteracting quantum mechanical systems $\mathfrak{S}_1$ and $\mathfrak{S}_2$ with state spaces $\mathcal H_1$ and $\mathcal H_2$, respectively, the state space of the combined system $\mathcal S_1+\mathcal S_2$ is the tensor product Hilbert space $\mathcal H=\mathcal H_1\otimes\mathcal H_2$. Density operators are $W\in\mathcal D(\mathcal H)$, and effects are $F\in\mathcal E(\mathcal H)$. Similarly, there are corresponding symbols $W_i\in\mathcal D(\mathcal H_i)$, $F_i\in\mathcal E(\mathcal H_i)$ for the subsystems $\mathfrak{S}_i$ $(i=1,2)$, respectively.



Consider an arbitrary quantum operation $\Phi: \mathcal D(\mathcal H)\rightarrow \mathcal D(\mathcal H)$ of the composite system $\mathcal S_1+\mathcal S_2$.



Problem: (1) Do there exist two quantum operations $\phi_1$ and $\phi_2$, of the subsystems $\mathfrak{S}_1$ and $\mathfrak{S}_2$ respectively, such that the following diagram is commutative:



$$
\begin{diagram}
\node{\mathcal D(\mathcal H_1)} \arrow[4]{e,t}{\phi_1} \node[4]{\mathcal D(\mathcal H_1)}\\
\node{}\\
\node{\mathcal D(\mathcal H_1\otimes\mathcal H_2)}
\arrow[2]{n,l}{Tr_2} \arrow[4]{e,t}{\Phi} \arrow[2]{s,l}{Tr_1}
\node[4]{\mathcal D(\mathcal H_1\otimes\mathcal H_2)} \arrow[2]{s,r}{Tr_1} \arrow[2]{n,r}{Tr_2}
\\
\node{}\\
\node{\mathcal D(\mathcal H_2)} \arrow[4]{e,b}{\phi_2}
\node[4]{\mathcal D(\mathcal H_2)}
\end{diagram}
$$
i.e.
$$\begin{eqnarray}
Tr_2(\Phi(W))&=&\frac{tr(\Phi(W))}{tr(\phi_1(Tr_2(W)))}\phi_1(Tr_2(W)),\\
Tr_1(\Phi(W))&=&\frac{tr(\Phi(W))}{tr(\phi_2(Tr_1(W)))}\phi_2(Tr_1(W)),
\end{eqnarray}
$$
where $\phi_i: \mathcal D(\mathcal H_i)\rightarrow \mathcal D(\mathcal H_i)$ $(i=1,2)$, and $Tr_i: \mathcal D(\mathcal H)\rightarrow \mathcal D(\mathcal H_j)$ $(j\neq i)$ is the partial trace over the subsystem $\mathfrak{S}_i$ $(i=1,2)$.



(2) If such quantum operations $\phi_1$ and $\phi_2$ exist, give the relationship among the quantum operations $\Phi$, $\phi_1$ and $\phi_2$.

gr.group theory - Presentation for the double cover of A_n

Yeah, Schur did this a long time ago. Let $\tilde \Sigma_n \to \Sigma_n$ be a double cover (there are two) -- let's denote them $\tilde \Sigma_n = \Sigma_n^\epsilon$ where $\epsilon \in \{+1, -1\}$.



Schur uses the notation $[a_1 a_2 \cdots a_k]$ for a specific lift of the cycle $(a_1 a_2 \cdots a_k) \in \Sigma_n$ to $\Sigma_n^\epsilon$ -- might as well call these $k$-cycles. Then his presentation goes like this:



$$[a_1 a_2 \cdots a_k] = [a_1 a_2 \cdots a_i][a_i a_{i+1} \cdots a_k] \qquad \forall\, 1 < i < k$$



and all $k$-cycles, $k>1$.



$$[a_1 a_2 \cdots a_k]^{[b_1 b_2 \cdots b_j]} = (-1)^{j-1}[\phi(a_1) \phi(a_2) \cdots \phi(a_k)]$$



where $\phi$ is the cycle $(b_1 b_2 \cdots b_j)$.



$$[a_1 a_2 \cdots a_k]^k = \epsilon$$



for all $k$-cycles -- i.e. this is always $+1$ or $-1$, depending on which extension of $\Sigma_n$ you're interested in. And:



$$[a_1 a_2 \cdots a_k][b_1 b_2 \cdots b_j] = (-1)^{(k-1)(j-1)}[b_1 b_2 \cdots b_j][a_1 a_2 \cdots a_k]$$



provided the cycles $(a_1 a_2 \cdots a_k)$ and $(b_1 b_2 \cdots b_j)$ are disjoint.



The map $\tilde \Sigma_n \to \Sigma_n$ sends $[a_1 \cdots a_k]$ to $(a_1 \cdots a_k)$. So this gives you a corresponding presentation of the double cover of $A_n$ -- take your favourite presentation of $A_n$, lift the relators, and see what happens using the above relations.



A small extra tidbit -- think of $\Sigma_n$ as being the group of orientation-preserving isometries of $\mathbb R^{n}$ that preserve a regular $(n-1)$-simplex. Then if you lift this group to $Spin(n)$, the extension you want is the one where $[a_1 a_2 \cdots a_k]^k = -1$.

nt.number theory - Choosing a fast computer algebra system that works in characteristic p?

My personal experience is a few years old, but I don't think things have changed much. Sage is (or actually, was) more about ease of use than about performance. The only three CASs you want to consider are:



  • Singular (Macaulay 2 uses Singular's engine)

  • CoCoA.

  • Magma.

Back then the fastest of the bunch was Magma, but not by much. Regarding ease of use, it was a tie between Macaulay 2 and Magma.



And now to some criticism: I never looked at Magma's code (proprietary), but I did look at both Singular and CoCoA. None of them uses SSE/GPGPU, which could probably give you an acceleration factor of 10-100.

at.algebraic topology - Characteristic classes in generalized cohomology theories?

Not a real answer to your question, but I think it may be related. One can ask the same question for Chern classes. For some generalized cohomology theories you can define Chern classes of complex vector bundles, and these satisfy the usual axioms for Chern classes.



What changes is the tensor product behaviour. Given two line bundles $L$ and $M$, the first Chern class $c_1(L \otimes M)$ is given by a universal power series in $c_1(L)$ and $c_1(M)$. For instance, in ordinary cohomology $c_1(L \otimes M) = c_1(L) + c_1(M)$, while in K-theory $c_1(L \otimes M) = c_1(L) \cdot c_1(M)$. Since line bundles form a group under tensor product, this power series is a (1-dimensional) formal group law.



So to any such cohomology theory you can attach a 1-dimensional formal group. For instance, you attach the additive group to ordinary cohomology and the multiplicative group to K-theory. It turns out that you can go the other way round. The other 1-dimensional formal group laws are formal expansions of the group law of an elliptic curve near the origin; these give rise to the so-called elliptic cohomology theories.
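
To make the dictionary concrete (my normalization; conventions differ by units):

$$F_{\mathrm{add}}(x,y) = x + y, \qquad F_{\mathrm{mult}}(x,y) = x + y + u\,xy \quad (u \text{ a unit}),$$

the additive law corresponding to ordinary cohomology and the multiplicative law to K-theory, while an elliptic cohomology theory carries the expansion of an elliptic curve's group law near its origin.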



I think you can find details on the above constructions in any reference about elliptic cohomology. I've only heard about these theories, so I don't know a good reference. By the way, since I'm not an expert, please correct me if anything I have written above is wrong.

rt.representation theory - Definition of the symmetric algebra in arbitrary characteristic for graded vector spaces

What is the right definition of the symmetric algebra over a graded vector space V over a field k?



More generally: What is the right definition of the symmetric algebra over an object in a symmetric monoidal category (which is suitably (co-)complete)?



Two possible definitions come to my mind:



1) Take the tensor algebra over V and identify those tensors which differ only by the action of an element of the symmetric group, i.e. take the coinvariants with respect to the symmetric group. The resulting algebra A is then the universal algebra together with a map V -> A such that the product of elements of V is commutative.



2) Take the tensor algebra over V and divide out the ideal generated by antisymmetric two-tensors. In this case, the resulting algebra A is the universal algebra together with a map V -> A such that the product of A vanishes on all antisymmetric two-tensors (one could say that all commutators of A vanish).



Definition 1) looks more natural and gives, for example, the polynomial ring in case V is concentrated in degree 0.



Definition 2) applied to a vector space shifted by degree 1 gives (up to a degree shift) the exterior algebra over the unshifted vector space. However, in characteristic 2, for example, one doesn't get the polynomial ring if one starts with a vector space of degree 0.
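
A minimal instance of the characteristic 2 failure (my example): let V be one-dimensional in degree 0 with basis x. The flip map sends x⊗x to itself, and in characteristic 2 we have x⊗x = -(x⊗x), so x⊗x counts as an antisymmetric two-tensor. Definition 2) therefore divides it out and produces k[x]/(x^2) instead of the polynomial ring k[x].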



Finally, both definitions have a shortcoming in that they don't commute well with base change.

Wednesday, 7 May 2008

ac.commutative algebra - Do there exist non-PIDs in which every countably generated ideal is principal?

The question is fully settled by Hugh Thomas' answer, but let me mention this related interesting fact.



Theorem. There is a ring R and an ideal I of R such that every countable subset of I is contained in a principal subideal of I, but I is not principal.



Proof. Let I be the ideal of nonstationary subsets of ω₁, in the power set P(ω₁), which is a Boolean algebra and hence a Boolean ring. That is, I consists of those subsets of ω₁ that are disjoint from a closed unbounded subset of ω₁. It is an elementary set-theoretic fact that the intersection of any countably many closed unbounded subsets of ω₁ is still closed and unbounded, and thus the union of countably many non-stationary sets remains non-stationary. Thus, every countable subset of I is contained in a principal subideal of I. But I is not principal, since the complement of any singleton is stationary. QED



In the previous example, the ideal I is not maximal. If one assumes the existence of a measurable cardinal (a large cardinal notion), however, then the example can be made with I maximal.



Theorem. If there is a measurable cardinal, then there is a
ring R with a maximal ideal I, such that every countable
subset of I is contained in a principal sub-ideal of I, but
I is not principal.



Proof. Let κ be a measurable cardinal, which means that there is a nonprincipal κ-complete ultrafilter U on the power set P(κ), which is a Boolean algebra and thus a Boolean ring. The ideal I dual to U is also κ-complete, which means that I is closed under unions of size less than κ. In particular, since κ is uncountable, this means that the union of any countably many elements of I remains in I, and this union set generates a principal subideal of I containing the given countable set. The ideal I is maximal since U was an ultrafilter. QED



I'm not sure at the moment whether the situation of this last theorem requires a measurable cardinal or not, but I'll think about it.

rna sequencing - RNA seq and using of Poly(A) or non-Poly(A) based amplification of RNA

I'm studying "Deep sequencing the circadian and diurnal transcriptome of Drosophila brain" (Hughes et al., 2012). I've got some problems with the materials and methods.



Before RNA-seq, the authors amplify RNA. They use two kits: one poly(A)-based and one non-poly(A)-based. For the poly(A)-amplified samples they sequence to generate 100 bp paired-end reads, and for the non-poly(A) samples they sequence to generate 75 bp single-end reads.



I read in the strategy:




Total RNA was amplified, and ribosomal RNA was depleted, using a non-poly(A)-based amplification kit, which significantly diminishes the 3' bias in downstream libraries used for RNA-seq (see Methods).




1) I don't understand why they use both kits. Actually, they say that they want to assess the bias introduced by an alternative amplification method, BUT the subsequent sequencing is not the same in both cases, so there is more than one parameter that changes between the two protocols, and I don't understand how they can compare them.



2) For the poly(A)-based kit, I understand that the rRNAs are not polyadenylated, so they are removed from the amplified sample. However, don't they remove non-coding RNAs too (long or short)?



3) For the non-poly(A)-based kit, I don't understand how they remove the rRNA, but I understand that since they don't select RNAs according to their poly(A) tails, they keep the rest of the RNAs, which are not kept in the poly(A)-based amplification.



4) When I look at the RUM statistics, I see that there are fewer total aligned reads in the non-poly(A)-based samples: 70-80% (non-poly(A)) versus 80-90% (poly(A)). So it seems like we lose some information with the non-poly(A)-based method.



To summarize: why do the authors use both poly(A) and non-poly(A)? Why do they show all the rest of the results from the non-poly(A) amplification, given that it has fewer aligned reads? How can they compare, since the subsequent sequencing is not the same? And how is the rRNA removed in each technique, especially the non-poly(A) one?



I didn't find the kit protocols with exactly the same names, but here are the links to the protocols which are closest (I think).



http://www.mscience.com.au/upload/pages/nugen/nugen_ov_rna_seq-brochure.pdf



http://excilone.com/client/document/ug-arcturusE%E2%80%9E%C2%A2-riboamp%C3%82-hs-plus-amplification-kit-user-guide_38.pdf
Thank you very much,

zeta functions - Is there an analogue of the Lefschetz fixed point theorem for discrete dynamical systems?

Background/Motivation



Let $(X, f)$ be a discrete dynamical system. For now, $X$ is just a set and $f$ is just a function $f : X \to X$. Suppose that $f^n$ has a finite number of fixed points for every $n$. Then the dynamical (Artin-Mazur) zeta function $\zeta_f$ is given by



$\displaystyle \zeta_f(t) = \exp \left( \sum_{n \ge 1} \frac{\text{Fix } f^n}{n} t^n \right)$.



The coefficients of $\zeta_f(t)$ have a nice combinatorial interpretation that seems to have homological significance. A particularly famous case of this construction is that $X$ is the set of points of a variety over $\overline{\mathbb{F}_p}$ and $f$ is the Frobenius map; then $\zeta_f$ is a local zeta function, since $\text{Fix } f^n$ is precisely the number of points of the variety over $\mathbb{F}_{p^n}$.



Now give $X$ the additional structure of a compact triangulable space and let $f$ be continuous. Again suppose that $f^n$ has a finite number of fixed points for every $n$, let $i(f, x)$ denote the index of a fixed point $x$, and let $L(f)$ be the sum of the indices $i(f, x)$ over all fixed points $x$ of $f$. Thus $L(f)$ generalizes the number $\text{Fix } f$ to the case that the indices are not all equal to $1$. Similarly one defines the Lefschetz zeta function by



$\displaystyle \zeta_f(t) = \exp \left( \sum_{n \ge 1} \frac{L(f^n)}{n} t^n \right)$.



The Lefschetz fixed point theorem is then equivalent to the statement that $\zeta_f$ is equal to the alternating product of the characteristic polynomials of the induced action of $f$ on the singular homology groups $H_k(X, \mathbb{Q})$; in particular, $\zeta_f$ is rational because there are finitely many such groups. Weil famously suggested that if one could define an analogue of singular homology for varieties over finite fields, an analogue of the Lefschetz fixed point theorem would prove the Weil conjectures. This was eventually done, and is known as étale cohomology.
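
Explicitly (my rendering of the statement just made, in the notation above):

$$\zeta_f(t) = \prod_{k \ge 0} \det\left(\mathbf{I} - t\, f_*|_{H_k(X, \mathbb{Q})}\right)^{(-1)^{k+1}},$$

so the zeta function is rational, with zeroes and poles controlled by the eigenvalues of $f_*$ on homology.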



However, I'm interested in a simpler dynamical system than a variety over a finite field. Let $G$ be a finite (directed, possibly with loops) graph, let $X(G)$ be the set of aperiodic closed walks on $G$ with a distinguished vertex, and let $f : X(G) \to X(G)$ be the function which sends the distinguished vertex of an aperiodic closed walk to the next vertex in the walk. (An aperiodic closed walk is analogous to a point together with all of its Galois conjugates, and $f$ is conjugation.) Then $\text{Fix } f^n$ is precisely the number of closed walks of length $n$ on $G$. A basic result in algebraic combinatorics then tells us that $\text{Fix } f^n = \text{tr } \mathbf{A}^n$, where $\mathbf{A}$ is the adjacency matrix of $G$, and this is equivalent to the statement that



$\displaystyle \zeta_f(t) = \frac{1}{\det(\mathbf{I} - \mathbf{A}t)}$.
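
This is the familiar formal identity $\exp\left(\sum_{n\ge1} \text{tr}(\mathbf{A}^n) t^n/n\right) = \exp(-\text{tr}\log(\mathbf{I}-\mathbf{A}t)) = \det(\mathbf{I}-\mathbf{A}t)^{-1}$; here is a quick machine check on a small example graph (my sketch, using sympy):

    import sympy as sp

    t = sp.symbols('t')
    # adjacency matrix of a small digraph: a loop at vertex 1 plus edges 1<->2
    A = sp.Matrix([[1, 1], [1, 0]])
    N = 6  # compare power-series coefficients up to t^N

    lhs = sp.exp(sum((A ** n).trace() * t ** n / n for n in range(1, N + 1)))
    rhs = 1 / (sp.eye(2) - A * t).det()

    # both expansions agree up to O(t^N)
    print(sp.series(lhs, t, 0, N))
    print(sp.series(rhs, t, 0, N))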



What this suggests to me is that there is an analogue of the Lefschetz zeta function at work and that it is telling me that $X(G)$ has one nontrivial homology group on which $f$ acts as $\mathbf{A}$, but I don't know if this is a reasonable interpretation. Hence my questions!



Edit, 1/8/10: Let me give an example where I can introduce another "homology group." Let $H$ be a proper subgraph of $G$, and let $X(G, H)$ denote the set of aperiodic closed walks on $G$ with a distinguished vertex and with the property that at least one edge or vertex of the closed walk is not in $H$; $f$ is the same as above. If $\mathbf{B}$ denotes the adjacency matrix of $H$, it then follows that $\text{Fix } f^n = \text{tr} \left( \mathbf{A}^n - \mathbf{B}^n \right)$, hence



$\displaystyle \zeta_f(t) = \frac{\det(\mathbf{I} - \mathbf{B}t)}{\det(\mathbf{I} - \mathbf{A}t)}$.



Questions



  • What is a sensible definition of the (say, integral) homology of a discrete dynamical system without any further structure? What conditions on $X$ are necessary to ensure that there are only finitely many homology groups, and do they hold for $X(G)$ and/or $X(G, H)$?


  • Under what conditions does an analogue of the Lefschetz fixed point theorem hold for this homology theory, and can it be made to correctly reproduce the $X(G)$ and $X(G, H)$ computations above?


noncommutative geometry - Kontsevich, and Geometric, Quantization and the Podles sphere

There exists a large family of noncommutative spaces that arise from the quantum matrices. These algebraic objects $q$-deform the coordinate rings of certain varieties. For example, take quantum $SU(2)$: this is the algebra $\langle a,b,c,d \rangle$ quotiented by the ideal generated by
$$
ab - qba, ~~ ac - qca, ~~ bc - cb, ~~ bd - qdb, ~~ cd - qdc, ~~ ad - da - (q - q^{-1})bc,
$$
and the "$q$-det" relation
$$
ad - qbc - 1,
$$
where $q$ is some complex number. Clearly, when $q=1$ we get back the coordinate ring of $SU(2)$. In the classical case $S^2 = SU(2)/U(1)$ (the famous Hopf fibration). This generalises to the $q$-case: the $U(1)$-action generalises to a $U(1)$-coaction with an invariant subalgebra that $q$-deforms the coordinate algebra of $S^2$, giving the famous Podles sphere. There exist such $q$-matrix deformations of all flag manifolds.



Since all such manifolds are Kähler, we can also apply Kontsevich's deformation quantization to them to obtain a $q$-deformation. My question is: what is the relationship between these two approaches?



Alternatively, we can apply Kostant-Souriau geometric quantization to a flag manifold. How does the resulting algebra relate to its $q$-matrix deformation?

Tuesday, 6 May 2008

bacteriology - Are there grass or fiber eating birds?

My understanding, which may be wrong, is that cellulose/fibre has little nutritional value to many animals because it is hard to break down, thus making its consumption inefficient. However, ruminating mammals, other mammals such as black bears, and many insects such as grasshoppers digest grasses.



Are there any birds with the gut flora to break down fibre? Or even some that eat grass but digest it in another way?

lo.logic - What does it mean to 'discharge assumptions or premises'?

Apollo is correct. A slightly more technical way of putting it is that "discharging" is an application of a theorem of metalogic called the deduction theorem:



T,P|-Q iff T|-P->Q



The single turnstile symbol "|-" stands for the syntactic consequence relation. The deduction theorem basically says "Q is derivable from T and P iff 'if P then Q' is derivable from T alone". T may, of course, be an empty class of statements, in which case P->Q is tautologous.
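
As an aside of mine (not part of the original answer), proof assistants make the discharge step completely literal; here is a minimal Lean 4 sketch in which each `intro` discharges one premise:

    -- Conditional proof in Lean 4 (a sketch of mine): proving P → (Q → P).
    -- Each `intro` moves one antecedent from the goal into the local context,
    -- which is precisely the "discharge" step described above.
    example (P Q : Prop) : P → Q → P := by
      intro hp    -- suppose P
      intro _hq   -- suppose Q (unused; the underscore silences the linter)
      exact hp    -- derive P; finishing the proof discharges both suppositions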



Many systems of natural deduction introduce conditional proof as a primitive rule, but there are simpler systems that are just as powerful in which the deduction theorem is proved and conditional proof is a derived rule supported by the deduction theorem. The deduction theorem is important because it shows you don't need conditional proof as a primitive rule, and this makes the proof of other theorems in metalogic a whole lot simpler. Basically, if you have as few rules as possible it gives you fewer cases to check. For practical purposes, however, it's a whole lot easier to teach and use a system that introduces lots and lots of primitive rules as opposed to one that uses as few rules as possible.



Mathematicians use conditional proof all the time, by the way. For example, in a proof of Q by cases you get conditionals P1->Q, P2->Q, etc. by, for each case, supposing the antecedent, deriving Q from the supposition, and then "discharging" the supposition. Then you show the disjunction of the antecedents is exhaustive.

ac.commutative algebra - For which fields K is every subring of K...?

Let me put together the previous two answers (plus epsilon) to give an answer to all three questions.



Step 1: By Gilmer's theorem, a field $K$ has all its subrings Noetherian iff:



(i) It is a finite extension of $\mathbb{Q}$, or
(ii) It is an algebraic extension of $\mathbb{F}_p$ or a finite extension of $\mathbb{F}_p(t)$.



Step 2: Suppose $K$ is a number field which is not $\mathbb{Q}$. We may write $K = \mathbb{Q}[\alpha]$ for some algebraic integer $\alpha$. Then $R = \mathbb{Z}[2\alpha]$ is a non-integrally closed subring of $K$, so is not a Dedekind domain. So the only field of characteristic $0$ which has every subring a Dedekind domain is $\mathbb{Q}$, in which case (by the previous question) every subring is a PID.
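
For a concrete instance of Step 2 (my example, not in the original answer): take $K = \mathbb{Q}(i)$ and $\alpha = i$, so that $R = \mathbb{Z}[2i] = \mathbb{Z} + 2i\mathbb{Z}$. Then

$$i \in K = \mathrm{Frac}(R), \qquad i^2 + 1 = 0, \qquad i \notin \mathbb{Z} + 2i\mathbb{Z},$$

so $i$ is integral over $R$ but does not lie in $R$; hence $R$ is not integrally closed, and in particular not a Dedekind domain.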



Step 3: Suppose $K$ has characteristic $p > 0$. If $K$ is algebraic over $\mathbb{F}_p$, then every subring is a field, hence also Dedekind and a PID. If $K$ is a finite extension of $\mathbb{F}_p(t)$ then it admits a subring of the form $\mathbb{F}_p[t^2,t^3]$, which is not integrally closed.



So the fields for which every subring is a Dedekind ring are $\mathbb{Q}$ and the algebraic extensions of $\mathbb{F}_p$. For all such fields, every subring is in fact a PID.