Monday 30 June 2008

co.combinatorics - Probability of n k-sided dice showing exactly m different faces

I found the following closed form solution for the abovementioned problem:



$$\frac{1}{k^n}\cdot\frac{k!}{(k-m)!}\cdot\left\{{n\atop m}\right\}$$ with $\left\{{n\atop m}\right\}$ being the Stirling number of the second kind.



Although it has some intuitive appeal and works for a sample problem for which I have the solution, this closed form is not from a trusted source. Unfortunately I can't find any other source.



My question: could anyone confirm this closed-form solution and/or give me a hint on where to find a citable source?
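As a quick sanity check (a minimal sketch of my own, not from any source), the formula can be compared against a brute-force enumeration of all $k^n$ outcomes for small cases, using the standard recurrence for the Stirling numbers of the second kind:

```python
# Check P(exactly m faces) = k!/(k-m)! * S(n,m) / k^n by brute force (small n, k).
from itertools import product
from math import factorial

def stirling2(n, m):
    """Stirling number of the second kind via the standard recurrence."""
    if n == m:
        return 1
    if m == 0 or m > n:
        return 0
    return m * stirling2(n - 1, m) + stirling2(n - 1, m - 1)

def prob_formula(n, k, m):
    return factorial(k) // factorial(k - m) * stirling2(n, m) / k**n

def prob_bruteforce(n, k, m):
    hits = sum(1 for roll in product(range(k), repeat=n) if len(set(roll)) == m)
    return hits / k**n

for n, k, m in [(3, 6, 2), (4, 6, 3), (5, 4, 4)]:
    assert abs(prob_formula(n, k, m) - prob_bruteforce(n, k, m)) < 1e-12
```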

dose - What is one Botulinum toxin medical unit?

1 unit is approximately 10 pg



Source




One unit of MYOBLOC (botulinum toxin type b) corresponds to the calculated median lethal intraperitoneal dose (LD50) in mice. The method for performing the assay is specific to Solstice Neurosciences' manufacture of MYOBLOC (botulinum toxin type b) . Due to differences in specific details such as the vehicle, dilution scheme and laboratory protocols for various mouse LD50 assays, units of biological activity of MYOBLOC (botulinum toxin type b) cannot be compared to or converted into units of any other botulinum toxin or any toxin assessed with any other specific assay method. Therefore, differences in species sensitivities to different botulinum neurotoxin serotypes preclude extrapolation of animal dose-activity relationships to human dose estimates. The specific activity of MYOBLOC (botulinum toxin type b) ranges between 70 to 130 Units/ng.




UPDATE



10 pg of toxin (150 kDa) is roughly 4 × 10⁷ molecules (10⁻¹¹ g ÷ 1.5 × 10⁵ g/mol × 6.02 × 10²³ mol⁻¹)
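As a back-of-the-envelope check (my own sketch, not part of the quoted source), the molecule count follows from N = mass ÷ molar mass × Avogadro's number:

```python
# Number of 150 kDa toxin molecules in 10 pg.
AVOGADRO = 6.022e23           # molecules per mole
mass_g = 10e-12               # 10 pg in grams
molar_mass_g_per_mol = 150e3  # 150 kDa

molecules = mass_g / molar_mass_g_per_mol * AVOGADRO
print(f"{molecules:.1e}")     # ~4.0e+07
```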

Sunday 29 June 2008

pharmacology - How long does it take for the Opioids listed in the Description to induce Analgesia when Administered via IV?

These are 'peak' values when administered via IV, with the exception of oxycodone. There will be some variation person to person depending on their opioid tolerance and any concomitantly administered medications.



Buprenorphine: 60m



Butorphanol: 5m



Fentanyl: 5m



Hydromorphone: 30-90m



Methadone: 1-2h



Morphine: 20m



Oxycodone: Not administered intravenously



Pentazocine: 15m



Sufentanil: 3m



Tramadol: 2h

What happens to potassium after an action potential?

During repolarization, relatively few ions need to cross the membrane for the membrane voltage to change, and therefore the change in ion concentrations outside and inside the cell is negligible. After repolarization, the concentrations are restored by the continuous action of the Na⁺/K⁺-ATPase. The same happens for calcium, but I don't know exactly what kind of pump is used.

coding theory - Binary codes with large distance

No. If we take $\{-1/\sqrt n,1/\sqrt n\}^n$ instead of $\{0,1\}^n$, the problem reduces to asking if we can have many unit vectors $v_j$ with pairwise scalar products $-\gamma$ or less where $\gamma>0$ is a fixed number. But if we have $N$ such vectors, then the square of the norm of their sum is at most $N-\gamma N(N-1)$. Since this square must be non-negative, we get $N-1\le\gamma^{-1}$ regardless of the dimension.
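To illustrate the bound (this example is mine, not part of the original answer), the regular simplex shows it is sharp: for $\gamma=1/3$ one gets exactly $N=1+1/\gamma=4$ unit vectors in $\mathbb R^3$.

```python
# Four unit vectors with pairwise inner product -1/3; N = 4 = 1 + 1/gamma.
import itertools
import math

vecs = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
vecs = [tuple(x / math.sqrt(3) for x in v) for v in vecs]  # normalize to unit length

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for u, v in itertools.combinations(vecs, 2):
    assert abs(dot(u, v) + 1/3) < 1e-12      # pairwise scalar product is -1/3

s = [sum(col) for col in zip(*vecs)]
print(dot(s, s))  # ||sum||^2 = 0 = N - gamma*N*(N-1) with N = 4, gamma = 1/3
```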

Saturday 28 June 2008

mg.metric geometry - How to define a Voronoi reduced basis?

Let $\Lambda$ be an $n$-dimensional lattice with basis $b_1,\ldots,b_n$. The problem of finding a "good" basis for $\Lambda$, or reducing a "bad" basis into a good one, is a very active area of research. Most basis reduction schemes try to optimize the norms of the basis vectors and their inner products. The goal is to have a basis that is as nearly orthogonal as possible. A more ambitious goal might be for the basis to make the specification of the Voronoi region (i.e. its face vectors) as simple as possible. The expression "Voronoi reduction" is taken from a publication by Conway and Sloane (Proc. Royal Soc. London A, 436 (1992), 55-68) where it is applied to lattices in dimensions $n\le 3$.




What exactly should be the definition of a Voronoi reduced basis?




Central to the construction of the Voronoi region of $\Lambda$ are the $2^n$ cosets of $2\Lambda$. In particular, for any $x\in \Lambda$ one is interested in the set of minimal norm elements in $x+2\Lambda$. Call this set $S(\Lambda,x)$. One definition of a Voronoi reduced basis might therefore be the following:



A basis $b_1,\ldots,b_n$ for $\Lambda$ is Voronoi reduced if, for any $x\in \Lambda$ and any $y\in S(\Lambda,x)$, the integers $y_1,\ldots,y_n$ in the expansion $y=\sum_{i=1}^n y_i b_i$ always satisfy the bound $|y_i|\le c_n$.




If this is a good definition, then what is the best constant $c_n$?




The Conway-Sloane paper shows that $c_n=1$ for $n\le 3$. In other words, in dimension three and lower there always exists a basis such that the minimal norm vectors of the $2\Lambda$ cosets can be expressed as sums with coefficients limited to $-1, 0, 1$. How fast does $c_n$ grow with $n$? Does it grow at all?

ag.algebraic geometry - Is the scalar extension functor for Chow motives conservative?

With rational coefficients, the answer is yes.



The first case to understand is when $E$ is a finite algebraic extension of $F$.
In the case when moreover $E$ is purely inseparable, the extension of scalars
functor $CHM(F)\to CHM(E)$ is fully faithful, and if $E$ is Galois of degree $d$, then the extension of scalars functor
$$\pi^\star:CHM(F)\to CHM(E)$$
has a right adjoint $\pi_\star$, and for any motive $M$ over $F$, there is a trace map
$$\mathrm{tr}_M : \pi_\star \pi^\star(M)\to M$$
whose composition with the unit map
$$M\to \pi_\star \pi^\star(M)$$
is multiplication by $d$. If you work with rational coefficients, this implies that $\pi^\star$ is then conservative and faithful.



From there, to prove the general case, we may assume that
$E$ is a filtered colimit of smooth $F$-algebras $A_i$.
But then, for any index $i$, possibly after taking a finite extension of $F$,
the map $F\to A_i$ has a retraction, so that, writing
$CHM(E)$ as the $2$-colimit of the categories $CHM(A_i)$, we see easily that the extension of scalars functor is again faithful and conservative (for Chow motives over a smooth $F$-algebra, see for instance Definition 5.16 in Levine's paper arXiv:0807.2265).



If you really want Chow motives with integral coefficients, you may still have to invert the (exponential) characteristic of $F$. Then, assuming furthermore that $F$ and $E$ are algebraically closed, the extension of scalars functor will be conservative again (this uses rigidity theorems; see O. Röndigs and P. A. Østvær, Rigidity in motivic homotopy theory, Math. Ann. 341 (2008), 651-675).

Friday 27 June 2008

set theory - How to define tuples?

I think the truth is that nobody cares. I mean, you care about such matters a little bit while learning how set theory can be used as a foundation for mathematics, but it soon ceases to be of any importance. In practice, the one important thing about n-tuples is the relation between the n-tuple and its components, i.e., the fact that two n-tuples are the same if and only if they have the same components in the same order.



If you don't learn to stop worrying about such minutiae, you will have plenty more troubles as you learn about number systems. What is the number 3, really? It could be the ordinal {0,1,2} (i.e., {∅,{∅},{∅,{∅}}}), or it could be the integer 3 represented as an equivalence class {(m,n):m=n+3} of ordered pairs of ordinals, or it could be the rational number 3 represented as an equivalence class {(p,q):p=3q, q≠0} of ordered pairs of integers, or it could be the real number 3 represented by whatever your method of defining the real numbers happen to be, or it could even be the complex number represented as a pair of real numbers (3,0) … I hope you get my drift. Every time you expand the number system, and often when you generalize some notion or other, the new contains an isomorphic copy of the old and nobody cares to distinguish between copies.



This practice of identification has its dangers, of course, so it's good that you worry about such things a bit while learning, but expect such matters to recede into the background in order to make room for more important things.



(For what it's worth, I think the method in your second paragraph is good, but having two kinds of ordered pairs should soon stop bothering you.)

dg.differential geometry - Is there an easy way to describe the sheaf of smooth functions on a product manifold?

Let $Mfd$ be the category of smooth manifolds (over $\mathbb{R}$ or $\mathbb{C}$) and $LRS$ be the category of locally ringed spaces (over $\mathbb{R}$ or $\mathbb{C}$). Then the functor $Mfd \to LRS$ is full and faithful and indeed, you may define $Mfd$ to be the full subcategory of $LRS$ consisting of those locally ringed spaces which are locally isomorphic to $\mathbb{R}^n$ together with the sheaf of smooth functions. Unfortunately, the functor $Mfd \to LRS$ does not preserve products; see below.



It should be remarked that products in $LRS$ do exist (even infinite ones): take the obvious product in $RS$ and "make it local" by introducing new points, namely prime ideals in the stalks, and take the localizations of the stalks as the new stalks. Now if we take the product of two manifolds $M,N$ in $LRS$, we get as a topological space the usual product $M \times N$; however, the structure sheaf consists only of those functions which are locally of the form $(u,v) \mapsto f(u) g(v)$ for smooth functions $f,g$ defined locally on $M,N$, or sums and also quotients of such functions. These functions are also smooth with respect to the usual smooth structure on $M \times N$, but the reverse is not true: for example, it seems to be true that $\mathbb{R}^2 \to \mathbb{R}, (u,v) \mapsto \exp(uv)$ is not such a function (however, I don't know how to prove this).



However, I think that Stone-Weierstraß implies that these simple functions are dense within all smooth functions. Thus, we may regard $M \times N$ in $Mfd$ as the "completion" of $M \times N$ in $LRS$.

human biology - Why do ion concentrations change with different secretion rates in pancreatic juice?

An interesting question!



I am attaching the picture from Boron & Boulpaep, Medical Physiology, 2nd Ed, p921 here to clarify the details left out in the hand drawn picture.



[Figure from Boron & Boulpaep, Medical Physiology, 2nd Ed., p. 921]



The flow of the digestive pancreatic enzymes increases mainly during food consumption. This is brought about by a number of redundant systems. The fasting secretion of pancreatic enzymes depends on the MMC (migrating motor complex).



[Figure: secretion rates during a 24-hour period]



Bicarbonate ions are exchanged with chloride ions in the apical membrane of the acinar cells through a Chloride-Bicarbonate Exchanger. This process is driven by:



  1. Secretin - most important humoral factor - Increases secretion

  2. ACh through muscarinic receptors - Increases secretion

  3. GRP (Gastrin Releasing Peptide) - Increases secretion

  4. Substance P - Decreases secretion

The mechanism of action of secretin seems to be through activation of cAMP. However, even small amounts of secretin which do not raise the cAMP levels measurably can stimulate bicarbonate secretion. This suggests the secretin response may be due to:



  1. An unmeasurably small increase in cAMP

  2. A rise in cAMP levels in localised compartments

  3. Activation of alternative second messenger pathways

In the case of GRP, the second messenger is not known.



The answer to your question is that when secretin is secreted, the Bicarbonate-Chloride transporter is accelerated to achieve the result displayed. This mainly happens when food enters the small intestine, so as to neutralize the acid and permit the digestive enzymes to function at their optimal pH.



The Chloride-Bicarbonate Transporter is an exchange transporter, which explains why there is a sharp fall in the chloride levels.



[Figure: mechanism of bicarbonate ion secretion]

Thursday 26 June 2008

rt.representation theory - Embedding group algebra $F[S_m \times S_n]$ into a group algebra $F[S_{m+n}]$

Here's a question I've been thinking about; it's a curiosity that I don't know how to answer. There could be a simple counterexample, or it could be true and I don't know how difficult it would be to prove.



If we fix $m$, is it always possible to find a sufficiently large $n$ satisfying the conditions of the following question: [Note: My original question was to determine whether it is true for arbitrary $m,n$ which was answered below negatively; I have edited it to make the question more interesting].



Define $\phi: S_m \rightarrow S_{m+n}$ to be the canonical embedding, with induced map $\phi^{*} : F[S_m] \rightarrow F[S_{m+n}]$, and similarly an embedding $\theta: S_{n} \rightarrow S_{m+n}$ with its induced map $\theta^{*}$, such that $\phi(S_{m}) \times \theta(S_{n})$ is a direct product.
Given an element $x \in \phi^{*}(F[S_m])$, $x \neq 0$, is it necessary that there exists an element $x' \in F[S_{m+n}]$ so that the product satisfies $xx' \in \theta^{*}(F[S_n])$, $xx' \neq 0$? It seemed true in the cases that I have tried, but they are quite small so I'm not certain if this is true.



Making the assumption $\mathrm{char}\, F = 0$ would make it easier I'm sure, but even in this case I can't prove it.

cell biology - How does ubiquitin recognize misfolded proteins?

Ubiquitin and the UPS



The ubiquitin proteasomal system (UPS) is a common method of regulation for many proteins in the cell. The modification is actually generated by E3 ubiquitin ligases, of which there are ~400-800 in the mammalian cell and which are the main source of specificity in the UPS (there are ~40 E2s and only 1 or 2 E1s).



Ubiquitin gets its name from ubiquitous, meaning found everywhere. It's a 76 amino acid molecule that modifies proteins at lysine residues. Monoubiquitination can act as a signal whereas polyubiquitination leads to degradation. Ubiquitin is first activated via an ATP-dependent pathway to create a high energy thioester with E1. This is transthiolated to E2 and finally E3 allows the addition to the target protein. So really, your question is how E3 ubiquitin ligase recognises its target. [1]



E3 ligases



There are 3 classes of E3 ligases, according to whether they contain a RING, U-box or HECT domain. These domains are responsible for either directly catalysing the addition of ubiquitin or simply facilitating another E3 ligase to do so. RING and RING-like domains are by far the most common (<600). Exactly how these domains mediate the transfer of ubiquitin is not clear, but it can be via an E3-Ub intermediate or directly from E2 to the target. The domains are relatively standardised, due to the fact that they bind to E2 proteins, which contain less diversity than target proteins. However, it's the combination of the E2 and substrate binding domains that generates the selection. [2]



Target recognition



So how does the target come into all of this? Each E3 ligase recognises a specific sequence on the target protein due to its own conformation and signal sequence creating a unique interface full of intermolecular interactions, just like any protein recognises any other protein. However, although individual cases vary, the tight control primarily comes from post-translational modifications on the target. Phosphorylation in particular is common in signalling and may allow E3 to bind, inducing degradation of the target. I'll use some examples to demonstrate this point, but note that every case will be different.



  • c-Cbl (E3) recognises activated receptor tyrosine kinases and ZAP 70/Syk kinases (target) by binding a phosphotyrosine sequence through its tyrosine kinase binding domain (containing SH2 domain). The phosphorylation is controlled by other cellular signals

  • Glycosylation

  • Proline hydroxylation

  • Sumoylation

Other layers of regulation of the process that are linked but perhaps outside the scope of your question include E3 post-translational modifications (e.g. autoubiquitination), sequestration of E3 via binding partners (Cand1 and Roc1), pseudosubstrates and substrate competition. [7]



Misfolded protein degradation



Making protein is a large investment for the cell and so there are several protective measures to prevent misfolding/aggregation. Whilst there are many different types, chaperones generally work by isolating the misfolded protein and undergoing a series of binding-release steps, triggered by hydrolysis of ATP. The protein can still be released unfolded, and the cell will continually try to isolate it for refolding. However, certain chaperones form a complex with the proteasome, and can recruit other UPS components. The choice to give up on refolding appears to be a stochastic process, and will depend on the concentration of the appropriate chaperones. Misfolded proteins tend to have significant regions of exposed hydrophobic residues, and E3 ligases are therefore likely to recognise them less specifically than in the targeted degradation pathway described above.



Relevant sources include endoplasmic reticulum associated degradation and chaperones.



Summary



The answer to your question is that there is no standard recognition sequence in E3 that tells it which protein to target; the binding needs to be diverse in order to give the control the cell needs. That means it could be anything, but each E3 is likely to be relatively specific for its target, and is often controlled by other post-translational modifications on the substrate. Misfolded proteins can be recognised and degraded due to the association of the UPS with chaperones.



If you're really interested, you can download this file and open it in something like PyMol to actually see the ligase ready to transfer ubiquitin.

Wednesday 25 June 2008

microbiology - What are the characteristic structures of bacillus M. tuberculosis and what do they cause?

I answered this question:




In most forms of the disease, the bacillus M. tuberculosis spreads slowly and widely
in the lungs, causing the formation of hard nodules (tubercles) in the
respiratory tissues and forming cavities in the lungs.




and got zero points.



What is the correct answer to the question?

ag.algebraic geometry - Proof of a Theorem in the paper "Construction of bundles on P^n" by Horrocks

That is a pretty terse proof! Let me give an outline of a proof that I know. First, one could deduce the statement from a more general one:



Theorem 1: Let $R$ be a regular local ring, $E$ be a reflexive $R$-module locally free on $U_R$, the punctured spectrum such that $E$ has no free direct summand. Then one can find a free module $T$ and a filtration:



$$E\oplus T = F_0 \supseteq F_1 \supseteq \cdots \supseteq F_N =0$$



with $F_i/F_{i+1}$ a syzygy of $k=R/m$.



Why does this local statement imply what you want?



Let $A=k[x_0,\cdots, x_n]$, $m=(x_0,\cdots,x_n)$, $X=\mathrm{Proj}(A)=\mathbb P^n$, $R=A_m$. There is a natural functor from the category of vector bundles on $X$ to that of vector bundles on $U_R$, which is the same as the category of reflexive $R$-modules which are locally free on $U_R$. This is used by Horrocks all the time and is explained in Section 9 of his paper: "Vector bundles on punctured spectrum of a regular local ring".



A proof of Theorem 1 can be found in Chapter 5 (Theorem 5.2) of the book "Syzygies" by Evans-Griffith. A brief outline in case you can't find the book:



As suggested in the paper you quoted, one starts with a minimal resolution of $E^*$. Then dualizing gives a complex (remember that $E^{**} \cong E$ as $E$ is reflexive):



$0 \to E \to L_0 \to L_1 \to \cdots$



whose cohomologies are $Ext^i(E^*,R)$. Let $i>0$ be the smallest number such that $X=Ext^i(E^*,R) \neq 0$. Break the complex into the exact sequences:



$0\to E \to L_0 \to L_1 \to \cdots \to L_i \to N \to 0 \qquad (*)$



and $0 \to X \to N \to N/X \to 0$. Now build free resolutions for $X$ and $N/X$ and map them onto $(*)$ as in the Horseshoe Lemma; stopping at the spot $E$, one gets a s.e.s.:



$0 \to B \to E\oplus T \to C \to 0$
where $T$ is free and $C$ is a syzygy of $X = Ext^i(E^*,R)$. Repeat if necessary and you have a filtration whose quotients are syzygies of various $Ext^i(E^*,R)$. But each of these $Ext$ modules has finite length (as $E$ is locally free on $U_R$), so they can be filtered by copies of $k$. Now use the same trick to build a finer filtration whose quotients are syzygies of $k$. Since $R$ is regular, the resolution of $k$ is the Koszul complex, answering your second question.

Tuesday 24 June 2008

entomology - What in the world is this critter?

We spotted this thing crawling around a (thankfully empty) bowl in our apartment in Barcelona, Spain. I have never seen anything like it. It appears to be some kind of caterpillar-like insect, but it is inside a flat, leaf-like shell. It can only move by sticking its head out a hole in the tip of the shell and it has to pull itself by its front legs (no legs under the shell). But it can actually pop out of either end of the shell! When it's threatened, it just hides inside the shell completely.



I'm completely flummoxed as to what this thing is. Does anyone have any clue?



[Images of the insect were attached to the original post]











soft question - Most interesting mathematics mistake?

In chapter 3 of What Is Mathematics, Really? (pages 43-45), Prof. Hersh writes:




How is it possible that mistakes occur in mathematics?



René Descartes's Method was so clear, he said, a mistake could only happen by inadvertence. Yet, ... his Géométrie contains conceptual mistakes about three-dimensional space.



Henri Poincaré said it was strange that mistakes happen in mathematics, since mathematics is just sound reasoning, such as anyone in his right mind follows. His explanation was memory lapse—there are only so many things we can keep in mind at once.



Wittgenstein said that mathematics could be characterized as the subject where it's possible to make mistakes. (Actually, it's not just possible, it's inevitable.) The very notion of a mistake presupposes that there is right and wrong independent of what we think, which is what makes mathematics mathematics. We mathematicians make mistakes, even important ones, even in famous papers that have been around for years.



Philip J. Davis displays an imposing collection of errors, with some famous names. His article shows that mistakes aren't uncommon. It shows that mathematical knowledge is fallible, like other knowledge.



...



Some mistakes come from keeping old assumptions in a new context.



Infinite-dimensional space is just like finite-dimensional space—except for one or two properties, which are entirely different.



...



Riemann stated and used what he called "Dirichlet's principle" incorrectly [when trying to prove his mapping theorem].



Julius König and David Hilbert each thought he had proven the continuum hypothesis. (Decades later, it was proved undecidable by Kurt Gödel and Paul Cohen.)



Sometimes mathematicians try to give a complete classification of an object of interest. It's a mistake to claim a complete classification while leaving out several cases. That's what happened, first to Descartes, then to Newton, in their attempts to classify cubic curves (Boyer). [cf. this annotation by Peter Shor.]



Is a gap in a proof a mistake? Newton found the speed of a falling stone by dividing 0/0. Berkeley called him to account for bad algebra, but admitted Newton had the right answer... Mistake or not?



...



"The mistakes of a great mathematician are worth more than the correctness of a mediocrity." I've heard those words more than once. Explicating this thought would tell something about the nature of mathematics. For most academic philosopher of mathematics, this remark has nothing to do with mathematics or the philosophy of mathematics. Mathematics for them is indubitable—rigorous deductions from premises. If you made a mistake, your deduction wasn't rigorous, By definition, then, it wasn't mathematics!



So the brilliant, fruitful mistakes of Newton, Euler, and Riemann, weren't mathematics, and needn't be considered by the philosopher of mathematics.



Riemann's incorrect statement of Dirichlet's principle was corrected, implemented, and flowered into the calculus of variations. On the other hand, thousands of correct theorems are published every week. Most lead nowhere.



A famous oversight of Euclid and his students (don't call it a mistake) was neglecting the relation of "between-ness" of points on a line. This relation was used implicitly by Euclid in 300 B.C. It was recognized explicitly by Moritz Pasch over 2,000 years later, in 1882. For two millennia, mathematicians and philosophers accepted reasoning that they later rejected.



Can we be sure that we, unlike our predecessors, are not overlooking big gaps? We can't. Our mathematics can't be certain.




The reference to the said article by Philip J. Davis is:



Fidelity in mathematical discourse: Is one and one really two? Amer. Math. Monthly 79 (1972), 252–263.



From that article, I find particularly amusing the following couple of paragraphs from page 262:




There is a book entitled Erreurs de Mathématiciens, published by Maurice Lecat in 1935 in Brussels. This book contains more than 130 pages of errors committed by mathematicians of the first and second rank from antiquity to about 1900. There are parallel columns listing the mathematician, the place where his error occurs, the man who discovers the error, and the place where the error is discussed. For example, J. J. Sylvester committed an error in "On the Relation between the Minor Determinant of Linearly Equivalent Quadratic Factors", Philos. Mag., (1851) pp. 295-305. This error was corrected by H. E. Baker in the Collected Papers of Sylvester, Vol. I, pp. 647-650.



...



A mathematical error of international significance may occur every twenty years or so. By this I mean the conjunction of a mathematician of great reputation and a problem of great notoriety. Such a conjunction occurred around 1945 when H. Rademacher thought he had solved the Riemann Hypothesis. There was a report in Time magazine.


ag.algebraic geometry - A historical question: Hurwitz, Luroth, Clebsch, and the connectedness of $\mathcal{M}_g$

The connectedness of the moduli space $\mathcal{M}_g$ of complex algebraic curves of genus $g$ can be proven by showing that it is dominated by a Hurwitz space of simply branched $d$-fold covers of the line, which in turn can be shown to be connected by proving the transitivity of the natural action of the braid group on $n$-tuples of transpositions in $S_d$ with product 1 which generate $S_d$: in this action, a generator $\sigma_i$ of the braid group acts as



$$(g_1, \ldots, g_n) \to (g_1, \ldots, g_{i-1}, g_{i+1}, g_i^{g_{i+1}}, g_{i+2}, \ldots, g_n)$$



This argument is often referred to as "a theorem of Clebsch (1872 or 1873), Luroth (1871), and Hurwitz (1891)." Does anyone know the history of this argument more precisely, and in particular which parts are due to Luroth, which to Clebsch, and which to Hurwitz?
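As an illustration (my own sketch, not part of the question), the braid action in the displayed formula is easy to experiment with on tuples of permutations; the check at the end verifies that the product of the tuple is preserved, which is why the action restricts to tuples with product 1.

```python
# Braid generator sigma_i acting on a tuple of permutations, with g^h = h^{-1} g h.
# Permutations are dicts on {0, ..., d-1}.
def compose(g, h):
    """Return g∘h (apply h first, then g)."""
    return {x: g[h[x]] for x in h}

def inverse(g):
    return {v: k for k, v in g.items()}

def conjugate(g, h):
    """g^h = h^{-1} g h."""
    return compose(inverse(h), compose(g, h))

def braid_act(i, tup):
    """sigma_i: (..., g_i, g_{i+1}, ...) -> (..., g_{i+1}, g_i^{g_{i+1}}, ...)."""
    t = list(tup)
    t[i], t[i + 1] = t[i + 1], conjugate(t[i], t[i + 1])
    return tuple(t)

def product(tup):
    """Left-to-right product g_1 g_2 ... g_n."""
    p = {x: x for x in tup[0]}
    for g in tup:
        p = compose(p, g)
    return p

# Example in S_3: transpositions (0 1), (0 1), (1 2), (1 2) have product the
# identity, and the product is unchanged after acting by sigma_1.
t01 = {0: 1, 1: 0, 2: 2}
t12 = {0: 0, 1: 2, 2: 1}
tup = (t01, t01, t12, t12)
assert product(braid_act(1, tup)) == product(tup)
```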

Monday 23 June 2008

terminology - What do you call the product of a circle and an annulus?

That corresponds to the complement of a trivial (but essential) torus knot in an open solid torus. As Fico had mentioned, these are called cable spaces and have a nice foliation into circles. This one is called CS(1,0); can you see what CS(2,1) is?






You could also say it is the trivial I-bundle over the torus.

Sunday 22 June 2008

model theory - Actions of finite permutation groups on hereditarily finite sets.

Well, since there have been no answers in a month, let me at least point out the easy fact that if M is finite, then every set in HF(M),
and indeed, every set in V(M), is imaginary over M.



(I assume here that M is taken as urelements in the
definition of V(M), as I mentioned in my comments to the
question above, since otherwise there are problems with the
action of G on V(M) and even HF(M) being well-defined.)



Theorem. If M is finite, then every object in HF(M),
and indeed every object in V(M), is an imaginary element.



Proof: Since M is finite, we may take S=M. If π fixes every element of M, then it is easy to see by transfinite induction that the action of π on V(M) is the identity. Namely, if π fixes every element of V_α(M), then it clearly also fixes every element of V_{α+1}(M). And so it fixes every element of V(M), including HF(M)=V_ω(M). QED



OK. What this answer really shows is that the question is not about the imaginaries over M, but rather about gaining a greater understanding of the action of G on V(M). Perhaps it would be helpful to define the parameter-free version of imaginary, where we might say that X in V(M) is pure imaginary over M if whenever π is a permutation of M, then π(X)=X, under the induced action of π on V(M). For example, the set M itself has this property, as does the power set P(M), the sets {M} and {∅,M}, and so on. In addition, any set whose transitive closure includes no urelements from M will be pure imaginary. The question would be to characterize the pure-imaginary sets over M.



This question shares many similarities with the various
forcing arguments showing the consistency of the negation
of the Axiom of Choice. Specifically, in the pre-forcing
days, set theorists built what are called the symmetric
models of set theory, by taking an infinite set of
urelements M and restricting to the elements of V(M) having
finite support. One can show that this is a model of
ZF-with-urelements having no wellordering of M. The forcing
proofs of the consistency of not-AC have exactly the same
flavor, where one adds an infinite set of mutually generic
Cohen reals, and then considers the sets that have names
with finite support over this set. This is precisely how
Cohen produced a model of ZF+not-AC, without urelements.



So one of the good reasons to study the imaginary elements over a set M is that they form a model of the set theory ZF-with-urelements. When M is infinite, however, then there can be no linear order of M in the pure imaginaries, since swapping elements outside the support of this set will not fix the order. In particular, M will not be well-orderable in this model of set theory, and so AC will not hold. For finite M, of course, there are linear orders of M having support M, and one can show that V(M) satisfies ZFC-with-urelements. But if one considers only the collection of pure imaginaries, as I defined them above, then one will not even get ZF-with-urelements, unless M has only one element, since one will lose the Comprehension (subset) axiom when there are parameters from M. For example, no proper nonempty subset of M can be pure imaginary. From this perspective, the pure imaginary sets are not so nice as the imaginary sets.

evolution - Population genetics and the fitness probability distribution. Why is the arithmetic mean all we need?

When recording change in allele frequency in diploid, bi-allelic, infinite and panmixic population we usually use this kind of equation:



$\delta_p = \frac{p q \left( p (w_{11} - w_{12}) + q (w_{12} - w_{22})\right)}{\bar{w}}$



$\bar{w} = p^2 w_{11} + 2 p q\, w_{12} + q^2 w_{22}$



$\delta_p$ = change of $p$ (the frequency of one of the alleles) from one time step to the next



$w_{11}$ is the mean fitness of individuals of genotype 11. $p$ and $q$ are the allele frequencies.



The only summary of the fitness distribution that appears is the arithmetic mean. Why don't we include other indicators of the probability distribution of fitness, such as the skew, the standard deviation, or the median? Could you argue why we don't need to care about the probability distribution of fitness of individuals with genotype 11 (for example)? In other words, why is the mean fitness ($w_{11}$) a sufficient statistic?
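For concreteness, here is a minimal sketch of my own (not part of the question) of the recursion above; note that the only fitness inputs are the three genotype means $w_{11}, w_{12}, w_{22}$.

```python
# Iterate the deterministic allele-frequency recursion using only mean fitnesses.
def delta_p(p, w11, w12, w22):
    q = 1.0 - p
    w_bar = p**2 * w11 + 2 * p * q * w12 + q**2 * w22   # mean population fitness
    return p * q * (p * (w11 - w12) + q * (w12 - w22)) / w_bar

p = 0.1
for _ in range(50):
    p += delta_p(p, w11=1.0, w12=0.9, w22=0.8)   # directional selection for allele 1
print(round(p, 3))   # p rises toward fixation
```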



I wouldn't be able to answer if one asks me:



  1. Why don't you take the median instead of the arithmetic mean?


  2. Why don't you care about the variance, the skew (or any other moment) of your distribution?


  3. What if the traits were not continuous but discrete (sex is a discrete trait, for example)?


Saturday 21 June 2008

nt.number theory - A result on prime numbers

Disclaimer: I am no specialist in Analytic Number Theory, nor did I read the whole paper under the link. I just looked into the end of the argument, and there is a limit computation (10) there.



From what I know from Analysis, this computation is clearly wrong, not in the sense that the answer is necessarily wrong, but in the sense that the premises do not justify the conclusion. The author attempts to compute the lower limit of the product $$\liminf_{n\to\infty}\left(\frac{p_n}{\log p_n}\log\frac{p_{n+1}}{p_n}\right)$$
as the product of the limits. He replaces the second factor with $\log(1)=0$ and proceeds to claim that the lower limit of the product is $0$. However, even though the (lower) limit of the second factor may well be $0$, the limit of the first factor is clearly $\infty$, so one cannot compute the lower limit of the product in this way.
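A quick numerical illustration of the point (my own sketch, assuming sympy is available for the $n$-th prime): the second factor is small, but the first grows without bound, and the product does not tend to zero.

```python
# The two factors p_n/log(p_n) and log(p_{n+1}/p_n), and their product.
from sympy import prime
from math import log

for n in (100, 1000, 10000):
    p, p_next = prime(n), prime(n + 1)
    first, second = p / log(p), log(p_next / p)
    print(n, round(first, 1), round(second, 6), round(first * second, 3))
```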

nt.number theory - Typical value of totient function

I've just realized I was being a little bit slow. I had already found on the internet that $n^{-2}\sum\limits_{k=1}^n\phi(k)$ is roughly $3/\pi^2$ and stupidly didn't notice that I could "differentiate" this to get exactly what I want. That is, $\sum\limits_{k=1}^N \phi(k)$ is about $3N^2/\pi^2$, so the difference between the sum to $N+M$ and the sum to $N$ is around $6NM/\pi^2$, from which it follows that the average value near $N$ is around $6N/\pi^2$, which is entirely consistent with the well-known fact that the probability that two random integers are coprime is $6/\pi^2$.



I'm adding this paragraph after Greg's comment. To see that the probability that two random integers are coprime is $6/\pi^2$, you observe that the probability that they do not have $p$ as a common factor is $(1-1/p^2)$. If you take the product of that over all $p$ then you've got the reciprocal of the Euler product formula for $\zeta(2)=1^{-2}+2^{-2}+\ldots= \pi^2/6$. It's not that hard to turn these formal arguments into a rigorous proof, since everything converges nicely.
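A small numerical check of the claim (my own sketch, assuming sympy for the totient):

```python
# Average of Euler's phi over a short window near N is close to 6N/pi^2.
from sympy import totient
from math import pi

N, M = 100000, 1000
window_avg = sum(int(totient(k)) for k in range(N + 1, N + M + 1)) / M
print(window_avg, 6 * N / pi**2)   # the two values should agree closely
```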

knot theory - What are the points of Spec(Vassiliev Invariants)?

Background



Recall that an (oriented) knot is a smoothly embedded circle $S^1$ in $\mathbb R^3$, up to some natural equivalence relation (which is not quite trivial to write down). The collection of oriented knots has a binary operation called connected sum: if $K_1,K_2$ are knots, then $K_1 \# K_2$ is formed by spatially separating the knots, then connecting them by a very thin rectangle, which is glued on so that all the orientations are correct. Connected sum is commutative and associative, making the space of knots into a commutative monoid. In fact, by a theorem of Schubert, this is the free commutative monoid on countably many generators. A ($\mathbb C$-valued) knot invariant is a $\mathbb C$-valued function on this monoid; under "pointwise" multiplication, the space of knot invariants is a commutative algebra $I$, and $\#$ makes $I$ into a cocommutative bialgebra. I.e. $I$ is a commutative monoid object in $(\text{CAlg})^{\mathrm{op}}$, where $\text{CAlg}$ is the category of commutative algebras.




Warm-up question: Any knot $K$ defines an algebra morphism $I \to \mathbb C$, i.e. a global point of $I \in (\text{CAlg})^{\mathrm{op}}$. Are there any other global points?




Edit: In response to Ilya N's comment below, I've made this into its own question.



Finite type invariants



Recall that a singular knot is a smooth map $S^1 \to \mathbb R^3$ with finitely many transverse self-intersections (and otherwise it is an embedding), again up to a natural equivalence. Any knot invariant extends to an invariant of singular knots, as follows: in a singular knot $K_0$, there are two ways to blow up any singularity, and the orientation determines one as the "right-handed" blow-up $K_+$ and the other as the "left-handed" blow-up $K_-$. Evaluate your knot invariant $i$ on each blow-up, and then define $i(K_0) = i(K_+) - i(K_-)$. Note that although the connected sum of singular knots is not well-defined as a singular knot, if $i\in I$ is a knot invariant, then it cannot distinguish different connected sums of singular knots. Note also that the product of knot invariants (i.e. the product in the algebra $I$) is not the point-wise product on singular knots.



A Vassiliev (or finite type) invariant of type $\leq n$ is any knot invariant that vanishes on singular knots with $> n$ self-intersections. The space of all Vassiliev invariants is a filtered bialgebra $V$ (filtered by type). The corresponding associated-graded bialgebra $W$ (of "weight systems") has been well-studied (some names: Kontsevich, Bar-Natan, Vaintrob, and I'm sure there are others I haven't read yet) and in fact is more-or-less completely understood (e.g. Hinich and Vaintrob, 2002, "Cyclic operads and algebra of chord diagrams", MR1913297, where its graded dual $A$ of "chord diagrams" is described as a sort of universal enveloping algebra). In fact, this algebra $W$ is Hopf. I learned from this question that this implies that the bialgebra $V$ of Vassiliev invariants is also Hopf. Thus it is a Hopf sub-bialgebra of the algebra $I$ of knot invariants.



I believe that it is an open question whether Vassiliev invariants separate knots (i.e. whether two knots all of whose Vassiliev invariants agree are necessarily the same). But perhaps this has been answered — I feel reasonably caught-up with the state of knowledge in the mid- to late-90s, but I don't know the literature from the 00s.



Geometrically, then, $V \in (\text{CAlg})^{\mathrm{op}}$ is a commutative group object, and is a quotient (or something) of the monoid object $I \in (\text{CAlg})^{\mathrm{op}}$ of knot invariants. The global points of $V$ (i.e. the algebra maps $V \to \mathbb C$ in $\text{CAlg}$) are a group.



Main Questions



Supposing that Vassiliev invariants separate knots, there must be global points of $V$ that do not correspond to knots, as by Mazur's swindle there are no "negative knots" among the monoid $I$. Thus my question.




Main question. What do the global points of $V$ look like?




If Vassiliev invariants do separate knots, are there still more global points of $V$ than just the free abelian group on countably many generators (i.e. the group generated by the free monoid of knots)? Yes: the singular knots. (Edit: The rule for being a global point is that you can evaluate any knot invariant at it, and that the value of the invariant given by pointwise multiplication on knots is the multiplication of the values at the global point. Let $K_0$ be a singular knot with one crossing and with non-singular blow-ups $K_+$ and $K_-$, and let $f,g$ be two knot invariants. Then $$\begin{aligned} (f\cdot g)(K_0) & = (f\cdot g)(K_+) - (f\cdot g)(K_-) = f(K_+)\, g(K_+) - f(K_-)\, g(K_-) \\ & \neq f(K_0) \cdot g(K_0) = f(K_+)\, g(K_+) - f(K_+)\, g(K_-) - f(K_-)\, g(K_+) + f(K_-)\, g(K_-).\end{aligned}$$) What else is there?



What can be said without knowing whether Vassiliev invariants separate knots?

Friday 20 June 2008

ra.rings and algebras - When does the essential extension commute with colimits (or pushforward)?

First, I want to point out that in general there is no surjection from a direct product of copies of $R$ to an arbitrary module $M$. For example, if $R=\mathbb{Z}$ and $M=\mathbb{Z}^{(\omega)}$ (a countable direct sum of copies of $\mathbb{Z}$), then there is no surjection $\mathbb{Z}^I\to M$ according to the paper "Extension of a theorem on direct products of slender modules" by John D. O'Neill. [There are probably much simpler examples, but this will do.]



Thus, in general, there is no map $p$, and thus no pushout $N$.



Second, maps defined from $R^{I}$ are notoriously difficult to understand, and often depend on cardinality considerations for the set $I$. In particular, when $R=\mathbb{Z}$, things get really weird when $|I|$ is measurable.



That all said, assume $p$ does exist. Since, as you pointed out, $N$ is an injective module, we just need to examine when $N$ is an essential extension of $M$ (viewing $M$ as a submodule). One obvious situation when this holds is when $R$ itself is a (right) self-injective ring, for in that case $N=M$ is already injective. Of course, this case is somewhat trivial since self-injective hereditary rings are already semisimple.



More generally, write



$$N=M\oplus E(R)^I/\langle (p(x),-i(x)) : x\in R^I \rangle.$$



For each $e\in E(R)^I\setminus\{0\}$, let $X_e:=\{r\in R : er\in R^I\}$. The set $E_e=\{er : r\in X_e\}$ is a submodule of $R^I$. Given any element $\overline{(m,e)}\in N\setminus M$, we need $\overline{(m,e)}R\cap M\neq (0)$. Thus, we need some element $r\in X_e$ such that $mr\neq -p(er)$. In other words, we do not want $p|_{E_e}$ to extend to a map $eR\to M$. Thus, whatever condition you impose will need to restrict what types of maps $p$ are available.

reference request - Summation methods for divergent series

If a series has a well-defined Cesaro sum, then it has a well-defined Abel sum and they are equal. I think I first learned this from Hardy's Divergent Series; the proof is short enough to give here.



Let $a_i$ be the series in question, let $s_m = \sum_{i=0}^m a_i$ and $c_n = \sum_{m=0}^n s_m$. The claim that the Cesaro sum is well defined is that
$$c_n = (n+1)(L + o(1)).$$



Let $A(x) = \sum a_i x^i$. Then $\sum c_i x^i = A(x)/(1-x)^2$ so
$$ A(x) = (1-x)^2 \sum_{n=0}^{\infty} (L (n+1) + o(n+1)) x^n$$
where the $o$ is as $n \to \infty$, independent of $x$.
But
$$\sum_{n=0}^{\infty} (n+1) x^n = 1/(1-x)^2$$
so
$$A(x) = L (1-x)^2/(1-x)^2 + o \left( (1-x)^2/(1-x)^2\right) = L+ o(1) \quad \mbox{as } x \to 1^{-}.$$
So the Abel sum of $a_i$ is also $L$.
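As a concrete illustration (my own sketch, not from Hardy), take the divergent series $1-1+1-1+\cdots$: both the Cesaro means and the Abel values approach the same $L=1/2$.

```python
# Cesaro and Abel summation of 1 - 1 + 1 - 1 + ...
a = [(-1)**i for i in range(10000)]          # terms a_i

# Cesaro: average of the partial sums s_0, ..., s_n
s, c = 0, 0
for ai in a:
    s += ai
    c += s
print(c / len(a))                             # ~0.5

# Abel: A(x) = sum a_i x^i for x close to 1
for x in (0.9, 0.99, 0.999):
    print(sum(ai * x**i for i, ai in enumerate(a)))   # approaches 0.5
```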




Deleted an argument that Cesaro summability implies zeta summability; not sure I can sum by parts where needed.




I want to say that I feel guilty writing up special cases like this.
I have a vague impression that there is a very general philosophy here, something like Wiener's generalized Tauberian theorem. (But presumably easier, since we are generalizing Abel's theorem, not Tauber's.)
I'm hoping that someone will come by and write up an exposition of it.

taylor series - Can Convergence Radii of Padé Approximants Always Be Made Infinite?

I've found (as have others) that for some analytic functions, a Padé approximant has an infinite convergence radius, whereas the associated Taylor series has a finite convergence radius. $f(x)=\sqrt{1+x^2}$ appears to be one such function. My questions are:



1) Is there any function where the Taylor series has the largest convergence radius of all associated Padé approximants? If so, is the Taylor series radius strictly larger, or only equal to the convergence radius of other Padé approximants (i.e. excluding the Taylor series itself)?



2) If not, is there any function that is analytic everywhere, and yet for which there is no (limit of) Padé approximant(s) that has an infinite convergence radius?



It would be both very cool and very useful if there is always a (limit of) Padé approximant(s) that has an infinite convergence radius for any function that is analytic everywhere, though I haven't the slightest how one checks/analyzes convergence of Padé approximants if the degrees of numerator and denominator both approach infinity. :)



One extra question, if there is always such a Padé approximant:



3) Is there always a numerically stable method of computing this approximant up to a finite order?
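To make the phenomenon in the first paragraph concrete, here is a small numerical sketch of my own (it assumes the mpmath library; the taylor and pade helpers are mpmath's, but treat the exact call signatures as an assumption):

```python
# Compare the degree-8 Taylor polynomial of sqrt(1+x^2) with its [4/4] Pade
# approximant at x = 2, outside the Taylor radius of convergence (which is 1).
from mpmath import mp, mpf, sqrt, taylor, pade

mp.dps = 30
f = lambda x: sqrt(1 + x**2)
coeffs = taylor(f, 0, 8)                  # Taylor coefficients a_0, ..., a_8 at 0
p, q = pade(coeffs, 4, 4)                 # numerator / denominator coefficients

def poly(cs, x):                          # evaluate sum cs[i] * x**i
    return sum(c * x**i for i, c in enumerate(cs))

x = mpf(2)
print(f(x))                               # sqrt(5) ~ 2.236
print(poly(coeffs, x))                    # Taylor: far off (series diverges here)
print(poly(p, x) / poly(q, x))            # Pade: noticeably closer to sqrt(5)
```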

Thursday 19 June 2008

fourier analysis - Does Weyl's Inequality prove equidistribution?

This response is in answer to David's further question about whether it is possible to bound the rate at which S_N/N tends to zero, as he was wanting to use Weyl's inequality to do.
This is not possible, even in the case d=2 and f(n)=θn². (For d=1 it is not hard to show that S_N is bounded, so $S_N/N=O(N^{-1})$.)
Set
$$
S_N(\theta)=\sum_{n=1}^N e^{2\pi i\theta n^2}
$$
in the following. Given any function h: ℕ → ℝ⁺ with liminf_n h(n) = 0, I show that there are irrational θ with
$$
\sup_N\left\vert S_N(\theta)/(h(N)N)\right\vert=\infty.\qquad(*)
$$



[Note: the following is a much simpler argument than the original version.] I'll use the Baire category theorem to find counterexamples.




For any countable collection $A_n$ of open dense subsets of ℝ, the intersection $A = \bigcap_n A_n$ is dense in ℝ.




In particular, any such A is nonempty. We can say more than this; if S is a countable subset of the reals then $A\setminus S=\left(\bigcap_n A_n\right)\cap\left(\bigcap_{s\in S}\mathbb{R}\setminus\{s\}\right)$ is an intersection of dense open sets, so is dense. In particular, A will contain a dense set of irrational values.



To construct counterexamples then, it is only necessary to show that the set of all θ at which the sequence diverges to infinity is an intersection of countably many open sets, and show that it contains a dense set of rational numbers. The Baire category theorem implies that it will also diverge at a dense set of irrationals.



In fact, for any sequence $x_n(\theta)$ depending continuously on a real parameter θ, the set of values of θ for which it diverges to infinity is an intersection of countably many open sets
$$
\{\theta\colon\sup_n\vert x_n(\theta)\vert=\infty\}=\bigcap_n\bigcup_m\{\theta\colon\vert x_m(\theta)\vert>n\}.
$$



So, we only need to find a dense set of rational numbers at which (*) holds.




Let θ = a/b for integers a,b with b > 0. Setting $x=S_b(\theta)/b$, we have $S_N(\theta)/N\to x$ as $N\to\infty$.




Proof:
If m ≡ n (mod b) then θm² - θn² is an integer, and $e^{2\pi i\theta m^2}=e^{2\pi i \theta n^2}$. So $n\mapsto e^{2\pi i\theta n^2}$ has period b, giving
$$
S_{bN}(\theta)=\sum_{j=0}^{N-1}\sum_{k=1}^{b}e^{2\pi i\theta(jb+k)^2}=N\sum_{k=1}^b e^{2\pi i\theta k^2}.
$$
So, $S_{bN}(\theta) = NS_b(\theta)$. Now, any N can be written as N = bM + R for some R < b. Then, $\vert S_N-MS_b\vert\le R$ and dividing by N gives $\vert S_N/N-S_b/b\vert\to0$ as N goes to infinity.



As $|S_N(\theta)/(h(N)N)| \sim |x|/h(N) \to \infty$ whenever x is nonzero, the following shows that (*) holds whenever θ is of the form a/p for an odd prime p not dividing a. Such rationals are dense, so the existence of irrational θ for which (*) holds follows from the Baire category theorem.




Let θ = a/p for integers a,p with p an odd prime not dividing a. Then $x=S_p(\theta)/p$ is nonzero.




Proof:
Note that $u=e^{2\pi i a/p}$ is a primitive p-th root of unity with minimal polynomial $X^{p-1}+X^{p-2}+\cdots+X+1$ over the rationals. Then, all proper subsets of $\{1,u,u^2,\ldots,u^{p-1}\}$ are linearly independent over the rationals and
$$
S_p(\theta)=\sum_{k=1}^{p}u^{k^2}=1+2\sum_{k=1}^{(p-1)/2}u^{k^2}
$$
is nonzero.



In fact, as pointed out by David below, $S_p$ is a Gauss sum and has size √p.
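A quick numerical check of the two displayed claims (my own sketch, not part of the answer):

```python
# For theta = a/p (p an odd prime not dividing a): S_N/N -> S_p/p and |S_p| = sqrt(p).
import cmath, math

def S(N, theta):
    return sum(cmath.exp(2j * math.pi * theta * n * n) for n in range(1, N + 1))

a, p = 2, 7
theta = a / p
x = S(p, theta) / p
print(abs(S(p, theta)), math.sqrt(p))        # both ~2.6458
for N in (70, 700, 7000):
    print(abs(S(N, theta) / N - x))          # -> 0
```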

Wednesday 18 June 2008

dg.differential geometry - Pairing used in Lefschetz duality

I am thinking about the precise formulation of the Lefschetz duality for the relative cohomology. If I understand this Wikipedia article correctly, there is an isomorphism between $H^k(M, \partial M)$ and $H_{n-k}(M)$ and hence (I suppose) a non-degenerate pairing $H^k(M, \partial M) \times H^{n-k}(M) \rightarrow \mathbb{R}$. However, I have trouble visualizing this pairing. Let $[(\alpha, \theta)] \in H^k(M, \partial M)$ and $[\beta] \in H^{n - k}(M)$, is it then true that
$$
\left\langle [(\alpha, \theta)], [\beta] \right\rangle =
\int_M \alpha \wedge \beta + \int_{\partial M}\theta \wedge \beta|_{\partial M}
$$
or am I missing something? If unrelated to Lefschetz duality, does this pairing ever appear in topology?



I can understand how to define a pairing on the homology by counting intersections, but I really don't see how this works for cohomology. Also, a reference on Lefschetz cohomology or just analysis/topology on manifolds with boundary would be greatly appreciated!

Tuesday 17 June 2008

ag.algebraic geometry - How to compute the dimension of a linear system on $\mathbb{P}^n$

Your question is equivalent to the computation of $H^0(\mathcal{I}_S(2))$.



In the example you give, $S$ is a complete intersection of $4$ quadrics and so the resolution of its ideal sheaf $\mathcal{I}_S$ is given by the Koszul complex (I write $\mathcal{O}$ instead of $\mathcal{O}_{\mathbb{P}^9}$):



$0 \to \mathcal{O}(-8) \to \mathcal{O}(-6)^{\oplus 4} \to \mathcal{O}(-4)^{\oplus 6} \to \mathcal{O}(-2)^{\oplus 4} \to \mathcal{I}_S \to 0$.



Tensoring with $\mathcal{O}(2)$ we obtain:



$0 \to \mathcal{O}(-6) \to \mathcal{O}(-4)^{\oplus 4} \to \mathcal{O}(-2)^{\oplus 6} \to \mathcal{O}^{\oplus 4} \to \mathcal{I}_S(2) \to 0$.



Splitting this exact sequence into short exact ones it is immediate to check that



$H^0(\mathcal{I}_S(2))=H^0(\mathcal{O}^{\oplus 4})=4$,



as Algori states in his comment.



Therefore the linear system of quadrics passing through $S$ has dimension $4-1=3$.

Monday 16 June 2008

ag.algebraic geometry - Can Hom_gp(G,H) fail to be representable for affine algebraic groups?

Hom(Ga, Gm) is not representable.



Let R be a ring of characteristic zero. I claim that Hom(Ga, Gm)(Spec R) is {Nilpotent elements of R}. Intuitively, all homs are of the form x -> e^{nx} with n nilpotent.



More precisely, the schemes underlying Ga and Gm are
Spec R[x] and Spec R[y, y^{-1}] respectively. Any hom of schemes is of the form y -> sum f_i x^i for some f_i in R. The condition that this be a hom of groups says that
sum f_k (x_1+x_2)^k = (sum f_i x_1^i) (sum f_j x_2^j). Comparing coefficients, f_{i+j} (i+j)!/(i! j!) = f_i f_j, i.e. (i+j)! f_{i+j} = (i! f_i)(j! f_j). So every hom is of the form f_i = n^i/i! with n = f_1, and n must be nilpotent so that the sum will be finite.



Now, let's see that this isn't representable. For any positive integer k, let R_k = C[t]/t^k. The map x -> e^{tx} is in Hom(Ga, Gm)(Spec R_k) for every k. However, if R is the inverse limit of the R_k, there is no corresponding map in Hom(Ga, Gm)(Spec R). So the functor is not representable.
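Here is a small symbolic check of the claim (my own sketch, assuming sympy): over R_k = C[t]/t^k the truncated exponential x -> e^{tx} really is a group homomorphism Ga -> Gm.

```python
# Over R = C[t]/(t^k), F(x) = sum_{i<k} (t*x)^i / i! satisfies F(x1+x2) = F(x1)F(x2).
from sympy import symbols, factorial, expand

t, x1, x2 = symbols('t x1 x2')
k = 5

def F(x):
    # truncated exponential: 1 + t*x + (t*x)^2/2! + ... + (t*x)^(k-1)/(k-1)!
    return sum(t**i * x**i / factorial(i) for i in range(k))

def mod_tk(expr):
    # reduce modulo t^k by keeping only the coefficients of t^0, ..., t^(k-1)
    e = expand(expr)
    return sum(e.coeff(t, i) * t**i for i in range(k))

assert expand(mod_tk(F(x1 + x2)) - mod_tk(F(x1) * F(x2))) == 0
```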

Sunday 15 June 2008

ac.commutative algebra - About maximal Cohen-Macaulay modules

I'm trying to solve a problem of cancellation of reflexive finitely generated modules over normal noetherian domains. When $R$ is a regular domain with $\dim R \le 2$, reflexive is equivalent to projective for finitely generated modules.



Now I'm studying the case $\dim R=2$ and $R$ normal. Under these hypotheses, reflexive modules are maximal Cohen-Macaulay modules.



I'm looking for references about this topic, with special emphasis on lifting of homomorphisms between quotients of maximal CM modules: something like "... a homomorphism $M/IM\to N/IN$ can be lifted to a homomorphism $M\to N$ ..."; indecomposable maximal CM modules are welcome too.

dg.differential geometry - How can generic closed geodesics on surfaces of negative curvature be constructed?

If you think of your surface as the upper half plane modulo a group of Moebius transformations $G$, start by representing each of your Moebius transformations $z \longmapsto \frac{az+b}{cz+d}$ by a matrix.



$$A = \begin{pmatrix} a & b \\ c & d\end{pmatrix}$$



And since only the representative in $PGL_2(\mathbb R)$ matters, people usually normalize to have $\mathrm{Det}(A) = \pm 1$.



The standard classification of Moebius transformations as elliptic / parabolic / hyperbolic (loxodromic) is in terms of the determinant and trace squared. You're hyperbolic if and only if the trace squared is larger than $4$. Hyperbolic transformations are the ones with no fixed points in the interior of the Poincare disc, and two fixed points on the boundary, and they are rather explicitly "translation along a geodesic".



Elliptic transformations fix a point in the interior of the disc so they can't be covering transformations. Parabolics you only get as covering transformations if the surface is non-compact, because parabolics have one fixed point and it's on the boundary -- if you had such a covering transformation it would tell you your surface has non-trivial closed curves such that the length functional has no positive lower bound in their homotopy class.



So your covering transformations are only hyperbolic. That happens only when $\mathrm{tr}(A)^2 > 4$. So how do you find your axis? It's the geodesic between the two fixed points on the boundary, so you're looking for solutions to the equation:



$$ t = \frac{at+b}{ct+d}$$



for $t$ real; this is a quadratic equation in the real variable $t$. If I remember the quadratic formula correctly, those two points are:



$$ \frac{(a-d) \pm \sqrt{\mathrm{tr}(A)^2 - 4\,\mathrm{Det}(A)}}{2c}$$



Is this what you're after?
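If it helps, here is a tiny numerical check of the fixed-point formula (my own sketch, not part of the answer):

```python
# Real fixed points of a hyperbolic Moebius transformation z -> (az+b)/(cz+d),
# from the quadratic c t^2 + (d-a) t - b = 0.
import math

a, b, c, d = 2.0, 1.0, 1.0, 1.0          # tr^2 = 9 > 4, det = 1: hyperbolic
tr, det = a + d, a*d - b*c
disc = math.sqrt(tr**2 - 4*det)

for t in ((a - d + disc) / (2*c), (a - d - disc) / (2*c)):
    assert abs((a*t + b) / (c*t + d) - t) < 1e-12   # t is indeed fixed
    print(t)
```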

Saturday 14 June 2008

pr.probability - Entropy of Markov processes

Consider a Markov process $X_t$ with generator $L$ and invariant distribution $\pi$, whose distribution at time $t$ is given by $\pi(t,dx)=\phi(t,x)\, \pi(dx)$ - in other words, $\phi(t,x)$ is the density of $\pi(t, dx)$ with respect to the invariant distribution $\pi$. Define the (relative) entropy
$$ S(t) = -\int \phi(t,x) \ln \phi(t,x)\, \pi(dx) \leq 0.$$



One can expect (Boltzmann's H-theorem) the entropy $S$ to increase over time, and eventually to converge to $0$.



question: what conditions should be imposed in order for such a result to be true ?



The Fokker-Planck equation shows that for any test function $f$,
$$ \int f(x)\, \partial_t \phi(t,x)\, \pi(dx) = \int (Lf)(x)\, \phi(t,x)\, \pi(dx)$$
so that
$$S'(t) = -\int L (\ln \circ \phi)(x,t)\, \phi(x,t)\, \pi(dx), $$
but I still do not see why this quantity should be non-negative.
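For what it's worth, here is a finite-state sketch of my own (assuming numpy and scipy) in which the expected behaviour is easy to observe: the relative entropy as defined above is negative and increases toward 0.

```python
# 3-state continuous-time chain: S(t) = -sum_x phi(t,x) ln phi(t,x) pi(x) -> 0.
import numpy as np
from scipy.linalg import expm

L = np.array([[-2.0, 1.0, 1.0],
              [ 1.0,-3.0, 2.0],
              [ 2.0, 1.0,-3.0]])           # generator: rows sum to 0

# invariant distribution: left null vector of L, normalized
w, v = np.linalg.eig(L.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()

p0 = np.array([1.0, 0.0, 0.0])             # start in state 0
for t in (0.1, 0.5, 1.0, 2.0, 5.0):
    p = p0 @ expm(L * t)                    # distribution at time t
    phi = p / pi                            # density w.r.t. pi
    S = -np.sum(phi * np.log(phi) * pi)
    print(t, S)                             # negative, increasing toward 0
```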

ag.algebraic geometry - Non-integral scheme having integral local rings

Let me try to give a counterexample. (I don't know whether it is 'nice'). First, let us rewrite your properties for an affine scheme $X=Spec(A)$.



Connectedness for $A$ means $A$ has no nontrivial idempotents;



Integrality for $A$ is the usual one ($A$ is a domain);



Local integrality means that whenever $fg=0$ in $A$, every point of $X$ has a neighborhood
where either $f$ or $g$ vanishes.



Let us construct a connected locally integral ring that is not integral.



Roughly speaking, the construction is as follows: let $X_0$ be the cross (the union of coordinate axes) on the affine plane. Then let $X_1$ be the (reduced) full preimage of $X_0$ on the blow-up of the plane ($X_1$ has three rational components forming a chain). Then blow up the resulting surface at the two singularities of $X_1$, and let $X_2$ be the reduced preimage of $X_1$
(which has five rational components), etc. Take $X$ to be the inverse limit.



The only problem with this construction is that blow-ups glue in a projective line, so $X_1$ is not affine. Let us correct this by gluing in an affine line instead (so our scheme will be an open subset in what was described above).



Here's an algebraic description:



For every $k\ge 0$, let $A_k$ be the following ring: its elements are collections of polynomials $p_i\in{\mathbb C}[x]$, where $i=0,\dots,2^k$, such that $p_i(1)=p_{i+1}(0)$. Set $X_k=Spec(A_k)$. $X_k$ is a union of $2^k+1$ affine lines that meet transversally in a chain. (It may be better to index the polynomials by $i/2^k$, but the notation gets confusing.)



Define a morphism $A_k\to A_{k+1}$ by
$$(p_0,\dots,p_{2^k})\mapsto(p_0,p_0(1),p_1,p_1(1),\dots,p_{2^k})$$
(every other polynomial is constant). This identifies $A_k$ with a subring of $A_{k+1}$.
Let $A$ be the direct limit of the $A_k$ (basically, their union). Set $X=Spec(A)$. For every
$k$, we have a natural embedding $A_k\to A$, that is, a map $X\to X_k$.



Each $A_k$ is connected but not integral; this implies that $A$ is connected but not integral. It remains to show that $A$ is locally integral.



Take $f,g\in A$ with $fg=0$ and $x\in X$. Let us construct a neighborhood of $x$ on which one of $f$ and $g$ vanishes. Choose $k$ such that $f,g\in A_{k-1}$ (note the $k-1$ index).
Let $y$ be the image of $x$ on $X_k$. It suffices to prove that $y$ has a neighborhood on
which either $f$ or $g$ (viewed as functions on $X_k$) vanishes.



If $y$ is a smooth point of $X_k$ (that is, it lies on only one of the $2^k+1$ lines), this is obvious. We can therefore assume that $y$ is one of the $2^k$ singular points, so two components of $X_k$ pass through $y$. However, on one of these two components (the one with odd index), both $f$ and $g$ are constant, since they are pullbacks of functions on $X_{k-1}$. Since $fg=0$ everywhere, either $f$ or $g$ (say, $f$) vanishes on the other component.
This implies that $f$ vanishes on both components, as required.

Friday 13 June 2008

computational complexity - Can knowing ahead the length of 3-SAT instance really help?

If I say I can solve 3-SAT (known to be NP-complete) in polynomial time, yet with the following 'little' proviso:
give me first $n$, the length of your 3-SAT formula, then give me some time on my own, and then, as soon as you give me your formula, I will answer in less than $n^k$ steps.



The $k$ will be a constant independent of $n$ (this is not parametrized complexity).



Implicitly: after you give me $n$, I may pre-calculate as much as I want (say $n^n$ steps or even much more) and I may also store as many results as I want.



Question : is this equivalent to 3-SAT?



Comment: I cannot find a polynomial solution like: calculate all solutions, store them in a tree and then retrieve on query. So it seems to be as 'difficult' as 3-SAT.



Note: I took 3-SAT, but any NP-complete problem Q will do: define generically the variation Q' in which the length of the instance of Q is given ahead of the instance itself.

Thursday 12 June 2008

nt.number theory - What are the connections between pi and prime numbers?

Well, first of all, $pi$ is not just a random real number. Almost every real number is transcendental so how can we make the notion "$pi$ is special" (in a number-theoretical sense) more precise?



Start by noticing that $$pi=int_{-infty}^{infty}frac{dx}{1+x^2}$$
This already tells us that $pi$ has something to do with rational numbers. It can be expressed as "a complex number whose real and imaginary parts are values of absolutely convergent integrals of rational functions with rational coefficients, over domains in $mathbb{R}^n$ given by polynomial inequalities with rational coefficients." Such numbers are called periods.
Coming back to the identity
$$zeta(2)=frac{pi^2}{6}$$
There is a very nice proof of this (that at first seems very unnatural) due to Calabi. It shows that
$$frac{3zeta(2)}{4}=int_0^1int_0^1frac{dx,dy}{1-x^2y^2}$$
by expanding the corresponding geometric series and integrating term by term, which gives $sum_{ngeq 0}1/(2n+1)^2=3zeta(2)/4$, and then evaluates the integral to $pi^2/8$ (a quick numerical check is sketched at the end of this answer). (So yes, $pi^2$ and all other powers of $pi$ are periods.) But the story doesn't end here, as it is believed that there are truly deep connections between values of zeta functions (or L-functions) and certain evaluations involving periods, such as $pi$. Another famous problem about primes is Sylvester's problem of which primes can be written as a sum of two rational cubes. So one studies the elliptic curve
$$E_p: p=x^3+y^3$$ and one wants to know whether it has a rational solution; the central value of the corresponding L-function will again involve $pi$ up to some integer factor and some Gamma factor. Next, periods are also values of multiple zeta functions:
$$zeta(s_1,s_2,dots,s_k)=sum_{n_1>n_2>cdots>n_kgeq 1}frac{1}{n_1^{s_1}cdots n_k^{s_k}}$$
And they also appear in other very important conjectures such as the Birch and Swinnerton-Dyer conjecture. But of course all of this is really hard to explain without using the appropriate terminology, the language of motives, etc. So, even though this answer doesn't say much by itself, it is trying to show that there is an answer to your question out there, and if you study a lot of modern number theory, it might just be satisfactory :-).
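
As promised, here is a quick numerical sanity check of the Calabi identity (my own addition, using nothing beyond the standard library): expanding $1/(1-x^2y^2)$ as a geometric series and integrating term by term over the unit square gives $sum_{ngeq 0}1/(2n+1)^2$, which should agree with $pi^2/8$.

```python
# Partial sum of sum_{n>=0} 1/(2n+1)^2, which the term-by-term integration of
# 1/(1 - x^2 y^2) over [0,1]^2 produces; it should match (3/4)*zeta(2) = pi^2/8.
import math

partial = sum(1.0 / (2 * n + 1) ** 2 for n in range(200_000))
print(partial)            # ~1.233699...
print(math.pi ** 2 / 8)   # 1.2337005501...
```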

molecular biology - Self pairing in DNA

DNA can adopt secondary structures like RNA; the main difference is that DNA is usually present as double-stranded DNA, while RNA is in most cases present as single-stranded RNA. Double-stranded DNA won't easily adopt any conformation other than the well-known double helix, as this is more stable than the possible structures each single strand could adopt on its own.



One example that occurs in nature is the G-quadruplex, which is found, for example, in telomeres.



There are also artificially created DNA enzymes (also called DNAzymes or deoxyribozymes) that adopt tertiary structures like ribozymes do. However, as far as I know, no naturally occurring DNA enzymes have been found.

human biology - Antibody production in secondary immune response

1) The secondary response requires CD4+ T cells to activate memory B cells. That first paper actually gives some evidence that some of the rapidity could arise because T cells and memory B cells are in very close proximity to each other in germinal centers.



2) Yes. The affinity of antibodies increases during the initial infection, both through isotype switching and affinity maturation, producing far superior antibodies. Upon reactivation, memory B cells can undergo further somatic hypermutation.

proteins - Are SLC52A2 and GPR172A really the same?

This isn't a case of gene splicing causing different protein variants. In the studies that identified these two functions (GHB sensitivity and riboflavin transport), they were using DNA derived from mRNA (cDNA), which means what was being expressed in their experiments did not have introns, so there was no chance for alternative splicing.



This gene has a weird history, here is the summary:



Takeda et al. 2002: Identification of G protein-coupled receptor genes from the human genome sequence



Takeda and co. download a copy of the human genome and translate it looking for GPCRs. They look at all the ORFs that DIDN'T have introns: "We collected intronless open reading frames (ORFs), which were long enough to cover GPCRs from the human genome". They also only look at things with 6-8 transmembrane helices as determined by the prediction program SOSUI. One of the GPCRs they found was named hGCRP41 (hypothetical protein similar to GPCR), accession number AB083623. They report it probably has 8 transmembrane helices and 418aa. This protein probably doesn't exist because they read straight through two introns in their analysis. The protein also terminates in the middle of a frame-shifted exon.



Ericsson et al. 2003: Identification of receptors for pig endogenous retrovirus



Next, Ericsson et al come along looking for receptors for the pig endogenous retrovirus and find this gene. They used cDNA to make a bunch of transduced cell lines that had different human cDNAs expressed. They found two related proteins that could serve as receptors to the pig endogenous retrovirus. They named the first one HuPAR-1, which is 445aa long and has 11 transmembrane helices. This is the length of one of the proper splice variants of this gene.



Andriamampandry et al. 2007: Cloning and functional characterization of a gammahydroxybutyrate receptor identified in the human brain



Andriamampandry is looking for gammahydroxybutyrate (GHB) receptors by using cDNA and seeing which cDNA-transduced cells become sensitive to GHB. They find two nice proteins that are sensitive to GHB, GHBh1 and C12K32. GHBh1 is identical to HuPAR-1: it has 445aa and 11 transmembrane helices. C12K32 is a frameshifted version of GHBh1 that is slightly longer, but has the same number of helices.



Yao et al. 2010: Identification and Comparative Functional Characterization of a New Human Riboflavin Transporter hRFT3 Expressed in the Brain.



Finally, we have Yao et al, who searched for homologs to their Riboflavin transporters. They found one in a cDNA library, and it was 445aa long. But they noticed upon BLASTing it that people had been calling it a GPCR. They point out that this is weird since it has 11 transmembrane helices, and go on to show that it is a Riboflavin transporter, concluding: "The molecular function of GPR172A has yet to be determined. We designated it hRFT3 (GenBank accession no. AB522904) based on its functional characterization as shown below." This paper is in pretty poor form, because somehow they don't address the Andriamampandry paper which presented data supporting its function as a GPCR.



---------



So this protein, in this specific spliced formation, has been shown to be a virus receptor, a GHB G protein-coupled receptor, and a riboflavin transporter. It is very weird that it is a GPCR with 11 transmembrane domains... they very rarely have more or less than 7. I'd wait for more studies to confirm both of these findings before I'd accept that this protein is an XXL GPCR moonlighting as a riboflavin transporter.

Wednesday 11 June 2008

at.algebraic topology - characterization of cofibrations in CW-complexes with G-action

In the model structure you describe, the cofibrations should be the retracts of the free relative G-cell maps: i.e., retracts of maps obtained by attaching cells of the form $G times S^{n} to G times D^{n+1}$.



One way to see this is via the following general machine: There is an adjoint pair
$$ G times -: mathbf{Top} leftrightarrow mathbf{GTop}: forget $$
$mathbf{Top}$ is a cofibrantly generated model category and one can check that this adjoint pair satisfies the conditions of the standard Lemma for transporting cofibrantly generated model structures across adjoint pairs (see e.g., Hirschhorn's "Model categories and their localizations", Theorem 11.3.2). Thus, it gives rise to a model structure on $mathbf{GTop}$ such that a map in $mathbf{GTop}$ is an equivalence (resp. fibration) iff its image under the right adjoint (forget) is so. Moreover, the generating (acyclic/)cofibrations are precisely the images under the left adjoint ($G times -$) of the generating (acyclic/)cofibrations in $mathbf{Top}$. This yields the description of the cofibrations as retracts of (free G-)"cellular" maps.



Also, some context:



The model structure you describe (which I'd like to call "Spaces over BG") is a localization of a model structure "G-Spaces" (where the weak equivalences are maps inducing weak equivalences on all fixed point sets). An argument along the lines of the above constructs this other model structure and identifies its cofibrations with retracts of (arbitrary) relative G-cell maps: i.e., retracts of maps obtained by attaching cells of the form $G/H times S^n to G/H times D^{n+1}$ for $H$ a closed subgroup of $G$.

Tuesday 10 June 2008

soft question - Alternating forms as skew-symmetric tensors: some inconsistency?

I can't speak to what is actually used, particularly what is used by physicists! However, I can try to shed some light on the diagram and the maps in question. In actual fact, there are two diagrams here and you are conflating them. This, simply put, is the source of the confusion. Let me expand (at a bit more length than I intended!) on that.




Firstly, there are too many maps flying around and some are more canonical than others. The most canonical is the identification of $(bigotimes^k V)^*$ with $operatorname{Mult}^k(V)$ since this is by (one of the) definition(s) of the tensor product. So let us start with that. The inclusion $operatorname{Alt}^k(V) to operatorname{Mult}^k(V)$ is probably next in line since it is the inclusion of a subspace. After that, I'd put the map $bigotimes^k V^* to (bigotimes^k V)^*$. So, so far we have a diagram:




$$
begin{array}{ccccc} operatorname{Alt}^k V \
i downarrow \
operatorname{Mult}^k V &leftarrow & (otimes^k V)^* & leftarrow & otimes^k V^*
end{array}
$$




That the horizontal maps are isomorphisms is nice, but only holds for finite dimensional vector spaces so I'm not going to write in the fact that they are isomorphisms. I want to emphasise what's really canonical and what's not.




Now let us consider $(Lambda^k V)^*$. We appear to have a canonical map from this to $operatorname{Alt}^k(V)$ but in fact, we don't. We have a canonical map from this to $(bigotimes^k V)^*$ given by:




$$
f(v_1 otimes cdots otimes v_k) = f(v_1 wedge cdots wedge v_k)
$$




This is dual to the projection map $bigotimes^k V to Lambda^k V$. That projection map is pretty canonical as we usually define $Lambda^k V$ as a quotient of $bigotimes^k V$. Taking its dual is a natural thing to do, so this also appears on my list of "canonical maps". Now when we go "down" and "across" we happen to end up in the subspace $operatorname{Alt}^k(V)$ so we can add a horizontal arrow $(Lambda^k V)^* to operatorname{Alt}^k(V)$ if we like, but the new map that we add by doing this is one step removed from the really canonical maps so I'm going to leave it out at this stage.




Now we come to $Lambda^k V^*$. This is, as for $Lambda^k V$, defined as a quotient of the tensor product. So we have a projection $bigotimes^k V^* to Lambda^k V^*$. This, again, is pretty canonical. So our "canonical" diagram looks like this:




$$
begin{array}{ccccc} operatorname{Alt}^k V && (Lambda^k V)^* && Lambda^k V^* cr
i downarrow &&{p_V}^* downarrow&& uparrow p_{V^*}cr
operatorname{Mult}^k V &leftarrow & (otimes^k V)^* & leftarrow & otimes^k V^*
end{array}
$$




At this point, an obvious question is as to whether or not we can fill in the gaps. I've already said that we can in the top-left. Can we in the top-right? That is, is there a map $Lambda^k V^* to (Lambda^k V)^*$ making the diagram commute? (Thinking about infinite dimensions says that this is the correct direction.) The answer is: (drum roll) No. And the reason is quite simply that we start in $bigotimes^k V^*$ and can choose any element there as our starting point, but would want to end up in the alternating part of $(bigotimes^k V)^*$.




Okay, now we throw in the Alternator (probably time for another drum roll). The Alternator does what it says on the tin: it alternates stuff. But we have to be careful and ensure that we only apply it to stuff that can genuinely be alternated. So we have an alternator: $operatorname{Alt} colon operatorname{Mult}^k(V) to operatorname{Alt}^k(V)$ given by




$$
operatorname{Alt}(f)(v_1,dotsc,v_k) = frac{1}{k!} sum (-1)^{sigma} f(v_{1sigma}, dotsc, v_{ksigma})
$$




The $1/k!$ is to make this a left inverse of the inclusion $operatorname{Alt}^k(V) to operatorname{Mult}^k(V)$.




We also have an alternator $Lambda^k V to bigotimes^k V$ given by:




$$
v_1 wedge dotsb wedge v_k mapsto frac{1}{k!} sum (-1)^{sigma} v_{1sigma} otimes dotsb otimes v_{ksigma}
$$




Again, the multiplier is chosen to ensure that this is a right inverse of the projection map. This is your $Sk$ map. Putting these into a diagram, we get:




$$
begin{array}{ccccc} operatorname{Alt}^k V && (Lambda^k V)^* && Lambda^k V^* cr
operatorname{Alt} uparrow &&{Sk_V}^* uparrow&& downarrow Sk_{V^*}cr
operatorname{Mult}^k V &leftarrow & (otimes^k V)^* & leftarrow & otimes^k V^*
end{array}
$$




Again, the obvious question is: can we fill in the gaps? We can fill in the first one. Indeed, the same filler map works in this diagram as in the last. That was the map $alpha colon (Lambda^k V)^* to operatorname{Alt}^k(V)$ with the property that $i alpha = eta {p_V}^*$ (where $eta colon (bigotimes^k V)^* to operatorname{Mult}^k(V)$ is the isomorphism). So:




$$
i alpha (Sk_V)^* = eta {p_V}^*(Sk_V)^* = eta (Sk_V p_V)^* = eta;text{and}; i operatorname{Alt} eta = eta
$$




Thus, as $i$ is an injection, $alpha (Sk_V)^* = operatorname{Alt} eta$.




But it's the other gap that's more interesting. Now we can fill it in. And the "filler" map is laid out for us already: it's simply follow-the-arrows. If we work it out in detail, it's the following map:




$$
begin{aligned}
f_1 wedge dotsb wedge f_k mapsto Big((v_1 wedge dotsb wedge v_k) mapsto & Sk_{V^*}(f_1 wedge dotsb wedge f_k) big( {Sk_V}^*(v_1 wedge dotsb wedge v_k)big)Big) \
&= frac{1}{k!} frac{1}{k!} sum_sigma sum_tau (-1)^{sigma} (-1)^{tau} f_{1sigma}(v_{1tau}) dotsb f_{ksigma}(v_{ktau})
end{aligned}
$$




This simplifies considerably by rewriting $f_{jsigma}(v_{jtau})$ as $f_{jrho}(v_j)$. Then we end up with $k!$ of each term, so we get:




$$
(f_1 wedge dotsb wedge f_k)(v_1 wedge dotsb wedge v_k) = frac{1}{k!} operatorname{det}(f_i(v_j))
$$




But notice the factor of $1/k!$ in this!
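
If you want to see that $1/k!$ appear concretely, here is a small numerical check (my own addition, assuming NumPy, with random test vectors) that the follow-the-arrows composite really evaluates to $frac{1}{k!} operatorname{det}(f_i(v_j))$:

```python
# Numerical check that  Sk_{V^*}(f_1 ^ f_2) applied to Sk_V^*(v_1 ^ v_2)
# equals (1/k!) * det(f_i(v_j)), using the double sum written out above.
import numpy as np
from itertools import permutations
from math import factorial

def sign(p):
    """Sign of a permutation given as a tuple of 0..k-1."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

rng = np.random.default_rng(0)
k = 2
V = rng.normal(size=(k, 3))   # test vectors v_1, ..., v_k in R^3
F = rng.normal(size=(k, 3))   # test covectors f_1, ..., f_k (acting by the dot product)
M = F @ V.T                   # M[i, j] = f_i(v_j)

# (1/k!)^2 * sum_{sigma, tau} sgn(sigma) sgn(tau) prod_j f_{sigma(j)}(v_{tau(j)})
double_sum = sum(
    sign(sigma) * sign(tau) * np.prod([M[sigma[j], tau[j]] for j in range(k)])
    for sigma in permutations(range(k))
    for tau in permutations(range(k))
) / factorial(k) ** 2

print(double_sum, np.linalg.det(M) / factorial(k))   # the two numbers agree (up to rounding)
```

Dropping one of the two $1/k!$ normalizations makes the composite exactly $operatorname{det}(f_i(v_j))$, which ties in with the remark about rescaling just below.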




So to make that right-hand rectangle commute, one of the maps has to have a factor of $1/k!$ in it. It doesn't have to be the top one, but that's the most obvious one since if you modify one of the $Sk$s then you ought to modify the other one - though there's no reason to do so, and in fact this might be what's going on: the physicists are keeping one of the $Sk$s as it is and defining the other one to be suitably scaled so that the upper map is the determinant map. But that's speculation, returning to reality we have a diagram:




$$
begin{array}{ccccc} operatorname{Alt}^k V && (Lambda^k V)^* &stackrel{frac{1}{k!}operatorname{det}}{leftarrow} & Lambda^k V^* cr
operatorname{Alt} uparrow &&{Sk_V}^* uparrow&& downarrow Sk_{V^*}cr
operatorname{Mult}^k V &leftarrow & (otimes^k V)^* & leftarrow & otimes^k V^*
end{array}
$$




Finally, let's compare this to your original diagram. The key thing to notice is that in my diagrams, I have two vertical maps in one direction and one in the other. In your diagram, you have two vertical maps in the same direction (and are missing the third). But whichever of my diagrams you prefer, one of your maps is going in the wrong direction.
So, in conclusion, the mistake is that your diagram isn't supposed to commute. Rather, there's two commuting diagrams there with some maps from one diagram and some from another.




(I have a feeling that I haven't really answered the question. This was what I wrote out when trying to make sense of the question rather than towards an answer. But I hope that it helps clarify the issue for you.)

analytic number theory - Dirichlet L series and integrals

Here's what you could do (I wrote it in haste so I am not responsible for mistakes). Consider a nice function $f$ with Mellin transform $$hat{f}(s) = int_{1}^{infty} t^{s} f(t) text{dt}$$ (most importantly we don't want our $hat{f}$ to have poles and this is why I integrate from $1$ to $infty$ rather than from $0$ to $infty$). Now consider $$frac{1}{2pi i}int_{c-iinfty}^{c+iinfty} zeta(s+1)^2 hat{f}(s)cdotfrac{text{ds}}{s}$$ (with $c > 0$). On the one hand expanding $zeta(s+1)^2$ into a Dirichlet series and $hat{f}$ into an integral and interchanging both sum and integrals, we obtain that the integral above is equal to $$sum_{n geq 1} frac{d(n)}{n} int_{1}^{infty} f(t) frac{1}{2pi i} int_{c - iinfty}^{c + iinfty} frac{t^s}{n^s} frac{1}{s}text{ds} text{dt} = sum_{n geq 1} frac{d(n)}{n} int_{n}^{infty} f(t) text{dt}$$ (The interchange might be difficult and I describe a way around it, below). On the other hand we can estimate the integral appearing in the second formula in this post, by shifting the contour and picking up residues. For instance, the residue at $s = 0$ gives
$$int_{1}^{infty} (frac{log(t)^2}{2} + 2log(t) + 2gamma) f(t) text{dt}$$ [Warning: I might have messed up the residue calculation] and this is the "expected main term" (if not an exact expression!) for the integral in the second formula. Thus for example when $f(t) = e^{-epsilon t}$ you expect the third formula to be asymptotic to the fourth formula (as $epsilon$ goes to zero) [actually in this case, the two are probably identically equal].



The important feature of $f(t) = e^{-epsilon t}$ is that $hat{f}(s)$ has no poles! The method described here should work equally well for any nice smooth function which is not too different from $e^{-epsilon t}$.



Also, note that instead of using $frac{1}{s}$ in the second formula, you could use $Gamma(s)$. That would lead to a messier formula (you would need to take into account the poles of $Gamma(s)$ at -1,-2,...) - but interchanging sum and integral in the third formula would be much much easier.

Monday 9 June 2008

yeast - Sources for common laboratory Saccharomyces strains?

One very important resource is EUROSCARF. http://web.uni-frankfurt.de/fb15/mikro/euroscarf/



It is one of the most famous dedicated strain repositories for yeast (S. cerevisiae). You can even find some very useful yeast plasmids here.



Another resource I would recommend is the original labs that made the mutant strains/plasmids. Yeast researchers as a community are very nice; I have almost always obtained the strains and plasmids that I requested from the original labs.



Hope it helps,
Cheers!!

Mitosis in human body - Biology

In many cases cell division depends on the stage of development an organism is in. The rate of cell division is obviously much faster in a developing organism, and from what I understand fully differentiated cells such as neurons and those in skeletal muscle don't divide (correct me if I'm wrong here).



In early development totipotent cells (stem cells that can become anything) begin to differentiate dependent on environmental factors, turning into multipotent (partially differentiated) cells that can only lead to certain cell types. For example: mesodermal precursors can differentiate to myoblasts, which can go on to differentiate into myotubes, later forming muscles.



Epithelial and blood cells are two of the main types of cells that need to be constantly replaced in developed organisms. As far as I know, cells lining the gut epithelium are the fastest to divide. They are created from stem cells in 'crypts' (pockets) in the lining and are pushed outwards, where they are later broken down (by what I would assume would be abrasion and intestinal juices). My book gives them a lifespan of 3-5 days. External skin cells are much slower to divide (though I'm not sure by exactly how much).



Red blood cells have a lifespan of approximately 120 days. They are replenished by stem cells in the marrow of certain bones (e.g. a femur). Neutrophils are the next most common blood cell, with a circulating life of 8 hours (though their total lifespan may be a few days). Roughly equal numbers of RBCs and neutrophils are created per day, and they are the most numerous new cells produced each day. Lymphocytes, another type of white blood cell, are responsible for immune 'memory' and can persist for years. The fastest recorded mitotic cycle for a mammalian cell (in culture) is ~8-10hrs.

Saturday 7 June 2008

mg.metric geometry - Intrinsic metric with no geodesics

There is a very simple example of an intrinsic, complete metric space that is not geodesic (I read it in Ballmann's "Lectures on Spaces of Nonpositive Curvature"): it is the graph on two vertices $x,y$, linked by edges $e_n$ of length $1+1/n$ for every $n$. Here $d(x,y)=1$, but no path between $x$ and $y$ has length exactly $1$, so the distance is not realized by a geodesic.



Of course it does not answer your question, but it may be possible to improve this example to one that does. Call $X_1$ the graph described above, and define $X_{n+1}$ from $X_n$
as follows: $X_{n+1}$ has a vertex $x'$ for each vertex $x$ of $X_n$, plus a vertex $v_e$ for each edge $e$ of $X_n$. For each edge $e=(xy)$ of $X_n$ we define edges $f_e^n$ and $g_e^n$ of $X_{n+1}$: $f_e^n$ connects $x'$ to $v_e$ and has length $(1+1/n)$ times the original length of $e$, and $g_e^n$ does the same
but replacing $x'$ by $y'$.



Now it should be possible to construct the desired example by a limiting process. For example, take all the vertices appearing along the construction: the distance between any two of these points stays constant from the stage at which it is first defined, so we get a metric space. Its completion might be what you want (but I am not so sure of that after writing these lines).

ag.algebraic geometry - Hilbert scheme of points

Let X be a smooth projective variety. We consider the Hilbert scheme X_[n] of points on X. Denote by Z the universal subscheme in X×X_[n]. We know that Z|_(X×ζ) = ζ, where ζ belongs to X_[n]. But does the ideal sheaf I_Z have the same universal property, i.e. I_Z|_(X×ζ) = I_ζ?

bioenergetics - Why can ATP not be stored in excess?

Not a complete answer, but a few random thoughts to start off the conversation:



1) There is another molecule that is used as a fast access store, and that is phosphocreatine which can be used to very rapidly rephosphorylate ADP in muscle. In resting muscle it is present at about 5x the level of ATP.



2) Levels of ATP are also used by cells as a regulatory input - in other words the fall in ATP levels with the onset of exercise triggers a response to replenish ATP through e.g. the breakdown of glycogen. In this view it is useful to have a final stage "energy currency" which can act directly as an enzyme substrate and whose level is a sensitive indicator of current energy demand.



3) ATP is also a substrate for RNA polymerase. If ATP was present at vastly higher levels than UTP, CTP and GTP it would probably cause errors in transcription, and might also interfere with the regulatory role of GTP binding proteins, since it would act as a competitor for binding at the GTP binding site.



4) In any case, if ATP was to be maintained at a much higher concentration for rapid use, presumably as soon as it began to be used the ATP generating systems would have to start up to try to replenish the pool. In other words things would really be no different from the way they are, but would simply operate at a higher resting level of ATP.

Friday 6 June 2008

human biology - Stopping the effect of hormone

The binding is typically reversible; part of the potency of a drug is how well and for how long it binds to its target. There's a natural equilibrium of binding and dissociation. Many drugs, once bound to their cognate receptor, cause a down-regulation of that receptor on the target cell. The bound/activated downstream signalling pathways may be inhibited by ubiquitination of the downstream signals themselves, upregulation of antagonists, etc. The hormone itself has a half-life, which is very important; thus levels naturally decrease, and for some hormones this happens incredibly rapidly. Levels may decrease due to breakdown or excretion. An increase in hormone-binding proteins may decrease the free hormone and thus its effect as well.

homological algebra - How to construct a ring with global dimension m and weak dimension n?

If $R$ is Noetherian then they are equal.



For $n=0$ one can use the fact that any Boolean ring has weak dimension $0$ (any module is flat), but a free Boolean ring on $aleph_n$ generators has global dimension $n+1$; see the last paragraph of this paper.



For any given pair $(m,n)$ one can perhaps use polynomial rings over the examples for the $n=0$ case. (The global dimensions do go up properly, but the behavior of the weak dimensions seems to be trickier; maybe someone who is a real expert can confirm this?)

Thursday 5 June 2008

replication - What is the mechanism of labeling a DNA molecule with deuterated water?

I would assume that the labeling occurs in the reduction of NTPs to form dNTPs.



This process (catalyzed by ribonucleotide reductase) involves protonating the hydroxyl group on the 2' carbon, allowing it to leave as water, and then adding a hydride to the newly formed carbocation. The two hydrogen atoms (the proton and the hydride) come from two thiols on the enzyme, which in the process are oxidized to form a disulfide bond.



The crux is that, in the presence of heavy water, the two thiols would rapidly exchange their hydrogen for deuterium. That means that the hydride that gets added to the carbocation would be a deuterium, and the resulting dNTP would be deuterium labeled.



This is the only step in which I can imagine the labeling working, as a carbon-hydrogen bond is formed (which doesn't exchange rapidly with the solvent) using the hydrogen from a sulfur-hydrogen bond (which does exchange rapidly with the solvent).

fa.functional analysis - Self-adjoint extension of locally defined differential operators

The following is well known. Given a symmetric differential operator, like $partial_x^2$, defined on smooth functions of compact support on $mathbb{R}$, $C_0^infty(mathbb{R})$, one can count the number of independent $L^2$-normalizable solutions of $partial_x^2psi=pm ipsi$ and use the von Neumann index theorem to classify the possible self-adjoint extensions of this operator on $L^2(mathbb{R})$. This can be generalized to more complicated differential operators, to $mathbb{R}^n$ as well as to bounded open subsets thereof.



On the other hand, suppose that I have a manifold $M$ that is covered by a set of open charts $U_i$ with differential operators $D_i$ defined in corresponding local coordinates. It is easy to check if the $D_i$ are restrictions of a globally defined differential operator $D$ on $M$: the transition functions on intersections of charts $U_icap U_j$ must transform $D_i$ into $D_j$ and vice versa. Suppose that is the case and that I am interested in self-adjoint extensions of $D$ to $L^2(M)$ (supposing that an integration measure is given and that $D$ is symmetric with respect to it). Now, the question:




Is there a way of classifying the self-adjoint extensions of $D$ on $L^2(M)$ in terms of its definition in local coordinates, that is, the actions of the $D_i$ on $C_0^infty(U_i)$?




A simple example would be the cover of $S^1$ by two overlapping charts. I know that a self-adjoint extension of $partial_x^2$ on $[0,1]$ with periodic boundary conditions gives the naturally defined self-adjoint Laplacian on $S^1$. Then $(0,1)$ is interpreted as a chart on $S^1$ that excludes one point. However, I don't know how to define the self-adjoint Laplacian on $S^1$ if it's given on two overlapping charts.

mg.metric geometry - How to compare finite point sets in normed spaces?

Consider the complete bipartite graph $G$ with bipartition $(A,B)$, and let the weight of an edge $ab$ be $d(a,b)$. Then $d(A,B)$ is simply the weight of a minimum weight perfect matching of $G$. Finding minimum weight perfect matchings is a well-studied problem. In particular, we can compute $d(A,B)$ in polynomial-time. Indeed, even in the case that the edge weights do not come from a metric, efficient algorithms exist. Also, even in the case that the graph is not bipartite, we can find minimum weight perfect matchings in polynomial-time. See Combinatorial Optimization, by Cook, Cunningham, Pulleyblank, and Schrijver for the sordid details.
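
In case a concrete implementation is useful, here is a minimal sketch (my own; it assumes SciPy and uses the Euclidean norm purely as an example) that computes $d(A,B)$ by solving the assignment problem on the bipartite cost matrix:

```python
# Compute d(A, B) as the weight of a minimum weight perfect matching of the
# complete bipartite graph on (A, B); for bipartite graphs this is the assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.1, 0.0], [1.0, 0.2], [0.0, 0.9]])

cost = cdist(A, B)                       # cost[i, j] = d(a_i, b_j), here the Euclidean norm
rows, cols = linear_sum_assignment(cost) # minimum weight perfect matching
d_AB = cost[rows, cols].sum()
print(d_AB)
```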

stem cells - potency in preformed germ-line

In almost all metazoa, the pro-germline cells get segregated from other stem cells at an early stage of development and they thrive and differentiate in their neighborhood. This is important in order to preserve the germline. This post provides some basic explanation.



However, even Drosophila has adult multipotent stem cells, which help in the formation of the midgut, as reported by this study.

Wednesday 4 June 2008

muscles - Does a piezoelectric organic substance exist?

I don't know if it would contract by the amount that you are after, but bone (which has both organic and inorganic components) is piezoelectric.



For an overview, see http://silver.neep.wisc.edu/~lakes/BoneElectr.html.




They suggest that two different mechanisms are responsible for these effects: classical piezoelectricity due to the molecular asymmetry of collagen in dry bone, and fluid flow effects, possibly streaming potentials in wet bone.


mg.metric geometry - Bounding the product of lengths of basis vectors of a unimodular lattice

I don't know how good a bound you can obtain from this, but what about taking a Korkine-Zolotarev reduced basis of $Lambda$, say $(b_1, dots, b_n)$: then, by this paper, $|b_i|_2^2 le frac{i + 3}{4} lambda_i(Lambda)^2$, where $lambda_i(Lambda)$ is the $i$-th successive minimum of $Lambda$. By Minkowski, $prod_{i=1}^n lambda_i(Lambda) le gamma_n^{n/2} det Lambda = gamma_n^{n/2}$ (in your case), $gamma_n$ being the $n$-th Hermite constant, whence you get $A le prod_{i=1}^n |b_i|_2 le frac{gamma_n^{n/2}}{2^n} prod_{i=1}^n sqrt{i + 3}$.
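
For what it's worth, here is a small numerical evaluation of that bound (my own addition; it only covers $nle 8$, where the Hermite constant $gamma_n$ is known exactly):

```python
# Evaluate (gamma_n^{n/2} / 2^n) * prod_{i=1}^{n} sqrt(i + 3) for n <= 8,
# using the known values of gamma_n^n: 1, 4/3, 2, 4, 8, 64/3, 64, 256.
from math import prod, sqrt

gamma_to_the_n = {1: 1, 2: 4/3, 3: 2, 4: 4, 5: 8, 6: 64/3, 7: 64, 8: 256}

for n, g in gamma_to_the_n.items():
    bound = sqrt(g) / 2 ** n * prod(sqrt(i + 3) for i in range(1, n + 1))  # sqrt(g) = gamma_n^{n/2}
    print(n, round(bound, 4))
```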

Tuesday 3 June 2008

blood circulation - Do body lotions enter into bloodstream of people? And how do they do it?

Lotions, like any other drug, can effectively enter the bloodstream. However, the fraction of the applied lotion that actually enters is really small, and it only achieves a significant concentration in the area near the application. The skin, if healthy, offers a very high resistance to the passage of substances. Moreover, the bloodstream dilutes the little fraction that does manage to enter, further reducing its effects. If the skin is damaged the penetration is greater, though you'll never be able to get drunk by pouring booze on a wound.

Monday 2 June 2008

ag.algebraic geometry - sheaves of representations on galois groups, can there be interesting cohomology?

Consider a field $K$ (of characteristic 0, say) and its absolute galois group $G_K^{ab} = Gal(overline{K}/K)$, given the Krull topology: $U_E(sigma) = sigma Gal(overline{K}/E)$ form a basis of the topology, ranging over $sigma in G_K^{ab}$ and $E/K$ finite galois.



Fix a group $G$ and denote by $R_E$ its representation ring over $E$, and by $R_E^sigma subset R_E$ the elements of $R_E$ fixed by $sigma$.



We can construct a sheaf $mathcal{F}$ on $G_K^{ab}$ by setting $mathcal{F}(U_E(sigma)) = R_E^sigma$. It is a simple exercise to verify the axioms.



One might hope that the sheaf cohomology of $mathcal{F}$ encodes information about the splitting behaviour of representations of G over various ground fields, but this is not the case: $G_K^{ab}$ is known to be totally disconnected, hausdorff and compact. It is a theorem [1, 5.1] that $H^r(G_K^{ab}, mathcal{F}) = 0$ for $r > 0$. Furthermore the $U_E(sigma)$ are actually clopen, so most useful subsets I can think of are also compact, hence their cohomology is equally uninteresting.




Is there a way to produce a useful cohomology along these lines?




Here "useful" essentially means "non-trivial", and "along these lines" basically "involving the galois action on $R_E$ for various $E$".



[1] http://www.jstor.org/stable/2035693

zoology - Why is there a difference in the rotation of the tail fin in fish compared to marine mammals?

While fish tend to move from side to side (lateral undulation) for which a vertical tail makes sense, the land ancestors of marine mammals had their limbs under them and so their spines were already adapted to up and down movement (dorsoventral undulation). When these animals moved to marine environments, they continued up and down movement in their swimming, for which a horizontal tail makes sense.



(The wikipedia article on fins gives some more detail, and links to this webpage on Berkeley.edu. A paper by Thewissen et al. suggests that for cetaceans, dorsoventral undulation as a swimming strategy came first, and the horizontal tail evolved later.)




More detail:



In a third example beyond fishes and marine mammals, the ichthyosaurs and other aquatic reptiles developed vertical tails, even though, like marine mammals, they evolved from four-footed land animals. This may be because the legs/spines/gaits of land reptiles differ from those of land mammals, so that the earliest ichthyosaurs swam with lateral undulation, as reflected in their spinal modifications.



This blog post by Brian Switek gives a really superb run-down on the issue, with figures and citations. I'll quote this part which deals with the dorsoventral undulation theory:




...mammals and their relatives were carrying their legs underneath their bodies and not out to the sides since the late Permian, and so the motion of their spine adapted to move in an up-and-down motion rather than side-to-side like many living reptiles and amphibians. Thus Pakicetus[the pre-cetacean land mammal] would not have evolved a tail for side-to-side motion like icthyosaurs or sharks because they would have had to entirely change the way their spinal column was set up first.




Switek goes on to to talk about exceptions in some marine mammals:




At this point some of you might raise the point that living pinnipeds like seals and sea lions move in a side-to-side motion underwater. That may be true on a superficial level, but pinnipeds primarily use their modified limbs (hindlimbs in seals and forelimbs in sea lions) to move through the water; they aren’t relying on propulsion from a large fluke or caudal fin providing most of the propulsion with the front fins/limbs providing lift and allowing for change in direction. This diversity of strategies in living marine mammals suggests differing situations encountered by differing ancestors with their own suites of characteristics, but in the case of whales it seems that their ancestors were best fitted to move by undulating their spinal column and using their limbs to provide some extra propulsion/direction.