Monday 31 August 2009

the sun - Why is the Sun's atmosphere (the corona) so hot?

It is still an open question, even though it is clear that the heating is linked to the Sun's magnetic field. The main hypotheses are the following:



  • Wave heating mechanism: propagating magnetohydrodynamic waves can significantly heat the corona. These waves can be produced in the solar photosphere and carry energy up through the solar atmosphere; they eventually steepen into shock waves that dissipate their energy as heat in the corona.


  • Magnetic reconnection mechanism: magnetic field lines are tied to the surface of the Sun, and even when the plasma moves they stay tied to their original positions (their "foot points"); therefore, plasma motions can drag the magnetic field lines and entangle them into geometries such as loops, which can eventually "reconnect". "Reconnection" corresponds to a change in the topology of the field lines (see the image, which pictures it better than my words). During this process, heat and energy are released quite efficiently.


  • Nanoflares mechanism: heating by the dissipation of many small-scale structures (slow convective motions of the magnetic foot points can create current sheets that are continually dissipated and re-formed, providing heating through Ohmic dissipation). You can think of it as many small magnetic loops reconnecting together, instead of one big loop reconnecting alone.


Magnetic reconnection



Black lines represent the magnetic field lines. They are tied to the surface of the Sun (the foot points). One of the magnetic field lines is looping and "crossing itself". When it reconnects, it forms a magnetic loop on the top and a magnetic arch on the bottom. As the diagram points out, the magnetic reconnection mechanism is also associated with Coronal Mass Ejections (CMEs), since the energy released by magnetic reconnection is powerful enough to eject a significant amount of matter.



Source: A Contemporary View of Coronal Heating

Finding questions between functional analysis and set theory

Though this is a little more advanced, there is actually some very exciting research right now at the intersection of descriptive set theory, ergodic theory, and von Neumann algebras. It is quite striking that the three areas have powerful tools for looking at similar problems, and yet tend to be applicable in different cases. For a nice introduction to some of these ideas from a more set-theoretical point of view I would say check out
"Topics in Orbit Equivalence" by Kechris and Miller.



http://www.springerlink.com/content/0pwfmbrandag/



Here is a link where you can download it (you might need a subscription, but many universities have one, so it should work on a department computer). It is actually quite elementary: you need only some basic descriptive set theory and measure theory, yet it arrives at quite deep theorems.

Sunday 30 August 2009

star atlas with absolute magnitude and spectral type

BSC5p may be a nice database to start with.



The parallax (check the corresponding box) and vmag provide a basis for calculating the absolute visual magnitude. The true absolute visual magnitude may be lower (the star may be intrinsically brighter) if the extinction isn't negligible.



1/parallax is a good estimate for the distance. Take this distance as an approximation to the luminosity distance if you can assume negligible extinction. Now calculate the visual magnitude the star would have at a distance of 10 parsec to get the absolute visual magnitude.



Example: Alp1Cen (HR 5459) has vmag -0.01, and parallax 0.751 arc seconds. Therefore the distance is about 1/0.751 = 1.33 parsec. We get absolute visual magnitude
$$M = m - 5\cdot(\log_{10} D_L - 1) = -0.01 - 5\cdot(\log_{10} 1.33 - 1) = 4.37.$$
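
For convenience, the same computation as a short Python sketch (the function name is mine; it assumes negligible extinction, as above):

    import math

    def absolute_magnitude(vmag, parallax_arcsec):
        """Absolute visual magnitude from apparent magnitude and parallax,
        assuming negligible extinction (so D_L ~ 1/parallax in parsec)."""
        d = 1.0 / parallax_arcsec              # distance in parsec
        return vmag - 5 * (math.log10(d) - 1)

    # Alp1Cen (HR 5459): vmag -0.01, parallax 0.751"
    print(round(absolute_magnitude(-0.01, 0.751), 2))   # 4.37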



An overview of absolute magnitude and spectral types.



A more detailed discussion and tables can be found in this paper.



More tables.



This book seems to contain a table with values as of 2009, which is cited here.

What are dark matter and dark energy?

If you read an article that presents a hypothesis on dark matter and dark energy, that is about as much as you will find right now. No one on this planet knows what they are. We only know from the behavior we see in the cosmos that there must be something we cannot detect out there influencing the movement of the heavens.



If you read and understand everything that has currently been written by an authority on the subject right now (January 2015), you will still not know the answers to the questions you have posed. If you truly must know, I suggest you begin by applying to enter into a PhD program in cosmology or astronomy.

Saturday 29 August 2009

Why do the planets in our solar system orbit in the same plane?

In its protostar stage, the Sun was surrounded by a (spinning) gas cloud. This cloud behaved like a fluid (well, a gas is a fluid), so it flattened into an accretion disk due to conservation of angular momentum. The planets eventually formed by compression of the dust and gas in the disk. This process doesn't move the dust out of the plane (the net vertical component of gravity points toward the disk), so the final planets lie in the plane too.



Why does an accretion disk have to be flat? Well, firstly, let's imagine the protostar and gas cloud before the accretion disk formed. Usually such a setup will have particles spinning in mostly one direction. The ones spinning in retrograde orbits will end up reversing themselves due to collisions.



In this gas sphere, there will be an equal number of particles with positive and negative vertical velocities (at a given point in time; due to rotation the velocity signs will flip). Collisions will eventually drive all of these to zero.



A particle orbiting a central body always moves in a plane through the body's center, so its projection onto the body is a great circle. Thus we cannot have a particle with zero vertical velocity but nonzero vertical position (that would imply an orbit whose projection isn't a great circle). So as the vertical velocity decreases, the orbital inclination decreases too, eventually leading to an accretion disk with very little vertical spread.

supernova - Is there a star simulation software that can handle mass ejections and supernovae?

Regarding Terminology:
'Mass ejections' (at least semantically) are very different from supernovae. A "mass ejection" would generally refer to something like what the Sun does as part of its normal activity --- ejecting very small amounts of plasma --- or, on the other end of the spectrum, the ejection of massive shells of material by very massive (e.g. $M \gtrsim 40\,M_\odot$) stars$^{[1]}$. Supernovae, on the other hand, are the explosion and destruction of an entire star (or at least everything outside of the core).



Simulations
The sad fact of the matter is that there are no simulations which can fully self-consistently explode a star. In other words, there is no code that can start with a star and have it naturally explode$^{[2]}$. There are many specialized codes, however, which deal with particular aspects of the explosions. The initial collapse and explosion is usually done with very specialized, complex hydrodynamic codes by people like Christian Ott, Adam Burrows, Chris Fryer, etc. These codes are not public.



More general hydrodynamic codes are usually used to study the effects of supernovae, i.e. how the blast waves evolve under different circumstances. In these simulations, the user artificially deposits a huge amount of energy in the center of a stellar model (mimicking the energy produced by an explosion), and then sees how the system evolves. One of the most popular and advanced codes for this is called FLASH, produced and managed at the U of Chicago.




[1] For a technical article see, e.g.: http://arxiv.org/abs/1010.3718
[2] There are numerous reasons for this, first and foremost that it is an incredibly difficult computational task. Stars are generally about $10^6$ km in size, while their cores are only a few km --- to understand the explosion, the code needs to model this entire dynamic range, which is incredibly challenging. Additionally, the densities and temperatures involved in the cores of supernovae are far outside the range of what has ever been explored in a laboratory --- so information about that material's behavior is still largely uncertain. There are lots of other big challenges (e.g. incorporating general relativity, advanced nuclear reactions, etc.), but these are some of the key issues people are exploring.

distances - What would the night sky look like if Earth orbited an intergalactic star?

Planets, if there are any in that star system, would be visible to the naked eye, the way you can see Mercury, Venus, Mars, Jupiter, Saturn and Uranus in our system. Against a virtually empty sky, they would draw that much more attention.



Of course, if you had a Moon (or several moons), that would be visible too.



Very rarely and briefly, asteroids passing by very close to the planet might also be visible. In an empty sky, that would be a big event - and difficult to explain with an under-developed astronomy.



Comets would be visible as usual, if the system has an Oort cloud and enough perturbations within it.



Other than planets, moons, asteroids and comets:



From a desert, or a farm way out in the boondocks, you will see the nearest galaxies as faint "clouds" of luminescence in the sky, the way you can see Andromeda now from Earth. Any light pollution from cities nearby would kill it.



There would be no individual stars visible to the naked eye, since all the stars you can see that way must be very, very close (within your own galaxy, if you're located in one). It goes without saying that naked-eye star clusters that you can see from Earth (like the Pleiades) would not be visible there.



There are nebulae visible to the naked eye from a place far away from cities on Earth, like the Great Orion Nebula, but there would be no such thing visible from your star system.



So most people (city-dwellers) in a modern civilization in a place like that would not see anything in the night sky except planets (if any).



Keep in mind that the Milky Way and Andromeda are fairly close, as galaxies go. If your system were outside a cluster and had no planets, then the sky would be completely empty to the naked eye no matter where you looked from. Only telescopes would be able to see anything.




So there could be a civilization that exists in which the night is completely void of light?




unlikely but yes (unless they make telescopes)







It may also retard the development of astronomy, cosmology and fundamental physics - especially if they had no other planets and moons.

teaching - Looking for an introductory textbook on algebraic geometry for an undergraduate lecture course

I am now supposed to organize a tiny lecture course on algebraic geometry for undergraduate students who have an interest in this subject.



I wonder whether there are basic algebraic geometry texts suited to the level of undergraduate students who have not learnt commutative algebra or homological algebra; they just know linear algebra and basic abstract algebra.



I am looking for textbooks that provide a lot of examples (more computations using linear algebra and calculus). I am also looking for textbooks based on very basic mathematics that nevertheless say a little about the modern point of view.



Thanks in advance!

Friday 28 August 2009

the sun - Can the Sun be used as a point source of light to achieve better focus?

There's a limit to how well sunlight can be focused by a parabolic reflector or a lens because the Sun is not a point source. I wonder if the sunlight could be engineered to work as if it was (much more similar to) a point source.



If almost all of the solar disc were covered by a coronagraph, the light which gets through a small hole would be (almost) a point source. Is that correct?



If then an array of such covered reflectors or lenses concentrated their light to one and the same point, would a better focus be achieved than without covers?

Selecting a Telescope for Viewing Planets


Is it possible to view Saturn in its slight yellowish color and Mars in its slight reddish color using the following telescopes?




It is definitely possible to observe the rings of Saturn with telescopes this size. Even the Cassini Division should sometimes be visible, if the instruments are well collimated and the seeing is not too bad. In terms of color, Saturn is just a boring buttery yellow even in bigger scopes, so I wouldn't worry about that.



But Saturn is getting lower in the sky these days. If you hurry up and get the scope quickly, you may catch it for a few weeks at sunset, low in the western sky. After this, you'll have to wait until next year.



Mars is a different animal. Most of the time, all you'll see is a bright brick-red round dot, even in a bigger scope than these ones. But every couple years Mars is at opposition, when it's closest to Earth. We just had one a few months ago. Then you can see some of the big features, such as Syrtis Major, or the polar ice caps, or Hellas Basin full of frost or fog, like a big, bright white area.



However, that's only doable briefly around oppositions. The scope must be in perfect collimation, and seeing must cooperate.



http://en.wikipedia.org/wiki/Astronomical_seeing



If everything is at maximum parameters, I'm sure you could see Syrtis Major in a scope this size. During the last opposition, I've seen all of the above features, plus more (Utopia Planitia, Sinus Sabaeus, etc), in as low as 150 mm of aperture, in a scope with great optics, perfectly collimated, during nights with excellent seeing.



Anyway, for Mars you'll have to wait until the next opposition, in May 2016.



Later this year, in December, Jupiter will start rising in the East, and you could use your scope to watch it - an aperture like this is enough to see the 4 big moons and at least 2 equatorial belts. It will be high in the sky at a comfortable time in the evening early next year.



Until then, you can always observe the Moon, two weeks out of every four.



Also, the planets and the Moon are not the only things accessible with this aperture. Most of the Messier objects are visible in a 100 ... 150 mm scope, even in suburban areas. M13 is spectacular at any aperture above 100 mm. The Great Orion Nebula is awesome even with binoculars. The Pleiades are great too. Most of these deep space objects require low magnification for the best view, but every case is a bit different.



Plenty of double stars out there, too: Mizar, Albireo, even Polaris. All visible in small apertures.




I am going to buy one of them. Which one is worth more for the money
with the price difference?




The instruments are about the same. The bigger one has a bit more resolving power and a bit larger collecting area, so in theory it should be slightly better.



In practice, with mass-produced instruments like these, it usually depends on the build quality, which can vary.



The smaller instrument is an f/8. The longer focal ratio means fewer aberrations; it also makes it easier for cheap eyepieces to perform well, whereas at f/6 ... f/5 a cheap eyepiece may start to exhibit aberrations of its own (independent of the telescope's aberrations).



Also, an f/8 is easier to collimate than an f/6.



Overall, I would look at it as a matter of price. If you can easily afford the bigger one, get it. Otherwise, the smaller instrument might be a bit easier to maintain, is less demanding in terms of optics, and it should perform pretty close to the other one - all else being equal.



But since you're focused on planetary observations, remember this:



It is far more important to learn to correctly collimate your telescope, and develop it into a routine whereby you do a quick collimation check every time before you observe - it only takes a couple minutes. For planetary observations, the smaller telescope, in perfect collimation, will perform far better than the larger one, uncollimated. Heck, the little scope, perfectly collimated, will perform better on planets than a MUCH larger telescope, uncollimated - that's how important collimation is.



Improper collimation, or lack thereof, is one of the major factors for lackluster performance for a majority of amateur telescopes (along with poor quality optics - but there's nothing you can do about that, whereas collimation can be improved).



Search this forum, or just google, for the term "collimation", and read the numerous documents you'll find. Or start here:



http://www.cloudynights.com/documents/primer.pdf



Or here:



How can I collimate a dobsonian telescope with a laser collimator?



The owner's manual should also provide some recommendations regarding collimation (I hope).

Thursday 27 August 2009

mathematics education - Why does undergraduate discrete math require calculus?

In the context of college students, I agree with Alexander Woo's explanation. By the way, the best and the brightest often place out of calculus (that's the case at Yale, and I imagine it's not that much different at Berkeley), so the percentages of weak students at best schools aren't as dire as you might think.



Concerning the last question,



"Why isn't discrete mathematics offered to high school students without calculus background?"



Not only is that possible, but it was the norm in the past within the "New Math" curriculum, when everyone had to learn about sets and functions in high school. This ended in a PR disaster and a huge backlash against mathematics: generations of students were lost and turned off by mathematics for life; some of them later became politicians who decide on our funding. Consequently, it was abandoned. (Apparently, calculus in HS was introduced as part of the same package and survived.)



I'd be interested to know whether there are any high school – college partnerships that offer discrete mathematics to H.S. students with strong analytical skills, and how they handle the prerequisites question.

gt.geometric topology - An algebraic proof of Mumford's smoothness criterion for surfaces?

(Disclaimer: I'm a beginner in this area, so welcome corrections.)



Let $(X,x)$ be a germ of a complex surface (i.e. locally the zero set of some holomorphic functions) and assume that $x$ is an isolated singular point. Mumford proved that if the local fundamental group of $X$ at $x$ is trivial, then in fact $x$ is smooth.



All the critters in the above paragraph have algebraic analogues, and the conversion was carried out (I believe) by Flenner: Let $A$ be a two-dimensional complete local normal domain containing an algebraically closed field of characteristic zero; if the étale fundamental group of [EDIT: the punctured spectrum of] $A$ is trivial, then $A$ is regular.



However, Flenner's proof is essentially by reduction to Mumford's theorem [as far as I, a non-German-speaker, can tell], rather than a new algebraic (or algebro-geometric) proof. So:




Does there exist a purely algebraic or algebro-geometric proof of Mumford's theorem?





Motivations include: (1) Mumford's proof is completely opaque to me; (2) No, I mean really really opaque; (3) I'm curious about extensions of the theorem to non-isolated singularities [which should probably be another question].

plasma physics - On analogies between gas and stellar systems

The analogy is rather weak and not really useful.



So-called collisionless stellar systems (those for which relaxation by stellar encounters has no appreciable effect over their lifetime), such as galaxies, can be described by the collisionless Boltzmann equation, but never settle into thermodynamic equilibrium (only into some dynamical or virial equilibrium). Thus, the only other systems with somewhat similar behaviour are collisionless plasmas.



Sound, turbulence, viscosity etc. are all effected by close-range collisions (not mere encounters) between the molecules. These also maintain thermodynamic equilibrium and a Maxwell-Boltzmann velocity distribution. Stellar systems have none of these processes, and their velocities are in general anisotropically distributed and don't follow a Maxwell distribution.



Gases are in some sense simpler to understand, because their dynamics is driven by local processes and because statistical methods are very useful. Stellar systems are driven by gravity, i.e. long-range non-local processes, and intuition from the physics of gases is often very misleading (for example, a self-gravitating system has negative heat capacity -- this also applies to gas spheres, such as stars).



Note also that the number of particles in a gas is much, much larger ($\sim 10^{26}$) than the number of stars in a galaxy ($\sim 10^{11}$), though the number of dark-matter particles may be much higher.

hodge theory - Semistable filtered vector spaces, a Tannakian category.

Let $k$ be a field (char = 0, perhaps). Let $(V,F)$ be a pair, where $V$ is a finite-dimensional $k$-vector space, and $F$ is a filtration of $V$, indexed by rational numbers, satisfying:



  1. $F^i V \supset F^j V$ when $i < j$.

  2. $F^i V = V$ for $i \ll 0$. $F^i V = \{ 0 \}$ for $i \gg 0$.

  3. $F^i V = \bigcap_{j < i} F^j V$.

We define:
$$F^{i+} V = \bigcup_{j > i} F^j V.$$



The slope of $(V,F)$ (when $V \neq \{ 0 \}$) is the rational number:
$$M(V,F) = \frac{1}{\dim(V)} \sum_{i \in \mathbb{Q}} i \cdot \dim(F^i V / F^{i+} V).$$
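
For concreteness, the slope is easy to compute from the dimensions of the graded pieces $F^i V / F^{i+} V$; a minimal Python sketch (names are mine):

    from fractions import Fraction

    def slope(graded_dims):
        """M(V, F) from a dict mapping each jump index i to
        dim(F^i V / F^{i+} V)."""
        dim_V = sum(graded_dims.values())
        return sum(Fraction(i) * d for i, d in graded_dims.items()) / dim_V

    # Example: dim V = 3, jumps at i = 0 (dimension 2) and i = 1 (dimension 1)
    print(slope({0: 2, 1: 1}))   # 1/3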



The pair $(V,F)$ is called semistable if $M(W, F_W) \leq M(V, F)$ for every nonzero subspace $W \subset V$, with the subspace filtration $F_W$.



A paper of Faltings and Wustholz constructs an additive category with tensor products, whose objects are semistable pairs $(V,F)$. A paper of Fujimori, "On Systems of Linear Inequalities", Bull. Soc. Math. France, seems to imply that the full subcategory of slope-zero objects (together with the zero object) is Tannakian (the abelian category axioms require semistability), with fibre functor to the category of $k$-vector spaces (though Fujimori considers quite a bit more).



Does anyone know another good reference for the properties of this Tannakian category? Can you describe the associated affine group scheme over $k$? I'm particularly interested in the cases where $k$ is a finite field or a local field.



Update: I think the slope-zero requirement is too strong (though it is assumed in Fujimori). It seems to exclude almost all the semistable pairs $(V,F)$, if my linear algebra is correct. Anyone want to explain this to me too?

Wednesday 26 August 2009

surface - How easy is it to mine water on Ceres?

It has been suggested by some futurist or sci-fi leaning thinkers that Ceres' surface might be mined for water to support human exploration and settlement of space. But NASA's Dawn mission and other observations show that the surface is very dark and seems to consist of "hydrated minerals" in an environment warm enough to make water ice sublimate. Does this mean that mining water on Ceres is really hard, requiring deep drilling or excavation through thick, hard surface layers before useful water ice is encountered?



Could a lander on Ceres just heat up the surface and collect water sublimating from it?

Tuesday 25 August 2009

solar system - Hypothetical beyond Neptune far away planets orbiting the Sun

The recent possible discovery of Planet Nine by Batygin & Brown (2016) has caused quite a stir in the astronomy community and the rest of the world. This is, of course, in part because any mention of such a discovery will cause a stir, but it is also in part because of the claimed probability of the movements of the Trans-Neptunian Objects (TNOs) being purely by chance: 0.007%, or 1 in 14,000, which corresponds to a probability of ~3.8$\sigma$.
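
As a quick check of that conversion (my own arithmetic, not from the paper), the quoted chance probability can be turned into a significance via the one-sided Gaussian tail:

    from scipy.stats import norm

    p = 1 / 14000                      # claimed chance probability (~0.007%)
    print(f"{norm.isf(p):.2f} sigma")  # one-sided tail: ~3.80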



That said, the actual probability that this planet exists is a bit lower - 90%, according to Brown, and 68.3%, according to Greg Laughlin, an astronomer at UC Santa Cruz who knew of the results prior to the paper's publication. The difference between these odds and the odds mentioned in the paper comes from the possibility that other contributing factors could have caused the movements of the TNOs - and also from the fact that these informal numbers are more like guesses than estimates.



The answers to Why hasn't the "9th Planet" been detected already? have already given a variety of methods that won't work in the case of Planet Nine, and why they haven't worked. Besides the obvious possibility that Planet Nine just doesn't exist, here are some of them:



  • It might be confused with a background star, blending in with the Milky Way.[1]

  • It's far away, and dim - perhaps a 22nd magnitude object.[1], [2]

  • Most methods of detecting exoplanets won't work, including[3]
    • Radial velocity.

    • Transit.

    • Gravitational microlensing.

    • Direct imaging.


  • Prior surveys either don't look deep enough, look deep but in other areas of the sky, or cover wavelengths in which this planet doesn't show up.[4] This is another problem with the use of WISE, detailed in MBR's answer here. Other problems are that the planet is probably not massive enough for WISE to see it, and is extremely far away. It may also be cold and have a low albedo, making direct detection hard.


I bring all this up because, surprisingly, the case of Planet Nine is very relevant to the detection of these other planets that you mentioned in your question.



HD 106906 b



There are several similarities here between this planet and Planet Nine.



  • HD 106906 b has an orbit with a semi-major axis of at least 650 AU. By comparison, Planet Nine is thought to have a semi-major axis of about 700 AU, according to the estimates in the paper. That said, other orbital properties are unknown for HD 106906 b and may differ.

  • HD 106906 b has a mass of about 11 Jupiter masses, while Planet Nine is estimated at about 10 Earth masses, so the mass similarity is much weaker than the orbital one.

One difference is that HD 106906 b may have formed where it is at the moment. Scattering, like Planet Nine experienced, appears to be unlikely. This means that the two could have different compositions - although I would think that both might be ice giants or the remains thereof, given where they formed.



In short, HD 106906 b may be broadly similar to Planet Nine, and while we do not know much about either object, it seems safe to say that the detection methods and problems would be similar.



Fomalhaut b



Fomalhaut b is a bit different. Its semi-major axis is ~177 AU - much smaller than these other two planets - and may be anywhere from 10 Earth masses to several hundred Earth masses.



Interestingly enough, Fomalhaut b was also indirectly detected, just like Planet Nine. A gap was found in Fomalhaut's dust disk, which could only have been caused by a massive planet. Later, it was directly imaged.



Direct detection might be possible if Fomalhaut b were in the Solar System, especially given its greater size and mass. Additionally, it would have an enormous impact on TNOs. However, it would be a gas giant, not an ice giant, so it would be difficult to explain how it moved out so far.




I discussed those planets to take a slight detour and talk about what might (and might not) be beyond Neptune. I'll be more explicit here.



What could be beyond Neptune



  • An ice giant, like Planet Nine, or the remains of one. It would have formed closer in, near Jupiter and Saturn, in a mean-motion resonance with Saturn; other giant-planet resonances would also have existed. Then an instability broke the resonances, bringing the ice giant inwards towards Saturn and then Jupiter, where it was flung outwards. This changed the orbits of the other four gas giants. I talked about this in another answer of mine, which I seem to be continuously referencing.



    That said, the initial answer was incorrect, as I have since noted, and was revised. Batygin's estimates found that Planet Nine would have been ejected many, many millions of years before the resonances were supposedly broken (according to evidence from the Late Heavy Bombardment). However, this does not mean that there could not have been a sixth giant planet. The simulations of Nesvorný & Morbidelli (2012), which explored the evolution of a Solar System with four, five, and six giant planets, found some good results with five and six giant planets.


  • A super-Earth or mini-Neptune.1 These are planets that are, at the most, 10 Earth masses. Super-Earths would be terrestrial planets with thick atmospheres; mini-Neptunes would be gas planets. Note that these names do not say anything else about their compositions - for example, super-Earths are not necessarily habitable.

  • Something else. The region has not yet been explored well, and there could be some other unforeseen objects. This kind of object would likely be cold, as WISE (see next section) might have detected it otherwise.

What cannot be beyond Neptune




So, assuming that there is something beyond Neptune, how can we find it? Well, now that its existence has been hypothesized and we know where it should be, astronomers can turn telescopes towards that location and use direct imaging.



There's been some excitement because the Subaru Telescope will be used. Other optical (and possibly infrared and other wavelengths) telescopes will most likely be used as well. Answers to What wavelength to best detect the "9th planet"? (especially Rob Jeffries' answer) indicate that optical and infrared/near-infrared wavelengths will be the best choice. As of today (1/24/2016), we can only speculate on what other instruments will be used, and what their chances are of finding this planet - if it exists.



Who knows? Maybe something else unexpected will turn up.




1 Now we get into the different things Planet Nine could be. My answer to What type of planetary-mass object would Planet Nine be? covers this, but there are better answers to Ninth planet - what else could it be? than mine.
2 Be careful of the difference between an ice giant and a gas giant.

group cohomology - Why is the standard definition of cocycle the one that _always_ comes up??

Late to the party as usual, but: the goal of this answer is to convince you that the standard convention for $2$-cocycles is so natural that you should consider it perverse to consider any other convention, modulo "applying a canonical involution to everything," as you say. To keep things simple let's only deal with trivial action on coefficients. The motivating question is the following:




What does it mean for a group $G$ to act on a category $C$?




For starters we should attach to each element of $G$ a functor $F(g) : C \to C$. Next we could require that $F(g) \circ F(h) = F(gh)$, but we should really weaken equalities of functors to natural isomorphisms whenever possible. Hence we should attach to each pair of elements of $G$ a natural isomorphism



$$\eta(g, h) : F(g) \circ F(h) \to F(gh).$$



This is the point at which we pick a convention for how we're going to represent $2$-cocycles. Instead of talking about $\eta(g, h)$ we could talk about its inverse; which we choose corresponds to whether we prefer to talk about lax monoidal or oplax monoidal functors, since what we're going to end up writing down is a lax monoidal resp. an oplax monoidal functor from $G$ (regarded as a discrete monoidal category) to $\text{Aut}(C)$ (regarded as a monoidal category under composition).



In any case, let's stick to the above choice (the lax one). Then the isomorphisms $\eta(g, h)$ should satisfy some coherence conditions, the important one being the "associativity" condition that the two obvious ways of going from $F(g_1) \circ F(g_2) \circ F(g_3)$ to $F(g_1 g_2 g_3)$ should agree.



Now let's assume that in addition all of the functors $F(g)$ are the identity functor $\text{id}_C : C \to C$. Then the only remaining data in a group action is a collection of natural automorphisms



$$\eta(g, h) : \text{id}_C \to \text{id}_C$$



of the identity functor. For any category $C$, the natural automorphisms of the identity functor naturally form an abelian (by the Eckmann-Hilton argument) group which here I'll call its center $Z(C)$ (but this notation is also used for the commutative monoid of natural endomorphisms of the identity). So we get a function



$$\eta : G \times G \to Z(C).$$



The important coherence condition I mentioned above now reduces (again by the Eckmann-Hilton argument) to the condition that for any $g_1, g_2, g_3 \in G$ we have



$$\eta(g_1, g_2) \, \eta(g_1 g_2, g_3) = \eta(g_2, g_3) \, \eta(g_1, g_2 g_3)$$



which is precisely the standard cocycle condition. (Coboundaries come in when you ask what it means for two group actions to be equivalent; I'm going to ignore this.)
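
As a tiny numerical aside (my own, not part of the argument above): one can check mechanically that coboundaries satisfy this condition, here for $G = \mathbb{Z}/5$ with trivial action on coefficients $\mathbb{Z}/5$:

    from itertools import product
    import random

    n = 5                                   # G = Z/n under addition
    mul = lambda a, b: (a + b) % n

    # A 2-coboundary with coefficients in Z/n (written additively):
    # eta(g, h) = e(g) + e(h) - e(gh), for an arbitrary function e.
    e = [random.randrange(n) for _ in range(n)]
    eta = lambda g, h: (e[g] + e[h] - e[mul(g, h)]) % n

    # Verify the standard cocycle condition for all triples:
    # eta(g1, g2) + eta(g1 g2, g3) == eta(g2, g3) + eta(g1, g2 g3)
    for g1, g2, g3 in product(range(n), repeat=3):
        assert (eta(g1, g2) + eta(mul(g1, g2), g3)) % n == \
               (eta(g2, g3) + eta(g1, mul(g2, g3))) % n
    print("cocycle condition holds")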



The only reason this condition, which recall is in general just the statement that the two obvious ways of going from $F(g_1) \circ F(g_2) \circ F(g_3)$ to $F(g_1 g_2 g_3)$ should agree, could ever have looked anything other than completely natural is that it's a degenerate special case where the sources and targets of the various maps involved have been obscured because they are identical. In particular, of course I could have instead chosen to think about the natural isomorphisms



$$\eta(g, g^{-1} h) : F(g) \circ F(g^{-1} h) \to F(h)$$



(which corresponds to your $f(1, g, h)$), but now



  • it's no longer at all obvious how to state the associativity condition succinctly, and

  • this requires that I make explicit use of the fact that $G$ is a group.

The discussion up til now in fact gives a perfectly reasonable definition for what it means for a monoid to act on a category. (If I want to weaken "natural isomorphism" to "natural transformation," though, I get two genuinely different possibilities depending on whether I pick lax or oplax monoidal functors.)



Reflecting on associativity suggests that, for a more "unbiased" point of view, we should consider families of natural isomorphisms



$$\eta(g_1, g_2, \dots g_n) : F(g_1) \circ F(g_2) \circ \dots \circ F(g_n) \to F(g_1 g_2 \dots g_n)$$



and then impose a "generalized associativity" condition that every way of composing them to get a natural isomorphism with the same source and target as $\eta(g_1, g_2, \dots g_n)$ should give $\eta(g_1, g_2, \dots g_n)$. Another way to say this is that the cocycle condition (in the $F(g) = \text{id}_C$ special case, at least) should really be written



$$\eta(g_1, g_2, g_3) = \eta(g_1, g_2) \, \eta(g_1 g_2, g_3) = \eta(g_2, g_3) \, \eta(g_1, g_2 g_3).$$



This is in the same way that we can consider a monoid operation to be a family $m(g_1, g_2, \dots g_n) = g_1 g_2 \dots g_n$ of operations satisfying a generalized associativity condition, and in particular satisfying



$$m(g_1, g_2, g_3) = m(m(g_1, g_2), g_3) = m(g_1, m(g_2, g_3)).$$



Namely, by "associativity" we usually mean that the middle expression equals the right, but really the reason that the middle expression equals the right is that they both equal the left.

Monday 24 August 2009

Why is Mars considered the outer edge of the "goldilocks zone"?

We could err by being 'chauvinists', as Carl Sagan would have said, because we are reasoning taking into account the biology and chemistry of life on one planet only: a sample of 1...



Before 1995, we thought that alien solar systems would be similar to ours, with small rocky planets closer to the star and giant planets further out.
Almost nobody thought about circumbinary planets, planets with periods under one day, scorched Jupiters ridiculously close to their stars, ultra-compact systems with four or more planets crowded within a few tenths of an Astronomical Unit, free-floating cloudy dwarfs with precipitation of melted iron and hot sand (brown dwarfs? planets? planemos?), and the list goes on and on.



Enceladus, which is quite small by planetary standards and lies far outside even the most optimistic and inclusive Goldilocks boundary, has a subsurface ocean of water.

Sunday 23 August 2009

lo.logic - A problem of an infinite number of balls and an urn

You are describing what is known as a supertask, or task involving infinitely many steps, and there are numerous interesting examples. In a previous MO answer, for example, I described an entertaining example about the deal with the Devil, which is similar to your example. Let me mention a few additional examples here.



In the article "A beautiful supertask" (Mind, 105(417):81-84, 1996), the author Laraudogoitia considers a situation in Newtonian physics in which there are infinitely many billiard balls, getting progressively smaller, with the $n^{th}$ ball positioned at $\frac{1}{n}$, converging to $0$. Now, set ball $1$ in motion, so that it hits ball $2$ in such a way that all energy is transferred to ball $2$, which hits ball $3$, and so on. All collisions take place in finite time, because of the positions of the balls, and so the motion disappears into the origin; in finite time after the collisions are completed, all the balls are stationary. Thus:



  • Even though each step of the physical system is energy-conserving,
    the system as a whole is not energy-conserving in time.

The general conclusion is that one cannot expect to prove the principle of conservation of energy throughout time without completeness assumptions about the nature of time, space and spacetime.



A similar example has the balls spaced out to infinity, and this time the collisions are arranged so that the balls move faster and faster out to infinity (using Newtonian physics), completing their progressively rapid interactions in finite total time. In this case, once again, a physical system that is energy-preserving at each step does not seem to be energy-preserving throughout time, and the energy seems to have leaked away out to infinity. The interesting thing about this example is that one can imagine running it in reverse, in effect gaining energy from infinity, where the balls suddenly start moving towards us from infinity, without any apparent violation of energy-conservation in any one interaction.



Another example uses relativistic physics. Suppose that you want to solve an existential number-theoretic question, of the form $\exists n \, \varphi(n)$. In general, such statements are verified by a single numerical example, and there is in principle no way of getting a yes-no answer to such questions in finite time. The thing to do is to get into a rocket ship and fly around the earth, while your graduate student---and her graduate students, and so on in perpetuity---search for an example, with the agreement that if an example is ever found, then a signal will be sent up to your rocket. Meanwhile, you should accelerate unboundedly close to the speed of light, in such a way that, because of relativistic time dilation, the eternity on earth corresponds to only a finite time on the rocket. In this way, one will know the answer in finite time. With rockets flying around rockets, one can in principle learn the answer to any arithmetic statement in finite time. There are, of course, numerous issues with this story, beginning with the fact that unbounded energy is required for the required time foreshortening, but nevertheless Malament-Hogarth spacetimes can be constructed to avoid these issues, and allow a single observer to have access to an infinite time history of another individual.



These examples speak to an intriguing possible argument against the Church-Turing thesis, based on the idea that there may be unrealized computational power arising from the fact that we live in a quantum-mechanical relativistic world.

Saturday 22 August 2009

formal languages - Is my definition of a context algebra new?

In my DPhil thesis, I defined what I called a context algebra as a model of meaning in natural language. The idea is to mathematically formalise the notion that meaning is determined by context. It can also be viewed as the equivalent of the syntactic monoid for fuzzy languages.



Let $L$ be a function from $A^*$ to $\mathbb{R}$, where $A^*$ is the free monoid on a set $A$. For $x \in A^*$, we define the context vector $\hat{x}$ as the function from $A^* \times A^*$ to $\mathbb{R}$ given by
$$\hat{x}(y,z) = L(yxz).$$
It is then easy to show that the vector space generated by the elements $\{ \hat{x} : x \in A^* \}$ is an algebra over the reals with multiplication defined by $\hat{x} \cdot \hat{y} = \widehat{xy}$ (it is just necessary to show that the definition of multiplication does not depend on which elements of $A^*$ are used to form basis elements). The algebra is associative and has unit element $\hat{\epsilon}$, where $\epsilon$ is the empty string.



You can also define a linear functional $\phi$ on the algebra by
$$\phi(f) = \sum_{x,y \in A^*} f(x,y).$$
If $\phi(\hat{\epsilon})$ is finite, then the algebra becomes a non-commutative probability space with the linear functional $\phi'(f) = \phi(f)/\phi(\hat{\epsilon})$.
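
To make this concrete, here is a small Python sketch of these definitions for a toy $L$ with finite support (the weights and names are made up for illustration):

    from itertools import product

    # Toy example: L is zero outside a finite support, which keeps phi finite.
    L = {'ab': 1.0, 'ba': 0.5, 'aba': 0.25}

    # Context strings y, z; length <= 3 suffices for this support.
    strings = [''.join(t) for k in range(4) for t in product('ab', repeat=k)]

    def context_vector(x):
        """hat{x}(y, z) = L(yxz), stored sparsely as a dict on contexts."""
        return {(y, z): L[y + x + z]
                for y in strings for z in strings if y + x + z in L}

    def phi(f):
        """The linear functional phi(f) = sum over contexts of f(y, z)."""
        return sum(f.values())

    # Multiplication is concatenation: hat{a} . hat{b} = hat{ab}.
    print(context_vector('b'))        # {('', 'a'): 0.5, ('a', ''): 1.0, ('a', 'a'): 0.25}
    print(phi(context_vector('b')))   # 1.75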



I have not come across anyone who is aware of previous work along these lines; nevertheless, given the breadth of knowledge here, my first question is



1) Is this a new idea? Is it very similar to any existing work?



As a non-mathematician (but aspiring amateur), my second question is



2) Is this of interest to mathematicians? Or is it just an obscure but fairly trivial example of existing maths?



Finally,



3) What would be required to develop this to a point where it would make an interesting paper for a maths journal? Are there any points for investigation that stand out? Is there any particular journal this might be suited to?



Thanks in advance

Reference material - Astronomy Meta

The following books are ones I've found useful over the years. I should note that many of these books I've used for my classes, and so they contain various levels of mathematics (proofs, etc.). I'll try to rate how mathematically rigorous each is (1 = not super rigorous, 2 = fairly rigorous, and 3 = most rigorous).



For Astrophysics:



1) Astrophysics in a Nutshell -> 2, but not too many pictures



2) An Introduction to Modern Astrophysics -> 2, has some pictures intermingled, but not what you're describing.



For Cosmology:



1) Introduction to Cosmology -> 2, has some good pictures in it, but interspersed.



2) Principles of Physical Cosmology -> 3, not too many.



3) Cosmological Physics -> 3, not too many.



For Observational Astronomy:



1) An Introduction to Astronomical Photometry Using CCDs -> 1 (This is a free pdf, actually), not really great in the way of pictures (they're largely hand drawn or scanned in).



2) Observational Astronomy -> 2, some pictures, but not many are full page, and not in color.



For Galactic Dynamics:



1) Galactic Dynamics -> 2/3, this is the one which has a nice little full page picture section. The only problem is that it's a bit advanced and is really only about galactic astrophysics, not really about much else in astronomy/astrophysics.




Books that have amazing full page photos of astrophysical objects are:



1) Far Out, by Michael Benson



2) Space Places by Roger Ressmeyer



3) Hubble - National Geographic




I apologize that this does not really answer the question. I'm unaware of a book that has both mathematical rigor and full page, high quality images of the things it talks about. I think it would be an amazing idea to come up with one though!

ap.analysis of pdes - Solutions to a Monge-Ampère equation on the simplex

Let $\Delta_k$ be the $k$-simplex and $\mu$ a non-negative measure over $\Delta_k$. I want to know if there exists a function $u : \Delta_k \to \mathbb{R}$ such that $u$ is convex, $u(e_i) = 0$ for all vertices $e_i$ of $\Delta_k$, and $M[u] = \mu$, where
$M[u] = \det\left(\frac{\partial^2 u}{\partial x_i \partial x_j}\right)$ is the Monge-Ampère operator. Furthermore, I'd like to know if the solution is unique. Any techniques for how one might solve a specific instance of this problem would be a bonus.



My background is not in PDEs, but the closest I've found to an answer seems to be in [1] and [2], where the boundary conditions are more restrictive and the domain is required to be strictly convex for uniqueness.



[1] "On the fundamental solution for the real Monge-Ampère operator", Blocki and Thorbiörnson, Math. Scand. 83, 1998



[2] "The Dirichlet problem for the multidimensional Monge-Ampère equation", Rauch and Taylor, Rocky Mountain Journal of Mathematics, 7(2), 1977.



Any other pointers to solving this type of problem would be greatly appreciated.

Friday 21 August 2009

career - Is there any disadvantage from non-academic job turn to academic job in math

It's indeed possible [I spent many years in industry building math software, and now I'm a tenured prof, albeit in computer science and software engineering department rather than math; although my research involves building mechanized mathematics systems...]. I started my academic career having previously 'published' 0 academic papers! I was, however, already well-known within the computer algebra community, and my work was known [so I was able to get many good academic reference letters]. The reason for me to report this is that it is important to be able to convince the academic community that you really have something to contribute, else why would they hire you? So, if you intend to move back to academia, either write papers or make sure that somehow the community 'knows' you and appreciates your work.



From my experience, I would say that the hardest part is to go from having well-defined goals with precise deadlines, often driven by external pressures, to writing research papers with no deadline. Getting up to speed on producing papers while on the 'tenure clock' was most unpleasant. And, of course, at the beginning, teaching courses can (and likely will) swallow all available time, unless you're in an enlightened department (I wasn't) where untenured faculty are given a lighter teaching load to allow them time to settle into their research career.



If at all possible, get a post-doc in between a non-academic job and a tenure-track position. This will give you the time needed to 'switch gears'. I probably would not have done that myself (the salary cut was already substantial enough as it is, I didn't want to make it even worse). It depends on your personal situation.

Thursday 20 August 2009

Can things move faster than light inside the event horizon of a black hole?

I think your initial question is a good one, but the text gets a bit more jumbled and covers a few different points.




Can things move faster than light inside the event horizon of a black
hole?




Nice question.




Black Holes are regions of space where things get weird.




I'm 100% OK with this statement. I think it's a true enough summary and I'm sure I've heard physicists say this too. Even if "Weird" isn't a clearly defined scientific term, I'm 100% fine with this (even without a citation).




Past the event horizon of a black hole, any moving particle
instantaneously experiences a gravitational acceleration towards the
black hole that will cancel out it's current velocity, even light.
That means that the gravity well of the black hole must be able to
accelerate from -C* to 0 instantly
✝.




Are you quoting somebody here? Anyway, this isn't quite true. Black holes don't accelerate things from -C (which I'm guessing would be a light beam trying to fly away from the singularity but inside the event horizon) to 0 "instantaneously".



Perhaps a better way to look at it is to consider the curvature of space: inside a black hole, space curves so much that all directions point to the singularity. It's the "all roads lead to Rome" scenario: even if you do a complete 180, you're still on a road that leads to the singularity.



I understand the temptation to look at that as deceleration, but I think that's a bad way to think about it. Light doesn't decelerate, it follows the curvature of space.




Given that fact, we can assume the gravitational acceleration of black
holes is C/instant**. Given this, it stands to reason that in
successive instants, the particle will be moving at speeds greater
than C, because it is experiencing greater gravitational forces and
continuous gravitational acceleration.
Does this actually make sense? Is there something I'm missing here? By
this logic, it seems like anything inside of the event horizon of a
black hole could and should move faster than C due to gravitational
acceleration.




Outside of a black hole, continuous acceleration would never lead to a speed greater than C. You can accelerate for billions and trillions of years, and all you'd do is just add more 9s to the right of the decimal point.



You seem to be assuming that inside a black hole this can happen, but I'm not sure why you'd assume that.



"continuous gravitational acceleration" - no matter how strong, is no guarantee for faster than light travel. That's logically inconsistent with the laws of relativity.




Edit: I showed this question to a friend and he questioned if the
hypothetical particles that were radiating from the singularity (The
photon traveling exactly away from the black hole) might be hawking
radiation; that is, the gravitation acceleration of a black hole is
only strong enough to curve the path of light around a non-zero radius
(thus not actually stopping it, but altering it's course), and not
powerful enough to decelerate light. Is this actually what hawking
radiation is, or is he as confused as I am?




I think a more correct way to look at Hawking radiation is to see it as something that forms just outside of the black hole: a particle/antiparticle pair appears, one escapes, and the other falls inside. That's probably not 100% correct either, but the singularity itself doesn't send out particles. Hawking radiation has to do with quantum properties of space; it's not a property of black holes. The black hole just happens to be unique in that it can capture one half of a virtual particle pair while the other half escapes.



This also is a pretty different topic than your original question.




*Where movement towards the singularity would be considered a positive value, movement away from the singularity is a negative value, that
is, anything moving at the speed of light away from the singularity
would be moving with a velocity of -C relative to the singularity.
✝If it couldn't accelerate from -C to 0 instantly, any photon
traveling exactly away from the black hole would be able to escape the
event horizon.



**An instant is an arbitrary amount of time, it could be a fraction of a second, a second, a minute....




I think it's a good idea to differentiate massless pure-energy particles from particles with mass. You seem to be saying that a ray of light can be traveling away from a black hole at the speed of light, get caught in the gravity, slow down, and then fall back into the black hole like a ball that's tossed straight up into the air from the surface of the Earth. That's probably not what happens. The ray of light follows the path of spacetime ahead of it, which happens to be curved so much that it points into the black hole, even if, in the classical sense, the light begins by pointing away. All space curves into the singularity once you're inside the event horizon, so there is no "away from" anymore. At least, that's how I think it works.

co.combinatorics - Tetris in 3D with 5 units

Background: There are 7 "bricks" used in the game of Tetris. These are the 7 unique combinations of 4 unit squares in which every square shares at least one edge with another square. ("Unique" in this case means that no brick can be rotated in 2-D space to become another brick.)



Question: Using 5 unit cubes, how many unique "bricks" could be formed in which each cube shares at least one face with another cube? (Please provide a proof to this in your answer if you can find one.)
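
One concrete approach is brute-force enumeration: grow shapes one cube at a time and canonicalize each under the 24 proper rotations of the cube. A Python sketch of this idea (my own, offered as a starting point rather than a proof):

    from itertools import permutations, product

    EVEN = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}

    # The 24 proper rotations of the cube: signed coordinate permutations
    # with determinant +1.
    ROTS = [(p, s) for p in permutations(range(3))
            for s in product((1, -1), repeat=3)
            if s[0] * s[1] * s[2] * (1 if p in EVEN else -1) == 1]

    def canonical(cells):
        """Lexicographically smallest rotated + translated copy of a shape."""
        best = None
        for p, s in ROTS:
            pts = [tuple(s[i] * c[p[i]] for i in range(3)) for c in cells]
            mins = [min(pt[i] for pt in pts) for i in range(3)]
            form = tuple(sorted(tuple(pt[i] - mins[i] for i in range(3))
                                for pt in pts))
            best = form if best is None or form < best else best
        return best

    def grow(shapes):
        """All (n+1)-cube shapes reachable from a set of n-cube shapes."""
        out = set()
        deltas = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        for cells in shapes:
            for (x, y, z) in cells:
                for dx, dy, dz in deltas:
                    c = (x + dx, y + dy, z + dz)
                    if c not in cells:
                        out.add(canonical(cells | {c}))
        return {frozenset(f) for f in out}

    shapes = {frozenset([(0, 0, 0)])}
    for n in range(2, 6):
        shapes = grow(shapes)
        print(n, len(shapes))

Run as-is, this should report 1, 2, 8 and 29 shapes for 2, 3, 4 and 5 cubes, in line with OEIS A000162 (polycubes counted up to rotation only, matching the Tetris convention in which the mirror pieces S and Z are distinct).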

Wednesday 19 August 2009

terminology - Time period in which a planet rotates

Since "sidereal" etymologically refers to the stars (Latin sidus), one might expect a sidereal day to correspond to the rotation of the Earth with respect to the distant "fixed" stars. But this does not seem to be the case, and because the difference is not significant, the two definitions are sometimes used interchangeably.



I don't know of an explicit IAU definition of the sidereal day. However, since sidereal time is defined as the hour angle of the vernal equinox (which is a local definition, although Greenwich is conventional), defining the (mean) sidereal day in reference to the equinox is practically the only sensible choice.



Effective 1985, UT1 is computed using very long baseline interferometry of distant quasars, and so can be taken to be authoritative regarding the "fixed stars". Coordinated Universal Time (UTC) approximates UT1 using atomic clocks. Anyway, UT1 gives the rotational period of the Earth as $p = 86164.09890369732\,\mathrm{s}$ of UT1 time, while the mean sidereal day (in 2000) would be $86164.090530833\,\mathrm{s}$ of UT1 time, following the explanation:




The length of one sidereal day is defined by two successive transits of the mean equinox; while the Earth is rotating eastward, the mean equinox is moving westward due to precession. Therefore, one sidereal day is shorter than the Earth's rotational period by about $0.008\,\mathrm{s}$, the amount of precession in right ascension in one day.




See: Aoki, S., et al., The new definition of universal time, Astron. Astrophys. 105, 359-361 (1982).
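
A trivial arithmetic check of the quoted $0.008\,\mathrm{s}$ difference, using the two numbers above (my own arithmetic):

    rotation_period   = 86164.09890369732   # Earth's rotation, in s of UT1
    mean_sidereal_day = 86164.090530833     # mean sidereal day (2000), in s of UT1

    # Daily precession in right ascension, expressed as a time difference:
    print(f"{rotation_period - mean_sidereal_day:.4f} s")   # ~0.0084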

Tuesday 18 August 2009

the sun - Motion of the sun as observed from mercury

Before about 1966, Mercury was thought to be tide-locked, almost half always sunlit and another near-half always dark – as most moons, including ours, are tide-locked to their primaries, and for the same reason. The difference in the strength of the primary's gravity between the inner and outer ‘poles’ creates a force tending to pull those points away from the center of the satellite, along the line joining it to the primary. If the satellite is ellipsoidal rather than spherical, the tide will tend to align the ellipsoid's long axis to the primary.



But Mercury's orbit is so eccentric that the strength of the solar tide varies by a ratio of 4:7 (if I've computed correctly). The rotation rate nearly matches the revolution rate at perihelion, when tide is strongest and Mercury is moving fastest; if the match were perfect (if the orbital eccentricity were a bit less), the sun's apparent path would have cusps rather than little loops. Presumably the imperfection is because the tidal effect does not vanish away from perihelion.



The loops have nothing to do with axial tilt; Envite was probably thinking of the analemma.

computer science - Are there any pairing functions computable in constant time (AC⁰)

The pairing function $f(a,b) = (a + b)(a + b + 1)/2 + a$ is the one that arises by drawing diagonals on the natural number lattice and marching down them from upper left to lower right. See the picture here.



To compute $f(a,b)$, one needs to perform some additions and a multiplication, which seems to be quadratic time in the lengths of $a$ and $b$, that is, in $\log(a) + \log(b)$, which would seem to be constant time in $\max(a,b)$, but I'm not sure if this would be what you meant.
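
For concreteness, here is a short Python sketch of this pairing and its inverse (function names are mine):

    import math

    def cantor_pair(a, b):
        """f(a, b) = (a + b)(a + b + 1)/2 + a: enumerate by diagonals."""
        return (a + b) * (a + b + 1) // 2 + a

    def cantor_unpair(n):
        """Invert f by first recovering the diagonal w = a + b."""
        w = (math.isqrt(8 * n + 1) - 1) // 2   # largest w with w(w+1)/2 <= n
        a = n - w * (w + 1) // 2
        return a, w - a

    assert all(cantor_unpair(cantor_pair(a, b)) == (a, b)
               for a in range(100) for b in range(100))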



In that book, it is noted that Pólya proved that any surjective quadratic polynomial pairing function is equal to this function or to its dual form $f(b,a)$. (And someone gave a talk here at CUNY a few weeks ago on precisely this fact.) So if this function is not acceptable to you, then you will find no surjective quadratic polynomial alternative.



But here is another function, which seems to be a little faster to compute. Suppose that $a$ and $b$ are given to me in their binary representations. Now, I just interleave their binary digits, using $0$s if the digits of one of them run out. This is surely a pairing function, and I can compute it in time linear in the lengths of the inputs.
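
A sketch of the interleaving pairing and its inverse (again, names are mine):

    def interleave_pair(a, b):
        """Interleave binary digits: a's bits on even positions, b's on odd."""
        out, pos = 0, 0
        while a or b:
            out |= (a & 1) << (2 * pos)
            out |= (b & 1) << (2 * pos + 1)
            a, b, pos = a >> 1, b >> 1, pos + 1
        return out

    def interleave_unpair(n):
        """Un-interleave: read off the even and odd binary positions."""
        a = b = pos = 0
        while n:
            a |= (n & 1) << pos
            b |= ((n >> 1) & 1) << pos
            n, pos = n >> 2, pos + 1
        return a, b

    assert all(interleave_unpair(interleave_pair(a, b)) == (a, b)
               for a in range(64) for b in range(64))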

Monday 17 August 2009

ct.category theory - "synthetic" reasoning applied to algebraic geometry



A hyperlinked and more detailed version of this question is at

nLab:synthetic differential geometry applied to algebraic geometry.

Repliers are kindly encouraged to copy-and-paste relevant bits of their reply here into that wiki page.



The axioms of synthetic differential geometry are intended to pin down the minimum abstract nonsense necessary for talking about the differential aspect of differential geometry using concrete objects that model infinitesimal spaces.



But the typical models for the axioms – the typical smooth toposes – are constructed in close analogy to the general mechanism of algebraic geometry: well-adapted models for smooth toposes use sheaves on $C^\infty \mathrm{Ring}^{op}$ (the opposite category of smooth algebras), whereas spaces in algebraic geometry (such as schemes) use sheaves on $\mathrm{CRing}^{op}$.



In fact, the topos of presheaves on $k\text{-}\mathrm{Alg}^{op}$, for instance, which one may think of as a context in which much of algebraic geometry over a field $k$ takes place, also happens to satisfy the axioms of a smooth topos (see the examples there).



This raises some questions.



Questions:



To which degree do results in algebraic geometry depend on the choice of the site $\mathrm{CRing}^{op}$ or similar?



To which degree are these results valid in the much wider context of an arbitrary smooth topos, or a smooth topos satisfying certain extra assumptions?



In the general context of structured $(\infty,1)$-toposes and generalized schemes: how much of the usual lore depends on the choice of the (simplicial) ring-theoretic Zariski or étale (pre)geometry (for structured $(\infty,1)$-toposes), and how much works more generally?



More concretely:



To which degree can the notion of quasicoherent sheaf be generalized from a context modeled on the site $\mathrm{CRing}$ to a more general context? What is, for instance, a quasicoherent sheaf on a derived smooth manifold, if anything? What on a general generalized scheme, if at all?



Closely related to that: David Ben-Zvi et al. have developed a beautiful theory of integral transforms on derived ∞-stacks.



But in their construction it is always assumed that the underlying site is the (derived) algebraic one, something like simplicial rings.



How much of their construction actually depends on that assumption? How much of this work carries over to other choices of geometries?



For instance, when replacing the category of rings / affine schemes in this setup with that of smooth algebras / smooth loci, how much of the theory can be carried over?



It seems that the crucial and maybe only point where they use the concrete form of their underlying site is the definition of quasicoherent sheaf on a derived stack there, which uses essentially verbatim the usual assignment $QC(-) : \mathrm{Spec}(A) \mapsto A\mathrm{Mod}$.



What is that more generally? What is $A\mathrm{Mod}$ for $A$ a smooth algebra? (In fact I have an idea for that which I will describe on the wiki page in a moment. But I would still be interested in hearing opinions.)



Maybe there is a more intrinsic way to say what quasicoherent sheaves on an ∞-stack are, such that it makes sense on more general generalized schemes.

cosmology - Why is the E-mode polarization spectrum out of phase with the Temperature spectrum?

The E-mode polarization power spectrum of the Cosmic Microwave Background displays the same acoustic peaks that can be seen in the (more famed) temperature power spectrum. However, they are out of phase with respect to each other. A number of sources give answers that all come down to the following:




The EE power spectrum has peaks that are out of phase with those in the TT spectrum, because the polarization anisotropies are sourced by the fluid velocity.




But I fail to understand this. Can someone elaborate on this?

light - how to assess a periodical signal?

You have it the wrong way around. You have to establish a periodicity before doing the folding.



There are various ways to do this, but the most frequently (pun intended) used is the Lomb-Scargle periodogram.



Start with a dataset consisting of a set of points $(t_i, m_i \pm \sigma_i)$, where the $t_i$ are times of observation and the $m_i$ and $\sigma_i$ are the fluxes and their uncertainties at those times.



You then calculate an array of chi-squared statistics, by fitting a mean level $\bar{m}$ and sinusoidal functions over a range of discrete frequencies $\omega$. The aim is to maximise the statistic
$$p(\omega) = \frac{\chi^2_{0} - \chi^2(\omega)}{\chi^2_{0}},$$
where
$$\chi^{2}_{0} = \sum_i (m_i - \bar{m})^2/\sigma_i^{2}$$ and
$$\chi^{2}(\omega) = \sum_i (m_i - y_i)^2/\sigma_i^{2},$$ with $y_i = A\sin(\omega t_i + \phi) + A_0$, where $A$, $A_0$ and $\phi$ are free parameters that are optimised for each frequency (i.e. you find the minimum possible $\chi^2(\omega)$ at each frequency).
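Here is a minimal numpy sketch of this statistic; it assumes one fits $y_i$ by weighted linear least squares in the basis $(\sin\omega t, \cos\omega t, 1)$ at each trial frequency (names are mine, for illustration):

```python
import numpy as np

def ls_power(t, m, sigma, omegas):
    """p(omega) = (chi2_0 - chi2(omega)) / chi2_0 at each trial frequency.
    t, m, sigma, omegas: 1-D numpy arrays."""
    w = 1.0 / sigma**2
    mbar = np.sum(w * m) / np.sum(w)           # weighted mean level
    chi2_0 = np.sum(w * (m - mbar)**2)
    p = np.empty(len(omegas))
    for j, omega in enumerate(omegas):
        # y = a sin(wt) + b cos(wt) + c is linear in (a, b, c), so the
        # best-fit chi-squared comes from a weighted least-squares solve.
        X = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None], m * np.sqrt(w), rcond=None)
        chi2 = np.sum(w * (m - X @ coef)**2)
        p[j] = (chi2_0 - chi2) / chi2_0
    return p
```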



Once a period is obtained from a peak in the power spectrum, with statistic value $p_{best}$, it must be established whether this peak is "real", i.e. significant.



Theoretically, the probability that $p>p_{best}$ is $(1-p_{best})^{(N-3)/2}$ for $N$ independent measurements (see Cumming, Marcy & Butler 1999). This is the "false alarm probability" - the chance that you would get $p>p_{best}$ from random, aperiodic data with those uncertainties.



However, real data has correlated noise and the points are not independent. So what you do instead (see Collier Cameron et al. 2009) is shuffle all the times of the data points (i.e. randomly reallocate the times of each observation) and go through the process again. Do this a squillion times with different random reallocations and see how many times you end up with $p>p_{best}$. This gives you some idea about how confident you can be that your detected periodic signal is real. This is called a "bootstrap Monte-Carlo method".
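A sketch of that shuffling procedure, reusing the `ls_power` helper above (an illustration of the idea, not the exact recipe of the cited papers):

```python
import numpy as np

rng = np.random.default_rng()

def false_alarm_fraction(t, m, sigma, omegas, n_trials=1000):
    """Fraction of time-shuffled datasets whose best periodogram statistic
    beats the best statistic of the real data."""
    p_best = ls_power(t, m, sigma, omegas).max()
    exceed = sum(ls_power(rng.permutation(t), m, sigma, omegas).max() > p_best
                 for _ in range(n_trials))
    return exceed / n_trials
```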



There are other ways to do this, and my answer is not complete, but this is the approach I'm familiar with.

star - Is metallicity low at the central region or nucleus of the Milky Way?

The stars in the Galactic bulge are predominantly metal-rich (by which I mean they have a metallicity similar to the Sun's, or even a little higher).



Even though these stars are predominantly old, the bulge is thought to have formed extremely quickly and the interstellar medium from which the stars were formed would have been enriched with metals very quickly.



Here is a plot from Zoccali et al. (2009) (which I recommend reading). It shows the metallicity distribution of many stars in the Galactic bulge, measured using high-resolution spectroscopy. It shows that the highest-metallicity stars are towards the middle (it is very hard to get samples right in the middle because of extinction) and that the average metallicity falls as you move further from the centre (the samples with more negative Galactic latitude $b$). The percentages in the plots are the estimated contamination from the disk population in each sample.



Metallicity of the Galactic bulge from Zoccali et al. (2009).



The dependence of metallicity on height and radial coordinate in the Galactic disk is still keenly debated. There is general consensus that the metallicity falls with radial distance from the Galactic centre and with height above the Galactic plane. The gradient is of order a few hundredths of a dex per kpc between a few kpc and 10-12 kpc from the Galactic centre. It may flatten beyond this. You could have a look at the discussion in Cheng et al. (2012), but there are many other attempts at parameterising the gradient and it is a technically difficult thing to do.

Sunday 16 August 2009

co.combinatorics - Gamma function versions of combinatorial identities?

Chapter 5.5 of Concrete Mathematics discusses generalizing binomial coefficient identities to the Gamma function. It doesn't discuss the two integrals you mention, though.



Doing a bit of thinking on my own, if $n$ is a positive integer then
$$\int_{z=0}^n \binom{n}{z}\, dz = \int_{z=0}^n \frac{n!\, dz}{\Gamma(1+z)\, \Gamma(n+1-z)}$$
$$= \int_{z=0}^{n} \frac{n!\, dz}{(n-z)(n-1-z) \cdots (1-z)\, \Gamma(1-z)\, \Gamma(1+z)}.$$



We have $\Gamma(1+z)\, \Gamma(1-z) = \pi z/\sin(\pi z)$, if I haven't made any dumb errors, so this is
$$\int_{0}^n \frac{n!\, \sin(\pi z)\, dz}{\pi z\, (n-z)(n-1-z) \cdots (1-z)}.$$



I suspect this integrand does not have an elementary anti-derivative, because it reminds me of $\int \sin t \, dt/t$. But there might be some special trick which would let you compute the integral between these specific bounds.
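Even without a closed form, the integral is easy to evaluate numerically; a quick scipy sketch of $\int_0^n \binom{n}{z}\,dz$:

```python
from scipy.integrate import quad
from scipy.special import gamma

def binom_integral(n: int) -> float:
    """Integrate n! / (Gamma(1+z) Gamma(n+1-z)) over z in [0, n]."""
    integrand = lambda z: gamma(n + 1) / (gamma(1 + z) * gamma(n + 1 - z))
    value, _err = quad(integrand, 0, n)
    return value

for n in range(1, 6):
    print(n, binom_integral(n))
```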

Saturday 15 August 2009

ag.algebraic geometry - Is every flat unramified cover of quasi-projective curves profinite?

(More editing for cleanliness)



The statement is false. I learned of this example from "James" at this blog post. If you take a nodal cubic curve (which is quasiprojective), there is a flat, unramified cover by an infinite connected chain of copies of $\mathbb{P}^1$, each glued transversely to its successor at a point. This is not profinite. If I'm not mistaken, the etale fundamental group of the nodal cubic over a separably closed field (with a chosen basepoint) is $\mathbb{Z}$, not its profinite completion.



Edit: Regarding the correct definition of the etale fundamental group: In SGA1 Exp 5, Grothendieck (and Mme. Raynaud?) build up axiomatics for the theory of the fundamental group using only profinite sets, and the group is defined following one peculiar claim. At the beginning of Exp 5 Section 7, there is the assertion that for any connected locally noetherian scheme $S$ and any geometric point $a: \ast \to S$, the functor that takes an etale cover $X \to S$ to the set of geometric points over $a$ (with the usual morphisms) lands in the category of finite sets. The example I gave above seems to contradict this, but if you look in Exp 1, you find that all of SGA1 is written under a definition of etale morphisms that assumes they are of finite type (which this example is not). Anyway, one reason why Pete Clark only sees profinite definitions of the etale fundamental group is that people like to use finite type morphisms, while etale morphisms only have to be locally of finite presentation (according to EGA4, and Wikipedia I guess).



As for the question of infinite degree etale covering maps between locally finite type geometrically integral schemes, I don't think one exists, since (if I'm not mistaken) you automatically get an infinite degree algebraic extension of function fields, which is therefore infinitely generated. I'm having trouble thinking through the details of this, though.

orbit - Derivation of the formula for longitude of ascending node for a satellite

I've been looking into the document IS-GPS-200H to understand how to calculate satellite location in the ECEF coordinate.



I am having trouble understanding the formula used to derive $\Omega$, the longitude of the ascending node (LAN) relative to Greenwich at a given time $t$:



$$
\Omega = \Omega_0 + \left( \dot{\Omega} - w \right) \times t_k - w \times t_{oe}
$$



where:
$$
\begin{aligned}
\Omega_0 &: \text{LAN relative to the vernal equinox, at the beginning of the week}\\
\dot{\Omega} &: \text{angular velocity of the LAN, relative to the vernal equinox}\\
w &: \text{angular velocity of the Earth, relative to the vernal equinox}\\
t_k &: t - t_{oe}\\
t_{oe} &: \text{ephemeris reference epoch}
\end{aligned}
$$
(and let us denote the beginning of the week as $t_0$ for brevity).
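For reference, evaluating this formula is plain arithmetic; here is a minimal sketch (the WGS 84 Earth rotation rate is a standard constant, but the ephemeris numbers below are made-up placeholders):

```python
import math

W_EARTH = 7.2921151467e-5   # WGS 84 Earth rotation rate, rad/s

def lan_relative_to_greenwich(omega_0, omega_dot, t, t_oe):
    """Omega = Omega_0 + (Omega_dot - w) * t_k - w * t_oe, as quoted above."""
    t_k = t - t_oe
    omega = omega_0 + (omega_dot - W_EARTH) * t_k - W_EARTH * t_oe
    return math.remainder(omega, 2 * math.pi)   # wrap into [-pi, pi]

# Placeholder ephemeris values, purely for illustration:
print(lan_relative_to_greenwich(omega_0=1.0, omega_dot=-8.0e-9,
                                t=252000.0, t_oe=244800.0))
```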



But if I try to work out this from scratch:



  1. At $t = t_0$, LAN was $\Omega_0$. But since what we really need is the difference between the LAN and the longitude of Greenwich, we also need to know $w_0$, the initial longitude of Greenwich at $t = t_0$:
    $$
    \Omega(t = t_0) = \Omega_0 - w_0
    $$

  2. At the ephemeris reference epoch $t = t_{oe}$, the LAN and the Earth have each rotated with their respective angular velocities, hence:
    $$
    \Omega(t = t_{oe}) = \Omega_0 + w_0 + (\dot{\Omega} - w) \times t_{oe}
    $$

  3. As time varies from $t_{oe}$ to $t$, the LAN and the Earth again rotate with their respective angular velocities, hence
    $$
    \Omega(t) = \Omega_0 + w_0 + (\dot{\Omega} - w) \times t_{oe} + (\dot{\Omega} - w) \times t_k,
    $$
    which obviously differs from the right formula by $w_0 + \dot{\Omega} \times t_{oe}$.

My question is: where am I making a mistake or misunderstanding the equation?
An explanation of why we don't need to know $w_0$ (or an equivalent input) would also be greatly appreciated.

real analysis - How can I measure the Morse index in infinite dimensions?

It seems that one simply can't measure it. Briefly described below is an example of a nondegenerate indefinite inner product space having no cardinal-valued Morse index.



Consider $(\ell^{2}, \langle\cdot,\cdot\rangle)$ as naturally embedded (via Riesz) into its (huge) algebraic dual, say $\mathcal{A}$, let $\mathcal{F}$ be the real vector space of all finitely supported functions from $\mathbb{R}$ to $\ell^{2}$, and put $V := \mathcal{A} \times \mathcal{F}$. Next, write $\mathcal{A}$ as a direct sum $\mathcal{A} = \ell^2 \oplus E$ (hence $\dim E = 2^{c}$), let $\pi: \mathcal{A} \to E$ be the attached algebraic projection, and let $[\cdot,\cdot]$ be a scalar product on $E \times E$. If $u = (\varphi, f)$ and $v = (\psi, g)$ are in $V$, then define the bilinear symmetric pairing



$$a(u,v) := -[\pi\varphi, \pi\psi] + \varphi\Big(\sum_{t\in\mathbb{R}} g(t)\Big) + \psi\Big(\sum_{t\in\mathbb{R}} f(t)\Big) + \Big\langle \sum_{t\in\mathbb{R}} f(t),\ \sum_{t\in\mathbb{R}} g(t) \Big\rangle - \sum_{t\in\mathbb{R}} \big\langle f(t), g(t)\big\rangle.$$



Define also the subspace $W$ of $V$ by $W := \{(\varphi, f) : \varphi = -\sum_{t\in\mathbb{R}} f(t)\}$.
Then it is not hard to see that:



1) $W$ is negative definite (w.r.t. $a$), and $W^{\bot} = \{0\}$, hence $a$ is non-degenerate.



2) [Using the C-B-S inequality and Riesz] Any maximal negative definite subspace $\mathcal{N}$ of $V$ containing $W$ is a linear subspace of $\ell^{2} \times \mathcal{F}$, hence $\dim \mathcal{N} = c$.



3) Any maximal negative definite subspace $\mathcal{M}$ of $V$ containing $E \times \{0\}$ has $\dim \mathcal{M} > c$.



Consequently, $\mathcal{M}$ and $\mathcal{N}$ are not isomorphic
as real vector spaces.

Friday 14 August 2009

the sun - Cooling of stars

In our current universe, white dwarfs are the first that should cool, because they are already "cold" remnants of former stars: they no longer produce energy, they just radiate away their residual heat. The time for this to happen is disputed ($10^{15}$ or $10^{37}$ years), but it is far greater than the age of the universe, so nobody expects to find a "cooled star" yet. See this article for details on "black dwarfs", as they are called.



In your sudden-stop hypothetical universe, the first to cool would probably be the smallest and coolest stars, such as an M9V red dwarf (7.5% of the solar mass, 8% of the solar radius, 2,300 K surface temperature). Note that brown dwarfs are categorized as substellar objects, so they shouldn't be considered. I don't think the poorly understood "Mpemba effect" can yet be applied to stars.

nt.number theory - Forms over finite fields and Chevalley's theorem

Perhaps it is obvious to most readers, but about a year ago I spent several days trying to determine for which pairs $(d,n)$ there exists an anisotropic degree $d$ form in $n$ variables over a finite field $\mathbb{F}_q$. The question was motivated by Exercise 10.16 in Ireland and Rosen's classic number theory text: "Show by explicit calculation that every cubic form in two variables over $\mathbb{F}_2$ has a nontrivial zero."



As many students have discovered over the years, this is false: e.g. take



$f(x_1,x_2) = x_1^3 + x_1^2 x_2 + x_2^3$.
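(Checking this by brute force over $\mathbb{F}_2$ takes a couple of lines; a quick sketch:)

```python
# f(x1, x2) = x1^3 + x1^2 x2 + x2^3 over F_2: enumerate all inputs.
def f(x1: int, x2: int) -> int:
    return (x1**3 + x1**2 * x2 + x2**3) % 2

zeros = [(x1, x2) for x1 in (0, 1) for x2 in (0, 1)
         if (x1, x2) != (0, 0) and f(x1, x2) == 0]
print(zeros)   # [] -- no nontrivial zero, so the form is anisotropic
```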



I knew about the existence and anisotropy of norm hypersurfaces for all $n = d$. But what about $n < d$? I confess that I spent some time proving this result in several special cases and even dragged a postdoc into it. Here is a copy of the sheepish email I sent out (in particular to Michael Rosen) later on:




If K is a field, and f(x_1,...,x_n) is an anisotropic form of degree d in n variables, then f(x_1,...,x_{n-1},0) is an anisotropic
form of degree d in n-1 variables.



So let K be any field which admits field extensions of every positive degree d. Then for all d there is an anisotropic norm form
N in d variables of degree d. For any n < d, setting (d-n) of the variables equal to 0 gives an anisotropic form of degree d in
n variables. In particular, this proves "the converse of Chevalley-Warning".



So, not so fascinating after all, then.



I think it is still nontrivial to ask what happens if the hypersurface $f$ is required to be geometrically irreducible. For instance, despite the fact that anisotropic ternary cubic forms exist over $\mathbb{F}_q$, every geometrically irreducible cubic curve over a finite field has a rational point.




AS's question about classifying anisotropic hypersurfaces with $d = n$ is interesting. It may also be interesting to look at the case $d < n$. It is certainly not clear to me that all such anisotropic hypersurfaces come from intersecting a norm hypersurface of larger dimension with a linear subspace.



I also want to add that the following generalization seemed less trivial to me (and I still don't know the answer): Chevalley-Warning is also true for systems of polynomial equations $f_1(x_1,\ldots,x_n) = \ldots = f_r(x_1,\ldots,x_n) = 0$ so long as the sum of the degrees of the $f_i$ is strictly less than $n$. What kind of counterexamples can we construct here when $d = d_1 + \ldots + d_r \geq n$?

Wednesday 12 August 2009

gt.geometric topology - Does a triangulation without fixed simplex property always exist?

EDITED. The argument related to Mostow rigidity has been completed according to a nice suggestion of Tom Church.



The answer to the first question is no. There exist manifolds of dimension 3 such that every simplicial map of the manifold to itself (for any simplicial decomposition) has a fixed point (and hence a fixed simplex). At the same time, every 3-manifold admits a smooth self-map without fixed points.



Namely, take $M^3$ with vanishing first and second homology ($H_1(M^3,\mathbb{R})=H_2(M^3,\mathbb{R})=0$) that is a hyperbolic 3-manifold. Moreover, take such an $M^3$ that has no isometries. The existence of such manifolds is a standard result of 3-dimensional hyperbolic geometry. Let us prove that every such manifold gives us an example.



Proof. All compact 3-manifolds have zero Euler characteristic, so on $M^3$ there is a non-vanishing vector field $v$. Take the flow $F_t$ generated by $v$ for small time $t$. This gives us a family of diffeomorphisms $F_t$ of $M^3$ that have no fixed points for small $t$. So $M^3$ does not have FPP.



Now, let us show that $M^3$ has FSP. Take any simplicial decomposition of $M^3$. First we state a simple lemma (without a proof).



Lemma. Consider a simplicial decomposition of a compact orientable manifold. Suppose we have a simplicial map from it to itself that sends simplices of highest dimension to simplices of highest dimension (i.e. doesn't collapse them) and doesn't identify them. Then this is an automorphism of finite order.



Corollary. Every non-identical simplicial map $\phi$ from $M^3$ to itself either collapses a simplex of dimension 3 or identifies two such simplices. In particular, the generator of $H^3(M^3,\mathbb{Z})$ is sent to zero by this map.



This corollary together with the Lefschetz fixed point theorem immediately implies that $\phi$ has a fixed point, which proves FSP for $M^3$ (we use that $H_1(M^3)=H_2(M^3)=0$).



Proof of corollary. If $\phi$ does not collapse 3-simplices of $M^3$ or identify them, then it is a homeomorphism of $M^3$ of finite order (by the Lemma above). From Mostow rigidity it follows that this automorphism is homotopic to the identity. In order to show that it IS in fact the identity we need a more involved statement suggested by Tom Church below. Namely, a partial case of Proposition 1.1 in http://www.math.uchicago.edu/~farb/papers/hidden.pdf
says that for a hyperbolic 3-manifold the group of isometries of any Riemannian metric on it is isomorphic to a subgroup of the group of hyperbolic isometries. By our choice, the group of hyperbolic isometries of $M^3$ is trivial. It is clear that $\phi$ preserves a Riemannian metric on $M^3$. So by Prop 1.1 it is the identity.



From this it immediately follows that $\phi$ sends $H^3(M^3, \mathbb{Z})$ to zero (since the volume is contracted). End of proof.

Tuesday 11 August 2009

Complex orientations on homotopy

I am wondering if there is a more "geometric" formulation of complex orientations for cohomology theories than just a computation of $E^*\mathbb{CP}^{\infty}$ or a statement about Thom classes. It seems that later in Hopkins's notes he says that the complex orientations of $E$ are in one-to-one correspondence with multiplicative maps $MU \to E$; is there a treatment that starts with this perspective? How do the complex orientations of a spectrum $E$ help one compute the homotopy of $E$, or the $E$-(co)homology of $MU$? Further, what other kinds of orientations could we think about; are there interesting $ko$ or $KO$ orientations? How much of these $E$-orientations of $X$ is detected by the $E$-cohomology of $X$?



I do have some of the key references already in my library, for example the notes of Hopkins from '99, Rezk's 512 notes, Ravenel, and Lurie's recent course notes. If there are other references that would be great. I am secretly hoping to get some insight from some of the experts. (I guess I should really also go through Tyler's abelian varieties paper)



(sorry for the on and off texing but the preview is giving me weird feedback.)



EDIT: I eventually found the type of answer I was looking for in some notes of Mark Behrens from a course he taught. The answer is that a ring spectrum $R$ is complex orientable if there is a map of ring spectra $MU \to R$. This also appears in COCTALOS by Hopkins, but neither source takes this as the more fundamental concept. Anyway, the answer below is more interesting geometrically.

ac.commutative algebra - reduced ⊗ reduced = reduced; what about connected?

Several questions actually.



All rings and algebras are supposed to be commutative and with $1$ here.



(1) Let $k$ be a field, and let $A$ and $B$ be two $k$-algebras. I need a proof that if $A$ and $B$ are reduced (i.e., the only nilpotent element is $0$) and $\mathrm{char}\, k = 0$, then $A \otimes_k B$ is reduced as well.



The condition $\mathrm{char}\, k = 0$ can be replaced by "$k$ is perfect", but I already know a proof for the $\mathrm{char}\, k > 0$ case (the main idea is that every nilpotent $x$ satisfies $x^{p^n} = 0$ for some $n$, where $p = \mathrm{char}\, k$), so I am only interested in the $\mathrm{char}\, k = 0$ case.



Please don't use too much algebraic geometry - what I am looking for is a constructive proof, and while most ZFC proofs can be made constructive using Coquand's dynamic techniques, the more complicated and geometric the proof, the more work this will mean.



BTW, the reason why I am so sure the above holds is that an algebraist I spoke with told me that he has a proof using minimal prime ideals, but I haven't seen him since.



Ah, and I know that this is proven in Milne's Algebraic Geometry for the case $k$ algebraically closed.



(2) What if $k$ is not a field anymore, but a ring with certain properties? $\mathbb{Z}$, for instance? Can we still say something? (Probably only to be thought about once (1) is solved.)



(3) Now assume that $k$ is algebraically closed. Can we replace "reduced" by "connected" (which means that the only idempotents are $0$ and $1$, or, equivalently, that the spectrum of the ring is connected)? In fact, this even seems easier due to the geometric definition of connectedness, but I don't know the relation between $\mathrm{Spec}(A \otimes_k B)$ and $\mathrm{Spec}\, A$ and $\mathrm{Spec}\, B$. (I do know that $\mathrm{Spm}(A \otimes_k B) = \mathrm{Spm}\, A \times \mathrm{Spm}\, B$, however, but this doesn't help me.)



PS. All algebras are finitely generated if necessary.

Monday 10 August 2009

computational complexity - Use of randomness in constant parallel time

Not sure what you mean by "0-ary random-bool gates", but I think you mean the following: take a circuit $C$ with $n$ real inputs and $\mathrm{poly}(n)$ extra inputs. For each input $x$ of length $n$, the "probabilistic" circuit $C$ is said to output $b$ on $x$ iff, when we attach $x$ to the real inputs and put a uniformly random string on the extra inputs, the probability that $C(x) = b$ is at least $2/3$.



Given that, it is known that $BPAC^0 \subseteq$ non-uniform $AC^0$. This was first proved by Ajtai and Ben-Or in:




Miklós Ajtai, Michael Ben-Or: A Theorem on Probabilistic Constant Depth Computations STOC 1984: 471-474




A very short and sweet paper. However, it only yields non-uniform constant-depth circuits. I believe that the best known uniform simulation of $BPAC^0$ is by quasipolynomial-size constant-depth circuits, due to Klivans:




Adam Klivans: On the Derandomization of Constant Depth Circuits. RANDOM-APPROX 2001: 249-260




Under some very weak hardness assumptions, $BPAC^0 = AC^0$; see:




Emanuele Viola: Hardness vs. Randomness within Alternating Time. IEEE Conference on Computational Complexity 2003




Edit: I should also mention the work of Nisan (1991) "Pseudorandom bits for constant depth circuits" which really made a lot of the later work possible.

Sunday 9 August 2009

ho.history overview - Italian school of algebraic geometry and rigorous proofs

Many of the amazing results by Italian geometers of the second half of the 19th and the first half of the 20th century were initially given heuristic explanations rather than rigorous proofs by their discoverers. Proofs appeared only later. In some cases, an intuitive explanation could be more or less directly translated into modern language. In other cases, essentially new ideas were required (e.g., among others, the classification of algebraic surfaces by Shafarevich's seminar; the construction of the moduli spaces of curves and their projective compactifications by Deligne, Mumford and Knudsen; the solution of the Lüroth problem by Iskovskikh and Manin).



I would like to ask: what are, in your opinion, the most interesting results obtained by pre-1950 Italian geometers which still do not have a rigorous proof?



[This is a community wiki, since there may be several answers, none of which is the "correct" one; however, please include as many things as possible per posting -- this is not intended as a popularity contest.]



[upd: since I'm getting far fewer answers than I had expected (in fact, only one so far), I would like to clarify a couple of things: as mentioned in the comments, I would be equally interested in results which are "slightly false" but are believed to be essentially correct, e.g. a classification with a particular case missing, etc. I'm also interested in natural generalizations that still haven't been proven, such as extending a result to finite characteristic, etc.]

Friday 7 August 2009

lo.logic - Propositional Logic, First-Order Logic, and Higher-Order Logics

I've been reading up a bit on the fundamentals of formal logic, and have accumulated a few questions along the way. I am pretty much a complete beginner to the field, so I would very much appreciate if anyone could clarify some of these points.



  1. A complete (and consistent) propositional logic can be defined in a number of ways, as I understand, which are all equivalent. I have heard it can be defined with one axiom and multiple rules of inference, or multiple axioms and a single rule of inference (e.g. Modus Ponens), or somewhere in between. Are there any advantages/disadvantages to either? Which is more conventional?


  2. Propositional (zeroth-order) logic is simply capable of making and verifying logical statements. First-order (and higher-order) logics can represent proofs (of increasing hierarchical complexity) - true/false, and why?


  3. What exactly is the relationship between an nth-order logic and an (n+1)th-order logic, in general? An explanation in mathematical notation would be desirable here, as long as it's not too advanced.


  4. Any formal logic above (or perhaps including?) first-order is sufficiently powerful to be rendered inconsistent or incomplete by Gödel's Incompleteness Theorem - true/false? What are the advantages/disadvantages of using lower/higher-order formal logics? Is there a lower bound on the order of logic required to prove all known mathematics today, or would you in theory have to use an arbitrarily high-order logic?


  5. What role does type theory play in formal logic? Is it simply a way of describing nth-order logic in a consolidated theory (but orthogonal to formal logic itself), or is it some generalisation of formal logic that explains everything by itself?


Hopefully I've phrased these questions in some vaguely meaningful/understandable way, but apologies if not! If anyone could provide me with some details on the various points without assuming too much prior knowledge of the fields, that would be great. (I am an undergraduate Physics student, with a background largely in mathematical methods and the fundamentals of mathematical analysis, if that helps.)

nt.number theory - Is the given expression, monotonically increasing or decreasing with increasing x?

I'm not sure I should bother answering this question, because it seems like the original poster may not have asked the right question. However, it is a nice exercise in basic asymptotics.




For $x$ sufficiently large, the sum in question is decreasing.



First, note that this sum is equal to
$$\sum_{k \geq 1} \frac{(\log x)^{k-1}}{x\, k!\, \zeta(k+1)}.$$
(See here for a very similar series; we are using the highly nontrivial identity $\sum \mu(i)/i = 0$ to get rid of the "$k=0$" term.)



Substituting $x=e^u$, we want to know whether or not
$$e^{-u} \sum_{k \geq 1} \frac{u^{k-1}}{k!\, \zeta(k+1)}$$
is increasing or decreasing in $u$. One can justify taking term-by-term derivatives, so we want to know whether
$$e^{-u} \left( \sum_{k \geq 2} \frac{(k-1)\, u^{k-2}}{k!\, \zeta(k+1)} - \sum_{k \geq 1} \frac{u^{k-1}}{k!\, \zeta(k+1)} \right)$$
is positive or negative.



Rearranging terms, we are interested in the sign of
$$e^{-u} \sum_{\ell \geq 0} \frac{u^{\ell}}{\ell!} \left( \frac{1}{(\ell+2)\, \zeta(\ell+3)} - \frac{1}{(\ell+1)\, \zeta(\ell+2)} \right).$$



The quantity in parentheses is $-1/(\ell+1)(\ell+2) + O(2^{-\ell})$. So we are interested in the sign of
$$e^{-u} \left( - \sum_{\ell \geq 0} \frac{u^{\ell}}{(\ell+2)!} + \sum_{\ell} O\!\left(\frac{u^{\ell}\, 2^{-\ell}}{\ell!}\right) \right) =$$
$$e^{-u} \left( - \frac{e^u - 1 - u}{u^2} + O(e^{u/2}) \right) =$$
$$-\frac{1}{u^2} + O(e^{-u/2}).$$



This is negative for $u$ sufficiently large.
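A quick numerical check of this conclusion, truncating the series (an illustration using mpmath; the truncation length is adequate for the values of $u$ shown):

```python
import mpmath as mp

def g(u, terms=300):
    """e^{-u} * sum_{k >= 1} u^{k-1} / (k! zeta(k+1)), truncated."""
    s = mp.mpf(0)
    for k in range(1, terms):
        s += mp.mpf(u)**(k - 1) / (mp.factorial(k) * mp.zeta(k + 1))
    return mp.exp(-u) * s

for u in (5, 10, 20, 40):
    print(u, g(u))   # decreasing for large u, as the asymptotics predict
```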

Thursday 6 August 2009

set theory - Equivalent definitions of M-genericity.

I'm trying to learn about forcing, and have heard that there are several equivalent ways to define genericity. For instance, let M be a transitive model of ZFC containing a poset (P, ≤). Suppose G ⊆ P is such that q ∈ G whenever both p ∈ G and q ≥ p. Suppose also that whenever p, q ∈ G, there is r ∈ G such that r ≤ p and r ≤ q.
Then the following are equivalent ways to say that G is generic:



(1) G meets every element of M dense in P. That is, for all D ∈ M, if for all p ∈ P there is q ∈ D such that q ≤ p, then G ∩ D is nonempty.



(2) G is nonempty and meets every element of M dense below some p ∈ G. That is, for all p ∈ G and all B ∈ M, if for each q ≤ p there is r ∈ B such that r ≤ q, then G ∩ B is nonempty.



Proving this equivalence seemed like it would be an easy exercise, but I think I'm missing something. Can someone point me toward a source where I can find a proof? I hope this is an acceptable question; this is my first time posting.



EDIT: Typo and omission fixed.

ra.rings and algebras - Tensor products and two-sided faithful flatness

Here's an example. Let $R = {\mathbb C}[x]$ and let $S = {\mathbb C}\langle x,y\rangle/(xy-yx-1)$, i.e. the first Weyl algebra $A_1$. Then $S$ is free as both a left and right $R$-module, and comes equipped with the natural ($R$-bimodule) inclusion of $R$. On the other hand, if you take $M = {\mathbb C}[x]/(x) = N$, you'll get the zero module for $M\otimes_R S\otimes_R M$. Indeed, any element of $S$, i.e. any differential operator with polynomial coefficients (writing $\partial = \partial/\partial x$ in place of $y$), can be written in the form $\sum_i p_i(x)\, \partial^i$, so any element of $M\otimes_R S = {\mathbb C}[x]/(x) \otimes S$ is represented by an expression $\sum_i c_i \partial^i$ where the $c_i$ are constants; and now an induction on $k$ shows (I believe, my brain is a little fuzzy at this hour) that, for the right $R$-module structure on $S/xS$, one has $\partial^k \cdot x = k\, \partial^{k-1}$. One can conclude that $M\otimes_R S\otimes_R M = 0$, whereas of course $M\otimes_R M\cong M$ in this example...
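The commutation identity behind that induction, $\partial^k \cdot x = x\,\partial^k + k\,\partial^{k-1}$ (so that $\partial^k \cdot x \equiv k\,\partial^{k-1}$ mod $xS$), can be sanity-checked by letting the operators act on a generic function; a small sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# Verify d^k (x*f) = x * d^k f + k * d^(k-1) f for small k, which is the
# operator identity  del^k . x = x del^k + k del^(k-1)  in the Weyl algebra.
for k in range(1, 8):
    lhs = sp.diff(x * f, x, k)
    rhs = x * sp.diff(f, x, k) + k * sp.diff(f, x, k - 1)
    assert sp.simplify(lhs - rhs) == 0
print("identity verified for k = 1..7")
```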

Wednesday 5 August 2009

computational complexity - Best-case Running-time to solve an NP-Complete problem

If P is an NP-complete problem, then define $P_k$ = instances of P in which the instances have been blown up from size $n$ to size $n^k$ by padding them with blanks. Then $P_k$ is also NP-complete, but if P takes time $\exp(p(n))$ to solve, where $p$ is some polynomial, then $P_k$ can be solved in time essentially $\exp(p(n^{1/k}))$ (there's a little more time required to check that the input really does have the right amount of padding, but unless the running time is polynomial this is a negligible fraction of the total time). So there is no "easiest" problem: for every problem you name, this construction gives another easier but still NP-complete problem.



As for non-artificial problems: most hard graph problems, like Hamiltonian circuit, that remain hard when restricted to planar graphs can be solved in time exponential in $\sqrt{n}$ or in $\sqrt{n}\log n$ by dynamic programming using a recursive partition by graph separators.