One obvious but important observation is that, for operators on an $n$-dimensional vector space over a field with $1 < n < \infty$, we have $AB \neq BA$ generically. In other words, consider the commutativity locus $\mathcal{C}_n$ of all pairs of $n \times n$ matrices $A, B$ such that $AB = BA$, viewed as a subset of $\mathbb{A}^{n^2} \times \mathbb{A}^{n^2} \cong \mathbb{A}^{2n^2}$. This is clearly a Zariski closed set -- i.e., defined by the vanishing of polynomial equations. It is also proper: take e.g.
$$A = \left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right] \oplus 0_{n-2} \quad \text{and} \quad B = \left[ \begin{array}{cc} 0 & 1 \\ 0 & 1 \end{array} \right] \oplus 0_{n-2}.$$ Since $\mathbb{A}^{2n^2}$ is an irreducible variety, $\mathcal{C}_n$ therefore has dimension less than $2n^2$. This implies that over a field like $\mathbb{R}$ or $\mathbb{C}$, where such things make sense, $\mathcal{C}_n$ has measure zero, thus giving a precise meaning to the idea that two matrices, taken at random, will not commute.
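For concreteness, here is a small numerical sanity check, a sketch in Python with NumPy; the size $n = 5$ and the random sampling below are purely illustrative choices, not part of the argument. It verifies that the explicit pair above fails to commute and illustrates the "measure zero" point by checking that randomly drawn matrices have a nonzero commutator:

```python
import numpy as np

n = 5  # illustrative; any n > 2 works for the padded example

# The explicit non-commuting pair: a 2x2 block padded by the zero matrix 0_{n-2}.
A = np.zeros((n, n))
B = np.zeros((n, n))
A[:2, :2] = [[1, 1], [0, 1]]
B[:2, :2] = [[0, 1], [0, 1]]

print(np.allclose(A @ B, B @ A))  # False: AB != BA

# Random Gaussian matrices: the commutator AB - BA is (numerically) never zero,
# reflecting that the commuting locus has measure zero.
rng = np.random.default_rng(0)
for _ in range(1000):
    X, Y = rng.standard_normal((2, n, n))
    assert not np.allclose(X @ Y, Y @ X)
```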
One could ask for more information about the subvariety $\mathcal{C}_n$: what is its dimension? Is it irreducible? And so forth. (Surely someone here knows the answers.)
I would guess it is also true that for a Banach space $E$ (over any locally compact, nondiscrete field $k$, say) of dimension $> 1$, the locus $\mathcal{C}_E$ of all commuting pairs of bounded linear operators is meager (in the sense of Baire category) in the space $B(E, E) \times B(E, E)$ of all pairs of bounded linear operators on $E$.
Kevin Buzzard has enunciated a principle that, without further constraints, the optimal answer to a question of the form "What is a necessary and sufficient condition for X to hold?" is simply "X". This seems quite applicable here: I don't think you'll find a necessary and sufficient condition for two linear operators to commute that is nearly as simple and transparent as the beautiful identity $AB = BA$.
Still, you could ask for useful sufficient conditions. That $A$ and $B$ are diagonalizable with the same eigenspaces, as mentioned by Jonas Meyer above, is one. Another is that $A$ and $B$ are both polynomials in the same operator $C$: this shows up, for instance, in the Jordan decomposition.
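To illustrate the second condition, here is a hedged sketch in the same NumPy style as above; the particular operator $C$ and the polynomials chosen are arbitrary. Any two polynomials in the same operator commute, simply because powers of $C$ commute with one another:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((4, 4))  # arbitrary operator on a 4-dimensional space

def poly(coeffs, M):
    """Evaluate sum_k coeffs[k] * M**k for a square matrix M (Horner-free, for clarity)."""
    result = np.zeros_like(M)
    power = np.eye(M.shape[0])
    for c in coeffs:
        result += c * power
        power = power @ M
    return result

A = poly([2.0, -1.0, 3.0], C)      # A = 2I - C + 3C^2
B = poly([0.5, 0.0, 0.0, 1.0], C)  # B = 0.5I + C^3

print(np.allclose(A @ B, B @ A))   # True: polynomials in the same C commute
```

This is exactly the mechanism behind the remark about the Jordan decomposition: the semisimple and nilpotent parts of an operator can be written as polynomials in the operator itself, which is why they commute with it and with each other.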