My guess is that they are more useful in probability than in analysis. Many people have the impression that probability is just analysis on spaces of measure 1. However, this is not exactly true. One way to tell analysts and probabilists apart: ask them if they care about independence of their functions.
Suppose that $\mathcal{F}_1, \mathcal{F}_2, \ldots, \mathcal{F}_n$ are families of subsets of some space $\Omega$. Suppose further that for any $A_i \in \mathcal{F}_i$ we know that $P(A_1 \cap A_2 \cap \cdots \cap A_n) = P(A_1)P(A_2)\cdots P(A_n)$. Does it follow that the $\sigma(\mathcal{F}_i)$ are independent? No. But if the $\mathcal{F}_i$ are $\pi$-systems, then the answer is yes.
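The "No" can be seen already on a four-point space; the following is a standard sort of counterexample (the specific sets are my choice):

```latex
% Let $\Omega = \{1,2,3,4\}$ with the uniform measure, $P(\{k\}) = 1/4$, and set
\[
\mathcal{F}_1 = \{A_1, A_2\}, \quad A_1 = \{1,2\},\ A_2 = \{1,3\},
\qquad
\mathcal{F}_2 = \{B\}, \quad B = \{1,4\}.
\]
% The product condition holds for every choice from the two families:
\[
P(A_1 \cap B) = P(\{1\}) = \tfrac14 = P(A_1)\,P(B),
\qquad
P(A_2 \cap B) = P(\{1\}) = \tfrac14 = P(A_2)\,P(B).
\]
% But $\mathcal{F}_1$ is not a $\pi$-system, since
% $A_1 \cap A_2 = \{1\} \notin \mathcal{F}_1$, and independence fails
% on the generated $\sigma$-algebras:
\[
P\bigl((A_1 \cap A_2) \cap B\bigr) = P(\{1\}) = \tfrac14
\neq \tfrac18 = P(A_1 \cap A_2)\,P(B).
\]
```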
When proving the uniqueness of the product measure for $\sigma$-finite measure spaces, one can use the $\pi$-$\lambda$ lemma, though I think there is a way to avoid it (I believe Bartle avoids it, for instance). However, do you know of a text which avoids using the monotone class theorem for Fubini's theorem? This, to me, has a similar feel to the $\pi$-$\lambda$ lemma. Stein and Shakarchi might avoid it, but as I recall their proof was fairly arduous.
Here is a direct consequence of the $\pi$-$\lambda$ lemma when you work on probability spaces:
Let a linear space $H$ of bounded functions contain the constant function $1$ and be closed under bounded convergence. If $H$ contains a multiplicative family $Q$, then it contains all bounded functions measurable with respect to the $\sigma$-algebra generated by $Q$.
Why is this useful? Suppose that I want to check that some property $P$ holds for all bounded, measurable functions. Then I only need to check three things:
- If $P$ holds for $f$ and $g$, then $P$ holds for $af + bg$ for any scalars $a$ and $b$.
- If $P$ holds for a uniformly bounded sequence $f_n$ converging pointwise, then $P$ holds for $\lim f_n$.
- $P$ holds for characteristic functions of measurable sets.
This theorem completely automates many annoying "bootstrapping from characteristic functions" arguments, e.g. proving Fubini's theorem.
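As a sketch of how the three checks play out for Fubini (stated here for finite measures to keep things simple; the $\sigma$-finite case follows by the usual exhaustion argument, and the setup below is the standard one rather than any particular text's):

```latex
% Let $(X, \mathcal{A}, \mu)$ and $(Y, \mathcal{B}, \nu)$ be finite measure
% spaces, and let $H$ be the set of bounded
% $(\mathcal{A} \otimes \mathcal{B})$-measurable functions $f$ for which
\[
\int_{X \times Y} f \, d(\mu \times \nu)
  = \int_X \Bigl( \int_Y f(x, y) \, d\nu(y) \Bigr) d\mu(x).
\]
% (1) $H$ is a linear space containing $1$, by linearity of the integral.
% (2) $H$ is closed under bounded convergence, by dominated convergence
%     applied to the inner and outer integrals.
% (3) $H$ contains the multiplicative family of rectangle indicators,
\[
Q = \{\, \mathbf{1}_{A \times B} : A \in \mathcal{A},\ B \in \mathcal{B} \,\},
\]
% since for these both sides equal $\mu(A)\,\nu(B)$, and $Q$ is closed
% under products because
% $(A \times B) \cap (A' \times B') = (A \cap A') \times (B \cap B')$.
% Since $\sigma(Q) = \mathcal{A} \otimes \mathcal{B}$, the theorem yields
% the identity for every bounded measurable $f$ on the product.
```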