My guess is that they are more useful in probability than in analysis. Many people have the impression that probability is just analysis on spaces of measure 1. However, this is not exactly true. One way to tell analysts and probabilists apart: ask them whether they care about the independence of their functions.
Suppose that $\mathcal{F}_1,\mathcal{F}_2,\dots,\mathcal{F}_n$ are families of subsets of some space $\Omega$. Suppose further that given any $A_i\in \mathcal{F}_i$ we know that $P(A_1\cap A_2 \cap \dots\cap A_n)=P(A_1)P(A_2)\cdots P(A_n)$. Does it follow that the $\sigma(\mathcal{F}_i)$ are independent? No. But if the $\mathcal{F}_i$ are $\pi$-systems, then the answer is yes.
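A quick numerical check makes the failure vivid. This is my own illustrative counterexample (the sets $A$, $B$, $C$ below are not from the post): on a four-point space with the uniform measure, the product rule holds for every pair drawn from the two families, yet the generated $\sigma$-algebras are not independent, precisely because the second family is not a $\pi$-system.

```python
# Counterexample sketch (my own illustration): Omega = {1,2,3,4} with the
# uniform measure, F1 = {A}, F2 = {B, C}.  The product rule holds for every
# choice A_1 in F1, A_2 in F2, but F2 is not a pi-system: B & C lies in
# sigma(F2) and is not independent of A.

def P(S):
    """Uniform probability of a subset of Omega = {1, 2, 3, 4}."""
    return len(S) / 4

A = {1, 2}   # F1 = {A}
B = {1, 3}   # F2 = {B, C}
C = {1, 4}

# The product rule holds for every pair:
assert P(A & B) == P(A) * P(B)   # 1/4 == 1/2 * 1/2
assert P(A & C) == P(A) * P(C)   # 1/4 == 1/2 * 1/2

# Yet B & C = {1} belongs to sigma(F2) and fails independence with A:
print(P(A & B & C), P(A) * P(B & C))  # 0.25 0.125
```

Closing $\mathcal{F}_2$ under intersections (making it a $\pi$-system) forces $B\cap C$ into the family, where the product rule visibly fails, which is exactly what the lemma exploits.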
When proving the uniqueness of the product measure for $\sigma$-finite measure spaces, one can use the $\pi$-$\lambda$ lemma, though I think there is a way to avoid it (I believe Bartle avoids it, for instance). However, do you know of a text which avoids using the monotone class theorem for Fubini's theorem? This, to me, has a similar feel to the $\pi$-$\lambda$ lemma. Stein and Shakarchi might avoid it, but as I recall their proof was fairly arduous.
Here is a direct consequence of the $\pi$-$\lambda$ lemma when you work on probability spaces:
Let a linear space $H$ of bounded functions contain $1$ and be closed under bounded convergence (if $f_n \in H$ are uniformly bounded and converge pointwise to $f$, then $f \in H$). If $H$ contains a multiplicative family $Q$ (one closed under pointwise products), then it contains all bounded functions measurable with respect to the $\sigma$-algebra generated by $Q$.
Why is this useful? Suppose that I want to check that some property $P$ holds for all bounded, measurable functions. Then I only need to check three things:
- If $P$ holds for $f$ and $g$, then $P$ holds for $f+g$ (and for scalar multiples, since we need a linear space).
- If $P$ holds for each term of a uniformly bounded, pointwise convergent sequence $f_n$, then $P$ holds for $\lim f_n$.
- $P$ holds for characteristic functions of measurable sets.
This theorem completely automates many annoying "bootstrapping from characteristic functions" arguments, e.g. proving Fubini's theorem.
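To see the bootstrap in action, here is my own sketch of the standard argument for the measurability step in Fubini's theorem (stated for finite measures, so every integral below is defined; the $\sigma$-finite case follows by exhaustion):

```latex
% H = bounded measurable functions on X x Y whose partial integrals behave:
H = \Bigl\{\, f : x \mapsto \int_Y f(x,y)\, d\nu(y) \text{ is measurable} \,\Bigr\},
\qquad
Q = \{\, \chi_{A \times B} : A \in \mathcal{A},\ B \in \mathcal{B} \,\}.
% Q is a multiplicative family:
\chi_{A \times B}\,\chi_{A' \times B'} = \chi_{(A \cap A') \times (B \cap B')}.
% The three checks:
% (1) linearity of the integral gives f + g \in H;
% (2) bounded (dominated) convergence gives \lim f_n \in H;
% (3) for rectangles,
\int_Y \chi_{A \times B}(x, y)\, d\nu(y) = \nu(B)\, \chi_A(x),
% which is measurable in x.
% Since \sigma(Q) = \mathcal{A} \otimes \mathcal{B}, the theorem says H
% contains every bounded measurable function on X x Y.
```

The same three checks, applied to $y \mapsto \int_X f(x,y)\,d\mu(x)$ and to the equality of iterated integrals, carry the rest of the proof.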