I think this question may be slightly deeper than some people are giving it credit for being. I lectured a course in probability to first-year undergraduates at Cambridge recently, and a previous lecturer, who was a genuine probabilist, was very keen to impress on me the importance of talking "correctly" about random variables. It took me a while to understand what he meant, but basically his concern was that the notion of a sample space should be very much in the background. It's tempting to define a random variable as a function on a probability measure space (not that this particular course used measure theory -- but some more elementary substitute for the definition would have been needed), but his view was that this was absolutely not how probabilists think about random variables.
The practical point was not so much to come up with a better formal definition of random variables, but rather to try, whenever possible, to prove results about random variables without referring to the sample space. It's surprising how little you need to mention it (or not surprising if you're a real probabilist). I seem to remember that the one place where I found I really wanted the sample space was when it came to proving linearity of expectation.
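To see why (a sketch in the discrete case, with notation of my own choosing rather than that of the course): once you allow yourself to sum over the sample space, linearity is a one-line computation,

$$\mathbb{E}[X+Y]=\sum_{\omega\in\Omega}\bigl(X(\omega)+Y(\omega)\bigr)\,\mathbb{P}(\{\omega\})=\sum_{\omega\in\Omega}X(\omega)\,\mathbb{P}(\{\omega\})+\sum_{\omega\in\Omega}Y(\omega)\,\mathbb{P}(\{\omega\})=\mathbb{E}[X]+\mathbb{E}[Y],$$

whereas to argue from distributions alone you would first have to get your hands on the distribution of $X+Y$, which depends on the joint distribution of the pair and not just on the distributions of $X$ and $Y$ separately.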
The compromise I reached in that particular course was to define random variables using sample spaces (which makes them seem fairly straightforward objects) but then to tell people to prove as much as they could just with reference to the distribution of the random variable itself. In other words, I gave the "wrong" definition and immediately admitted that it was wrong.
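For a concrete instance of the sort of thing I mean (my illustration here, not an example taken from the course): if $X$ takes values in $\{0,1,2,\dots\}$, then

$$\sum_{n\ge 1}\mathbb{P}(X\ge n)=\sum_{n\ge 1}\sum_{k\ge n}\mathbb{P}(X=k)=\sum_{k\ge 1}k\,\mathbb{P}(X=k)=\mathbb{E}[X],$$

a proof of the tail-sum formula for expectation that never mentions the underlying space, only the distribution of $X$.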
Added very slightly later: I see that I am interpreting the question differently from everyone else. I am not talking about two functions on a measure space being equivalent if they agree outside a set of measure zero -- which is indeed not a very interesting issue. I am talking about two functions on different measure spaces being the same random variable if you can find a nice map between the measure spaces such that one function is (up to a set of measure zero) the obvious transformation of the other. One of the big advantages of not specifying a sample space is when you start talking about several random variables. For instance, if you start by discussing the tossing of a coin, it's sort of clear that your sample space is $\{0,1\}$, but if another coin enters the picture does that mean you have to go back and prove everything for a more complicated function defined on $\{0,1\}^2$? Not if you didn't mention the sample space in the first place but just the random variable.
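To spell the coin example out (a sketch, with notation I am inventing for this answer): take $\Omega_1=\{0,1\}$ with the fair measure $\mathbb{P}_1$ and $X(\omega)=\omega$, and take $\Omega_2=\{0,1\}^2$ with the product measure $\mathbb{P}_2$ and $X'(\omega_1,\omega_2)=\omega_1$. The projection $\pi:\Omega_2\to\Omega_1$ given by $\pi(\omega_1,\omega_2)=\omega_1$ is measure-preserving, in the sense that $\mathbb{P}_2(\pi^{-1}(A))=\mathbb{P}_1(A)$ for every $A\subseteq\Omega_1$, and $X'=X\circ\pi$. On the view I am describing, $X$ and $X'$ are the same random variable: any statement about $X$ that refers only to its distribution transfers to $X'$ automatically, since $\mathbb{P}_1(X\in\cdot)=\mathbb{P}_2(X'\in\cdot)$.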