Not surprisingly, if you have a large collection of such random variables and you add them all up, the resulting distribution tends to be tightly concentrated. Intuitively, you'd think it should be *more* concentrated than the sum of a bunch of independent random variables. I don't know of formal results in that direction, but at the very least it's not *less* concentrated than a sum of independent random variables. (In the extreme case, the sum is constant, so we get concentration on a single point.) This particular result is a Chernoff bound for negatively dependent random variables.
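A quick simulation (my own illustration, not from the talk) makes the intuition concrete. The classic example of negative dependence is sampling without replacement: if one draw comes out large, the remaining draws are pushed smaller. Comparing sums of draws without replacement against sums of independent draws with the same marginal distribution shows the former are more concentrated:

```python
import random
import statistics

# Population: each single draw is uniform over 0..9 in both schemes.
population = list(range(10))
n_draws = 5
trials = 20000
random.seed(0)

# Negatively dependent draws: sampling WITHOUT replacement.
dep_sums = [sum(random.sample(population, n_draws)) for _ in range(trials)]

# Independent draws: sampling WITH replacement (same marginals).
ind_sums = [sum(random.choices(population, k=n_draws)) for _ in range(trials)]

print("variance without replacement:", statistics.variance(dep_sums))
print("variance with replacement:   ", statistics.variance(ind_sums))
```

For drawing n of N items, the without-replacement variance carries an extra finite-population factor (N - n)/(N - 1) < 1, so the first printed variance comes out smaller: the negatively dependent sum is the more concentrated one.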

I don't want to go into the details here. (I'll probably try to make a blog post summarizing the talk, but the usual procedure for these things seems to be that the post should follow the talk, and the talk isn't quite ready yet.) But I just want to give a quote (edited for typos) from the draft monograph I linked to above:

> In more sophisticated forms, the phenomenon of the concentration of measure underlies much of our physical world. As we know now, the world is made up of microscopic particles that are governed by probabilistic laws – those of quantum and statistical physics. The reason that the macroscopic properties determined by these large ensembles of particles nevertheless appear deterministic when viewed on our larger scales is precisely the concentration of measure: the observed possibilities are concentrated into a very narrow range.

In three words? God plays dice.
