For the first lecture of his Algorithms and Data Structures class this semester, Michael Mitzenmacher spoke about how to extract "fair" randomness from a biased coin. That is, given a coin that is possibly unfair, how can we use it to simulate a fair coin?
The canonical first solution is attributed to von Neumann -- flip the coin twice. If the outcome is HT, call the result 0; if it's TH, call the result 1; if it's HH or TT, discard the pair and repeat. This works because HT and TH each occur with probability p(1-p), where p is the heads probability, so the two outcomes are equally likely no matter how biased the coin is. It turns out that this scheme can be extended, which isn't surprising since we just threw out quite a bit of information. One starts by saying that if the first two pairs are both discarded but the four flips turn up HHTT or TTHH, we can still call the result 0 or 1, respectively, since those two sequences are also equally likely. For a fair coin this leaves only a one-in-eight chance (the sequences HHHH and TTTT) that we don't get a bit out of the first four flips. Not surprisingly, this can be iterated.
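The basic von Neumann scheme is short enough to sketch in code. Here's a minimal Python version (the function names and the 70/30 example bias are my own choices for illustration, not from the lecture):

```python
import random

def von_neumann_bits(flip, n_bits):
    """Extract n_bits unbiased bits from a possibly biased coin.

    flip: a zero-argument function returning 'H' or 'T'.
    Each pair of flips: HT -> 0, TH -> 1, HH/TT -> discard and flip again.
    HT and TH each occur with probability p*(1-p), so 0 and 1 are
    equally likely regardless of the coin's bias.
    """
    out = []
    while len(out) < n_bits:
        a, b = flip(), flip()
        if a != b:
            out.append(0 if (a, b) == ('H', 'T') else 1)
    return out

# A coin biased 70/30 toward heads still yields roughly half zeros, half ones.
random.seed(0)  # for reproducibility of the demo
biased_coin = lambda: 'H' if random.random() < 0.7 else 'T'
bits = von_neumann_bits(biased_coin, 10000)
```

Note that the scheme's correctness doesn't require knowing p; the bias only affects how many flips are wasted before a bit is produced.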
Furthermore, if we consider the problem of extracting a stream of independent, identically distributed bits, each equal to 0 or 1 with probability one-half, a more complex iteration scheme can actually do maximally well. That is, if the heads probability of the original coin is p, it's possible to extract H(p) = -p log2 p - (1-p) log2 (1-p) bits per coin flip, which is the binary entropy of a single flip of the coin and thus the best one could hope to do.
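To make the gap concrete, here's a small sketch computing the entropy bound and comparing it to the basic von Neumann scheme, which produces one bit per pair with probability 2p(1-p), i.e. a rate of p(1-p) bits per flip (the function name and the p = 0.7 example are mine, chosen for illustration):

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p): entropy of one biased flip."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# For a coin with heads probability 0.7, the optimal iterated scheme
# approaches H(0.7) ~ 0.881 bits per flip, while plain von Neumann
# manages only p*(1-p) = 0.21 bits per flip.
p = 0.7
optimal_rate = binary_entropy(p)
von_neumann_rate = p * (1 - p)
```

The comparison shows why iterating is worth the trouble: for a moderately biased coin, the simple scheme leaves most of the available randomness on the table.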