
18 July 2008

Five miles an hour = 30 cents a gallon?

"Every five miles an hour faster costs you an extra 30 cents a gallon." From yesterday's New York Times, among others. This is often mentioned in reference to bringing back the national 55 mile per hour speed limit.

What does this even mean? I assume it means that it takes, say, seven percent more gasoline per mile to drive 65 mph than to drive 60 mph. (30 cents is around seven percent of the current average gasoline price, $4.10 or so per gallon.) Why not just say that? This also has the advantage that when gas prices change, the fact doesn't become outdated.

Although as many people point out, the lower speed limit is a hard sell, in part because of the value of time. If you're about to drive 65 miles at 65 mph, it'll take you an hour; say you get 20 miles per gallon, so that uses 3.25 gallons of gasoline. Slowing to 60 mph, it takes five minutes longer, but saves seven percent of that gasoline, or 0.23 gallons -- perhaps $1 worth. So if you value an hour at more than $12 (more generally, at more than three gallons of gasoline), you should drive faster! Of course I've committed the twin fallacies of "everything is linear" and a bunch of sloppy arithmetic, and I've ignored that different cars get different gas mileage, but the order of magnitude is right -- and it's clear to me some people value their time at more than this and some at less. And a better analysis would take into account the probability of getting in accidents, speeding tickets, etc. (I'm mostly pointing this out because otherwise some of you will.)
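If you want to check the back-of-the-envelope numbers, here's a minimal Python sketch; the seven percent saving, $4.10 gas, 20 mpg, and 65-mile trip are the assumptions from the paragraph above, not measured data.

```python
# Is slowing from 65 to 60 mph worth it? Rough numbers from the post.
trip_miles = 65
mpg_at_65 = 20.0
fuel_saving_fraction = 0.07   # assumed: 7% less fuel per mile at 60 mph
gas_price = 4.10              # dollars per gallon

gallons_at_65 = trip_miles / mpg_at_65                 # 3.25 gallons
gallons_saved = gallons_at_65 * fuel_saving_fraction   # ~0.23 gallons
dollars_saved = gallons_saved * gas_price              # ~$0.93

extra_hours = trip_miles / 60 - trip_miles / 65        # ~5 minutes

# Value of an hour at which the fuel saving exactly pays for the lost time.
break_even = dollars_saved / extra_hours
print(f"fuel saved: {gallons_saved:.2f} gal (${dollars_saved:.2f})")
print(f"extra time: {extra_hours * 60:.1f} minutes")
print(f"break-even value of an hour: ${break_even:.0f}")   # roughly $11-12
```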

Oh, and on a related note, people will do things for $100 worth of gas that they wouldn't do for $100 worth of money.

18 June 2008

Crop circles and π

Baffling crop circles equal pi.

The main thing here is a picture of a crop circle which encodes the first ten decimal digits of π. The article doesn't explain how, but I will. "Read" from the inside out; notice that the main part of the figure consists of ten concentric arcs joined by short segments. The arcs have angular lengths 3, 1, 4, 1, 5, 9, 2, 6, 5, and 4 tenths of a circle.
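If you want to see the encoding spelled out, here's a two-line sketch; the digits and the tenths-of-a-circle convention are exactly as described above.

```python
# First ten decimal digits of pi (3.141592654, last digit rounded),
# each read as an arc covering that many tenths of a full circle.
digits = [3, 1, 4, 1, 5, 9, 2, 6, 5, 4]
for ring, d in enumerate(digits, start=1):
    print(f"ring {ring}: digit {d} -> {d}/10 of a circle = {36 * d} degrees")
```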

The article says that "This may cause more controversy in the debate whether crop circles are a result of extraterrestrial activity." It seems to me evidence against extraterrestriality; there's no reason why extraterrestrials would use decimal. The usual hypothesis seems to be that if aliens want to send us numbers, they'll do so in binary; this seems reasonable because 2 is really the only "special" numerical base, being the smallest practically usable one. (You can't send arbitrary real numbers in "base 1", and the time to send the integer N scales like N, as opposed to like log N in any base greater than 1.)

And I thought that crop circles had pretty thoroughly been shown to be the work of human pranksters.

Of course, a much simpler way to encode π in a circle is to just have the circle itself.

05 February 2008

Perhaps the last post about delegate allotment

The Democratic primaries all allocate delegates proportionally; a fair proportion of the Republican contests are winner-take-all.

The conventional wisdom seems to be that this means that, "all other things being equal", the Republicans will choose a nominee sooner than the Democrats, because a Republican candidate can build up a large lead in delegates more easily than a Democratic candidate.

But the arithmetic works both ways. A Republican candidate can build up a larger lead in delegates... but a lead in delegates of, say, 10% of the total number of delegates is a lot less safe for a Republican. I suspect if one analyzed it properly -- asking questions about how the probability of a given candidate winning changes during the primary season -- the two systems wouldn't seem all that different.

A system like the Republican one in which some states are winner-take-all and others are proportional, though, just seems too asymmetrical to be stable. Quite a bit probably depends on whether the election is close... so in the end it probably turns into a question where candidates and their people try to argue for one method or another on ideological grounds when they're really just trying to calculate what makes them most likely to win. I suppose you can't blame them for trying.

11 October 2007

Cops work fifty hours a year?

From the Philadelphia City Paper, today's issue: The First Stoned, by Chris Goldstein, is an opinion piece arguing against marijuana prohibition.

He writes:
In Philadelphia alone, more than 7,000 are arrested each year, mostly for simple possession. It takes about three hours to process each arrest, thereby taking police officers off the street for about 21,000 hours annually. That's like having 400 cops working full time every year just busting pot smokers.

21,000 hours is a lot of time, yes. But 21,000 divided by 400 is 52.5. These "full time" cops are only working fifty-two hours a year! (In fact, it wouldn't surprise me if Goldstein got 400 by dividing 21,000 by 52, although I can't see how that would make sense as weeks have nothing to do with the problem.)

The actual "400" there should be about 10, assuming cops work two thousand hours a year.

I support the legalization of marijuana, but I don't support people that justify it with bad math.

22 August 2007

They say nobody reads anymore

John Armstrong is begging me in a comment at Concurring Opinions to comment on this article from the Associated Press, which makes the following claims:

  • One in four adults say they read no books at all in the past year

  • "The typical person claimed to have read four books in the last year -- half read more and half read fewer." -- so although they don't want to use the word "median", they're saying the median number of books read is four.

  • "Excluding those who hadn't read any, the usual number read was seven." Assuming that "usual number" means "median", this is saying that five-eights of people (the quarter who read no books, plus half of the rest) read seven or less books in the last year.


So what do we know about the distribution? One-quarter of people read no books; one-quarter read between one and four; one-eighth read between four and seven; three-eighths read more.


They claim a 3% margin of error, as well, which is standard for polls involving a thousand people (as this one was), but that margin of error only applies to the survey as a whole. The article includes a lot of claims of the form "Xs read more than Ys", but the number of Xs or Ys that were polled is less than a thousand, so the margin of error is greater.

It seems hard to get these numbers, though. The article claims that "In 2004, a National Endowment for the Arts report titled "Reading at Risk" found only 57 percent of American adults had read a book in 2002, a four percentage point drop in a decade. The study faulted television, movies and the Internet." (Emphasis mine.) Taking both claims at face value, the fraction of adults who read at least one book went from 57 percent in 2002 to 75 percent now, meaning roughly 18 percent of American adults were converted from non-readers to readers in the last five years. This seems unlikely.

I keep a list of the books I read; there are approximately one hundred and seventeen books on it. Yes, you read that right. That number's a bit inflated by the fact that about a third of those books were books I was rereading. But it's also a bit deflated because I have a tendency not to count textbooks, research monographs, and so on. (There's a pile of books up to my knee -- mostly library books, which is why they're in a pile, so I remember to return them -- which I read in the last year but aren't on this list.) I've also probably read a dozen or so books while sitting at the bookstore because I was too cheap to buy them, and some book-length online works...

Of course, I'm not typical.

A "book" doesn't seem like the right unit here, though. For one thing, some books are much longer than others. To take the two books on my shelf that I suspect have the most and fewest words, I estimate that Victor Hugo's Les Misérables has about 700,000 words, and Susanna Kaysen's Girl, Interrupted has about 40,000. For another thing, the person who reads a lot of books is going to engage a lot more deeply with some of them than others. There have been books that I've read in two hours and never gone back to; there have been books that I've returned to over and over again and find new insights every time. Most, of course, fall somewhere in between. And what if you don't finish a book? Does it still count?

I think the more useful metric would be "how much time have you spent reading books in the past year?" Because you can't ask people that, I suspect the Right Thing to do is to call people up and say "how much time have you spent reading books in the last week?", doing so at various times of year to control for the fact that reading is probably higher at some times of year than others, and averaging the results. But no one cares that much. (Actually, I suspect people in the publishing industry care very much, but they're not releasing their findings if they have any.) Or perhaps "how many words have you read in the past year?", but counting this seems almost impossible. (It wouldn't surprise me to learn that this number is rising, as a lot of online content is in written form.)

And as "Katie" commenting at Concurring Opinions points out, anyone at the high end of the spectrum probably underestimates. I don't know if this is true. I would have estimated I read "about two books a week" in the last year, for 104 books in the year; when I went and looked at the list, the actual count was 117. This might not matter all that much, because the pollster was asking about the median, and the people who are going to have trouble are the ones that are above the median. But I suspect that this poll is a lot like the recent polls about the number of sexual partners; people feel that they should lie about the number of books they read. I'm not sure in which direction they're likely to lie, though; I suspect it's correlated with education, and also with how many books one thinks one's friends read.

And what's so great about books, anyway? Why do we assume that reading books is automatically better than reading any other source of the written word? I suppose the argument is that a 100,000-word book requires more intellectual effort to read than, say, one hundred 1,000-word newspaper or magazine articles, because there is more interrelation among the ideas. But books come, for the most part, predigested. A lot of the real intellectual work is done in taking those clippings from various sources and making a book out of them. But this isn't a study of how intellectuals read, it's a study of how the person in the street reads. And even "study" is a bit too strong. They called up a thousand people and asked them some desultory questions. The AP did the poll itself. Let's face it, they're just trying to sell newspapers.

01 July 2007

37% "directly affected" by Pennsylvania minimum wage increase?

Pennsylvania's minimum wage was raised from $6.25 to $7.15 effective today.

According to the AFL-CIO, "37 percent of Pennsylvania workers who would benefit directly from a minimum wage increase work full time". The point that the AFL-CIO is trying to make here is that it's not just high-school kids working part-time who work for minimum wage.

This was misreported on myphl17 as saying that 37 percent of workers would be "directly affected" by the raising of the minimum wage. I take this to mean that 37 percent of Pennsylvania workers make between $6.25 and $7.14 an hour, which seems ridiculously high. Of course, "directly affected" is vague, and might not mean exactly that. But it seems like the right interpretation.

The Keystone Research Center, in fact, said that "427,000 Pennsylvania workers would benefit directly from an increase in the state's minimum hourly wage from $5.15 to $7.15 by January 2007". The raise ended up happening in two steps, to $6.25 six months ago and then to $7.15 today; they further break it down to say that about 100,000 workers were making between $5.15 and $6.24, and 300,000 between $6.25 and $7.14. The population of Pennsylvania is about 12 million. So 2.5 percent of all Pennsylvanians are "directly affected" by this raise; perhaps five percent of workers are.

Incidentally, I support raising the minimum wage, although I'm not sure exactly what it should be. The purpose of this post was just to point out quoted numbers that didn't make sense.

the price of water

I found a link to this article in the New York Daily News which claims that in New York City, tap water costs 24 cents a gallon. This made me suspicious, because I'd always been under the impression that tap water was orders of magnitude cheaper than bottled water, and you're not going to find bottled water under, say, $1 a gallon. Also, you don't hear about people struggling to pay their water bills. This page (which was the first source I could find) says that with water-saving fixtures, a toilet flush is 1.5 to 3.5 gallons, and a shower is 2 to 4 gallons per minute. If it costs fifty cents to a dollar to run a shower for a minute -- and what's the average shower, ten minutes? -- you know people would consider not showering.
I was curious how much water costs in Philadelphia. (I rent; my landlord pays my water bill.) Anyway, it's $21.14 per 1000 cubic feet, or about 0.2826 cents (not dollars) per gallon, as Google's built-in calculator informed me.
And it turns out that New York City has new rates effective tomorrow; they're $2.02 per 100 cubic feet (see page 4), or 0.2700 cents per gallon. I suspect it was 0.24 cents per gallon before the increase, and someone just thought that looked wrong, because usually when you see, say, "Candy Bars: 0.75 cents" you just assume they meant three-quarters of a dollar. The difference is a factor of 100. I bet the person responsible for this error would say "oh, it doesn't matter". Okay, I'll cut their salary by a factor of 100. Let's see how they like living on a few hundred bucks a year!
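The unit conversion, for anyone who wants to check it (one cubic foot is about 7.48 gallons; the two rates are the ones quoted above):

```python
# Convert water rates quoted per cubic foot into cents per gallon.
GALLONS_PER_CUBIC_FOOT = 7.4805

def cents_per_gallon(dollars, cubic_feet):
    return 100 * dollars / (cubic_feet * GALLONS_PER_CUBIC_FOOT)

print(f"Philadelphia: {cents_per_gallon(21.14, 1000):.4f} cents per gallon")  # ~0.2826
print(f"New York:     {cents_per_gallon(2.02, 100):.4f} cents per gallon")    # ~0.2700
# The Daily News figure of "24 cents a gallon" looks like 0.24 cents
# misread as 24 cents -- off by a factor of 100.
```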
Anyway, the Philadelphia Water Department has this fact sheet which includes the question: "How can I better understand the levels of elements in my water?" They go on to define parts per million/billion/trillion. They state:

1 part per million is similar to making a line of quarters from Center City to Conshohocken, and then walking that line to find the one quarter that is flipped up heads instead of tails.

It's 15.1 miles from Broad and Walnut to Conshohocken. (Note that the Google Maps route requires one to walk on the Schuylkill Expressway, which would be unwise.) This is 956,736 inches. A U. S. quarter is 0.955 inches across, so the distance is very nearly the length of a million quarters put side-by-side.
But then they go on to say:

1 part per billion is equal to 1 green apple in a barrel containing 1 billion red apples.

Well, duh. But can you picture a barrel containing a billion red apples?
Let's say the inside of a SEPTA bus is eighty feet long, ten feet high, and ten feet across. Let's furthermore say that an apple is a four-inch sphere. Then if we fill the bus with apples, it's two hundred forty apples long, thirty apples high, and thirty apples across -- it holds 216,000 apples. So if we could fill five thousand SEPTA buses with apples, that would be a billion apples. That might be easier to picture. Except SEPTA only has 1,388 buses. Okay, fill the buses with kiwis instead -- you can probably fit four kiwis in the space of one apple.
Finally, they say

1 part per trillion is similar to 1 inch in 16,000,000 miles or 1 penny in 10,000,000,000 dollars.

Sixteen million miles is one-sixth of the distance to the sun! However, ten billion dollars actually might be an understandable figure. The per capita income in Philadelphia was $16,509 as of the 2000 census; the population of the city at that time was 1,517,550. The product of those is about 25 billion dollars. So a part per trillion is like finding a penny in all the money Philadelphians make in five months.
and yet... a part per trillion is still very nearly a trillion molecules per mole, since Avogadro's number (6.02 × 10^23) isn't far from a trillion trillion. A mole of water is 18 grams -- roughly a mouthful. How would people feel, knowing that there are as many molecules of [insert nasty contaminant here] in a mouthful of water as there are pennies in all the money Philadelphians make in five months? (Yes, I know that I'm sweeping a lot under the rug here. My point is that a mole is a large number of large numbers.)
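If you want to check all three analogies at once, here's a rough Python sketch; the quarter diameter, bus dimensions, and income figures are the same estimates used above.

```python
# Sanity checks on the Water Department's part-per-X analogies.
INCHES_PER_MILE = 5280 * 12

# 1 ppm: quarters side by side from Center City to Conshohocken (~15.1 miles).
quarters = 15.1 * INCHES_PER_MILE / 0.955        # a quarter is 0.955 inches across
print(f"quarters over 15.1 miles: {quarters:,.0f}")                 # ~1.0 million

# 1 ppb: apples filling SEPTA buses (80 ft x 10 ft x 10 ft, apples 4 inches).
apples_per_bus = (80 * 12 // 4) * (10 * 12 // 4) * (10 * 12 // 4)
print(f"apples per bus: {apples_per_bus:,}")                        # 216,000
print(f"buses for a billion apples: {1e9 / apples_per_bus:,.0f}")   # ~4,600

# 1 ppt: inches in 16,000,000 miles, and pennies in $10 billion.
print(f"inches in 16 million miles: {16e6 * INCHES_PER_MILE:.2e}")  # ~1.0e12
print(f"pennies in $10 billion:     {10e9 * 100:.2e}")              # 1.0e12

# And the molecular version: contaminant molecules in a mole (18 g) of water
# at a concentration of one part per trillion.
print(f"molecules per mole at 1 ppt: {6.02e23 / 1e12:.2e}")         # ~6e11
```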

28 June 2007

help! the Earth is sinking!

Earth's inner heat keeps cities afloat. The rocks that the Earth is made of expand when they're warmer, like most materials; thus if the inside of the Earth were not as hot, the Earth would be smaller.

Derrick Hasterok and David Chapman, of the University of Utah, say that the significance of this heating has been overlooked. In particular, it's stronger in some areas than in others -- the rock under the western U. S. is hotter than that under the eastern U. S., so the general fact that the West tends to be higher than the East is in part due to this phenomenon.

However, they claim that "New York would drop to 1,427 feet below the Atlantic ocean, Boston and Miami even deeper. Los Angeles would rest 3,756 feet below the surface of the Pacific ocean." This just doesn't feel right. Perhaps those places would fall to those heights below the current sea level -- I take this to mean they'd be slightly closer to the center of the Earth. But sea level would be redefined to be the new average height of the sea. The only way all these places could suddenly be under sea level is if there were more water.

In any case, it doesn't matter, because the heat is coming from radioactive decay of some very long-lived isotopes. Worry about global warming.

(Those of you who thought this blog was supposed to be about probability -- as the title might lead you to believe -- may be wondering why I'm making this post. But this blog is also about silly uses of mathematics in the media.)

24 June 2007

Bloomberg as a kingmaker?

President? Or Kingmaker? by Patrick Healy, in today's New York Times.

Michael Bloomberg, mayor of New York City, recently officially changed from a member of the Republican party to an independent. This has been interpreted as a harbinger of a presidential run as an independent. However, that's a long shot, even though there's speculation that Bloomberg would be willing to spend a billion dollars of his own money on his campaign.

Healy suggests that Bloomberg ought to run as a "kingmaker". He should attempt to win one or two large states (New York is the obvious choice, since he's mayor of the city that makes up nearly half that state's population) and basically forget about the others. After that, he would need to hope that neither the Republican nor the Democratic candidate has 270 electoral votes. The election then by default goes to the House of Representatives. However, electoral votes aren't cast until December 15, six weeks after the general election. So Bloomberg could make deals with one of the two major-party candidates.

This has been tried before; George Wallace attempted it in 1968, Strom Thurmond in 1948. But neither of those elections was close enough for the strategy to work.

This raises a question, though. Let's say Bloomberg can win New York (31 electoral votes). What are the chances that the other states are evenly split enough?

Let's assume that each state's winner is decided by flipping a coin. (This, of course, does not reflect the reality of American politics -- some states are much more likely to break one way or the other -- but bear with me.) Then each candidate expects to win half of the remaining electoral votes -- that's 253.5. The variance of the number of electoral votes won by, say, the Democratic candidate is the sum of the variances of the number of electoral votes won in each state. In a state with n electoral votes, that's n^2/4. Adding the results up for each state, we see that the variance of the number of electoral votes won by the Democrat is 2326.25; the standard deviation is the square root of this, 48.23. I'll assume that the distribution is normal -- if all the states were the same size, this would be the Central Limit Theorem, and hopefully the fact that the states aren't all the same size doesn't kill us. So the probability that the Democrat gets 270 electoral votes in this scheme is the probability that a normally distributed random variable with mean 253.5 and standard deviation 48.23 is at least 269.5; that's 37%. Similarly for the Republican. That leaves Bloomberg a 26% chance -- barely one in four -- that this scheme would work. He might be willing to take those odds.

But, of course, there are some states that are sure to go one way or the other. Say only one-third of states (representing one-third of electoral votes) are sure to go to the Democrats, one-third to the Republicans, and one-third in play. Then the variance gets divided by 3; the standard deviation is now 27.85; Bloomberg's chances of the election being close enough for this strategy to come into play are 44%.
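Here's a sketch of the normal-approximation arithmetic; the 507 remaining electoral votes and the variance of 2326.25 are the numbers worked out above, and the only ingredient I've added is the standard normal CDF via the error function.

```python
from math import sqrt, erf

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal variable with mean mu and standard deviation sigma."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

remaining_ev = 538 - 31          # everything except Bloomberg's New York
mean = remaining_ev / 2          # 253.5 under the coin-flip model
variance_all_in_play = 2326.25   # sum of (state EV)^2 / 4 over the other states

def p_deadlock(variance):
    """Probability that neither major-party candidate reaches 270 electoral votes."""
    sigma = sqrt(variance)
    p_dem_wins = 1 - normal_cdf(269.5, mean, sigma)   # continuity correction
    # The Republican has the same chance by symmetry, and the events are disjoint.
    return 1 - 2 * p_dem_wins

print(f"all states in play:       {p_deadlock(variance_all_in_play):.0%}")      # ~26%
print(f"one-third of EVs in play: {p_deadlock(variance_all_in_play / 3):.0%}")  # ~43-44%
```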

And this whole analysis neglects the finer points of electoral college strategy. States aren't independent of each other -- we wouldn't see an election in which Utah went Democratic while Massachusetts went Republican, or even, to be a little more reasonable, one where Virginia went Democratic but New Jersey went Republican. (Both of those states are probably in play, but New Jersey is far enough left of Virginia that they shouldn't break that way.) And in the end it could come down to just a few states -- the 2004 election basically came down to Florida, Ohio, and Pennsylvania -- in which case this whole normal approximation breaks down. But we won't know which states those are for a long time yet.

edit, 12:09pm: Can Bloomberg Win? suggests the reverse of Healy's plan -- Bloomberg wins a few states, the Democrat and the Republican split the rest, and he cuts a deal with the electors of whichever party gets fewer electoral votes, a deal that makes him President. Rasmussen Reports talks about possible "electoral chaos" which could fundamentally change the way we elect our Presidents.

edit, 7:47pm: As reader Elizabeth has pointed out in a comment, New York is reliably Democratic; this changes things a bit, so the chances that Bloomberg plays the spoiler by allowing neither other candidate to get 270 electoral votes (under the second set of assumptions) are more like 38%.

22 June 2007

Six murders in one day in Philadelphia.

There were six homicides in Philadelphia yesterday. The headline in the Philadelphia Inquirer is "Summer's beginning: Six dead in one day". The events happened as follows:

  • a triple homicide in North Philadelphia;

  • a triple shooting in Kensington -- two died, one was critically wounded;

  • one man shot to death in Kingsessing.


I saw the headline while walking past a newspaper box well before I read the article. I thought "hmm, six murders in one day, is that a lot?" Last year Philadelphia had 406 murders; this year there have been 195 so far, as compared to 177 up until this time last year. The number I carry around in my head is that Philadelphia has one murder a day, although the actual 2006 figure was about 1.11 murders per day.

Since I didn't know that there had only been three incidents, I assumed that the six murders had all been separate. Furthermore, I assumed that murders are committed independently, since the murderers aren't aware of each other's actions. This second assumption seems believable to me. I've heard that, say, school shootings inspire copycats, mostly because they create a media circus around them -- at the time of the Virginia Tech massacres I remember people saying that the media shouldn't cover the shootings so much because they might "give people ideas", and I vaguely recall similar sentiments around the time of Columbine. But a single murder, in a city where the average day sees one murder, doesn't draw much attention.

If the murders are independent, then I figure I can model the random variable "number of murders per day" with a Poisson distribution. The rate of the distribution would be the average number of murders per day, which is 1.11; thus the probability of having n murders in a day should be e^(-1.11) (1.11)^n / n!. This leads to the numbers:



n:                              0       1       2       3       4       5       6       7+
Prob. of n murders in one day:  0.3296  0.3658  0.2030  0.0751  0.0209  0.0046  0.00086 0.00016

So six or more murders should happen in a day about one day in a thousand, or once in almost three years. That seems like an argument for newsworthiness. But on the other hand, let's say there's some lesser crime -- crime X -- that is committed in Philadelphia with such frequency that crime X does not occur on only one day in a thousand. (Such a crime would be something that happens 2516 times per year, or 6.9 times a day.) I don't see that being front-page news. Lots of one-in-a-thousand things happen every day.
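The Poisson table above is easy to reproduce; here's a minimal sketch, with the rate taken as the 2006 figure of 406 murders in 365 days.

```python
from math import exp, factorial, log

rate = 406 / 365   # ~1.11 murders per day

def poisson_pmf(n, lam):
    return exp(-lam) * lam**n / factorial(n)

for n in range(7):
    print(f"P({n} murders in a day) = {poisson_pmf(n, rate):.4f}")
print(f"P(6 or more) = {1 - sum(poisson_pmf(n, rate) for n in range(6)):.5f}")  # ~0.001

# A crime that fails to occur only one day in a thousand needs a daily rate
# of ln(1000), about 6.9 -- roughly 2500 occurrences a year.
print(f"rate with P(none in a day) = 1/1000: {log(1000):.1f} per day")
```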

Of course, what actually occurred yesterday was not six independent murders. It sounds like there were only three murderers. So it's time for new assumptions. Let's now assume that all murderers act independently, but that two in five of them kill one person, two in five kill two people, and one in five kill three people. This means the average murderer kills 1.8 people. Further, let's say that murder incidents occur as a Poisson process with rate 0.62 per day -- that's the old rate divided by 1.8, so the expected number of murders per day is unchanged.

(The assumptions of how many people a murderer murders are made up, I admit, but the only list of murders I can find are the Inquirer's interactive maps, and it doesn't seem worth the time to harvest the data I'd need from them.)

Now, for example, the probability that three people are murdered on any given day is the sum of the probabilities of three disjoint events: one triple homicide, one double plus one single, or three singles. Running through the computation, I get:



n:                              0       1       2       3       4       5       6       7+
Prob. of n murders in one day:  0.5379  0.1334  0.1499  0.1012  0.0372  0.0230  0.0103  0.0071

The probability of one or two murders in a day goes down; the probability of zero, or of three or more, goes up. Suddenly yesterday isn't nearly as rare. Days with six or more murders are, under these assumptions, 1.74% of all days -- just over six per year.
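For anyone who wants to check the second table, here's a brute-force sketch of the compound-Poisson calculation; the 40/40/20 split of single, double, and triple homicides is the made-up assumption above, and the incident rate of 0.62 per day keeps the overall murder rate at 1.11.

```python
from math import exp
from itertools import product

incident_rate = 1.11 / 1.8               # ~0.62 murder incidents per day
victims_dist = {1: 0.4, 2: 0.4, 3: 0.2}  # assumed victims per incident

def poisson_pmf(k, lam):
    p = exp(-lam)
    for i in range(1, k + 1):
        p *= lam / i
    return p

def p_murders(n, max_incidents=6):
    """P(exactly n murders in a day): sum over the number of incidents k,
    then over all ways k incidents can account for exactly n victims."""
    total = 0.0
    for k in range(max_incidents + 1):
        pk = poisson_pmf(k, incident_rate)
        for victims in product(victims_dist, repeat=k):
            if sum(victims) == n:
                prob = 1.0
                for v in victims:
                    prob *= victims_dist[v]
                total += pk * prob
    return total

for n in range(7):
    print(f"P({n} murders in a day) = {p_murders(n):.4f}")
print(f"P(6 or more) = {1 - sum(p_murders(n) for n in range(6)):.4f}")   # ~0.017
```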

The calculation I'm afraid to do -- if I even could do it -- is "how likely am I to get murdered each time I go outside?" Fortunately I live in a decent neighborhood; but some neighborhoods not that far away from me have had some of the worst violence. But it occurred to me that at 400 murders a year, if you live in Philadelphia for 75 years there will be thirty thousand murders in that time span. Philly has about 1.5 million people. So if things stay like they are, the average Philadelphian has a one in fifty chance of dying by murder. In comparison, the nationwide murder rate in 2005 was 5.6 per 100,000; multiplying by an average lifespan of 75 years we get 420 murders per 100,000 people. So one in every two hundred and forty Americans will die of murder, if things stay like they are.

20 June 2007

The Million-Dollar Waitress and stock-picking scams

The Million-Dollar Waitress, from Business Week.

A waitress in Ohio, Mary Sue Williams, may win the million-dollar grand prize in a CNBC stock-picking contest. She was in sixth place when the contest ended on May 25. But a flaw in the way the contest was set up meant that, basically, the five people ahead of her "cheated". They found a way to select stocks to buy at, say, $20 -- and then wait until they went up to $25 before pressing the "buy" button. Because of the way the contest had been programmed, they only had to pay $20 in fake money to buy the stock. (I'm putting "cheated" in quotes because although this is clearly against the spirit of the contest -- you couldn't do this in the real market -- perhaps one could argue that the contest is defined by whatever the computer lets people do.)

Furthermore, it's not really clear how meaningful a contest like this is. From what I can tell, the contest lasted for ten weeks; the winner each week won $10,000 and the grand-prize winner won $1,000,000. Although I can't find information about the prize structure, my guess would be that any other prizes were much smaller. So this encourages risk-taking that nobody would take with actual money. Let's say there's a second-place prize, and it's $100,000. In "real life", most people would be happy with that money. But if you bet -- and I'm using "bet" here because this really does feel like gambling -- all your money on some very risky stock, then you have a shot at multiplying your money by ten. But it's probably a lot more likely that you'll lose it all.

Jim Kraber, on the other hand, played legitimately but at one point had 1600 portfolios. This enabled him to take the sort of risks that someone with a single portfolio -- or even a number of portfolios that could reasonably be funded with hard currency -- would never take, because he could afford to just throw out a portfolio that wasn't succeeding.

Williams admits that "Part of this was luck... a lot of it was a gut feeling, some eenie-meenie-minie-moe, and common sense." Now, I'm not saying that she can't pick stocks -- who knows, she might be quite good at it. But this story reminds me of the following scam. I get a mailing list with 64,000 people on it. (I'm not sure whether this should be postal mail or e-mail; the story, which is not mine and which I learned from a book by John Allen Paulos, predates e-mail.) I tell 32,000 of them that they should, say, bet on football team A to win this week, and 32,000 that they should bet on the same team to lose. The team either wins or loses. Next week, I take the 32,000 that got the right answer this week, and split them in half. To 16,000 I say that team B will win this week, and to 16,000 I say that team B will lose. I repeat this six times in all, until 1,000 people have received six correct predictions. Note that when I begin this scheme I don't know which 1,000 people these will be, but I know they'll exist. Then I try to sell these thousand people my "system". Maybe they'd buy it!
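The arithmetic of the scam, if you like (the 64,000 names and six rounds are the numbers in the story as I remember it):

```python
# Each week, tell half the remaining pool the team will win and half that it
# will lose; whatever happens, exactly half have now seen a correct prediction.
pool = 64_000
for week in range(1, 7):
    pool //= 2
    print(f"after week {week}: {pool:,} people have seen only correct predictions")
# After week 6, 1,000 people think you can predict football. Sell them the "system".
```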

19 June 2007

Forbes on roulette (and a few other casino games)

The Best Bets at the Casino, from Forbes.com. (I found this one through the bar at the top of the screen in gmail that delivers me all sorts of random news.)

As I've said before, probability theory was invented to solve problems in games of chance. Supposedly the first nontrivial probability problem was the problem of points. Two people are flipping a coin, one bets on heads and the other bets on tails. They agree to flip until either heads has occurred ten times or tails has occurred ten times. But they have to quit when the score is seven to five; how should they split up the money?
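The article doesn't work the problem out, but it's a nice little calculation, so here's a sketch; the 3-and-5 deficit comes from the seven-to-five score, and the rest is my own arithmetic rather than anything in the Forbes piece.

```python
from math import comb

# At 7-5, heads needs 3 more wins and tails needs 5 more. At most 3 + 5 - 1 = 7
# further flips settle it, so imagine flipping exactly 7 more times: heads takes
# the stake if and only if at least 3 of those flips come up heads.
heads_needed, tails_needed = 3, 5
flips = heads_needed + tails_needed - 1

p_heads = sum(comb(flips, k) for k in range(heads_needed, flips + 1)) / 2**flips
print(f"P(heads wins) = {p_heads}")                      # 99/128, about 0.773
print(f"fair split: {p_heads:.1%} to heads, {1 - p_heads:.1%} to tails")
```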

Anyway, the Forbes article claims: "If your goal is to nab the best risk-adjusted return (as opposed to playing for hours on end), place fewer, smarter and larger bets." I'm not entirely sure what this means, because "risk-adjusted return" is a vague phrase. If you bet, say, $10 each on 100 spins of the roulette wheel, or $100 each on 10 spins of the roulette wheel, either way you expect to lose about fifty bucks -- $52.63 to be exact. This is what we mean when we say that the "house edge" in roulette is 5.26% -- it means that the house will, on average, take 5.26% of your bet. (If all your bets are on "red" or "black", then roulette is basically like flipping a coin -- a very complicated coin -- except that there are certain special outcomes where both "red" and "black" lose.)

But the variance of your winnings on a single spin at $10 is 100 "square dollars" or so, so the variance of your winnings on 100 spins is 10,000 "square dollars". The standard deviation is about $100. For a single spin at $100, the variance is 10,000 "square dollars", and so the variance on 10 spins at $100 each is 100,000 "square dollars"; the standard deviation is about $316. What does this mean? It means that if you make a few big bets then the random fluctuations won't cancel out. So what you do have is a greater probability of coming out ahead. Maybe this is what they mean by "risk-adjusted return".

To make the statement about "random fluctuations" more precise, I could use the central limit theorem. The central limit theorem says, essentially, that when you add up a bunch of random things you get something that's approximately normally distributed. But it's not really fair to do that here. Why? Because the normal distribution is a continuous distribution -- that is, it can take whatever value we like. The distribution of the amount of money you have after ten spins is quite "lumpy" -- it can be, say, $0 (if we win on five spins and lose on five spins), or $200 (if we win on six and lose on four), but not something in between. I'm afraid of losing something in the holes between $0 and $200, as it were.
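A quick simulation makes the comparison concrete; I'm assuming even-money bets (18 winning pockets out of 38 on an American wheel), with the $10-times-100 and $100-times-10 patterns from above.

```python
import random

def play(bet, spins, trials=50_000):
    """Simulate `trials` sessions of `spins` even-money bets of `bet` dollars each."""
    results = []
    for _ in range(trials):
        net = sum(bet if random.random() < 18 / 38 else -bet for _ in range(spins))
        results.append(net)
    mean = sum(results) / trials
    sd = (sum((r - mean) ** 2 for r in results) / trials) ** 0.5
    p_ahead = sum(r > 0 for r in results) / trials
    return mean, sd, p_ahead

for bet, spins in [(10, 100), (100, 10)]:
    mean, sd, p_ahead = play(bet, spins)
    print(f"${bet} x {spins} spins: mean {mean:.0f}, s.d. {sd:.0f}, P(ahead) {p_ahead:.2f}")
# Expect roughly: mean -53 either way, s.d. ~100 vs. ~316, and a better chance
# of ending up ahead (~0.31 vs. ~0.27) with the few big bets.
```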


(By the way, there are roulette bets that cover 1, 2, 3, 4, 5, 6, 12, or 18 numbers. These all have the same house edge -- except for the five-number bet. Don't take that one.)
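You can check the house-edge claim directly from the payouts; a bet covering n numbers pays (36 - n)/n to 1, except the five-number bet, which pays only 6 to 1.

```python
# American roulette: 38 pockets; a bet covering n numbers pays (36 - n)/n to 1,
# except the five-number "basket" bet, which pays 6 to 1.
def house_edge(n, payout=None):
    if payout is None:
        payout = (36 - n) / n
    expected_return = (n / 38) * (payout + 1)   # stake back plus winnings
    return 1 - expected_return

for n in [1, 2, 3, 4, 6, 12, 18]:
    print(f"{n:2d}-number bet: house edge {house_edge(n):.2%}")    # all 5.26%
print(f" 5-number bet: house edge {house_edge(5, payout=6):.2%}")  # 7.89%
```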

Also, rumor has it that there are devices that you can sneak into a casino which will watch the roulette wheel and tell you where it's more likely to come up. I'm not sure if they're small and unobtrusive enough -- or sufficiently good at prediction -- to be good for anything.

They also seem to claim that it's possible to win at blackjack in expectation without counting cards; that's not so! The house edge in blackjack under most rules is about 0.5%. But even if you could win in expectation, I'm not sure it would be worth the trouble. If I'm going to have to pay attention to what I'm doing, I don't want to have a high chance of losing money.