"Necessary but not sufficient": 596,000 Google hits.
"Sufficient but not necessary": 134,000 Google hits.
Exercise for readers: explain the vast gap here. It seems like the two should be equally common, but when a friend of mine used "sufficient but not necessary" that sounded strange to me; that's what led me to Google, which shows that indeed this phrase is much less common than the reverse.
31 July 2008
30 July 2008
The five bridges of Kaliningrad
In 1736 Euler solved the following problem: the city of Königsberg is set on both sides of the Pregel river and on two islands between them. There are bridges connecting the various landmasses; is it possible to walk around the city in such a way that you cross each bridge exactly once? The answer is no; (the network of landmasses and bridges in) Königsberg didn't have an Eulerian path. In order to have an Eulerian path the graph corresponding to this network must have zero or two nodes of odd degree; that is, if we consider the number of bridges on each landmass, exactly zero or two of these numbers can be odd. In Königsberg all four degrees were odd.
But the bridges were bombed in World War II, the city was renamed Kaliningrad, and only five of them were rebuilt. These are bridges connecting each of the islands to each of the shores, and a bridge connecting the two islands. As you can see, there are three bridges on each island and two on each shore of the river; two of these numbers are odd, so there exists an Eulerian path. It's still somewhat useless, because you have to start on one island and end on the other.
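To make the degree-counting concrete, here's a minimal sketch in Python (my own, not part of the original post; the landmass labels N, S, K, E are my own shorthand for the two banks and the two islands):

```python
from collections import Counter

def odd_degree_landmasses(bridges):
    """Return the landmasses touched by an odd number of bridges."""
    degree = Counter()
    for a, b in bridges:
        degree[a] += 1
        degree[b] += 1
    return [node for node, d in degree.items() if d % 2 == 1]

# 1736 Koenigsberg: seven bridges among the north bank (N), south bank (S),
# and the two islands (K and E).
koenigsberg_1736 = [("N", "K"), ("N", "K"), ("S", "K"), ("S", "K"),
                    ("N", "E"), ("S", "E"), ("K", "E")]

# Kaliningrad today, as described above: each island to each bank, plus one
# bridge between the islands.
kaliningrad = [("N", "K"), ("S", "K"), ("N", "E"), ("S", "E"), ("K", "E")]

for name, bridges in [("1736", koenigsberg_1736), ("today", kaliningrad)]:
    odd = odd_degree_landmasses(bridges)
    # In a connected multigraph, an Eulerian path exists iff 0 or 2 nodes have odd degree.
    verdict = "Eulerian path exists" if len(odd) in (0, 2) else "no Eulerian path"
    print(name, "odd-degree landmasses:", odd, "->", verdict)
```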
Here's a map of the route, from Microsiervos (in Spanish), and here are some pictures that some folks took when they were visiting Kaliningrad and actually doing this.
I also remember once seeing the analogous network for New York City (the relevant landmasses being Manhattan, Long Island, Staten Island, the Bronx, and New Jersey), which has a lot of bridges, with the question of whether that network had an Eulerian path. I don't remember the answer. I also think it wouldn't be as much fun; New York has lots of traffic and is much larger than Kaliningrad.
29 July 2008
A nonreligious statement
Through my logs, I came across a forum where people have pointed to a post on this blog.
They then veer off into saying things about religion. I suspect this may be due to the title of this blog.
I just want to state that "God Plays Dice" has nothing to do with the Judeo-Christian-Islamic-etc. deity. It is a reference to the following quote of Einstein, in a letter to Max Born:
"Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory produces a good deal but hardly brings us closer to the secrets of the Old One. I am at any rate convinced that He does not play dice."

(I'm copying this out of Gino Segre's Faust in Copenhagen; it's originally from Einstein's letter to Born, December 4, 1926, which is reprinted in The Born-Einstein Letters.) The "Old One" to whom Einstein is referring here was, as far as we know, not what is usually meant by "God"; I suspect that this is why the translator (Irene Born) chose this translation, although I don't know what Einstein said in the original German. To be totally honest, I don't know if the original was even in German.
The purpose of the title is that I feel that probability is an important tool for understanding the world, which Einstein may have been a bit skeptical about, at least in the case of quantum mechanics. And there's something of a tradition in the titling of math blogs of taking sayings of well-known mathematicians and "replying" to them. (By "tradition" I mean The Unapologetic Mathematician also does it, in response to Hardy's A Mathematician's Apology.)
Also, for some reason I had thought it was Bohr, not Born, that he wrote this to. I suspect this is because I've heard more things about Bohr than Born, and they sound similar.
I suspect the people at the forum in question won't read this, though. But making this post makes me feel like I've replied to them.
edited, 5:56 pm: I was wondering if there were any blogs whose titles riff on the quote that "A mathematician is a device for turning coffee into theorems" (usually attributed to Erdos, but supposedly actually due to Renyi). I found Tales from an English Coffee Drinker. The quote from Goethe, "Mathematicians are like Frenchmen: whatever you say to them they translate into their own language and forthwith it is something entirely different", also would be good as a source for a blog title.
Perception of racial distribution
Here's something interesting from a New York Times poll a couple weeks ago. People were asked what percentage of all Americans are black. Among the results: 8 percent of white respondents, and 17 percent of black respondents, guessed that more than 50 percent of all Americans are black. (It's question 80 in the poll.)
The actual figure, from the 2006 census estimates, is 12.4 percent. (If you had asked me, I would have probably said twelve percent, which is the figure I learned quite some time ago.)
Jordan Ellenberg, who linked to this poll, asks whether people are ignorant of what "50 percent" means, or whether they're ignorant of the actual makeup of the United States population. I'm not sure how to answer this.
But I'd be interested to know how people's guesses of the percentage of the population which is black are correlated with the percentage of the population in their immediate area which is black. People probably expect that the people around them are representative of the general population, because psychologically we may be wired that way; numbers, even numbers obtained from counting millions of people, just don't have the same psychological impact as the faces you see while walking down the street. (You might have to factor in some other things, though, such as people's choice of television shows, movies, etc.; subconsciously we might not be that good at distinguishing between people that we're seeing on television and people we're seeing in reality.)
Similar questions could be asked in other populations. For example, if you ask Philadelphians about the racial distribution of Philadelphia, what do they say? The actual figures are 44.3% black and 41.8% white, from this Census Bureau page with a ridiculously long URL. But most Philadelphians live in neighborhoods that are mostly black or mostly white, so I suspect you'd get a lot of extreme answers.
Although the extreme answers might not correspond to what people actually see day to day! There may be people living in mostly-white neighborhoods who think most Philadelphians are white, or people living in mostly-black neighborhoods who think most Philadelphians are black. But you might also see people living in mostly-white neighborhoods who feel like their neighborhood is one of the only places where white people live, and guess that the city is mostly black, or vice versa. (Note to people who know anything about Philadelphia -- I am not saying that such neighborhoods exist, or that I know which ones they are. I'm just saying I can imagine them.)
Yes, in my secret other life I want to study things like that.
26 July 2008
Bill Rankin's population density graphs
Last week I wrote a post about population densities.
Take a look at the interesting graphs at Bill Rankin's Radical Cartography; they show how population density is related to:
- racial and ethnic groups -- American Indians and Alaska Natives, not surprisingly, live at the lowest population densities; what surprised me was the large amount of Hispanic population at between 1 and 10 per square mile, which Rankin says might correspond to ranchers;
- age -- roughly speaking, people ages 18 to 39 or under 5 are overrepresented at "high" densities (above 4,000 per square mile or so), and other ages are overrepresented at "low" densities (below that same cutoff). This is, I suspect, a reflection of people moving to the city when they leave their parents' house, and then leaving the city when it's time for their kids to go to school;
- income -- income is highest at suburban and central-city densities, with a valley in between. Not surprising; in general the central part of a city is rich, it's surrounded by poorer neighborhoods, and then eventually income starts going up again. Rural places are poor as well;
- gender -- there are more women at high density, which I can't explain;
- population and area -- I tried to make a plot like this but had some trouble, because I was just playing around with output from another web site and didn't have the raw data.
Yellow books
I'm currently watching Science Saturday at bloggingheads.tv, which this week features Peter Woit (Not Even Wrong) and Sabine Hossenfelder (Backreaction).
When the video started, I thought "hmm, Woit has an awful lot of yellow books behind him for a physicist".
Woit, it turns out, works in the mathematics department at Columbia, as I was reminded when he started talking about the different job situations in physics and mathematics about fifteen minutes in. Basically, Woit says that jobs in physics are scarcer, relative to the number of physics PhDs, than jobs in mathematics are relative to math PhDs, so physicists feel more pressure to "do everything right" -- in his view this means they feel unnatural pressure to work in string theory, which Woit sees as a bad thing. After all, what if string theory's wrong? Physics as a discipline should diversify.
25 July 2008
Mathematicians in politics?
Quite some time ago, the folks at 360 asked if there have been heads of state who were by training mathematicians. This is really two questions in one: people who were trained as mathematicians, and people who had a mathematical career before going into politics.
The first question doesn't seem that interesting, because it seems to include cases in which Politician X majored in math as an undergrad, then went to law school, became a lawyer, and then entered politics from the law, as so many do. That's not the question I want to answer.
For the second question, a bit of clicking around turns up this list, which includes Alberto Fujimori (president of Peru), Paul Painlevé (prime minister of France), and Eamon de Valera (president of Ireland). Painlevé in particular made a name for himself as a mathematician; the other two appear to have at least taught mathematics in some capacity at some point.
I had thought that Henri Poincaré had been in politics, but it appears that I was confusing him with his cousin Raymond. Borel served in the French National Assembly. I haven't done any sort of systematic sampling, but it seems like mathematician-politicians are particularly prevalent in France, that wonderful country where they name streets after mathematicians. (Here in the United States, for example in my native city of Philadelphia, we name streets after mathematical objects, namely the positive integers.)
One interesting close call is Einstein. The story has it that he was offered the presidency of Israel in 1952. Of course Einstein was a physicist, but given the title of this blog I feel I can mention him.
24 July 2008
The 2000 election, eight years later
Outcomes of presidential elections and the house size, by Michael Neubauer and Joel Zeitlin. (It's in a journal, at PS: Political Science and Politics, Vol. 36, No. 4 (Oct., 2003), pp. 721-725 -- but that's not where I found it, and that's not where the link goes.) The link comes from thirty-thousand.org, a site which claims that congressional districts were never intended to be as large as they are; they advocate one per fifty thousand people, which is six thousand representatives. (Thirty thousand is approximately the original number of people per representative.)
The authors look at the 2000 U.S. presidential election and conclude that, given the way in which seats in the House of Representatives are awarded, if the House had had 490 members or fewer the election would have gone to Bush; with 656 or more, it would have gone to Gore; in between it goes back and forth with no obvious pattern, and there are some ties. The ties come at odd numbers of House members, which surprised me. But the size of the Electoral College is the number of House members, plus the number of Senators (always even, since there are two per state), plus three electoral votes for DC. So an odd number of House members means an even number of electoral votes, as in the current situation where there are 435 in the House and 538 electoral votes.
In case you're wondering why a small House favors Bush and a large house favors Gore, it's because the states that Gore won made up a larger portion of the population, but Bush won more states. In the large-House limit, the number of electoral votes that each state gets is proportional to the population, since the two votes "corresponding to" Senators are essentially negligible. In the small-House limit, each state has 3 electoral votes (I'm assuming that each state has to be represented) and so counting electoral votes amounts to counting states.
The states that Bush won had a total population in the 1990 Census (the relevant one for the 2000 election) of 120,614,084; the states that Gore won, 129,015,599. So 51.68% of the population was in states won by Gore, 48.32% in states won by Bush. Bush won 30 states, Gore 21. (I'm counting DC as a state, which seems reasonable, although the 23rd Amendment says that DC can't have more electors than the least populous state, even though it does have more people than the least populous state.)
So if there are N House members, we expect Bush to win 60 + .4832N electoral votes; the 60 votes are two for each state he won, the .4832N his proportion of the House. Similarly, Gore expects to win 42 + .5168N electoral votes. (The three are for DC; I'm assuming that DC would always get three electoral votes in this analysis, which isn't quite true.) So Bush wins by 18 - .0336N electoral votes, which is positive as long as N is at most 535. The deviations between this and the truth basically amount to some unpredictable "rounding error".
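Here's that back-of-the-envelope model as a few lines of Python (my own sketch, using the rounded percentages above; it ignores all the apportionment rounding):

```python
def expected_electoral_votes(n_house):
    # Crude linear model from the paragraph above: each candidate gets two
    # "Senate" votes per state he won (30 states for Bush, 21 for Gore, counting DC),
    # plus his population share of the N House-based votes.
    bush = 60 + 0.4832 * n_house
    gore = 42 + 0.5168 * n_house
    return bush, gore

for n in (435, 490, 535, 536, 656):
    bush, gore = expected_electoral_votes(n)
    print(n, round(bush, 1), round(gore, 1), "Bush" if bush > gore else "Gore")
# The crossover falls between N = 535 and N = 536 in this smoothed-out model.
```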
If you look at the difference between the number of Bush votes and the number of Gore votes, you do see roughly a linear trend. To me it looks like a random walk superimposed on linear motion. This isn't surprising. As we move from N seats to N+1 seats in the House, 51.68% of the time the next seat should go to a Gore state; 48.32% of the time, to a Bush state. (The method that's used allots the seats "in order", i.e. raising N by 1 always adds a seat to a single state. Not all apportionment methods have this property; its failure is known as the Alabama paradox.) So the difference between the number of seats in Bush states and in Gore states will fluctuate, but the overall trend is clear. Of course the noise isn't actually random, coming as it does directly from the populations of the states, but the dependence on the state populations is so complicated that we might as well think of it as random.
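And here's a deliberately crude caricature (again mine, not the authors') of that seat-by-seat process as a biased coin flip; the real apportionment is deterministic, but the resulting picture -- a linear trend with random-walk-like wiggles -- is similar:

```python
import random

random.seed(0)
bush_seats = gore_seats = 0
margins = []
for n in range(1, 700):
    # Pretend each new House seat lands in a Bush state with probability 0.4832
    # and in a Gore state otherwise; the real process is deterministic.
    if random.random() < 0.4832:
        bush_seats += 1
    else:
        gore_seats += 1
    # Bush's electoral-vote margin under the linear model: (60 + bush) - (42 + gore).
    margins.append(18 + bush_seats - gore_seats)

print(margins[434], margins[489], margins[655])   # margins at N = 435, 490, 656
```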
I believe that something similar would happen with any set of election results in which more states voted for candidate A, but the states that voted for candidate B collectively had greater population. (Note that the latter criterion is not the same as candidate B winning the popular vote.)
Incidentally, I remember hearing in 2000 that if the House had had only a few more seats than it did, or even a few fewer, Gore would have won -- the implication being that N = 435 was a particularly fortuitous choice for the Republicans. This isn't true. But it's also possible that my memory is false.
Solids of revolution?
For some reason, in calculus classes here in the U.S. we spend a lot of time teaching students how to find the volume of solids of revolution. This is invariably confusing, because often students try to memorize the various "methods" (disk, washer, cylindrical shell) and have trouble getting a handle on the actual geometry. When I've taught this, I've encouraged my students to draw lots of pictures. This is more than I can say for some textbooks, which try to give "rules" of the form "if a curve is entirely above the x-axis, and it's being rotated around the x-axis, between y = a and y = b, then here's the formula." I can't point to a specific student who tried to memorize such a formula and failed, but I wouldn't be surprised if a lot of the wrong answers I've gotten from students on questions like this come from exactly that.
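For what it's worth, here's the sort of sanity check I have in mind, on an example of my own choosing (rotate the region under y = x^2, 0 ≤ x ≤ 1, about the y-axis); the washer and shell setups had better agree:

```python
from math import pi, sqrt

def integrate(f, a, b, n=100_000):
    # Midpoint rule; plenty accurate for a sanity check like this.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Region under y = x^2 for 0 <= x <= 1, rotated about the y-axis.
shells  = integrate(lambda x: 2 * pi * x * x**2, 0, 1)          # cylindrical shells
washers = integrate(lambda y: pi * (1**2 - sqrt(y)**2), 0, 1)   # washers, sliced in y
print(shells, washers, pi / 2)   # all three should be about 1.5708
```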
Now, as a probabilist, I do have some use for calculus in my work. But I can't remember the last time I needed to know the volume of a solid of revolution.
Then again, my work is not particularly geometrical in nature.
So, I ask -- why do we spend so much time on this? Is this something that students actually need to know? I'm inclined to guess that it's just tradition. But at the same time I can't rule out that solids of revolution really are prevalent enough in engineering and physics (the traditional "customers" for the calculus course) to earn their place. Also, a lot of the "standard" calculus course seems to be a sequence of contrived problems that exist basically to make the student do various derivatives and integrals.
Olympic math?
Report: Olympics Mathematically Likely To Happen This Year, from The Onion. Although the Olympic spokesperson in the article seems to think there's some weird ten-year periodicity...
A United States presidential election is also mathematically likely this year. There has been talk of "when will we announce our vice presidential candidates? The Olympics get in the way!", which is kind of silly.
23 July 2008
Rubik's cube hustling?
So I've finally memorized a solution to the Rubik's Cube. (I may be speaking too soon; let's see if the move I could never remember is still in my head tomorrow.)
I'm very slow, though. It's not the most efficient solution.
That got me thinking. There are pool hustlers, who act like they're no good at pool, start taking bets, and then all of a sudden are really good. If someone could solve the Rubik's cube really quickly, could they make money off it as a Rubik's cube hustler? Bring the cube somewhere where there are people, act like you can only solve it slowly, take bets, and then solve it quickly.
It just might work.
It would be crucial to find the right audience, though -- somewhere where people are familiar with the cube. So a bar, the typical place for pool hustling, wouldn't work. The right math department might work. But not mine -- I have readers within my department, and I'm pretty sure I've given too much away by making this post. Fortunately I don't have the skill to pull this off anyway.
18 July 2008
Lower speed limits, part two
One thing people complain about with regard to lower speed limits, which I wrote about earlier today, is that when speed limits are lower it takes longer to get places. This is, of course, true. But on the other hand you use less fuel.
From Wikipedia on fuel economy in automobiles: "The power to overcome air resistance increases roughly with the cube of the speed, and thus the energy required per unit distance is roughly proportional to the square of speed." Furthermore, this is the dominant factor for large velocity.
So let's say your fuel usage, measured in fuel used per unit of distance (say, gallons per mile), at velocity v, is kv^2. (k is some constant that depends on the car. A typical value of k, for a car using 0.05 gallons per mile at 60 mph, is 0.000014.) Let's say you value your time at a rate c -- measured in, say, dollars per hour -- and the price of fuel is p.
Then for a journey of length d, you'll spend dpkv^2 on fuel, and cd/v on time. Your total cost is f(v) = dpkv^2 + cd/v, and differentiating and setting f'(v) = 0, the optimal speed is (c/(2pk))^(1/3). The cost of the journey at this speed is (3/2^(2/3)) d (pk c^2)^(1/3), or about 1.9 d (pk c^2)^(1/3).
So according to this model, if you value your time more you should go faster; not surprisingly your value of time c and the price of fuel p show up only as c/p -- effectively, your value of time measured in terms of fuel.
Also, the optimal speed doesn't go down that quickly as p increases -- it only goes as p^(-1/3). But a doubling in gas prices still leads to a 20 percent reduction in optimal speed -- perhaps roughly in line with what people are suggesting. Taking c = 10, p = 4.05, k = 0.000014 gives an optimal speed of 45 miles per hour, although given the crudeness of this model (I've assumed that all the fuel is used to fight air resistance) I'd take that with a grain of salt, and I won't even touch the fact that different people place different values on their time and get different fuel economy. We can't just let everyone drive at their optimal speed.
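Here's that calculation as a few lines of Python (my own sketch of the crude model above, nothing more):

```python
c = 10.0        # value of time, in dollars per hour
p = 4.05        # price of fuel, in dollars per gallon
k = 0.000014    # fuel use per mile is k * v^2 gallons, v in miles per hour

v_opt = (c / (2 * p * k)) ** (1 / 3)
cost_per_mile = p * k * v_opt**2 + c / v_opt   # fuel cost plus time cost, per mile
print(round(v_opt, 1), "mph,", round(cost_per_mile, 2), "dollars per mile")
# prints roughly 44.5 mph and 0.34 dollars per mile
```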
Besides, part of the whole point of this is that if we use less fuel, demand for fuel will drop significantly below supply and oil prices will go down. So to forecast the effects of a lower speed limit I'd have to factor in that gasoline could get cheaper -- and let's face it, I can't predict the workings of the oil market.
Five miles an hour = 30 cents a gallon?
"Every five miles an hour faster costs you an extra 30 cents a gallon." From yesterday's New York Times, among others. This is often mentioned in reference to bringing back the national 55 mile per hour speed limit.
What does this even mean? I assume it means that it takes, say, seven percent more gasoline per mile to drive 65 mph than to drive 60 mph. (30 cents is around seven percent of the current average gasoline price, $4.10 or so per gallon.) Why not just say that? This also has the advantage that when gas prices change, the fact doesn't become outdated.
Although as many people point out, the lower speed limit is a hard sell, in part because of the value of time. If you're about to drive 65 miles at 65 mph, it'll take you an hour; say you get 20 miles per gallon, so that uses 3.25 gallons of gasoline. Slowing to 60 mph, it takes five minutes longer, but saves seven percent of that gasoline, or 0.23 gallons -- perhaps $1 worth. So if you value an hour at more than $12 (more generally, at more than three gallons of gasoline), you should drive faster! Of course I've committed the twin fallacies of "everything is linear" and a bunch of sloppy arithmetic, and I've ignored that different cars get different gas mileage, but the order of magnitude is right -- and it's clear to me some people value their time at more than this and some at less. And a better analysis would take into account the probability of getting in accidents, speeding tickets, etc. (I'm mostly pointing this out because otherwise some of you will.)
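Here's the same back-of-the-envelope arithmetic spelled out (my own sketch; the 7 percent figure is my reading of the "30 cents a gallon" claim):

```python
distance, mpg, price = 65.0, 20.0, 4.10   # miles, miles per gallon, dollars per gallon
extra_minutes = distance / 60 * 60 - distance / 65 * 60   # about 5 minutes slower
gallons_saved = 0.07 * (distance / mpg)                   # 7% of 3.25 gallons
dollars_saved = gallons_saved * price
breakeven_per_hour = dollars_saved / (extra_minutes / 60)
print(round(dollars_saved, 2), round(breakeven_per_hour, 2))
# roughly $0.93 saved, so slowing down pays only if your time is worth
# less than about $11-12 an hour
```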
Oh, and on a related note, people will do things for $100 worth of gas that they wouldn't do for $100 worth of money.
17 July 2008
Population densities vary over nine orders of magnitude
The United States has an area of 3,794,066 square miles, and a population, as of the 2000 census, of 281,421,906. This gives a population density of 74.2 people per square mile.
But what is the average population density that Americans live at? It's not 74.2 per square mile. Only about 11 percent of Americans live in census block groups (the smallest resolution the census goes down to; there are about 200,000 of these, corresponding to about 1,500 people each) with densities lower than this. That's not too surprising; that average includes lots of empty space.
But the median American, it turns out, lives in a block group with a density of 2,521.6 per square mile. At least, when I asked the web site I was using for the distribution of block groups by population density that's what it said; the front page says this number is 2,059.23. I suspect the smaller number is actually the median population density of block groups, not of individuals; the block groups tend to have lower populations in less dense areas, which explains the difference. This number was surprisingly high to me, and seems to illustrate how concentrated population is.
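For the curious, here's roughly what that statistic is -- a population-weighted median -- as a sketch of my own (the data below is made up; the real computation would run over all 200,000 block groups):

```python
def weighted_median_density(block_groups):
    """block_groups: list of (population, density) pairs."""
    block_groups = sorted(block_groups, key=lambda bg: bg[1])   # sort by density
    half = sum(pop for pop, _ in block_groups) / 2.0
    running = 0.0
    for pop, density in block_groups:
        running += pop
        if running >= half:
            return density   # density at which the "median person" lives

# Made-up data: denser block groups holding more people, as in the real data.
toy = [(100, 5.0), (200, 80.0), (400, 900.0), (800, 2500.0), (1600, 30000.0)]
print(weighted_median_density(toy))
# 30000.0 -- most *people* in this toy data are in the densest group, even though
# the median *block group* only has density 900.
```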
In case you're wondering, the most densely populated block group is one in New York County, New York -- 3,240 people in 0.0097 square miles, for about 330,000 per square mile. The least dense is in the North Slope Borough of Alaska -- 3 people in 3,246 square miles, or one per 1,082 square miles. The Manhattan block group I mention here is 360 million times more dense than the Alaska one; population densities vary over a huge range.
Here's a table; in the first row is a percentile n, in the second row the population density such that n% of Americans live in a block group with that density (in people per square mile) or less. (Generating such a table at fakeisthenewreal.com is slow, which is why I'm providing it here.)
Percentile | 5 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 95
Density | 29.3 | 64.9 | 226.9 | 677.5 | 1499.8 | 2521.6 | 3737.2 | 5257.1 | 7529.0 | 13261.9 | 24219.5

I hesitate to interpret this. But I must admit that I'm curious if demographers have some way of predicting the general shape of this data. It's clear in the US that more people live at "intermediate" densities than at very high or low ones -- but that's not exactly a meaningful statement.
(Facts from fake is the new real, crunching Census Bureau data.)
By the way, Wikipedia has an article entitled list of U. S. states by area. This includes an almost entirely useless map which colors the larger states darker. I can see which states are larger without the colors, because they're larger, which is kind of the point of a map. The area the state takes up on my screen should be proportional to its actual area.
16 July 2008
Base sixty is kind of tricky
Base sixty is kind of tricky. A traffic warden used a calculator to tell when the parking a driver had paid for would expire, got the wrong answer, and gave him a ticket. He got the wrong answer because he was treating time as a decimal -- so 2:49 became 2.49 -- and as you know, there are sixty minutes in an hour, not one hundred. The driver had paid for 75 minutes, so the warden found 2.49 + .75 = 3.24 and decided he had paid until 3.24. (I shudder to think what would have happened if the warden had noticed that 75 minutes is one hour fifteen minutes, and done the computation 2.49 + 1.15 = 3.64 -- obviously the time 3:64 doesn't exist.)
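The right way to do the warden's sum is to work in minutes rather than decimals; a short sketch (mine, not from the article):

```python
def add_minutes(hhmm, minutes):
    h, m = map(int, hhmm.split(":"))
    total = h * 60 + m + minutes          # everything in minutes, base 60 handled once
    return "%d:%02d" % divmod(total % (24 * 60), 60)

print(add_minutes("2:49", 75))   # 4:04, not the 3:24 that the decimal sum suggests
```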
Have there been cheap calculators that work in hours and minutes? I feel like there would be a demand for that; calculations involving time are probably among the most common ones in ordinary life. Then again, most people seem able to do them; this sounds like an isolated incident.
(via Eric Berlin.)
15 July 2008
Translating popular votes to electoral votes
By sheer chance, I came across the book Predicting Party Sizes by Rein Taagepera, a political scientist who was trained as a physicist. I was interested to run into a "theorem" (I'm not sure whether I can call it this, because the derivation in the book is rather heuristic) which states the following. Let V be the number of voters in a country like the United States which elects its president through an electoral college, and let E be the number of states in that country. Then let n = (log V)/(log E). For the United States at present, V is about 121 million (I'm using the turnout in the 2004 election), E is 51 (the District of Columbia is a "state" for the purposes of this discussion), and so n is about 4.7.
This quantity n is called the "responsiveness" of the system, and its rough interpretation is that if the party in control receives (1/2 + ε) of the popular vote, then it will receive (1/2 + nε) of the electoral vote, for small ε. More generally, let V_D and V_R be the number of popular votes obtained by the Democratic and Republican candidates, respectively; let E_D and E_R be their numbers of electoral votes. Then E_D/E_R is approximately (V_D/V_R)^n. Linearizing this around V_D/V_R = 1 recovers the first statement.
Anyway, Nate Silver at fivethirtyeight.com showed the results of some of his simulations about a month ago and claimed that a one-percent swing in the popular vote corresponds to 25 electoral votes. It turns out that 25 electoral votes is 4.6 percent of the electoral college as a whole, so based on his simulations n = 4.6. I take this as evidence that Silver is doing something right. (n is also in this neighborhood for data from actual elections.)
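Here's the arithmetic in a few lines (my own sketch, using the same round numbers as above):

```python
import math

V = 121_000_000   # roughly the 2004 presidential turnout
E = 51            # 50 states plus DC
n = math.log(V) / math.log(E)
print(round(n, 2))                 # about 4.7

# A one-point swing in the popular vote should move about n percent of the
# 538 electoral votes:
print(round(n * 0.01 * 538))       # about 25
```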
Shortage of fours
Gas stations have a shortage of fours.
My American readers probably know why -- gas has been over $4 per gallon for a while. Apparently the numbers come in sets of forty, four of each digit. They can also be bought individually. But there aren't too many manufacturers.
I'm kind of curious if there are more stations selling at $4.43 or $4.45 than $4.44 just because they don't have the appropriate digits. (I would have asked a similar question at $3.33 or $2.22. And I'll ask it again if we get to $5.55.) Stations might also price at $4.39 instead of $4.40, or $4.50 instead of $4.49, for similar reasons. It sounds like some of them are improvising digits, but reporters wouldn't know if a particular station charging $4.43 is doing this or not; it could only be figured out by looking at large amounts of data, and I'm not that curious.
And in New Hampshire some stations are pricing gas by the half-gallon, because their pumps can't handle prices higher than $3.999. So they indicate that they're doing so, set the pump at something like $2.05, and charge double what the pump reads, namely $4.10. Apparently some people are troubled by the mathematical demands this places on the consumer:
"If for no other reason, half pricing is confusing and can be inconvenient for the customer. When I buy gasoline I stop the pump at the dollar amount I want to spend. So let's say I have $60 to spend and the meter, if it's on half pricing — reads $31.50 and I forgot to stop it at $30, what do I do?" he said.
I hope people can double and halve in their heads. But there's the psychological issue -- they might forget to.
14 July 2008
Some quick statistics on the calibration quiz
On Saturday I gave a quiz from Ian Ayres' book Super Crunchers which asked you to provide 90% confidence intervals for ten numerical questions with well-defined answers. Roughly speaking, you should select your answers so that you expect to get nine of the questions right and you believe you're equally likely to have gotten each of them wrong.
Nineteen people have taken the quiz.
Out of the 190 individual answers received, 97 were correct -- slightly over half. The distribution of scores on the quiz is as follows:
Score | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Number of people | 1 | 4 | 3 | 4 | 2 | 3 | 1 | 0 | 1

In short, the respondents as a group confirm Ayres' claim that "almost everyone who answers these questions has the opposite problem of overconfidence -- they can't help themselves from reporting ranges that are too small." Ayres cites a book by J. Edward Russo and Paul J. H. Schoemaker, Decision Traps: Ten Barriers to Brilliant Decision-Making and How to Overcome Them, which I haven't read; supposedly "most" people get between three and six questions right. I'm actually somewhat surprised that you as a group don't seem all that different from the general population.
I have some other comments -- which questions seem particularly difficult or easy, what we might say about confidence intervals other than 90 percent -- but I'm hoping more people might answer, so I'll wait for that. (Although if the remaining answers are suspiciously better-calibrated than the answers so far, that might turn out to be not such a good idea.)
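For comparison, if every interval really did cover its target 90 percent of the time, the scores would look like draws from a Binomial(10, 0.9) distribution; here's a quick sketch (mine, not Ayres') of what that would predict for nineteen respondents:

```python
from math import comb

n_questions, p, respondents = 10, 0.9, 19
for k in range(n_questions + 1):
    prob = comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
    print(k, round(respondents * prob, 2))   # expected number of people scoring k

# Under perfect calibration most of the 19 respondents would score 9 or 10;
# the observed hit rate was 97/190, i.e. about 51% coverage instead of 90%.
print(97 / 190)
```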
12 July 2008
A prediction-making quiz
I just read Ian Ayres' book Super Crunchers, which talks about how the large amounts of data that are now routinely collected enable better predictions than before. Sort of like Freakonomics but a bit more statistical. (Although all the math is hidden -- but I knew that going in.)
Now, there was a recent article The End of Theory which predicts that we don't need theories, we can just mine our data for correlations; I don't believe this. And Ayres talks about how some predictive models need human input -- for example, a model for predicting how Supreme Court justices will vote needs people to read previous input on the cases in order to decide whether the ruling being appealed was liberal or conservative, and also to determine what the major issues involved in the case are. But he points out that people are bad at predicting things because we are overconfident about our predictions.
This piqued my curiosity. Here's a quiz; I want to see how good you are at calibrating your own predictions. (This is taken from Ayres' book, p. 113.) For each of the following ten questions, give a range that you are 90 percent confident contains the correct answer. Ayres' test implicitly uses English units, but if you want to use metric (which I suspect a lot of you are more comfortable in) that's fine; I'll convert.
So, for example, if one of the questions were "What is the population of Philadelphia?", and you gave the numbers "1.2 million, 1.6 million", that would indicate that you believe with probability 90 percent that the population of Philadelphia is in that interval. (The 2006 Census estimate for this, by the way, is 1,448,394.)
Your goal is to get exactly nine of these right. Yes, I know that sounds weird! But the point is that if you get all ten right, you're probably underestimating your own abilities to predict things. If you get eight or fewer, you're probably overestimating them.
Send your answers to me at izzycat AT gmail DOT com; don't leave them in comments.
Here are the questions:
1. How old was Martin Luther King, Jr. at death?
2. What is the length of the Nile River?
3. How many countries belong to OPEC?
4. How many books are there in the Old Testament?
5. What is the diameter of the moon?
6. What is the weight of an empty Boeing 747-400?
7. In what year was Mozart born?
8. What is the gestation period of an Asian elephant?
9. What is the air distance from London to Tokyo?
10. What is the depth of the deepest known point in the ocean?
Also:
1. feel free to forward this quiz to other people. (I encourage it, although there's a non-negligible chance I might regret this if I get too many answers. I'll survive.)
2. if you have stories about how you made your guess, send them to me; I may use them in a future post.
I'm not going to post the answers; none of them are hard to find. Once answers stop coming in I'll make a post about how good you are at making these predictions.
11 July 2008
Good's "singing logarithms"
I've previously mentioned Sanjoy Mahajan's Street Fighting Mathematics. (Yes, that's right, almost the entire sentence is links, deal with it.)
One thing I didn't mention is approximating logarithms using musical intervals, from that course. We all know 2^10 and 10^3 are roughly equal; this is the approximation that leads people to use the metric prefixes kilo-, mega-, giga-, tera- for 2^10, 2^20, 2^30, and 2^40 in computing contexts. Take 120th roots; you get 2^(1/12) ≈ 10^(1/40).
Now, 2^(1/12) is the ratio corresponding to a semitone in twelve-tone equal temperament. So, for example, we know that 2^(7/12) is approximately 3/2, because seven semitones make a perfect fifth. So log_10(3/2) ≈ 7/40 = 0.175; the correct value is 0.17609... Some more complicated examples are in Mahajan's handout.
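(Not from Mahajan's handout -- just a quick Python sketch of my own to check a few of these semitone approximations. The rule: round 12 log_2 x to the nearest integer to get a number of semitones, then divide by 40.)

import math

def singing_log10(x):
    # Since 2**(1/12) is approximately 10**(1/40), a ratio of k semitones
    # (i.e. 2**(k/12)) has common logarithm approximately k/40.
    semitones = round(12 * math.log2(x))
    return semitones / 40

for x in [3/2, 2, 3, 5]:
    print(x, singing_log10(x), math.log10(x))
# 3/2 -> 0.175 vs 0.17609...;  2 -> 0.3 vs 0.30103...
# 3   -> 0.475 vs 0.47712...;  5 -> 0.7 vs 0.69897...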
You might think "yeah, but when do I ever need to know the logarithm of something?" And that may be true; they're no longer particularly useful as an aid for calculation, except when you don't have a computer around. But I often find myself doing approximate calculations while walking, and I can't pull out a calculator or a computer! (To be honest I don't use this trick, but that's only because I have an arsenal of others.)
Is this pointless? For the most part, yes. But amusingly so.
The method is supposedly due to I. J. Good, who is annoyingly difficult to Google.
Oh, and a few facts I find myself using quite often -- (2π)^(1/2) ≈ 2.5, e^3 ≈ 20.
10 July 2008
Three beautiful quicksorts
Jon Bentley gives a lecture called Three Beautiful Quicksorts, as three possible answers to the question "what's the most beautiful code you've ever written?" (An hour long, but hey, I've got time to kill.)
Watch the middle third, in which some standard code for quicksort is gradually transformed into code for performing an analysis of the number of comparisons needed in quicksort, and vanishes in a puff of mathematical smoke.
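(This isn't Bentley's code; it's just a rough Python sketch of the starting point of that transformation -- a quicksort instrumented to count comparisons, which you can check against the asymptotic 2n ln n.)

import math, random

def quicksort_comparisons(a):
    # Sort a copy of a; return the number of element-vs-pivot comparisons.
    a = list(a)
    count = 0
    def sort(lo, hi):          # sorts the half-open range [lo, hi)
        nonlocal count
        if hi - lo < 2:
            return
        pivot = a[lo]
        i = lo + 1
        for j in range(lo + 1, hi):
            count += 1         # one comparison per element against the pivot
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]
        sort(lo, i - 1)
        sort(i, hi)
    sort(0, len(a))
    return count

n = 10000
print(quicksort_comparisons(random.sample(range(10 * n), n)), 2 * n * math.log(n))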
Although I must admit, I'm kind of annoyed that somewhere in there he slips into the idea that an average-case analysis is the most important thing. The first moment of a distribution is not everything you need to know about it! Although I admit that at times I subscribe to the school of thought that says "the first two moments are everything", but that's only because most distributions are normal.
(Note to those who don't get sarcasm: I don't actually believe that most distributions are normal.)
Why medians are dangerous
Greg Mankiw provides a graph of the salaries of newly minted lawyers, originally from Empirical Legal Studies.
There are two peaks, one centered at about $45,000 and one centered at about $145,000. The peak at the higher salary corresponds to people working for Big Law Firms; the one at the lower salary to people working for nonprofits, the government, etc.
The median is reported at $62,000, just to the right of the first peak, since the first peak contains slightly more people. But one gets the impression that if a few more people were to shift from the left peak to the right peak, the median would jump drastically upwards. We usually hear that it's better to look at the median than the mean when looking at distributions of incomes, house prices, etc. because these distributions are heavily skewed towards the right. But even that starts to break down when the distribution is bimodal.
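(To see this numerically, here's a toy Python version; the parameters are made up to mimic the shape of that graph, not taken from it.)

import random, statistics

random.seed(0)
# 52% of "lawyers" near $45,000, 48% near $145,000 -- made-up parameters.
salaries = ([random.gauss(45000, 8000) for _ in range(5200)] +
            [random.gauss(145000, 10000) for _ in range(4800)])
print(statistics.median(salaries))   # lands just to the right of the lower peak
print(statistics.mean(salaries))     # lands in the valley between the peaks

# Move 300 people (3%) from the lower peak to the higher one and the median jumps.
shifted = salaries[300:] + [random.gauss(145000, 10000) for _ in range(300)]
print(statistics.median(shifted))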
09 July 2008
Why devil plays dice?
Why devil plays dice?, by Andrzej Dragan, from the arXiv. I haven't read it; this post basically exists to forestall e-mails of the form "Have you seen the title of this paper?"
(Hat tip to The Quantum Pontiff.)
Lottery tickets with really bad odds
A CNN.com article talks about lottery tickets with zero probability of winning.
Why, you ask? Because some state lotteries continue selling the tickets for scratch-off games even after the top prize has been awarded. Therefore the odds stated on the ticket are, as of the time the ticket was purchased, incorrect.
But let's say that half the tickets for some game have already been sold, and the top prize not awarded -- then the tickets that are still out there have double the probability of winning that they did originally. You wouldn't see anybody complaining about that.
One way to fix this would be to have all the tickets be independent of each other, but drawn from the same distribution -- so instead of having one grand prize among the 100,000 tickets, each ticket independently has probability 0.00001 of being a grand prize ticket. But then there's a significant probability that there will be no grand prizes awarded, or that there would be two or more.
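(A quick check of that, in Python: with 100,000 tickets each independently a grand prize with probability 1/100,000, the number of grand prizes is Binomial(100000, 1/100000), which is approximately Poisson with mean 1.)

from math import comb

n = 100000
p = 1 / n
p0 = (1 - p) ** n                          # no grand prize at all: about 0.368
p1 = comb(n, 1) * p * (1 - p) ** (n - 1)   # exactly one grand prize: about 0.368
print(p0, p1, 1 - p0 - p1)                 # two or more: about 0.264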
And some lottery websites actually state which prizes have already been awarded. So it might be possible for somebody to use this information to their advantage, by betting only in lotteries where a disproportionate number of prizes remain to be awarded. This is basically the same principle as card-counting in blackjack, where the player bets more when the cards in the deck are more favorable. I suspect, though, that this wouldn't work well because the house edge in lotteries is much higher than that in casinos.
08 July 2008
On today's New York Times crossword
Today's New York Times crossword is by Tim Wescott. There is someone who's commented at Secret Blogging Seminar with that name.
Anyway, here are some of the answers:
4 down: EVEN TENOR
6 down: PERFECT GAME
11 down: ODD MEN OUT
25 down: SQUARE KNOTS
33 down: REAL MCCOY
37 down: PRIME TIME
There was one more clue saying that the first word of each of those answers (the ones whose clues were marked with a star) describes the number of its clue. So 4 is even, 6 is perfect, 11 is odd, 25 is square, 33 is real, and 37 is prime.
33 down seems like a bit of a cop-out to me. But I'm not saying I could do better at making a crossword. Crosswords (especially American-style ones) are hard to make; read the information-theoretic argument in MacKay's book for some justification why.
For the non-mathematicians who may have stumbled in (and the mathematicians who don't remember this particular bit of trivia), I feel like I should point out what a perfect number is. A number is perfect if it's equal to the sum of all the numbers it's divisible by, other than itself. So 6 is divisible by 1, 2, and 3, and 1 + 2 + 3 = 6. 28 is the next perfect number; it's divisible by 1, 2, 4, 7, and 14, and 1 + 2 + 4 + 7 + 14 = 28. But 12 isn't perfect; it's divisible by 1, 2, 3, 4, and 6, and 1 + 2 + 3 + 4 + 6 = 16, which isn't 12. We call 12 "abundant" because 16 (the sum of its divisors) is more than 12. Just under one quarter of integers are abundant, which is entirely irrelevant.
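(If you want to check that last claim yourself, here's a little Python sketch; the cutoff of 100,000 is arbitrary, and the density of abundant numbers up to that point comes out close to the asymptotic value.)

def proper_divisor_sum(n):
    # Sum of the divisors of n other than n itself.
    total = 1 if n > 1 else 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

limit = 100000
sums = {n: proper_divisor_sum(n) for n in range(2, limit)}
print([n for n, s in sums.items() if s == n])             # [6, 28, 496, 8128]
print(sum(1 for n, s in sums.items() if s > n) / limit)   # roughly 0.247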
07 July 2008
A political minimum spanning tree
This morning, Nate Silver of fivethirtyeight.com posted State Similarity Scores. For each pair of states, Silver reports a score that gives the political "distance" between the two states. (He actually reports only the three states closest to each state.)
These are based on an analysis of certain variables that appear to be important in US politics, weighted by their importance in determining state-by-state polling in the 2004 and 2008 presidential elections. As it turns out, the pair of states closest to each other in this metric is the Carolinas, followed by the Dakotas, Kentucky-Tennessee, Michigan-Ohio, and Oregon-Washington.
It occurred to me that the minimum-weight spanning tree for this data might look interesting. And indeed it does. I'm having some trouble articulating why it's interesting, but I just wanted to post the tree. There may be a slight issue because I don't have the full set of similarity scores, but the tree generated from the subset of the data that I do have is probably pretty close to the "true" tree and is quite interesting to look at. (The weight for the edge between any two states is 100 minus Silver's similarity score for that pair of states; Silver's similarity scores have a theoretical maximum of 100.)
Note that the positioning of the states in the drawing of the tree below is entirely irrelevant; I just attempted to draw the tree in such a way that people wouldn't be inclined to see edges that weren't actually there. In particular, Ohio is not somehow "unusual" even though the edges connecting it to adjacent states are long. (As a start, though, it does seem to be useful to think of Ohio as the center of the graph, in line with the conventional political wisdom that Ohio is at the political center of the US.) I thought about trying to make the distances in the drawing reflect the weights, but that was more trouble than I wanted to go to.
Also, some states that are close to each other in Silver's metric aren't close in the tree. There may be errors, since I did this by hand.
Here's the tree.
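(For anyone who'd rather not do this by hand: here's roughly the computation, as a Kruskal's-algorithm sketch in Python. The scores below are made up for illustration; they are not Silver's numbers.)

scores = {("NC", "SC"): 95.2, ("ND", "SD"): 94.5, ("KY", "TN"): 93.8,
          ("MI", "OH"): 93.1, ("OR", "WA"): 92.9, ("OH", "KY"): 88.0,
          ("TN", "NC"): 87.5, ("OH", "TN"): 85.0, ("SD", "OH"): 75.0,
          ("WA", "MI"): 60.0}

# Edge weight is 100 minus the similarity score; Kruskal considers edges in
# increasing order of weight.
edges = sorted((100 - s, a, b) for (a, b), s in scores.items())
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

tree = []
for w, a, b in edges:
    ra, rb = find(a), find(b)
    if ra != rb:                        # keep the edge only if it creates no cycle
        parent[ra] = rb
        tree.append((a, b, w))
print(tree)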
06 July 2008
Nomenclature clash
Prime Numbers for June 29 to July 5, from today's New York Times. (I don't know if this is a weekly thing; it could be but I don't recall seeing it before.)
The numbers are 46, 62000, 30, 18%, and 30000; each is important to some news story from this week. (If you want to get technical, 62000 and 30000 are approximations.)
Presumably they mean "prime" in the sense of "important". Or in the sense of "composite", but that would be a bit perverse.
05 July 2008
A couple of links
1. Jordan Ellenberg's review of Andrew Hodges' book One To Nine. Read the review, if only because it uses the word "mathiness". Ellenberg's review seems to imply that the book has similar content to most popular math books; sometimes I wonder how the publishing industry manages to keep churning out these books, but then I remember that the same thing is true in most other subjects and I'm just more conscious of it in mathematics.
2. Open Problem Garden, which is a user-editable (?) repository of open problems in mathematics. Thanks to Charles Siegel, my fellow Penn mathblogger, for pointing this out. The majority of the problems given there are in graph theory; that seems to be because Matt Devos, one of the most prolific contributors, is a graph theorist.
But I have to say that "garden" feels like the wrong word here; gardens are calm and peaceful and full of well-organized plants, which doesn't seem like a good way to describe problems that haven't been solved yet. "Forest" seems like a better metaphor to me -- certainly when I'm working on a problem that's not solved, it feels like hacking my way through a forest, not walking around a garden. Also, the use of "forest" enables bad graph theory jokes -- the problem of "negative association in uniform forests", due to Robin Pemantle, in particular sounds like it could be about sketchy people you meet in the woods.
(I gave a talk back in February where I mentioned this problem. I'm glad I didn't think of that joke then, because it's really bad and I would have just embarrassed myself.)
03 July 2008
Lightning and lotteries
From a rerun of Friends:
Ross: Do you know what your odds are of winning the lottery? You have a better chance of being struck by lightning 42 times.
Chandler: Yes, but there's six of us, so we'd only have to get struck by lightning 7 times.
Joey: I like those odds!
Unsurprisingly, Chandler seems to know that probability doesn't work this way; Joey doesn't.
Also, Ross is wrong. It seems the record for getting struck by lightning is Roy Sullivan, seven times. So nobody's been hit 42 times, while plenty of people have won the lottery.
I don't know how to calculate the odds that someone gets hit 42 times by lightning in their life; the lifetime incidence of getting hit is one in three thousand, and if you figure that lightning strikes are a Poisson process with rate 1/3000 per lifetime, as this article states, then the probability that lightning hits one person seven times is something like (1/3000)^7/7!, or about one in 10^28. (That's the probability that a Poisson with parameter 1/3000 takes the value exactly 7; I'm ignoring the normalizing factor of exp(-1/3000), which is essentially 1, and the even-more-negligible probability that someone gets hit eight or more times.)
Since the number of people who have ever existed is much less than 10^28, the existence of a person who's been hit seven times is very strong evidence that that's not the right model. My hunch is that each person's lightning strikes form a Poisson process, but with a rate that depends on the person. Roy Sullivan was a park ranger.
But the 1 in 3000 figure can't be trusted; the article also claims the annual risk of getting hit by lightning is one in 700,000. People don't live 700,000/3,000 (i.e. 233) years.
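(Here's a quick numerical check of that back-of-the-envelope number, in Python.)

from math import exp, factorial

lam = 1 / 3000                            # claimed lifetime rate of being hit
p7 = exp(-lam) * lam**7 / factorial(7)    # Poisson probability of exactly 7 hits
print(p7)                                 # about 9.1e-29, i.e. roughly one in 10^28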
Li's proof of Riemann has a flaw -- but all might not be lost?
Terry Tao claims that Li's proof of the Riemann hypothesis (which I wrote about yesterday) is flawed. (via Ars Mathematica.) But that was, I think, version 2 at the arXiv; the paper is now up to version 4, which apparently attempts to fix the flaw Tao claims in version 2.
Alain Connes has also weighed in at his blog; Li's paper relies on his work.
02 July 2008
Obama isn't average -- and that's a good thing.
Someone at the Washington Post is a bit confused about averages.
Basically, Barack and Michelle Obama (you've heard of them, right?) got a mortgage at a rate of 5.625% at a time when the average rate was 5.93% -- and so the Obama campaign finds itself playing defense. But as Nate Silver pointed out, this is evidence that the Obamas have good credit, and as various people commenting there pointed out, it's an average.
Some people get better-than-average rates. That's true by definition. (Although I suspect that more than half of people get a rate below the mean, because the right tail is probably longer than the left tail.)
Personally, I want my presidential candidates to be getting a good interest rate -- because it's evidence that they have good credit, which in turn is evidence for some sort of financial prudence. (Yes, I know, some people with bad credit got there because they got dealt a bad hand. It's evidence, not a proof.) And if someone is good at managing their own money, they might be good at managing the country's money.
And do we really want our president to be average?
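(A toy illustration of the skewness point above, in Python: with a right-skewed distribution, well more than half of the values fall below the mean. The distribution here is made up; it's not actual mortgage-rate data.)

import random, statistics

random.seed(1)
rates = [5.0 + random.lognormvariate(-0.5, 0.7) for _ in range(100000)]
mean = statistics.mean(rates)
below = sum(r < mean for r in rates) / len(rates)
print(mean, statistics.median(rates), below)   # around 64% of these rates are below the mean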
Li's proof of Riemann?
A proof of the Riemann hypothesis, by Xian-Jin Li.
I'm not qualified to judge the correctness of this, but glancing through it, I see that it at least looks like mathematics. Most purported proofs of the Riemann hypothesis set off the crackpot alarm bells in my head; this one doesn't. Li also stated Li's criterion in 1997, which is one of the many statements that are equivalent to RH, although I don't think it's used in the putative proof, and he wrote a PhD thesis titled The Riemann Hypothesis For Polynomials Orthogonal On The Unit Circle (1993), so this is at least coming from someone who's been thinking about the problem for a while and is part of the mathematical community.
01 July 2008
Yudkowsky on Bayesian reasoning
An Intuitive Explanation of Bayesian Reasoning, by Eliezer Yudkowsky (of Overcoming Bias fame).
A sequel to this is A Technical Explanation of Technical Explanation [sic].