16 May 2008

Correlation coefficients and the popularity gap

The Popularity Gap (Sarah Kliff, Newsweek, May 15 issue).

Apparently, the people who end up being successful later in life are the ones who think people like them in middle school, not necessarily the ones who are actually well-liked in middle school. The article reports on a study by Kathleen Boykin McElhaney, who is not particularly important to what I'm going to say, because I'm going to comment on something that I assume was introduced by the folks at Newsweek.

The Newsweek article continues:
One of McElhaney's most interesting findings is that self-perceived and peer-perceived popularity don't line up too well; most of the well-liked kids do not perceive themselves as well liked and vice versa. The correlation between self-perceived and peer-ranked popularity was .25, meaning about a quarter of the kids who were popular according to their classmates also thought they were popular. For the other three quarters, there was a disconnect between how the teen saw themselves and what their peers thought.
I can't read the original journal article (the electronic version doesn't become available until a year after publication, and I'm not going to walk to campus in the rain and hunt around an unfamiliar library just to track this down!), but the Newsweek article says enough to make it clear that the study wasn't using a two-point "popular/unpopular" scale. I'm inclined to think that the "correlation" here is what's usually referred to as the "correlation coefficient" -- and this is usually explained in popular media by saying that "one-fourth of the variation in how popular students believed they were was due to how popular they actually were" or some similar phrase. I'm not a statistician, so I won't try to explain why that phrase might be wrong; if you are, please feel free to weigh in!
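(One standard objection, which a commenter below also raises: the "proportion of variation explained" reading goes with the squared correlation coefficient, not the coefficient itself. A minimal check of the arithmetic, taking only the r = 0.25 figure from the article:)

```python
r = 0.25            # correlation coefficient reported in the article
r_squared = r ** 2  # proportion of variance "explained" is r^2, not r

# With r = 0.25, peer rankings would account for only about 6% of the
# variance in self-perceived popularity, not 25%.
print(r_squared)  # 0.0625
```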

But let's assume that half of students are actually popular, and half of students think they're popular. (This might be a big assumption; recall the apocryphal claim that 75 percent of students at [insert elite college here] come in thinking they'll be in the top 25 percent of their class.) Then if only 25 percent of the students who are actually popular think they're popular, there's actually a negative correlation between actual popularity and perceived popularity! More formally, let X be a random variable which is 0 if someone's not (objectively) popular and 1 if they are; let Y play the same role for their self-assessed popularity. Then E(XY) is the probability that a randomly chosen student both is popular and thinks they are, which is 1/8 in this case, while E(X) E(Y) = 1/4, which is larger; since the covariance is E(XY) - E(X) E(Y) = 1/8 - 1/4 = -1/8, the correlation is indeed negative.
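The back-of-the-envelope calculation above can be checked numerically; the 50/50 split and the 25 percent overlap are my hypothetical assumptions from the paragraph, not figures from the study:

```python
# Binary popularity model (hypothetical numbers from the text):
# half the students are actually popular, half think they are,
# and only a quarter of the actually-popular ones think so.
p_x = 0.5          # P(X = 1): actually popular
p_y = 0.5          # P(Y = 1): thinks they're popular
p_xy = 0.25 * p_x  # P(X = 1 and Y = 1) = 1/8

cov = p_xy - p_x * p_y                      # E(XY) - E(X)E(Y) = -1/8
var_x = p_x * (1 - p_x)                     # variance of a Bernoulli(1/2)
var_y = p_y * (1 - p_y)
corr = cov / (var_x ** 0.5 * var_y ** 0.5)  # correlation coefficient

print(cov)   # -0.125
print(corr)  # -0.5
```

So under these made-up numbers the correlation isn't just negative, it's a rather striking -0.5.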

Then again, if there actually were a negative correlation -- if people were so bad at self-assessment as to be worse than useless at it -- then that would be quite interesting. As it is, there seems to be in general a weak positive correlation between how P someone is (where P is some desirable trait, say popularity in this case) and how P they think they are.

And the fact that I bothered to write this post probably will lead you to guess -- correctly -- that I wasn't all that popular in high school.


Anonymous said...

You might not consider yourself a statistician, but I do, since I'm not familiar with the nuanced difference.

Adrienne said...

The fact that I read your blog every day and get excited about the math must mean I wasn't popular then, either?

Efrique said...

Proportion of variation corresponds to squared correlation.

Anonymous said...

Hi there -- that was my study. :) The reporter got the info wrong on that point. In case you are curious, the correlation between self-reported social acceptance and peer rankings of popularity was r=0.25 (positive). For what it is worth, what I actually said in the interview was that this means there is a lot of unexplained variance -- variation in self reports not explained by peer rankings (& vice versa).