The Princeton Review publishes annual “Best of…” college lists based on a now-online survey of students at each college. The company describes the survey process as follows:
The Princeton Review survey asks students 80 questions about their school’s academics/administration, campus life, student body, and themselves. Tallies for this edition’s rankings are based on surveys of 120,000 students (about 325 per campus) at the 366 schools in the book (not at all schools in the nation) during the 2006-07 and/or previous two school years.
Which is all well and good, until you read the fine print:
Our survey is qualitative and anecdotal rather than quantitative. In order to guard against producing a write-up that’s off the mark for any particular college, we send our administrative contact at each school a copy of the profile we intend to publish with ample opportunity to respond with corrections, comments, and/or outright objections. In every case in which we receive requests for changes, we take careful measures to review the school’s suggestions against the student survey data we collected and make appropriate changes when warranted.
What is most compelling to us about how representative our survey findings are is this: We ask students who take the survey — after they have completed it — to review the information we published about their school in the previous year and grade us on its accuracy and validity. Year after year we’ve gotten high marks: This year, 81 percent of students said we were right on.
So basically they’re admitting that their survey data is meant to provide a subjective, narrative picture of colleges, not data from which broad generalizations can be drawn. And yet that is exactly what the Princeton Review proceeds to do: draw broad generalizations from the surveys and rank colleges in a quantitative manner. Colleges themselves may exert some unknown additional influence on whether they appear on any given list. If the ranking process appears less than transparent, that’s by design.
How can researchers turn anecdotal data into quantitative data? Scientifically, they can’t; there is no research-based procedure for doing so, because it is literally an apples-to-oranges comparison. When an organization does something of this nature, you can argue that any such rankings are simply the opinions of the people who compiled the list, stacked on top of the opinions of the students who actually attended the school.
All of which would be fine, except that the Review puts all of this material into lists that suggest it has generalizable meaning when it does not. The list of “Top 20 Party Schools,” for instance, is nothing more than some people’s opinions based on a “combination of survey questions concerning the use of alcohol and drugs, hours of study each day, and the popularity of the Greek system.” They could just as easily have chosen a different set of responses to examine, and that choice would have skewed schools’ rankings differently.
Larger schools tend to have big Greek systems and a wide variety of social and other non-academic opportunities for their students (which can translate into fewer hours of study each day). Does that make such a school more likely to be a “party school” once you fold in what students say about how much they drink? Of course not. It may just mean the school is a great place to both learn and socialize.
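To make that objection concrete, here is a minimal sketch showing how the same survey responses produce different rankings depending on which items are combined and how they are weighted. Every school name, number, and weight below is invented for illustration; the Princeton Review’s actual items and formula are not public.

```python
# Hypothetical survey averages (0-100 scales) for three invented schools.
# None of these figures come from the Princeton Review; they exist only to
# show how a composite ranking shifts with the chosen weights.
schools = {
    "Big State U":   {"alcohol": 60, "drugs": 30, "low_study": 70, "greek": 80},
    "Mid College":   {"alcohol": 75, "drugs": 40, "low_study": 50, "greek": 20},
    "Small Liberal": {"alcohol": 50, "drugs": 55, "low_study": 60, "greek": 10},
}

def rank(weights):
    """Order schools by a weighted sum of survey items, highest score first."""
    scores = {
        name: sum(weights[item] * value for item, value in items.items())
        for name, items in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Emphasizing the Greek system puts the big school on top...
print(rank({"alcohol": 1, "drugs": 1, "low_study": 1, "greek": 2}))
# ...while emphasizing substance use puts a different school on top.
print(rank({"alcohol": 2, "drugs": 2, "low_study": 0.5, "greek": 0}))
```

Same responses, opposite verdicts: the composite is an editorial choice, not a measurement.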
A poster presented at APS this past weekend, entitled “Self-fulfilling prophecy and Princeton Review: descriptions or prescriptions for drinking in college?”, suggested to me that measures like the Princeton Review’s have questionable validity. The researchers examined a decade of actual alcohol-use data from a single mid-sized university in western New York and compared it to the related rankings in the Princeton Review. They found that the Princeton Review’s rankings had no systematic correlation with any of the alcohol-use data they examined. What they did find is that alcohol consumption correlated with the rankings after they were published, suggesting a self-fulfilling prophecy of sorts.
The researchers noted that since their study was conducted at only one university, the research would need to be replicated at other colleges and universities before one could generalize from the findings. But the findings are interesting to note nonetheless.
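For anyone curious what “correlated after publication” means operationally, here is a minimal sketch of the kind of lagged-correlation check that pattern implies. The data are synthetic and the 0.8 “influence” coefficient is invented; the study’s actual dataset and analysis are not public.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic data for one hypothetical campus (not the study's data).
# Suppose the published "party school" score varies at random each year
# (higher score = reputed bigger party school)...
scores = rng.normal(50, 10, size=20)
# ...and suppose campus drinking each year follows the PREVIOUS year's
# published score (plus noise) rather than driving it: students arrive
# expecting what the book told them to expect.
drinking = 20 + 0.8 * np.roll(scores, 1) + rng.normal(0, 3, size=20)
scores, drinking = scores[1:], drinking[1:]  # drop the wrapped-around year

# Same-year association: this year's drinking vs. this year's score.
r_same, _ = pearsonr(scores, drinking)
# Lagged association: next year's drinking vs. this year's score.
r_lag, _ = pearsonr(scores[:-1], drinking[1:])

print(f"same-year r = {r_same:.2f}")  # weak by construction
print(f"lagged    r = {r_lag:.2f}")   # strong by construction
```

The diagnostic is the asymmetry: a published score that fails to track concurrent behavior but predicts the following year’s behavior reads as a prescription rather than a description, which is exactly the poster’s title.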
Reference:
Clark, C. D., et al. (2008). Self-fulfilling prophecy and Princeton Review: Descriptions or prescriptions for drinking in college? Poster presented at the 20th Annual Convention of the Association for Psychological Science, Chicago, IL, May 24.
6 comments
“[…] They found that the Princeton Review’s rankings had no systematic correlation with any of the alcohol-use data they examined. What they did find is that alcohol consumption correlated with the rankings after they were published, suggesting a self-fulfilling prophecy of sorts. […]”
Until I reread this a couple of times, I thought the inference was an “unauthorized leap” at best. Then I had an epiphany.
A student planning a career in the arts would research schools ranked high in the visual and performing arts, rather than schools at the bottom of the list or ones that offer the arts only as a minor.
A student who is an ardent abstainer would zero in on schools with no reported alcohol use; of course schools with high reported alcohol use would be magnets for the party-hardy crowd.
Self-fulfilling.
If I were evaluating colleges and universities, I’d place more credence in what students report than in what schools promote. So even if the “Princeton 366” anecdotes don’t correlate with reality now, self-fulfillment can fix that.
Sometimes my obtuseness amazes me.