According to the breathless proclamations of the researchers behind a recently published study (and a Wired Science news report on the same), you’d think so. Until you look at how the study was designed.
Research results are fantastic things — they have the ability to add to our knowledge on a subject of interest. But we’re seeing a growing trend that many journals are not managing well these days — the trend of generalizing from data to conclusions that can’t be drawn from the study actually conducted. And journal editors, such as those at PLoS ONE, aren’t reining in bold statements like these (taken from the current study):
These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.
Really now?
So if authors can get away with making such grand conclusive statements, you’d think they were talking about the results of a large-scale, cross-cultural study done on hundreds (if not thousands) of individuals in different countries.
And then you read what was actually done — a small 28-person study with subjects recruited from the researchers’ local university in the UK. Wow. I mean, really. The East Asian subjects came from only two Asian countries, and the median age was 24. There’s no mention of what impact, if any, being a foreigner in a new country might have had on the results (e.g., the anxiety of being in a new and unfamiliar culture). It’s also not clear whether any analyses were conducted to see if gender played a role in the findings, or how age might affect the data, or how someone living in their country of birth might differ from a visiting foreigner who’s whisked into a psychology laboratory within a week of arrival and asked to behave in a way that represents an entire culture!
That wasn’t the worst part. You can obviously draw few solid conclusions from a biased sample, and any you do draw should come with a frank discussion of that sample’s significant limitations. But there isn’t a single mention of the study’s limitations in the journal article. In other words, the journal published the article and accepted everything the authors claimed without even suggesting that they may be over-reaching with their conclusions.
But why is any of this considered new data to begin with? It’s long been accepted that Asian cultures avoid eye contact because it can be interpreted as a sign of aggressiveness or disobedience, especially with strangers. In Western cultures, eye contact is expected and cultivated, and we feel something is amiss if we’re not looking at someone’s eyes. Plus, context is everything. What is appropriate and expected in a business situation may be completely different in a relaxed social setting within the same culture. This experiment, in its artificial setting, captured none of these nuances and instead took the equivalent of a psychological sledgehammer to a complex interaction.
For these reasons, this kind of study contributes little new knowledge or understanding of how cultures interact and relate to one another. And PLoS ONE should press its reviewers to do a much better job of enforcing bare minimums in the studies it chooses to publish.
Read the study: PLoS ONE: Culture Shapes How We Look at Faces
Read the news article: Culture Shapes How People See Faces
1 comment
Thank you. I see this kind of thing often: big conclusions drawn from limited research. And I always assume the reporters properly vet the research. I should read these things much more critically.