Ah, how quickly folks backpedal when they’re caught doing something a little less than transparent. And perhaps something a little bit… squishy, ethics-wise.
That’s what Facebook “data scientist” Adam D.I. Kramer was doing on Sunday, when he posted a status update to his own Facebook page trying to explain why Facebook ran a bad experiment and manipulated — more than usual — what people saw in their news feed.
For some Tuesday-morning humor, let’s take a look at what Kramer said on Sunday, versus what he wrote in the study.
Let’s start with examining the proclaimed motivation for the study, now revealed by Kramer:
We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook. ((Which they already told us in the study: “A test of whether posts with emotional content are more engaging.”))
To what end? Would you manipulate the news feed even further, making it seem like everybody’s life was a cherry on top of an ice cream sundae, and cut back on showing negative content?
It makes little sense that a for-profit company would care about this, unless they could have some actionable outcome. And any actionable outcome from this study would make Facebook seem even less connected to the real world than it is today. ((Facebook seems less connected with my real life, seeing as my own news feed seems to have largely gone from posts about people’s lives to “links I find interesting” — even though I never click on those links!))
In the study (Kramer et al., 2014), the researchers describe an experiment that was both massive in scale and explicitly manipulative:
We show, via a massive (N = 689,003) experiment on Facebook…
The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. […] Two parallel experiments were conducted for positive and negative emotion: One in which exposure to friends’ positive emotional content in their News Feed was reduced, and one in which exposure to negative emotional content in their News Feed was reduced. In these conditions, when a person loaded their News Feed, posts that contained emotional content of the relevant emotional valence, each emotional post had between a 10% and 90% chance (based on their User ID) of being omitted from their News Feed for that specific viewing.
If you were a part of the experiment, posts containing an emotional word had up to a 90 percent chance of being omitted from your news feed. In my book, and in most people’s, that’s pretty manipulative.
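For the technically inclined, here is roughly what that mechanism boils down to. The following is a minimal Python sketch of my own, assuming an omission rate that is fixed per user (derived from the User ID) and applied afresh to each emotional post on every feed load; the function names and hashing scheme are purely illustrative and are not Facebook’s actual code.

```python
import hashlib
import random

def omission_probability(user_id):
    """Map a User ID to a stable omission rate between 10% and 90%.

    Illustrative guess at the mechanism the paper describes ("between
    a 10% and 90% chance, based on their User ID"), not Facebook's
    actual implementation.
    """
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    bucket = int(digest, 16) % 81        # 0..80
    return 0.10 + bucket / 100.0         # 0.10 .. 0.90

def filter_feed(user_id, posts):
    """Drop each emotional post with the user's omission probability,
    re-rolled on every feed load ("for that specific viewing")."""
    p = omission_probability(user_id)
    return [post for post in posts
            if not post["is_emotional"] or random.random() >= p]

# The same user always gets the same rate, but which emotional posts
# are hidden changes from one viewing to the next.
feed = [{"text": "so happy today!", "is_emotional": True},
        {"text": "bought a new couch", "is_emotional": False}]
print(omission_probability(42))
print(filter_feed(42, feed))
```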
Now look how Kramer (aka Danger Muffin) minimizes the impact of the experiment in his Facebook-posted explanation:
Regarding methodology, our research sought to investigate the above claim by very minimally deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500)…
Ah, we go from “up to a 90 percent chance” to “very minimally deprioritizing a small percentage of content.” Isn’t it amazing how creatively one can characterize the exact same study in two virtually contradictory ways?
Was it Significant or Not?
The study itself makes multiple claims and conclusions about the significance and impact of their findings (despite their ludicrously small effect sizes). Somehow all of these obnoxious, over-reaching claims got past the PNAS journal reviewers (who must’ve been sleeping when they rubber-stamped this paper) and were allowed to stand without qualification.
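It is worth spelling out why “statistically significant” and “meaningful” are very different claims at this scale: with nearly 700,000 subjects, a difference far too small to matter to any individual still produces a vanishingly small p-value. The sketch below uses made-up numbers (an assumed Cohen’s d of 0.02 and two groups of 345,000), not the study’s data; it only illustrates how sample size inflates significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical numbers for illustration only: two groups of 345,000
# "users" whose outcome differs by a trivially small amount
# (Cohen's d = 0.02). These are NOT the study's actual data.
n = 345_000
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p-value:   {p_value:.2e}")   # "highly significant"
print(f"Cohen's d: {cohens_d:.3f}")  # yet a negligible effect size
```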
In Kramer’s explanation posted on Sunday, he suggests their data didn’t really find anything anyway that people should be concerned about:
And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it… ((Which is a researcher-squirrely way of saying, “Our experiment didn’t really find any noteworthy effect size. But we’re going to trumpet the results as though we did (since we actually found a journal, PNAS, sucker-enough to publish it!).” ))
Which directly contradicts the claims made in the study itself:
These results suggest that the emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks […]
Online messages influence our experience of emotions, which may affect a variety of offline behaviors.
Look-y there — no qualifiers on those statements. No saying, “Oh, but this wouldn’t really impact an individual’s emotions.” Nope, in my opinion, a complete contradiction of what one of the researchers is now claiming.
But Was It Ethical?
A lot of controversy has surrounded whether this sort of additional manipulation of your news feed in Facebook is ethical, and whether it’s okay to embed a global research consent form into a website’s terms of service agreement. (Facebook already manipulates what you see in your news feed via its algorithm.)
First, let’s get the red-herring argument out of the way that this research is not the same as internal research companies do for usability or design testing. That kind of research is never published, and never done to examine scientific hypotheses about emotional human behavior. It’s like comparing apples to oranges to suggest these are the same thing.
Research on human subjects generally needs to be signed off on by an independent third-party called an institutional review board (IRB). These are usually housed at universities and review all the research being conducted by the university’s own researchers to ensure it doesn’t violate things like the law, human rights, or human dignity. For-profit companies like Facebook generally do not have an exact IRB equivalent. If a study on human subjects wasn’t reviewed by an IRB, whether it was ethical or moral remains an open question.
Here’s “data scientist” ((I use quotes around this title, because all researchers and scientists are data scientists — that’s what differentiates a researcher from a storyteller.)) Kramer’s defense of the research design, as noted in the study:
[The data was processed in a way] such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.
However, Kashmir Hill suggests that the Facebook Data Use Policy was changed 4 months after the study was conducted to explicitly allow for “research” use of Facebook data.
They also seem to have fudged the question of university IRB approval for the study. Hill earlier reported that Cornell’s IRB didn’t review the study. ((In fact, they note Hancock, a named author on the paper, only had access to results — not even the actual data!)) None of the researchers have stepped forward to explain why they apparently told the PNAS editor they had run it by a university’s IRB.
The UK Guardian’s Chris Chambers offers up this summary of the sad situation:
This situation is, quite frankly, ridiculous. In what version of 2014 is it acceptable for journals, universities, and scientists to offer weasel words and obfuscation in response to simple questions about research ethics? How is it acceptable for an ethics committee to decide that the same authors who assisted Facebook in designing an interventional study to change the emotional state of more than 600,000 people did, somehow, “not directly engage in human research”?
Icing on the Cake: The Non-Apology
Kramer didn’t apologize for doing the research without users’ informed consent. Instead, he apologized for the way he wrote up the research:
I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused.
People aren’t upset you did the research; they’re upset you did the research on them without their knowledge or consent. And sorry, Facebook: burying “consent” in thousands of words of legal mumbo-jumbo may protect you legally, but it doesn’t protect you from common sense. Or from people’s reactions when they find out you’ve been using them like guinea pigs.
People simply want a meaningful way to opt out of your conducting experiments on them and their news feed without their knowledge or consent.
Facebook doesn’t offer this today. But I suspect that if Facebook wants to continue doing research of this nature, it will have to offer this option to its users soon.
An Ethics Case Study for All Time
This situation is a perfect example of how not to conduct research on your users’ data without their explicit consent. It will be taught in ethics classes for years — and perhaps decades — to come.
It will also act as a case study of what not to do as a social network if you want to remain trusted by your users.
Facebook should offer all users a genuine apology for conducting this sort of research on them without their explicit knowledge and permission. They should also change their internal research requirements so that all studies conducted on their users go through an external, university-based IRB.
Further reading
Facebook fiasco: was Cornell’s study of ‘emotional contagion’ an ethics breach?
Facebook Added ‘Research’ To User Agreement 4 Months After Emotion Manipulation Study
Reference
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. PNAS. www.pnas.org/cgi/doi/10.1073/pnas.1320040111
5 comments
I 100% absolutely agree. In response to learning that I have been participating as an unwitting lab rat in human experimentation, I canceled my Facebook page. I might consider reopening my account, but only if the following conditions are met:
1) Some kind of external regulation is applied that can prevent this sort of human experimentation/manipulation from taking place in future without them facing a hefty fine and jail time (The EU is apparently on this now. I would like to see the American Congress tackle this and Canada start making some laws to control this.)
2) Those involved are fired since they are obviously incapable of appreciating what they did and why it was so very wrong.
3) Facebook issues a full and proper apology (not the half-assed “but we only did it for your own good and it was such a small thing and besides you ticked the ToS box so you asked for it” one they have already issued)
4) Facebook proves they have instituted an “in house” ethical review system. (They say they have, but judging from their apology, they have not.)
5) Facebook sets up an independent, external Research Ethics Board that meets proper standards in terms of training, that is used by all staff, and that approves in advance any further human experimentation of any kind.
6) They agree to obtain proper and full informed consent from all their test subjects in advance of any further experimentation; or, where an experiment is deemed to meet the narrow accepted standards for waiving advance informed consent, subjects are fully informed afterward that the experiment was done, why it was done, and what the results were, and they are offered compensation for their participation and support for any perceived harm.
7) Facebook formally withdraws their odious PNAS paper with the appropriate full and proper apology for its lack of appropriate ethical review and lack of informed consent.
Until then I am no longer posting anything on Facebook.
John,
I LOVED this post. There are so many ethical issues involved with what FB did that I’m not sure where to begin. I commend you for your actions.
One of my fears with social media is that in the race to capture new users and retain them, people become unknowing participants in secret marketing schemes.
What most concerns me about the FB story is that they knew what they were doing and likely only created change after they were outed. And we have to ask the question – what else are they doing that we don’t know about?
Social media comes up a lot in my org psych classes. I’ll be sure to share this with students.
Best,
John
I have a friend whose life has difficulties due to psychological problems. Part of this is a belief in conspiracies and his life being manipulated by media. He is a big user of Facebook. This experiment has not been helpful. If anything it has strengthened his belief that his life is being controlled and no matter what he does he will still be the subject of experiments. I’ve pretty much dedicated the last two years of my life providing a stable and sane environment for him to get his life in control.
A week after this experiment was made public, he was in hospital under psychiatric care. His paranoia had amplified to unmanageable levels. It doesn’t matter whether he was or wasn’t one of the chosen few (hundred thousand) in this experiment – how can I convince him that there is no conspiracy when clearly there actually is one, and on quite a grand scale? Would Facebook shareholders like to compensate me for two wasted years of my life? Would they like to do something to support mental health and regain this young man as a contributing member of society? He has skills, imagination, and talent which are now being burned up in a distressed mind. Let’s just say I’m considerably pissed off and unlikely to waste any more of my resources on Facebook or their advertisers.
I came here while grading a student paper, since this post was cited. I am not trying to defend FB, and you make some good points. But some of what you say is misleading, which does not help your case. In several cases it sounds like you are really over-reaching to be critical.
1. “It makes little sense that a for-profit company would care about this, unless they could have some actionable outcome.” I teach business management, and I totally disagree. One reason they would care that springs to mind would be investigating the claims of their critics (e.g. the “self-promotion-envy spiral” where viewing positive posts causes negative affect). I would assume that if they confirmed these criticisms, no one would have ever heard about this study. But these criticisms do not appear to be valid, in line with theories of emotional contagion (i.e. positive posts tend to lead more to positive, rather than negative, posts). So, FB can circulate these results and reinforce their company culture, which is a significant motivator of their workforce. I think the latter is a legitimate reason, though it is less legit if they were to hide the negative results (but we can only assume they would).
2. “Ah, we go from ‘up to a 90 percent chance’ to ‘very minimally deprioritizing a small percentage of content.’” Yes, well… both of the statements accurately describe what the study did.
3. “In Kramer’s explanation posted on Sunday, he suggests their data didn’t really find anything anyway that people should be concerned about:”
‘And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it…’
“Which directly contradicts the claims made in the study itself”
That is not what this means. He is saying they estimated the minimum sample size that would be necessary to detect the expected effect. This is an ethical thing to do (some of your other valid criticisms notwithstanding). He is NOT saying that this won’t really affect people’s emotions, at least not with this statement (I haven’t read his blog post). (See the power-analysis sketch after this comment.)
“Research on human subjects generally needs to be signed off on by… an institutional review board (IRB)” Misleading. Not if this was a FB-led study, and if the university-affiliated researchers were not ‘engaged in the research.’ Given the details in the paper, they were not ‘engaged.’ Supported here: https://blog.petrieflom.law.harvard.edu/2014/06/29/how-an-irb-could-have-legitimately-approved-the-facebook-experiment-and-why-that-may-be-a-good-thing/
FB doesn’t have an IRB, per se, but that is no reason to assume that they unethically conducted the research.
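(For readers unfamiliar with the idea raised in point 3 above, here is a minimal sketch of that kind of power calculation: the minimum sample size needed to detect an expected effect at a given significance level and power. The effect size of d = 0.02 is an assumption chosen purely for illustration, not a figure taken from the paper.)

```python
from statsmodels.stats.power import TTestIndPower

# Power analysis: how many subjects per condition are needed to detect
# an assumed effect size of d = 0.02 with 80% power at alpha = 0.05?
# The effect size here is illustrative, not the paper's reported figure.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.02,
                                   alpha=0.05,
                                   power=0.80,
                                   alternative='two-sided')
print(f"Minimum users per condition: {n_per_group:,.0f}")
```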
The problem with your argument is that you assume any self-referenced, non-peer-reviewed data produced in-house by Fb would somehow be seen as unbiased, objective, and scientific. Just because you hire a scientist to conduct a study doesn’t mean you’re going to get objective results.
The point I was making was that Fb would only conduct and release the results of a study that proved whatever finding was to their benefit. They would put any study that showed the opposite into the trashcan. It’s self-serving, biased bullshit.
In terms of an IRB, yes, human-subjects research absolutely needs to be signed off by such a board — whether the research is university-based or not. In many states, it’s a requirement of the law. From an ethical stance, of COURSE human subjects’ research needs such a review and approval. Otherwise what’s to stop companies from conducting unethical and potentially harmful research on unsuspecting human customers anytime they’d like?
If a human-subjects study isn’t reviewed by an IRB, I have to assume the scientist had some very good reason for that decision. And none of the reasons I can imagine are going to leave me with a warm, fuzzy feeling toward him or her, since this is literally Science 101.