Recently, Facebook conducted an "experiment" aimed at inducing negative emotions in its users by manipulating the content presented to them. A Facebook acquaintance of mine posted a link to an article arguing the "other side" of the obvious position on this issue. Herein is my response… I agree with the authors' conclusion that a lightly-modified version of the study might have passed IRB review. I have no strong opinion on whether the academic authors were legally required to obtain IRB review: too little factual information is available for me to form a judgement.
The important point that I think is missed here is that Facebook had an ethical obligation, quite aside from any legal one, to have the study reviewed by an outside ethics panel before moving forward.
I find the argument that "this is what Facebook does every day anyway" to be quite disingenuous: in this case, Facebook was directly trying to observe the induction of strong negative emotions in its users (whether via positively-biased or negatively-biased content is a total red herring). This is the opposite of trying to make users happy, which is presumably what Facebook is "doing every day". The claim that the study was arguably a failure and thus that no harm was done is a post hoc rationalization; if they had tried to drain users' bank accounts for "research" and failed, someone should still rightfully be thrown in jail. (I am reminded of the inimitable Sideshow Bob: "Attempted murder? Now honestly, did they ever give anyone a Nobel prize for 'attempted chemistry'?")
The evidence that social media interactions can have extreme negative emotional consequences, even to the point of inducing suicide, is pretty strong. I generally hate slippery slope arguments. However, if you accept this study as acceptable behavior absent any kind of responsible external review or oversight, I fail to understand where the line would be drawn in researching potentially more effective negative inductions. Should Facebook post floods of suicide-related content on selected users' timelines without prior consent? I would think we would learn a lot from the resulting comparative suicide rates.
The bottom line for me is that during the twentieth century the scientific community decided that psychological research with the potential to damage subjects should be subject both to stringent controls to prevent that damage and to the strongest kind of informed consent from the subjects. Anyone, Facebook included, who fails to go along with this broad consensus is ethically bankrupt—a villain—even if they somehow avoid legal jeopardy.