Many people argue that Facebook's study of how its users' emotions changed depending on the emotional content of the messages in their Facebook feeds wouldn't have been approved by the average ethical review board, because Facebook didn't seek informed consent for the experiment.

Is the harm that the average ethical review board prevents less than the harm it causes by preventing research from happening? Are principles such as requiring informed consent from all research participants justifiable from a utilitarian perspective?


Probably the marginal ethics review board is a net negative, but the existence of ethical review boards is a net positive.

I'd think the 'ethical' in 'ethical review board' has nothing to do with ethics; it's more of a PR-wary review board. Limiting science to status-quo-bordering questions doesn't seem maximally efficient, but it is a reasonable safety precaution. However, the board's typical view may be skewed away from realistic estimates of safety. For example, genetic modification of humans is probably among the least disruptive kinds of biological research (compared to, say, biological weapons), yet it is considered controversial.

Ethics boards tend not to be utilitarian (or in many cases, even consequentialist) in their judgements. Many times their rules happen to improve net good, but that's not their goal.

The principle of informed consent stems from a deontological "do no harm" perspective, rather than a balance-of-values perspective. On the whole, I don't trust anyone to know my utility very well, so this over-caution seems best to me. But it's clearly not optimal from an outside perspective.

Ethics boards tend not to be utilitarian (or in many cases, even consequentialist) in their judgements.

Likely, but having a review board may still yield a net utilitarian outcome compared to not having one.

having a review board may still yield a net utilitarian outcome compared to not having one

By "net utilitarian outcome" I'm guessing you mean "overall higher utility in the universe". And I agree, it's higher than some alternate universes that don't contain ethics boards. However, it's probably lower than universes with (competent) utilitarian ethics boards. And the last is probably worse than universes with (competent) utilitarian researchers and no need for ethics boards.

It always depends on what you compare it against.

For the particular Facebook instance, websites do that kind of testing all day. I don't see an issue.

As for ethical review boards, I'd expect they're mainly a drag on producing value. Like any check, I'm sure they prevent some stupid and harmful things, put a stamp of approval on others, and prevent useful and helpful things as well. Predominantly they're an exercise in Morality Theater by Ethical Authorities for Bureaucratic Ass Covering. The main useful thing they might do is enforce a consistent policy across an organization.

For the particular Facebook instance, websites do that kind of testing all day. I don't see an issue.

If websites do things like this all day but society as a whole believes that to be immoral, it's going to be done in the dark and the resulting knowledge doesn't go into the public domain. A lot of value that society could have doesn't materialize.

Most of these sorts of meta interventions have not been tested on their own terms. (One can do a Bayesian update on Bayesian updating, and A/B test A/B testing, as the sketch after this comment illustrates, but it's much more difficult to use the Precautionary Principle on the Precautionary Principle, for example.)

So we should expect ethical review boards to be PR positives, because they were constructed for reasons of PR; I don't see any particular reason to expect them to be net positives when it comes to ethics.
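To make the A/B-testing half of that parenthetical concrete, here is a minimal Python sketch of such a meta-experiment; the variant counts, effect sizes, and noise model are all illustrative assumptions, not anything from the study or this thread:

```python
import random

# A toy simulation of "A/B testing A/B testing": randomize, per product
# change, whether the shipped variant is chosen via an A/B test or by
# default, then compare average outcomes across the two meta-arms.

random.seed(0)

def measured_lift(true_lift, noise=0.5):
    """A noisy estimate of a variant's true effect, as a single test sees it."""
    return true_lift + random.gauss(0, noise)

with_testing, without_testing = [], []
for _ in range(10_000):
    # Each product change has three candidate variants; their true
    # effects ("lifts") are drawn from a standard normal distribution.
    variants = [random.gauss(0, 1) for _ in range(3)]
    if random.random() < 0.5:
        # Meta-arm 1: run an A/B test and ship the best-measured variant.
        with_testing.append(max(variants, key=measured_lift))
    else:
        # Meta-arm 2: skip testing and ship the default (first) variant.
        without_testing.append(variants[0])

print("mean true lift with A/B testing:    %.3f" % (sum(with_testing) / len(with_testing)))
print("mean true lift without A/B testing: %.3f" % (sum(without_testing) / len(without_testing)))
```

The point is only that the meta-level question, "does A/B testing pay for itself?", can be answered with the same randomize-and-compare machinery; the Precautionary Principle offers no analogous self-test.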

it's much more difficult to use the Precautionary Principle on the Precautionary Principle

Seems quite simple to me. We should never use the Precautionary Principle, because we cannot rule out the possibility that it would do harm. ;)

That's not stable; the Precautionary Principle suggests that we shouldn't use the Precautionary Principle on the Precautionary Principle, because we cannot rule out the possibility that it would do harm.

It is quite stable to say that we should never use the Precautionary Principle because the principle is logically inconsistent, precisely for this reason. This is stable because refusing to use the principle is not logically inconsistent.
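One way to make that inconsistency precise (this formalization and its predicate names are mine, not taken from the comments or the Wikipedia article):

```latex
% R(x): "the possibility that x does harm cannot be ruled out"
% do(x): "x is carried out"
\[
  \mathrm{PP} \;:\; \forall x.\; R(x) \rightarrow \neg\,\mathrm{do}(x)
\]
% Instantiating x with the principle itself: R(PP) holds, since we
% cannot rule out that applying PP does harm (e.g. by blocking
% beneficial research), so
\[
  \mathrm{PP} \;\vdash\; \neg\,\mathrm{do}(\mathrm{PP}).
\]
% The principle forbids its own application. The bare policy of
% never applying PP quantifies over nothing and asserts nothing
% about itself, so refusing the principle remains consistent.
```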

*Laughs* I added the "internal inconsistency" part to the Wikipedia article ages ago (2011, specifically).

I see that it has citations now; I didn't bother, as I was just annoyed with people who kept arguing against advancing technology and wanted to throw a thumbtack in one of their gears.

It's been weighed down with some unnecessary language (as some kind of compromise over an edit war, I assume) in addition to some citations, but the basic structure looks like it's still intact. The unnecessary language argues that the precautionary principle -isn't- logically inconsistent, by implying that the risks of the precautionary principle are both proximally known and calculable, but it takes half a brain cell to notice that the implication isn't supported or supportable.

I find the citation to State of Fear particularly amusing; it suggests the edit wars were a proxy battle from the climate change edit wars.


Wikipedia isn't about consistency; in fact, its rules ban both original research and primary sources. It's about whatever can be found in secondary sources, which of course tend to be inconsistent.

It's just the usual recursion eating its own tail :-)

Hi Christian, do you have a link to that Facebook study and the ensuing controversy? I must have missed that study...

Simply Googling "Facebook" and "study" brings you to the issue. This Forbes article is an example of the public criticism Facebook got for running the study (don't take it for a good description of the study).

Many people argue that Facebook's study of how its users' emotions changed depending on the emotional content of the messages in their Facebook feeds wouldn't have been approved by the average ethical review board, because Facebook didn't seek informed consent for the experiment.

A slightly stronger form of this argument is that the study is unethical because the users didn't consent to being manipulated and studied in this fashion, and that an ethical review board would have noticed this and nixed it. Which is to say, the stronger argument focuses on the ethical issues that might have been averted, rather than on what an ethics review board would conclude: the former is an argument in its own right, where the latter is an argument from authority.

I don't see an ethical lapse here; when Facebook modulates the content of messages without performing an experiment, informed consent doesn't enter into it. If you can do a thing ethically, calling your act an experiment, or paying attention to how people react, doesn't create an additional ethical burden.

Nor does calling your actions an experiment absolve you of ethical burdens. Information is valuable, and from a utilitarian perspective the information may outweigh the costs, but the utilitarian obligation to maximize doesn't end at the point where the scales first begin to tip.

the study is unethical because the users didn't consent to being manipulated and studied in this fashion

That makes all A/B testing unethical.

Not all; presumptively, it is possible to give informed consent. But unless the test subjects do give informed consent, under this principle, yes, it would be unethical.

(I do not agree with this principle, for the reasons I already cited: experimentation doesn't confer unique ethical qualities on behavior; the ethical qualities are inherent in the behavior itself, and the general rules about experimentation were exported from medicine, where the behaviors involved do have more questionable ethics.)

If you can do a thing ethically, calling your act an experiment, or paying attention to how people react, doesn't create an additional ethical burden.

It doesn't follow that if doing X doesn't make anyone worse off, we should allow X; this fails to consider the impact of incentives. It may be that, if we permit X, X is better for everyone in the current situation, but permitting X also affects which situations arise in the first place, and that can leave everyone worse off overall.