peter_hurford comments on 2014 Survey of Effective Altruists - Less Wrong

27 Post author: tog 05 May 2014 02:32AM


Comment author: peter_hurford 02 May 2014 06:35:44PM 2 points [-]

Those are good points. It would confound things too much to change midstream, but now we'll know better for next year.

Comment author: RobbBB 05 May 2014 04:07:59AM 5 points [-]

I'd rather see 'consequentialist' supplemented or replaced by specific questions that get at substantive ethical or meta-ethical disputes in EA and philosophy. 'Utilitarian' and 'deontologist' mean lots of different things to different people, and on their strictest definitions they don't entail a lot of their most interesting or widely cited ideas. Perhaps have an exploratory question one year asking non-utilitarians to write in their main objection to utilitarianism, then convert that into a series of questions the following year.

Comment author: peter_hurford 05 May 2014 05:50:36PM 2 points [-]

This was something I suggested to Tom because I'd be interested too. But ultimately we thought that only a small group of EAs would really have substantive ethical opinions, and we decided to trim things for survey length. We added a box asking for clarifications at the end of the survey to provide more of this outlet.

Comment author: SaidAchmiz 05 May 2014 05:45:51AM *  2 points [-]

One of the main objections to utilitarianism, it seems to me, is skepticism about the possibility (or even coherence of the notion) of aggregating utility across individuals. That's one of my main objections, at any rate.

Skepticism about the applicability of the VNM theorem to human preferences is another issue, though that one might be less widespread.

Edit: The SEP describes classic utilitarianism as actual, direct, evaluative, hedonistic, maximizing, aggregative (specifically, total), universal, equal-consideration, agent-neutral consequentialism. I have definite issues with the "actual", "direct", "hedonistic", "aggregative", "total", and "equal-consideration" parts of that. (Though I expect that my issues with "actual" will be shared by a significant portion of those who consider themselves utilitarians here, and my issues with "hedonistic" and "direct" may be as well. That leaves "aggregative"+"total", and "equal-consideration", as the two aspects most likely to be sources of philosophical conflict.)

Comment author: Kaj_Sotala 05 May 2014 11:08:44AM *  4 points [-]

Those sound like objections to preference utilitarianism but not hedonistic utilitarianism. Although it's not technically possible yet, measuring the intensity of the positive and negative components of an experience sounds like something that ought to be at least possible in principle. And the applicability of the VNM theorem to human preferences becomes irrelevant if you're not interested in preferences in the first place.

Comment author: Vaniver 05 May 2014 11:14:17AM 2 points [-]

Although it's not technically possible yet, measuring the intensity of the positive and negative components of an experience sounds like something that ought to be at least possible in principle.

I don't see how having a quantitative, empirical measure which is appropriate for one individual helps you with comparisons across individuals. Do we really want to make people utility monsters because their neural currents devoted to measuring happiness have a higher amperage?

Comment author: Kaj_Sotala 05 May 2014 11:25:26AM 4 points [-]

I was assuming that the measure would be valid across individuals. I wouldn't expect the neural basis of suffering or pleasure to vary so much that you couldn't automatically adapt it to the brains in question.

Do we really want to make people utility monsters because their neural currents devoted to measuring happiness have a higher amperage?

Well yes, hedonistic utilitarianism does make it possible in principle that Felix ends up screwing us over, but that's an objection to hedonistic utilitarianism rather than the measure.

Comment author: Vaniver 05 May 2014 11:38:11AM 2 points [-]

I was assuming that the measure would be valid across individuals.

I mean, the measure is going to be something like an EEG or an MRI, where we determine the amount of activity in some brain region. But while measuring the electrical properties of that region is just an engineering problem, and the units are the same from person to person, and maybe even the range is the same from person to person, that doesn't establish the ethical principle that all people deserve equal consideration (or, in the case of range differences or variance differences, that neural activity determines how much consideration one deserves).

Well yes, hedonistic utilitarianism does make it possible in principle that Felix ends up screwing us over, but that's an objection to hedonistic utilitarianism rather than the measure.

It's not obvious to me that all agents deserve the same level of moral consideration (i.e. I am open to the possibility of utility monsters), but it is obvious to me that some ways of determining who should be the utility monsters are bad (generally because they're easily hacked or provide unproductive incentives).

Comment author: Kaj_Sotala 05 May 2014 12:07:43PM 2 points [-]

Well it's not like people would go around maximizing the amount of this particular pattern of neural activity in the world: they would go around maximizing pleasure in the-kinds-of-agents-they-care-about, where the pattern is just a way of measuring and establishing what kinds of interventions actually do increase pleasure. (We are talking about humans, not FAI design, right?) If there are ways of hacking the pattern or producing it in ways that don't actually correlate with pleasure (of the kind that we care about), then those can be identified and ignored.

Comment author: Vaniver 05 May 2014 12:43:54PM *  2 points [-]

Well it's not like people would go around maximizing the amount of this particular pattern of neural activity in the world

Depending on your view of human psychology, this doesn't seem like that bad a description, so long as we're talking about people only maximizing their own circuitry. (Maximizing is probably wrong, rather than keeping it within some reference range.)

We are talking about humans, not FAI design, right?

That's what I had in mind, yeah.


My core objection, which I think lines up with SaidAchmiz's, is that even if there's the ability to measure people's satisfaction objectively (so that we can count the transparency problem as solved), that doesn't tell us how to make satisfaction tradeoffs between individuals.

Comment author: Kaj_Sotala 05 May 2014 03:01:07PM 1 point [-]

even if there's the ability to measure people's satisfaction objectively (so that we can count the transparency problem as solved), that doesn't tell us how to make satisfaction tradeoffs between individuals.

I agree with this. I was originally only objecting to the argument that aggregating utility between individuals would be impossible or incoherent, but I do not have an objection to the argument that the mapping from subjective states to math is underspecified. (Though I don't see this as a serious problem for utilitarianism: it only means that different people will have different mappings rather than there being a single unique one.)

Comment author: SaidAchmiz 05 May 2014 11:31:57AM 2 points [-]

Yes, true enough[1]; I did not properly separate those objections in my comment. To elaborate:

I object to hedonistic utilitarianism on the grounds that it clearly and grossly fails to capture my moral intuitions or those of anyone else whom I consider not to be evading the question. A full takedown of the "hedonistic" part of "hedonistic utilitarianism" is basically (at least) all of Eliezer's posts about the complexity of value and so forth, and I won't rehash it here.

To be honest, hedonistic utilitarianism seems to me to be so obviously wrong that I'm not even all that interested in having this sort of moral philosophy debate with an effective altruist (or anyone else) who holds such a view. I mean, to start with, my hypothetical interlocutor would have to rebut all the objections raised to hedonistic utilitarianism over the centuries since it's been articulated, including, but not limited to, the aforementioned Lesswrong material.

I object to preference utilitarianism because of the "aggregation of utility" and "possibility of constructing a utility function" issues[2]. I think this is the more interesting objection.

[1] I'm not sure "intensity of the positive and negative components of an experience" is a coherent notion. There may not be a single quantity like that to measure. And even if we can measure something which we think qualifies for the title, it may be measurable only in some more-or-less absolute terms, while leaving open the question of how this hypothetical measured quantity matches up with anything like "utility to this particular experiencer". But, for the sake of the argument, I'm willing to grant that such a quantity can indeed be usefully measured, because this is certainly not my true rejection.

[2] These are my objections to the "preference" component of preference utilitarianism; my objection to classical utilitarianism also includes objections to other components, which I have enumerated in the grandparent.

Comment author: Kaj_Sotala 05 May 2014 11:44:32AM 3 points [-]

Two replies:

1) Even if hedonistic utilitarianism would ultimately be wrong as a full description of what a person values, "maximize pleasure while minimizing suffering" can still be a useful heuristic to follow. Yes, following that heuristic to its logical conclusion would mean forcibly rewiring everyone's brains, but that doesn't need to be a problem for as long as forcibly rewiring people's brains isn't a realistic option. HU may still be the best approximation of a person's values in the context of today's world, even if it wasn't the best description overall.

2) The arguments on complexity of value and so on establish that the average person's values aren't correctly described by HU. This still leaves open the possibility of someone only approving of those of their behaviors that serve to promote HU, so there may well be individual people who accept HU, due to not sharing the moral intuitions which motivate the objections to it.

Comment author: SaidAchmiz 05 May 2014 12:04:16PM 2 points [-]

On 1): I am skeptical of replies to the effect that "yes, well, X might not be quite right, but it's a useful heuristic, therefore I will go on acting as if X is right". For one thing, a person who makes such a reply usually goes right back to saying "X is right!" (sans qualifiers) as soon as the current conversation ends. Let's get clear on what we actually believe, I generally think; once we've firmly established that, we can look for maximally effective implementations.

For another thing, HU may be the best approximation etc. etc., but that's a claim that at least should be made explicitly, such that it can be examined and argued for; a claim of this importance shouldn't come up only in such tangential discussion branches.

For a third thing, what happens when forcibly rewiring people's brains becomes a realistic option?

On 2): I think there's two issues here. There could indeed be people who accept HU because that's what correctly describes their moral intuitions. (Though I should certainly hope they do not think it proper to impose that moral philosophy on me, or on anyone else who doesn't subscribe to HU!)

"Only approving of those behaviors that serve to promote HU" is, I think, a separate thing. Or at least, I'd need to see the concept expanded a bit more before I could judge. What does this hypothetical person believe? What moral intuitions do they have? What exactly does it mean to "promote" hedonistic utilitarianism?

Comment author: tog 05 May 2014 08:09:27PM 3 points [-]

There could indeed be people who accept HU because that's what correctly describes their moral intuitions. (Though I should certainly hope they do not think it proper to impose that moral philosophy on me, or on anyone else who doesn't subscribe to HU!)

Why would this be improper? Note that it doesn't follow from any meta-ethical position.

Comment author: SaidAchmiz 05 May 2014 08:20:11PM 1 point [-]

If you say "all that matters is pain and pleasure", and I say "no! I care about other things!", and you're like "nope, not listening. PAIN AND PLEASURE ARE THE ONLY THINGS", and then proceed to enact policies which minimize pain and maximize pleasure, without regard for any of the other things that I care about, and all the while I'm telling you that no, I care about these other things! Stop ignoring them! Other things matter to me! but you're not listening because you've decided that only pain and pleasure can possibly matter to anyone, despite my protestations otherwise...

... well, I hope you can see how that would bother me.

It's not just a matter of us caring about different things. If it were only that, we could acknowledge the fact, and proceed to some sort of compromise. Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided! Clearly.

Comment author: tog 06 May 2014 11:27:50PM 2 points [-]

Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure.

They may think it's incorrect if they're realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists.

[Description of situation] ... well, I hope you can see how that would bother me.

Here are 3 non-exhaustive ways in which the situation you described could be bothersome:

(i) If your first order ethical theory (as opposed to your meta-ethics), perhaps combined with very plausible facts about human nature, requires otherwise. For instance if it speaks in favour of toleration or liberty here.

(ii) If you're a cognitivist of the sort who thinks she could be wrong, it could increase your credence that you're wrong.

(iii) If you'd at least on reflection give weight to the evident distress SaidAchmiz feels in this scenario, as most HUs would.

Comment author: Kaj_Sotala 06 May 2014 05:13:48AM 1 point [-]

Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided!

I don't think that hedonistic utilitarianism necessarily implies moral realism. Some HUs will certainly tell you that the people who morally disagree with them are misguided, but I don't see why the proportion of HUs who think so (vs. the proportion of HUs who think that you are simply caring about different things) would need to be any different than it would be among the adherents of any other ethical position.

Maybe you meant your comment to refer specifically to the kinds of HUs who would impose their position on you, but even then the moral realism doesn't follow. You can want to impose your values on others despite thinking that values are just questions of opinion. For instance, there are things that I consider basic human rights and I want to impose the requirement to respect them on every member of every society, even though there are people who would disagree with that requirement. I don't think that the people who disagree are misguided in any sense, I just think that they value different things.

Comment author: Kaj_Sotala 05 May 2014 03:15:56PM *  2 points [-]

Let's get clear on what we actually believe, I generally think; once we've firmly established that, we can look for maximally effective implementations.

For another thing, HU may be the best approximation etc. etc., but that's a claim that at least should be made explicitly

I agree that it would often be good to be clearer about these points.

For a third thing, what happens when forcibly rewiring people's brains becomes a realistic option?

At that point the people who consider themselves hedonistic utilitarians might come up with a theory that says that forcible wireheading is wrong and switch to calling themselves supporters of that theory. Or they could go on calling themselves HUs despite not forcibly wireheading anyone, in the same way that many people call themselves utilitarians today despite not actually giving most of their income away. Or some of them could decide to start working towards efforts to forcibly wirehead everyone, in which case they'd become the kinds of people described by my reply 2).

"Only approving of those behaviors that serve to promote HU" is, I think, a separate thing. Or at least, I'd need to see the concept expanded a bit more before I could judge.

By this, I meant to say "only approve of whatever course of action HU says is the best one".

Comment author: SaidAchmiz 05 May 2014 09:09:23PM 2 points [-]

At that point ... [various possibilities]

Yeah, I meant that as a normative "what then", not an empirical one. I agree that what you describe are plausible scenarios.

Comment author: Kaj_Sotala 06 May 2014 05:03:22AM 0 points [-]

In that case, I'm unsure of what kind of an answer you were expecting (unless the "what then" was meant as a rhetorical question, but even then I'm slightly unsure of what point it was making).

Comment author: tog 05 May 2014 05:45:24PM 1 point [-]

Can you suggest some? These could go into next year's survey, though we're keeping that short - more likely they'd go into a followup that Ben Landau-Taylor of Leverage Research is running.