Comment author: Kaj_Sotala 05 May 2014 11:44:32AM 3 points [-]

Two replies:

1) Even if hedonistic utilitarianism were ultimately wrong as a full description of what a person values, "maximize pleasure while minimizing suffering" can still be a useful heuristic to follow. Yes, following that heuristic to its logical conclusion would mean forcibly rewiring everyone's brains, but that doesn't need to be a problem as long as forcibly rewiring people's brains isn't a realistic option. HU may still be the best approximation of a person's values in the context of today's world, even if it isn't the best description overall.

2) The arguments on complexity of value and so on establish that the average person's values aren't correctly described by HU. This still leaves open the possibility of someone only approving of those of their behaviors that serve to promote HU, so there may well be individual people who accept HU, due to not sharing the moral intuitions which motivate the objections to it.

Comment author: SaidAchmiz 05 May 2014 12:04:16PM 2 points [-]

On 1): I am skeptical of replies to the effect that "yes, well, X might not be quite right, but it's a useful heuristic, therefore I will go on acting as if X is right". For one thing, a person who makes such a reply usually goes right back to saying "X is right!" (sans qualifiers) as soon as the current conversation ends. Let's get clear on what we actually believe, I generally think; once we've firmly established that, we can look for maximally effective implementations.

For another thing, HU may be the best approximation etc. etc., but that's a claim that at least should be made explicitly, such that it can be examined and argued for; a claim of this importance shouldn't come up only in such tangential discussion branches.

For a third thing, what happens when forcibly rewiring people's brains becomes a realistic option?

On 2): I think there are two issues here. There could indeed be people who accept HU because that's what correctly describes their moral intuitions. (Though I should certainly hope they do not think it proper to impose that moral philosophy on me, or on anyone else who doesn't subscribe to HU!)

"Only approving of those behaviors that serve to promote HU" is, I think, a separate thing. Or at least, I'd need to see the concept expanded a bit more before I could judge. What does this hypothetical person believe? What moral intuitions do they have? What exactly does it mean to "promote" hedonistic utilitarianism?

Comment author: ike 05 May 2014 11:34:33AM 0 points [-]

I've seen Newcomb and Dust specks vs Torture but not Trolley (although I've seen that one before in other places). Which sequences do I need to finish for those?

If the trolley one is the same as the "standard" version, then it's fairly trivial within the framework of Orthodox Judaism (if I'm allowed to bring that in), because of strict rules about death. I'll elaborate further when I'm up to the question. The other two are a lot more complicated for me.

Comment author: SaidAchmiz 05 May 2014 11:39:15AM 0 points [-]

I don't think there's a LessWrong-specific take on the trolley problem, so I'm assuming shminux is just referring to the usual one.

Comment author: Kaj_Sotala 05 May 2014 11:08:44AM *  4 points [-]

Those sound like objections to preference utilitarianism but not hedonistic utilitarianism. Although it's not technically possible yet, measuring the intensity of the positive and negative components of an experience sounds like something that ought to be at least possible in principle. And the applicability of the VNM theorem to human preferences becomes irrelevant if you're not interested in preferences in the first place.

Comment author: SaidAchmiz 05 May 2014 11:31:57AM 2 points [-]

Yes, true enough[1]; I did not properly separate those objections in my comment. To elaborate:

I object to hedonistic utilitarianism on the grounds that it clearly and grossly fails to capture my moral intuitions or those of anyone else whom I consider not to be evading the question. A full takedown of the "hedonistic" part of "hedonistic utilitarianism" is basically (at least) all of Eliezer's posts about the complexity of value and so forth, and I won't rehash it here.

To be honest, hedonistic utilitarianism seems to me to be so obviously wrong that I'm not even all that interested in having this sort of moral philosophy debate with an effective altruist (or anyone else) who holds such a view. I mean, to start with, my hypothetical interlocutor would have to rebut all the objections raised to hedonistic utilitarianism over the centuries since it was first articulated, including, but not limited to, the aforementioned LessWrong material.

I object to preference utilitarianism because of the "aggregation of utility" and "possibility of constructing a utility function" issues[2]. I think this is the more interesting objection.

[1] I'm not sure "intensity of the positive and negative components of an experience" is a coherent notion. There may not be a single quantity like that to measure. And even if we can measure something which we think qualifies for the title, it may be measurable only in some more-or-less absolute terms, while leaving open the question of how this hypothetical measured quantity matches up with anything like "utility to this particular experiencer". But, for the sake of the argument, I'm willing to grant that such a quantity can indeed be usefully measured, because this is certainly not my true rejection.

[2] These are my objections to the "preference" component of preference utilitarianism; my objection to classical utilitarianism also includes objections to other components, which I have enumerated in the grandparent.

Comment author: thebestwecan 02 May 2014 06:53:26PM 0 points [-]

I think it'd be interesting to know more about the specific ethical views of ethically-minded EAs, but the majority of EAs are not well-versed enough to make Utilitarianism vs. Other Consequentialism distinctions. It's good to make a big survey like this as easy to fill out as possible.

Same thing about the "political views" point, although there are standards for left vs. right across countries: http://en.wikipedia.org/wiki/Left%E2%80%93right_politics

Comment author: SaidAchmiz 05 May 2014 06:18:27AM 2 points [-]

the majority of EAs are not well-versed enough to make Utilitarianism vs. Other Consequentialism distinctions

I think that's a problem! (I discuss in this comment some reasons why.)

Comment author: tog 01 May 2014 10:59:09PM 2 points [-]

I judge this to be a problematic criterion. See this comment, esp. starting with "To put this another way ...", for why I think so.

That comment makes a lot of sense. It depends what we use the criterion for. In the survey, it's to gather information, and it's for precisely this reason that I chose not to ask if people were 'EAs' in your loose sense - almost everyone would say yes. I'm curious which uses you think the criterion is problematic for.

My contention is that there's a distinct separation between, on the one hand, the general idea that we should be altruistic (in whatever sense we decide is meaningful and useful) and that we should seek to optimize the effectiveness of our altruism, and on the other hand, the loose community of people who share certain values, certain approaches to ethics, etc. (as I outline in the above-linked comment), which are not necessarily causally or conceptually entangled with the former (more general) idea.

It's a matter of degree, but in the EA context (which sets a high bar), I personally call people 'altruistic' if (but not only if) they've donated >=10% of a real income for over a year or they've consistently spent over an hour a week doing something they'd otherwise rather not do to help others.

My contention is that there's a distinct separation between, on the one hand, the general idea that we should be altruistic (in whatever sense we decide is meaningful and useful) and that we should seek to optimize the effectiveness of our altruism, and on the other hand, the loose community of people who share certain values, certain approaches to ethics, etc. (as I outline in the above-linked comment), which are not necessarily causally or conceptually entangled with the former (more general) idea.

That's right, if by 'conceptually entangled' you mean 'necessarily connected', or even 'commonly accepted by both groups of people'. For example, I believe utilitarianism's widely accepted by EAs (though the survey may show otherwise!), but not entangled with merely valuing altruism and the effectiveness of altruism.

This is problematic for various reasons, I think. I won't clutter this thread by starting a debate on those reasons (unless asked), but I think it's at least important (and relevant to endeavors like this survey) to recognize this distinction.

I see no harm in thread-cluttering, at least here - go for it.

Comment author: SaidAchmiz 05 May 2014 06:16:04AM *  1 point [-]

Here is the promised other issue I see with the conflation of the general[1] and specific[2] forms of effective altruism.

You do not actually ever argue for the ideas making up that specific form.

It seems to go like this:

"We all think being altruistic is good, right? Of course we do. And we think it's important to be effective in our altruism, don't we? Of course. Good! Now, onwards to the fight for animal rights, the saving of children in Africa, the application of utilitarian principles to our charity work, and all the rest."

Now, as I say in my other comments, one issue is that potential newcomers to the movement might assent to those first two questions, but on reaching the "Now, onwards ..." say — "whoa, whoa, where did that suddenly come from?". But the other issue is that it seems like you yourselves haven't given much thought to those positions. How do you know they're right, those philosophical and moral ideas? A lot of EA writing seems not to even consider the question! It's not like these are obvious principles you're assuming — many intelligent people, on LessWrong and elsewhere, do not agree with them!

Of course I don't actually think you've simply accepted these ideas out of some sort of blind go-alonging with some liberal crowd. This is LessWrong; I think better of you folks than that. (Although some EA-ers without an LW-or-similar background may well have given the matter just as little thought as that.) Presumably, you were, at some point, convinced of these ideas, in some way, by some arguments or evidence or considerations.

But I have no idea what those considerations are. I have no idea what convinced you; I don't know why you believe what you believe, because you hardly even acknowledge that you believe these things. In most EA writings I've seen, they are breezily assumed. That is not good for the epistemic health of the movement, I think.

I think it would be good to have some effort to clearly delineate the ideas that are held by, and commonly taken as background assumptions by, the majority of people in the EA movement; to acknowledge that these are nontrivial philosophical and moral positions, which are not shared by all people or even all who identify as rationalists; to explain how it was that you[3] became convinced of these ideas; and to lay out some arguments for said ideas, for potential disagreers to debate, if desired.

[1] "Being altruistic is good, and we should be effective in our altruistic actions."
[2] The specific cluster of ideas held by a specific community of people who describe themselves as the EA community.
[3] By "you" I don't necessarily mean you, personally, but: as many prominent figures in the EA movement as possible, and more generally, anyone who undertakes to write things intended to build the EA movement, recruit, etc.

Comment author: RobbBB 05 May 2014 04:07:59AM 5 points [-]

I'd rather see 'consequentialist' supplemented or replaced by specific questions that get at substantive ethical or meta-ethical disputes in EA and philosophy. 'Utilitarian' and 'deontologist' mean lots of different things to different people, and on their strictest definitions they don't entail a lot of their most interesting or widely cited ideas. Perhaps have an exploratory question one year asking non-utilitarians to write in their main objection to utilitarianism, then convert that into a series of questions the following year.

Comment author: SaidAchmiz 05 May 2014 05:45:51AM *  2 points [-]

One of the main objections to utilitarianism, it seems to me, is skepticism about the possibility (or even coherence of the notion) of aggregating utility across individuals. That's one of my main objections, at any rate.

Skepticism about the applicability of the VNM theorem to human preferences is another issue, though that one might be less widespread.

Edit: The SEP describes classic utilitarianism as actual, direct, evaluative, hedonistic, maximizing, aggregative (specifically, total), universal, equal-consideration, agent-neutral consequentialism. I have definite issues with the "actual", "direct", "hedonistic", "aggregative", "total", and "equal-consideration" parts of that. (Though I expect that my issues with "actual" will be shared by a significant portion of those who consider themselves utilitarians here, and my issues with "hedonistic" and "direct" may be as well. That leaves "aggregative"+"total", and "equal-consideration", as the two aspects most likely to be sources of philosophical conflict.)

Comment author: Torello 04 May 2014 04:35:37AM 5 points [-]

"Nothing in Biology Makes Sense Except in the Light of Evolution"

— Theodosius Dobzhansky

The fact that a theory that can be stated in ten words frames an entire discipline is quite incredible. Compared to group theory and probability, it sure seems like an easier uploading process as well.

Comment author: SaidAchmiz 04 May 2014 05:12:28AM 3 points [-]

What are the ten words or less in which evolution can be stated?

Comment author: Torello 03 May 2014 07:44:50PM 2 points [-]

I would love to hear what Richard Dawkins would say in reply to this quote.

Personally, I think it's great advice--challenging people immediately and directly is often not a good long-term strategy.

Comment author: SaidAchmiz 04 May 2014 12:14:30AM 21 points [-]

Dawkins, in arguments with theists, homeopaths, etc., is not trying to convince his interlocutors; nor are most of the other well-known atheist public figures. They aim to convince bystanders — the private atheist who is unsure whether to "come out", the theist who's all but lost his faith but isn't sure whether atheism is a position one may take publicly, the person who's lukewarm on religious arguments but has always had a rather benign and respectful view of religion, etc.

In private conversations with someone whose opinions are of concern to you, Franklin's advice makes sense. The public arguments of Dawkins & Co. are more akin to performances than conversations. I think he achieves his aim admirably. I, for one, have little interest in watching people get on a public stage and have exchanges laden with "in certain cases or circumstances..." and other such mealy-mouthed nonsense.

Comment author: johnlawrenceaspden 03 May 2014 12:55:14PM 3 points [-]

{ the ability to navigate ambiguity }

I think this is one of the most important skills you get from the humanities. I have a friend who's a history professor. He's very used to hearing 20 different accounts of the same event told by different people, most of whom are self-serving if not outright lying, and working out what must actually have gone on, which looks like a strength to me.

He has a skill I'd like to have, but don't, and he got it from studying history (and playing academic politics).

Comment author: SaidAchmiz 03 May 2014 06:05:56PM 10 points [-]

working out what must actually have gone on

How did he know that his judgment of what actually had gone on was correct? How did he verify his conclusion?

Comment author: johnlawrenceaspden 02 May 2014 09:18:36PM 1 point [-]

Which are the odd ones out?

Comment author: SaidAchmiz 02 May 2014 09:24:39PM 3 points [-]

To a first approximation:

{ critical thinking skills; an ability to work with and interpret numbers and statistics; a willingness to experiment, to open up to change }

vs.

{ knowledge of the past and other cultures; access to the insights of great writers and artists }

Then you've got this one by itself because what the heck does it even mean:

{ the ability to navigate ambiguity }
