Designed to gauge responses to some parts of the planned “Noticing confusion about meta-ethics” sequence, which should intertwine with or be absorbed by Lukeprog’s meta-ethics sequence at some point.
Disclaimer: I am going to leave out many relevant details. If you want, you can bring them up in the comments, but in general meta-ethics is still very confusing and thus we could list relevant details all day and still be confused. There are a lot of subtle themes and distinctions that have thus far been completely ignored by everyone, as far as I can tell.
Problem 1: Torture versus dust specks
Imagine you’re at a Less Wrong meetup when out of nowhere Eliezer Yudkowsky proposes his torture versus dust specks problem. Years of bullet-biting make this a trivial dilemma for any good philosopher, but suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory that is at best an approximation of sane morality and at worst an obviously anti-epistemic spiral of moral rationalization. Temporarily humbled, you decide to think about the problem a little longer:
"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for, since it is probable that they have preferences that implicitly cover edge cases such as this, and disregarding the meta-ethical preferences of 3^^^3+1 people is certainly one of the most blatantly immoral things one can do. After all, even if they never learn anything about this decision taking place, people are allowed to have preferences about it. But... that the majority of people believe something doesn’t make it right, and that the majority of people prefer something doesn’t make it right either. If I expect that these 3^^^3+1 people are mostly wrong about morality and would not reflectively endorse their implicit preferences being used in this decision instead of my explicitly reasoned and reflected upon preferences, then I should just go with mine, even if I am knowingly arrogantly blatantly disregarding the current preferences of 3^^^3 currently-alive-and-and-not-just-hypothetical people in doing so and thus causing negative utility many, many, many times more severe than the 3^^^3 units of negative utility I was trying to avert. I may be willing to accept this sacrifice, but I should at least admit that what I am doing largely ignores their current preferences, and there is some chance it is wrong upon reflection regardless, for though I am wiser than those 3^^^3+1 people, I notice that I too am confused."
You hesitantly give your answer and continue to ponder the analogies to Eliezer’s document “CEV”, and this whole business about “extrapolation”...
(Thinking of people as having coherent, non-contradictory preferences is very misleadingly wrong; not taking into account preferences at gradient levels of organization is probably wrong; not thinking of typical human preferences as implicitly preferring to update in various ways is maybe wrong (i.e., failing to see preferences as processes embedded in time is probably wrong); et cetera. But I have to start somewhere, and this is already glossing over way too much.)
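(A note on scale, in case the notation is unfamiliar: 3^^^3 is Knuth up-arrow notation, 3↑↑↑3, as used in Eliezer's original torture-versus-dust-specks post. A quick unpacking of the magnitude involved:

$$3\uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987, \qquad 3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow\left(3\uparrow\uparrow 3\right) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\text{ threes}}$$

That is, a power tower of threes 7,625,597,484,987 levels tall, vastly more than the number of atoms in the observable universe (roughly $10^{80}$); the number is presumably chosen precisely so that aggregation swamps any finite per-person disutility of a dust speck.)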
Bonus problem 1: Taking trolleys seriously
"...Wait, considering how unlikely this scenario is, if I ever actually did end up in it then that would probably mean I was in some perverse simulation set up by empirical meta-ethicists with powerful computers, in which case they might use my decision as part of a propaganda campaign meant to somehow discredit consequentialist reasoning or maybe deontological reasoning, or maybe they'd use it for some other reason entirely, but at any rate that sure complicates the problem...” (HT: Steve Rayhawk)
People often mistakenly think they are above average at tasks and skills such as driving. This has implications for people who believe themselves above average, without changing how well the people who actually are above average at driving can drive.
Humans often mistakenly think they face trolley problems when they really don't. This has implications for people who believe they face a trolley problem, without directly changing what constitutes a good response by someone who actually faces a trolley problem.
If your decision depends on referencing people's hypothetical reflectively endorsed morality, then you are not simply going with your own preferences about morality, divorced from the moral systems of the many people in question. Your original thought process was about the morality of the act independent of those people's preferences, and it determined that one choice was right. Having checked others' reflective morality, it is in an important sense a coincidence that you conclude the same act is the right one. You are performing a new calculation (that it is largely composed of the old one is irrelevant), and so should not say you are "just" going with "[your]" preferences.
That you are ignoring people's stated preferences in both calculations (which, remember, reach the same conclusion) is similarly irrelevant. In the second but not the first you weigh people's reflective morality, so despite other (conspicuous) similarities between the calculations, there was no going back to the original calculation in reaching the second conclusion.
If in your hypothetical they are informed and this hurts them, they're getting more than a speck's worth, eh?
You're willing to accept the sacrifice of others having negative utility?
It's OK to admit an element of an action was bad - it's not really an admission of a flaw. We can celebrate the death of Bin Laden without saying it's good that a helicopter crashed in the operation to get him. We can celebrate his death without saying that one great element of it was that he felt some pain. We can still love life and be happy for all Americans, and especially for the SEAL who shot him, that he got to experience what really has to feel FANTASTIC, without saying that, all else being equal, it's good OBL is dead rather than alive. All else is not equal: we are overall much better off without him.
The right choice will have some negative consequences, but to say it is partly evil is misleadingly calling attention to an irrelevancy, if it isn't an outright misuse of "evil".
Or to test the subject to see if he or she is trustworthy...or for reasons I can't think of.
I'm having trouble inferring your point here... The contrast between 'those who are dreaming think they are awake, but those who are awake know they are awake' and "I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of...