I think torture vs. dust specks and similar problems can be illuminated by flipping them around and examining them from the perspective of the potential victims. Given a choice between getting a dust speck in the eye with probability 1 or a 1-in-3^^^^^3 chance of being tortured, I suspect the vast majority of individuals will actually opt for the dust speck, and I don't think this is just insensitivity to the scope of 3^^^^^3. Dust specks are such a trivial inconvenience that people generally don't choose to do any of the easy things they could do to minimize the chances of getting one (e.g., regularly dusting their environment, wearing goggles, etc.). On the other hand, most people would do anything to stop being tortured, up to and including suicide if the torture has no apparent end point. The difference here is arguably not expressible as a finite number.
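(A minimal expected-disutility sketch of that choice, with entirely made-up placeholder numbers; the variable names and the stand-in value for 3^^^^^3 are my own assumptions for illustration. The point is just that for any finite torture-to-speck disutility ratio smaller than 3^^^^^3, straightforward expected-utility reasoning actually favors the gamble, so consistently taking the certain speck amounts to treating torture as more than 3^^^^^3 times worse than a speck.)

```python
# Toy comparison of "certain dust speck" vs. "1-in-N chance of torture".
# All values below are made-up placeholders for illustration only;
# N stands in for 3^^^^^3, which is far too large to write down.

speck_disutility = 1.0        # one speck, in arbitrary units
torture_disutility = 1e30     # some huge but finite disutility for torture
N = 10**40                    # placeholder for 3^^^^^3

gamble_expected = torture_disutility / N   # expected disutility of the gamble

# An expected-disutility minimizer takes the certain speck only if
# speck_disutility < gamble_expected, i.e. torture_disutility > N * speck_disutility.
choice = "speck" if speck_disutility < gamble_expected else "gamble"
print(choice)  # "gamble" here, since 1e30 is nowhere near N times worse than a speck
```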
Pardon me, I have to go flush my cornea.
After spending a week getting a dust speck in the eye every single second (604,800 specks in all), I think you'll do the math and opt for the 1-in-3^^^^^3 chance of torture instead.
That is an entirely different scenario than what Prismattic is describing. In fact, a dust speck in the eye every single second would be an extremely effective form of torture.
It's confusing that you use the word 'meta-ethics' when talking about plain first-order ethics.
My favorite realist injection into the trolley problem is that there will be far more uncertainty: you won't know that the fat man will stop the trolley. I keep picturing someone tipping the poor guy over, watching him fall and break a few legs, moan in agony, and then get mowed down by the trolley, which continues on its merry way and kills the children tied to the tracks regardless.
suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory
People often mistakenly think they are above average at tasks and skills such as driving. This has implications for members of the set of people who believe themselves above average, without changing how well members of the set of people who actually are above average at driving can drive.
What is a "meta-ethical preference"? Do you just mean a moral judgment that is informed by one's metaethics? Or do you mean something like a second-order moral judgment based on others' first-order moral judgments?
The problem with basing decisions on events with a probability of 1-in-3^^^^^3 is that you're neglecting to take into account all kinds of possibilities with much higher (though still tiny) probabilities.
For example, your chances of finding that the Earth has turned into your favorite fantasy novel (i.e., that the particles making up the Earth spontaneously rearrange themselves, via quantum tunneling, into a world closely resembling the world of the novel, and then the whole thing turns into a giant bowl of tapioca pudding a week later) are much, much higher than 1-in-3^^^^^3.
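(To get a feel for the size of that denominator, here is a minimal Python sketch of Knuth's up-arrow notation, which is what the caret shorthand abbreviates. The function and the small test cases are my own illustration; 3^^^^^3, i.e. 3 followed by five up-arrows, is hopelessly beyond direct computation.)

```python
# Minimal sketch of Knuth's up-arrow notation: a ↑^n b.
# 3^^^^^3 means 3 ↑^5 3; even 3 ↑^3 3 (i.e. 3^^^3) is far too large to evaluate,
# so only tiny cases are computed here.

def up_arrow(a: int, n: int, b: int) -> int:
    """Return a ↑^n b, defined recursively."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7,625,597,484,987
# 3^^^3 is a power tower of 3s roughly 7.6 trillion levels high,
# and 3^^^^^3 is unimaginably larger still.
```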
I didn't understand the phrase "preferences at gradient levels of organization". Can you clarify?
The original dust speck vs. torture problem isn't probabilistic. You're asked to choose whether to dust-speck 3^^^3 people or to torture someone, not whether to do it to yourself.
If we reformulate it to apply to the person making the decision, we should be careful: the result may not be the same as for the original problem. For instance, it's not clear that making a single decision to dust-speck 3^^^3 people is the same as making 3^^^3 independent decisions to dust-speck each of them. (What do you do when the people disagree?)
But... that the majority of people believe something doesn’t make it right, and that the majority of people prefer something doesn’t make it right either.
Gotta love how this sentence is perfectly clear even though the word ‘right’ means two different things (‘true’ and ‘good’, respectively) in two seemingly parallel places.
"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for
I am careful to avoid putting people in a position of such literal moral hazard. That is, a position where people I care about would end up having their current preferences better satisfied by holding different preferences. I don't average.
This post is designed to gauge responses to some parts of the planned “Noticing confusion about meta-ethics” sequence, which should intertwine with, or be absorbed by, Lukeprog’s meta-ethics sequence at some point.
Disclaimer: I am going to leave out many relevant details. If you want, you can bring them up in the comments, but in general meta-ethics is still very confusing and thus we could list relevant details all day and still be confused. There are a lot of subtle themes and distinctions that have thus far been completely ignored by everyone, as far as I can tell.
Problem 1: Torture versus specks
Imagine you’re at a Less Wrong meetup when, out of nowhere, Eliezer Yudkowsky proposes his torture versus dust specks problem. Years of bullet-biting make this a trivial dilemma for any good philosopher, but suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory that is at best an approximation of sane morality and at worst an obviously anti-epistemic spiral of moral rationalization. Temporarily humbled, you decide to think about the problem a little longer:
"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for, since it is probable that they have preferences that implicitly cover edge cases such as this, and disregarding the meta-ethical preferences of 3^^^3+1 people is certainly one of the most blatantly immoral things one can do. After all, even if they never learn anything about this decision taking place, people are allowed to have preferences about it. But... that the majority of people believe something doesn’t make it right, and that the majority of people prefer something doesn’t make it right either. If I expect that these 3^^^3+1 people are mostly wrong about morality and would not reflectively endorse their implicit preferences being used in this decision instead of my explicitly reasoned and reflected upon preferences, then I should just go with mine, even if I am knowingly arrogantly blatantly disregarding the current preferences of 3^^^3 currently-alive-and-and-not-just-hypothetical people in doing so and thus causing negative utility many, many, many times more severe than the 3^^^3 units of negative utility I was trying to avert. I may be willing to accept this sacrifice, but I should at least admit that what I am doing largely ignores their current preferences, and there is some chance it is wrong upon reflection regardless, for though I am wiser than those 3^^^3+1 people, I notice that I too am confused."
You hesitantly give your answer and continue to ponder the analogies to Eliezer’s document “CEV”, and this whole business about “extrapolation”...
(Thinking of people as having coherent, non-contradictory preferences is very misleadingly wrong; not taking into account preferences at gradient levels of organization is probably wrong; not thinking of typical human preferences as implicitly preferring to update in various ways is maybe wrong (i.e., failing to see preferences as processes embedded in time is probably wrong); et cetera. But I have to start somewhere, and this is already glossing over way too much.)
Bonus problem 1: Taking trolleys seriously
"...Wait, considering how unlikely this scenario is, if I ever actually did end up in it then that would probably mean I was in some perverse simulation set up by empirical meta-ethicists with powerful computers, in which case they might use my decision as part of a propaganda campaign meant to somehow discredit consequentialist reasoning or maybe deontological reasoning, or maybe they'd use it for some other reason entirely, but at any rate that sure complicates the problem...” (HT: Steve Rayhawk)