I think torture vs. dust specks and similar problems can be illuminated by flipping them around and examining them from the perspective of the potential victims. Given a choice between getting a dust speck in the eye with probability 1 or a 1-in-3^^^^^3 chance of being tortured, I suspect the vast majority of individuals would actually opt for the dust speck, and I don't think this is just insensitivity to the scope of 3^^^^^3. Dust specks are such a trivial inconvenience that people generally don't bother with any of the easy things they could do to minimize the chances of getting one (e.g., regularly dusting their environment, wearing goggles, and so on). On the other hand, most people would do anything to stop being tortured, up to and including suicide if the torture has no apparent end point. The difference here is arguably not expressible as a finite number.
Pardon me, I have to go flush my cornea.
After spending a week getting a dust speck in the eye every single second, I think you'll do the math and opt for the 1-in-3^^^^^3 chance of torture instead.
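The "do the math" here can be made concrete with a toy expected-disutility calculation. All the numbers below are illustrative assumptions, not part of the original problem; the point is only that a week of one-speck-per-second outweighs the gamble for any remotely physical probability of torture:

```python
# Toy expected-disutility comparison; every number is an illustrative assumption.
seconds_per_week = 7 * 24 * 60 * 60      # 604,800 specks in a week of one per second
speck_disutility = 1.0                   # one arbitrary unit per speck
torture_disutility = 1e15                # even a quadrillion units for the torture

ev_weekly_specks = seconds_per_week * speck_disutility   # 604,800 units for certain

# No float can represent anything close to 1/3^^^^^3, so use a probability that
# is already astronomically LARGER than the one in the problem:
p_torture = 1e-100
ev_torture_gamble = p_torture * torture_disutility       # 1e-85 units in expectation

print(ev_weekly_specks > ev_torture_gamble)  # True: the gamble is the better deal
```

Under these (hypothetical) numbers, the conclusion survives any reasonable rescaling: the gamble's expected disutility stays negligible unless the torture term is inflated past anything a finite-utility view would assign.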
That is an entirely different scenario than what Prismattic is describing. In fact, a dust speck in the eye every single second would be an extremely effective form of torture.
It's confusing that you use the word 'meta-ethics' when talking about plain first-order ethics.
My favorite realist injection into the trolley problem is that there will be far more uncertainty: you won't know that the fat man will stop the trolley. I keep picturing someone tipping the poor guy over, watching him fall, break a few legs, moan in agony, and then get mowed down by the trolley, which continues on its merry way and kills the children tied to the tracks regardless.
suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory
People often mistakenly think they are above average at tasks and skills such as driving. This has implications for people who are members of the set of people who believe themselves above average, without changing how well members of the set of people who are actually above average at driving can drive.
What is a "meta-ethical preference"? Do you just mean a moral judgment that is informed by one's metaethics? Or do you mean something like a second-order moral judgment based on others' first-order moral judgments?
The problem with basing decisions on events with a probability of 1-in-3^^^^^3 is that you're neglecting to take into account all kinds of possibilities with much higher (though still tiny) probabilities.
For example, your chances of finding that the Earth has turned into your favorite fantasy novel (i.e., that the particles making up the Earth spontaneously rearranged themselves, via quantum tunneling, into a world closely resembling the world of the novel), and then that the whole thing turns into a giant bowl of tapioca pudding a week later, are much, much higher than 1-in-3^^^^^3.
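For a sense of how far outside ordinary probabilities 1-in-3^^^^^3 sits, Knuth's up-arrow notation can be sketched recursively. The function name below is my own; only the tiniest inputs are computable, which is part of the point:

```python
def up_arrow(a, n, b):
    """Knuth up-arrow a (up^n) b: one arrow is exponentiation,
    and each extra arrow iterates the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# up_arrow(3, 3, 3) is already a power tower of 3s more than 7 trillion levels
# high; 3^^^^^3 (five arrows) is unimaginably larger still, so 1-in-3^^^^^3 is
# dwarfed by the probability of essentially any describable quantum fluke.
```

So any event with a merely "astronomically small" probability still swamps the 1-in-3^^^^^3 branch in a decision calculation.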
I didn't understand the phrase "preferences at gradient levels of organization". Can you clarify?
The original dust speck vs. torture problem isn't probabilistic. You're asked to choose whether to dust-speck 3^^^3 people, or to torture someone; not whether to do it to yourself.
If we reformulate it to apply to the person making the decision, we should be careful - the result may not be the same as for the original problem. For instance, it's not clear that making a single decision to dust-speck 3^^^3 people is the same as making 3^^^3 independent decisions to dust-speck each of them. (What do you do when the people disagree?)
But... that the majority of people believe something doesn’t make it right, and that the majority of people prefer something doesn’t make it right either.
Gotta love how this sentence is perfectly clear even though the word 'right' means two different things ('true' and 'good', respectively) in two seemingly parallel places.
"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for..."
I am careful to avoid putting people in a position of such literal moral hazard, that is, a position where people I care about would end up having their current preferences better satisfied by having preferences different from their current ones. I don't average.
Humans often mistakenly think they face trolley problems when they really don't. This has implications for people who believe they face a trolley problem, without directly changing what constitutes a good response by someone who actually faces a trolley problem.
I'm having trouble inferring your point here... The contrast between 'those who are dreaming think they are awake, but those who are awake know they are awake' and "I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings" is always on the edges of every moral calculation, and especially every one that actually matters. (I guess it might sound like I'm suggesting reveling in doubt, but noticing confusion is always so that we can eventually become confused on a higher level and about more important things. Once you notice a confusion, you get to use curiosity!)
If your decision depends on referencing people's hypothetical reflectively endorsed morality, then you are not simply going with your preferences about morality, divorced from the moral systems of the many people in question.
Yeah, so this gets a little tricky, because the decision forks depending on whether you think most people would themselves care about their future smarter selves' values, or whether you think they don't care but are wrong for not caring. (The meta levels are really blending here, which is a theme I didn't want to avoid; unfortunately, I don't think I came up with an elegant way to acknowledge their importance while keeping the spirit of the post, which is more about noticing confusion and pointing out lots of potential threads of inquiry than it is an analysis. A real analysis would take a ton of analytic philosophy, I think.)
That you are ignoring people's stated preferences in both calculations (which, remember, reach the same conclusion) is similarly irrelevant. In the second but not the first you weigh people's reflective morality, so despite other (conspicuous) similarities between the calculations, reaching the second conclusion never involved going back to the original calculation.
Ah, I was trying to hint at three main branches of calculation; perhaps I will add an extra sentence to delineate the second one more clearly. The first is the original "go with whatever my moral intuitions say"; the second is "go with whatever everyone's moral intuitions say, magically averaged"; and the third is "go with what I think everyone would, upon reflection, think is right, taking into account their current intuitions as evidence but not as themselves the source of justifiedness". The third and the first are meant to look conspicuously like each other, but I didn't mean to mislead folk into thinking the third explicitly used the first calculation. The conspicuous similarity stems from the fact that the actual process you would go through to reach the first and the third positions is probably the same.
The right choice will have some negative consequences, but to say it is partly evil is misleadingly calling attention to an irrelevancy, if it isn't an outright misuse of "evil".
I used some rhetoric, like using the word 'evil' and not rounding 3^^^3+1 to just 3^^^3, to highlight how the people whose fate you're choosing might perceive both the problem and how you're thinking about the problem. It's just... I have a similar reaction when thinking about a human self-righteously proclaiming 'Kill them all, God will know His own.', but I feel like it's useful that a part of me always kicks in and says I'm probably doing the same damn thing in ways that are just less obvious. But maybe it is not useful.
Umm, didn't you (non-trollishly) advocate indiscriminately murdering anyone and everyone accused of heresy as long as it's the Catholic Church doing it?
Designed to gauge responses to some parts of the planned “Noticing confusion about meta-ethics” sequence, which should intertwine with or be absorbed by Lukeprog’s meta-ethics sequence at some point.
Disclaimer: I am going to leave out many relevant details. If you want, you can bring them up in the comments, but in general meta-ethics is still very confusing and thus we could list relevant details all day and still be confused. There are a lot of subtle themes and distinctions that have thus far been completely ignored by everyone, as far as I can tell.
Problem 1: Torture versus specks
Imagine you’re at a Less Wrong meetup when out of nowhere Eliezer Yudkowsky proposes his torture versus dust specks problem. Years of bullet-biting make this a trivial dilemma for any good philosopher, but suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory that is at best an approximation of sane morality and at worst an obviously anti-epistemic spiral of moral rationalization. Temporarily humbled, you decide to think about the problem a little longer:
"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for, since it is probable that they have preferences that implicitly cover edge cases such as this, and disregarding the meta-ethical preferences of 3^^^3+1 people is certainly one of the most blatantly immoral things one can do. After all, even if they never learn anything about this decision taking place, people are allowed to have preferences about it. But... that the majority of people believe something doesn’t make it right, and that the majority of people prefer something doesn’t make it right either. If I expect that these 3^^^3+1 people are mostly wrong about morality and would not reflectively endorse their implicit preferences being used in this decision instead of my explicitly reasoned and reflected upon preferences, then I should just go with mine, even if I am knowingly arrogantly blatantly disregarding the current preferences of 3^^^3 currently-alive-and-not-just-hypothetical people in doing so and thus causing negative utility many, many, many times more severe than the 3^^^3 units of negative utility I was trying to avert. I may be willing to accept this sacrifice, but I should at least admit that what I am doing largely ignores their current preferences, and there is some chance it is wrong upon reflection regardless, for though I am wiser than those 3^^^3+1 people, I notice that I too am confused."
You hesitantly give your answer and continue to ponder the analogies to Eliezer’s document “CEV”, and this whole business about “extrapolation”...
(Thinking of people as having coherent non-contradictory preferences is very misleadingly wrong, not taking into account preferences at gradient levels of organization is probably wrong, not thinking of typical human preferences as implicitly preferring to update in various ways is maybe wrong (i.e. failing to see preferences as processes embedded in time is probably wrong), et cetera, but I have to start somewhere and this is already glossing over way too much.)
Bonus problem 1: Taking trolleys seriously
"...Wait, considering how unlikely this scenario is, if I ever actually did end up in it then that would probably mean I was in some perverse simulation set up by empirical meta-ethicists with powerful computers, in which case they might use my decision as part of a propaganda campaign meant to somehow discredit consequentialist reasoning or maybe deontological reasoning, or maybe they'd use it for some other reason entirely, but at any rate that sure complicates the problem...” (HT: Steve Rayhawk)