Designed to gauge responses to some parts of the planned “Noticing confusion about meta-ethics” sequence, which should intertwine with or be absorbed by Lukeprog’s meta-ethics sequence at some point.

Disclaimer: I am going to leave out many relevant details. If you want, you can bring them up in the comments, but in general meta-ethics is still very confusing and thus we could list relevant details all day and still be confused. There are a lot of subtle themes and distinctions that have thus far been completely ignored by everyone, as far as I can tell.

Problem 1: Torture versus specks

Imagine you’re at a Less Wrong meetup when out of nowhere Eliezer Yudkowsky proposes his torture versus dust specks problem. Years of bullet-biting make this a trivial dilemma for any good philosopher, but suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory that is at best an approximation of sane morality and at worst an obviously anti-epistemic spiral of moral rationalization. Temporarily humbled, you decide to think about the problem a little longer:

"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for, since it is probable that they have preferences that implicitly cover edge cases such as this, and disregarding the meta-ethical preferences of 3^^^3+1 people is certainly one of the most blatantly immoral things one can do. After all, even if they never learn anything about this decision taking place, people are allowed to have preferences about it. But... that the majority of people believe something doesn’t make it right, and that the majority of people prefer something doesn’t make it right either. If I expect that these 3^^^3+1 people are mostly wrong about morality and would not reflectively endorse their implicit preferences being used in this decision instead of my explicitly reasoned and reflected upon preferences, then I should just go with mine, even if I am knowingly arrogantly blatantly disregarding the current preferences of 3^^^3 currently-alive-and-and-not-just-hypothetical people in doing so and thus causing negative utility many, many, many times more severe than the 3^^^3 units of negative utility I was trying to avert. I may be willing to accept this sacrifice, but I should at least admit that what I am doing largely ignores their current preferences, and there is some chance it is wrong upon reflection regardless, for though I am wiser than those 3^^^3+1 people, I notice that I too am confused."

You hesitantly give your answer and continue to ponder the analogies to Eliezer’s document “CEV”, and this whole business about “extrapolation”...

(Thinking of people as having coherent non-contradictory preferences is very misleadingly wrong, not taking into account preferences at gradient levels of organization is probably wrong, not thinking of typical human preferences as implicitly preferring to update in various ways is maybe wrong (i.e. failing to see preferences as processes embedded in time is probably wrong), et cetera, but I have to start somewhere and this is already glossing over way too much.)

Bonus problem 1: Taking trolleys seriously

"...Wait, considering how unlikely this scenario is, if I ever actually did end up in it then that would probably mean I was in some perverse simulation set up by empirical meta-ethicists with powerful computers, in which case they might use my decision as part of a propaganda campaign meant to somehow discredit consequentialist reasoning or maybe deontological reasoning, or maybe they'd use it for some other reason entirely, but at any rate that sure complicates the problem...” (HT: Steve Rayhawk)

I think torture v. dust specks and similar problems can be illuminated by flipping them around and examining them from the perspective of the potential victims. Given a choice between getting a dust speck in the eye with probability 1 or a 1-in-3^^^^^3 chance of being tortured, I suspect the vast majority of individuals will actually opt for the dust speck, and I don't think this is just insensitivity to the scope of 3^^^^^3. Dust specks are such a trivial inconvenience that people generally don't choose to do any of the easy things they could do to minimize the chances of getting one (e.g. regularly dusting their environment, wearing goggles, etc.) On the other hand, most people would do anything to stop being tortured, up to and including suicide if the torture has no apparent end point. The difference here is arguably not expressible as a finite number.

Pardon me, I have to go flush my cornea.

8NihilCredo13y
FWIW, I would take the 1 in 3^^^^^3 chance of torture over a single dust speck*, but I would rather give 3^^^^^3 people a dust speck each than subject one person to torture, because the world with the dust specks looks nicer to me than the world with the torture. (I find fairness to be aesthetically pleasant.)

* Though since I feel like being pedantic, the - irrational - momentary anxiety of rolling the torture die would be a worse feeling than a dust speck, which in practice would make the dust speck the better choice.
0Kevin13y
That's not a world with dust specks, that's millions upon millions of universes cycling endlessly of dust specks.
6Rain13y
We have eyelashes, blink a lot, have reflex actions to shield our eyes, and will quickly stop all activity around us because "I got something in my eye." And 1-in-3^^^^^3 odds are so trivial as to never happen within the lifetime of this universe or many such universes, so I think it is indeed scope insensitivity.
5Kevin13y
Yup. To get on the right scope, I would look at it as a choice between getting a dust speck in my eye and not getting a dust speck in my eye.
-1Prismattic13y
I am just rephrasing the original torture v. dust specks problem. We have already established that exactly 1 individual in some universe somewhere will be tortured. The odds of that individual being you are greater-than-astronomical, but that is not the same as "will never happen."
4Rain13y
Why would people only accept "zero probability", considering zero is not a probability?
2FAWS13y
Yes, it's vastly less probable than things that merely "will never happen." It's far more likely that you spontaneously turn into chocolate raspberry ice cream and are eaten by a pink Yeti who happens to be Miss New South Wales. Would you like to be dust specked in exchange for a reduction in the base rate of that specific thing happening to you in the next second by one part in a trillion? It's a much better deal than the torture offer.
0Prismattic13y
Trying again here, as I'm not sure I expressed myself well in the previous attempt. The original problem is that you either torture one person or give 3^^^^^3 people dust specks. For purposes of this thought experiment, therefore, we must assume that there exist at least 3^^^^^3 people in some number of universes. So if you asked each and every one of them the formulation I offered in the grandparent, and they all chose the chance of torture based on your reasoning, exactly one of those people would be in for a very unpleasant surprise. Also, the probability of the torture is greater than the probability of spontaneously converting into a frozen treat for an improbably attractive sasquatch, since we have no reason whatsoever to expect that there is anything that could cause the latter, but we have already taken as a given that there is some individual in the multiverse with the capacity to choose to inflict torture or dust specks. [Edited] to correct the numbers to be of identical powers.
3wedrifid13y
3^^^^^^3? Do you know the difference between 3^^^^^^3 and 3^^^^^3? Compare the size of an electron and the size of the entire universe. Then forget that because that doesn't come close to demonstrating it. This means that the fact that you said the original problem is about 3^^^^^3 when it is actually about a mere 3^^^3 is a relatively minor error. That you are so willing to jump from 3^^^3 to 3^^^^^^3 in the same conversation suggests you don't really grasp the point here. The number of people experiencing discomfort really does make a difference. There is an unimaginable amount of pain at stake.
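For concreteness, here is a minimal sketch of the standard Knuth up-arrow recursion (the `up_arrow` name and the Python framing are illustrative additions, not from the thread); only the very smallest inputs are feasible to evaluate:

```python
def up_arrow(a: int, b: int, n: int) -> int:
    """Knuth's a ↑^n b, where n = 1 is ordinary exponentiation. Feasible only for tiny inputs."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # a ↑^n b  =  a ↑^(n-1) ( a ↑^n (b-1) )
    return up_arrow(a, up_arrow(a, b - 1, n), n - 1)

# 3^^3 = 3^(3^3) = 3^27 ≈ 7.6 trillion -- already unwieldy.
assert up_arrow(3, 3, 2) == 3 ** 27
# 3^^^3 = 3^^(3^^3) is a power tower of 3s that is 3^27 levels tall; each further
# arrow (3^^^^3, 3^^^^^3, ...) iterates the previous operation that many times again.
```

Each additional arrow is not a few more orders of magnitude but a whole new level of iterated operation, which is the point about 3^^^3 versus 3^^^^^3.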
0Prismattic13y
The sixth carrot in 3^^^^^^3 was a typo, which I am correcting.
-2Peterdjones13y
And "carrot"?
0Prismattic13y
The name of the grammatical symbol "^" is a carrot, as far as I know.
-2Peterdjones13y
caret =/= carrot (http://en.wikipedia.org/wiki/)
0Prismattic13y
Duly noted. On reflection, I've never actually seen it spelled out before.
2FAWS13y
You do not seem to understand how large 3^^^^^3 is.
2Kevin13y
1 in 3^^^^^3 is a really, really, really, really, really, really accurate specification of 0.
2FAWS13y
Nope, unless they have an arbitrary discontinuity in their valuation of harm it's really just scope insensitivity, 3^^^^^3 is that big. Quibbling about a few orders of magnitude of their caring about dust specks ("dust specks" just means the smallest harm they care about at all) and torture is a waste of time; they could consider torture a billion times worse than they do and it wouldn't change anything. It's another matter to volunteer to be dust-specked to save someone from torture; the personal satisfaction from having done so might very well outweigh the inconvenience. But if you choose for them and they never learn about it they don't get that satisfaction.
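Put as a rough expected-value sketch (H_torture and H_speck below are just placeholder symbols for whatever finite disutilities one assigns; they are not from the thread):

$$
\mathbb{E}[\text{harm of the gamble}] \;=\; \frac{H_{\text{torture}}}{3\uparrow\uparrow\uparrow\uparrow\uparrow 3} \;\lll\; H_{\text{speck}},
$$

and the inequality survives multiplying H_torture by a billion, or by any factor a human could plausibly intend, because 3^^^^^3 dwarfs every such correction.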
0Eugine_Nier13y
And yet we don't give Pascal's mugger the 10 bucks.
-2ArisKatsaris13y
I think it is. Imagine that the same trade is offered to you a trillion times. Or imagine that it's automatically offered or rejected (unconsciously by default, but you have the ability to change the default) every second of your life. After spending a week getting a dust speck in the eye every single second, I think you'll do the math and opt for the 1-in-3^^^^^3 chance of torture instead.

After spending a week getting a dust speck in the eye every single second, I think you'll do the math and opt for the 1-in-3^^^^^3 chance of torture instead.

That is an entirely different scenario than what Prismattic is describing. In fact, a dust speck in the eye every single second would be an extremely effective form of torture.

3NihilCredo13y
Indeed. More abstractly: pleasure and suffering aren't so nice as to neatly add and multiply like pretty little scalars. Even if you wish to talk about "utils/utilons" - it is by no means obvious that ten dust specks are worth exactly ten times as many (negative) utils as one dust speck.
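One illustrative way to formalize that (the functional form below is an assumption for the sake of example, not anything proposed in the thread) is a bounded, non-additive disutility for specks:

$$
U(n \text{ specks}) \;=\; -c\left(1 - e^{-n/k}\right),
$$

which never falls below -c however large n gets, in contrast to the linear U(n) = -cn that a straightforwardly additive reading of the problem implicitly assumes.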
0Prismattic13y
Getting a dust speck for a moment is a minor nuisance. Getting an uninterrupted series of dust specks forever is torture. It's not a particularly invasive form, but it is debilitating.
2Rain13y
He was trying to show the difference between something which actually happens and has an effect, and something which will only happen with a 1-in-3^^^^^3 chance: one exists in this universe, the other does not. If we changed it to be one dust speck per second and a 1-in-3^^^^^3 chance every second that you are quantum teleported to a random planet in the universe, you'd swallow that planet in the ever-expanding black hole you'd become long, long before you're teleported there.
0ArisKatsaris13y
Okay, what about a dust speck per hour or a dust speck per ten minutes? Still a minor nuisance, but has it reached the point where you'd prefer to have a 1-in-3^^^^^3 chance of being tortured?

It's confusing that you use the word 'meta-ethics' when talking about plain first-order ethics.

-1Will_Newsome13y
...You're right, that was pretty sloppy. I felt vaguely justified in doing so since I often think about meta-ethics implied by or represented in first-order ethics (not that the levels are easily distinguishable in the first place in practice) and thus sort of made a point of not distinguishing them carefully. In hindsight that was dumb and especially dumb to fail to acknowledge.

My favorite realist injection into the trolley problem is that there will be far more uncertainty: you won't know that the fat man will stop the trolley. I keep picturing someone tipping the poor guy over, watching him fall, break a few legs, moan in agony, and then get mowed down by the trolley, which continues on its merry way and kills the children tied to the tracks regardless.

5NihilCredo13y
Have you come up with a better scenario for the trolley problem? The one I currently like the best is:

* Trolley: You're a surgeon, you have a dying patient in your care, he needs five full litres of healthy blood to survive the operation; fortunately, you have exactly five litres available. You've just opened him up when five more emergency patients arrive, each of whom could survive with just one litre.
* Fat man: There is no dying surgery patient, but the same five new emergencies have just arrived and you have no blood reserves at all. What you do have is a healthy but unconscious patient, with five litres of good blood in his veins.

It's still not perfect though, because the role of doctors has deep cultural roots (Hippocratic oath and so on), so the idea of a doctor doing harm to a patient feels repugnant and blasphemous, and since a patient feels like he is "entrusting" himself to a surgeon there's also an overtone of betrayal. Modern hospitals, huge and anonymous, have only partially deleted such feelings (and they try their best not to).
1Rain13y
Having played a healer in many online games, I've discovered that triage (what you've described above) quickly becomes second nature, to the point where if someone performs the wrong response to a threat, I will literally say aloud, "go die then" because I have more important people to take care of. I consider it a signature of the best healers that they will abort a spell on someone who needs it to instead cast it on someone who is more important.

A true trolley problem would have to be contrived by a murderous, insane villain à la Saw; the uncertainty remains in any real world scenario that I've come across. Though perhaps organ transplants can serve as a stand-in. Tons of otherwise healthy people just need one organ to live out a good life, and tons of people who are somewhat negative utility have nice, juicy organs. We'll know we're on our way to fixing the trolley problem when organ donation is mandatory, and can be a punishment handed down similar to the death penalty for those who harm society.
0MixedNuts13y
That's what makes heroism so poignant in real life. If you have to shoot an innocent but firmly believe it'll save the world, you'll probably brood a little (especially if there's also a cost to you), but mostly you'll get massive fuzzies. (I've never shot an innocent, but defending a cause you think is just is... more pleasant than it should be.) If you have to shoot an innocent and you expect it won't save the world but on average it's worth it anyway, it gnaws at you.
4Prismattic13y
Heroism is throwing yourself on the tracks to save the greater number of people. Pushing somebody else may be an example of decisiveness, or courage (in the sense of grace under pressure) but there is nothing heroic about it.
4orthonormal13y
This is a definitional dispute and an attempt at applause lights rather than a helpful comment.
4wedrifid13y
I don't agree with your judgement. The applause light reference in particular doesn't seem fair.
1MixedNuts13y
If it's really purely a cost to others, okay. But usually there's also a cost to yourself - you push someone else and get a death sentence, or a life sentence, you push a loved one, or you're just eaten alive by guilt for the rest of your days.
0Will_Newsome13y
Ha. I like Steve's version better though, since you can assume away all the environmental uncertainty leaving just indexical uncertainty (the distinction doesn't really exist but bear with me), and yet people still can't just go "LCPW!", because this version is both way less convenient and more probable. I bet it'd still annoy the hell out of a real philosopher, though.

suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory

People often mistakenly think they are above average at tasks and skills such as driving. This has implications for people who are members of the set of people who believe themselves above average, without changing how well members of the set of people who are actually above average at driving can drive.

Humans often mistakenly t...

1Will_Newsome13y
I'm having trouble inferring your point here... The contrast between 'those who are dreaming think they are awake, but those who are awake know they are awake' and "I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings" is always on the edges of every moral calculation, and especially every one that actually matters. (I guess it might sound like I'm suggesting reveling in doubt, but noticing confusion is always so that we can eventually become confused on a higher level and about more important things. Once you notice a confusion, you get to use curiosity!)

Yeah, so this gets a little tricky because the decision forks depending on whether or not you think most people would themselves care about their future smarter selves' values, or whether you think they don't care but they're wrong for not caring. (The meta levels are really blending here, which is a theme I didn't want to avoid but unfortunately I don't think I came up with an elegant way to acknowledge their importance while keeping the spirit of the post, which is more about noticing confusion and pointing out lots of potential threads of inquiry than it is an analysis, since a real analysis would take a ton of analytic philosophy, I think.)

Ah, I was trying to hint at 3 main branches of calculation; perhaps I will add an extra sentence to delineate the second one more. The first is the original "go with whatever my moral intuitions say", the second is "go with whatever everyone's moral intuitions say, magically averaged", and the third is "go with what I think everyone would upon reflection think is right, taking into account their current intuitions as evidence but not as themselves the source of justifiedness". The third and the first are meant to look conspicuously like each other but I didn't mean to mislead folk into thinking the third explicitly used the first calculation. The conspicuous similarity stem
5lessdazed13y
I have an uncommon relationship with the dream world, as I remember many dreams every night. I often dream within a dream; I might do this more often than most because dreams occupy a larger portion of my thoughts than they do in others, or I might just be remembering those dreams more than most do. When I wake up within a dream, I often think I am awake. On the other hand, sometimes in the middle of dreams I know I am dreaming. Usually it's not something I think about while asleep or awake.

I also have hypnopompic sleep paralysis, and sometimes wake up thinking I am dead. This is like the inverse of sleep walking - the mind wakes up some time before the body can move. I'm not exactly sure if one breathes during this period or not, but it's certainly impossible to consciously breathe and one immediately knows that one cannot, so if one does not think oneself already dead (which for me is rare) one thinks one will suffocate soon. Confabulating something physical blocking the mouth or constricting the trunk can occur. It's actually not as bad to think one is dead, because then one is (sometimes) pleasantly surprised by the presence of an afterlife (even if movement is at least temporarily impossible) and one does not panic about dying - at least I don't.

So all in all I'd say I have less respect for intuitions like that than most do.
2lessdazed13y
One point is that I feel very unconfused. That is, not only do I not feel confused now, I once felt confused and experienced what I thought was confusion lifting and being replaced by understanding.

Which one, if just one, criterion for usefulness are you using here? It is useful for the human to have pain receptors, but there is negative utility in being vulnerable to torture (and not just from one's personal perspective). Surely you don't expect that even the most useful intuition is always right? This is similar to the Bin Laden point above, that the most justified and net-good action will almost certainly have negative consequences.

I'm willing to call your intuition useful if it often saves you from being misled, and its score on any particular case is not too important in its overall value. However, its score on any particular case is indicative of how it would do in similar cases. If it has a short track record and it fails this test, we have excellent reason to believe it is a poorly tuned intuition because we know little other than how it did on this hypothetical, though its poor performance on this hypothetical should never be considered a significant factor in what makes it generally out of step with moral dilemmas regardless. This is analogous to getting cable ratings from only a few tracked boxes: we think many millions watched a show because many of the thousands tracked did, but do not think those thousands constitute a substantial portion of the audience.
3Will_Newsome13y
That's the one I'm referencing. My fear of having been terribly immoral (which could also be even less virtuously characterized as being or at least being motivated by an unreasonable fear of negative social feedback) is useful because it increases the extent to which I'm reflective on my decisions and practical moral positions, especially in situations that pattern match to ones that I've already implicitly labeled as 'situations where it would be easy to deceive myself into thinking I had a good justification when I didn't', or 'situations where it would be easy to throw up my hands because it's not like anyone could actually expect me to be perfect'. Vegetarianism is a concrete example. The alarm itself (though perhaps not the state of mind that summons it) has been practically useful in the past, even just from a hedonic perspective.
2lessdazed13y
OK, sometimes you will end up making the same decision after reflection and having wasted time; other times you may even change from a good decision (by all relevant criteria) to a bad one simply because your self-reflection was poorly executed. That doesn't necessarily mean there's something wrong with you for having a fear or with your fear (though it seems too strong in my opinion).

This should be obvious - it wasn't to me until after reading your comment the second time - but "increases the extent to which I'm reflective" really ought to sound extraordinarily uncompelling to us. Think about it: a bias increases the extent to which you do something. It should be obvious that that thing is not always good to increase, and the only reason it seems otherwise to us is that we automatically assume there are biases in the opposite direction that won't be exceeded however much we try to bias ourselves. Even so, to combat bias with bias - it's not ideal.
0Multiheaded12y
Umm, didn't you (non-trollishly) advocate indiscriminately murdering anyone and everyone accused of heresy as long as it's the Catholic Church doing it?

What is a "meta-ethical preference"? Do you just mean a moral judgment that is informed by one's metaethics? Or do you mean something like a second-order moral judgment based on others' first-order moral judgments?

The problem with basing decisions on events with a probability of 1-in-3^^^^^3 is that you're neglecting to take into account all kinds of possibilities with much higher (though still tiny) probabilities.

For example, your chances of finding that the Earth has turned into your favorite fantasy novel, i.e., the particles making up the earth spontaneously rearranged themselves into a world closely resembling the world of the novel due to quantum tunneling, and then the whole thing turning into a giant bowl of tapioca pudding a week later, are much, much higher than 1-in-3^^^^^3.

-1Amanojack13y
Especially the probability that the means by which you learned of these probabilities is unreliable, which is probably not even very tiny. (How tiny is the probability that you, the reader of this comment, are actually dreaming right now?)
0jimrandomh13y
Actually, considering the possibility that you've misjudged the probability doesn't help with Pascal's Mugging scenarios, because P(X | judged that X has probability p) >= p * P(judgment was correct). And while P(judgment was correct) may be small, it won't be astronomically small under ordinary circumstances, which is what it would take to resolve the mugging. (My preferred resolution is to restrict the class of admissible utility function-predictor pairs to those where probability shrinks faster than utility grows for any parameterizable statement, which is slightly less restrictive than requiring bounded utility functions.)
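Spelled out (the symbols below are just shorthand for jimrandomh's wording, with statements parameterized by n as he describes), the bound on the probability is

$$
\Pr\!\left(X \mid \text{judged that } \Pr(X) = p\right) \;\ge\; p \cdot \Pr(\text{judgment was correct}),
$$

and the proposed restriction on admissible utility-function/predictor pairs is roughly

$$
\Pr(S_n)\,\lvert U(S_n)\rvert \;\to\; 0 \quad \text{as } n \to \infty,
$$

which bounded utility functions satisfy automatically, but which some unbounded ones can satisfy as well.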
0Will_Newsome13y
It's still way too restrictive though, no? And are there ways you can Dutch book it with deals where probability grows faster (instead of the intuitively-very-common scenario where they always grow at the same rate)?
0Eugine_Nier13y
BTW, you realize we're talking about torture vs. dust specks and not Pascal's mugging here?
0Amanojack13y
I think he's just pointing out that all you have to do is change the scenario slightly and then my objection doesn't work. Still, I'm a little curious about how someone's ability to state a large number succinctly makes a difference. I mean, suppose the biggest number the mugger knew how to say was 12, and they didn't know about multiplication, exponents, up arrow notation, etc. They just chose 12 because it was the biggest number they could think of or knew how to express (whether they were bluffing totally or were actually going to torture 3^^^3 people). Should I take a mugger more seriously just because they know how to communicate big numbers to me?
0Eugine_Nier13y
The point of stating the large number succinctly is that it overwhelms the small likelihood of the muggers story being true, at least if you have something resembling a Solomonoff prior. Note also that the mugger isn't really necessary for the scenario, he's merely there to supply a hypothesis that you could have come up with on your own.
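Roughly, and with an assumed, purely illustrative description length: under a complexity prior the mugger's hypothesis h gets probability on the order of 2^-K(h), where K(h) is only on the order of hundreds or thousands of bits because "3^^^3 people" has a very short description. So

$$
2^{-K(h)} \times 3\uparrow\uparrow\uparrow 3 \;\ggg\; 1 \qquad \text{for, say, } K(h) \approx 1000 \text{ bits},
$$

since 3^^^3 is incomparably larger than 2^1000: the brevity of the notation keeps the prior from shrinking as fast as the stated stakes grow.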
0Amanojack13y
Good point. I guess the only way to counter these odd scenarios is to point out that everyone's utility function is different, and then the question is simply whether the responder wants to self-modify (or would be happier in the long run doing so) even after hearing some rationalist arguments to clarify their intuitions. The question of self-modification is a little hard to grasp, but at least it avoids all these far-fetched situations.
0Eugine_Nier13y
For the Pascal's mugging problem, I don't think that will help.
-3Amanojack13y
Isn't Pascal's mugging just this? I'd just walk away. Why should I care? If I thought about it for so long that I had some lingering qualms, and I got mugged like that a lot, I'd self-modify just to enjoy the rest of my life more. As an aside, I don't think people really care that much about other people dying unless they have some way to connect to it. Someone probably was murdered while you were reading this comment. Is it going to keep you up? On the other hand, people can cry all night about a video game character dying. It's all subjective.
0endoself13y
There's a difference between mental distress and action-motivating desire. If I were asked to pay $5 to prevent someone from being murdered with near-certainty, I would. On the other hand, I would not pay $5 more for a video game where a character does not die, though I can't be sure of this self-simulation because I play video games rather infrequently. If I only had $5, I would definitely spend it on the former option. I do not allow my mental distress to respond to the same things that motivate my actions; intuitively grasping the magnitude of existential risks is impossible and even thinking about a fraction of that tragedy could prevent action, such as by causing depression. However, existential risks still motivate my decisions.
0Amanojack13y
I thought of a way that I could be mugged Pascal-style: If I had to watch even one simulated person being tortured in gratuitous, holodeck-level realistic detail, even for a minute, unless I paid $5, I'd pay. I also wouldn't self-modify to make me not care about seeing simulated humans tortured in such a way, because I'm afraid that would make my interactions with people I know and care about very strange. I wouldn't want to be callous about witnessing people tortured, because I think it would take away part of my enjoyment of life. (And there are ways to amp up such scenarios to make it far worse, like if I were forced to torture to death 100 simulations of the people I most care about in a holodeck in order to save those actual people...that would probably have very bad consequences for me, and self-modifying so that I wouldn't care would just make it worse.)

But let's face it, the vast majority of people are indeed pretty callous about the actual deaths happening today, all the pain experienced by livestock as they're slaughtered, and all the pain felt by chronic pain sufferers. People decry such things loudly, but few of those who aren't directly connected to the victims are losing sleep over such suffering, even though there are actions they could conceivably take to mitigate it. It is uncomfortable to acknowledge, but it seems undeniable.
0endoself13y
It's not Pascal's mugging unless it works with ridiculously low probabilities. Would you pay $5 to avoid a 10^-30 chance of watching 3^^^3 people being tortured? Are you including yourself in "the vast majority of people"? Are you including most of LW? If your utility is bounded, you are probably not vulnerable to Pascal's mugging. If your utility is not bounded, it is irrelevant whether other people act like their utilities are bounded. Note that even egoists can have unbounded utility functions.
1Amanojack13y
Are you losing sleep over the daily deaths in Iraq? Are most LWers? That's all I'm saying. I consider myself pretty far above-average empathy-wise, to the extent that if I saw someone be tortured and die I'd probably be completely changed as a person. If I spent more time thinking about the war I probably would not be able to sleep at all, and eventually if I steeped myself in the reality of the situation I'd probably go insane or die of grief. The same would probably happen if I spent all my time watching slaughterhouse videos. So I'm not pretending to be callous. I'm just trying to inject some reality into the discussion. If we cared as much as we signal we do, no one would be able to go to work, or post on LW. We'd all be too grief-stricken.

So although it depends on what exactly you mean by "unbounded utility function," it seems that no one's utility function is really unbounded. And it also isn't immediately clear that anyone would really want their utility function to be unbounded (unless I'm misinterpreting the term).

Also, point taken about my scenario not being a Pascal's mugging situation.
1endoself13y
That is exactly what I was talking about when I said "There's a difference between mental distress and action-motivating desire." Utility functions are about choices, not feelings, so I assumed that, in a discussion about utility, we would be using the word 'care' (as in "If we cared as much as we signal we do") to refer to motives for action, not mental distress. If this isn't clear, I'm trying to refer to the same ideas discussed here.

It does not make sense to speak of what someone wants their utility function to be; utility functions just describe actual preferences. Someone's utility function is unbounded if and only if there are consequences with arbitrarily high utility differences. For every consequence, you can identify one that is over twice as good (relative to some zero point, which can be arbitrarily chosen. This doesn't really matter if you're not familiar with the topic, it just corresponds to the fact that if every consequence were 1 utilon better, you would make the same choices because relative utilities would not have changed.)

Whether a utility function has this property is important in many circumstances and I consider it an open problem whether humans' utility functions are unbounded, though some would probably disagree and I don't know what science doesn't know.
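In symbols (a sketch, fixing the arbitrary zero point at U(c_0) = 0 and assuming U is not identically zero):

$$
\forall M > 0\ \exists\, c:\ \lvert U(c)\rvert > M
\quad\Longleftrightarrow\quad
\forall c\ \exists\, c':\ \lvert U(c')\rvert > 2\,\lvert U(c)\rvert,
$$

i.e. unboundedness is the same as always being able to find a consequence over twice as far from the zero point as any given one.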
0Amanojack13y
Is this basically saying that you can tell someone else's utility function by demonstrated preference? It sounds a lot like that.
1endoself13y
No, because people are not completely rational. What I 'really' want to do is what I would do if I were fully informed, rational, etc. Morality is difficult because our brains do not just tell us what we want. Demonstrated preference would only work with ideal agents, and even then it could only tell you what they want most among the possible options.

I didn't understand the phrase "preferences at gradient levels of organization". Can you clarify?

Yes, but I don't like the word evil.

1Will_Newsome13y
2 out of 2 comments thus far complaining about this means I have to change it... /sigh. I like the word evil because good/evil and good/bad (depending on context) is how most of the world thinks (at one level of abstraction at least), and in meta-ethics it's not entirely obvious you're allowed to just throw away such value judgments as the results of misguided human adaptations.
2Kevin13y
I feel like the word "evil" is only coherent under absolute morality (which I'm pretty sure is wrong) and falls apart trivially under relative morality. Just because it is regularly used as a metaphor by the population doesn't seem worth the linguistic precision lost by a concept so widely misunderstood. This post is slightly less interesting with the word evil removed because negative utility is harder to feel for most people, but it makes more sense to me. I would be interested in hearing you more clearly define the nature of evil as negative utility and having a comment thread descend into various
0Peterdjones13y
Do you mean absolute or objective morality? The two are not the same (see the WP articles). The commitments you have already made, if true, make that impossible. Utility is clearly relative. You say Evil falls apart under relative interpretations. Therefore, there can be no interpretation of Evil in terms of utility where it doesn't fall apart.

The original dust speck vs. torture problem isn't probabilistic. You're asked to choose whether to dust-speck 3^^^3 people, or to torture someone; not whether to do it to yourself.

If we reformulate it to apply to the person making the decision, we should be careful - the result may not be the same as for the original problem. For instance, it's not clear that making a single decision to dust-speck 3^^^3 people is the same as making 3^^^3 independent decisions to dust-speck each of them. (What do you do when the people disagree?)

But... that the majority of people believe something doesn’t make it right, and that the majority of people prefer something doesn’t make it right either.

Gotta love how this sentence is perfectly clear even though the word right means two different things (‘true’ and ‘good’ respectively) in two seemingly parallel places.

"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for

I am careful to avoid putting people in a position of such literal moral hazard. That is, where people I care about will end up having their current preferences better satisfied by having different preferences than their current preferences. I don't average.

1lessdazed13y
I'm confused. All people are always in a position to have current preferences better satisfied by having somewhat different preferences, no? I have no doubt the case you're thinking of meets the criteria for "That is...". One mistake I have made recently is to think my description fit reality because an important scenario fit according to the description, as I diligently checked...but so did every other outcome. Perhaps I am oversensitive right now to seeing this mistake around me, perhaps this is a false positive, or a true positive I wouldn't have otherwise spotted, or even a true positive I would have without having experience with that mistake - seeing the flaws in others' arguments is so very much easier than seeing them in one's own. This is particularly true of gaps, which one naturally fills in.

If you could share some examples of when you were put in the position of putting people in the position of moral hazard, that would be great.
3wedrifid13y
Preferences A will be more satisfied if the agent actually had preferences B than they will be if they actually have preferences A. So the way you get what you would have wanted is by wanting something different. For example, if I have a preference for '1' but I know that someone is going to average my preferences with someone who prefers 0 then I know I will make '1' happen by modifying myself to prefer '2' instead of '1'. So averaging sucks.
1Will_Newsome13y
Yeah, evolutionary (in the Universal Darwinian sense that includes Hebbian learning) incentives for a belief, attention signal, meme, or person to game differential comparisons made by overseer/peer algorithms (who are themselves just rent-seeking half the time) whenever possible are a big source of dukkha (suffering, imperfection, off-kilteredness). An example at the memetic-societal level: http://lesswrong.com/lw/59i/offense_versus_harm_minimization/3y0k .

In the torture/specks case it's a little tricky. If no one knows that you're going to be averaging their preferences and won't ever find out, and all of their preferences are already the result of billions of years of self-interested system-gaming, then at least averaging doesn't throw more fuel on the fire. Unless preferences have evolved to exaggerate themselves to game systems-in-general due to incentives caused by the general strategy of averaging preferences, in which case you might want to have precommitted to avoid averaging. Of course, it's not like you can avoid having to take the average somewhere, at some level of organization...
0Perplexed13y
Averaging by taking the mean sucks. Averaging by taking the median sucks less. It is a procedure relatively immune to gaming by would-be utility monsters. The median is usually the 'right' utilitarian algorithm in any case: it minimizes the total collective distance from the chosen 'average' point, whereas the mean minimizes the total collective distance^2 from it. There is no justification for squaring.
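A small sketch of the "relatively immune to gaming" claim, with made-up ideal points (the numbers and the agents are purely illustrative):

```python
from statistics import mean, median

honest = [0, 1, 4]                    # three agents' honest ideal points
print(mean(honest), median(honest))   # ≈ 1.67 and 1

# The agent at 4 exaggerates to 100, hoping to drag the aggregate its way.
gamed = [0, 1, 100]
print(mean(gamed), median(gamed))     # ≈ 33.67 (moved a lot) and 1 (unchanged)
```

Exaggeration moves the mean without limit, but it cannot move the median past the neighboring agent's position, which is what blunts a would-be utility monster.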
4magfrump13y
Is there a justification for not-squaring? What's the appropriate metric on the space of preferences? This seems like something people would have different opinions about; e.g. "People who are smart should have more say!" "People who have spent more time self-reflecting should have more say!" "People who make lifestyle choices like this should be weighted more heavily!" "People who agree with me should have more say!" Depending on the distribution, squaring could be better, because more might be lost as you get further away. And of course you can only take the median if your preferences are one dimensional.
0Perplexed13y
Personally, I am unconvinced that there is any fundamental justification for considering anyone's utility but one's own. But, if you have reason to respect the principles of democracy, the median stands out as the unique point acceptable to a majority. That is, if you specify any other point, a majority would vote to replace that point by the median. That depends on what kinds of preferences you are comparing. If you are looking at the preferences of a single person, the standard construction of that person's utility function sets the "metric". But if you attempt to combine the preferences of two people, you either need to use the Nash Bargaining solution or Harsanyi's procedure for interpersonal comparison. The first gives a result that is vaguely median-like. The second gives an answer that is suitable for use with the mean.