Morality is Awesome

86 [deleted] 06 January 2013 03:21PM

(This is a semi-serious introduction to the metaethics sequence. You may find it useful, but don't take it too seriously.)

Meditate on this: A wizard has turned you into a whale. Is this awesome?

Is it?

"Maybe? I guess it would be pretty cool to be a whale for a day. But only if I can turn back, and if I stay human inside and so on. Also, that's not a whale.

"Actually, a whale seems kind of specific, and I'd be surprised if that was the best thing the wizard can do. Can I have something else? Eternal happiness maybe?"

Meditate on this: A wizard has turned you into orgasmium, doomed to spend the rest of eternity experiencing pure happiness. Is this awesome?

...

"Kind of... That's pretty lame actually. On second thought I'd rather be the whale; at least that way I could explore the ocean for a while.

"Let's try again. Wizard: maximize awesomeness."

Meditate on this: A wizard has turned himself into a superintelligent god, and is squeezing as much awesomeness out of the universe as it could possibly support. This may include whales and starships and parties and Jupiter brains and friendship, but only if they are awesome enough. Is this awesome?

...

"Well, yes, that is awesome."


What we just did there is called Applied Ethics. Applied ethics is about what is awesome and what is not. Parties with all your friends inside superintelligent starship-whales are awesome. ~666 children dying of hunger every hour is not.

(There is also normative ethics, which is about how to decide if something is awesome, and metaethics, which is about something or other that I can't quite figure out. I'll tell you right now that those terms are not on the exam.)

"Wait a minute!" you cry, "What is this awesomeness stuff? I thought ethics was about what is good and right."

I'm glad you asked. I think "awesomeness" is what we should be talking about when we talk about morality. Why do I think this?

  1. "Awesome" is not a philosophical landmine. If someone encounters the word "right", all sorts of bad philosophy and connotations send them spinning off into the void. "Awesome", on the other hand, has no philosophical respectability, hence no philosophical baggage.

  2. "Awesome" is vague enough to capture all your moral intuition by the well-known mechanisms behind fake utility functions, and meaningless enough that this is no problem. If you think "happiness" is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw it up.

  3. If you do manage to actually implement "awesomeness" as a maximization criterion, the results will be actually good. That is, "awesome" already refers to the same things "good" is supposed to refer to.

  4. "Awesome" does not refer to anything else. You think you can just redefine words, but you can't, and this causes all sorts of trouble for people who overload "happiness", "utility", etc.

  5. You already know that you know how to compute "Awesomeness", and it doesn't feel like it has a mysterious essence that you need to study to discover. Instead it brings to mind concrete things like starship-whale math-parties and not-starving children, which is what we want anyways. You are already enabled to take joy in the merely awesome.

  6. "Awesome" is implicitly consequentialist. "Is this awesome?" engages you to think of the value of a possible world, as opposed to "Is this right?" which engages you to think of virtues and rules. (Those things can be awesome sometimes, though.)

I find that the above is true about me, and is nearly all I need to know about morality. It handily inoculates against the usual confusions, and sets me in the right direction to make my life and the world more awesome. It may work for you too.

I would append the additional facts that if you wrote it out, the dynamic procedure to compute awesomeness would be hellishly complex, and that right now, it is only implicitly encoded in human brains, and nowhere else. Also, if the great procedure to compute awesomeness is not preserved, the future will not be awesome. Period.

Also, it's important to note that what you think of as awesome can be changed by considering things from different angles and being exposed to different arguments. That is, the procedure to compute awesomeness is dynamic and created already in motion.

If we still insist on being confused, or if we're just curious, or if we need to actually build a wizard to turn the universe into an awesome place (though we can leave that to the experts), then we can see the metaethics sequence for the full argument, details, and finer points. I think the best post (and the one to read if only one) is joy in the merely good.

Comments (437)

Comment author: seanwelsh77 01 May 2013 12:11:47AM 0 points [-]

According to Leibniz, this is the most awesome of all possible worlds.

Comment author: Will_Newsome 01 May 2013 02:58:32AM *  6 points [-]

(This is not a good characterization of Leibniz's actual conceptual system, for what it's worth;---the arguments that this is the "best of all possible worlds" are quite technical and come from the sort of intuitions that would later inspire algorithmic information theory; certainly neither blind optimism nor psychologically contingent enthusiasm about life's bounties were motivating the arguments. Crucially, "best" or similar, unlike "awesome", is potentially philosophically simple (in the sense of algorithmic information theory), which is necessary for Leibniz's arguments to go through. (This comment is directed more at the general readership than the author of the comment I'm replying to.))

Comment author: seanwelsh77 01 May 2013 04:21:25AM -2 points [-]

My recollection of Leibniz's view is dim but I recollect that the essence of it is that the perfection of the world is a consequence of the perfection of God. It would reflect poorly on the Omnipotence, Omniscience, Benevolence & Supreme Awesomeness &c of the Deity and Designer if he bashed out some second-rate less than perfectly good (or indeed merely averagely awesome) world. For the benefit of the general readership, the book to read on this is Candide by Voltaire. You will never see rationalists in quite the same way again... :-)

Link to Candide

Comment author: Jayson_Virissimo 01 May 2013 04:38:47AM *  4 points [-]

My recollection of Leibniz's view is dim but I recollect that the essence of it is that the perfection of the world is a consequence of the perfection of God. It would reflect poorly on the Omnipotence, Omniscience, Benevolence & Supreme Awesomeness &c of the Deity and Designer if he bashed out some second-rate less than perfectly good (or indeed merely averagely awesome) world. For the benefit of the general readership, the book to read on this is Candide by Voltaire. You will never see rationalists in quite the same way again... :-)

I think this comment reinforces Will_Newsome's point. The textbook Rhetoric, Logic, and Argumentation: A Guide for Student Writers by Magedah Shabo (quite correctly) uses Voltaire's Candide as the very first example of a straw man fallacy on page 95.

Comment author: [deleted] 01 May 2013 04:47:12AM 1 point [-]

Nice catch!

Comment author: Eliezer_Yudkowsky 01 May 2013 02:25:53AM 5 points [-]

Falsified by diarrhea. Next!

Comment author: mare-of-night 01 May 2013 12:31:00AM 2 points [-]

I don't agree with Leibniz, but I do find his "best of all possible worlds" concept really useful for talking about what utilitarians try to do.

Comment author: MinibearRex 18 January 2013 09:58:29PM 1 point [-]

I tend to use the word fun.

Comment author: hankx7787 17 January 2013 12:36:45PM *  1 point [-]

I'm sorry, this is all I could think about the whole time reading your post: http://www.youtube.com/watch?v=3b1iwLIMmRQ

Seriously though, this terminology has been employed by Starglider in SL4 circles for ages. His explanation of a positive Singularity would be something like, "Everything gets really, really awesome really really fast." I do think it's a great word, despite the negative connotations.

Comment author: [deleted] 17 January 2013 02:56:43PM 0 points [-]

Seriously though, this terminology has been employed by Starglider in SL4 circles for ages.

That's good to know.

Comment author: Qiaochu_Yuan 13 January 2013 02:59:04AM 18 points [-]

So it seems to me that the positive responses to this post have largely been of the form "hey, this is a useful intuition pump!" and the negative responses to this post have largely been of the form "hey, this is a problematic theory of morality." For what it's worth, my response was in the former camp, so I'd like to say a little more in its defense.

One useful thing that using the word "awesome" instead of the word "moral" accomplishes is that it redefines the search space of moral decisions. The archetypal members of the category "moral decisions" involve decisions like whether to kill, whether to steal, or whether to lie. But using the word "awesome" makes it easier to realize that a much larger class of decisions can be thought of as moral decisions, such as what kind of career to aim for.

Comment author: [deleted] 09 March 2013 05:23:46PM 4 points [-]

whether to kill, whether to steal, or whether to lie

With the archetypal answers being "no". Perhaps the word "morality" cues proscriptive, inhibitory, avoidance thoughts for you, while awesomeness cues prescriptive, excitatory, attractive ones.

Comment author: lukeprog 13 January 2013 07:49:39AM 0 points [-]

A good point.

Comment author: Rubix 11 January 2013 08:46:18PM *  6 points [-]

"Morality is awesome", as a statement, scans like "consent is sexy" to me. Neither of these statements is true enough to be useful except as signalling or a personal goal ("I would like to find X thing I believe to be moral more awesome, so as to hack my brain to be more moral").

In some cases of assessing morality/awesomeness or consent/sexiness correlation, one would sometimes have to lie about their awesomeness/sexiness preferences, and ignore those preferences in order to be a Perfectly Moral Good Individual who does not Like Evil Things.

Comment author: [deleted] 11 January 2013 09:06:23PM 4 points [-]

"Morality is awesome", as a statement, scans like "consent is sexy" to me.

It was secretly meant to be parsed the other way: "awesome is morality". Sorry to confuse.

It's not about signalling, it's supposed to be an entirely personal thing.

It's not about hacking your brain to find your current conception of morality more awesome either. It's about flushing out your current conception of morality and rebuilding it from intuition without interference from Deep Wisdom or philosophical cached thoughts.

In some cases of assessing ... one would sometimes have to lie ...in order to be a Perfectly Moral Good Individual who does not Like Evil Things.

I assume the capitals are about signaling "goodness". Sometimes one will have to lie about what is actually moral, in order to appear "moral". The awesomeness basis is orthogonal to this, except that it seems to make the difference between what is actually good and "morality" more explicit.

Comment author: Rubix 11 January 2013 09:30:24PM *  2 points [-]

I assume the capitals are about signaling "goodness"

I use Meaningful Initial Caps to communicate tone, but recognize that it's nonstandard. Sorry for any confusion.

So as far as I can tell, you're saying that "awesomeness" is a good basis for noticing what one's brain currently considers moral, so it can then rebuild its definitions from there.

To extend the metaphor, "sexiness is (perceived by the intuitive parts of your brain, absent intervention from moralizing or abstract-cognition parts, as) consent" is a good thing to pay attention to, so you can know what that part of you actually cares about, which gives you new information that isn't simply from choosing a side on the "Sexiness is about evopsych and golden ratios and trading meat for sex!" versus "Sexiness is about communication and queer theory praxis and bucking stereotypes!" battle.

What I'm curious about is:

rebuilding it from intuition without interference from Deep Wisdom or philosophical cached thoughts.

What, then, do you rebuild your current conception of morality from? "Blowing up people, when I have vague evidence that they're mooks of the Forces of Evil, by the dozens, is a bad idea, even though it seems awesome" seems like a philosophical cached thought to me. Do you think it's something else?

Counterfactual terrorism - "but those mooks may not be mooks!" - isn't a good tool for discerning actual bad ideas.

If I respond to "Consent is sexy!" by saying "But some of my brain doesn't think that!", noticing what those brainbits actually think, then change those brainbits to find sexy what I think of as "consent", I'm not in a very different situation from the person who's cheering blindly for consent being sexy. I just believe my premise more on the ground level, which will blind me to ways in which my preconceived notions of consent might suck.

In other words, both my intuitive models of awesomeness and my explicit models of morality might be lame in many invisible ways. What then?

Comment author: [deleted] 11 January 2013 11:02:28PM 0 points [-]

I use Meaningful Initial Caps to communicate tone, but recognize that it's nonstandard. Sorry for any confusion.

I recognize the idiom (I've read most of c2 wiki, and other places where such is used), just unsure how to parse it in this case. The closest match of "Perfectly Moral Good Individual" is a noun emphasizing apparent nature, rather than true nature.

Or did you mean "ignore those preferences in order to be a Perfectly Moral Good Individual who does not Like Evil Things." to be taken literally in the sense that you have to lie about something to be moral? That seems odd. Lie to who?

What, then, do you rebuild your current conception of morality from? "Blowing up people, when I have vague evidence that they're mooks of the Forces of Evil, by the dozens, is a bad idea, even though it seems awesome" seems like a philosophical cached thought to me. Do you think it's something else?

Yes, it's a cached thought, but one that has a solid justification that is easy to port. I have no trouble with bringing those over. The ones the "switch to awesome" procedure targets are cached thoughts like "I am confused about morality", or the various bits of Deep Wisdom that act as the explosive in the philosophical landmine.

(Though of course many people in this thread managed to port their confusion and standard antiwisdom as well.)

The fact that you were forced to explicitly import "this is a bad idea because of X and Y" shows that it is generally working.

In other words, both my intuitive models of awesomeness and my explicit models of morality might be lame in many invisible ways. What then?

Not sure what you are getting at here.

Comment author: Alicorn 11 January 2013 06:41:35PM 5 points [-]

I upvoted this post because it was clear, interesting, and relatively novel, but I'm concerned that it could tend to lead to what I'm going to call "narrative bias" even though I think that already means something.

Imagine someone who's living a fairly mediocre life. Then, they get attacked - mugged or something. This isn't fun for them, but they acquire a wicked keen scar, lots of support from their friends, and a Nemesis who gives them Purpose in Life. They spend a long time hunting their nemesis, acquiring skills to do so, etc. etc., and eventually there is a kickass showdown where the nemesis - fairly old by this point, wasn't going to last long even absent violence - is taken down.

Or, for a simpler case: the death of Batman's parents. Batman's parents' death was not particularly awesome, but Batman got really awesome as a result.

It is not moral to attack mediocre people or orphan impressionable rich children, regardless.

I dunno, maybe this is just me complaining about consequentialism-in-general again with a different vocabulary.

Comment author: TheOtherDave 11 January 2013 07:52:01PM 3 points [-]

I dunno, maybe this is just me complaining about consequentialism-in-general again with a different vocabulary.

(nods) I think so. Supposing that Bruce Wayne being Batman is a good thing, and supposing that his parents being killed was indispensable to him becoming Batman, then a consequentialist should endorse his parents having been killed. (Of course, we might ask why on earth we're supposing those things, but that's a different question.)

Comment author: Qiaochu_Yuan 11 January 2013 08:15:21PM 3 points [-]

Supposing that Bruce Wayne being Batman is a good thing, and supposing that his parents being killed was indispensable to him becoming Batman, then a consequentialist should endorse his parents having been killed.

Disagree. P(parents killed | becoming like Batman) being high doesn't imply that P(becoming like Batman | parents killed) is high.

Comment author: TheOtherDave 11 January 2013 08:46:28PM 3 points [-]

I agree with your assertion, but I suspect we're talking past each other, probably because I was cryptic.

Let me unpack a little, and see if you still disagree.

There's 30-year-old Bruce over there, and we have established (somehow) that he is Batman, that this is a good thing, and that it would not have happened had his parents not been killed. (Further, we have established (somehow) that his parents' continued survival would not have been an even better thing.)

And the question arises, was it a good thing that his parents were killed? (Not, "could we have known at the time that it was a good thing", merely "was it, in retrospect, a good thing?")

I'm saying a consequentialist answers "yes."

If your disagreement still applies, then I haven't followed your reasoning, and would appreciate it if you unpacked it for me.

Comment author: Qiaochu_Yuan 11 January 2013 08:52:02PM *  4 points [-]

As a consequentialist, I think the only good reason to judge past actions is to help make future decisions, so to me the question "was it a good thing that his parents were killed?" cashes out to "should we adopt a general policy of killing people's parents?" and the answer is no. (I think Alicorn agrees with me.)

It seems to me like a bad idea to judge past actions on the basis of their observed results; this leaves you too susceptible to survivorship bias. Past actions should be judged on the basis of their expected results. If I adopt a bad investment strategy but end up making a lot of money anyway, that doesn't imply that my investment strategy was a good idea.

Comment author: TheOtherDave 11 January 2013 09:04:42PM 0 points [-]

OK, that's clear; thanks.

I of course agree that adopting a general policy of killing people's parents without reference to their attributes is a bad idea. It would most likely have bad consequences, after all. (Also, it violates rules against killing, and it's something virtuous people don't do.)

I agree that for a consequentialist, the only good reason to judge past actions is to help make future decisions.

I disagree that the question "was it a good thing that his parents were killed?" cashes out to "should we adopt a general policy of killing people's parents?" I would say, rather, that it cashes out to "should we adopt a general policy of killing people who are similar to Bruce Wayne's parents at the moment of their death?" ("People's parents" is one such set, but not the only one, and I see no reason to privilege it.)

And I would say the consequentialist's answer is "yes, for some kinds of similarity; no, for others." (Which kinds of similarity? Well, we may not know yet. That requires further study.)

Comment author: Qiaochu_Yuan 11 January 2013 09:19:55PM 1 point [-]

"should we adopt a general policy of killing people who are similar to Bruce Wayne's parents at the moment of their death?"

My answer's still no because of my first comment. The death of his parents is only one factor involved in Bruce Wayne's becoming Batman. In Batman Begins, for example, another important factor is his training with the League of Shadows. The latter is not a predictable consequence of the former.

Comment author: TheOtherDave 11 January 2013 10:09:43PM 0 points [-]

Ah, I see your point. Sure, that's true.

Comment author: [deleted] 11 January 2013 06:49:55PM 6 points [-]

Or, for a simpler case: the death of Batman's parents. Batman's parents' death was not particularly awesome, but Batman got really awesome as a result.

It is not moral to attack mediocre people or orphan impressionable rich children, regardless.

If it reliably resulted in more superheroes and Nobel Prize winners and such, I think it would be awesome (and moral) to traumatize kids.

If it's not reliable, and only some crazy black swan, then not.

I dunno, maybe this is just me complaining about consequentialism-in-general again with a different vocabulary.

This does seem to be the substance of your example.

Comment author: Qiaochu_Yuan 11 January 2013 11:54:18PM 4 points [-]

If it reliably resulted in more superheros and nobel-proze winners and such, I think it would be awesome (and moral) to traumatize kids.

Agreed. Most people already agree that it is moral to force kids to go to school for years, which can be a traumatizing experience for some, and school is not even all that reliable at producing what it claims to want to produce, namely productive members of society.

Comment author: [deleted] 12 January 2013 12:02:58AM 0 points [-]

Even if net awesomeness increases though, do awesome ends justify non-awesome means?

Comment author: Qiaochu_Yuan 12 January 2013 12:12:41AM *  6 points [-]

The point of having LW posts around is not to take their titles as axioms and work from there. My hardware, corrupted as it is, has no intrinsic interest in traumatizing children, so I don't suspect my brain of doing something wrong when it tells me "if it were reliably determined that traumatizing children led to awesome outcome X, then we should traumatize children, especially considering we are in some sense already doing this."

In other words, I think an argument against traumatizing children to make superheroes, if it were determined that this would actually work, is either also an argument against mandatory education or else has to explain why it isn't suffering from status quo bias (why are we currently traumatizing children exactly the right amount?).

Edit: I'm not sure I said quite what I meant to say above. Let me say something different: the post you linked to is about how, when humans say things like "doing superficially bad thing X has awesome consequence Y, therefore we should do X" you should be skeptical because humans run on corrupted hardware which incentivizes them to justify certain kinds of superficially bad things. But what you're being skeptical of is the premise "doing superficially bad thing X has awesome consequence Y," or at least the implicit premise that it doesn't have counterbalancing bad consequences. In this discussion nyan_sandwich and I are both taking this premise for granted.

Comment author: PhilGoetz 10 January 2013 11:08:18PM *  21 points [-]

Whether to use "awesome" instead of "virtuous" is the question, not the answer. This is the question asked by Nietzsche in Beyond Good and Evil. If you've gotten to the point where you're set on using "awesome" instead of "good", you've already chosen your answer to most of the difficult questions.

The challenge to awesome theory is the same one it has been for 70 years: Posit a world in which Hitler conquered the world instead of shooting himself in his bunker. Explain how that Hitler was not awesome. Don't look at his outcomes and conclude they were not awesome because lots of innocent people died. Awesome doesn't care how many innocent people died. They were not awesome. They were pathetic, which is the opposite of awesome. Awesome means you build a space program to send a rocket to the moon instead of feeding the hungry. Awesome history is the stuff that happened that people will actually watch on the History Channel. Which is Hitler, Napoleon, and the Apollo program.

If you don't think Hitler was awesome, odds are very good that you are trying to smuggle in virtues and good-old-fashioned good, buried under an extra layer of obfuscation, by saying "I don't know exactly what awesome is, but someone that evil can't be awesome." Hitler was evil, not bad.

You think you can just redefine words, but you can't,

That's exactly right. Including "awesome". Tornadoes, hurricanes, earthquakes, and floods are awesome. A God who will squish you like a bug if you dare not to worship him is awesome, awe-full, and awful.

If you think "happiness" is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw it up.

Saying that it's good because it's vague, because it's harder to screw up when you don't know what you're talking about, is contrary to the spirit of LessWrong.

That is, "awesome" already refers to the same things "good" is supposed to refer to.

Awesome already refers to the same things good is supposed to refer to, for those people who have already decided to use "awesome" instead of "good". The "Is this right?" question that invokes virtues and rules is not a confused notion of what is awesome. It's a different, incompatible view of what we "ought" to do.

Comment author: MugaSofer 13 January 2013 09:05:36PM *  0 points [-]

Whether to use "awesome" instead of "virtuous" is the question, not the answer. This is the question asked by Nietzsche in Beyond Good and Evil.

Awesome doesn't care how many innocent people died. They were not awesome. They were pathetic, which is the opposite of awesome.

[...]

Tornadoes, hurricanes, earthquakes, and floods are awesome. A God who will squish you like a bug if you dare not to worship him is awesome

[...]

Hitler was evil, not bad.

You appear to have invented your own highly specific meaning of "awesome", which appears synonymous with "effective". As such, "awesome" (in my experience generally used as a contentless expression of approval, more or less, with connotations of excitingness) is not fulfilling its intended goal of intuition-pump for you. Poor you. Those of us who use "awesome" in the same way as nyan_sandwich, however, have no such problem.

If you don't think Hitler was awesome, odds are very good that you are trying to smuggle in virtues and good-old-fashioned good, buried under an extra layer of obfuscation

That is explicitly the goal here - to use the vague goodness of "awesome" as a hack to access moral intuitions more directly.

Comment author: Fronken 22 January 2013 04:34:51PM -1 points [-]

Poor you. Those of us who use "awesome" in the same way as nyan_sandwich, however, have no such problem.

Actually, aren't there existing connotations of "awesome" - exciting, dramatic and so on - for everyone?

Comment author: Nick_Tarleton 11 January 2013 08:13:22PM *  3 points [-]

Upvoted; whatever its relationship to what the OP actually meant, this is good.

Saying that it's good because it's vague, because it's harder to screw up when you don't know what you're talking about, is contrary to the spirit of LessWrong.

Reminding yourself of your confusion, and avoiding privileging hypotheses, by using vague terms as long as you remember that they're vague doesn't seem so bad.

Comment author: gwern 11 January 2013 06:51:36PM 6 points [-]

If you don't think Hitler was awesome, odds are very good that you are trying to smuggle in virtues and good-old-fashioned good, buried under an extra layer of obfuscation, by saying "I don't know exactly what awesome is, but someone that evil can't be awesome." Hitler was evil, not bad.

And that you probably haven't watched stuff like Triumph of the Will to understand why Nazi aesthetics and propaganda could be so effective.

Comment author: shminux 11 January 2013 07:08:35PM *  -1 points [-]

Clearly reducing the number of disgusting Untermenschen and increasing the Lebensraum for the master race is awesome if you consider yourself to be one of the latter.

[EDIT] Hmm, feels like a knee-jerk downvote. Maybe I'm missing something.

Comment author: gwern 11 January 2013 09:51:46PM *  9 points [-]

[EDIT] Hmm, feels like a knee-jerk downvote. Maybe I'm missing something.

You totally are. The point of Goetz's comment and mine was not that Hitler was 'awesome' simply because of ordinary in-group/out-group dynamics which apply to like every other leader ever and most of whom are not particularly 'awesome'; the point was that Hitler and the Nazis were unusually 'awesome' in appreciating shock and awe and technocratic superiority and Nazi Science (sneers at unimpressive projects) and geez I even named one of the stellar examples of this, Triumph of the Will, which still remains one of the best examples of the Nazi regime's co-option of scientists and artists and film-makers and philosophers to glorify itself and make it awesome. It's an impressive movie, so impressive that

Riefenstahl's techniques, such as moving cameras, the use of long focus lenses to create a distorted perspective, aerial photography, and revolutionary approach to the use of music and cinematography, have earned Triumph recognition as one of the greatest films in history.

or

The Economist wrote that Triumph of the Will "sealed her reputation as the greatest female filmmaker of the 20th century".[7]

(Oh, how that must burn in the hearts of American feminists: the great female filmmaker is not American, and was a lackey of the Nazis.)

I watched it once, and was very impressed, personally.

Take a moment to savor that. Hitler and Naziism are the ultimate embodiment of evil in the West; modern cinema, invented in America & Europe, is still recognized the world around as one of the quintessentially Western mediums. But here we have a propaganda movie, produced to glorify Hitler and Naziism, 100% glorifying Hitler and Naziism, featuring only Hitler and Naziism, commissioned by Hitler who "served as an unofficial executive producer", etc etc, and not only has it not been consigned to the dust-heap of history, it is still watched, admired, and studied by those who are sworn to hate Naziism and anything even slightly like it, such as eugenics or criticism of Israel.

Now that is "awesome".

Comment author: PhilGoetz 15 January 2013 06:27:55PM 2 points [-]

Minor note: According to an article in Wired recently, the Nazis invented 3D movies.

Comment author: NancyLebovitz 12 January 2013 01:07:01AM 6 points [-]

Also, I had a fit of the far view, and it occurred to me that Germany was rather a medium-sized country (I'm so used to continental superpowers, but the world wasn't always like that), and it tried to become a large country, and it took a big alliance of the other major powers to take it down. This is awesome from a sufficient distance.

Comment author: prase 16 January 2013 08:16:47PM *  2 points [-]

They had a population of 70 million (probably after eating Austria), which was quite a lot at the time, compared to Britain's 48 million and about as many in France. The only more populous independent countries in 1939 were China, the USA, the USSR, and perhaps Japan.

Comment author: DaFranker 16 January 2013 08:31:29PM *  0 points [-]

Meh. Japan does seem like it was higher, according to projections. (http://www.wolframalpha.com/input/?i=country+populations+1939)

Comment author: prase 16 January 2013 08:46:54PM *  1 point [-]

Wikipedia says 73 million in 1940. For Germany it says 69.3 million in 1939 and almost exactly 70 million, but apparently without the annexed populations of Austria and Sudetenland, which I estimate at about 10 million or more.

Edit: not sure whether to include the population of Korea into Japan's statistics, which would make Japan more populous than Germany with certainty. The 73 million figure is without Korea.

Comment author: gwern 16 January 2013 02:35:38AM 1 point [-]

That was the German narrative, was it not? Starting from the avowed English-French 'encircling' of Germany - why do you think they were allied in the first place with decrepit Poland?

Comment author: Fronken 22 January 2013 04:37:44PM 0 points [-]

I don't understand why this was downvoted :( I upvoted it because it's a good point and true. Is it too understanding of the Nazis?

Comment author: gwern 22 January 2013 04:53:13PM 2 points [-]

Probably. I remember a similar conversation where I posted a Wittgenstein quote lambasting mindless British nationalism in a WWII context, and VladimirM stepped in to defend said nationalism to many upvotes.

Comment author: Fronken 22 January 2013 05:40:31PM -1 points [-]

Not very rational to vote down a fact >:( It's not even politics like that one, just the things they believed. Is there any post on bias against the poor Nazis? It seems a bad plan, if you want human rationality, to tar facts about them with the same brush as their evil deeds.

Comment author: gwern 22 January 2013 05:59:46PM 0 points [-]

Is there any post on bias against the poor Nazis

Not really. It falls under standard biases like 'horns effect' (dual of 'halo effect'). Sometimes LWers point out in comments good aspects of the Nazis, like their war on cancer & work on anti-smoking, or animal cruelty laws, but no one's written any sort of comprehensive discussion of this.

The closest I can think of is Yvain's classic post on religion: http://lesswrong.com/lw/fm/a_parable_on_obsolete_ideologies/

Comment author: Desrtopa 11 January 2013 06:21:25PM 1 point [-]

Don't look at his outcomes and conclude they were not awesome because lots of innocent people died. Awesome doesn't care how many innocent people died. They were not awesome. They were pathetic, which is the opposite of awesome. Awesome means you build a space program to send a rocket to the moon instead of feeding the hungry. Awesome history is the stuff that happened that people will actually watch on the History Channel. Which is Hitler, Napoleon, and the Apollo program.

I would say that the world being taken over by an evil dictator is a lot less awesome than the world being threatened by an evil dictator who's heroically defeated.

Comment author: TimS 11 January 2013 03:46:29PM *  1 point [-]

I think your post is aimed too high. Nyan is not trying to resolve the virtue ethics / deontology / consequentialist dispute.

Instead, he's trying to use vocabulary to break naive folks out of the good --> preferences --> good circle.

At that level of confusion, the distinction between good, virtue, or utility is not yet relevant. Only after people stop defining good in an essentially circular fashion is productive discussion of different moral theories even possible.

Attacking Nyan for presuming moral realism is fighting the hypothetical.

Comment author: Eliezer_Yudkowsky 11 January 2013 03:38:25PM 8 points [-]

I sometimes get the impression that I am the only person who reads MoR who actually thinks MoR!Hermione is more awesome than MoR!Quirrell. Of course I have access to at least some info others don't, but still...

Comment author: wedrifid 16 January 2013 08:32:34AM *  4 points [-]

I sometimes get the impression that I am the only person who reads MoR who actually thinks MoR!Hermione is more awesome than MoR!Quirrell.

Canon!Luna is more awesome than MoR!Hermione too.

However, a universe with MoR!Hermione in it is likely to be far more awesome than a universe with canon!Luna substituted in. MoR!Hermione is a heck of a lot more useful to have around for most purposes, including the protection of awesome things such as canon!Luna.

MoR!Quirrel certainly invokes "Fictional Awesomeness". That thing that makes many (including myself) think "Well he's just damn cool and I'm glad he exists in that fictional universe (which can't have any direct effect on me)". Like Darth Vader is way more awesome than Anakin Skywalker even though being a whiny brat is somewhat less dangerous than being a powerful, gratuitously evil Sith Lord. I personally distinguish this from the 'actual awesomeness' of the kind mentioned here. I'm not sure to what extent others consider the difference.

Comment author: PhilGoetz 15 January 2013 06:32:15PM 0 points [-]

I'll hazard a guess that your concepts have more internal structure than those of most people. You've probably looked at the interactions between the concepts you've learned, analyzed them, and refined them to be more intensional and less extensional. Whereas for most people, the concept "awesome" is a big bag of all the stuff they were looking at when someone said "Awesome!"

Comment author: Nick_Tarleton 11 January 2013 08:18:30PM 1 point [-]

I didn't, and still don't... but now I'm a little bit disturbed that I don't, and want to look a lot more closely at Hermione for ways she's awesome.

Comment author: Rubix 11 January 2013 08:32:37PM *  6 points [-]

Quirrell scans, to me, as more awesome along the "probably knows far more Secret Eldritch Lore than you" and "stereotype of a winner" axes, until I remember that Hermione is, canonically, also both of those things. (Eldritch Lore is something one can know, so she knows it. And she's more academically successful than anyone I've ever known in real life.)

So when I look more closely, the thing my brain is valuing is a script it follows where Hermione is both obviously unskillful about standard human things (feminism, kissing boys, Science Monogamy) and obviously cares about morality, to a degree that my brain thinks counts as weakness. When I pay attention, Quirrell is unskillful about tons of things as well, but he doesn't visibly acknowledge that he is/has been unskillful. He also may or may not care about ethics to a degree, but his Questionably Moral Snazzy Bad Guy archetype doesn't let him show this.

It does come around to Quirrell being more my stereotype of a winner, in a sense. Quirrell is more high-status than Hermione - when he does things that are cruel, wrong or stupid he hides it or recontextualizes it into something snazzy - but Hermione is more honorable than Quirrell. She confronts her mistakes and failings publicly, messily and head-on and grows as a person because of that. I think that's really awesome.

Comment author: [deleted] 11 January 2013 07:48:02PM 6 points [-]

Let's say they're different kinds of awesome to me. Overall, I think Quirrell is more awesome... until I remember Hermione is twelve.

Comment author: Vaniver 11 January 2013 03:57:58PM 1 point [-]

Yeah, that sounds like either a miscalibrated sense of awe (i.e. very different priorities), or like a reaction to private information.

Comment author: Eliezer_Yudkowsky 11 January 2013 05:35:43PM 11 points [-]

Well, to a first approximation, on a moral level, Quirrell is who I try not to be and Hermione is who I wish I was, and on the level of intelligence, it's not possible for me to be viscerally impressed with either one's intellect since I strictly contain both. Ergo I find Hermione's choices more impressive than Quirrell's choices.

Comment author: wedrifid 16 January 2013 07:37:22AM 1 point [-]

Quirrell is who I try not to be and Hermione is who I wish I was

Wait, all of her? Including the obnoxious controlling parts?

Comment author: Swimmer963 12 January 2013 02:28:53PM *  10 points [-]

Quirrel strikes me as the sort of character who is intended to be impressive. Pretty much all his characteristics hit my "badass" buttons. The martial arts skills, the powerful magical field brushing at the edges of Harry's little one, etc. However, I wouldn't want to be like Quirrel, and I can't imagine being Quirrel-like and still at all like myself. Whereas Hermione impresses me in the sense of being almost like a version of myself that gets everything I try to be right and is better than me at everything I think matters. Hermione is more admirable to me than Quirrel, but my sense of awe is triggered more by badass-ness than admiration.

Comment author: shminux 11 January 2013 07:20:37PM *  3 points [-]

it's not possible for me to be viscerally impressed with either one's intellect since I strictly contain both

That's probably why. For many mere mortals like myself MoR!Quirrell is simply awesome: competent, unpredictable, in control, a level above everyone else. Whereas MoR!Hermione is, while clever and knowledgeable, too often a damsel in distress, and her thought process, decisions and actions are uniformly less impressive than those of Harry or Quirrell. Not sure if this is intentional or not. At this point I'm rooting for Quirrell to win. Maybe there will be an alternate ending which explores this scenario.

Comment author: TheOtherDave 11 January 2013 07:47:32PM 2 points [-]

Is this simply a case of rooting for whoever looks like they're going to win?

Comment author: PhilGoetz 15 January 2013 06:12:23PM 0 points [-]

I think either Harry will win, or everybody will lose.

Comment author: shminux 11 January 2013 08:18:32PM 1 point [-]

You think that [I think that] Quirrell/Voldemort is going to win? O.O I wish. After all, what's the worst that can happen if he does?

Comment author: TheOtherDave 11 January 2013 08:54:19PM 0 points [-]

Well, I meant the question as a question, not as a rhetorical statement.
That aside, I do think it's possible to be affected by the tendency to admire what appears currently to be the winning team even if I suspect, or even believe, that they will eventually lose. Human knowledge is rarely well-integrated.
That aside, I haven't read HP:MOR in a very long time, so any estimates of who wins I make would be way obsolete. I don't even quite know what Quirrell/Voldemort's "win conditions" are. So I have no idea what can happen if he does.
That said, I vaguely recall EY making statements about writing Quirrell that I took at the time to mean that EY is buying into the sorts of narrative conventions that require Quirrell to not win (though not necessarily to lose).

Comment author: NancyLebovitz 11 January 2013 06:17:08PM 5 points [-]

This surprises me, but I'm not sure what I've mismodelled. To my mind, Hermione is trusting about moral rules in a way that I wouldn't have expected you to like that much, but perhaps it's just a trait that I don't like that much.

Harry seems more awesome to me because he has a strong drive to get to the bottom of things-- not the same thing as intelligence, though it might be a trait that wouldn't be as interesting in an unintelligent character. (Or would it be? I can't think of an author who's tried to portray that.)

Comment author: NancyLebovitz 12 January 2013 01:12:58AM 0 points [-]

Question for those who've tracked MOR more carefully than I have: How much is Harry's curiosity entangled with his desire for power?

Comment author: BerryPick6 11 January 2013 06:32:26PM 5 points [-]

Harry seems more awesome to me because he has a strong drive to get to the bottom of things-- not the same thing as intelligence, though it might be a trait that wouldn't be as interesting in an unintelligent character. (Or would it be? I can't think of an author who's tried to portray that.)

I would be fascinated to read a character who can Get Curious and think skeptically and reflectively about competing ideas, but is only of average intelligence.

Trying to model this character in my head has resulted in some sort of error though, so there's that...

Comment author: Swimmer963 12 January 2013 02:20:27PM 3 points [-]

I can imagine writing this character, because it's the way I feel a lot of the time... Knowing I read some important fact once but not being able to retrieve it, lacking the working memory to do logic problems in my head and having to stop and pull out pen and paper, etc. I'm arguably of somewhat higher than average intelligence, but I'm quite familiar with the feeling of my brain not being good enough for what I want to do.

Comment author: BerryPick6 12 January 2013 02:34:47PM 0 points [-]

This is exactly what I was trying to describe, and this happens to me as well. If you ever do write such a figure, be sure to let me know, I'd like to read about them. :)

Comment author: Swimmer963 12 January 2013 02:49:19PM 2 points [-]

One of my previous novels somewhat touches on this. The main character is quite intelligent, but has grown up illiterate, and struggles with this limitation. If you want to check it out, see here.

Comment author: BerryPick6 12 January 2013 03:00:07PM 1 point [-]

Coincidences are funny: my name happens to be Asher.

I'll put this on my reading list.

Comment author: Izeinwinter 12 January 2013 09:56:58AM 2 points [-]

Those personality traits are not just correlated with intelligence, they almost certainly cause it - thinking is to some degree a skill set, and innate curiosity + introspection + skepticism would result in constant deliberate practice. So those traits + average intelligence can only coexist if the character has recently undergone a major personality change, or suffered brain damage.

Comment author: Kawoomba 11 January 2013 08:03:35PM 0 points [-]

Time to taboo intelligence.

Comment author: TheOtherDave 11 January 2013 07:53:48PM 6 points [-]

Arguably Watson is an attempt at this.

Comment author: deathpigeon 12 January 2013 01:09:28PM 1 point [-]

Except Watson was intended to be of above-average intelligence, but below Sherlock-level intelligence, so he fails on the last count. He was very intelligent, just not as absurdly intelligent as Sherlock, so he appeared to be of average or lower intelligence.

Comment author: [deleted] 11 January 2013 07:52:16PM 2 points [-]
Comment author: NancyLebovitz 11 January 2013 06:54:09PM *  2 points [-]

The Millionaire Next Door may include a bunch of people who can think clearly without being able to handle a lot of complexity.

Comment author: shminux 11 January 2013 07:04:51PM 1 point [-]

The Millionaire Nest Door

Maybe Next Door? Or am I missing something?

Comment author: NancyLebovitz 11 January 2013 07:16:51PM 0 points [-]

Just a typo (now corrected) rather than a joke or reference.

Comment author: Vaniver 11 January 2013 07:04:11PM 2 points [-]

Amazon link. The primary takeaway from the book is that high consumption and high wealth draw from the same resource pool, and so conflict.

In general, I wonder if this shows up as characters who see virtue as intuitive, rather than deliberative. Harry sometimes gets the answer right, but he has to think hard and avoid tripping over himself to get there; Hermione often gets the answer right from the start because she appears to have a good feel for her situation.

Moving back to wealth, and generalizing from my parents, it's not clear to me that they sat down one day and said "you know how we could become millionaires? Not spending a lot of money!" rather than having the "consume / save?" dial in their heads turned towards save, in part because "thrift => wealth" is an old, old association.

If you model intelligence differences as primarily working memory differences, it seems reasonable to me that high-WM people would be comfortable with nuance and low-WM people would be uncomfortable with it; the low-WM person might be able to compensate with external devices like writing things down, but it's not clear they can synthesize things as easily on paper as a high-WM person could do in their head (or as easily as the high-WM person using paper!).

Comment author: Ghatanathoah 11 January 2013 06:22:10AM *  2 points [-]

The challenge to awesome theory is the same one it has been for 70 years: Posit a world in which Hitler conquered the world instead of shooting himself in his bunker. Explain how that Hitler was not awesome. Don't look at his outcomes and conclude they were not awesome because lots of innocent people died. Awesome doesn't care how many innocent people died. They were not awesome. They were pathetic, which is the opposite of awesome.

Can't we resolve this simply by amending the statement to "Morality is awesome for everybody"? Dying pathetically is not an awesome outcome for the people who had to do it. Arguing that innocent people were pathetic actually emphasizes the point. If Hitler's actions made tons of people pathetic instead of awesome, then those actions were most certainly immoral.

Incidentally, I do not expect nyan_sandwich to retitle the OP based on my comment. I think that the "for everybody" part can probably just be implicit.

Comment author: Decius 11 January 2013 06:10:12AM 0 points [-]

The purpose of using awesome instead of good failed in this case. If you think that rocketry is more awesome than genocide is lame, (e.g.), then you think Hitler increased awesomeness.

Comment author: [deleted] 11 January 2013 02:34:37AM 5 points [-]

odds are very good that you are trying to smuggle in virtues and good-old-fashioned good, buried under an extra layer of obfuscation

Exactly right. In fact I do this explicitly, by invoking "fake utility functions" in point 2.

You think you can just redefine words, but you can't,

You're right I'm playing fast and loose a bit here. I guess my "morality is awesome" idea doesn't work for people who are in possession of the actual definition of awesome.

In that case, depending on whether you are being difficult or not, I recommend finding another vaguely good and approximately meaningless word that is free of philosophical connotations to stand in for "awesome", or just following the "if you are still confused" procedure (read metaethics).

Saying that it's good because it's vague, because it's harder to screw up when you don't know what you're talking about, is contrary to the spirit of LessWrong.

Perhaps. I certainly wouldn't endorse it in general. I have inside view reasons that it's a good idea (for me) in this particular case, though; I'm not just pulling a classic "I don't understand, therefore it will work". Have you seen the discussion here?

for those people who have already decided to use "awesome" instead of "good". The "Is this right?" question that invokes virtues and rules is not a confused notion of what is awesome. It's a different, incompatible view of what we "ought" to do.

I'm confused about what you are saying. Here you seem to be identifying consequentialism with "awesome", but above, you used similar phrasings and identified "awesome" with Space Hitler, which nearly everyone (including consequentialists) would generally agree was only good if you don't look at the details (like people getting killed).

Can you clarify?

Comment author: Vaniver 11 January 2013 03:56:06PM 3 points [-]

I'm confused about what you are saying.

Was Space Hitler awesome? Yes. Was Space Hitler good? No. If you say "morality is what is awesome," then you are either explicitly signing on to a morality in which the thing to be maximized is the glorious actions of supermen, not the petty happiness of the masses, or you are misusing the word "awesome."

Comment author: DaFranker 11 January 2013 04:23:17PM *  2 points [-]

Was Space Hitler awesome? Yes. Was Space Hitler good? No.

This doesn't seem to pose any kind of contradiction or problem for the "Morality is Awesome" statement, though I agree with you about the rest of your comment.

Is Space Hitler awesome? Yes. Is saving everyone from Space Hitler such that no harm is done to anyone even more awesome? Hell yes.

Remember, we're dealing with a potentially-infinite search space of yet-unknown properties with a superintelligence attempting to maximize total awesomeness within that space. You're going to find lots of Ninja-Robot-Pirate-BountyHunter-Jedi-Superheroes fighting off the hordes of Evil-Nazi-Mutant-Zombie-Alien-Viking-Spider-Henchmen, and winning.

And what's more awesome than a Ninja-Robot-Pirate-BountyHunter-Jedi-Superhero? Being one. And what's more awesome than being a Ninja-Robot-Pirate-BountyHunter-Jedi-Superhero? Being a billion of them!

Comment author: Vaniver 11 January 2013 04:33:09PM 3 points [-]

Is saving everyone from Space Hitler such that no harm is done to anyone even more awesome? Hell yes.

Suppose a disaster could be prevented by foresight, or narrowly averted by heroic action. Which one is more awesome? Which one is better?

Being a billion of them!

Tvtropes link: Really?

Comment author: DaFranker 11 January 2013 04:45:42PM *  2 points [-]

Tvtropes link: Really?

My numerous words are defeated by your single link. This analogy is irrelevant, but illustrates your point well.

Anyway, that's pretty much all I had to say. The initial argument I was responding to sounded weak, but your arguments now seem much stronger. They do, after all, single-handedly defeat an army of Ninja-Robot-... of those.

Comment author: JGWeissman 11 January 2013 04:45:37PM 3 points [-]

Suppose a disaster could be prevented by foresight, or narrowly averted by heroic action. Which one is more awesome? Which one is better?

Preventing disaster by foresight is more likely to work than narrow aversion by heroic action, so the awesomeness of foresight working gets multiplied by a larger probability than the awesomeness of heroic action working when you decide to take one approach over the other. This advantage of the action that is more likely to work belongs in decision theory, not your utility function. Your utility function just says whether one approach is sufficiently more awesome than the other to overcome its decision-theoretic disadvantage. This depends on the probabilities and awesomeness in the specific situation.
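(The trade-off described here is just an expected-value computation. A toy sketch, with all probabilities and awesomeness values invented purely for illustration:)

```python
# Toy expected-utility comparison of the two rescues discussed above.
# All numbers are made up for illustration, not taken from the post.
p_foresight, u_foresight = 0.75, 12   # quiet prevention: likely, less dramatic
p_heroic, u_heroic = 0.25, 32         # heroic aversion: dramatic, unlikely

ev_foresight = p_foresight * u_foresight   # 9.0
ev_heroic = p_heroic * u_heroic            # 8.0

# The utility function says heroism is "more awesome" per success (32 > 12);
# decision theory still prefers foresight here because it is likelier to work.
best = "foresight" if ev_foresight > ev_heroic else "heroic action"
print(best)   # foresight
```

With different numbers (a sufficiently awesome rescue, or a sufficiently unreliable prevention) the comparison flips, which is the point: the probabilities live in decision theory, the awesomeness lives in the utility function.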

Comment author: jooyous 11 January 2013 04:51:16AM *  2 points [-]

Reading this comment thread motivated me to finally look this up -- the words "cheesy" and "corny" actually did originally have something to do with cheese and corn!

Comment author: summerstay 09 January 2013 06:04:13PM 11 points [-]

Great! This means that in order to develop an AI with a proper moral foundation, we just need to reduce the following statements of ethical guidance to predicate logic, and we'll be all set: 1. Be excellent to each other. 2. Party on, dudes!

Comment author: MugaSofer 10 January 2013 08:46:22AM *  -2 points [-]

He does say that if you need more detailed knowledge you should read the metaethics sequence.

</nitpick>

Comment author: BerryPick6 09 January 2013 06:49:48PM 3 points [-]

Is the first time that movie's ever been mentioned in the context of this site? Well done.

Comment author: Kawoomba 08 January 2013 05:17:49PM 1 point [-]

In a world of hardship and mediocrity (hopefully not yours), even when implementing "degree of awesomeness" on a scale, it may be a bit of a stretch going "Hey, this cubicle job is more awesome than that cubicle job! ... twitching smile, furtive glances around", or even "Awesome, another ration of rice from USAID, that means I may survive yet another day! :-))"

Comment author: noeticthoughts 08 January 2013 01:55:41PM *  5 points [-]

IMHO I think this equating of awesomeness with morality is very wrong. Say those soldiers who shot down a number of innocent civilians (check the vid): it was pretty awesome for them, when it obviously isn't awesome to others. Perhaps we have to respect some universally agreed-upon boundaries without giving exceptions.

Comment author: [deleted] 06 January 2013 09:04:34PM 32 points [-]

[META] Why is this so heavily upvoted? Does that indicate actual value to LW, or just a majority of lurking septemberites captivated by cute pixel art?

It was just hacked out in a couple of hours to organize my thoughts for the meetup. It has little justification for anything, very little coherent overarching structure, and it's not even really serious. It's only 90% true, with many bugs. Very much a worse-is-better sort of post.

Now it's promoted with 50-something upvotes. I notice that I would not predict this, and feel the need to update.

What should I (we) learn from this?

  • Am I underestimating the value of a given post-idea? (i.e. should we all err on the side of writing more?)

  • Are structure, seriousness, watertightness and such trumped by fun and clarity? Is it safe to run with this? This could save a lot of work.

  • Are people just really interested in morality, or re-framing of problems, or well-linked integration posts?

Comment author: abramdemski 08 January 2013 04:50:58AM 2 points [-]

Am I underestimating the value of a given post-idea? (i.e. should we all err on the side of writing more?)

I would tentatively advocate this (especially since there is already a system in place for filtering content into 'promoted' material for those who want a slower stream). More writing => more good writing.

Comment author: Mass_Driver 08 January 2013 12:42:29AM 23 points [-]

Given at least moderate quality, upvotes correlate much more tightly with accessibility / scope of audience than with quality of writing. Remember, the article score isn't an average of hundreds of scalar ratings -- it's the sum of thousands of ratings of [-1, 0, +1] -- and the default rating of anyone who doesn't see, doesn't care about, or doesn't understand the thrust of a post is 0. If you get a high score, that says more about how many people bothered to process your post than about how many people thought it was the best post ever.
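(That scoring model is easy to sketch. The vote counts below are invented purely for illustration:)

```python
# Toy model of the scoring rule described above: a score is the sum of
# many votes in {-1, 0, +1}, so reach matters more than peak enthusiasm.
def score(votes):
    return sum(votes)  # each vote is -1, 0, or +1

broad_post = [+1] * 60 + [0] * 940 + [-1] * 5   # seen by many, mildly liked
narrow_post = [+1] * 15 + [0] * 85              # small audience, loved it

print(score(broad_post), score(narrow_post))   # 55 15
```

The broadly-seen post wins even though a far larger fraction of the narrow post's audience upvoted it, because the default "didn't see / didn't care" vote of 0 costs nothing.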

Comment author: Mass_Driver 25 January 2013 09:58:27PM 2 points [-]

Ironically, this is my most-upvoted comment in several months.

Comment author: khafra 10 January 2013 01:19:15PM 4 points [-]

Yes, to counter this effect I tend to upvote the math-heavy decision theory posts and comment chains if I have even the slightest idea what's going on, and the Vladimirs seem to think it's not stupid.

Comment author: Peterdjones 08 January 2013 12:12:19AM *  -2 points [-]

Why is this so heavily upvoted?

LW is broken. Aspiring rationalists should welcome argument contrary to their biases, but actually downvote it. Aspiring rationalists should not welcome pandering, dumbed-down ideas that don't really solve problems or challenge them, but do.

Comment author: MugaSofer 08 January 2013 11:41:15AM 3 points [-]

Have you considered the possibility that some people actually found this useful?

Comment author: Peterdjones 08 January 2013 11:48:46AM *  -2 points [-]

You seem to be assuming that if someone judges something to be useful, that is the last word on the subject.

Comment author: MugaSofer 08 January 2013 11:58:03AM 1 point [-]

I am assuming that if someone judges something to be useful, they are likely to upvote based on that. This is an alternative hypothesis to the one presented in your comment, that "aspiring rationalists" upvoted this because it is a "pandering, dumbed-down idea" that "do[es]n't really solve problems or challenge them".

Comment author: Peterdjones 08 January 2013 12:00:54PM *  -2 points [-]

Not really different. If you pander to someone by presenting dumbed-down ideas as profound, they are liable to like them and judge them to be useful. People judge junk food to be worth eating, after all.

Comment author: MugaSofer 08 January 2013 12:13:25PM 0 points [-]

they are liable to like them and judge them to be useful. People judge junk food to be worth eating, after all.

Are you arguing that judgments of usefulness have, in this case, (and others?) been distorted by the halo effect? Or have I misunderstood this comment?

Comment author: Ritalin 07 January 2013 09:55:07PM 11 points [-]

Karma votes on this site are fickle, superficial, and reward perceived humour and wit much more than they do hard work and local unconventionality; you're allowed to be unconventional to the world-at-large, even encouraged to, if it's conventional in LW; the reverse is not encouraged.

Your work was both novel and completely in line with what is popular here, and so it thrived. Try to present a novel perspective arguing against things that are unanimously liked yet culture-specific, such as sex or alcohol or sarcasm or Twitter or market economies as automatic optimizers, and you might not fare as well.

You can pick up on those trends by following the Twitter accounts of notable LWers, watch them pat each other on the back for expressing beliefs that signal their belonging to the tribe, and mimic them for easy karma, which you can stock reserves of for the times where you feel absolutely compelled to take a stand for an unpopular idea.

This problem is endemic to Karma systems and makes LW no worse than any other community. It's just that one would expect them to hold themselves to a higher standard.

Awesome post, BTW. Nice brain-hacking.

Comment author: Peterdjones 08 January 2013 12:30:39AM -2 points [-]

This problem is endemic of Karma systems and makes LW no worse than any other community. It's just that one would expect them to hold themselves to a higher standard.

hear, hear!

Comment author: [deleted] 08 January 2013 12:05:52AM 6 points [-]

Yes, humour tends to be upvoted a lot, but it's just not true that you can never get good karma by arguing against the LW majority position. For example, the most upvoted top-level post ever expresses scepticism about the Singularity Institute.

Comment author: Ritalin 09 January 2013 12:12:51PM 0 points [-]

I never said "never"; I implied that it's not the most probable outcome.

Comment author: prase 16 January 2013 08:29:22PM 1 point [-]

You indeed didn't say "never", but the implied meaning was closer to it than to the "not the most probable outcome" interpretation.

Also, saying that LW tends to upvote LW-conventional writings seems a little tautological, unless you have got a karma-independent way to assess LW-conventionality. Do you?

Comment author: wedrifid 17 January 2013 03:38:24AM 3 points [-]

Also, saying that LW tends to upvote LW-conventional writings seems a little tautological, unless you have got a karma-independent way to assess LW-conventionality. Do you?

Count the number of comments that express the same notion. Or count the number of users that express said thought and contrast it with the number of users that contradict the thought.

Comment author: Ritalin 17 January 2013 10:04:20AM 1 point [-]

Thank you, werd.

You indeed didn't say "never", but the implied meaning was closer to it than to the "not the most probable outcome" interpretation.

This is my failure as a communicator and I apologize for it.

Comment author: Kindly 07 January 2013 10:59:33PM 1 point [-]

It's just that one would expect them to hold themselves to a higher standard.

I notice that you're discussing what "they" do on LW. Not that I can honestly object; I'm often tempted to do so myself. It really helps when trying to draw the line between my own ideas, and all those crazy ideas everyone else here has.

But I think we are both actually fairly typical LWers, which means that it would be more correct to say something like "It's just that one would expect us to hold ourselves to a higher standard". This is a very different thought somehow, more than one would expect from a mere pronoun substitution.

Comment author: Ritalin 09 January 2013 12:11:21PM 1 point [-]

"Them" as in "the rest of the community, excepting the exceptions". I hold myself to those standards just fine, and there may well be others who do.

Comment author: MugaSofer 08 January 2013 11:48:45AM -1 points [-]

It seems to me that the change is that with "us" the speaker is assumed to identify with the group under discussion. Specifically, it seems like they consider(ed) LW superior, and are disappointed that we have failed in this particular; whereas with "they" it seems to be accusing us of hypocrisy.

Comment author: TheOtherDave 07 January 2013 11:54:18PM 1 point [-]

Relatedly, I often find replacing "one would expect" with "I expect" has similar effects.
Especially when it turns out the latter isn't true.

Comment author: [deleted] 07 January 2013 09:31:10PM *  5 points [-]

It's an interesting perspective and it presents previous thinking on the subject in a more accessible manner.

Hence, one upvote. I don't know that it's worth sixty-three upvotes (I don't know that it's not), but I didn't upvote it sixty-three times. Also, I see from the comments that it's encouraged some interesting conversations (and perhaps some reading of the meta-ethics sequence, which I think is actually fairly well written if a little dense).

In other words, congratulations on writing something engaging! It's harder than it looks.

Comment author: Eliezer_Yudkowsky 07 January 2013 01:39:12PM 8 points [-]

I have typically been awful at predicting which parts of HPMOR people would most enjoy. I suggest relaxing and enjoying the hedons.

Comment author: JoachimSchipper 07 January 2013 01:31:37PM 5 points [-]

For me, high (insight + fun) per (time + effort).

Comment author: [deleted] 07 January 2013 11:07:05AM 3 points [-]

I upvoted it because I loved what you did. (I did feel it was, er... awesome, but before reading that comment I couldn't have put it down in words.)

Comment author: RobbBB 06 January 2013 11:02:38PM *  36 points [-]
  1. Because you make few assertions of substance, there is a lot of empty space (where people, depending on their mood, may insert either unrealistically charitable or unrealistically uncharitable reconstructions of reasoning) and not a lot of specific content for anyone to disagree with. In contrast, if I make 10 very concrete and substantive suggestions in a post, and most people like 9 of them but hate the 10th, that could make them very reluctant to upvote the post as a whole, lest their vote be taken as a blanket endorsement for every claim.

  2. Because the post is vague and humorous, people leave it feeling vaguely happy and not in a mood to pick it apart. Expressing this vague happiness as an upvote reifies it and makes it more intense. People like 'liking' things they like.

  3. The post is actually useful, as a way of popularizing some deeper and more substantive meta-ethical and practical points. Some LessWrongers may be tired of endlessly arguing over which theory is most ideal, and instead hunger for better popularizations and summaries of the extant philosophical progress we've already made, so that we can start peddling those views to the masses. They may view your post as an important step in that Voltairean process, even if it doesn't advance the distinct project of constructing the substance-for-future-popularization in the first place.

  4. Meta-ethics is hard. There are very few easy answers, and there's a lot of disagreement. Uncertainty and disagreement, and in general lack of closure, create a lot of unpleasant dissonance. Your article helps us pretend that we can ignore those problems, which alleviates the dissonance and makes us feel better. This would help explain why applause-lighting posts in areas like meta-ethics or the hard problem of consciousness see better results than applause-lighting posts in areas where substantive progress is easier.

  5. The post invites people to oversimplify their utility calculations via the simple dichotomy 'is it awesome, or isn't it?'. Whether or not your post is useful, informative, insightful, etc., it is quite clearly 'awesome,' as the word is ordinarily used. So your post encourages people to simplify their evaluation procedure in a way that favors the post itself.

Comment author: shminux 06 January 2013 11:01:11PM *  12 points [-]

As one of the upvoters, here is my thought process, as far as I recall it:

  1. WTF?!! What does it even mean?

  2. Wait, this kind of makes sense intuitively.

  3. Hey, every example I can try actually works. I wonder why.

  4. OK, so the OP suggests awesomeness as an overriding single intuitive terminal value. What does he mean by "intuitive"?

  5. It seems clear from the comments that any attempt to unpack "awesome" eventually fails on some example, while the general concept of perceived awesomeness doesn't.

  6. He must be onto something.

  7. Oh, and his approach is clearly awesome, so the post is self-consistent.

  8. Gotta upvote!

  9. Drat, I wish I made it to the meetup where he presented it!

Comment author: [deleted] 07 January 2013 01:30:06AM 7 points [-]

It seems clear from the comments that any attempt to unpack "awesome" eventually fails on some example, while the general concept of perceived awesomeness doesn't.

Totally. Hence the link to fake utility functions. I could have made this clearer; you're not really supposed to unpack it, just use it as a rough pointer to your built-in moral intuitions. "oh that's all there is to it".

Drat, I wish I made it to the meetup where he presented it!

Don't worry. I basically just went over this post, then went over "joy in the merely good". We also discussed a bit, but the shield against useless philosophy provided by using "awesome" instead of "good" only lasted so long...

That said, it would have been nice to have you and your ideas there.

Comment author: Raemon 06 January 2013 10:48:06PM *  7 points [-]

My impression of this post (which may not be evident from my comments) went something like this:

1) Hah. That's a really funny opening.
2) Oh, this is really interesting and potentially useful, AND really funny, which is a really good combination for articles on the internet.
3) How would I apply this idea to my life?
4) *think about it a bit, and read some comments, think some more *
5) Wait a second, this idea actually isn't nearly as useful as it seemed at first.
5a) To the extent that it's true, it's only the first thesis statement of a lengthy examination of the actual issue
5b) The rest of the sequence this would need to herald to be truly useful is not guaranteed to be nearly as fun
5c) Upon reflection, while "awesome" does capture elements of "good" that would be obscured by "good's" baggage, "awesome" also fails to capture some of the intended value.
5d) This post is still useful, but not nearly as useful as my initial positive reaction indicates
5e) I am now dramatically more interested in the subject of how interesting this post seemed vs how interesting it actually was and what this says about the internet and people and ideas, than about the content of the article.

Comment author: [deleted] 07 January 2013 01:54:40AM 2 points [-]

To the extent that it's true, it's only the first thesis statement of a lengthy examination of the actual issue

The rest of the sequence this would need to herald to be truly useful is not guaranteed to be nearly as fun

Yep. It's intended as an introduction to the long and not-very-exciting metaethics sequence.

How would I apply this idea to my life?

Wait a second, this idea actually isn't nearly as useful as it seemed at first.

Yeah, it tends to melt down under examination (because "awesome" is a fake utility function, as per point 2). The point was not to give a bulletproof morality procedure, but to reframe the issue in a way that bypasses the usual confusion and cached thoughts.

So I wouldn't expect it to be useful to people who have their metaethical shit together (which you seem to, judging by the content of your rituals). It was explicitly aimed at people in my meetup who were confused and intimidated by the seeming mysteriousness of morality.

I am now dramatically more interested in the subject of how interesting this post seemed vs how interesting it actually was and what this says about the internet and people and ideas, than about the content of the article.

Yes the implications of this are very interesting.

Comment author: Kawoomba 06 January 2013 10:33:26PM 7 points [-]

Are structure, seriousness, watertightness and such trumped by fun and clarity? Is it safe to run with this? This could save a lot of work.

I DUNT KNOW LETS TRY

It's not necessarily that a highly upvoted post is deemed better on average, each individual still only casts one vote. The trichotomy of "downvote / no vote / upvote" doesn't provide nuanced feedback, and while you'd think it all equals out with a large number of votes, that's not so because of a) modifying visibility by means secondary to the content of the post, b) capturing readers' interest early to get them to vote in the first place and c) various distributions of opinions about your post all projecting onto potentially the same voting score (e.g. strong likes + strong dislikes equalling the score of general indifference), all three of which can occur independently of the post's real content.

The visibility was increased with the promotion of your post. While you did need initial upvotes to support that promotion, once achieved there's no stopping the chain reaction: people want to check out that highly rated top post; they expect to see good content and often automatically steelman or gloss over your weaker points. Then there's a kind of implied peer pressure similar to Asch's conformity experiments; you see a highly upvoted post, then monkey see monkey do kicks in, at least skewing your heuristics.

Lastly, people you keep invested until the end of your post are more likely to upvote than downvote, and your pixel art does a good job of capturing attention; the opening scene of a movie is crucial. The lower the entry barrier into a post, the more people will tag along. A lesson well internalized by television. Compare the vote counts of some AIXI-related posts and yours.

You are also called nyan_sandwich, have a good reputation on this site (AFAICT), yet provide us with some guilty pleasures (of an easy-to-parse comfort-food-for-thought post, talk about nomen est omen, nom nom). In short, you covered all your populist bases. They are all belong to you.

Comment author: [deleted] 07 January 2013 01:40:55AM *  2 points [-]

The visibility was increased with the promotion of your post.

I don't think it was promoted until it had >30, so maybe that helped a bit, but I have another visibility explanation:

I tend to stick around in my posts and obsessively reply to every comment and cherish every upvote, which means it gets a lot of visibility in the "recent comments" section. My posts tend to have lots of comments, and I think it's largely me trying to get the last word on everything. (until I get swamped and give up)

It is kind of odd that unpromoted posts in main have strictly less visibility than posts in discussion...

each individual still only casts one vote.

In short, you covered all your populist bases.

This is a good explanation. I get it now I think. Now the question is if we should be doing more of that?

EDIT: also, what does this mean:

talk about nomen est omen, nom nom

Comment author: DaFranker 07 January 2013 03:30:54PM 2 points [-]

EDIT: also, what does this mean:

talk about nomen est omen, nom nom

Basically, name causes behavior, as far as I can tell. Your nickname is indeed very aptronymic (?), suited to providing a quick and easy lunch for the hungry mind in a humorous or good-feeling manner.

Comment author: jooyous 06 January 2013 10:43:08PM 0 points [-]

I thought the question was "Does this post have value?" or "Can you quantify the extent to which these here upvotes correlate with value?" and not "How did I get upvotes?"

Comment author: Kawoomba 06 January 2013 10:53:13PM 1 point [-]

Why is this so heavily upvoted? Does that indicate actual value to LW, or just a majority of lurking septemberites captivated by cute pixel art?

Pointing out how the genesis of the upvotes is based on mechanisms only weakly related to the content value seems pertinent to answering the two questions in the quote.

Comment author: jooyous 06 January 2013 11:07:26PM *  0 points [-]

It's definitely pertinent, but it seems a bit one-sided? As an upvoter, I was trying really hard to confess my love for the whale and quantify it alongside my appreciation for fun and clarity. So I'm concerned that the above reads more like "it was probably all nyans and noms" as opposed to "nyans and noms were a factor."

Comment author: Kawoomba 06 January 2013 11:28:39PM 1 point [-]

The whale, the fun and the clarity (and the wardrobe, too) all belong on the same side of "structure, seriousness, watertightness" versus "fun and clarity" as per the dichotomy in my initial comment's quote. It would be weird if content hadn't been a factor, albeit one that's been swallowed whole by a vile white whale.

Comment author: [deleted] 07 January 2013 01:44:00AM 1 point [-]

wardrobe

swallowed whole by a vile white whale

I must confess I don't understand half of what you guys are referring to.

Comment author: Kawoomba 07 January 2013 07:23:51AM 1 point [-]

I must confess I don't understand half of what you guys are referring to.

You're not missing much, it's just some throwaway references that aren't central to the point.

"The whale, the fun and the clarity" has the same structure as the movie "The Lion, the Witch and the Wardrobe" and also starts with an animal.

swallowed whole by a vile white whale

Swallowed whole by the whale was supposed to say that the content factor was secondary to the "whale factor". The "swallowed" also alludes to the whole Jonah story (who lived in a whale's stomach); the whole / [wh]ile / whale was just infantile switching out of vowels, since interestingly all have a Hamming distance of just 1 (you only need to swap one letter).
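For the curious, the Hamming-distance aside is easy to verify; a quick sketch in Python (the `hamming` helper is just illustrative, not from any library):

```python
def hamming(a, b):
    # Hamming distance: number of positions at which two
    # equal-length strings differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# "whole", "while" and "whale" differ pairwise in exactly one letter.
words = ["whole", "while", "whale"]
pairs = [(a, b, hamming(a, b)) for i, a in enumerate(words) for b in words[i + 1:]]
print(pairs)  # every pair has distance 1
```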

talk about nomen est omen, nom nom

Your name contains a food item, and you provide guilty comfort food for thought with your post, so "nomen est omen" applies, i.e. your name is a sign of your purpose. The "nom nom" I just appended because it keeps with the food theme, and also because interestingly the "nom nom" is a partial anagram of "nomen est omen".

Yea ... not exactly essential to my arguments. Which in a way does support my points! :)

Comment author: Armok_GoB 07 January 2013 06:09:37PM 1 point [-]

So it had nothing to do with Moby Dick?

Comment author: Kawoomba 07 January 2013 06:13:56PM 1 point [-]

No, of course not!

Comment author: jooyous 06 January 2013 09:35:53PM *  3 points [-]

I think

  • a community in which people have a good idea but err on the side of not writing it up will tend toward a community in which people err on the side of not bothering to flesh out their ideas?

  • fun and clarity are good starting points for structure, seriousness and watertightness? Picking out the bugs feels like a useful exercise for me, having just read the bit of the sequence talking about the impact of language.

I thought it was fun and clear and I liked the cute whale, but also it made me think. ^_^

Comment author: BerryPick6 06 January 2013 09:19:42PM 5 points [-]

My own guess, based on nothing much other than a hunch: Morality as Awesomeness sounds simple and easy to do. It also sounds fun and light, unlike many of the other Ethical posts on LW. People have responded positively to this change of pace.

Comment author: Mitchell_Porter 06 January 2013 05:36:15PM *  34 points [-]

Morality needs a concept of awfulness as well as awesomeness. In the depths of hell, good things are not an option and therefore not a consideration, but there are still choices to be made.

Comment author: BillyOblivion 28 January 2013 05:42:14PM 0 points [-]

I'm sigquoting that if you don't mind.

Not that that means anything anymore, but I'm old school that way.

Comment author: RichardKennaway 20 January 2013 04:14:56PM 0 points [-]

Couldn't resist meming this.

Comment author: insufferablejake 19 January 2013 07:12:16PM 0 points [-]

good things are not an option and therefore not a consideration, but there are still choices to be made.

Awesome line. Up goes the vote.

Comment author: Decius 08 January 2013 05:47:22AM 0 points [-]

Morality needs a concept of awfulness as well as awesomeness.

Concur. Upvoted.

Comment author: MugaSofer 07 January 2013 08:23:29PM 3 points [-]

"Least not-awesome choice" is isomorphic to "most awesome choice".

Curiously, I like everything about your comment but this, its central point. Indeed, a concept of negative utility is probably useful; but this is not why.

Comment author: Decius 08 January 2013 05:51:57AM 10 points [-]

I think that 'awesome' loses a lot of value when you are forced to make the statement "Watching a lot of people die was the most awesome choice I had, because any intervention would have added victims without saving anyone."

I propose 'lame' and 'bummer' as antonyms for 'awesome'. Instead of trying to figure out the most awesome of a series of bad options, we can discuss the least lame.

Comment author: JamesAndrix 15 January 2013 09:35:46PM 3 points [-]

Sucks less sucks less.

Comment author: Decius 15 January 2013 11:12:56PM 1 point [-]

What's the adjectival form of suck?

Comment author: BerryPick6 15 January 2013 11:18:28PM *  1 point [-]

Sucky. As in: "That movie was really sucky."

ETA: It's even in the dictionary!

Comment author: Qiaochu_Yuan 15 January 2013 11:18:03PM 1 point [-]

Sucky. (It's kind of sucky, but oh well.)

Comment author: Decius 16 January 2013 12:19:10AM 0 points [-]

Is "suckier" awesomer than "lamer"?

Comment author: MugaSofer 08 January 2013 09:03:10AM -2 points [-]

Sounds like an excellent idea.

Comment author: Eliezer_Yudkowsky 07 January 2013 01:37:26PM 11 points [-]

In the depths of hell, good things are not an option and therefore not a consideration, but there are still choices to be made.

Gloomiest sentence of 2013 so far. Upvoted.

Comment author: deathpigeon 06 January 2013 11:11:46AM 3 points [-]

"Awesome" is implicitly consequentialist.

Not necessarily. If I tell a story of how I went white water rafting, and the person I'm talking to tells me that what I did was "awesome," is he or she really thinking of the consequences of my white water rafting? Probably not. Instead, he or she probably thought very little before declaring the white water rafting awesome. That's an inherent problem to using awesome with morality. Awesome is usually used without thought. If you determine morality based on awesomeness, then you are moralizing without thinking at all, which can often be a problem.

Comment author: Mass_Driver 06 January 2013 08:08:54PM 0 points [-]

To say that something's 'consequentialist' doesn't have to mean that it's literally forward-looking about each item under consideration. Like any other ethical theory, consequentialism can look back at an event and determine whether it was good/awesome. If you going white-water rafting was a good/awesome consequence, then your decision to go white-water rafting and the conditions of the universe that let you do so were good/awesome.

Comment author: deathpigeon 06 January 2013 11:23:45PM 0 points [-]

That misses my point. When people say awesome, they don't think back at the consequences or look forward for consequences. People say awesome without thinking about it AT ALL.

Comment author: Mass_Driver 07 January 2013 08:19:12PM 1 point [-]

OK, let's say you're right, and people say "awesome" without thinking at all. I imagine Nyan_Sandwich would view that as a feature of the word, rather than as a bug. The point of using "awesome" in moral discourse is precisely to bypass conscious thought (which a quick review of formal philosophy suggests is highly misleading) and access common-sense intuitions.

I think it's fair to be concerned that people are mistaken about what is awesome, in the sense that (a) they can't accurately predict ex ante what states of the world they will wind up approving of, or in the sense that (b) what you think is awesome significantly diverges from what I (and perhaps from what a supermajority of people) think is awesome, or in the sense that (c) it shouldn't matter what people approve of, because the 'right' thing to do is something else entirely that doesn't depend on what people approve of.

But merely to point out that saying "awesome" involves no conscious thought is not a very strong objection. Why should we always have to use conscious thought when we make moral judgments?

Comment author: deathpigeon 12 January 2013 12:46:20PM 1 point [-]

Those are both good points. I view it as a bug because I feel like too much ethical thought bypasses conscious thought to ill effect. This can range from people not thinking about the ethics of homosexuality because their pastor tells them it's a sin, to not thinking about the ethics of invading a country because people believe it is responsible for an attack of some kind, whether it is or not. However, Nyan_Sandwich's ethics of awesome does appear to bypass such problems, to an extent. It's hardly perfect, but it appears like it would do its job better than many other ethical systems in place today.

I should note that it wasn't ever intended to be a very strong objection. As a matter of fact, the original objection wasn't to the conclusions made, but to the path taken to get to them. If an argument for a conclusion I agree with is faulty, I usually attempt to point out the faults in the argument so that the argument can be better.

Also, I apologize for taking so long to respond. Life (and Minecraft playing) interfered with me checking LessWrong, and I'm not yet used to checking it regularly, as I'm new here.

Comment author: Mass_Driver 25 January 2013 09:57:42PM 1 point [-]

OK, so how else might we get people to gate-check the troublesome, philosophical, misleading parts of their moral intuitions that would have fewer undesirable side effects? I tend to agree with you that it's good when people pause to reflect on consequences -- but then when they evaluate those consequences I want them to just consult their gut feeling, as it were. Sooner or later the train of conscious reasoning had better dead-end in an intuitively held preference, or it's spectacularly unlikely to fulfill anyone's intuitively held preferences. (I, of course, intuitively prefer that such preferences be fulfilled.)

How do we prompt that kind of behavior? How can we get people to turn the logical brain on for consequentialism but off for normative ethics?

Comment author: deathpigeon 26 January 2013 05:04:39PM 0 points [-]

Am I to understand that you're suggesting that we apply awesomeness to the consequences, and not the actions? Because that would be different from what I thought was being implied by saying "'Awesome' is implicitly consequentialist." What I took that to mean is that, when one looks at an action, and decides whether or not it is awesome, the person is determining whether or not the consequences are something that they find desirable. That is distinct from looking at consequences and determining whether or not the consequences are awesome. That requires one to ALREADY be looking at things consequentially.

I think that, after thinking of things, when people use the term "awesome" they use it differently depending on how they view the world. If someone is already a consequentialist, that person will look at things consequentially when using the word awesome. If someone is already a deontologist, that person will look at the fulfillment of duties when using the word awesome. This is just a hypothesis, and I'm not very certain that it's true, at the moment.

I'm not entirely sure how to prompt that sort of behavior, to be honest.

Comment author: [deleted] 26 January 2013 05:30:01PM 1 point [-]

Because that would be different from what I thought was being implied by saying "'Awesome' is implicitly consequentialist."

I meant that we should be looking at the awesomeness of outcomes and not actions, and that "awesome" is more effective at prompting this behavior than "good". It looks like you get it, if I understand you correctly.

If someone is already a deontologist, that person will look at the fulfillment of duties when using the word awesome.

I find that somewhat implausible. If they are a hardcore explicit deontologist who, against the spirit of this article, has attempted to import their previous moral beliefs/confusions into their interpretation of "awesomism", then yeah. For random folks who intuitively lean towards deontology for "good", I think "awesome" is still going to be substantially more consequentialist. I would expect variation, though.

I wonder how you could test this. Maybe next year's survey could have some scenarios that ask for an awesomeness ranking, and some other scenarios that ask for a goodness ranking, and some more with a rightness ranking. Then we could see how people's intuitions vary with whether they claim to be deontologist or consequentialist, and with prompting wording. This could put the claims in the OP here on a more solid footing than "this works for me".

Comment author: deathpigeon 27 January 2013 04:05:52AM 0 points [-]

I meant that we should be looking at the awesomeness of outcomes and not actions, and that "awesome" is more effective at prompting this behavior than "good". It looks like you get it, if I understand you correctly.

Oh! That does make sense. I can see your point with that.

I find that somewhat implausible. If they are a hardcore explicit deontologist who,against the spirit of this article, has attempted to import their previous moral beliefs/confusions into their interpretation of "awesomism", then yeah. For random folks who intuitively lean towards deontology for "good", I think "awesome" is still going to be substantially more consequentialist.

Possibly. I'm honestly not sure which hypothesis would be more correct, at the moment. Testing it would probably be a good idea, if we had the resources to do it. (Do we have the resources for that? I wouldn't expect it, but weirder things have happened.)

Maybe next year's survey could have some scenarios that ask for an awesomeness ranking, and some other scenarios that ask for a goodness ranking, and some more with a rightness ranking. Then we could see how people's intuitions vary with whether they claim to be deontologist or consequentialist, and with prompting wording. This could put the claims in the OP here on a more solid footing than "this works for me".

I don't think that would work. People here tend to be more consequentialist than I've seen from people not from here, so we'd probably not be able to see as much of a difference. Plus, the people here are hardly what I'd call normal and are more homogeneous than a more standard set of people. To effectively test that, we'd have to conduct that survey with a more random group of people. I mean, that survey would work, but the sample should be different than the contributors of LessWrong.

Comment author: [deleted] 27 January 2013 06:14:05AM 1 point [-]

I don't think that would work. People here tend to be more consequentialist than I've seen from people not from here, so we'd probably not be able to see as much of a difference. Plus, the people here are hardly what I'd call normal and are more homogeneous than a more standard set of people. To effectively test that, we'd have to conduct that survey with a more random group of people. I mean, that survey would work, but the sample should be different than the contributors of LessWrong.

If the number of deontologists isn't big enough to power our inference, the stats should tell us this. There are some though.

And I think going outside LW is unnecessary. This essay is hardly aimed at people-in-general.

Comment author: Wei_Dai 06 January 2013 08:36:01AM 38 points [-]

You already know that you know how to compute "Awesomeness", and it doesn't feel like it has a mysterious essence that you need to study to discover.

I wish! Both metaethics and normative ethics are still mysterious and confusing to me (despite having read Eliezer's sequence). Here's a sample of problems I'm faced with, none of which seem to be helped by replacing the word "right" with "awesome": 1 2 3 4 5 6 7 8 9. I'm concerned this post might make a lot of people feel more clarity than they actually possess, and more importantly and unfortunately from my perspective, less inclined to look into the problems that continue to puzzle me.

Comment author: Decius 08 January 2013 05:54:55AM -2 points [-]

Is it more awesome to have a 1% chance of there being 100 identical copies of you running on a simulation, or a certainty of one copy of you running on a simulation? If you can't answer that, it's because you are ambivalent about the outcomes.

Comment author: Sarokrae 06 January 2013 07:09:12PM *  5 points [-]

I'd just like to say that although I don't have anything to add, there are all excellent questions and I don't think people are considering questions like these enough. (Didn't feel like an upvote was sufficient endorsement for everything in that comment!)

Comment author: Klao 06 January 2013 08:18:15AM 1 point [-]

Awesome summary, thanks!

Comment author: ctl 06 January 2013 07:04:24AM 4 points [-]

This may be a minor nit, but... is this forum collectively anti-orgasmium, now?

Because being orgasmium is by definition more pleasant than not being orgasmium. Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.[1] (Well, that's not actually true, since as a human you can make other people happier, and as orgasmium you presumably cannot. But it is at least on average a mistake to refuse to become orgasmium; I would argue that it is virtually always a mistake.)

[1] We're all hedonistic utilitarians, right?

Comment author: MugaSofer 06 January 2013 10:49:13PM 3 points [-]

is this forum collectively anti-orgasmium, now?

For as long as I've been here, which admittedly isn't all that long.

Because being orgasmium is by definition more pleasant than not being orgasmium. Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.[1]

[1] We're all hedonistic utilitarians, right?

Here's your problem.

Comment author: Alicorn 06 January 2013 07:09:10PM 12 points [-]

We're all hedonistic utilitarians, right?

No. Most of us are preferentists or similar. Some of us are not consequentialists at all.

Comment author: Raemon 06 January 2013 05:21:13PM 1 point [-]

I'm anti-orgasmium, but not necessarily anti-experience-machine. I'm approximately a median-preference utilitarian. (This is more descriptive than normative)

Comment author: [deleted] 06 January 2013 04:56:10PM 2 points [-]

We're all hedonistic utilitarians, right?

No thanks. Awesomeness is more complex than can be achieved with wireheading.

Comment author: Fadeway 06 January 2013 06:15:51PM *  -2 points [-]

I can't bring myself to see the creation of an awesomeness pill as the one problem of such huge complexity that even a superintelligent agent can't solve it.

Comment author: NancyLebovitz 11 January 2013 04:13:51PM 1 point [-]

My first thought was that an awesomeness pill would be a pill that makes ordinary experience awesome. Things fall down. Reliably. That's awesome!

And in fact, that's a major element of popular science writing, though I don't know how well it works.

Comment author: [deleted] 11 January 2013 08:33:02PM 2 points [-]

a pill that makes ordinary experience awesome

Psychedelic drugs already exist...

Comment author: earthwormchuck163 11 January 2013 08:41:05PM 4 points [-]

One time my roommate ate shrooms, and then he spent about 2 hours repeatedly knocking over an orange juice jug, and then picking it up again. It was bizarre. He said "this is the best thing ever" and was pretty sincere. It looked pretty silly from the outside though.

Comment author: [deleted] 06 January 2013 06:25:28PM *  3 points [-]

I have no doubt that you could make a pill that would convince someone that they were living an awesome life, complete with hallucinations of rocket-powered tyrannosaurs, and black leather lab coats.

The trouble is that merely hallucinating those things, or merely feeling awesome is not enough.

The average optimizer probably has no code for experiencing utility; it only feels the utility of actions under consideration. The concept of valuing (or even having) internal experience is particular to humans, and is in fact only one of the many things that we care about. Is there a good argument for why internal experience ought to be the only thing we care about? Why should we forget all the other things that we like and focus solely on internal experience (and possibly altruism)?

Comment author: Fadeway 06 January 2013 06:45:51PM *  0 points [-]

Can't I simulate everything I care about? And if I can, why would I care about what is going on outside of the simulation, any more than I care now about a hypothetical asteroid on which the "true" purpose of the universe is written? Hell, if I can delete the fact from my memory that my utility function is being deceived, I'd gladly do so - yes, it will bring some momentous negative utility, but it would be a teensy bit greatly offset by the gains, especially stretched over a huge amount of time.

Now that I think about it...if, without an awesomeness pill, my decision would be to go and do battle in an eternal Valhalla where I polish my skills and have fun, and an awesomeness pill brings me that, except maybe better in some way I wouldn't normally have thought of...what is exactly the problem here? The image of a brain with the utility slider moved to the max is disturbing, but I myself can avoid caring about that particular asteroid. An image of a universe tiled with brains storing infinite integers is disturbing; one of a universe tiled with humans riding rocket-powered tyrannosaurs is great - and yet, they're one and the same; we just can't intuitively penetrate the black box that is the brain storing the integer. I'd gladly tile the universe with awesome.

If I could take an awesomeness pill and be whisked off somewhere where my body would be taken care of indefinitely, leaving everything else as it is, maybe I would decline; probably I wouldn't. Luckily, once awesomeness pills become available, there probably won't be starving children, so that point seems moot.

[PS.] In any case, if my space fleet flies by some billboard saying that all this is an illusion, I'd probably smirk, I'd maybe blow it up with my rainbow lasers, and I'd definitely feel bad about all those other fellas whose space fleets are a bit less awesome and significantly more energy-consuming than mine (provided our AI is still limited by, at the very least, entropy; meaning limited in its ability to tile the world to infinity; if it can create the same amount of real giant robots as it can create awesome pills, it doesn't matter which option is taken), all just because they're bothered by silly billboards like this. If I'm allowed to have that knowledge and the resulting negative utility, that is.

[PPS.] I can't imagine how an awesomeness pill would max my sliders for self-improvement, accomplishment, etc without actually giving me the illusion of doing those things. As in, I can imagine feeling intense pleasure; I can't imagine feeling intense achievement separated from actually flying - or imagining that I'm flying - a spaceship - it wouldn't feel as fulfilling, and it makes no sense that an awesomeness pill would separate them if it's possible not to. It probably wouldn't have me go through the roundabout process of doing all the stuff, and it probably would max my sliders even if I can't imagine it, to an effect much different from the roundabout way, and by definition superior. As long as it doesn't modify my utility function (as long as I value flying space ships), I don't mind.

Comment author: MugaSofer 06 January 2013 10:33:05PM 0 points [-]

Hell, if I can delete the fact from my memory that my utility function is being deceived, I'd gladly do so - yes, it will bring some momentous negative utility, but it would be a teensy bit greatly offset by the gains, especially stretched over a huge amount of time.

I don't understand this. If your utility function is being deceived, then you don't value the true state of affairs, right? Unless you value "my future self feeling utility" as a terminal value, and this outweighs the value of everything else ...

Comment author: Fadeway 07 January 2013 03:42:49AM 0 points [-]

No, this is more about deleting a tiny discomfort - say, the fact that I know that all of it is an illusion; I attach a big value to my memory and especially disagree with sweeping changes to it, but I'll rely on the pill and thereby the AI to make the decision what shouldn't be deleted because doing so would interfere with the fulfillment of my terminal values and what can be deleted because it brings negative utility that isn't necessary.

Intellectually, I wouldn't care whether I'm the only drugged brain in a world where everyone is flying real spaceships. I probably can't fully deal with the intuition telling me I'm drugged though. It's not highly important - just a passing discomfort when I think about the particular topic (passing and tiny, unless there are starving children). Whether it's worth keeping around so I can feel in control and totally not drugged and imprisoned...I guess that's reliant on the circumstances.

Comment author: MugaSofer 07 January 2013 08:55:02PM 0 points [-]

So you're saying that your utility function is fine with the world-as-it-is, but you don't like the sensation of knowing you're in a vat. Fair enough.

Comment author: TheOtherDave 06 January 2013 09:29:17PM 1 point [-]

Luckily, once awesomeness pills become available, there probably won't be starving children, so that point seems moot.

This is a key assumption. Sure, if I assume that the universe is such that no choice I make affects the chances that a child I care about will starve -- and, more generally, if I assume that no choice I make affects the chances that people will gain good stuff or bad stuff -- then sure, why not wirehead? It's not like there's anything useful I could be doing instead.

But some people would, in that scenario, object to the state of the world. Some people actually want to be able to affect the total amount of good and bad stuff that people get.

And, sure, the rest of us could get together and lie to them (e.g., by creating a simulation in which they believe that's the case), though it's not entirely clear why we ought to. We could also alter them (e.g., by removing their desire to actually do good) but it's not clear why we ought to do that, either.

I can't imagine feeling intense achievement separated from actually flying - or imagining that I'm flying - a spaceship

Do you mean to distinguish this from believing that you have flown a spaceship?

Comment author: Fadeway 07 January 2013 04:03:20AM *  0 points [-]

Don't we have to do it (lying to people) because we value other people being happy? I'd rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, I'm not about to wirehead you though)

Do you mean to distinguish this from believing that you have flown a spaceship?

Yes. Thinking about simulating achievement got me confused about it. I can imagine intense pleasure or pain. I can't imagine intense achievement; if I just got the surge of warmth I normally get, it would feel wrong, removed from flying a spaceship. Yet, that doesn't mean that I don't have an achievement slider to max; it just means I can't imagine what maxing it indefinitely would feel like. Maxing the slider leading to hallucinations about performing activities related to achievement seems too roundabout - really, that's the only thing I can say; it feels like it won't work that way. Can the pill satisfy terminal values without making me think I satisfied them? I think this question shows that the sentence before it is just me being confused. Yet I can't imagine how an awesomeness pill would feel, hence I can't dispel this annoying confusion.

[EDIT] Maybe a pill that simply maxes the sliders would make me feel achievement, but without flying a spaceship, hence making it incomplete, hence forcing the AI to include a spaceship hallucinator. I think I am/was making it needlessly complicated. In any case, the general idea is that if we are all opposed to just feeling intense pleasure without all the other stuff we value, then a pill that gives us only intense pleasure is flawed and would not even be given as an option.

Comment author: TheOtherDave 07 January 2013 04:46:29AM 1 point [-]

Regarding the first bit... well, we have a few basic choices:
- Change the world so that reality makes them happy
- Change them so that reality makes them happy
- Lie to them about reality, so that they're happy
- Accept that they aren't happy

If I'm understanding your scenario properly, we don't want to do the first because it leaves more people worse off, and we don't want to do the last because it leaves us worse off. (Why our valuing other people being happy should be more important than their valuing actually helping people, I don't know, but I'll accept that it is.)

But why, on your view, ought we lie to them, rather than change them?

Comment author: Fadeway 07 January 2013 05:06:03AM *  0 points [-]

I attach negative utility to getting my utility function changed - I wouldn't change myself to maximize paperclips. I also attach negative utility to getting my memory modified - I don't like the normal decay that is happening even now, but far worse is getting a large swath of my memory wiped. I also dislike being fed false information, but that is by far the least negative of the three, provided no negative consequences arise from the false belief. Hence, I'd prefer being fed false information to having my memory modified, and either to being made to stop caring about other people altogether. There is an especially big gap between the last one and the former two.

Thanks for summarizing my argument. I guess I need to work on expressing myself so I don't force other people to work through my roundaboutness :)

Comment author: TheOtherDave 07 January 2013 05:30:42AM *  0 points [-]

Fair enough. If you have any insight into why your preferences rank in this way, I'd be interested, but I accept that they are what they are.

However, I'm now confused about your claim.

Are you saying that we ought to treat other people in accordance with your preferences of how to be treated (e.g., lied to in the present rather than having their values changed or their memories altered)? Or are you just talking about how you'd like us to treat you? Or are you assuming that other people have the same preferences you do?

Comment author: SaidAchmiz 06 January 2013 07:40:22AM 19 points [-]

[1] We're all hedonistic utilitarians, right?

... no?

http://lesswrong.com/lw/lb/notforthesakeofhappinessalone/

Comment author: ctl 06 January 2013 09:27:28AM -1 points [-]

Interesting stuff. Very interesting.

Do you buy it?

That article is arguing that it's all right to value things that aren't mental states over a net gain in mental utility.[1] If, for instance, you're given the choice between feeling like you've made lots of scientific discoveries and actually making just a few scientific discoveries, it's reasonable to prefer the latter.[2]

Well, that example doesn't sound all that ridiculous.

But the logic that Eliezer is using is exactly the same logic that drives somebody who's dying of a horrible disease to refuse antibiotics, because she wants to keep her body natural. And this choice is — well, it isn't wrong, choices can't be "wrong" — but it reflects a very fundamental sort of human bias. It's misguided.

And I think that Eliezer's argument is misguided, too. He can't stand the idea that scientific discovery is only an instrument to increase happiness, so he makes it a terminal value just because he can. This is less horrible than the hippie who thinks that maintaining her "naturalness" is more important than avoiding a painful death, but it's not much less dumb.

[1] Or a net gain in "happiness," if we don't mind using that word as a catchall for "whatever it is that makes good mental states good."

[2] In this discussion we are, of course, ignoring external effects altogether. And we're assuming that the person who gets to experience lots of scientific discoveries really is happier than the person who doesn't, otherwise there's nothing to debate. Let me note that in the real world, it is obviously possible to make yourself less happy by taking joy-inducing drugs — for instance if doing so devalues the rest of your life. This fact makes Eliezer's stance seem a lot more reasonable than it actually is.

Comment author: nshepperd 08 January 2013 06:02:09AM 2 points [-]

Choices can be wrong, and that one is. The hippie is simply mistaken about the kinds of differences that exist between "natural" and "non-natural" things, and about how much she would care about those differences if she knew more chemistry and physics. And presumably if she were less mistaken in her expectations of what happens "after you die".

As for relating this to Eliezer's argument, a few examples of wrong non-subjective-happiness values is no demonstration that subjective happiness is the only human terminal value. Especially given the introspective and experimental evidence that people care about certain things that aren't subjective happiness.

Comment author: Ghatanathoah 08 January 2013 05:41:07AM 7 points [-]

But the logic that Eliezer is using is exactly the same logic that drives somebody who's dying of a horrible disease to refuse antibiotics, because she wants to keep her body natural. And this choice is — well, it isn't wrong, choices can't be "wrong" — but it reflects a very fundamental sort of human bias. It's misguided.

Very well, let's back up Eliezer's argument with some hard evidence. Fortunately, lukeprog has already written a brief review of the neuroscience on this topic. The verdict? Eliezer is right. People value things other than happiness and pleasure. The idea that pleasant feelings are the sole good is an illusion created by the fact that the signals for wanting something and getting pleasure from it are commingled on the same neurons.

So no, Eliezer is not misguided. On the contrary, the evidence is on his side. People really do value more things than just happiness. If you want more evidence, consider this thought experiment Alonzo Fyfe cooked up:

Assume that you and somebody you care about (e.g., your child) are kidnapped by a mad scientist. This scientist gives you two options:

Option 1: Your child will be taken away and tortured. However, you will be made to believe that your child is living a happy and healthy life. You will receive regular reports and even correspondence explaining how great your child’s life is. Except, they will all be fake. In fact, we will take your child to another location and spend every day peeling off his skin while soaking him in a vat of salt water, among other things.

Option 2: Your child will be taken away, provided with paid medical insurance, an endowment to complete an education, will be hired into a good job, and will be caused to live a healthy and happy life. However, you will be made to believe that your child is suffering excruciating torture. You will be able to hear what you think are your child’s screams coming down the hallway. We will show you video of the torture. It will all be fake, of course, but you will be convinced it is real.

Of course, after you make your choice, we will make you forget that you even had these options presented to you.

What do you choose?

Now, we are not going to kidnap people and make them choose. However, both theories need to explain the fact that the vast majority of parents, for example, report that, in such a situation, they would choose Option 2.

Happiness theory seems to suggest that the agent should choose Option 1. After all, the agent will be happier receiving news (that she believes) that says that her child is living a happy and healthy life. So, if happiness is what she is after, and Option 1 delivers more happiness, then Option 1 is the rational choice.

Why do people choose Option 2?

Because happiness theory is wrong. In fact, people do not choose happiness. They choose “making or keeping true the propositions that are the objects of our desires.” In this case, the desire in question is the desire that one’s child be healthy and happy. A person with a desire that “my child is healthy and happy” will select that option that will make or keep the proposition, “my child is healthy and happy” true. That is Option 2.

Comment author: MugaSofer 08 January 2013 11:24:28AM -1 points [-]

Damn but that's a good example. Is it too long to submit to the Rationality Quotes thread?

Comment author: Nisan 06 January 2013 05:59:07PM *  6 points [-]

You can argue that having values other than hedonistic utility is mistaken in certain cases. But that doesn't imply that it's mistaken in all cases.

Comment author: SaidAchmiz 06 January 2013 10:16:48AM *  0 points [-]

I surmise from your comments that you may not be aware that Eliezer's written quite a bit on this matter; http://wiki.lesswrong.com/wiki/Complexityofvalue is a good summary/index. There's a lot of stuff in there that is relevant to your points.

However, you asked me what I think, so here it is...

The wording of your first post in this thread seems telling. You say that "Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop."

Do you want to become orgasmium?

Perhaps you do. In that case, I direct the question to myself, and my answer is no: I don't want to become orgasmium.

That having been established, what could it mean to say that my judgment is a "mistake"? That seems to be a category error. One can't be mistaken in wanting something. One can be mistaken about wanting something ("I thought I wanted X, but upon reflection and consideration of my mental state, it turns out I actually don't want X"), or one can be mistaken about some property of the thing in question, which affects the preference ("I thought I wanted X, but then I found out more about X, and now I don't want X"); but if you're aware of all relevant facts about the way the world is, and you're not mistaken about what your own mental states are, and you still want something... labeling that a "mistake" seems simply meaningless.

On to your analogy:

If someone wants to "keep her body natural", then conditional on that even being a coherent desire[1], what's wrong with it? If it harms other people somehow, then that's a problem... otherwise, I see no issue. I don't think it makes this person "kind of dumb" unless you mean that she's actually got other values that are being harmed by this value, or is being irrational in some other ways; but values in and of themselves cannot be irrational.

[Eliezer] can't stand the idea that scientific discovery is only an instrument to increase happiness, so he makes it a terminal value just because he can.

This construal is incorrect. Say rather: Eliezer does not agree that scientific discovery is only an instrument to increase happiness. Eliezer isn't making scientific discovery a terminal value, it is a terminal value for him. Terminal values are given.

In this discussion we are, of course, ignoring external effects altogether.

Why are we doing that...? If it's only about happiness, then external effects should be irrelevant. You shouldn't need to ignore them; they shouldn't affect your point.

[1] Coherence matters: the difference between your hypothetical hippie and Eliezer the potential-scientific-discoverer is that the hippie, upon reflection, would realize (or so we would like to hope) that "natural" is not a very meaningful category, that her body is almost certainly already "not natural" in at least some important sense, and that "keeping her body natural" is just not a state of affairs that can be described in any consistent and intuitively correct way, much less one that can be implemented. That, if anything, is what makes her preference "dumb". There are no analogous failures of reasoning behind Eliezer's preference to actually discover things instead of just pretend-discovering, or my preference to not become orgasmium.

Comment author: ctl 06 January 2013 10:57:07AM *  0 points [-]

That having been established, what could it mean to say that my judgment is a "mistake"? That seems to be a category error. One can't be mistaken in wanting something.

I have never used the word "mistake" by itself. I did say that refusing to become orgasmium is a hedonistic utilitarian mistake, which is mathematically true, unless you disagree with me on the definition of "hedonistic utilitarian mistake" (= an action which demonstrably results in less hedonic utility than some other action) or of "orgasmium" (= a state of maximum personal hedonic utility).[1]
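That definition can be written out formally. Using $U_h(a)$ for the hedonic utility resulting from action $a$ (a notation introduced here for illustration, not from the thread):

```latex
a \text{ is a hedonistic utilitarian mistake} \iff \exists\, a' : U_h(a') > U_h(a)
```

Since orgasmium is by stipulation the state of maximum personal hedonic utility, refusing it satisfies the right-hand side trivially, which is all the claim amounts to.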

I point this out because I think you are quite right: it doesn't make sense to tell somebody that they are mistaken in "wanting" something.

Indeed, I never argued that the dying hippie was mistaken. In fact I made exactly the same point that you're making, when I said:

And [the hippie's] choice is — well, it isn't wrong, choices can't be "wrong"

What I said was that she is misguided.

The argument I was trying to make was, look, this hippie is using some suspect reasoning to make her decisions, and Eliezer's reasoning looks a lot like hers, so we should doubt Eliezer's conclusions. There are two perfectly reasonable ways to refute this argument: you can (1) deny that the hippie's reasoning is suspect, or (2) deny that Eliezer's reasoning is similar to hers.

These are both perfectly fine things to do, since I never elaborated on either point. (You seem to be trying option 1.) My comment can only possibly convince people who feel instinctively that both of these points are true.

All that said, I think that I am meaningfully right — in the sense that, if we debated this forever, we would both end up much closer to my (current) view than to your (current) view. Maybe I'll write an article about this stuff and see if I can make my case more strongly.

[1] Please note that I am ignoring the external effects of becoming orgasmium. If we take those into account, my statement stops being mathematically true.

Comment author: SaidAchmiz 06 January 2013 06:55:11PM *  1 point [-]

I don't think those are the only two ways to refute the argument. I can think of at least two more:

(3) Deny the third step of the argument's structure — the "so we should doubt Eliezer's conclusions" part. Analogical reasoning applied to surface features of arguments is not reliable. There's really no substitute for actually examining an argument.

(4) Disagree that construing the hippie's position as constituting any sort of "reasoning" that may or may not be "suspect" is a meaningful description of what's going on in your hypothetical (or at least, the interesting aspect of what's going on, the part we're concerned with). The point I was making is this: what's relevant in that scenario is that the hippie has "keeping her body natural" as a terminal value. If that's a coherent value, then the rest of the reasoning ("and therefore I shouldn't take this pill") is trivial and of no interest to us. Now it may not be a coherent value, as I said; but if it is — well, arguing with terminal values is not a matter of poking holes in someone's logic. Terminal values are given.

As for your other points:

It's true, you didn't say "mistake" on its own. What I am wondering is this: ok, refusing to become orgasmium fails to satisfy the mathematical requirements of hedonistic utilitarianism.

But why should anyone care about that?

I don't mean this as a general, out-of-hand dismissal; I am asking, specifically, why such a requirement would override a person's desires:

Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don't want to become orgasmium.
Person A: But if you want to feel as much pleasure as possible, then you should become orgasmium!
Person B: But... I don't want to become orgasmium.

I see Person B's position as being the final word on the matter (especially if, as you say, we're ignoring external consequences). Person A may be entirely right — but so what? Why should that affect Person B's judgments? Why should the mathematical requirements behind Person A's framework have any relevance to Person B's decisions? In other words, why should we be hedonistic utilitarians, if we don't want to be?

(If we imagine the above argument continuing, it would develop that Person B doesn't want to feel as much pleasure as possible; or, at the least, wants other things too, and even the pleasure thing he wants only given certain conditions; in other words, we'd arrive at conclusions along the lines outlined in the "Complexity of value" wiki entry.)

(As an aside, I'm still not sure why you're ignoring external effects in your arguments.)