mouseking comments on Open thread, 11-17 August 2014 - Less Wrong Discussion

Post author: David_Gerard 11 August 2014 10:12AM

Comment author: mouseking 15 August 2014 01:29:28AM 2 points

I've been noticing a theme of utilitarianism on this site -- can anyone explain this? More specifically: how did you guys rationalize a utilitarian philosophy over an existential, nihilistic, or hedonistic one?

Comment author: Dahlen 15 August 2014 06:31:26PM 5 points

To put it as simply as I can: LessWrongers like to quantify stuff. More specifically, since this website started off as the brainchild of an AI researcher, the prevalent intellectual trends here are those with applicability in AI research. Computers work easily with quantifiable data. As such, if you want to instill human morality into an AI, chances are you'll at least consider conceptualizing morality in utilitarian terms.

Comment author: RichardKennaway 15 August 2014 01:00:06PM 4 points

The confluence of a number of ideas.

Cox's theorem shows that degrees of belief can be expressed as probabilities.

The VNM theorem shows that preferences can be expressed as numbers, usually called utilities, unique up to a positive affine transformation (a choice of scale and zero point).

Consequentialism, the idea that actions are to be judged by their consequences, is pretty much taken as axiomatic.

Combining these gives the conclusion that the rational action to take in any situation is the one that maximises the resulting expected utility.
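
To make that concrete, here is a minimal sketch of the calculation in Python. The actions, probabilities, and utilities are all invented for illustration; the theorems only guarantee that such numbers exist, they don't supply them.

```python
# Sketch of expected-utility maximization over a menu of actions.
# All numbers are made up; Cox/VNM only say numbers like these exist.

actions = {
    # action -> list of (probability, utility) pairs over outcomes
    "take umbrella":  [(0.3, 8.0), (0.7, 6.0)],   # rain / no rain
    "leave umbrella": [(0.3, 0.0), (0.7, 10.0)],
}

def expected_utility(lottery):
    """Probability-weighted average of the outcome utilities."""
    return sum(p * u for p, u in lottery)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "leave umbrella": EU 7.0 beats "take umbrella" at 6.6
```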

Your morality is your utility function: your beliefs about how people should live are preferences about how they should live.

Add the idea of actually being convinced by arguments (except arguments of the form "this conclusion is absurd, therefore there is likely to be something wrong with the argument", which are merely the absurdity heuristic) and you get LessWrong utilitarianism.

Comment author: blacktrance 15 August 2014 11:10:50PM 1 point

Utilitarianism is more than just maximizing expected utility, it's maximizing the world's expected utility. Rationality, in the economic or decision-theoretic sense, is not synonymous with utilitarianism.

Comment author: RichardKennaway 16 August 2014 08:03:04AM 1 point

That is a good point, but one I think is under-appreciated on LessWrong. The reasoning here often seems to go "rationality, therefore OMG dead babies!!" There has been discussion about how to define "the world's expected utility", but it has never reached a conclusion.

Comment author: blacktrance 16 August 2014 08:54:53AM 0 points

In addition to the problem of defining "the world's expected utility", there is also the separate question of whether it (whatever it is) should be maximized.

Comment author: Vulture 17 August 2014 05:13:27PM 0 points

> Utilitarianism is more than just maximizing expected utility, it's maximizing the world's expected utility.

I think this is probably literally correct, but misleading. "Maximizing X's utility" is generally taken to mean "maximize your own utility function over X". So in that sense you are quite correct. But if by "maximizing the world's utility" you mean something more like "maximizing the aggregate utility of everyone in the world", then what you say is only true of those who adhere to some kind of preference utilitarianism. Other utilitarians would not necessarily agree.

Comment author: blacktrance 17 August 2014 08:52:21PM 0 points

Hedonic utilitarians would also say that they want to maximize the aggregate utility of everyone in the world; they would just have a different conception of what that entails. Utilitarianism necessarily means maximizing the aggregate utility of everyone in the world, though different utilitarians can disagree about what that means - but they'd all agree that maximizing one's own utility is contrary to utilitarianism.

Comment author: Vulture 18 August 2014 12:34:58AM 0 points

Anyone who believes that "maximizing one's own utility is contrary to utilitarianism" is fundamentally confused as to the standard meaning of at least one of those terms. Not knowing which one, however, I'm not sure what I can say to make the matter more clear.

Comment author: blacktrance 18 August 2014 01:09:18AM 0 points

Maximizing one's own utility is practical rationality. Maximizing the world's aggregate utility is utilitarianism. The two need not be the same, and in fact can conflict. For example, you may prefer to buy a cone of ice cream, but world utility would be served more effectively if you donated that money to charity instead. Buying the ice cream would be the rational, own-utility-maximizing thing to do, and donating to charity would be the utilitarian thing to do.
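
A toy version of that conflict, with made-up payoffs (the names and numbers are mine, purely for illustration); the point is only that the two criteria can rank the same actions differently:

```python
# Invented payoffs for the ice-cream-vs-charity example.

# utility of each action to each person (you plus two strangers)
payoffs = {
    "buy ice cream":     {"you": 5, "stranger_a": 0, "stranger_b": 0},
    "donate to charity": {"you": 1, "stranger_a": 4, "stranger_b": 4},
}

own_best = max(payoffs, key=lambda a: payoffs[a]["you"])
aggregate_best = max(payoffs, key=lambda a: sum(payoffs[a].values()))

print(own_best)        # "buy ice cream":     own utility 5 > 1
print(aggregate_best)  # "donate to charity": total utility 9 > 5
```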

Comment author: RichardKennaway 18 August 2014 06:30:41AM 0 points

However, if utilitarianism is your ethics, the world's utility is your utility, and the distinction collapses. A utilitarian will never prefer to buy that ice cream.

Comment author: shminux 18 August 2014 06:39:34AM 0 points

It's the old System 1 (want ice cream!) vs. System 2 (want world peace!) friction again.

Comment author: Ef_Re 15 August 2014 01:58:48AM -1 points

To the extent that lesswrong has an official ethical system, that system is definitely not utilitarianism.

Comment author: James_Miller 15 August 2014 02:36:58AM 1 point

I don't agree. LW takes a microeconomic viewpoint on decision theory, and this implicitly involves maximizing some weighted average of everyone's utility functions.
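
One way to write that down is a weighted-sum social welfare function, W = sum_i w_i * u_i. A minimal sketch, with invented weights and utilities:

```python
# Weighted-average social welfare: W = sum_i w_i * u_i.
# Weights and utilities below are made up for illustration.

weights   = [0.5, 0.3, 0.2]     # one weight per person, summing to 1
utilities = [10.0, 4.0, -2.0]   # each person's utility for some outcome

social_welfare = sum(w * u for w, u in zip(weights, utilities))
print(social_welfare)  # 0.5*10 + 0.3*4 + 0.2*(-2) = 5.8
```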

Comment author: Vulture 17 August 2014 05:22:45PM 0 points

At some point we really need to come up with more words for this stuff so that the whole consequentialism/hedonic-utilitarianism/etc. confusion doesn't keep coming up.

Comment author: 2ZctE 15 August 2014 05:06:41PM 0 points

To the extent that lesswrong has an official ethical system, that system is utilitarianism with "the fulfillment of complex human values" as a suggested maximand rather than hedons.

Comment author: Ef_Re 16 August 2014 06:35:30PM 0 points

That would normally be referred to as consequentialism, not utilitarianism.

Comment author: 2ZctE 18 August 2014 03:08:25AM 0 points

Huh, I'm not sure, actually. I had been thinking of consequentialism as the general class of ethical theories based on caring about the state of the world, and of utilitarianism as the special case where you try to maximize some definition of utility (which could be human value-fulfillment, if you tried to reason about it quantitatively). If my usages are unusual, I more or less inherited them from the Consequentialism FAQ, I think.

Comment author: Ef_Re 22 August 2014 11:46:07PM 0 points

If you mean Yvain's, while his stuff is in general excellent, I recommend learning about philosophical nomenclature from actual philosophers, not medics.

Comment author: ChristianKl 15 August 2014 11:40:36AM 0 points

In general, this site focuses on the Friendly AI problem; a nihilistic or hedonistic AI might not be friendly to humans. The notion of an existentialist AI seems to be largely unexplored, as far as I know.