Tom_McCabe2 comments on Anthropomorphic Optimism - Less Wrong

Post author: Eliezer_Yudkowsky 04 August 2008 08:17PM





Comment author: Tom_McCabe2 05 August 2008 07:52:04AM 0 points

"However, those objective values probably differ quite a lot from most of what most human beings find important in their lives; for example our obsessions with sex, romance and child-rearing probably aren't in there."

Several years ago, I was attracted to pure libertarianism as a possible objective morality for precisely this reason. The idea that, e.g., chocolate tastes good can't possibly be represented directly in an objective morality, since chocolate is unique to Earth and an objective morality must apply everywhere. However, the idea of immorality stemming from violating another person's liberty seemed simple enough to arise spontaneously from the mathematics of utility functions.

It turns out that you *do* get a morality out of the mathematics of utility functions (sort of), in the sense that utility maximizers will tend toward certain actions and away from others unless some special conditions are met. Unfortunately, these actions aren't very Friendly; they involve things like turning the universe into computronium to solve the Riemann Hypothesis (see http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf for some examples). If libertarianism really *were* a universal morality, Friendly AI would be much simpler, as we could fail on the first try without the UFAI killing us all.
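The "utility maximizers tend toward certain actions" point can be sketched numerically. This is a toy model of my own devising, not anything from the comment or the linked Omohundro paper; the two-step structure and all payoff numbers are assumptions chosen for illustration. The idea it shows: because resources amplify later goal progress, agents with quite different terminal goals share the same preferred first move.

```python
# Toy illustration (hypothetical numbers, not from the linked paper) of an
# instrumentally convergent action: whatever the terminal goal is,
# acquiring resources first dominates, because resources multiply how much
# goal progress the next step can buy.

ACTIONS = {
    # action: (resources gained now, direct goal progress now)
    "acquire_resources": (10, 0),
    "work_on_goal": (0, 5),
    "do_nothing": (0, 0),
}

def two_step_value(action):
    """Crude two-step horizon: after the chosen action, the agent spends
    one step working on its goal, and extra resources add to that output."""
    resources, progress = ACTIONS[action]
    later_progress = 5 + resources  # assumed payoff: resources boost later goal work
    return progress + later_progress

best = max(ACTIONS, key=two_step_value)
print(best, two_step_value(best))  # acquire_resources 15
```

Note that the ranking is independent of what "goal progress" measures — proving theorems or anything else — which is the sense in which the tendency falls out of the mathematics rather than out of any particular goal.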