
handoflixue comments on Is Equality Really about Diminishing Marginal Utility? - Less Wrong Discussion

Post author: Ghatanathoah | 04 December 2012 11:03PM




Comment author: handoflixue | 05 December 2012 01:21:46AM | 5 points

Heh, relativistic effects on morality.

To elaborate: Newtonian physics works within our "default" range of experience. If you're going at 99.99% of c, or dealing with electrons, or a Dyson Sphere, you'll need new models. For the most part, our models of reality have certain "thresholds", and you have to use different models on different sides of each threshold.

You see this in simple transitions like liquid <-> solid, and you see it pretty much any time you feed in incredibly small or large numbers. XKCD captures this nicely :)

So... the point? We shouldn't expect our morality to scale past a certain range of situations either, and in fact it is completely reasonable to assume that there is NO model that covers both normal human utilities AND utility monsters.

Comment author: [deleted] | 07 December 2012 07:17:40AM | 1 point

We shouldn't expect our morality to scale past a certain situation

Indeed, it would be a little weird if it did, though I suppose that depends on what specific set of behaviors and values one chooses to draw the morality box around, too -- I'm kind of wondering if "morality" is a red herring here, although it's hard to find the words. In local lingo, I'm sort of thinking the contrast "pebblesorters vs. moral agents" might be about as misleading as "p-zombies vs. conscious humans."

Comment author: Ghatanathoah | 05 December 2012 01:28:56AM | 0 points

That's a really great point. Do you think that attempts to create some sort of pluralistic consequentialism that covers these huge situations more effectively, like the one I am making, are a worthwhile effort? Or do you think the odds that no such model exists are high enough that the effort is probably wasted?

Comment author: Pentashagon | 05 December 2012 09:51:19AM | 2 points

It's worth pointing out that relativity gives the right answers at 0.01% of light speed too; it just takes more computation to get them. A more complex model of morality that gives the same answers as our currently held morals to our simple, everyday questions would likewise be quite desirable.
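Pentashagon's claim is easy to check numerically. The sketch below (my illustration, not part of the original thread) computes the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2): at 0.01% of c it is indistinguishable from the Newtonian value of 1 for most practical purposes, while at 99.99% of c the Newtonian model fails badly.

```python
import math

def lorentz_factor(beta):
    """Time-dilation factor gamma for speed v = beta * c (0 <= beta < 1)."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# At 0.01% of light speed, relativity agrees with Newton
# to within about 5 parts in a billion.
print(lorentz_factor(1e-4))    # ~1.000000005

# At 99.99% of light speed, clocks run ~70x slow -- far outside
# the Newtonian model's range of validity.
print(lorentz_factor(0.9999))  # ~70.7
```

The point mirrors the moral analogy: the more general model doesn't contradict the simpler one inside the simpler one's domain; it just costs more to evaluate.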