Squark comments on Expected utility, unlosing agents, and Pascal's mugging - Less Wrong Discussion

19 Post author: Stuart_Armstrong 28 July 2014 06:05PM

Comment author: Squark 18 August 2014 06:26:41PM 0 points

It seems to me that any reasoning, whether with bounded or unbounded utility, will support avoiding unlikely civilizational threats at the expense of a small number of lives for sufficiently large civilizations. I don't see anything wrong with that (in particular, I don't think it leads to mass murder, since that would carry a significant utility cost).

There is a different, related problem: if the utility function saturates around (say) 10^10 people and our civilization has 10^20 people, then the death of everyone except some 10^15 people becomes acceptable in order to prevent a much-lower-probability event that kills everyone except some 10^8 people. However, this effect disappears once we sum over all possible universes weighted by the Solomonoff measure, as we should (as done here). Effectively, this normalizes the utility function to saturate at the actual capacity of the multiverse.
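To make the problem concrete, here is a minimal sketch of the comparison described above. The saturating utility function `u` is a stand-in (a simple `min` cap, which the comment does not specify); the population sizes and the probability `p` are the comment's illustrative numbers, and the code just shows that any positive probability of falling below the saturation point outweighs a certain loss that stays above it:

```python
def u(n, cap=1e10):
    # Bounded utility: linear in population up to the saturation
    # point, flat beyond it. The exact shape is an assumption;
    # the comment only says the utility "saturates" around the cap.
    return min(n, cap)

# Option A: accept a catastrophe for certain, leaving 10^15 survivors.
eu_accept = u(1e15)  # saturated: 10^15 >> cap, so utility = cap

# Option B: refuse, keeping 10^20 people but risking (with small
# probability p) an event that leaves only 10^8 survivors.
p = 1e-6
eu_refuse = (1 - p) * u(1e20) + p * u(1e8)

# Because 10^15 and 10^20 both saturate the utility, the certain
# catastrophe is preferred for ANY p > 0: only the 10^8 outcome
# falls below the cap and registers as a loss.
assert eu_accept > eu_refuse
```

Summing over universes weighted by the Solomonoff measure, as the comment suggests, amounts to raising the effective saturation point to the capacity of the whole multiverse, so no single universe's population sits entirely in the flat region and the perverse trade above goes away.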