Alicorn comments on Exterminating life is rational - Less Wrong

Post author: PhilGoetz 06 August 2009 04:17PM




Comment author: Alicorn 07 August 2009 09:55:54PM 2 points

Does Omega's utility doubling cover the contents of the as-yet-untouched deck? It seems to me that it'd be pretty spiffy re: my utility function for the deck to have a reduced chance of killing me.

Comment author: randallsquared 09 August 2009 12:17:22AM 2 points

At first I thought this was pretty funny, but even if you were joking, it may actually map onto the extinction problem, since each new technology also has some chance of making extinction less likely. As an example, nuclear technology had some probability of killing everyone, but also some probability of making Orion ships possible, allowing diaspora.
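To make that intuition concrete, here is a minimal Monte Carlo sketch (not from the thread; the parameters p_kill, p_mitigate, and mitigation are hypothetical) of a deck where each surviving draw doubles utility but may also cut the kill probability of future draws:

```python
import random

def simulate_draws(n_draws=20, p_kill=0.1, p_mitigate=0.1,
                   mitigation=0.5, trials=50_000):
    """Each draw either kills the drawer (utility 0) or doubles
    utility; surviving a draw may also reduce the kill probability
    of all later draws (the 'Orion ships' effect).

    Returns the average utility over all trials.
    """
    total = 0.0
    for _ in range(trials):
        utility = 1.0
        p = p_kill
        for _ in range(n_draws):
            if random.random() < p:
                utility = 0.0          # drew a skull: dead
                break
            utility *= 2.0             # drew a star: utility doubles
            if random.random() < p_mitigate:
                p *= mitigation        # new tech cuts future risk
        total += utility
    return total / trials

if __name__ == "__main__":
    print("no mitigation:  ", simulate_draws(p_mitigate=0.0))
    print("with mitigation:", simulate_draws(p_mitigate=0.1))
```

Whether continued drawing looks better with mitigation than without depends entirely on the assumed parameters, which is roughly the point of the analogy.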

Comment author: Alicorn 11 August 2009 07:09:42PM -1 points

While I'm gaming the system, my lifetime utility function (if I have one) could probably be doubled by giving me a reasonable suite of superpowers, some of which would let me identify the rest of the cards in the deck (X-ray vision, precog powers, etc.) or protect me from whatever mechanism the skull cards use to kill me (immunity to electricity, or just straight-up invulnerability). Is it a stipulation of the scenario that nothing Omega does to tweak the utility function upon drawing a star affects the risks of drawing from the deck, directly or indirectly?

Comment author: orthonormal 11 August 2009 07:23:26PM 2 points

It should be, especially since the existential-risk problems that we're trying to model aren't known to come with superpowers or other such escape hatches.