J_Thomas2 comments on Anthropomorphic Optimism - Less Wrong

Post author: Eliezer_Yudkowsky 04 August 2008 08:17PM

Comment author: J_Thomas2 05 August 2008 10:16:33PM 0 points

There's always a nonzero chance that any action will cause an infinitely bad outcome. Also an infinitely good one.

How, then, can you put error bounds on your expected-utility estimate?
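One way to make that worry precise (a minimal sketch in notation of my own choosing, not anything from the original comment): if an action assigns probabilities \(p > 0\) and \(q > 0\) to infinitely good and infinitely bad outcomes, its expected utility is

\[
\mathbb{E}[U] \;=\; p \cdot (+\infty) \;+\; q \cdot (-\infty) \;+\; \sum_i p_i\, u_i ,
\]

which contains the indeterminate form \(\infty - \infty\) and so is undefined. Even if every utility is finite but the set of possible utilities is unbounded, the variance of \(U\) can diverge, in which case no finite error bound brackets the estimate.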

If you say "I want to do the bestest for the mostest, so that's what I'll try to do," that's a fine goal. But when you say "The reason I killed 500 million people was that, according to my calculations, it would do more good than harm, but I have absolutely no way to tell how correct my calculations are," then maybe something is wrong?