CarlShulman comments on Bayesian Adjustment Does Not Defeat Existential Risk Charity - Less Wrong

43 Post author: steven0461 17 March 2013 08:50AM




Comment author: CarlShulman 25 April 2013 07:07:42PM 4 points

Yes, I think you are misusing the term. It's the utility that's bounded, not the inputs. Say that U = 1 - (1/(X^2)) for X > 0, with U = 0 when X = 0, where X is the quantity of some good. Then utility is bounded between 0 and 1, but increasing X from 3^^^3 to 3^^^3+1, or to 4^^^^4, will still (exceedingly slightly) increase utility. The agent just won't take risks for small increases in utility. However, terms in the bounded utility function can give weight to large numbers, to relative achievement, to effort, and to all the other things mentioned in the discussion I linked, so that one does take risks for those.
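A minimal sketch of this point, using exact rational arithmetic because the utility differences near the bound are far smaller than floating-point precision; the magnitude 10^6 is just an illustrative stand-in for numbers like 3^^^3:

```python
from fractions import Fraction

def u(x):
    """Bounded utility: U(X) = 1 - 1/X^2 for X > 0, and U(0) = 0."""
    if x == 0:
        return Fraction(0)
    return 1 - Fraction(1, x * x)

# Utility never reaches 1, yet every increment of X still increases it.
big = 10 ** 6  # stand-in for an astronomically large quantity
assert u(big) < u(big + 1) < 1
```

The increment from `u(big)` to `u(big + 1)` is positive but on the order of 2/X^3, which is why the bound tames risk-taking without making large inputs literally worthless.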

Comment author: G0W51 22 March 2015 03:48:22PM 0 points

Bounded utility functions still seem to cause problems when uncertainty is involved. For example, consider the aforementioned utility function U(n) = 1 - (1/(n^2)), where n is the number of agents living good lives. Under this function, a certain outcome of 10 agents living good lives has utility 1 - (1/(10^2)) = 0.99, while a gamble with a 9 in 10 chance of 3^^^3 agents living good lives and a 1 in 10 chance of no agents living good lives has expected utility of roughly 0.9 * 1 + 0.1 * 0 = 0.9. Thus, in this situation the agent would be willing to kill (3^^^3) - 10 agents in order to prevent a 0.1 chance of everyone dying, which doesn't seem right at all. You could modify the utility function, but I think this issue would still exist to some extent.
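The comparison above can be checked directly. A small sketch, again with exact rationals and 10^100 as an illustrative stand-in for 3^^^3 (the conclusion is the same for any sufficiently large number):

```python
from fractions import Fraction

def u(n):
    """Bounded utility: U(n) = 1 - 1/n^2 for n > 0, and U(0) = 0."""
    if n == 0:
        return Fraction(0)
    return 1 - Fraction(1, n * n)

certain = u(10)  # sure outcome of 10 agents: 99/100
huge = 10 ** 100  # stand-in for 3^^^3
gamble = Fraction(9, 10) * u(huge) + Fraction(1, 10) * u(0)

# The agent prefers a certain 10 lives over the near-1 gamble.
assert certain > gamble
```

Because `u` is capped at 1, the gamble's expected utility can never exceed 0.9 no matter how large `huge` gets, which is exactly the counterintuitive trade-off the comment describes.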

Comment author: MugaSofer 26 April 2013 11:22:42AM -1 points

Ah, OK, I was thinking of a bounded utility function as one with a "cutoff point", yes. You're absolutely right.