Peter_de_Blanc comments on The Pascal's Wager Fallacy Fallacy - Less Wrong

23 [deleted] 18 March 2009 12:30AM


Comment author: Carl_Shulman 18 March 2009 01:52:46AM 9 points

"that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God)." Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing. A better argument is that wasting time thinking about Christianity will distract you from more probable weird-physics and Simulation Hypothesis Wagers.

A more important criticism is that humans just physiologically don't have any emotions that scale linearly. To the extent that we approximate utility functions, we approximate ones with bounded utility, although utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences, i.e. they have a bounded interest in 'shutting up and multiplying.'
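The contrast between bounded and linear ("shut up and multiply") utility can be made concrete with a toy calculation. All numbers and the particular bounded function below are invented for illustration: a wager with a tiny probability of an enormous payoff dominates under linear utility, but becomes negligible once utility saturates.

```python
import math

# Invented numbers: a Pascalian wager with tiny probability p of a huge payoff.
p, payoff = 1e-9, 1e12

# Linear utility ("shut up and multiply"): the wager beats a sure gain of 1.
linear_eu = p * payoff  # ~1000

# A bounded utility function (one arbitrary choice): u(x) = 1 - exp(-x/scale).
# Payoffs far above `scale` all saturate near u = 1.
scale = 1e4
def u(x):
    return 1 - math.exp(-x / scale)

bounded_eu = p * u(payoff)  # ~1e-9: the enormous payoff saturates
sure_thing = u(1.0)         # ~1e-4: a sure small gain now wins
print(linear_eu > u(1.0), bounded_eu < sure_thing)
```

Any bounded, increasing utility function produces the same qualitative reversal; the `scale` parameter only sets where saturation kicks in.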

Comment author: Peter_de_Blanc 24 April 2010 04:35:46AM 6 points

utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences

I know this is not what you were suggesting, but this made me think of goal systems of the form "take the action that I think idealized agent X is most likely to take," e.g. WWAIXID ("What Would AIXI Do").

A huge problem with these goal systems is that the idealized agent will probably have very low-entropy probability distributions, while your own beliefs have very high entropy. So you'll end up acting as if you believed with near-certainty the single most likely scenario you can think of.
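This collapse can be shown in a toy model (all states, actions, probabilities, and payoffs below are invented). The idealized agent is modeled as near-certain of the true state, so in each state it deterministically picks that state's best action; imitating its "most likely action" then amounts to acting on your single modal scenario, while expected-utility maximization under your own high-entropy beliefs can choose quite differently.

```python
# Your high-entropy beliefs over three world states (invented numbers).
belief = {"s1": 0.4, "s2": 0.3, "s3": 0.3}

# Payoffs: utility[action][state]. "hedge" is never optimal in any single
# state, but is best in expectation under the beliefs above.
utility = {
    "all_in_s1": {"s1": 10,   "s2": -100, "s3": -100},
    "all_in_s2": {"s1": -5,   "s2": 2,    "s3": -5},
    "all_in_s3": {"s1": -5,   "s2": -5,   "s3": 3},
    "hedge":     {"s1": 1,    "s2": 1,    "s3": 1},
}

# The idealized agent is certain of the state, so its per-state action
# distribution has zero entropy: it just takes that state's optimum.
ideal_action = {s: max(utility, key=lambda a: utility[a][s]) for s in belief}

# "Take the action the idealized agent is most likely to take": push its
# deterministic choices through your beliefs and take the mode.
action_prob = {}
for s, p in belief.items():
    a = ideal_action[s]
    action_prob[a] = action_prob.get(a, 0.0) + p
imitation_choice = max(action_prob, key=action_prob.get)

# Maximizing expected utility under the same high-entropy beliefs instead:
def expected_utility(a):
    return sum(p * utility[a][s] for s, p in belief.items())
eu_choice = max(utility, key=expected_utility)

print(imitation_choice, eu_choice)  # all_in_s1 hedge
```

The imitator bets everything on the 40%-probability modal state (expected utility −56 under its own beliefs), exactly the "near-certainty in the single most likely scenario" failure described above.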

Another problem, of course, is that you'll take actions that only make sense for an agent much more competent than you are. For example, AIXI would be happy to bet $1 million that it can beat Cho Chikun at Go.

Comment author: gjm 15 February 2015 10:24:06PM 0 points

In the relevant circumstances, I too might be happy to bet $1M that AIXI can beat Cho Chikun at Go.