
Adele_L comments on Open thread, August 19-25, 2013 - Less Wrong Discussion

2 Post author: David_Gerard 19 August 2013 06:58AM




Comment author: Adele_L 19 August 2013 11:39:54PM 3 points

Consider the following scenario. Suppose it can be shown that the laws of physics imply that if we perform a certain action (costing 5 utils), then in 1/googol of our descendant universes, 3^^^3 utils will be generated. Intuitively, it seems (at least to me) that we should perform this action! But this scenario also seems isomorphic to a Pascal's mugging situation. What is the difference?

If I attempt to describe the thought process that leads to this difference, it goes something like this. What is the measure of the causal descendants in which 3^^^3 utils are generated? In a typical Pascal's mugging, I expect there to be absolutely zero causal descendants in which 3^^^3 utils are generated, but in this example, I expect there to be "1/googol" such causal descendants, even though the subjective probability of the two scenarios is roughly the same. I then do my expected utility maximization with (# of utils)(best guess of the measure) instead of (# of utils)(subjective probability), which seems to match my intuitions better, at least.
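The distinction between the two decision rules can be sketched numerically. This is a hypothetical illustration, not anything from the original comment: all the numbers are stand-ins (a Python float cannot represent 3^^^3, so a merely large payoff is used), and the variable names are invented for the sketch.

```python
# Sketch of the two decision rules: weighting the payoff by best-guess
# measure of paying descendant universes vs. by subjective probability.
# All magnitudes are illustrative stand-ins; 3^^^3 is not representable.

PAYOFF = 1e120         # stand-in for "3^^^3 utils"
COST = 5.0             # utils the action costs to perform
SUBJ_PROB = 1e-100     # roughly equal subjective probability in both scenarios

def expected_utility(payoff, weight, cost):
    """Expected utility of acting, with `weight` as either the best-guess
    measure of paying branches or the subjective probability."""
    return payoff * weight - cost

# Physics case: a known 1/googol measure of descendant universes pays off.
physics_eu = expected_utility(PAYOFF, 1e-100, COST)   # large and positive

# Typical Pascal's mugging: best-guess measure of paying branches is zero,
# even though the subjective probability of the story is ~SUBJ_PROB.
mugging_eu = expected_utility(PAYOFF, 0.0, COST)      # just -COST

# Under (utils x measure) only the physics action is worthwhile; under
# (utils x subjective probability) the two scenarios would look the same.
```

The point of the sketch is that the two rules agree on the inputs (payoff, cost, subjective probability) and diverge only in which weight multiplies the payoff.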

But this also just seems like I am passing the buck to the subjective probability of a certain model of the universe, and that this will suffer from the mugging problem as well.

So does thinking about it this way add anything, or is it just more confusing?

Comment author: Armok_GoB 27 August 2013 08:22:33PM 0 points

You can't pay for things in Utils; you can only pay for them in Opportunities.

This is where Pascal's mugging goes wrong as well: the only reason not to give Pascal's mugger the money is the possibility of an even greater opportunity coming along later, i.e. a mugger who is more credible and/or offers an even greater potential payoff. (And once any mugger offers INFINITE utility, there's only credibility left to increase.)

Comment author: Adele_L 27 August 2013 11:04:17PM 0 points

That doesn't work, because the expected value of things you actually should do, e.g. donating to an effective charity, is far lower than the expected value of a Pascal's mugging.

Comment author: Armok_GoB 28 August 2013 07:03:21PM 1 point

I expect an FAI to have at least a 10% probability of acquiring infinite computational power. This means donations to MIRI have infinite expected utility.