cousin_it comments on Tendencies in reflective equilibrium - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Agreed.
Here's an idea that just occurred to me: you could replace Solomonoff induction with a more arbitrary prior (interpreted as "degree of caring", as Wei Dai suggests) and hand-tune your degree of caring for huge/unfair universes so Pascal's mugging stops working. Informally, you could value your money more in universes that don't contain omnipotent muggers. This approach still feels unsatisfactory, but I don't remember seeing it suggested before...
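To make the idea concrete, here is a toy sketch (my own illustration, not from the comment): an expected-utility calculation over two hypothetical worlds, comparing a uniform "care equally" weighting with a hand-tuned one that discounts universes containing omnipotent muggers. All numbers and world names are made up for illustration.

```python
# Hypothetical worlds: (name, prior probability weight, utility of paying the mugger)
# The "mugger universe" is where the mugger's astronomical promise is real.
worlds = [
    ("mundane", 0.999, -5.0),         # you just lose $5
    ("mugger_universe", 1e-9, 1e15),  # mugger delivers a huge payoff
]

def expected_utility(worlds, caring):
    """Weight each world's utility by prior * degree-of-caring, then normalize."""
    total = sum(p * caring(name) * u for name, p, u in worlds)
    norm = sum(p * caring(name) for name, p, _ in worlds)
    return total / norm

# Solomonoff-style stand-in: care equally about every universe.
uniform = lambda name: 1.0

# Hand-tuned: value outcomes far less in universes with omnipotent muggers.
tuned = lambda name: 1e-12 if name == "mugger_universe" else 1.0

print(expected_utility(worlds, uniform))  # positive: the tiny-probability huge payoff dominates
print(expected_utility(worlds, tuned))    # negative: paying the mugger no longer looks good
```

Under the uniform weighting the 1e-9 × 1e15 term swamps the sure loss, so the mugging "works"; the tuned degree-of-caring suppresses exactly that term, without changing the utilities themselves.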
Is this different from jimrandomh's proposal to penalize the prior probability of events of utility of large magnitude, or komponisto's proposal to penalize the utility?