Tiiba2 comments on Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong

Post author: Eliezer_Yudkowsky 19 October 2007 11:37PM


Comment author: Tiiba2 20 October 2007 06:27:24PM 3 points

Give me five dollars, or I will kill as many puppies as it takes to make you pay. And they'll go to hell. And there in that hell will be fire, brimstone, and rap with Engrish lyrics.

I think the problem is not Solomonoff induction or Kolmogorov complexity or Bayesian rationality, whatever the difference is, but you. You don't want an AI to think like this because you don't want it to kill you. Meanwhile, to a true altruist, it would make perfect sense.

*Not really confident. It's obvious that no society of selfish beings whose members think like this could function. But they'd still, absurdly, be happier on average.*
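The "perfect sense" claim above rests on a naive expected-utility calculation: the threatened disutility is so vast that it swamps the $5 cost even at an absurdly small probability that the mugger is honest. A minimal sketch of that arithmetic, using illustrative numbers of my own choosing (3^^^^3 is far too large to represent, so 3^^3 = 3^27 stands in, and the probability and utility units are pure assumptions):

```python
# Illustrative sketch of the naive expected-utility comparison in Pascal's Mugging.
# All numbers here are assumptions for demonstration, not claims from the post.
p_mugger_honest = 10**-10     # tiny credence that the threat is real
lives_at_stake = 3**27        # 3^^3, a stand-in for 3^^^^3 (the real number is vastly larger)
utility_per_life = 1.0        # utility units chosen so one life = 1

expected_loss_if_refuse = p_mugger_honest * lives_at_stake * utility_per_life
cost_of_paying = 5.0          # five dollars, expressed in the same utility units

# Even at a one-in-ten-billion credence, the threatened loss dominates the $5 cost,
# so the naive expected-utility maximizer pays up:
print(expected_loss_if_refuse > cost_of_paying)
```

The point of the sketch is that no realistic discounting of the probability keeps pace with the growth of 3^^^^3, which is exactly the feature of unbounded utility calculations the original post is probing.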

Comment author: pnrjulius 07 April 2012 01:36:12AM 0 points

Well, in that case, one possible response is for me to kill YOU (or report you to the police who will arrest you for threatening mass animal cruelty). But if you're really a super-intelligent being from beyond the simulation, then trying to kill you will inevitably fail and probably cause those 3^^^^3 people to suffer as a result.

(The most plausible scenario in which a Pascal's Mugging occurs? Our simulation is being tested for its coherence in expected utility calculations. Fail the test and the simulation will be terminated.)