AdeleneDawner comments on The Aliens have Landed! - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
But then changing your values to not care about simulated torture won't prevent the extortion attempt either (since the aliens will think there's a small chance you haven't actually changed your values and it costs them nothing to try). Unless you already really just don't care about simulated torture, it seems like you'd want to have a decision algorithm that makes you go to war against such extortionists (and not just ignore them).
Wait, is this a variant on Newcomb's problem?
(Am I just slow today? Nobody else seems to have mentioned it outright, at least.)
This sort of thing is really the motivating example behind Newcomb's problem.
I'm not seeing the analogy. Can you explain?
The extortion attempt cost the aliens almost nothing, and would have given them a vacant solar system to move into if someone like Fred was in power, so it's rational for them to make the attempt almost regardless of the odds of succeeding. Nobody is reading anybody else's mind here, except the idiots who read their own minds and uploaded them to the Internet, and they don't seem to be making any of the choices.
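The "almost regardless of the odds" point is just expected value: a cheap attempt is worth making whenever the success probability exceeds cost divided by payoff. A minimal sketch with assumed numbers (nothing here comes from the story itself):

```python
# Why a near-free extortion attempt is rational almost regardless of
# the odds of success: it pays off whenever p_success > cost / gain.
# The figures below are illustrative assumptions, not from the thread.

cost = 1.0            # sending the threat costs the aliens almost nothing
gain = 1_000_000.0    # a vacant solar system to move into if it works

# Break-even success probability for making the attempt:
p_break_even = cost / gain
print(p_break_even)  # 1e-06: even a one-in-a-million Fred makes it worthwhile
```

So the attempt is rational against any population containing even a tiny fraction of people who would cave.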
This case looks most like the 'transparent boxes' version of the problem, which I haven't read much about.
In Newcomb's problem, Omega offers a larger amount of utility if you will predictably do something that intuitively would give a smaller amount of utility.
In this situation, being predictably closed to blackmail probably gives you less disutility in the long run (fewer instances of people trying to blackmail you) than acceding to it, even though acceding looks like the lower-disutility choice in any single instance.
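The long-run comparison can be sketched numerically. All the probabilities and costs below are assumed for illustration; the point is only the structure of the trade-off:

```python
# Illustrative expected-disutility-per-round comparison for two victim
# policies facing would-be extortionists. All numbers are assumptions.

def expected_disutility(p_attempt, p_comply, cost_comply, cost_refused):
    """Expected loss per round: extortionists attempt with probability
    p_attempt; if the victim complies they pay cost_comply, otherwise
    they eat cost_refused (the threat carried out, or a war fought)."""
    return p_attempt * (p_comply * cost_comply + (1 - p_comply) * cost_refused)

# Policy A: always pay. Extortion is profitable, so attempts are common.
pay_up = expected_disutility(p_attempt=0.9, p_comply=1.0,
                             cost_comply=10.0, cost_refused=0.0)

# Policy B: predictably refuse. Each carried-out threat hurts more, but
# attempts become rare because they no longer pay.
refuse = expected_disutility(p_attempt=0.05, p_comply=0.0,
                             cost_comply=0.0, cost_refused=50.0)

print(pay_up, refuse)  # 9.0 2.5: the refuser loses less in the long run
```

As in Newcomb's problem, the policy that loses in the single transparent instance wins once predictability feeds back into how often the instance arises.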
The other interesting part of this particular scenario is how to define 'blackmail' and differentiate it from, say, someone accidentally doing something harmful to you and then asking for your help in fixing it. We've approached that issue too, but I'm not sure it's been given a thorough treatment yet.
They had other choices though. It would have been similarly inexpensive to offer to simulate happy people.
Even limiting the spheres to a single proof-of-concept would have been a start.