Document comments on The AI in a box boxes you - Less Wrong
There is no reason to trust that the AI is telling the truth, unlike in all the Omega thought experiments.
As long as the probability of it telling the truth is positive, it could up the number of copies of you it tortures/claims to torture (and torture them all in subtly different ways)...
I don't use a single probability to decide whether it is telling me the truth. How much I believe it depends on the statement being made, just as it does in everyday life.
So the higher the number of copies it claims to be torturing, the less I would believe it. Consider your prior in this case as well: you can't assign equal probability to every possible maximum number of copies of you it could simulate, because there are infinitely many candidate maxima. You'd need a prior that sums to 1 in the limit (as you do in Solomonoff induction).
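To sketch the kind of normalized prior this gestures at (the specific prior p(n) = 1/(n(n+1)) is an illustrative choice, not anything from the thread), here is a distribution over claimed copy counts whose total converges to 1, and under which inflating the claimed number of copies does not inflate the expected harm:

```python
# Illustrative prior over the claimed number of tortured copies n = 1, 2, 3, ...
# p(n) = 1/(n*(n+1)) telescopes: the partial sum up to N is 1 - 1/(N+1),
# so the total converges to 1, as a proper prior over infinitely many
# hypotheses must.

def prior(n: int) -> float:
    """Illustrative normalized prior over claim sizes (my choice, not canonical)."""
    return 1.0 / (n * (n + 1))

total = sum(prior(n) for n in range(1, 10**6 + 1))
print(total)  # partial sums approach 1

# Key point: the expected scale of the threat, n * prior(n) = 1/(n+1),
# *shrinks* as the claimed number of copies grows, so upping the claim
# buys the AI nothing in expectation under this prior.
for n in (1, 10, 100, 1000):
    print(n, n * prior(n))
```

Any prior that falls off faster than 1/n has this property; a prior falling off exactly at 1/n would not be normalizable, which is why the choice of tail matters.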
There'd be no reason to expect it to torture people at less than the maximum rate its hardware was capable of.
But there is good reason to expect it not to torture people at greater than the maximum rate its hardware is capable of. So if you can bound that rate, there exist positive degrees of belief small enough that they cannot be inflated into something meaningful by upping the claimed number of copies.