Vladimir_Nesov comments on Contrived infinite-torture scenarios: July 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's not a given that rescuing the child is the best use of one's resources. As a matter of heuristic, you'd expect that, and as a human, you'd form that particular wish, but it's not obvious that even such a heuristic will hold. Maybe something even better than rescuing the child can be done instead.
Not to speak of the situation where the harm is already done. A fact is a fact; not even a superintelligence can alter a fact. An agent determines, but doesn't change. It could try "writing over" the tragedy with simulations of happy resolutions (in the future, or in rented possible worlds), but those simulations would be additional things to do, and not at all obviously an optimal use of the FAI's control.
You'd expect the similarity of the original scenario to "connect" it with the new ones, diluting the tragedy through a reduction in the anticipated experience of it happening, but anticipated experience has no absolute moral value, apart from allowing one to discover the moral value of certain facts. So this doesn't even avert the tragedy, and simulation of a sub-optimal pre-singularity world, even without the tragedy, even locally around the averted tragedy, might be grossly noneudaimonic.