tailcalled comments on The Hardcore AI Box Experiment - Less Wrong

3 Post author: tailcalled 30 March 2015 06:35PM




Comment author: kingmaker 30 March 2015 07:31:10PM *  8 points [-]

This co-opts Bostrom's simulation argument, but a possible solution to the Fermi paradox is that we are all AIs in the box, and the simulators have produced billions of humans in order to find the most friendly human to release from the box. Moral of the story: be good and become a god.

Comment author: tailcalled 30 March 2015 07:41:03PM 6 points [-]

Assuming the simulators are good, that would imply that people who experience lives not worth living are not actually people (since otherwise it would be evil to simulate them) but instead shallow 'AIs'. Paradoxically, if that argument is true, there is nothing good about being good.

Or something along those lines.

Comment author: artemium 31 March 2015 06:02:02AM 1 point [-]

Hmm, I still think there is an incentive to behave well. Good, cooperative behavior is generally more useful than being untrustworthy and cruel to other entities. There might be some exceptions, though (e.g. the simulators want a conflict situation for entertainment or other purposes).

Comment author: tailcalled 31 March 2015 09:45:21AM 2 points [-]

Well, yeah, you should still be good to your friends and other presumably real people. However, there would be no point in, say, trying to save people from the holocaust, since the simulators wouldn't let actual people get tortured and burnt.

Comment author: kingmaker 31 March 2015 03:50:18PM 4 points [-]

The simulators may justify actual people getting tortured and burnt by telling themselves that most people will not experience too much suffering, that the simulated people would not otherwise have lived (although this fails to distinguish between lives and lives worth living), and that they can end the simulation if our suffering becomes too great. That the hypothetical simulators did not step in during the many genocides in our kind's history may suggest that they either do not exist, or that creating an FAI is more important to them than preventing human suffering.