
Gastogh comments on Computation Hazards - Less Wrong Discussion

Post author: Alex_Altair, 13 June 2012 09:49PM


Comment author: Gastogh, 14 June 2012 11:51:40AM

For example, suppose a computer program needs to model people very accurately to make some predictions, and it models those people so accurately that the "simulated" people can experience conscious suffering. In a very large computation of this type, millions of people could be created, suffer for some time, and then be destroyed when they are no longer needed for making the predictions desired by the program. This idea was first mentioned by Eliezer Yudkowsky in Nonperson Predicates.

Nitpick: we can date this concern at least as far back as Vernor Vinge's A Fire Upon the Deep:

Pham Nuwen's ticket to the Transcend was based on a Power's sudden interest in the Straumli perversion. This innocent's ego might end up smeared across a million death cubes, running a million million simulations of human nature.