
Paul_Crowley2 comments on Nonperson Predicates - Less Wrong

Post author: Eliezer_Yudkowsky, 27 December 2008 01:47AM

Comment author: Paul_Crowley2, 27 December 2008 12:34:20PM, 0 points

"by the time the AI is smart enough to do that, it will be smart enough not to"

I still don't quite grasp why this isn't an adequate answer. If an FAI shares our CEV, it won't want to simulate zillions of conscious people in order to put them through great torture, and it will figure out how to avoid doing so. Is it simply that it may take the simulated torture of zillions for the FAI to figure this out? I don't see any reason to think that we will find this problem very much easier to solve than a massively powerful AI would.

I'm also not wholly convinced that the only ethical way to treat simulacra is never to create them, but I need to think about that one further.