
moridinamael comments on Computation Hazards

Post author: Alex_Altair, 13 June 2012 09:49PM




Comment author: moridinamael, 14 June 2012 04:00:21AM

I'm going to shamelessly quote myself from a previous discussion on waterfall ethics:

I don't think our intuitions about what "really happens" (versus what is "mathematically well defined") are useful. I think we have to zoom out at least one level and realize that our moral and ethical intuitions only mean anything within our particular instantiation of our causal framework. We can't be morally responsible for the notional space of computable torture simulations because they exist whether or not we "carry them out." But perhaps we are morally responsible for particular instantiations of those algorithms.

I also want to draw attention to this statement in the original post:

If these simulations are sufficiently precise, then they will be people in and of themselves. The simulations could cause those people to suffer, and will likely kill them by ending the simulation when the prediction or answer is given.

This usage of "killing" is conceptually very distant from the intuitive notion, for the reasons you (Pentashagon) indicate. I don't feel that the question of how to handle moral culpability for events occurring in causally disconnected algorithms is sufficiently settled that we can meaningfully have this conversation.