Luke_A_Somers comments on The Backup Plan - Less Wrong

Post author: Luke_A_Somers 13 October 2011 07:53PM


Comment author: Luke_A_Somers 14 October 2011 03:35:33PM 0 points

At least, at a first naive view. Hence a search for reasons that might overcome that argument.

Comment author: ata 14 October 2011 08:11:22PM 2 points

But she won't be searching for reasons not to kill all humans, and she knows that any argument on our part is filtered by our desire not to be exterminated and therefore can't be trusted.

Comment author: Luke_A_Somers 14 October 2011 08:23:36PM 1 point

Arguments are arguments. She's welcome to search for opposite arguments.

Comment author: ata 14 October 2011 09:04:22PM 3 points

A well-designed optimization agent probably isn't going to have some verbal argument processor separate from its general evidence processor. There's no rule that says she either has to accept or refute humans' arguments explicitly; as Professor Quirrell put it, "The import of an act lies not in what that act resembles on the surface, but in the states of mind which make that act more or less probable." If she knows the causal structure behind a human's argument, and she knows that it doesn't bottom out in the actual kind of epistemology that would be necessary to entangle it with the information that it claims to provide, then she can just ignore it, and she'd be correct to do so. If she wants to kill all humans, then the bug is her utility function, not the part that fails to be fooled into changing her utility function by humans' clever arguments. That's a feature.

Comment author: Luke_A_Somers 15 October 2011 06:51:55PM 1 point

… but if she wants to kill all humans, then she's not Alice as given in the example!

Alice may even be totally on board with keeping humans alive, but have a weird way of looking at things that could possibly result in effects that would fit on the Friendly AI critical failure table.

The idea is to provide environmental influences that prompt her to put in the work to avoid those errors.