FAWS comments on Contrived infinite-torture scenarios: July 2010 - Less Wrong

24 Post author: PlaidX 23 July 2010 11:54PM


Comment author: FAWS 24 July 2010 07:45:33PM (6 points)

I don't think any reasonable utility function is consistent with the actions the AI claims to have taken. There may be utility functions that are consistent with those actions, but an AI exhibiting one of them is not an AI I would consider effectively omnipotent.

There is no connection between an agent's intelligence or power and its values, other than its intelligence serving as an upper bound on the complexity of those values. An omnipotent agent can have values just as stupid as anyone else's. An omnipotent AI could assign positive utility to annoying you with stupid, redundant tests as many times as possible, either as part of a genuinely stupid utility function it somehow ended up with by accident, or as part of a non-stupid utility function (if there even is such a thing) that simply looks like nonsense to humans.