Lumifer comments on An Oracle standard trick - Less Wrong

4 Post author: Stuart_Armstrong 03 June 2015 02:17PM



Comment author: Lumifer 04 June 2015 07:19:35PM 0 points [-]

Well, yes, except that you can have a perfectly good entirely Friendly AI which just shuts down because nobody listens, so why bother?

You're not testing for Friendliness, you're testing for the willingness to continue the irrational waste of bits and energy.

Comment author: Silver_Swift 05 June 2015 01:03:02PM 0 points [-]

False positives are vastly better than false negatives when testing for friendliness, though. In the case of an oracle AI, friendliness includes a desire to answer questions truthfully regardless of the consequences to the outside world.

Comment author: Lumifer 05 June 2015 02:37:36PM 1 point [-]

friendliness includes a desire to answer questions

Which definition of Friendliness are you referring to? I have a feeling you're treating Friendliness as a sack into which you throw whatever you need at the moment...

Comment author: Silver_Swift 08 June 2015 01:49:08PM 1 point [-]

Fair enough, let me try to rephrase that without using the word friendliness:

We're trying to make a superintelligent AI that answers all of our questions accurately but does not otherwise influence the world and has no ulterior motives beyond correctly answering questions that we ask of it.

If we instead accidentally made an AI that decides that it is acceptable to (for instance) manipulate us into asking simpler questions so that it can answer more of them, it is preferable that it doesn't believe anyone is listening to the answers it gives, because that is one less way it has of interacting with the outside world.

It is a redundant safeguard. With it, you might end up with a perfectly functioning AI that does nothing; without it, you may end up with an AI that optimizes the world in an uncontrolled manner.

Comment author: Lumifer 08 June 2015 02:55:44PM 0 points [-]

it is preferable that it doesn't believe anyone is listening to the answers it gives

I don't think so. As I mentioned in another subthread here, I consider separating what an AI believes (e.g. that no one is listening) from what it actually does (e.g. answer questions) to be a bad idea.