
An Oracle standard trick

Post author: Stuart_Armstrong 03 June 2015 02:17PM

A putative new idea for AI control; index here.

EDIT: To remind everyone, this method does not entail the Oracle having false beliefs, just behaving as if it did; see here and here.

An idea I thought I'd been mentioning to everyone, but a recent conversation revealed that I haven't been assiduous about it.

It's quite simple: whenever designing an Oracle, you should, as a default, run its output channel through a probabilistic process akin to the false thermodynamic miracle, in order to make the Oracle act as if it believed its message would never be read.

This reduces the risk of the Oracle manipulating us through the content of its message, because it acts as if that content will never be seen by anyone.

Now, some Oracle designs can't use that (eg if accuracy is defined in terms of the reaction of the people who read its output). But in general, if your design allows such a precaution, there's no reason not to put it on, so it should be the default in Oracle design.

Even if the Oracle design precludes this directly, some version of it can often be used. For instance, if accuracy is defined in terms of the reaction of the first person to read the output, and that person is isolated from the rest of the world, then we can get the Oracle to act as if it believed a nuclear bomb was due to go off before the person could communicate with the rest of the world.
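
To make the shape of the trick concrete, here is a minimal sketch of an expected-utility maximiser whose utility only counts worlds in which a vanishingly unlikely erasure event wipes the output channel. The names here (P_ERASED, world_model, utility) and the toy example are assumptions for illustration, not a specification of any particular Oracle design:

    # Sketch: the Oracle's probabilities stay accurate; only its scoring of
    # worlds changes, so it acts as if its answer will never be read.

    P_ERASED = 1e-12  # chance that random noise erases the output channel (the rare event)

    def expected_modified_utility(answer, world_model, utility):
        """Sum utility only over worlds where the erasure event occurs.

        world_model(answer) yields (probability, world) pairs with accurate
        probabilities; utility(world) is the Oracle's original utility.
        Worlds where the message is read contribute nothing, so manipulating
        readers buys the Oracle nothing.
        """
        return sum(prob * utility(world)
                   for prob, world in world_model(answer)
                   if world.get("message_erased"))

    def choose_answer(candidate_answers, world_model, utility):
        # Pick the answer with the highest modified expected utility.
        return max(candidate_answers,
                   key=lambda a: expected_modified_utility(a, world_model, utility))

    # Toy usage: an honest answer and a manipulative one.
    def toy_world_model(answer):
        yield P_ERASED, {"message_erased": True, "answer": answer}
        yield 1.0 - P_ERASED, {"message_erased": False, "answer": answer}

    def toy_utility(world):
        # Rewards an accurate answer on the channel; manipulation would only pay
        # off in worlds where someone reads the message, and those are ignored.
        return 1.0 if world["answer"] == "honest" else 0.0

    print(choose_answer(["honest", "manipulative"], toy_world_model, toy_utility))

Note that the tiny P_ERASED scales every candidate answer's score by the same factor, so it never changes which answer wins: the Oracle still knows the message will almost certainly be read, but that knowledge no longer affects what it chooses to write.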

Comments (33)

Comment author: SilentCal 04 June 2015 05:47:31PM 3 points [-]

Is there a danger that if you use an Oracle with this trick repeatedly it will conclude that there's an identical Oracle whose message is being read, and start trying to acausally cooperate with this other Oracle?

Comment author: Stuart_Armstrong 05 June 2015 11:49:06AM 1 point [-]

The oracle does not have inaccurate beliefs; it knows everything there is to know (see http://lesswrong.com/lw/ltf/false_thermodynamic_miracles/ ).

And if there's an acausal trade issue, we can break it via: http://lesswrong.com/r/discussion/lw/luy/acaucal_trade_barriers/ .

Comment author: [deleted] 03 June 2015 02:26:01PM 2 points [-]

Talk about inferential distances... From the title, I was thinking this must be about the Oracle Financials software package, or at least the database.

Comment author: Gondolinian 03 June 2015 03:55:34PM *  2 points [-]

In the interest of helping to bridge the inferential distance of others reading this, here's a link to the wiki page for Oracle AI.

Comment author: Stuart_Armstrong 03 June 2015 02:32:16PM 2 points [-]

Those are less frightening, currently.

Comment author: Epictetus 04 June 2015 10:08:31PM 1 point [-]

Would there be any unintended consequences? I'm worried that possessing an incorrect belief may lead the Oracle to lose accuracy in other areas.

For instance, if accuracy is defined in terms of the reaction of the first person to read the output, and that person is isolated from the rest of the world, then we can get the Oracle to act as if it believed a nuclear bomb was due to go off before the person could communicate with the rest of the world.

In this example, would the imminent nuclear threat affect the Oracle's reasoning process? I'm sure there are some questions whose answers could vary depending on the likelihood of a nuclear detonation in the near future.

Comment author: Stuart_Armstrong 05 June 2015 11:46:46AM 2 points [-]

The Oracle does not possess inaccurate beliefs. Look at http://lesswrong.com/lw/ltf/false_thermodynamic_miracles/ and http://lesswrong.com/r/discussion/lw/lyh/utility_vs_probability_idea_synthesis/ . Note I've always very carefully phrased it as "act as if it believed" rather than "believed".

Comment author: roystgnr 04 June 2015 10:48:42PM 2 points [-]

Regardless of the mechanism for misleading the oracle, its predictions for the future ought to become less accurate in proportion to how useful they have been in the past.

"What will the world look like when our source of super-accurate predictions suddenly disappears" is not usually the question we'd really want to ask. Suppose people normally make business decisions informed by oracle predictions: how would the stock market react to the announcement that companies and traders everywhere had been metaphorically lobotomized?

We might not even need to program in "imminent nuclear threat" manually. "What will our enemies do when our military defenses are suddenly in chaos due to a vanished oracle?"

Comment author: HungryHobo 05 June 2015 03:32:15PM 0 points [-]

I'm not seeing the point when it could simply be disallowed from any reasoning chain involving references to its own output, and has no goals.

Comment author: Silver_Swift 04 June 2015 02:15:00PM 0 points [-]

(eg if accuracy is defined in terms of the reaction of people that read its output).

I'm mostly ignorant about AI design beyond what I picked up on this site, but could you explain why you would define accuracy in terms of how people react to the answers? There doesn't seem to be an obvious difference between how I react to information that is true or (unbeknownst to me) false. Is it just for training questions?

Comment author: Stuart_Armstrong 05 June 2015 11:44:50AM 1 point [-]

It might happen. "Accuracy" could involve the AI answering with the positions of trillions of atoms, which is not human-parsable. So someone might code "human parsable" as "a human confirms the message is parsable".

Comment author: Lumifer 03 June 2015 03:11:09PM 0 points [-]

Why would the Oracle send any messages at all, then?

Comment author: ike 03 June 2015 09:44:26PM 0 points [-]

You don't expect the Oracle to try to influence you; it answers questions. Just in case it does try something, you lead it to believe it can't do anything anyway.

It would send a message because it's programmed to answer questions, I'm assuming.

Comment author: Lumifer 04 June 2015 02:40:29PM 1 point [-]

Are we talking about an AI which has recursively self-improved?

Comment author: ike 04 June 2015 02:43:49PM 0 points [-]

I don't think that should matter, unless the improvement causes it to realize it does have an effect.

The point is that this is a failsafe in case something goes wrong.

(Or that's how I understood the proposal.)

Personally, I doubt it would work, because the AI should be able to see that you've programmed it that way. You need to outsmart the AI, which is similar to boxing it and telling it it's not boxed.

Comment author: Lumifer 04 June 2015 03:13:30PM 2 points [-]

The issue with the self-modifying AI is precisely that "it was programmed to do that" stops being a good answer.

Comment author: Stuart_Armstrong 05 June 2015 11:51:36AM 1 point [-]

The "act as if it doesn't believe its messages will be read" is part of its value function, not its decision theory. So we are only requiring the value function to be stable over self improvement.

Comment author: Lumifer 05 June 2015 02:31:22PM 0 points [-]

Why is that? The value function tells you what is important, but the "act" part requires decision theory.

Comment author: Stuart_Armstrong 05 June 2015 04:17:20PM 0 points [-]

What I mean is that I haven't wired the decision theory to something odd (which might be removed by self improvement), just chosen a particular value system (which has much higher chance of being preserved by self improvement).

Comment author: ike 04 June 2015 05:43:27PM 0 points [-]

It's supposed to keep that part of its programming. If we could rely on that, we wouldn't need any control. But we're worried it has changed, so we build in data which makes the AI think it won't have any control over the world, so even if it messes up it should at least not try to manipulate us.

Comment author: Lumifer 04 June 2015 06:19:39PM 0 points [-]

Right, so we have an AI which (1) is no longer constrained by its original programming; and (2) believes no one ever reads its messages. And thus we get back to my question: why such an AI would bother to send any messages at all?

Comment author: Stuart_Armstrong 05 June 2015 11:53:38AM 0 points [-]

The design I had in mind is: utility u causes the AI to want to send messages. This is modified to u' so that it also acts as if it believed the message wasn't read (note this doesn't mean that it believes it!). Then if u' remains stable under self-improvement, we have the same behaviour after self-improvement.
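
A minimal sketch of that u → u' wrapper, with the world representation assumed purely for illustration:

    def make_u_prime(u):
        """Wrap utility u so that only worlds where the message goes unread count.

        Beliefs are untouched (the agent still assigns high probability to the
        message being read), but read-worlds score zero, so its choices look as
        if it believed nobody would ever read the message.
        """
        def u_prime(world):
            return u(world) if not world.get("message_read", False) else 0.0
        return u_prime

Since u still rewards putting an answer on the channel in the unread worlds, the AI keeps sending messages; and if u' is preserved through self-improvement, so is this behaviour.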

Comment author: Lumifer 05 June 2015 02:35:44PM 1 point [-]

it also acts as if it believed the message wasn't read (note this doesn't mean that it believes it!)

So... you want to introduce, as a feature, the ability to believe one thing but act as if you believe something else? That strikes me as a remarkably bad idea. For one thing, people with such a feature tend to end up in psychiatric wards.

Comment author: gjm 05 June 2015 04:28:40PM 1 point [-]

I haven't thought hard about Stuart's ideas, so this may or may not have any relevance to them; but it's at least arguable that it's really common (even outside psychiatric wards) for explicit beliefs and actions to diverge. A standard example: many Christians overtly believe that when Christians die they enter into a state of eternal infinite bliss, and yet treat other people's deaths as tragic and try to avoid dying themselves.

Comment author: Stuart_Armstrong 05 June 2015 04:17:49PM 0 points [-]

Have you read the two articles I linked to, explaining the general principle?

Comment author: ike 04 June 2015 06:26:11PM 0 points [-]

In that case, you expect it to send no messages.

This strategy is supposed to make it so that, instead of failing by sending bad messages, its failure mode is just shutting down.

If all works well, it answers normally, and if it doesn't work, it doesn't do anything, because it expects nobody will listen. As opposed to an oracle that, if it messes up its own programming, will try to manipulate people with its answers.

Comment author: Lumifer 04 June 2015 07:19:35PM 0 points [-]

Well, yes, except that you can have a perfectly good entirely Friendly AI which just shuts down because nobody listens, so why bother?

You're not testing for Friendliness; you're testing for the willingness to continue the irrational waste of bits and energy.

Comment author: Silver_Swift 05 June 2015 01:03:02PM 0 points [-]

False positives are vastly better than false negatives when testing for friendliness though. In the case of an oracle AI, friendliness includes a desire to answer questions truthfully regardless of the consequences to the outside world.

Comment author: Stuart_Armstrong 04 June 2015 09:13:37AM 0 points [-]

It would send a message because it's programmed to answer questions, I'm assuming.

Yes. Most hypothetical designs are set up that way.