
Comment author: Toggle 16 February 2015 08:23:24PM 0 points [-]

In the more frequently considered case of a non-stable utility function, my understanding is that the agent will not try to identify the terminal attractor and then act according to that - it doesn't care about what 'it' will value in the future, except instrumentally. Rather, it will attempt to maximize its current utility function, given a future agent/self acting according to a different function. Metaphorically, it gets one move in a chess game against its future selves.

I don't see any reason for a temporarily uncertain agent to act any differently. If there is no function that is, right now, motivating it to maximize paperclips, why should it care that it will be so motivated in the future? That would seem to require a kind of recursive utility function, one in which it gains utility from maximizing its utility function in the abstract.

Comment author: Stuart_Armstrong 17 February 2015 02:52:14PM 1 point [-]

In this case, the AI has a stable utility function - it just doesn't know yet what it is.

For instance, it could be "in worlds where a certain coin was heads, maximise paperclips; in other worlds, minimise them", and it has no info yet on the coin flip. That's a perfectly consistent and stable utility function.

Comment author: Mark_Friedenbach 15 February 2015 05:30:57PM *  -1 points [-]

"There are some agents that are defined to have constant value systems, where, nonetheless, the value system will drift in practice".

Ok, we are now quite deep in a thread that started with me pointing out that a constant value system might be a bad thing! People want machines whose actions align with their own morality, and humans don't have constant value systems (maybe this is where we disagree?).

There are many bad stable outcomes. And an unstable update system will eventually fall into one of them, because they're attractor states.

Why don't we see humans drifting into being sociopaths? E.g. starting as normal, well-adjusted human beings and then becoming sociopaths as they get older?

Comment author: Stuart_Armstrong 16 February 2015 03:56:58PM 0 points [-]

Why don't we see humans drifting into being sociopaths? E.g. starting as normal, well-adjusted human beings and then becoming sociopaths as they get older?

That's an interesting question, partially because we'd want to copy that and implement it in AI. A large part of it seems to be social pressure, and lack of power: people must respond to social pressure, because they don't have the power to ignore it (a superintelligent AI would be very different, as would a superintelligent human). This is also connected with some evolutionary instincts, which cause us to behave in many ways as if we were in a tribal society with high costs to deviant behaviour - even if this is no longer the case.

The other main reason is evolution itself: very good at producing robustness, terrible at efficiency. If/when humans start self modifying freely, I'd start being worried about that tendency for them too...

Comment author: [deleted] 15 February 2015 03:00:22PM 0 points [-]

Thanks for the translation :) Are you French?

In response to comment by [deleted] on AI-created pseudo-deontology
Comment author: Stuart_Armstrong 16 February 2015 03:50:41PM 1 point [-]

I went to secondary school in France (near Geneva) :-)

Comment author: Mark_Friedenbach 15 February 2015 04:19:02AM *  1 point [-]

There are other ways of interpreting value stability; a satisficer is one example. But those don't tend to be stable

That statement does not make sense. I hope if you read it with a fresh mind you can see why. "There are other ways of defining stable, but they are not stable." Perhaps you need to taboo the word stable here?

And would those defaults and update procedures remain stable themselves?

No, and that's the whole point! Stability is scary. Stability leads to Clippy. People wouldn't want stable. They'd want sensible. Sensible updates its behavior based on new information.

Comment author: Stuart_Armstrong 15 February 2015 02:36:02PM *  0 points [-]

Perhaps you need to taboo the word stable here?

"There are some agents that are defined to have constant value systems, where, nonetheless, the value system will drift in practice".

Stability leads to Clippy.

There are many bad stable outcomes. And an unstable update system will eventually fall into one of them, because they're attractor states. To avoid this, you need to define "sensible" in such a way that the agent never enters such states. You're effectively promoting a different kind of goal stability - a zone of stability, rather than a single point. It's not intrinsically a bad idea, but it's not clear that it's easier than finding a single ideal goal system. And it's very underdefined at this point.

Comment author: [deleted] 14 February 2015 11:12:32AM 1 point [-]

What's an AI control retreat? (French...)

In response to comment by [deleted] on AI-created pseudo-deontology
Comment author: Stuart_Armstrong 15 February 2015 02:32:26PM 1 point [-]

A period of contemplation and meditation in a monastery (figuratively speaking ^_^).

Comment author: Toggle 15 February 2015 01:54:11AM *  0 points [-]

Well, let's further say that you assign p(+u)=0.51 and p(-u)=0.49, slightly favoring the production of paperclips over their destruction. And just to keep it a toy problem, you've got a paperclip-making button and a paperclip-destroying button you can push, and no other means of interacting with reality.

A plain old 'confident' paperclip maximizer in this situation will happily just push the former button all day, receiving one Point every time it does so. But an uncertain agent will have the exact same behavior; the only difference is that it only gets .02 Points every time it pushes the button, and thus a lower overall score in the same period of time. But the number of paperclips produced is identical. The agent would not (for example) push the 'destroy' button 49 times and the 'create' button 51 times. In practical effect, this is as inconsequential as telling the confident agent that it gets two Points for every paperclip.
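(A minimal sketch of that calculation in Python, purely illustrative - the button names, the 0.51/0.49 probabilities, and the one-point-per-paperclip payoff are just the toy assumptions above, nothing more:)

    # Minimal sketch of the toy two-button problem. Probabilities and payoffs
    # are the illustrative assumptions from the comment above.
    P_MAXIMIZE = 0.51   # probability that the true utility is +u (make paperclips)
    P_MINIMIZE = 0.49   # probability that the true utility is -u (destroy paperclips)

    def expected_points(button):
        """Expected score of one button press under uncertainty over +u vs -u."""
        delta = 1 if button == "create" else -1            # change in paperclip count
        return P_MAXIMIZE * delta + P_MINIMIZE * (-delta)  # +u scores delta, -u scores -delta

    for button in ("create", "destroy"):
        print(button, round(expected_points(button), 2))
    # create   0.02  -> still the uniquely best action, just with a smaller score
    # destroy -0.02  -> never pressed; the agent does not mix 51/49 between the buttons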

So in this toy problem, at least, uncertainty isn't a moderating force. On the other hand, I would intuitively expect different behavior in a less 'toy' problem - for example, an uncertain maximizer might build every paperclip with a secret self-destruct command so that the number of paperclips could be quickly reduced to zero. So there's a line somewhere where behavior changes. Maybe a good way to phrase my question would be: what are the special circumstances under which an uncertain utility function produces a change in behavior?

Comment author: Stuart_Armstrong 15 February 2015 02:31:48PM 0 points [-]

If the AI expects to know tomorrow what utility function it has, it will be willing to wait, even if there is a (mild) discount rate, while a pure maximiser would not.
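(A toy illustration of that waiting argument, reusing the 0.51/0.49 probabilities from the thread; the discount factor of 0.95 is an assumed number, not from the post:)

    # Toy comparison: act now under uncertainty vs. wait one step, learn whether
    # the utility is +u or -u, then act. Discount factor is an assumed value.
    P_PLUS = 0.51      # probability the true utility is +u
    DISCOUNT = 0.95    # assumed mild per-step discount rate

    # Acting now: press the best button blind, worth 0.51*(+1) + 0.49*(-1) in expectation.
    act_now = P_PLUS * 1 + (1 - P_PLUS) * (-1)   # = 0.02

    # Waiting: tomorrow the agent knows its utility function, so whichever way the
    # coin came up it presses the button worth +1, discounted by one step.
    wait_then_act = DISCOUNT * 1                 # = 0.95

    print(act_now, wait_then_act)  # 0.02 vs 0.95: the uncertain agent prefers to wait
    # A pure +u maximiser would instead compare 1 now against 0.95 tomorrow and not wait.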

Comment author: Toggle 13 February 2015 08:19:49PM *  0 points [-]

Made me think of Rawls's veil of ignorance, somewhat. I wonder: is there a whole family of techniques along the lines of "design intelligence B, given some ambiguity about your own values", with different forms or degrees of uncertainty?

It seems like it should avoid extreme or weirdly specialized results (i.e. paper-clipping), since hedging your bets is an immediate consequence. But it's still highly dependent on the language you're using to model those values in the first place.

I'm a little unclear on the behavioral consequences of 'utility function uncertainty' as opposed to the more usual empirical uncertainty. Technically, it is an empirical question, but what does it mean to act without having perfect confidence in your own utility function?

Comment author: Stuart_Armstrong 14 February 2015 10:38:45PM 0 points [-]

but what does it mean to act without having perfect confidence in your own utility function?

If you look at utility functions as actual functions (not as affine equivalence classes of functions) then that uncertainty can be handled the usual way.

Suppose you want to either maximise u (the number of paperclips) or -u; you don't know which, but will find out soon. Then, in any case, you want to gain control of the paperclip factories...
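(A small sketch of why that instrumental step looks attractive under both hypotheses; the candidate actions, the 50/50 split, and the payoff numbers below are hypothetical illustrations, not anything from the post:)

    # Score a few candidate first moves under each hypothesis about the utility
    # function. Actions and payoffs are made-up illustrative numbers.
    hypotheses = {"+u": 0.5, "-u": 0.5}   # uncertain which sign the utility has

    # payoff[action][hypothesis]: how well the action serves that hypothesis's goal
    payoff = {
        "make paperclips now":       {"+u": +1, "-u": -1},
        "destroy paperclips now":    {"+u": -1, "-u": +1},
        "seize paperclip factories": {"+u": +5, "-u": +5},  # useful either way
    }

    def expected(action):
        return sum(p * payoff[action][h] for h, p in hypotheses.items())

    best = max(payoff, key=expected)
    print(best, expected(best))   # 'seize paperclip factories' 5.0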

Comment author: Mark_Friedenbach 13 February 2015 05:18:43PM *  2 points [-]

They tend not to be stable.

Yes, well that is a tautology. What do you mean by stable? I assume you mean value-stable, which can be interpreted as maximizes-the-same-function-over-time. Something which does not behave as a utility maximizer therefore is pretty much by definition not "stable". By technical definition, at least.

My point was more that this "instability" is in fact the desirable outcome -- people wouldn't want technical-stability, they'd want perhaps a heuristic machine with sensible defaults and rational update procedures.

Comment author: Stuart_Armstrong 14 February 2015 10:35:03PM 0 points [-]

There are other ways of interpreting value stability; a satisficer is one example. But those don't tend to be stable: http://lesswrong.com/lw/854/satisficers_want_to_become_maximisers/

people wouldn't want technical-stability, they'd want perhaps a heuristic machine with sensible defaults and rational update procedures.

And would those defaults and update procedures remain stable themselves?

Comment author: polymathwannabe 13 February 2015 01:00:43PM -1 points [-]

This sounded to me like being ruled by two Roman consuls, each of whom can override the other's decisions. A part of me likes the idea.

Comment author: Stuart_Armstrong 13 February 2015 02:28:02PM 3 points [-]

It's more like: one Roman consul writes the constitution that the other must follow.

Comment author: djm 13 February 2015 05:09:39AM 0 points [-]

I think the idea of having additional agents B (and C) to act as a form of control is definitely worth pursuing, though I am not clear how it would be implemented.

Is 'w' just random noise added to the max value of u?

If so, would this just act as a limiter and eventually it would find a result close to the original max utility anyway once the random noise falls close to zero?

Comment author: Stuart_Armstrong 13 February 2015 12:13:24PM 0 points [-]

Specifying v is part of the challenge. But by "noise" I mean a whole other utility function added permanently onto u. It would not "fall"; it would be a permanent feature of v.
