Stuart_Armstrong comments on Friendly AI ideas needed: how would you ban porn? - Less Wrong

Post author: Stuart_Armstrong 17 March 2014 06:00PM

Comment author: Stuart_Armstrong 24 March 2014 10:29:20AM 1 point

The more concepts we manage to make clear, the more likely it is that "look, this is..." is going to work.

Comment author: Squark 25 March 2014 12:33:11PM -1 points

Well, I think the concept we have to make clear is "agent with given utility function". We don't need any human-specific concepts, and they're hopelessly complex anyway: let the FAI figure out the particulars on its own. Moreover, the concept of an "agent with given utility function" is something I believe I'm already relatively near to formalizing.

Comment author: Eugine_Nier 27 March 2014 05:17:27AM 1 point

If the agent in question has a well-defined utility function, why is he deferring to the FAI to explain it to him?

Comment author: Squark 27 March 2014 07:03:55PM 0 points

Because he is bad at introspection and his only access to the utility function is through a noisy low-bandwidth sensor called "intuition".

Comment author: Stuart_Armstrong 25 March 2014 01:24:25PM 1 point

let the FAI figure out the particulars on its own.

Again, the more we can do ahead of time, the more likely it is that the FAI will figure these things out correctly.

Comment author: Squark 25 March 2014 07:18:15PM *  -1 points

Why do you think the FAI can figure these things out incorrectly, assuming we got "agent with given utility function" right? Maybe we can save it time by providing it with more initial knowledge. However, since the FAI has superhuman intelligence, it would probably take us much longer to generate that knowledge than it would take the FAI. I think that generating an amount of knowledge which would be non-negligible from the FAI's point of view would take longer than the timescale on which UFAI risk becomes significant. Therefore, in practice I don't think we can wait for it before building the FAI.

Comment author: Stuart_Armstrong 26 March 2014 01:43:22PM 1 point

Why do you think the FAI can figure these things out incorrectly

Because values are not physical facts, and cannot be deduced from mere knowledge.

Comment author: Squark 26 March 2014 08:51:30PM *  -1 points

I'm probably explaining myself poorly.

I'm suggesting that there should be a mathematical operator which takes a "digitized" representation of an agent, either in white-box form (e.g. uploaded human brain) or in black-box form (e.g. chatroom logs) and produces a utility function. There is nothing human-specific in the definition of the operator: it can as well be applied to e.g. another AI, an animal or an alien. It is the input we provide the operator that selects a human utility function.
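
To be a bit more concrete, here is a minimal sketch of the interface I have in mind. The names and types are hypothetical and purely illustrative; this is not an implementation, and constructing the operator is of course the hard part:

    from typing import Callable, List, Protocol, Tuple, Union

    class WhiteBoxAgent(Protocol):
        """White-box form: full internal access, e.g. an uploaded human brain we can step forward."""
        def step(self, observation: bytes) -> bytes: ...

    # Black-box form: just a record of (observation, action) pairs, e.g. chatroom logs.
    BlackBoxLog = List[Tuple[bytes, bytes]]

    # A utility function maps (descriptions of) world-histories to real numbers.
    UtilityFunction = Callable[[List[bytes]], float]

    def infer_utility(agent: Union[WhiteBoxAgent, BlackBoxLog]) -> UtilityFunction:
        """The proposed operator: a digitized agent in, a utility function out.

        Nothing human-specific appears in the signature; the same operator could be
        applied to an animal, an alien or another AI.
        """
        raise NotImplementedError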

Comment author: asr 31 March 2014 02:55:06PM *  1 point

I don't understand how such an operator could work.

Suppose I give you a big messy data file that specifies neuron state and connectedness. And then I give you a big complicated finite-element simulator that can accurately predict what a brain would do, given some sensory input. How do you turn that into a utility function?

I understand what it means to use utility as a model of human preference. I don't understand what it means to say that a given person has a specific utility function. Can you explain exactly what the relationship is between a brain and this abstract utility function?

Comment author: Squark 31 March 2014 08:14:23PM -1 points

See the last paragraph in this comment.

Comment author: asr 01 April 2014 04:22:39AM *  1 point

I don't see how that addresses the problem. You're linking to a philosophical answer, and this is an engineering problem.

The claim you made, some posts ago, was "we can set an AI's goals by reference to a human's utility function." Many folks objected that humans don't really have utility functions. My objection was "we have no idea how to extract a utility function, even given complete data about a human's brain." Defining "utility function" isn't a solution. If you want to use "the utility function of a particular human" in building an AI, you need not only a definition, but a construction. To be convincing in this conversation, you would need to at least give some evidence that such a construction is possible.

You are trying to use, as a subcomponent, something we have no idea how to build and that seems possibly as hard as the original problem. And this isn't a good way to do engineering.

Comment author: Squark 01 April 2014 06:08:14AM 0 points

The way I expect an AGI to work is by receiving a mathematical definition of its utility function as input. So there is no need for a "construction". I don't even know what a "construction" is, in this context.

Note that in my formal definition of intelligence, we can use any appropriate formula* in the given formal language as a utility function, since it all comes down to computing logical expectation values. In fact I expect a real seed AGI to work through computing logical expectation values (by an approximate method, probably some kind of Monte Carlo).
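
To illustrate only the "approximate expectation values by sampling" idea - not the logical-probability machinery itself - a toy Monte Carlo estimator might look like the following sketch, where the sampler and the utility function are hypothetical stand-ins:

    import random

    def expected_utility(sample_history, utility, n_samples=10000):
        """Crude Monte Carlo estimate of E[U]: draw world-histories from the
        agent's current distribution and average their utilities."""
        return sum(utility(sample_history()) for _ in range(n_samples)) / n_samples

    # Toy usage: a "world-history" is a single fair coin flip; utility is 1 for
    # heads and 0 for tails, so the estimate should come out near 0.5.
    print(expected_utility(lambda: random.random() < 0.5, float))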

Of course, if the AGI design we will come up with is only defined for a certain category of utility functions then we need to somehow project into this category (assuming the category is rich enough for the projection not to lose too much information). The construction of this projection operator indeed might be very difficult.

* In practice, I formulated the definition with utility = Solomonoff expectation value of something computable. But this restriction isn't necessary. Note that my proposal for defining logical probabilities admits self-reference, in the sense that the reasoning system is allowed to speak of the probabilities it assigns (as in Christiano et al).

Comment author: Stuart_Armstrong 31 March 2014 11:02:03AM 1 point

Humans don't follow anything like a utility function, which is a first problem, so you're asking the AI to construct something that isn't there. Then you have to knit this together into a humanity utility function, which is very non-trivial (this is one feeble and problematic way of doing this: http://lesswrong.com/r/discussion/lw/8qb/cevinspired_models/).

The other problem is that you haven't actually solved many of the hard problems. Suppose the AI decides to kill everyone, then replay, in an endless loop, the one upload it has, having a marvellous experience. Why would it not do that? We want the AI to correctly balance our higher order preferences (not being reduced to a single mindless experience) with our lower order preferences (being happy). But that desire is itself a higher order preference - it won't happen unless the AI already decides that higher order preferences trump lower ones.

And that was one example I just thought of. It's not hard to come up with ways for the AI to do something stupid in this model (eg: replace everyone with chatterbots that describe their ever-increasing happiness and fulfilment) that are compatible with the original model but clearly stupid - clearly stupid to our own judgement, though, not to the AI's.

You may object that these problems won't happen - but you can't be confident of this, as you haven't defined your solution formally, and are relying on common sense to reject those pathological solutions. But nowhere have you established that the AI has common sense, or specified how it will use it. The more details you put into your model, I think, the more apparent the problems will become.

Comment author: Squark 31 March 2014 11:37:12AM *  0 points

Thank you for the thoughtful reply!

Deducing the correct utility of a utility maximiser is one thing (which has a low level of uncertainty, higher if the agent is hiding stuff).

In the white-box approach it can't really hide. But I guess it's rather tangential to the discussion.

Assigning a utility to an agent that doesn't have one is quite another... Humans don't follow anything like a utility function, which is a first problem, so you're asking the AI to construct something that isn't there.

What do you mean by "follow a utility function"? Why do you think humans don't do it? If it isn't there, what does it mean to have a correct solution to the FAI problem?

The robot is a behavior-executor, not a utility-maximizer.

The main problem with Yvain's thesis is in the paragraph:

Again, give the robot human level intelligence. Teach it exactly what a hologram projector is and how it works. Now what happens? Exactly the same thing - the robot executes its code, which says to scan the room until its camera registers blue, then shoot its laser.

What does Yvain mean by "give the robot human level intelligence"? If the robot's code remained the same, in what sense does it have human level intelligence?

Then you have to knit this together into a humanity utility function, which is very non-trivial.

This is the part of the CEV proposal which always seemed redundant to me. Why should we do it? If you're designing the AI, why wouldn't you use your own utility function? At worst, an average utility function of the group of AI designers? Why do we want / need the whole humanity there? Btw, I would obviously prefer my utility function in the AI but I'm perfectly willing to settle on e.g. Yudkowsky's.

Suppose the AI decides to kill everyone, then replay, in an endless loop, the one upload it has, having a marvellous experience... the AI does something stupid in this model (eg: replaces everyone with chatterbots that describe their ever-increasing happiness and fulfilment)...

It seems that you're identifying my proposal with something like "maximize pleasure". The latter is a notoriously bad idea, as was discussed endlessly. However, my proposal is completely different. The AI wouldn't do something the upload wouldn't do because such an action is opposed to the upload's utility function.

You may object that these problems won't happen - but you can't be confident of this, as you haven't defined your solution formally...

Actually, I'm not far from it (at least I don't think I'm further from it than CEV is). Note that I have already formally defined I(A, U), where I = intelligence, A = agent, U = utility function. Now we can do something like "U(A) is defined to be the U such that the probability that I(A, U) > I(R, U) for a random agent R is maximal". Maybe it's more correct to use something like a thermal ensemble with I(A, U) playing the role of energy: I don't know, I don't claim to have solved it all already. I just think it's a good research direction.
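
Written out slightly more explicitly (with the caveat that the measure over random agents, the space of candidate utility functions and the "temperature" are all left unspecified here, and pinning them down is where much of the remaining work lies), the two variants are roughly:

    U(A) \;:=\; \operatorname*{arg\,max}_{U}\; \Pr_{R \sim \mu}\bigl[\, I(A,U) > I(R,U) \,\bigr]

    P(U \mid A) \;\propto\; \exp\bigl(\, I(A,U) / T \,\bigr)

where \mu is some prior over agents R and T plays the role of temperature in the thermal-ensemble version.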

Comment author: Stuart_Armstrong 31 March 2014 12:39:20PM 1 point

What do you mean by "follow a utility function"? Why do you think humans don't do it?

Humans are neither independent nor transitive. Human preferences change over time, depending on arbitrary factors, including how choices are framed. Humans suffer because of things they cannot affect, and humans suffer because of details of their probability assessment (eg ambiguity aversion). That bears repeating - humans have preferences over their state of knowledge. The core of this is that "assessment of fact" and "values" are not disconnected in humans, not disconnected at all. Humans feel good when a team they support wins, without them contributing anything to the victory. They will accept false compliments, and can be flattered. Social pressure changes most values quite easily.

Need I go on?

If it isn't there, what does it mean to have a correct solution to the FAI problem?

A utility function which, if implemented by the AI, would result in a positive, fulfilling, worthwhile existence for humans. Even if humans had a utility function, it's not clear that a ruling FAI should have the same one, incidentally. The utility is for the AI, and it aims to capture as much of human value as possible - it might just be the utility of a nanny AI (make reasonable efforts to keep humanity from developing dangerous AIs, going extinct, or regressing technologically; otherwise, let them be).

Comment author: Squark 31 March 2014 01:18:33PM -1 points

What do you mean by "follow a utility function"? Why do you think humans don't do it?

Humans are neither independent nor transitive...

You still haven't defined "follow a utility function". Humans are not ideal rational optimizers of their respective utility functions; it doesn't mean they don't have them. Deep Blue often plays moves which are not ideal; nevertheless, I think it's fair to say it optimizes winning. If you make intransitive choices, it doesn't mean your terminal values are intransitive. It means your choices are not optimal.

Human preferences change over time...

This is probably the case. However, the changes are slow; otherwise humans wouldn't behave coherently at all. The human utility function is only defined approximately, but the FAI problem only makes sense in the same approximation. In any case, if you're programming an AI, you should equip it with the utility function you have at that moment.

...humans have preference over their state of knowledge...

Why do you think it is inconsistent with having a utility function?

...what does it mean to have a correct solution to the FAI problem?

A utility function which, if implemented by the AI, would result in a positive, fulfilling, worthwhile existence for humans.

How can you know that a given utility function has this property? How do you know the utility function I'm proposing doesn't have this property?

Even if humans had a utility function, it's not clear that a ruling FAI should have the same one, incidentally.

Isn't it? Assume your utility function is U. Suppose you have the choice to create a superintelligence optimizing U or a superintelligence optimizing something other than U, let's say V. Why would you choose V? Choosing U will obviously result in an enormous expected increase of U, which is what you want to happen, since you're a U-maximizing agent. Choosing V will almost certainly result in a lower expectation value of U: if the V-AI chooses a strategy X that leads to higher expected U than the strategy that would be chosen by a U-AI, then it's not clear why the U-AI wouldn't choose X.
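
In other words, writing \mathbb{E} for the expectation under your current beliefs, the claim is just that

    \mathbb{E}\bigl[\, U \mid \text{build the } U\text{-AI} \,\bigr] \;\ge\; \mathbb{E}\bigl[\, U \mid \text{build the } V\text{-AI} \,\bigr]

since any strategy available to the V-AI that raised expected U would also be available to, and chosen by, the U-AI.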

Comment author: pengvado 27 March 2014 12:10:21AM *  0 points

There are many such operators, and different ones give different answers when presented with the same agent. Only a human utility function distinguishes the right way of interpreting a human mind as having a utility function from all of the wrong ways of interpreting a human mind as having a utility function. So you need to get a bunch of Friendliness Theory right before you can bootstrap.

Comment author: Squark 27 March 2014 07:02:20PM 0 points

Why do you think there are many such operators? Do you believe the concept of "utility function of an agent" is ill-defined (assuming the "agent" is actually an intelligent agent rather than e.g. a rock)? Do you think it is possible to interpret a paperclip maximizer as having a utility function other than maximizing paperclips?

Comment author: Stuart_Armstrong 31 March 2014 11:09:41AM 1 point

Deducing the correct utility of a utility maximiser is one thing (which has a low level of uncertainty, higher if the agent is hiding stuff). Assigning a utility to an agent that doesn't have one is quite another.

See http://lesswrong.com/lw/6ha/the_blueminimizing_robot/ Key quote:

The robot is a behavior-executor, not a utility-maximizer.

Comment author: Squark 31 March 2014 11:58:18AM 0 points

Replied in the other thread.