wedrifid comments on SotW: Be Specific - Less Wrong

37 Post author: Eliezer_Yudkowsky 03 April 2012 06:11AM


Comment author: wedrifid 07 April 2012 09:23:19AM 1 point [-]

Well, it seems hopeless to me, maybe it also seems hopeless to him.

Perceived hopelessness would not stop him from attempting it anyway. He would abandon any detail and fundamentally change his whole strategy if necessary, but some way of specifying values for an FAI based on human values is needed if you want to create an FAI. And that goal is something he has concluded is absolutely necessary.

All this said, it doesn't seem like working out CEV (or an alternatively named solution to the same problem) is the most difficult task when it comes to creating predictably safe and Friendly GAIs.

If you take the maxim that human values are complex, then you need mind uploads to run CEV; not just that, but heavily (and likely unethically) processed mind uploads that are being fed hypothetical situations (that's one way to make it non-vague). Clearly that's not what you want.

Simulation is one way of evaluating how much a human may like an outcome, but not only is it not necessary, it is not even sufficient for the purpose of calculating CEV. Reasoning logically about the values of a creature can be done without running it, and most likely needs to be done in order to do the 'extrapolating' and 'cohering' parts, rather than just brute-force evaluation of volition.

Comment author: Dmytry 07 April 2012 09:41:55AM *  0 points [-]

This, however, requires the values not to be very complex - some form of moral absolutism is needed so that you can compress the values from one process into another process.

but some way of specifying values for an FAI based on human values is needed if you want to create an FAI. And that goal is something he has concluded is absolutely necessary.

One shouldn't narrow the search too much too early. It may be that there is a common base C from which our morals result, and from which the agreeable FAI morals can be derived. You can put in as terminal goals the instrumental game-theoretic goals of one type of good-guy agent in an ecology, for example, which you can derive with a narrow AI. Then you may get a very anthropomorphic-looking FAI that has enough common sense not to feed utility monsters, not to get Pascal's-mugged, etc. It won't align with our values precisely, but it can be within the range that we call friendly.

Comment author: wedrifid 07 April 2012 09:59:37AM *  1 point [-]

This, however, requires the values not to be very complex - some form of moral absolutism is needed so that you can compress the values from one process into another process.

It requires that the values not be very complex to a superintelligence. "Complex" takes on a whole different meaning in that context. I mean, it is quite probable (given the redundancy and non-values-based components in our brain) that our values don't even take as many nodes to represent as the number of neurons in the brain. Get smart enough and that becomes child's play!

Comment author: Dmytry 07 April 2012 10:16:33AM -1 points [-]

It requires that the values not be very complex to a superintelligence. "Complex" takes on a whole different meaning in that context. I mean, it is quite probable (given the redundancy and non-values-based components in our brain) that our values don't even take as many nodes to represent as the number of neurons in the brain. Get smart enough and that becomes child's play!

Hmm. So it takes super-intelligence to understand the goal. So let me get this straight: this works by setting off a self improving AI that gets super intelligent and then becomes friendly? Or do we build a super-intelligence right off, super intelligent on its original computer?

And you still need whatever implements the values to be ethical to utilize in the ways in which it would be unethical to utilize a human mind.

Comment author: wedrifid 07 April 2012 10:28:55AM 1 point [-]

Hmm. So it takes super-intelligence to understand the goal.

I didn't say that. I did say that for "but it's complex!" to be an absolute limiting factor, it would be required to be complex even to an FAI.

So let me get this straight: this works by setting off a self improving AI that gets super intelligent and then becomes friendly?

It had better start Friendly; if it doesn't, you are just asking for trouble. Obviously it wouldn't be as good at being Friendly yet when it isn't particularly smart.

Or do we build a super-intelligence right off, super intelligent on its original computer?

That sounds implausible. We'd probably go extinct before we managed to pull that off.

Comment author: Dmytry 07 April 2012 10:49:21AM *  -1 points [-]

It had better start Friendly; if it doesn't, you are just asking for trouble. Obviously it wouldn't be as good at being Friendly yet when it isn't particularly smart.

Of course. Then we need to have some simple friendliness for when it is dumber. Let's look at CEV. Can I figure out what the extrapolated volition of mankind is? That's despite my having a hardware-assisted other-mind virtualization capability, aka "putting myself in others' shoes", which a non-mind-upload probably won't have. Hell, I'm not sure enough the extrapolated volition isn't a death wish.

Comment author: wedrifid 07 April 2012 11:15:17AM *  3 points [-]

Of course. Then we need to have some simple friendliness for when it is dumber.

A dumb (i.e. about as smart as us but far more rational) AI would, I assume, think along the lines of:

"So... I'm kinda dumb. How about I make myself smart before I fuck with stuff? So for now I'll do a basic analysis of what my humans seem to want and make sure I don't do anything drastic to damage that while I'm in my recursive improvement stage. For example I'm definitely not going to turn them all into computation. It don't take a genius to figure out they probably don't want that."

I.e. it is possible to maximise the expected value of a function without being able to perfectly calculate the details of said function, only approximate them. This involves taking into account the risk that you are missing something important, and also finding a way to improve your ability to calculate and search for maxima within the function without risking significant damage to the overall outcome. This doesn't necessarily require special-case programming, although the FAI developers may find special-case programming easier to make proofs about.

Have you got a better idea than that? If so, then probably the FAI would do that instead of what I just came up with after 2 seconds thought.
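(As a toy illustration of that strategy, entirely my own sketch and nothing from the thread or the CEV paper; the action names and numbers are made up. An agent that can only sample a noisy approximation of its utility function can still avoid drastic mistakes by maximising a pessimistic lower-bound estimate rather than the raw mean.)

```python
import random

random.seed(0)

# The agent's true utility function is hidden from it; it can only draw
# noisy evaluations. Scoring each action by (sample mean - penalty * spread)
# keeps it away from options it might be badly wrong about, while still
# letting it act rather than freeze.

TRUE_UTILITY = {"do_nothing": 0.0, "modest_gain": 1.0, "drastic_gamble": -10.0}

def noisy_estimate(action, n_samples=100):
    """Return the sample mean and spread of noisy utility evaluations."""
    samples = [TRUE_UTILITY[action] + random.gauss(0, 1) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    spread = (sum((s - mean) ** 2 for s in samples) / n_samples) ** 0.5
    return mean, spread

def cautious_choice(actions, risk_penalty=2.0):
    """Pick the action maximising a pessimistic estimate of its utility."""
    def score(action):
        mean, spread = noisy_estimate(action)
        return mean - risk_penalty * spread
    return max(actions, key=score)

print(cautious_choice(list(TRUE_UTILITY)))
```

The point of the penalty term is only that uncertainty itself counts against an option: the agent prefers the modest action it is confident about over anything drastic, which is the "don't do anything drastic while improving your estimates" behaviour in miniature.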

Hell, I'm not sure enough the extrapolated volition isn't a death wish.

I'm not sure either - specifically because the way different humans' values are aggregated is distinctly underspecified in CEV as Eliezer has ever discussed it. That is combined with implicitly (it's this implicit part that I particularly don't like) assuming that "all of humanity" is what CEV must be run on. I can't know that CEV<humanity> will not kill me. Even if it doesn't kill me, it is nearly tautologically true that CEV<people more like me> is better (in the subjectively objective sense of 'better').

This is one of two significant objections to Eliezer-memes that I am known to harp on from time to time.

Comment author: fubarobfusco 07 April 2012 05:50:14PM *  1 point [-]

That is combined with implicitly (it's this implicit part that I particularly don't like) assuming that "all of humanity" is what CEV must be run on. I can't know that CEV<humanity> will not kill me. Even if it doesn't kill me, it is nearly tautologically true that CEV<people more like me> is better (in the subjectively objective sense of 'better').

Here's the trouble, though: by the same reasoning, if someone is implementing CEV<white people> or CEV<Russian intellectuals> or CEV<Orthodox Gnostic Pagans> or any such, everyone who isn't a white person, Russian intellectual, or Orthodox Gnostic Pagan has a damned good reason to be worried that it'll kill them.

Now, it may turn out that CEV<Orthodox Gnostic Pagans> is sufficiently similar to CEV<humanity> that the rest of humanity needn't worry. But is that a safe bet for all of us who aren't Orthodox Gnostic Pagans?

Comment author: Incorrect 07 April 2012 06:35:51PM 0 points [-]

For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.

So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.

Comment author: wedrifid 07 April 2012 06:47:02PM 0 points [-]

For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.

YES! CEV is altruism inclusive. For some reason it is often really hard to make people understand that the altruism belongs inside the CEV calculation while the compromise-for-instrumental-purposes goes on the outside.

So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.

This is true all else being equal. (The 'all else' being specifically that you are just as likely to succeed in creating FAI<CEV<self>> as you are in creating FAI<CEV<whatever>>.)

Comment author: wedrifid 07 April 2012 06:11:02PM *  0 points [-]

Compromise is often necessary for the purpose of cooperation, and CEV<humanity> is a potentially useful Schelling point to agree upon. However, it should be acknowledged that these considerations are instrumental, or at least acknowledged that they are decisions to be made. Eliezer's discussion of the subject up until now has been completely innocent of even an awareness that anything other than 'humanity' could conceivably be plugged in to CEV. This is, as far as I am concerned, a bad thing.

Comment author: Incorrect 07 April 2012 06:39:03PM *  0 points [-]

This is, as far as I am concerned, a but thing.

Huh?

Comment author: Dmytry 07 April 2012 11:35:49AM *  -2 points [-]

A dumb (i.e. about as smart as us but far more rational) AI would, I assume, think along the lines of:

Sounds like a lot of common sense that is very difficult to derive rationally.

"So... I'm kinda dumb. How about I make myself smart before I fuck with stuff? So for now I'll do a basic analysis of what my humans seem to want and make sure I don't do anything drastic to damage that while I'm in my recursive improvement stage. For example I'm definitely not going to turn them all into computation. It don't take a genius to figure out they probably don't want that."

Have you got a better idea than that? If so, then probably the FAI would do that instead of what I just came up with after 2 seconds thought.

Just a little more anthropomorphizing and we'll be speaking of an AI that just knows what the moral thing to do is, innately, because he's such a good guy.

The 'basic analysis of what my humans seem to want' has fairly creepy overtones to it (testing-hypotheses style). On top of that, say you tell it, okay, just do whatever you think I would do if I thought faster, and it obliges: you are vaporized, because you would have gotten bored into suicide if you thought faster; your simple value system works like this. What exactly is wrong with that course of action? I don't think 'extrapolating' is well defined.

re: volition of mankind. Yep.

Comment author: wedrifid 07 April 2012 12:27:07PM *  1 point [-]

Sounds like a lot of common sense

It doesn't sound particularly like common sense - I'd guess that significantly less than half of humans would arrive at that as a cached 'common sense' solution.

that is very difficult to derive rationally.

It's an utterly trivial application of instrumental rationality. I can come up with it in 2 seconds. If the AI is as smart as I am (and with far fewer human biases) it can arrive at the solution as easily as I can. Especially after it reads every book on strategy that humans have written. Heck, it can read my comment and then decide whether it is a good strategy.

Artificial intelligences aren't stupid.

Just a little more anthropomorphizing and we'll be speaking of an AI that just knows what the moral thing to do is, innately, because he's such a good guy.

Or... not. That's utter nonsense. We have been explicitly describing AIs that have been programmed with terminal goals. The AI would then

The 'basic analysis of what my humans seem to want' has fairly creepy overtones to it (testing-hypotheses style). On top of that, say you tell it, okay, just do whatever you think I would do if I thought faster, and it obliges: you are vaporized, because you would have gotten bored into suicide if you thought faster; your simple value system works like this. What exactly is wrong with that course of action? I don't think 'extrapolating' is well defined.

CEV is well enough defined that it just wouldn't do that unless you actually do want it, in which case you, well, want it to do that and so have no cause to complain. Reading even the incomplete specification from 2004 is sufficient to tell us that a GAI that does that is not implementing something that can reasonably be called CEV. I must conclude that you are replying to a straw man (presumably due to not having actually read the materials you criticise).

Comment author: Dmytry 07 April 2012 01:52:07PM *  -2 points [-]

CEV is not defined to do what you as-is actually want, but to do what you would have wanted, even in circumstances when you as-is actually want something else, as the 2004 paper cheerfully explains.

In any case, once you assume such intent-understanding interpretative powers of AI, it's hard to demonstrate why instructing the AI in plain English to "Be a good guy. Don't do bad things" would not be a better shot.