
Eliezer_Yudkowsky comments on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) - Less Wrong Discussion

11 Post author: lukeprog 13 February 2011 10:09AM




Comment author: Eliezer_Yudkowsky 13 February 2011 08:59:33PM 4 points [-]

Can you give me one quick sentence on a concrete failure mode of CEV?

Comment author: cousin_it 13 February 2011 11:08:01PM *  6 points [-]

I'm confused by your asking such questions. Roko's basilisk is a failure mode of CEV. I'm not aware of any work by you or other SIAI people that addresses it, never mind work that would prove the absence of other, yet undiscovered "creative" flaws.

Comment author: Eliezer_Yudkowsky 14 February 2011 06:43:09AM 4 points [-]

Roko's original proposed basilisk is not and never was the problem in Roko's post. I don't expect it to be part of CEV, and it would be caught by generic procedures meant to prevent CEV from running if 80% of humanity turns out to be selfish bastards, like the Last Jury procedure (as renamed by Bostrom) or extrapolating a weighted donor CEV with a binary veto over the whole procedure.

EDIT: I affirm all of Nesov's answers (that I've seen so far) in the threads below.

Comment author: cousin_it 14 February 2011 01:56:17PM *  10 points [-]

wedrifid is right: if you're now counting on failsafes to stop CEV from doing the wrong thing, that means you could apply the same procedures to any other proposed AI, so the real value of your life's work is in the failsafe, not in CEV. What happened to all your clever arguments saying you can't put external chains on an AI? I just don't understand this at all.

Comment author: wedrifid 15 February 2011 08:53:11AM *  5 points [-]

wedrifid is right: if you're now counting on failsafes to stop CEV from doing the wrong thing, that means you could apply the same procedures to any other proposed AI, so the real value of your life's work is in the failsafe, not in CEV.

Since my name was mentioned I had better confirm that I generally agree with your point but would have left out this sentence:

What happened to all your clever arguments saying you can't put external chains on an AI?

I don't disagree with the principle of having a failsafe - and don't think it is incompatible with the aforementioned clever arguments. But I do agree that "but there is a failsafe" is an utterly abysmal argument in favour of preferring CEV<humanity> over an alternative AI goal system.

I just don't understand this at all.

Tell me about it. With most people, if they kept asking the same question when the answer was staring them in the face, and then acted oblivious as it was told to them repeatedly, I would dismiss them as either disingenuous or (possibly selectively) stupid in short order. But, to borrow wisdom from HP:MoR:

... that just doesn't sound like /Eliezer's/ style.

...but you can only think that thought so many times, before you start to wonder about the trustworthiness of that whole 'style' concept.

Comment author: Vladimir_Nesov 14 February 2011 02:53:58PM *  6 points [-]

Any given FAI design can turn out to be unable to do the right thing, which corresponds to tripping the failsafes; but to be an FAI it must also be potentially capable (for all we know) of doing the right thing. An adequate failsafe would just turn off an ordinary AGI immediately, so failsafes won't work as an AI-in-chains FAI solution. You can't make an AI do the right thing just by adding failsafes; you also need to have a chance of winning.

Comment author: Eliezer_Yudkowsky 14 February 2011 04:29:26PM 1 point [-]

Affirmed.

Comment author: ciphergoth 14 February 2011 08:14:27AM 5 points [-]

Is the Last Jury written up anywhere? It's not in the draft manuscript I have.

Comment author: gwern 18 July 2011 03:35:49AM 3 points [-]

I assume Last Jury is just the Last Judge from CEV but with majority voting among n Last Judges.

Comment author: wedrifid 14 February 2011 08:16:00AM *  5 points [-]

it would be caught by generic procedures meant to prevent CEV from running if 80% of humanity turns out to be selfish bastards

I too am confused by your asking of such questions. Your own "80% of humanity turns out to be selfish bastards" gives a pretty good general answer to the question already.

"But we will not run it if it is bad" seems like it could be used to reply to just about anything. Sure, it is good to have safety measures no matter what you are doing, but not running it doesn't make CEV<humanity> desirable.

Comment author: XiXiDu 14 February 2011 11:30:14AM 3 points [-]

I'm completely confused now. I thought CEV was right by definition? If "80% of humanity turns out to be selfish bastards" then it will extrapolate on that. If we start to cherry pick certain outcomes according to our current perception, why run CEV at all?

Comment author: Vladimir_Nesov 14 February 2011 11:56:39AM 3 points [-]

CEV is not right by definition, it's only well-defined given certain assumptions that can fail. It should be designed so that if it doesn't shut down, then it's probably right.

Comment author: Tyrrell_McAllister 14 February 2011 05:58:35PM 4 points [-]

Sincere question: Why would "80% of humanity turns out to be selfish bastards" violate one of those assumptions? Is the problem the "selfish bastard" part? Or is it that the "80%" part implies less homogeneity among humans than CEV assumes?

Comment author: wedrifid 15 February 2011 02:34:17AM 1 point [-]

Why would "80% of humanity turns out to be selfish bastards" violate one of those assumptions?

It would certainly seem that 80% of humanity turning out to be selfish bastards is compatible with CEV<humanity> being well defined, but not with being 'right'. This does not technically contradict anything in the grandparent (which is why I didn't reply with the same question myself). It does, perhaps, go against the theme of Nesov's comments.

Basically, and as you suggest, either it must be acknowledged that 'not well defined' and 'possibly evil' are two entirely different problems or something that amounts to 'humans do not want things that suck' must be one of the assumptions.

Comment author: XiXiDu 15 February 2011 09:52:51AM 1 point [-]

It would certainly seem that 80% of humanity turning out to be selfish bastards is compatible with CEV<humanity> being well defined, but not with being 'right'.

I suppose you have to comprehend Yudkowsky's metaethics to understand that sentence. I still don't get what kind of 'right' people are talking about.

Comment author: wedrifid 15 February 2011 10:06:46AM *  7 points [-]

I still don't get what kind of 'right' people are talking about.

Very similar to your 'right', for all practical purposes, with a slight difference in how it is described. You describe (if I recall) 'right' as being "in accordance with XiXiDu's preferences". Using Eliezer's style of terminology, you would instead describe 'right' as more like a photograph of what XiXiDu's preferences are, without them necessarily including any explicit reference to XiXiDu.

In most cases it doesn't really matter. It starts to matter once people start saying things like "But what if XiXiDu could take a pill that made him prefer that he eat babies? Would that mean it became right? Should XiXiDu take the pill?"

By the way, 'right' would also mean what the photo looks like after it has been airbrushed a bit in photoshop by an agent better at understanding what we actually want than we are at introspection and communication. So it's an abstract representation of what you would want if you were smarter and more rational but still had your preferences.

Also note that Eliezer sometimes blurs the line between 'right' meaning what he would want and what some abstract "all of humanity" would want.

Comment author: wedrifid 14 February 2011 12:15:16PM *  4 points [-]

I'm completely confused now. I thought CEV was right by definition? If "80% of humanity turns out to be selfish bastards" then it will extrapolate on that.

No, CEV<wedrifid> is right by definition. When CEV is used as shorthand for "the coherent extrapolated volitions of all of humanity" as is the case there then it is quite probably not right at all. Because many humans, to put it extremely politely, have preferences that are distinctly different to what I would call 'right'.

If we start to cherry pick certain outcomes according to our current perception, why run CEV at all?

Yes, that would be pointless; it would be far better to compare the outcomes to CEV<group_I_identify_with_sufficiently> (and then just use the latter!). The purpose of doing CEV<humanity> at all is for signalling and cooperation.

Comment author: steven0461 14 February 2011 07:44:23PM 2 points [-]

Because many humans, to put it extremely politely, have preferences that are distinctly different to what I would call 'right'.

Before or after extrapolation? If the former then why does that matter, if the latter then how do you know?

Comment author: wedrifid 15 February 2011 02:22:09AM *  3 points [-]

Before or after extrapolation? If the former then why does that matter, if the latter then how do you know?

The former, inasmuch as it allows inferences about the latter. I don't need to know with any particular confidence for the purposes of the point. The point was to illustrate possible (and overwhelmingly obvious) failure modes.

Hoping that CEV<humanity> is desirable rather than outright unfriendly isn't a particularly good reason to consider it. It is going to result in outcomes that are worse, from the perspective of whoever is running the AGI, than CEV<that person> or CEV<group more closely identified with>.

The purpose of doing CEV<humanity> at all is for signalling and cooperation (or, possibly, outright confusion).

Comment author: XiXiDu 14 February 2011 02:17:13PM 1 point [-]

The purpose of doing CEV<humanity> at all is for signalling and cooperation.

Do you mean it is simply an SIAI marketing strategy and that it is not what they are actually going to do?

Comment author: wedrifid 14 February 2011 02:44:17PM 4 points [-]

Do you mean it is simply an SIAI marketing strategy and that it is not what they are actually going to do?

Signalling and cooperation can include actual behavior.

Comment author: Vladimir_Nesov 14 February 2011 11:16:41AM 3 points [-]

"But we will not run it if it is bad" seems like it could be used to reply to just about anything. Sure, it is good to have safety measures no matter what you are doing, but not running it doesn't make CEV<humanity> desirable.

In cases where the assumptions fail and CEV ceases to be predictably good, the safety measures shut it down, so nothing happens. In cases where the assumptions hold, it works. As a result, CEV has good expected utility, and gives us a chance to try again with a different design if it fails.

Comment author: wedrifid 14 February 2011 11:56:05AM 3 points [-]

This does not seem to weaken the position you quoted in any way.

Failsafe measures are a great idea. They just don't do anything to privilege CEV<humanity> + failsafe over anything_else + failsafe.

Comment author: Vladimir_Nesov 14 February 2011 12:09:40PM 1 point [-]

Failsafe measures are a great idea. They just don't do anything to privilege CEV<humanity> + failsafe over anything_else + failsafe.

Yes. They make sure that [CEV<humanity> + failsafe] is not worse than not running any AIs. Uncertainty about whether CEV<humanity> works makes expected [CEV<humanity> + failsafe] significantly better than doing nothing. The presence of potential controlled-shutdown scenarios doesn't argue for the worthlessness of the attempt, even where detailed awareness of those scenarios could be used to improve the plan.

Comment author: wedrifid 14 February 2011 12:21:19PM *  0 points [-]

I'm actually not even sure whether you are trying to disagree with me or not but once again, in case you are, nothing here weakens my position.

Comment author: Vladimir_Nesov 14 February 2011 12:31:42PM 0 points [-]

"Not running it" does make [CEV<humanity> + failsafe] desirable as compared to doing nothing, even in the face of problems with [CEV<humanity>], and nobody is going to run just [CEV<humanity>]. So most arguments for the presence of problems in CEV<humanity>, if they are met with adequate failsafe specifications (which is far from a template for replying to anything; failsafes are not easy), do indeed lose a lot of traction. Besides, what are those arguments for? One needs a suggestion for improvement, and failsafes are intended to make it so that doing nothing is not an improvement, even though improvements over any given state of the plan would be dandy.

Comment author: wedrifid 14 February 2011 01:01:46PM *  0 points [-]

"Not running it" does make [CEV<humanity> + failsafe] desirable, as compared to doing nothing

Yes, this is trivially true and not currently disputed by anyone here. Nobody is suggesting doing nothing. Doing nothing is crazy.

Comment author: wedrifid 14 February 2011 11:05:12AM 2 points [-]

Roko's original proposed basilisk is not and never was the problem in Roko's post.

Of course, Roko did not originally propose a basilisk at all. Just a novel solution to an obscure game theory problem.

Comment deleted 14 February 2011 11:13:28AM *  [-]
Comment author: Vladimir_Nesov 14 February 2011 11:59:13AM 1 point [-]

If CEV has a serious bug, it won't correctly implement anyone's volition, and so someone's volition saying that CEV shouldn't have that bug won't help.

Comment author: lukeprog 13 February 2011 09:02:58PM 2 points [-]

Not until I get to that part of the writing and research, no.

Comment author: lukeprog 14 February 2011 04:56:31AM 5 points [-]
Comment author: Adele_L 13 November 2013 05:18:16AM 0 points [-]

Has this been published anywhere yet?

Comment author: lukeprog 13 November 2013 04:58:30PM 1 point [-]

A related thing that has since been published is Ideal Advisor Theories and Personal CEV.

I have no plans to write the book; see instead Bostrom's far superior Superintelligence, forthcoming.

Comment author: Dorikka 14 February 2011 06:09:46AM *  1 point [-]

Extrapolated humanity decides that the best possible outcome is to become the Affront. Now, if the FAI put everyone in a separate VR and tricked each person into believing that he was acting all Affront-like, then everything would be great -- everyone would be content. However, people don't just want the experience of being the Affront -- everyone agrees that they want to be truly interacting with other sentiences, which will often feel the brunt of each other's coercive action.

Comment author: Eliezer_Yudkowsky 14 February 2011 06:40:23AM 3 points [-]

Original version of grandparent contained, before I deleted it, "Besides the usual 'Eating babies is wrong, what if CEV outputs eating babies, therefore a better solution is CEV plus code that outlaws eating babies.'"

Comment author: lukeprog 14 February 2011 06:20:09AM *  2 points [-]

Dorikka,

I don't understand this. If the singleton's utility function were written such that its highest value was for humans to become the Affront, then making it the case that humans believed they were the Affront while not being the Affront would not satisfy the utility function. So why would the singleton do such a thing?

Comment author: Dorikka 15 February 2011 02:45:39AM 2 points [-]

I don't think that my brain was working optimally at 1am last night.

My first point was that our CEV might decide to go Baby-Eater, and so the FAI should treat the caring-about-the-real-world-state part of its utility function as a mere preference (like chocolate ice cream), and pop humanity into a nicely designed VR (though I didn't have the precision of thought necessary to put it into such language). However, it's pretty absurd for us to be telling our CEV what to do, considering that they'll have much more information than we do and much more refined thinking processes. I actually don't think that our Last Judge should do anything more than watch for coding errors (as in, we forgot to remove known psychological biases when creating the CEV).

My second point was that the FAI should also slip us into a VR if we desire a world-state in which we defect from each other (with similar results as in the prisoner's dilemma). However, the counterargument from point 1 also applies to this point.

Comment author: nazgulnarsil 16 February 2011 01:21:39AM 2 points [-]

I have never understood what is wrong with the amnesia-holodecking scenario. (is there a proper name for this?)

Comment author: Dorikka 16 February 2011 02:20:56AM 3 points [-]

If you want to, say, stop people from starving to death, would you be satisfied with being plopped on a holodeck with images of non-starving people? If so, then your stop-people-from-starving-to-death desire is not a desire to optimize reality into a smaller set of possible world-states, but simply a desire to have a set of sensations so that you believe starvation does not exist. The two are really different.

If you don't understand what I'm saying, the first two paragraphs of this comment might explain it better.

Comment author: nazgulnarsil 16 February 2011 02:25:44AM *  0 points [-]

thanks for clarifying. I guess I'm evil. It's a good thing to know about oneself.

Comment author: Dorikka 16 February 2011 02:30:23AM 0 points [-]

Uh, that was a joke, right?

Comment author: nazgulnarsil 16 February 2011 06:19:09AM 0 points [-]

no.

Comment author: Dorikka 16 February 2011 11:53:37PM 0 points [-]

What definition of evil are you using? I'm having trouble understanding why (how?) you would declare yourself evil, especially evil_nazgulnarsil.

Comment author: nazgulnarsil 17 February 2011 06:06:07AM 4 points [-]

i don't care about suffering independent of my sensory perception of it causing me distress.

Comment author: Dorikka 17 February 2011 03:31:49PM 0 points [-]

Oh. In that case, it might be more precise to say that your utility function does not assign positive or negative utility to the suffering of others (if I'm interpreting your statement correctly). However, I'm curious about whether this statement holds true for you at extremes, so here's a hypothetical.

I'm going to assume that you like ice cream. If you don't like any sort of ice cream, substitute in a certain quantity of your favorite cookie. If you could get a scoop of ice cream (or a cookie) for free at the cost of a million babies' thumbs being cut off, would you take the ice cream/cookie?

If not, then you assign a non-zero utility to others suffering, so it might be true that you care very little, but it's not true that you don't care at all.

Comment author: Sniffnoy 16 February 2011 09:03:04AM 0 points [-]

Well, it's essentially equivalent to wireheading.

Comment author: nazgulnarsil 16 February 2011 10:16:45AM 0 points [-]

which I also plan to do if everything goes tits-up.