ESRogs comments on The Ape Constraint discussion meeting. - Less Wrong

Post author: Douglas_Reay 28 November 2013 11:22AM




Comment author: ESRogs 30 November 2013 09:49:08AM 2 points

I agree with you that there's no known foolproof wording. As for what Eliezer had in mind, though, here's what the LessWrong wiki has to say on CEV:

In calculating CEV, an AI would predict what an idealized version of us would want, "if we knew more, thought faster, were more the people we wished we were, had grown up farther together." (http://wiki.lesswrong.com/wiki/CEV)

So it's not just "be good to humans," but rather "do what (idealized) humans would want you to do." I think it's an open question whether those two would amount to the same thing.