wedrifid comments on The Friendly AI Game - Less Wrong

38 Post author: bentarm 15 March 2011 04:45PM


Comment author: wedrifid 16 March 2011 05:31:27AM 2 points [-]

Is this addressed to the coherent extrapolated volition of humankind, as expressed by SIAI?

Yes. The CEV<humankind> really could suck. There isn't a good reason to assume that particular preference system is a good one.

Comment author: ArisKatsaris 16 March 2011 11:21:12PM 0 points [-]

Good one according to which criteria? CEV<humankind> is perfect according to humankind's criteria if humankind were more intelligent and more sane than it currently is.

Comment author: wedrifid 25 March 2011 01:24:52AM *  1 point [-]

Good one according to which criteria?

Mine. (This is tautological.) Anything else that is kind of similar to mine would be acceptable.

CEV<humankind> is perfect according to humankind's criteria if humankind were more intelligent and more sane than it currently is.

Which is fine if 'sane' is defined as 'more like what I would consider sane'. But that's because 'sane' carries all sorts of loaded connotations with respect to actual preferences, and humanity's preferences may very well not qualify as sane.

Comment author: HughRistik 16 March 2011 11:52:01PM 0 points [-]

How about CEV<the smart people>?

Comment author: wedrifid 25 March 2011 01:31:53AM *  1 point [-]

How about CEV<the smart people>?

Yes, that would be preferable. But only because I assert a correlation between the attributes that produce what we measure as g and both personality traits and actual underlying preferences. A superintelligence extrapolating on <the smart people>'s preferences would, in fact, produce a different outcome than one extrapolating on <the rest>'s.

ArisKatsaris's accusation that you don't understand what CEV means misses the mark. You can understand CEV and still not conclude that CEV<humanity> is necessarily a good thing.

Comment author: Dorikka 17 March 2011 01:02:42AM 0 points [-]

And, uh, how do you define that?

Comment author: HughRistik 18 March 2011 07:32:00AM 1 point [-]

Something like g, perhaps?

Comment author: ArisKatsaris 17 March 2011 11:06:56PM *  0 points [-]

What would that accomplish? It's the intelligence of the AI that will be getting used, not the intelligence of the people in question.

I'm getting the impression that some people don't understand what CEV even means. It's not about the programmers predicting a course of action, and it's not about the AI using people's current choices: it's about the AI using the extrapolated volition, i.e. what people would choose if they were as smart and knowledgeable as the AI.