fubarobfusco comments on SotW: Be Specific - Less Wrong

38 points | Post author: Eliezer_Yudkowsky 03 April 2012 06:11AM


Comment author: fubarobfusco 07 April 2012 05:50:14PM *  1 point

> That is combined with implicitly assuming (it's this implicit part that I particularly don't like) that "all of humanity" is what CEV must be run on. I can't know that CEV&lt;humanity&gt; will not kill me. Even if it doesn't kill me, it is nearly tautologically true that CEV&lt;people more like me&gt; is better (in the subjectively objective sense of 'better').

Here's the trouble, though: by the same reasoning, if someone is implementing CEV<white people> or CEV<Russian intellectuals> or CEV<Orthodox Gnostic Pagans> or any such, everyone who isn't a white person, Russian intellectual, or Orthodox Gnostic Pagan has a damned good reason to be worried that it'll kill them.

Now, it may turn out that CEV<Orthodox Gnostic Pagans> is sufficiently similar to CEV<humanity> that the rest of humanity needn't worry. But is that a safe bet for all of us who aren't Orthodox Gnostic Pagans?

Comment author: Incorrect 07 April 2012 06:35:51PM 0 points

For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.

So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.

Comment author: wedrifid 07 April 2012 06:47:02PM 0 points

> For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.

YES! CEV is altruism inclusive. For some reason this is often really hard to make people understand: the altruism belongs inside the CEV calculation, while the compromise-for-instrumental-purposes goes on the outside.

> So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.

This is true all else being equal. (The 'all else' being specifically that you are just as likely to succeed in creating FAI<CEV<self>> as you are in creating FAI<CEV<whatever>>.)

Comment author: hairyfigment 07 April 2012 07:42:20PM 0 points

> For some reason this is often really hard to make people understand

IAWYC, but who doesn't get this?

Given our attitude toward politics, I'd expect little if any gain from replacing 'humanity' with 'Less Wrong'. Moreover, others would correctly take our exclusion of them as evidence of a meaningful difference if we actually made this decision. And I can't write an AGI by myself, nor can the smarter version of me calling itself Eliezer.

Comment author: wedrifid 07 April 2012 10:24:04PM 0 points

> IAWYC, but who doesn't get this?

I don't recall the names. The conversations would be archived, though, if you are interested.

Comment author: wedrifid 07 April 2012 06:11:02PM *  0 points

Compromise is often necessary for the purpose of cooperation, and CEV&lt;humanity&gt; is a potentially useful Schelling point to agree upon. However, it should be acknowledged that these considerations are instrumental - or at least acknowledged that they are decisions to be made. Eliezer's discussion of the subject up until now has been completely innocent of even an awareness of the possibility that anything other than 'humanity' could conceivably be plugged in to CEV. This is, as far as I am concerned, a bad thing.

Comment author: Incorrect 07 April 2012 06:39:03PM *  0 points

> This is, as far as I am concerned, a but thing.

Huh?

Comment author: wedrifid 07 April 2012 06:40:37PM 0 points

bad thing. Fixed.