wedrifid comments on SotW: Be Specific - Less Wrong

38 Post author: Eliezer_Yudkowsky 03 April 2012 06:11AM




Comment author: wedrifid 07 April 2012 06:47:02PM 0 points [-]

For anyone who implements an AI, any justification for including other members of humanity in their CEV calculation is valid iff their CEV would specify that anyway.

YES! CEV is altruism-inclusive. For some reason it is often really hard to make people understand that the altruism belongs inside the CEV calculation, while the compromise-for-instrumental-purposes goes on the outside.

So, the rational course of action for anyone implementing an AI is to simply use their own CEV. If that CEV specifies to consider the CEV of other members of humanity then so be it.

This is true all else being equal. (The 'all else' being specifically that you are just as likely to succeed in creating FAI&lt;CEV&lt;self&gt;&gt; as you are in creating FAI&lt;CEV&lt;whatever&gt;&gt;.)

Comment author: hairyfigment 07 April 2012 07:42:20PM 0 points [-]

For some reason this is often really hard to make people understand

IAWYC, but who doesn't get this?

Given our attitude toward politics, I'd expect little if any gain from replacing 'humanity' with 'Less Wrong'. Moreover, others would correctly take our exclusion of them as evidence of a meaningful difference if we actually made this decision. And I can't write an AGI by myself, nor can the smarter version of me calling itself Eliezer.

Comment author: wedrifid 07 April 2012 10:24:04PM 0 points [-]

IAWYC, but who doesn't get this?

I don't recall the names. The conversations would be archived, though, if you are interested.