Perplexed comments on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) - Less Wrong
In the context of CEV, Eliezer apparently thinks that a singleton is desirable, not just likely.
I'm not convinced, but since Luke is going to critique CEV in any case, this aspect should be addressed.
ETA: I have been corrected - the quotation was not from Eliezer. Also, the quote doesn't directly say that a singleton is a desirable outcome; it says that the assumption that we will be dealing with a singleton is a desirable feature of an FAI strategy.
I don't know how much you meant to suggest otherwise, but just for context, the linked paper was written by Roko and me, not Eliezer, and doesn't try to perfectly represent his opinions.
No, I didn't realize that. Thanks for the correction, and sorry for the misattribution.
I have different justifications in mind, and yes I will be explaining them in the book.