gwern comments on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) - Less Wrong

Post author: lukeprog 13 February 2011 10:09AM


Comment author: lukeprog 13 February 2011 04:25:32PM 1 point

Indeed I shall.

Comment author: gwern 13 February 2011 06:44:25PM 4 points

It's also worth noting that more than one person thinks a singleton wouldn't arise and that alternative models are more likely. For example, Robin Hanson's em model ("crack of a future dawn") is fairly plausible given that we have a decent Whole Brain Emulation Roadmap but nothing of the sort for a synthetic AI, and people like Nick Szabo emphatically disagree that a single agent could outperform a market of agents.

Comment author: Perplexed 13 February 2011 09:57:47PM 1 point

Of course, people can be crushed by impersonal markets as easily as they can by singletons. The case might be made that we would prefer a singleton because the task of controlling it would be less complex and error-prone.

Comment author: gwern 13 February 2011 10:16:11PM 1 point

A reasonable point, but I took Luke to be discussing the problems of designing a good singleton because a singleton seemed like the most likely outcome, not because he likes the singleton aesthetically or because a singleton would be easier to control.

Comment author: Perplexed 13 February 2011 10:57:06PM 2 points

In the context of CEV, Eliezer apparently thinks that a singleton is desirable, not just likely.

Only one superintelligent AMA (Artificial Moral Agent) is to be constructed, and it is to take control of the entire future light cone with whatever goal function is decided upon. Justification: a singleton is the likely default outcome for superintelligence, and stable co-existence of superintelligences, if achievable, would offer no inherent advantages for humans.

I'm not convinced, but since Luke is going to critique CEV in any case, this aspect should be addressed.

ETA: I have been corrected - the quotation was not from Eliezer. Also, the quote doesn't directly say that a singleton is a desirable outcome; it says that the assumption that we will be dealing with a singleton is a desirable feature of an FAI strategy.

Comment author: Nick_Tarleton 14 February 2011 11:59:09PM 0 points

I don't know how much you meant to suggest otherwise, but just for context, the linked paper was written by Roko and me, not Eliezer, and doesn't try to perfectly represent his opinions.

Comment author: Perplexed 15 February 2011 12:09:05AM 0 points

No, I didn't realize that. Thanks for the correction, and sorry for the misattribution.

Comment author: lukeprog 14 February 2011 06:21:48AM 0 points

I have different justifications in mind, and yes, I will be explaining them in the book.

Comment author: lukeprog 13 February 2011 07:25:43PM 0 points

Yup, thanks.