Comment author: RobinHanson 18 November 2009 02:25:31PM *  5 points [-]

A problem with this proposal is whether this paper can be seen as authoritative. A critic might worry that if they study and respond to this paper, they will be told it does not represent the best pro-Singularity arguments. So the paper would need enough endorsement to gain the status required to be worth criticizing.

Comment author: righteousreason 18 November 2009 10:46:07PM *  3 points [-]

Eliezer is arguing for one view of the Singularity, though there are others. This is one reason I thought to include http://yudkowsky.net/singularity/schools on the wiki. If leaders/proponents of the other two schools could acknowledge Eliezer's model of there being three schools of Singularity thought, I think that might lend it more of the authority you are describing.

Comment author: AndrewKemendo 18 November 2009 02:32:11AM *  3 points [-]

This is a problem for both those who'd want to critique the concept, and for those who are more open-minded and would want to learn more about it.

Anyone who is sufficiently technically minded undoubtedly finds it frustrating to read books that offer broad-brush counterfactuals about decision making without delving into the details of the underlying processes. I am thinking of books like Freakonomics, The Paradox of Choice, Outliers, Nudge, etc.

These books are very accessible but lack the in-depth analysis that would allow them to be thoroughly critiqued and understood in depth. Writings like Global Catastrophic Risks, and the other written deconstructions of the necessary steps toward a technological singularity, lack the spell-it-out-for-us-all sections that Gladwell et al. make their living from. Reasonably so: the Singularity is so much more complex and involved that slogans and banner phrases do not do the field justice. Indeed, oversimplification is arguably detrimental and liable to backfire.

I think, however, that what is needed is a clear, short, and easily understood consensus on why this crazy AI thing is the inevitable result of reason, why it is necessary to think about, how it will help humanity, and how it could reasonably hurt humanity.

The SIAI tried to do this:

http://www.singinst.org/overview/whatisthesingularity

http://www.singinst.org/overview/whyworktowardthesingularity

Neither of these is compelling in my view. They both go into some detail and leave the unknowledgeable reader behind. Most importantly, neither has what people want: a clear vision of exactly what we are working for. The problem is that there isn't a clear vision; there is no consensus on how to start. That, in my view, is why the SIAI focuses more on "global risks" than on simply stating "We want to build an AI"; frankly, people get scared by the latter.

So is this paper going to resolve the tension between the simplified and the complex approaches, or will we simply be replicating what the SIAI has already done?

Comment author: righteousreason 18 November 2009 05:35:40AM 3 points [-]

I found the two SIAI introductory pages very compelling the first time I read them. This was back before I knew what SIAI or the Singularity really were; as soon as I read through those pages, I just had to find out more.

Comment author: MichaelAnissimov 17 November 2009 10:11:55PM 6 points [-]

For anyone who needs references or argument ideas from outside the Sequences: the arguments I found most compelling for a hard takeoff are in LOGI part 3 and the wiki interview with Eliezer from around 2003.

Comment author: righteousreason 18 November 2009 05:33:14AM 0 points [-]

I thought similarly about LOGI part 3 (Seed AI). In fact, it was the first thing that came to mind, and I put a link to it on the wiki page.

Comment author: bogdanb 11 November 2009 11:07:15PM 20 points [-]

How did you win any of the AI-in-the-box challenges?

Comment author: righteousreason 12 November 2009 02:47:29AM 9 points [-]

http://news.ycombinator.com/item?id=195959

"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...

All right, this much of a hint:

There's no super-clever special trick to it. I just did it the hard way.

Something of an entrepreneurial lesson there, I guess."
