lukeprog comments on Journal of Consciousness Studies issue on the Singularity - Less Wrong Discussion

14 Post author: lukeprog 02 March 2012 03:56PM

Comment author: lukeprog 02 March 2012 09:46:27PM 7 points

I like Goertzel's succinct explanation of the idea behind Moore's Law of Mad Science:

...as technology advances, it is possible for people to create more and more destruction using less and less money, education and intelligence.

Also, his succinct explanation of why Friendly AI is so hard:

The practical realization of [Friendly AI] seems likely to require astounding breakthroughs in mathematics and science — whereas it seems plausible that human-level AI, molecular assemblers and the synthesis of novel organisms can be achieved via a series of moderate-level breakthroughs alternating with ‘normal science and engineering.’

Another choice quote that succinctly makes a key point I find myself making all the time:

if the US stopped developing AI, synthetic biology and nanotech next year, China and Russia would most likely interpret this as a fantastic economic and political opportunity, rather than as an example to be imitated.

His proposal for Nanny AI, however, appears to be FAI-complete.

Also, it is strange that despite paragraphs like this:

we haven’t needed an AI Nanny so far, because we haven’t had sufficiently powerful and destructive technologies. And now, these same technologies that may necessitate the creation of an AI Nanny, also may provide the means of creating it.

...he does not anywhere cite Bostrom (2004).

Comment author: timtyler 05 March 2012 09:56:20PM 0 points

His proposal for Nanny AI, however, appears to be FAI-complete.

It's a very different idea from Yudkowsky's "CEV" proposal.

It's reasonable to think that a nanny-like machine might be easier to build than other kinds, because a nanny's job description is rather limited.