lukeprog comments on Open Problems Related to the Singularity (draft 1) - Less Wrong

39 Post author: lukeprog 13 December 2011 10:57AM




Comment author: lukeprog 14 December 2011 01:31:51AM 4 points [-]

Does anyone have a superior problem categorization? Does anyone have a correction or addition to make?

Comment author: ESRogs 14 December 2011 02:48:09AM 3 points [-]

Minor editorial comments:

Consider expanding WBE the first time it is mentioned. I'm a regular reader and couldn't think of what it referred to until I searched the site.

I believe "ok" should be either "OK" or "okay".

Comment author: timtyler 14 December 2011 11:22:21AM 1 point [-]

The list had me wondering where the political problems went.

Comment author: XiXiDu 14 December 2011 01:52:33PM *  7 points [-]

The list had me wondering where the political problems went.

You're right. If at some point the general public starts to take risks from AI seriously and realizes that SI is actually trying to take over the universe without their consent, then a better-case scenario will be that SI gets shut down and its members sent to prison. Some of the not-so-good scenarios might include the complete extermination of the Bay Area if some foreign party believes that they are close to launching an AGI capable of recursive self-improvement.

Sounds ridiculous? Well, what do you think will be the reaction of governments and billions of irrational people when they learn of, and actually believe, that a small group of American white male (Jewish) atheist geeks is going to take over the whole universe? BOOM instead of FOOM.

Reference:

...—though it may be an idealistic dream—I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)

Eliezer Yudkowsky in an interview with John Baez.

Comment author: timtyler 14 December 2011 02:55:04PM *  7 points [-]

If at some point the general public starts to take risks from AI seriously and realizes that SI is actually trying to take over the universe without their consent, then a better-case scenario will be that SI gets shut down and its members sent to prison.

It doesn't sound terribly likely. People are more likely to guffaw: So, you're planning to take over the world? And you can't tell us how because that's secret information? Right. Feel free to send us a postcard letting us know how you are getting on with that.

Well, what do you think will be the reaction of governments and billions of irrational people who learn and actually believe that a small group of American white male (Jewish) atheist geeks is going to take over the whole universe?

Again, why would anyone believe that, though? Plenty of people dream of ruling the universe, but so far nobody has pulled it off.

Most people are more worried about the secret banking cabal with the huge supercomputers, the billions of dollars in spare change, and the shadowy past - who are busy banging away at the core problem of inductive inference - than they are about the 'friendly' non-profit with its videos and PDF files - and probably rightly so.