XiXiDu comments on Singularity Institute Executive Director Q&A #2 - Less Wrong Discussion

20 points · Post author: lukeprog · 06 January 2012 03:40AM

Comment author: XiXiDu · 06 January 2012 03:12:15PM · 7 points

But you haven't given even the roughest outline of SI's progress on the thing that matters most, actual FAI research.

From what I understand, they can't do that yet. They don't have enough people to make real progress on the important problems, and they don't have enough money to hire more. So they are concentrating on raising awareness of the issue and persuading people either to work on it themselves or to contribute money to SI.

The real problem I see is the lack of formalized problems. I consider it very important to formalize some actual problems: doing so would aid fundraising and allow others to work on them. To be more specific, I don't think writing a book on rationality is worth the time when the author is one of the few people who might be capable of formalizing those important problems, especially since there are already many books on rationality. Even if Eliezer Yudkowsky can put together everything the world knows about rationality in a concise manner, that is not something that will impress the important academics enough to take him seriously on AI issues. He would have been better off writing a book on decision theory, where he seems to have some genuine ideas.

Comment author: timtyler · 06 January 2012 08:10:20PM · 0 points

The real problem I see is the lack of formalized problems.

A list of problems was posted recently.

To be more specific, I don't think writing a book on rationality is worth the time when the author is one of the few people who might be capable of formalizing those important problems, especially since there are already many books on rationality. Even if Eliezer Yudkowsky can put together everything the world knows about rationality in a concise manner, that is not something that will impress the important academics enough to take him seriously on AI issues.

Rationality is probably a moderately important factor in planetary collective intelligence. Pinker claims that rational thinking and game theory have also contributed to recent positive moral shifts. Though there are already some books on the topic, it could well be an area where a relatively small effort produces a big positive result.

However, I'm not entirely convinced that hpmor.com is the best way to go about it...

Comment author: lukeprog · 10 January 2012 04:08:13PM · 4 points

It turns out that HPMOR has been great for SI recruiting and networking. IMO (International Mathematical Olympiad) winners apparently read HPMOR, as do an absurd number of Googlers.