timtyler comments on Goals for which Less Wrong does (and doesn't) help - Less Wrong

57 Post author: AnnaSalamon 18 November 2010 10:37PM




Comment author: timtyler 22 November 2010 07:28:17PM — 1 point

Well, I tend to think that working on and supporting machine intelligence research is probably the most important way to positively influence the future of civilisation. The issue of what we want the machines to do is part of the project.

So, such beliefs don't seem particularly "far out" to me.

FWIW, Yudkowsky describes his motivation in writing about rationality here:

http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/