WrongBot comments on Friendly AI Research and Taskification - Less Wrong Discussion

22 Post author: multifoliaterose 14 December 2010 06:30AM

Comment author: timtyler 20 December 2010 06:25:33PM *  3 points [-]

You ask after the Friendliness to-do list, but discount the item Eliezer Yudkowsky ranks #1?!

Yudkowsky explains why he thinks this area is important - e.g. in his talk "Strong AI and Recursive Self-Improvement": http://video.google.com/videoplay?docid=-821191370462819511

FWIW, IMO, such an approach would make little sense if your plan were simply to build a machine intelligence.

We already have decision theory good enough to automate 99% of the jobs on the planet - if only we knew how to implement it. A pure machine intelligence project would likely focus on those implementation details - not on trying to adjust decision theory to better handle proofs about the dynamics of self-improving systems.