timtyler comments on Friendly AI Research and Taskification - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You ask for the Friendliness to-do list, but discount the item Eliezer Yudkowsky ranks #1?!?
Yudkowsky explains why he thinks this area is important, e.g. in "Strong AI and Recursive Self-Improvement": http://video.google.com/videoplay?docid=-821191370462819511
FWIW, IMO, such an approach would make little sense if your plan was just to build a machine intelligence.
We already have a decision theory good enough to automate 99% of jobs on the planet - if only we knew how to implement it. A pure machine intelligence project would be likely to focus on those implementation details - not on trying to adjust decision theory to better handle proofs about the dynamics of self-improving systems.
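The "decision theory we already have" is, roughly, expected-utility maximization: pick the action whose probability-weighted outcomes score highest. A minimal sketch, where the umbrella example, its probabilities, and its utilities are purely hypothetical illustrations, not anything from the comment:

```python
# Expected-utility maximization: the textbook decision theory referred to
# above. The hard part, as the comment notes, is implementation at scale,
# not the theory itself.

def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome) over possible outcomes."""
    return sum(p * utility(outcome)
               for outcome, p in outcome_probs[action].items())

def best_action(actions, outcome_probs, utility):
    """Return the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Hypothetical toy problem: carry an umbrella given a 30% chance of rain.
outcome_probs = {
    "umbrella":    {"dry": 1.0},
    "no_umbrella": {"dry": 0.7, "wet": 0.3},
}
utility = {"dry": 1.0, "wet": -5.0}.get

print(best_action(["umbrella", "no_umbrella"], outcome_probs, utility))
# "umbrella": EU = 1.0, versus "no_umbrella": EU = 0.7*1.0 + 0.3*(-5.0) = -0.8
```

The point stands out in the arithmetic: the theory is a few lines; filling in realistic outcome models and utilities for real jobs is the unsolved implementation detail.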