timtyler comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong Discussion

8 Post author: lukeprog 04 March 2012 06:06AM

Comment author: timtyler 05 March 2012 08:01:45PM -2 points

> Has anyone constructed even a vaguely plausible outline, let alone a definition, of what would constitute a "human-friendly intelligence", defined in terms other than effects you don't want it to have?

Er, that's how it is defined - at least by Yudkowsky. You want to argue about definitions? Without even offering one of your own? How will that help?