wedrifid comments on Best shot at immortality? - Less Wrong Discussion

4 Post author: tomme 22 March 2012 10:29AM

Comment author: wedrifid 22 March 2012 02:28:06PM *  2 points

If we're being overseen then it seems true by definition that we'll run into other AGIs if we build an FAI

This seems likely, but it is not true by definition. In fact, if I were designing an overseer, I can see reasons why I might prefer to design one that keeps itself hidden except where intervention is required. Such an overseer, upon detecting that the overseen have created an AI with an acceptable goal system, may actively destroy all evidence of its existence.

Comment author: Will_Newsome 22 March 2012 02:30:00PM *  3 points

True, mea culpa. I swear, there's something about the words "by definition" that makes you misuse them even when you're already aware of how often they're misused. I almost never say "by definition," and yet it still screwed me over.