JoshuaFox comments on Evaluating the feasibility of SI's plan - Less Wrong Discussion

25 points · Post author: JoshuaFox 10 January 2013 08:17AM

Comment author: JoshuaFox 10 January 2013 08:22:50PM * · 2 points

Sure, we agree that the "100% safe" mechanisms are not 100% safe, and SI knows that.

So how do we deal with this very real danger?

Comment author: wwa 10 January 2013 09:55:43PM * · 8 points

The point is that you never achieve 100% safety no matter what, so the correct approach is to reduce risk as much as possible with whatever resources you have. This is exactly what Eliezer says SI is doing:

I have an analysis of the problem which says that if I want something to have a failure probability less than 1, I have to do certain things because I haven't yet thought of any way not to have to do them.

IOW, they thought about it and concluded there's no other way. Is their approach the best possible one? I don't know; probably not. But it's a lot better than "let's just build something and hope for the best."

Edit: Is that analysis public? I'd be interested in that, probably many people would.

Comment author: JoshuaFox 11 January 2013 06:38:09AM * · 2 points

I'm not suggesting "let's just build something and hope for the best." Rather, we should pursue several strategies at once: FAI theory, as well as stopgap security measures and education of other researchers.