Jack comments on Less Wrong Q&A with Eliezer Yudkowsky: Video Answers - Less Wrong

Post author: MichaelGR 07 January 2010 04:40AM




Comment author: Jack 07 January 2010 07:14:30PM 4 points

Shouldn't we hedge our bets a little? I don't know what the probability is that the Singularity Institute succeeds in building an FAI in time to prevent any existential disasters that would otherwise occur, but it isn't 1. Any work done to reduce existential risk in the meantime (and in possible futures where no Friendly AI exists) seems worthwhile to me.

Am I wrong?