MrMind comments on Open Thread, Jun. 15 - Jun. 21, 2015 - Less Wrong

Post author: Gondolinian 15 June 2015 12:02AM


Comment author: [deleted] 16 June 2015 01:47:23AM  1 point

People working on friendly AI probably assume that the odds of inventing a friendly AI are higher than those of establishing a world order in which research associated with existential risks is generally banned. Why is that? Is the reasoning that our civilization is likely to end without significant technological progress (due to causes like nuclear war, climate change, or societal collapse), so we should at least give it a try?

Comment author: MrMind 16 June 2015 07:16:51AM  2 points

If society doesn't end first, banning X-risk research worldwide is an effort that must be sustained indefinitely, always ensuring that nobody ever fiddles with her computer in a way that could create an AGI. This means that the probability of successfully enforcing the ban decreases over time.
Building an FAI, by contrast, is an effort that, once accomplished, stays accomplished: its probability of success, however small, might even increase with time.
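
The asymmetry between the two strategies can be sketched numerically. A ban must hold every year, so its survival probability compounds downward, while a one-shot FAI success does not decay. The per-year figures below are purely hypothetical, chosen only to show the shapes of the two curves, not taken from the comment:

```python
# Illustrative sketch (hypothetical numbers): compare the probability that a
# worldwide ban holds for n consecutive years against the probability that a
# one-time FAI effort succeeds.

def ban_holds(p_year: float, years: int) -> float:
    """Probability the ban is enforced in every single one of `years` years."""
    return p_year ** years

def fai_built(p_success: float) -> float:
    """Probability a one-shot FAI project succeeds; once done, it stays done."""
    return p_success

# Even a 99%-per-year enforcement rate decays toward zero over long horizons,
# while a fixed 10% one-shot success probability does not decay at all.
for years in (10, 50, 100, 300):
    print(years, round(ban_holds(0.99, years), 3), fai_built(0.10))
```

On these assumed numbers, the ban's survival probability drops below the one-shot success probability somewhere past two centuries, which is the shape of the argument: indefinite enforcement loses to a single permanent achievement.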