MrMind comments on Open Thread, Jun. 15 - Jun. 21, 2015 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
People working on Friendly AI presumably assume that the odds of building a Friendly AI are higher than the odds of establishing a world order in which research associated with existential risks is generally banned. Why is that? Is the reasoning that our civilization is likely to end anyway without significant technological progress (due to reasons like nuclear war, climate change, or societal collapse), so we should at least give FAI a try?
If society doesn't end first, banning X-risk research worldwide is an effort that must be sustained indefinitely, always ensuring that nobody ever fiddles with her computer in a way that could create an AGI. This means that the probability of enforcing the ban successfully over the whole period keeps decreasing with time.
Building an FAI, by contrast, is an effort that stays accomplished once it succeeds: its probability of success, however small in any given year, might even increase with time.
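The asymmetry can be sketched numerically. Assume, purely for illustration, a fixed yearly probability `p_ban` that the ban is enforced without a single violation, and a fixed yearly probability `q_fai` that some FAI effort succeeds; both numbers are made up, only the trend matters:

```python
# Illustrative sketch (hypothetical numbers, not from the comment):
# a ban must hold EVERY year, so its survival probability is p_ban ** n;
# FAI needs to succeed only ONCE, so its probability is 1 - (1 - q_fai) ** n.

p_ban = 0.99   # assumed chance the worldwide ban holds in any given year
q_fai = 0.01   # assumed chance an FAI effort succeeds in any given year

for n in (10, 50, 100, 500):
    ban_holds = p_ban ** n            # decreases toward 0 as n grows
    fai_done = 1 - (1 - q_fai) ** n   # increases toward 1 as n grows
    print(f"n={n:4d}  ban still holds: {ban_holds:.3f}  FAI achieved: {fai_done:.3f}")
```

Even with a 99% yearly enforcement rate, the chance the ban survives for centuries shrinks toward zero, while the cumulative chance of a one-time FAI success only grows.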