TuviaDulin comments on Failed Utopia #4-2 - Less Wrong

Post author: Eliezer_Yudkowsky 21 January 2009 11:04AM

Comment author: TuviaDulin 25 March 2013 02:49:53PM -1 points [-]

Convincing people that intelligence explosion is a bad idea might discourage them from unleashing one. No violence there.

Comment author: MugaSofer 30 March 2013 09:21:32PM -1 points [-]

Judging by the fact that I remain unconvinced, you're not persuasive enough for that to work.

Comment author: Yosarian2 13 May 2013 12:07:33AM *  1 point [-]

Well, if people become sufficiently convinced that deploying a technology would be a really bad idea and not in anyone's best interest, they can refrain from deploying it. No one has used nuclear weapons in war since WWII, after all.

Of course, it would take some pretty strong evidence for that to happen. But, hypothetically speaking, if we created a non-self-improving oracle AI and asked it "how can we do an intelligence explosion without killing ourselves?", and it told us "Sorry, you can't, there's no way", then we'd have to try to convince everyone not to "push the button".

Comment author: MugaSofer 13 May 2013 11:24:11AM 0 points [-]

If we had a superintelligent Oracle, we could just ask it what the maximally persuasive argument for not making AIs was and hook it up to some kind of broadcast.

If, on the other hand, this is some sort of single-function Oracle, I don't think we're capable of preventing our extinction in that case. Maybe if we managed to become a singleton somehow; if you know how to do that, I have some friends who would be interested in your ideas.

Comment author: Yosarian2 13 May 2013 08:58:34PM 1 point [-]

Well, the oracle was just an example.

What if, again hypothetically speaking, Eliezer and his group, while working on friendly AI theory, proved mathematically beyond the shadow of a doubt that any intelligence explosion would end badly, and that friendly AI was impossible? While he wouldn't like it, being a rationalist, he would accept it once there was no rational alternative. He publishes these results, experts all over the world examine them, check them, and sadly agree that he was right.

Do you think any major organization with enough resources and manpower to create an AI would still do so if they knew that it would result in their own horrible deaths? I think the example of nuclear weapons shows that it's at least possible that people may refrain from an action if they understand that it's a no-win scenario for them.

This is all just hypothetical, mind you; I'm not really convinced that "AI goes foom" is all that likely a scenario in the first place, and even if it were, I don't see any reason that friendly AI of one type or another wouldn't be possible. But if it actually weren't possible, then that fact might well be enough to stop people, so long as it could be demonstrated to everyone's satisfaction.