Russell_Wallace comments on Failed Utopia #4-2 - Less Wrong

Post author: Eliezer_Yudkowsky, 21 January 2009 11:04AM, 52 points

Comment author: Russell_Wallace 21 January 2009 01:43:01PM 2 points

An amusing if implausible story, Eliezer, but I have to ask, since you claimed to be writing some of these posts with the admirable goal of giving people hope in a transhumanist future:

Do you not understand that the message actually conveyed by these posts, if one were to take them seriously, is "transhumanism offers nothing of value; shun it and embrace ignorance and death, and hope that God exists, for He is our only hope"?

Comment author: TuviaDulin 01 April 2012 07:06:37PM 10 points

I didn't get that impression, after reading this within the context of the rest of the sequence. Rather, it seems like a warning about the importance of foresight when planning a transhuman future. The "clever fool" in the story (presumably a parody of the author himself) released a self-improving AI into the world without knowing exactly what it was going to do or planning for every contingency.

Basically, the moral is: don't call the AI "friendly" until you've thought of every single last thing.

Comment author: Yosarian2 02 January 2013 11:05:31PM 6 points

Corollary: you haven't thought of every last thing.

Comment author: TuviaDulin 23 January 2013 03:27:00PM 3 points

Conclusion: intelligence explosion might not be a good idea.

Comment author: MugaSofer 24 January 2013 01:47:23PM 0 points

And how would you suggest preventing intelligence explosions? It seems more effective to try and make sure it's a Friendly one. Then we at least have a shot at Eutopia, instead of hiding in a bunker until someone's paperclipper gets loose and turns us into grey goo.

Incidentally, if you plan on answering my (rhetorical) question, I should note that LW has a policy against advocating violence against identifiable individuals, specifically because people were claiming we were telling people they should become anti-AI terrorists. You're not the first to come to this conclusion.

Comment author: TuviaDulin 25 March 2013 02:49:53PM -1 points

Convincing people that intelligence explosion is a bad idea might discourage them from unleashing one. No violence there.

Comment author: MugaSofer 30 March 2013 09:21:32PM -1 points

Judging by the fact that I still think it would never work, you're not persuasive enough for that to work.

Comment author: Yosarian2 13 May 2013 12:07:33AM 1 point

Well, if people become sufficiently convinced that deploying a technology would be a really bad idea and not in anyone's best interest, they can refrain from deploying it. No one has used nuclear weapons in war since WWII, after all.

Of course, it would take some pretty strong evidence for that to happen. But, hypothetically speaking, if we created a non-self-improving oracle AI and asked it "how can we do an intelligence explosion without killing ourselves?", and it told us "Sorry, you can't, there's no way", then we'd have to try to convince everyone not to "push the button".

Comment author: MugaSofer 13 May 2013 11:24:11AM 0 points

If we had a superintelligent Oracle, we could just ask it what the maximally persuasive argument for not making AIs was and hook it up to some kind of broadcast.

If, on the other hand, this is some sort of single-function Oracle, then I don't think we're capable of preventing our extinction. Maybe if we managed to become a singleton somehow; if you know how to do that, I have some friends who would be interested in your ideas.

Comment author: Yosarian2 13 May 2013 08:58:34PM 1 point

Well, the oracle was just an example.

What if, again hypothetically speaking, Eliezer and his group, while working on friendly AI theory, proved mathematically beyond a shadow of a doubt that any intelligence explosion would end badly, and that friendly AI was impossible? While he doesn't like it, being a rationalist, he accepts it once there is no other rational alternative. He publishes these results, experts all over the world look at them, check them, and sadly agree that he was right.

Do you think any major organization with enough resources and manpower to create an AI would still do so if they knew that it would result in their own horrible deaths? I think the example of nuclear weapons shows that it's at least possible that people may refrain from an action if they understand that it's a no-win scenario for them.

This is all just hypothetical, mind you; I'm not really convinced that "AI goes foom" is all that likely a scenario in the first place, and if it were, I don't see any reason that friendly AI of one type or another wouldn't be possible. But if it actually weren't possible, then that might very well be enough to stop people, so long as that fact could be demonstrated to everyone's satisfaction.

Comment author: FourFire 08 September 2013 11:07:44AM 0 points

I don't gather that from this particular story; rather, more "There's a radiant shimmer of hope, it just happens to be the wrong colour."