Document comments on Rationality Quotes February 2013 - Less Wrong

2 Post author: arundelo 05 February 2013 10:20PM


Comment author: Document 22 February 2013 08:52:10PM *  1 point [-]
Comment author: Apprentice 23 February 2013 12:47:17PM 4 points [-]

Well, there are lots of cultists running around trying to summon an Elder God. This will almost certainly end in disaster. The options we have to fight this are: a) We can try to stop all Elder-God-summoning related program activities or b) We can try to get there first and summon a Friendly Elder God.

Both a) and b) are almost impossibly difficult and I find it hard to decide which is less impossible.

Comment deleted 23 February 2013 03:19:05PM *  [-]
Comment author: wedrifid 23 February 2013 06:44:08PM 1 point [-]

Ultimately, if some AI scientist is very concerned that an AI is going to kill us all, their opinion is more informative about the approaches to AI which they find viable than about AIs in general. If someone is convinced that any nuclear power plant can explode like a multi-megaton nuclear bomb, well, it's probably better to let someone else design a nuclear power plant.

I think you have the lesson entirely backward.

Comment author: private_messaging 23 February 2013 07:21:03PM *  1 point [-]

How so? A person convinced that any nuclear power plant risks a multi-megaton explosion would have some very weird ideas about how nuclear power plants should be built: they would deem moderated reactors impractical, a negative thermal coefficient of reactivity infeasible, etc. (or be simply unaware of the mechanisms that allow stability to be achieved), and would build some fast-neutron reactor that relies on very rapid control-rod movement for its stability. Meanwhile, normal engineering has produced nuclear power plants that, imperfect as they might be, do not make a crater when they blow up.

Comment author: Creutzer 23 February 2013 09:24:44PM 3 points [-]

Yes, but you can say that because you have the independent evidence that nuclear power plants are workable, beyond the mere say-so of a couple of scientists. You don't have that kind of evidence for AI safety.

Also, this:

Non-Friendly AI is no Elder God. It kills you, at worst.

... is not a given. What makes you think that the worst it would do is kill you, when killing is not the worst thing humans do to each other?

Comment author: wedrifid 24 February 2013 03:55:10AM 4 points [-]

To the extent that you already know that nuclear power plants are basically safe, they clearly do not apply as an analogy here. Reasoning from them like this is an error.