hairyfigment comments on Will the world's elites navigate the creation of AI just fine? - Less Wrong

Post author: lukeprog 31 May 2013 06:49PM


Comment author: hairyfigment 06 June 2013 07:38:54PM 0 points

> Only two nuclear weapons have been used since nuclear weapons were developed,

And I have the impression that relatively low-ranking people helped produce this outcome by keeping information from their superiors. Stanislav Petrov chose not to report an early-warning alert up the chain of command, judging it to be a malfunction of the early warning system before he could prove it was one. People during the Korean War, and possibly Vietnam, seem not to have passed on the fact that pilots from Russia or America were cursing in their native languages over the radio (and the other side was hearing them).

This, in fact, is part of why I don't think we 'survived' through the anthropic principle. Someone born after the end of the Cold War could look back at the apparent causes of our survival, and rather than seeing random events, or no causes at all, they would see a pattern that someone might have predicted beforehand, given more information.

This pattern seems vanishingly unlikely to save us from unFriendly AI. It would take, at the very least, a much more effective education/propaganda campaign.

Comment author: JonahSinick 06 June 2013 08:07:58PM 0 points

As I remark elsewhere in this thread, the point is that I would have expected substantially more nuclear exchanges by now than have actually occurred, and in view of this, I updated in the direction of things being more likely to go well than I had thought. I'm not saying "the fact that there haven't been nuclear exchanges means that destructive things can't happen."

> This pattern seems vanishingly unlikely to save us from unFriendly AI. It would take, at the very least, a much more effective education/propaganda campaign.

I was using the nuclear war example as one of many outside views, not as a direct analogy. The AI situation needs to be analyzed separately; this is only one input.