turchin comments on Existential risk from AI without an intelligence explosion - Less Wrong

12 Post author: AlexMennen 25 May 2017 04:44PM

Comment author: turchin 22 June 2017 03:40:07PM 0 points [-]

I also don't think it is currently possible to model geopolitics in full, but if humans create a smaller yet effective model of it, an AI could make use of that model.

Comment author: ChristianKl 22 June 2017 04:09:29PM 1 point [-]

Bruce Bueno de Mesquita seems to be of the opinion that even 20 years ago, computer models outperformed humans once the modeling was finished; the modeling itself, however, seems crucial.

In his 2008 book, he argues that the best move for Israel/Palestine would be a treaty requiring the two countries to share tourism revenue with each other. That's not the kind of move that an AI like DeepMind's would produce without a human coming up with it beforehand.

Comment author: turchin 22 June 2017 04:31:52PM 1 point [-]

So it looks like if the job of model creation could be at least partly automated, it would confer a strategic advantage in business, politics, and military planning.