
turchin comments on Existential risk from AI without an intelligence explosion - Less Wrong

12 Post author: AlexMennen 25 May 2017 04:44PM


Comment author: turchin 22 June 2017 10:56:07AM 0 points [-]

Existing AI systems are very good at winning war-like strategy games such as chess and Go, and have already reached superhuman performance in them. Military strategic planning and geopolitics could be seen as such a game, and an AI able to win at it seems imaginable even with current capabilities.

I also agree that a self-improving AI may choose not to create its next version because of the difficulty of solving the alignment problem at the new level. In that case it would choose an evolutionary development path, which means slower capability gain. I wrote a draft of a paper about levels of self-improvement, where I look into such obstacles in detail. If you are interested, I could share it with you.

Comment author: ChristianKl 22 June 2017 03:18:26PM 1 point [-]

Geopolitical forecasting requires you to build a good model of the conflict that you care about. Once you do have a model, you can feed it into a computer, as Bruce Bueno de Mesquita does, and the computer might do better at calculating the optimal move. I don't think that currently existing AI systems are up to the task of modeling a complicated geopolitical event.
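To illustrate the "modeling is the hard part" point: once human analysts have encoded a conflict as actors with positions, capabilities, and salience, the computation itself can be trivial. The sketch below is a drastically simplified toy in the spirit of Bueno de Mesquita's expected-utility models, not his actual method; the actors and numbers are invented for illustration.

```python
def forecast_outcome(actors):
    """Predict the negotiated outcome on a 0-100 policy scale as the
    capability-and-salience-weighted mean of actor positions."""
    weight_sum = sum(a["capability"] * a["salience"] for a in actors)
    weighted_positions = sum(
        a["capability"] * a["salience"] * a["position"] for a in actors
    )
    return weighted_positions / weight_sum

# Hypothetical conflict: two factions and a weak mediator, positions on
# a 0-100 scale (e.g. degree of concession).
actors = [
    {"name": "Faction A", "position": 20.0, "capability": 0.8, "salience": 0.9},
    {"name": "Faction B", "position": 70.0, "capability": 0.5, "salience": 0.7},
    {"name": "Mediator",  "position": 50.0, "capability": 0.3, "salience": 0.4},
]

print(round(forecast_outcome(actors), 1))
```

The hard, unautomated work is everything hidden inside the `actors` list: deciding who the relevant players are and quantifying their power and stakes. That is the modeling step the comment above argues current AI systems cannot yet do.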

Comment author: turchin 22 June 2017 03:40:07PM 0 points [-]

I also don't think it is currently possible to model geopolitics in full, but if some smaller yet effective model of it is created by humans, it may be used by an AI.

Comment author: ChristianKl 22 June 2017 04:09:29PM 1 point [-]

Bruce Bueno de Mesquita seems to be of the opinion that even 20 years ago computer models outperformed humans once the modeling was finished, but the modeling itself seems crucial.

In his 2008 book, he argues that the best move for Israel/Palestine would be a treaty requiring the two sides to share tourism revenue with each other. That's not the kind of move that an AI like DeepMind's would produce without a human coming up with it beforehand.

Comment author: turchin 22 June 2017 04:31:52PM 1 point [-]

So it looks like that if the model-creation job could be at least partly automated, it would give a strategic advantage in business, politics, and military planning.

Comment author: dogiv 22 June 2017 01:46:09PM 1 point [-]

AI is good at well-defined strategy games, but (so far) bad at understanding and integrating real-world constraints. I suspect that there are already significant efforts to use narrow AI to help humans with strategic planning, but that these remain secret. For an AGI to defeat that sort of human-computer combination would require considerably superhuman capabilities, which means without an intelligence explosion it would take a great deal of time and resources.

Comment author: turchin 22 June 2017 02:29:14PM 0 points [-]

If an AI is able to use humans as an outsourced form of intuition, as with Mechanical Turk, it may be able to play such games with much less intelligence of its own.

Such a game may resemble Trump's election campaign, in which cyberweapons, fake news, and internet memes were allegedly used by some algorithm. There was some speculation about it: https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine

We already see superhuman performance in war-simulating games, but nothing like it in AI self-improvement.

Mildly superhuman capabilities may be reached without an intelligence explosion through the slow, low-level accumulation of hardware, training, and knowledge.

Comment author: ChristianKl 22 June 2017 02:50:11PM 0 points [-]

There was some speculation about it: https://scout.ai/story/the-rise-of-the-weaponized-ai-propaganda-machine

When I read "Cambridge Analytica isn't the only company that could pull this off -- but it is the most powerful right now," I immediately think "citation needed".

Eric Schmidt funded multiple companies to provide technology to get Hillary elected.

Comment author: turchin 22 June 2017 02:52:31PM 0 points [-]

There are many programs which play Go, but only one currently with superhuman performance.

Comment author: ChristianKl 22 June 2017 02:59:03PM 0 points [-]

On the Go side, the program with the superhuman performance is run by Eric Schmidt's company.

What makes you think that Eric Schmidt's people aren't the best in the other domain as well?

Comment author: turchin 22 June 2017 03:06:03PM 0 points [-]

The fact that H lost?

But in fact, I don't want to derail the discussion of AI's possible future decisive advantage into a conspiracy-looking discussion of past elections, which I mentioned as a possible example of strategic games, not as a fact proving that such an AI actually exists.

Comment author: ChristianKl 22 June 2017 04:01:54PM *  0 points [-]

The fact that H lost?

That argument feels circular in nature. You believe that Trump won because of a powerful computer model, simply because Trump won and he was supported by a computer model.

On the one hand, you have a tech billionaire who's gathering top programmers to fight. On the other hand, you have a company that has to be told by the daughter of that tech billionaire what software it should use.

Whose press person said they worked for the Leave campaign, and whose CEO is currently on the record as never having worked for the Leave campaign, neither paid nor unpaid.

From a NYTimes article:

But Cambridge’s psychographic models proved unreliable in the Cruz presidential campaign, according to Rick Tyler, a former Cruz aide, and another consultant involved in the campaign. In one early test, more than half the Oklahoma voters whom Cambridge had identified as Cruz supporters actually favored other candidates. The campaign stopped using Cambridge’s data entirely after the South Carolina primary.

There's a lot of irony in the fact that Cambridge Analytica seems to be better at spinning untargeted stories about its amazing powers of political manipulation than it is at actually helping political campaigns.

I just saw on scout.ai's about page that they see themselves as being in the science fiction business. Maybe I should be less hard on them.

Comment author: turchin 22 June 2017 04:18:01PM 0 points [-]

I want to underline again that the fact that I discuss a possibility doesn't mean that I believe in it. Winning is evidence of intelligent power, but given the prior from its previous failures, it may not be strong evidence.

Comment author: Lumifer 22 June 2017 02:39:45PM 0 points [-]

and AI able to win in it seems imaginable even on current capabilities

Not with current capabilities. For one thing, the set of possible moves is undefined, or very, very large.