
Transhumanist Nationalism and AI Politics

0 Post author: jacob_cannell 11 April 2015 06:39PM

In this article on the looming global AI arms race, Zoltan Istvan writes:

As the 2016 US Presidential candidate for the Transhumanist Party, I don't mind going out on a limb and saying the obvious: I also want AI to belong exclusively to America. Of course, I would hope to share the nonmilitary benefits and wisdom of a superintelligence with the world, as America has done for much of the last century with its groundbreaking innovation and technology. But can you imagine for a moment if AI was developed and launched in, let's say, North Korea, or Iran, or increasingly authoritarian Russia? What if another national power told that superintelligence to break all the secret codes and classified material that America's CIA and NSA use for national security? What if this superintelligence was told to hack into the mainframe computers tied to nuclear warheads, drones, and other dangerous weaponry? What if that superintelligence was told to override all traffic lights, power grids, and water treatment plants in Europe? Or Asia? Or everywhere in the world except for its own country? The possible danger is overwhelming.

Now, to some extent I expect many Americans, on reflection, would at least partly agree with the above statement - and that should be concerning.

Consider the issue from the perspective of Russian, Chinese (or really any foreign) readers with similar levels of national pride.

An equivalent, positionally reflected statement from a foreign perspective might read like this:

I also want AI to belong exclusively to China. Of course, I would hope to share the nonmilitary benefits and wisdom of a superintelligence with the world, as China has done for much of this century with its groundbreaking innovation and technology. But can you imagine for a moment if AI was developed and launched by, let's say, the US NSA, or Israel, or India? ...

On a related note, there was an interesting panel recently with Robin Li (CEO of Baidu), Bill Gates, and Elon Musk. They spent a little time discussing AI superintelligence. Robin Li mentioned that his new head of research - Andrew Ng - doesn't believe superintelligence is an immediate threat. In particular, Ng said: "Worrying about AI risk now is like worrying about overpopulation on Mars." Li also mentioned that he has been advocating for a large Chinese government investment in AI.

Comments (15)

Comment author: SolveIt 12 April 2015 02:30:31AM 12 points [-]

This perspective puzzled me for a moment. It puzzled me not because Istvan is necessarily wrong, but because his concerns seem so irrelevant. For me, the sentence "superintelligence will belong to the US" takes a while to parse because it doesn't even type-check. Superintelligence will be enough of a game-changer that nations will mean something very different from what they do now, if they exist at all.

Istvan seems like someone modeling the Internet by thinking of a postal system, and then imagining it running really really fast.

Now, a more charitable reading would interpret his AI as some sort of superhuman but non-FOOMed tool AI, in which case his concerns make a bit more sense. But even in this case, this seems pretty much irrelevant. The US couldn't keep nuclear secrets from the Russians in the 50's, and this was before the Internet.

Comment author: dxu 12 April 2015 03:00:56AM *  3 points [-]

Agree. In particular, this passage here

What if another national power told that superintelligence to break all the secret codes and classified material that America's CIA and NSA use for national security? What if this superintelligence was told to hack into the mainframe computers tied to nuclear warheads, drones, and other dangerous weaponry? What if that superintelligence was told to override all traffic lights, power grids, and water treatment plants in Europe? Or Asia? Or everywhere in the world except for its own country?

makes me think Istvan really doesn't understand what a "superintelligence" (or an "intelligence") is.

Comment author: Kaj_Sotala 12 April 2015 03:23:03PM *  4 points [-]

To give him the benefit of the doubt, he might be choosing his arguments based on what he expects his readers to understand. Skimming some of the comments on that article suggests that even this simplified example might have been of excessive inferential distance for some readers.

The "let's hope the first superintelligence belongs to the US" could be steelmanned as "let's hope that the values of the first superintelligence are based on those of Americans rather than the Chinese", which seems reasonable given that there's no guarantee that people from different cultural groups would have compatible values. (Of course, this still leaves the problem that I'd expect there to be plenty of people even within the US who have incompatible values...)

Comment author: Viliam 13 April 2015 12:45:02PM 0 points [-]

For me, the sentence "superintelligence will belong to the US", takes a while to parse because it doesn't even type-check.

It means when the superintelligence starts converting people to paperclips, for sentimental reasons the Americans will be the last ones converted.

Of course, unless it conflicts with some more important objective, such as making more paperclips.

Comment author: Vaniver 11 April 2015 08:34:00PM *  8 points [-]

I believe the common opinion of Zoltan Istvan is that he's mostly interested in self-promotion, and so I am not surprised that he is emphasizing the more contentious possibilities.

Comment author: eternal_neophyte 13 April 2015 03:47:26AM *  3 points [-]

That's not really a fertile direction of criticism. Whether or not he's engaging in self-promoting provocation doesn't affect the validity of his position. Whether the USA can be trusted as the sole custodian of super intelligent AI is however an interesting question, since American exceptionalism appears to be in decline.

Comment author: Normal_Anomaly 13 April 2015 08:31:26PM 0 points [-]

IAWYC, but disagree on the last sentence: it's not an interesting question because it's a wrong question. Superintelligent AI can't have a "custodian". Geopolitics of non-superintelligent AI that is smarter than a human but won't FOOM is a completely different question, probably best debated by people who speculate about cyberwarfare since it's more their field.

Comment author: eternal_neophyte 13 April 2015 09:36:45PM 0 points [-]

"non-superintelligent AI that is smarter than a human but won't FOOM" ...is most likely a better framing of the issue. I nevertheless think a fooming AI could be owned, so long as we have some channels of control open. That the creation or maintenance of such channels would be difficult doesn't render the idea impossible in theory.

Comment author: dxu 12 April 2015 03:08:55AM *  4 points [-]

Istvan appears to be treating the issue of AI largely in the same manner that US politicians treated the space race back in the 1960's: as a competition between nations to see who can do it "first". I think it should be obvious to most LW readers that such an attitude is... problematic.

Comment author: SanguineEmpiricist 12 April 2015 05:50:25AM 1 point [-]

Space race led to a lot of technology right? Might as well send the money to a good direction.

Comment author: Normal_Anomaly 13 April 2015 08:24:58PM *  2 points [-]

My reaction to the first quoted statement was a big "Huh?". The only reason it would matter where superintelligent AI is first developed is that the researchers in different countries might do friendliness more or less well. A UFAI is equally catastrophic no matter who builds it; an AI that is otherwise friendly but has a preference for one country would... what would that even mean? Create eutopia and label it "The United Galaxy of America"? Only take the CEV of Americans instead of everybody? Either way, getting friendliness right means national politics is probably no longer an issue.

Also: I did not vote for this guy in the Transhumanist Party primaries!

Comment author: passive_fist 12 April 2015 10:53:39PM 3 points [-]

As usual, politics is the mind-killer, and being a member of the 'transhumanist' party does not make you exempt from this. I fail to immediately see why it is important which country superintelligence is first developed in, except empty nationalistic sentiments.

Everything we have learned from thinking about FAI is that the distinction between friendly/unfriendly AI goes far deeper than the intentions of its creators. The most moral person could wind up developing the most evil UFAI.

Now, to some extent I expect many Americans, on reflection, would at least partly agree with the above statement - and that should be concerning.

I agree.

Comment author: shminux 11 April 2015 08:37:36PM 3 points [-]

What else do you expect a "US Presidential candidate" to say? Also, the guy appears to have more ego than smarts.

Comment author: RedMan 18 April 2015 04:47:55PM 0 points [-]

What are the dangerous things that a malign AI superintelligence can do which a large enough group of humans with sufficient motivation cannot? All the "horrible threats" listed are things that are well within the ability of large organizations that exist today. So why would an "AI superintelligence" able to execute those actions on its own, or at the direction of its human masters, be more of a problem than the status quo?

Comment author: [deleted] 13 April 2015 08:06:49AM 0 points [-]

I don't really trust America either, but our preferences don't matter much. It will be either an international consortium or America; Russia and the others don't really have a big enough equivalent of the Valley for this.