Brian_Tomasik comments on International cooperation vs. AI arms race - Less Wrong

Post author: Brian_Tomasik 05 December 2013 01:09AM


Comment author: Brian_Tomasik 09 December 2013 01:19:20AM 1 point

In the opening sentence I used the (perhaps unwise) abbreviation "artificial general intelligence (AI)" because I meant AGI throughout the piece, but I wanted to be able to say just "AI" for convenience. Maybe I should have said "AGI" instead.

Comment author: timtyler 09 December 2013 02:21:50AM -2 points

The first OS didn't take over the world. The first search engine didn't take over the world. The first government didn't take over the world. The first agent of some type taking over the world is dramatic - but there's no good reason to think that it will happen. History better supports models where pioneers typically get their lunch eaten by bigger fish coming up from behind them.

Comment author: passive_fist 09 December 2013 02:59:11AM 2 points

As has been pointed out numerous times on LessWrong, history is not a very good guide for dealing with AI, since it is likely to be a singular (if you'll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself, and we currently have no information about how life arose (did the first self-replicating molecule lead to all life as we know it, or were there many competing forms of life, one of which eventually won?)

Comment author: TheAncientGeek 09 December 2013 12:01:24PM 0 points
Comment author: passive_fist 09 December 2013 08:51:46PM 0 points

What is meant by 'known risk' though? Do you mean 'knowledge that AI is possible', or 'knowledge about what it will entail'? I agree with you completely that we have no information about the latter.

Comment author: TheAncientGeek 10 December 2013 05:47:52PM 0 points

The latter.

Comment author: timtyler 09 December 2013 11:15:17AM -2 points

As has been pointed out numerous times on LessWrong, history is not a very good guide for dealing with AI, since it is likely to be a singular (if you'll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself [...]

What, a new thinking technology? You can't be serious.

Comment author: nshepperd 09 December 2013 01:17:59PM 3 points

Yes, let's engage in reference class tennis instead of thinking about object-level features.

Comment author: timtyler 10 December 2013 12:11:26AM 0 points

Doesn't someone have to hit the ball back for it to be "tennis"? If anyone does so, we can then compare reference classes - and see who has the better set. Are you suggesting this sort of thing is not productive? On what grounds?

Comment author: nshepperd 10 December 2013 02:00:21AM 0 points

Doesn't someone have to hit the ball back for it to be "tennis"?

Looks like someone already did.

And I'm not just suggesting this is not productive, I'm saying it's not productive. My reasoning is standard: see here and also here.

Comment author: Brian_Tomasik 10 December 2013 01:44:52AM 0 points

If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.

At an object level, if AI research goes secret at some point, it seems unlikely, though not impossible, that if team A develops human-level AGI first, team B will develop super-human-level AGI before team A does. If the research is fully public (which seems dubious but again isn't impossible), then first-mover advantages would be less pronounced, and many teams might remain in close competition even after human-level AGI. Still, because human-level AGI can be scaled to run very quickly, it seems likely it could bootstrap itself to stay in the lead.

Comment author: timtyler 10 December 2013 11:56:08AM -1 points

If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.

Note that humans haven't "taken over the world" in many senses of the phrase. We are massively outnumbered and out-massed by our own symbionts - and by other creatures.

Machine intelligence probably won't be a "secret" technology for long - due to the economic pressure to embed it.

While it's true that things will go faster in the future, that applies about equally to all players - in a phenomenon commonly known as "internet time".