Brian_Tomasik comments on International cooperation vs. AI arms race - Less Wrong

15 Post author: Brian_Tomasik 05 December 2013 01:09AM




Comment author: Brian_Tomasik 10 December 2013 01:44:52AM 0 points

If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.

At an object level, if AI research goes secret at some point, it seems unlikely, though not impossible, that if team A develops human-level AGI, then team B will develop super-human-level AGI before team A does. If the research is fully public (which seems dubious but again isn't impossible), then these advantages would be less pronounced, and it might well be that many teams could be in close competition even after human-level AGI. Still, because human-level AGI can be scaled to run very quickly, it seems likely it could bootstrap itself to stay in the lead.

Comment author: timtyler 10 December 2013 11:56:08AM -1 points

> If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.

Note that humans haven't "taken over the world" in many senses of the phrase. We are massively outnumbered and out-massed by our own symbionts - and by other creatures.

Machine intelligence probably won't be a "secret" technology for long - due to the economic pressure to embed it.

While it's true that things will go faster in the future, that applies about equally to all players - a phenomenon commonly known as "internet time".