timtyler comments on International cooperation vs. AI arms race - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Doesn't someone have to hit the ball back for it to be "tennis"? If anyone does so, we can then compare reference classes - and see who has the better set. Are you suggesting this sort of thing is not productive? On what grounds?
Looks like someone already did.
And I'm not just suggesting this is not productive, I'm saying it's not productive. My reasoning is standard: see here and also here.
If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.
At an object level, if AI research goes secret at some point, it seems unlikely, though not impossible, that if team A develops human-level AGI, then team B will develop super-human-level AGI before team A does. If the research is fully public (which seems dubious but again isn't impossible), then these advantages would be less pronounced, and it might well be that many teams could be in close competition even after human-level AGI. Still, because human-level AGI can be scaled to run very quickly, it seems likely it could bootstrap itself to stay in the lead.
Note that humans haven't "taken over the world" in many senses of the phrase. We are massively outnumbered and out-massed by our own symbionts - and by other creatures.
Machine intelligence probably won't be a "secret" technology for long - due to the economic pressure to embed it.
While it's true that things will go faster in the future, that applies about equally to all players - a phenomenon commonly known as "internet time".