JonahSinick comments on Will the world's elites navigate the creation of AI just fine? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks for engaging.
The point is that I would have expected things to be worse, and that I imagine that a lot of others would have as well.
I think that people will understand what makes AI dangerous. The arguments aren't difficult to understand.
Broadly, the most powerful countries are the ones with the most rational leadership (where here I mean "rational with respect to being able to run a country," which is relevant), and I expect this trend to continue.
Also, wealth is skewing toward more rational people over time, and wealthy people have political bargaining power.
Political leaders have policy advisors, and policy advisors listen to scientists. I expect that AI safety issues will percolate through the scientific community before long.
I agree that AI safety requires a substantial shift in perspective — what I'm claiming is that this change in perspective will occur organically substantially before the creation of AI is imminent.
You don't need "most people" to work on AI safety. It might suffice for 10% or fewer of the people who are working on AI to work on safety. There are lots of people who like to be big fish in a small pond, and this will motivate some AI researchers to work on safety even if safety isn't the most prestigious field.
If political leaders are sufficiently rational (as I expect them to be), they'll give research grants and prestige to people who work on AI safety.
We still get people occasionally who argue the point while reading through the Sequences, and that's a heavily filtered audience to begin with.
There's a difference between "sufficiently difficult so that a few readers of one person's exposition can't follow it" and "sufficiently difficult so that after being in the public domain for 30 years, the arguments won't have been distilled so as to be accessible to policy makers."
I don't think that the arguments are any more difficult than the arguments for anthropogenic global warming. One could argue that the difficulty of these arguments has been a limiting factor in climate change policy, but I believe that by far the dominant issue has been misaligned incentives, though I'd concede that this is not immediately obvious.
Things were a lot worse than everyone knew: in the 1950s, Russia almost invaded Yugoslavia, which, according to newly declassified NSA journals, would have triggered a war. The Cuban Missile Crisis could easily have gone hot, and several times early warning systems were triggered by accident. Of course, estimating what could have happened is quite hard.
I agree that there were close calls. Nevertheless, things turned out better than I would have guessed, and indeed, probably better than a large fraction of military and civilian people would have guessed.
World War III seems certain to significantly decrease the human population. From my point of view, I can't rule out anthropic reasoning as an explanation for why there wasn't such a war before I was born.