In the coming years, many decisions will need to be made around AI. This post is a brief introduction to my thoughts on the subject. Ideally, working on these questions would be some people's job. Some examples of the decisions that will need to be made:
How much money should be spent on AI safety?
Should there be any regulation of compute?
What topics on AI safety should be researched first?
How should these decisions be made?
Making a decision can be roughly decomposed into two parts: a world model/simulation, and preferences among states or actions in the world. When a decision is being made, you can examine both parts and look for weaknesses in either that could lead to a negative outcome.
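As a toy illustration of that decomposition, here is a minimal sketch in Python in which the world model and the preference function are separate, swappable inputs to the decision procedure. All the function names and numbers are invented for illustration, not taken from anywhere:

```python
# A minimal sketch of the decomposition: the decision procedure takes a
# world model (predicting the outcome of an action) and a preference
# function (scoring outcomes) as separate, swappable parts.

def choose(actions, world_model, preference):
    """Pick the action whose predicted outcome is most preferred."""
    return max(actions, key=lambda a: preference(world_model(a)))

# Toy instantiation: how much to invest in some safety measure.
def world_model(investment):
    # Crude simulation: more investment means less residual risk, more cost.
    return {"risk": 1.0 / (1.0 + investment), "cost": investment}

def preference(outcome):
    # Higher is better; the weights encode how much we dislike risk vs cost.
    return -(10 * outcome["risk"] + outcome["cost"])

print(choose([0, 1, 2, 5, 10], world_model, preference))  # -> 2
```

A weakness in either part (a world model that mispredicts risk, or preference weights that don't reflect what people actually want) changes the chosen action, which is why both deserve scrutiny.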
If we are trying to make decisions for the good of humanity, we should collect preferences from as many people as possible. Opinion polls, user research, and world building are tools that help elicit people's preferences.
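One way to turn many people's stated preferences into a single ranking is a standard aggregation rule such as a Borda count. A sketch, with invented options and ballots:

```python
# A sketch of aggregating ranked poll responses with a Borda count.

from collections import defaultdict

# Each ballot ranks options from most to least preferred (all invented).
ballots = [
    ["regulate compute", "fund safety research", "do nothing"],
    ["fund safety research", "regulate compute", "do nothing"],
    ["fund safety research", "do nothing", "regulate compute"],
]

scores = defaultdict(int)
for ballot in ballots:
    for rank, option in enumerate(ballot):
        scores[option] += len(ballot) - 1 - rank  # top choice scores most

# Sort options by aggregate score, highest first.
for option, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(option, score)
# fund safety research 5, regulate compute 3, do nothing 1
```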
The world model is generally decomposed into many different models: for example, physical models of the world, economic models of other actors, and a self-model covering tasks, projects, and programs of work.
For AI, the key ones seem to include:
Technological development (including AI)
Social impacts of technological development
Models of AI safety
Evolutionary models
Models of other threats, so that appropriate trade-offs can be made
Improving these models can be done in a number of ways: by making more accurate models of the world, or by integrating disparate world models together. The models should be composable, so that outputs from one can be fed into another, and they should be checked for accuracy, with more accurate models promoted. I've talked more about improving world simulations on my blog.
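To make the composability point concrete, here is a minimal sketch in which each model is a function whose output feeds the next. The models and numbers are placeholders, not real forecasts:

```python
# A minimal sketch of composable models: each model is a function whose
# output is fed into the next, so disparate models can be chained.

def tech_model(year):
    # Placeholder proxy for AI capability at a given year.
    return {"capability": (year - 2020) * 0.1}

def social_model(tech_state):
    # Placeholder mapping from capability to a social-impact score.
    return {"disruption": tech_state["capability"] ** 2}

def compose(*models):
    """Chain models so the output of one becomes the input of the next."""
    def pipeline(x):
        for model in models:
            x = model(x)
        return x
    return pipeline

forecast = compose(tech_model, social_model)
print(forecast(2030))  # {'disruption': 1.0}
```

Keeping the interface between models explicit also makes accuracy-checking easier: each model can be scored against outcomes independently, and the better-scoring one swapped in without rebuilding the pipeline.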
If the world model cannot be refined into a single model, tools from decision making under deep uncertainty may reduce the risk of basing decisions on one bad model.
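One such tool is minimax regret: rather than trusting a single model, score each action under several candidate models and pick the action whose worst-case regret is smallest. A toy sketch, with invented payoffs:

```python
# A sketch of minimax regret across several candidate world models,
# with invented payoffs: payoffs[action][model].

payoffs = {
    "heavy regulation": {"fast takeoff": 9, "slow takeoff": 3},
    "light regulation": {"fast takeoff": 2, "slow takeoff": 8},
    "no regulation":    {"fast takeoff": 0, "slow takeoff": 9},
}

models = ["fast takeoff", "slow takeoff"]

# Best achievable payoff under each model, used as the regret baseline.
best = {m: max(p[m] for p in payoffs.values()) for m in models}

def max_regret(action):
    """Worst-case shortfall versus the best action, over all models."""
    return max(best[m] - payoffs[action][m] for m in models)

choice = min(payoffs, key=max_regret)
print(choice, max_regret(choice))  # heavy regulation 6
```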
Sometimes direct information about world models and preferences cannot be shared (due to information hazards), in which case hive-like decision making can be used.
Improving global decision making can be seen as a public good, so it might be a good target for philanthropy.