Solving the value learning problem is (IMO) the key technical challenge for AI safety.
How good or bad is an approximate solution?
EDIT for clarity:
By "approximate value learning" I mean something which does a good (but suboptimal from the perspective of safety) job of learning values. So it may do a good enough job of learning values to behave well most of the time, and be useful for solving tasks, but it still has a non-trivial chance of developing dangerous instrumental goals, and is hence an Xrisk.
Considerations:
1. How would developing good approximate value learning algorithms affect AI research/deployment?
It would enable more AI applications. For instance, for many robotics tasks, such as producing a smooth grasping motion, it is difficult to manually specify a utility function. This could have positive or negative effects:
Positive:
* It could encourage more mainstream AI researchers to work on value-learning.
Negative:
* It could encourage more mainstream AI developers to use reinforcement learning to solve tasks for which "good-enough" utility functions can be learned.
Consider a value-learning algorithm which is "good-enough" to learn how to perform complicated, ill-specified tasks (e.g. folding a towel). But it's still not quite perfect, and so every second, there is a 1/100,000,000 chance that it decides to take over the world. A robot using this algorithm would likely pass a year-long series of safety tests and seem like a viable product, but would be expected to decide to take over the world in ~3 years.
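For concreteness, here is a quick back-of-the-envelope check of that arithmetic, assuming (as a toy model) that the 1-in-100,000,000 failure chance is independent each second:

```python
# Toy check of the 1-in-100,000,000-per-second example.
# Assumes each second's failure chance is independent (a simplifying assumption).
p_fail_per_second = 1e-8
seconds_per_year = 365.25 * 24 * 3600          # ~3.16e7 seconds

# Expected time until the robot "decides to take over the world":
expected_years = (1 / p_fail_per_second) / seconds_per_year
print(f"Expected time to failure: {expected_years:.1f} years")    # ~3.2 years

# Probability of passing a year-long safety test without incident:
p_pass_year = (1 - p_fail_per_second) ** seconds_per_year
print(f"P(no incident during a 1-year test): {p_pass_year:.2f}")  # ~0.73
```

So the robot has roughly a 73% chance of sailing through a year of testing, yet is expected to fail within about three years of deployment.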
Without good-enough value learning, these tasks might just not be solved, or might be solved with safer approaches involving more engineering and less performance, e.g. using a collection of supervised learning modules and hand-crafted interfaces/heuristics.
2. What would a partially aligned AI do?
An AI programmed with an approximately correct value function might fail
* dramatically (see, e.g. Eliezer, on AIs "tiling the solar system with tiny smiley faces.")
or
* relatively benignly (see, e.g. my example of an AI that doesn't understand gustatory pleasure)
Perhaps a more significant example of benign partial alignment would be an AI that has not learned all human values, but is corrigible and handles its uncertainty about its utility in a desirable way.
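As a toy illustration of what "handling its uncertainty about its utility in a desirable way" could look like, here is a minimal sketch (the posterior samples, the disagreement threshold, and the defer_to_human fallback are all hypothetical illustrations, not an established algorithm): the agent maintains a posterior over candidate utility functions and defers to a human whenever those candidates disagree too much about the best action.

```python
import numpy as np

def choose_action(actions, utility_samples, disagreement_threshold=0.1):
    """Toy decision rule: act on expected utility, but defer to a human
    when sampled utility functions disagree strongly about the best action.

    utility_samples: functions mapping an action to a utility, treated as
    posterior samples over what the "true" utility function might be.
    """
    # Utility of each action under each sampled utility function.
    scores = np.array([[u(a) for a in actions] for u in utility_samples])

    expected = scores.mean(axis=0)            # average over utility samples
    best = int(np.argmax(expected))

    # How often do individual samples disagree with the expected-utility choice?
    per_sample_best = scores.argmax(axis=1)
    disagreement = np.mean(per_sample_best != best)

    if disagreement > disagreement_threshold:
        return "defer_to_human"               # corrigible fallback under uncertainty
    return actions[best]

# Example with two candidate utility functions that disagree:
actions = ["make_tea", "rewire_the_power_grid"]
samples = [lambda a: 1.0 if a == "make_tea" else 0.0,
           lambda a: 1.0 if a == "rewire_the_power_grid" else 0.0]
print(choose_action(actions, samples))  # -> "defer_to_human"
```

The point is only that an agent which is uncertain about its utility function can route that uncertainty into corrigible behavior, rather than acting confidently on a possibly-wrong value function.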
Is it even possible to have a perfectly aligned AI?
If you teach an AI to model the function f(x) = sin(x), it will only be "aligned" with your goal of computing sin(x) up to some finite numerical precision. You either accept an arithmetic cutoff or the AI turns the universe to computronium in order to better approximate Pi.
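To make the "arithmetic cutoff" concrete, here is a minimal sketch using a truncated Taylor series (the tolerance value is an arbitrary choice; the point is that some explicit cutoff is unavoidable):

```python
import math

def sin_approx(x, tol=1e-12):
    """Truncated Taylor series for sin(x): keep adding terms until the next
    term falls below an explicit precision cutoff. Some cutoff is always
    required; computing sin(x) exactly would take unbounded computation."""
    term, total, n = x, 0.0, 0
    while abs(term) > tol:
        total += term
        n += 1
        # Next term: multiply by -x^2 / ((2n)(2n+1))
        term *= -x * x / ((2 * n) * (2 * n + 1))
    return total

print(sin_approx(1.0), math.sin(1.0))  # agree only up to the chosen tolerance
```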
If you try to teach an AI something like handwritten digit classification, it'll come across examples that even a human wouldn't be able to identify accurately. There is no "truth" to whether a given image is a 6 or a very badly drawn 5, other than the intent of the person who wrote it. The AI's map can't really be absolutely correct because the notion of correctness is not unambiguously defined in the territory. Is it a 5 because the person who wrote it intended it to be a 5? What if 75% of humans say it's a 6?
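One way to make that ambiguity concrete: the most honest label for such an image is a distribution over annotator judgments, not a single digit (the vote counts below are hypothetical):

```python
from collections import Counter

# Hypothetical annotator votes for one ambiguous handwritten digit:
# 75% of labelers read it as a 6, 25% as a 5. There is no further "truth"
# in the image itself to appeal to, beyond the writer's intent.
votes = Counter({6: 75, 5: 25})

total = sum(votes.values())
soft_label = {digit: count / total for digit, count in votes.items()}
print(soft_label)  # {6: 0.75, 5: 0.25} -- the "correct answer" is a distribution,
                   # not a single digit, unless we also know the writer's intent.
```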
Since there will always be both computational imprecision and epistemic uncertainty in the territory, the best we can ever do is probably an approximate solution that captures what is important to whatever degree of confidence we ultimately decide is sufficient.
I edited to clarify what I mean by "approximate value learning".