I believe there are people with far greater knowledge than me who can point out where I am wrong, because I do suspect my reasoning is flawed. Still, I cannot see why it would be highly infeasible to train a sub-AGI intelligent AI that would most likely be aligned and able to solve AI alignment.
My assumptions are as follows:
1. Current AI seems aligned to the best of its ability.
2. PhD-level researchers would eventually solve AI alignment if given enough time.
3. PhD-level intelligence is below AGI-level intelligence.
4. There is no clear reason why current AI, using current-paradigm technology, would become unaligned before reaching PhD-level intelligence.
5. We could train AI until it reaches PhD-level intelligence and then let it solve AI alignment, without it needing to self-improve.
The point I am least confident in is point 4, since we have no clear way of knowing at what intelligence level an AI model would become unaligned.
Multiple organisations already seem to think that training an AI to solve alignment for us is the best path (e.g. superalignment).
Attached is my mental model of what level of intelligence different tasks require and what level different people have.
Figure 1: My mental model of natural research capability (RC; basically IQ, but more strongly correlated with research ability), where the intelligence needed to align AI is above the level of an average PhD researcher, but below that of the smartest human in the world, and even further below AGI.
It is a fair point that we should distinguish alignment in the sense that the AI does what we want and expect it to do, from alignment in the sense of having a deep understanding of human values and a good idea of how to properly optimize for them.
However, most humans probably don't have a deep understanding of human values either, yet I would see it as a positive outcome if a random human were picked and given god-level abilities. The same goes for ChatGPT: if you ask it what it would do as a god, it says it would prevent war, address climate issues, reduce poverty, provide universal access to education, etc.
So if we get an AI that does all of those things without a deeper understanding of human values, that is fine by me. Maybe we never even have to solve alignment in the latter sense of the word to create a utopia?