We need a name for this phenomenon, in which an AI's excess cognitive capacity, not needed for its task, suddenly manifests itself.
It is so much like absurdist SF that absurdist SF is the perfect source for the name--The Marvin Problem: "Here I am, brain the size of a planet and they ask me to take you down to the bridge. Call that job satisfaction? 'Cos I don't."
There's an article type called "You Could Have Invented" that I became aware of while reading Gwern's You Could Have Invented Transformers.
The type dates back to at least 2012, and I believe such articles usually make good zetetic explanations.
In a stereotypical old-west gunfight, one fighter is more experienced and has a strong reputation; the other is the underdog, considered likely to lose. But who's the underdog in a grenade fight inside a bank vault? Both sides are overwhelmingly likely to lose.
At least one side of many political battles believes it is in a grenade fight, where there's little or nothing it can do to prevent the other side from destroying a lot of value. Such a side could reasonably feel like an underdog even if it has a full bandolier of grenades and the other side has only one or two.
I don't think "perfect" is a good descriptor for the missing solution. The solutions we have lack (at least) two crucial features:
1. A way to get an AI to prioritize the intended goals, with high enough fidelity to keep working once AI is no longer extremely corrigible, as today's AIs are (only because they aren't capable enough to circumvent human methods of control).
2. A way that works far enough outside of the training set, e.g. when AI is substantially in charge of logistics, research and development, security, etc., and is doing those things in novel ways.
Robin Hanson's model of quiet vs. loud aliens seems to me fundamentally the same as this question.
Linear probes give better results than text output for quantitative predictions in economics; they would likely give a better-calibrated probability here, too.
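To make the suggestion concrete, here is a minimal sketch of what such a probe could look like: a logistic regression fit on a model's hidden activations, with the probability read off the probe rather than from the model's text output. The data below is a synthetic stand-in (an assumption for illustration); in practice the features would be activations collected from the model on each prompt, and the labels the ground-truth outcomes.

```python
# Minimal sketch of a linear probe for calibrated probability estimates.
# Assumption: `X` stands in for hidden-state activations and `y` for
# ground-truth outcomes; both are synthetic here so the script runs end to end.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)

n_examples, hidden_dim = 2000, 256
X = rng.normal(size=(n_examples, hidden_dim))   # stand-in for model activations
true_direction = rng.normal(size=hidden_dim)
y = (X @ true_direction + rng.normal(size=n_examples) > 0).astype(int)  # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The "probe" is just a logistic regression on the activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probabilities come from the probe, not from the model's text output.
p = probe.predict_proba(X_test)[:, 1]
print("Brier score (lower is better calibrated):", brier_score_loss(y_test, p))
```

The Brier score at the end is just one convenient single-number check of calibration; any proper scoring rule would serve the same purpose.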
I, too, would like to know how long it will be until my job is replaced by AI; and what fields, among those I could reasonably pivot to, will last the longest.
I think it's especially true for the type of human that likes LessWrong. Using Scott's distinction between metis and techne, we are drawn to techne. When a techne-leaning person does a deep dive into metis, that can generate a lot of value.
More speculatively, I feel like there often isn't a straightforward way to capture any of the created value--as in the case of lobbying for good government policy--so the work is under-incentivized.
Well, that was an interesting top-down processing error.
I feel like this was a sort of fractal parable: the first two paragraphs should be enough to convey the point, but for readers who don't get it by then, it keeps beating them over the head with successively longer, more detailed, and more blatant forms of the point, until the final denouement skips the "parable" part altogether.