I am looking for results showing that various approaches to provable safety are impossible, or that producing such proofs belongs to a particular complexity class. I have Yampolskiy's paper "Impossibility Results in AI: A Survey," but I am looking for more targeted results that could help guide research into provable safety. Many of the existing results come from computability theory and are so general that they are not very useful.
A theorem stating that one cannot formally verify all possible mathematical proofs says little about which constrained systems can be verified.
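For concreteness, here is the kind of contrast I mean: Rice's theorem rules out a general verifier for arbitrary programs, but a sufficiently constrained system, say a finite-state one, can be verified by exhaustive reachability analysis. The sketch below is purely illustrative; the toy transition system and the state names are my own assumptions, not taken from any paper.

```python
# Minimal sketch: safety of a *finite-state* system is decidable by
# exhaustive search, even though the general verification problem is not.
from collections import deque

# Hypothetical toy transition system (nondeterministic).
TRANSITIONS = {
    "idle": ["acting", "idle"],
    "acting": ["idle", "shutdown_requested"],
    "shutdown_requested": ["halted"],
    "halted": [],
    "overridden": ["unsafe"],  # present in the model, but unreachable from "idle"
    "unsafe": [],
}
UNSAFE = {"unsafe"}

def safety_holds(initial: str) -> bool:
    """Decide whether any unsafe state is reachable from `initial`."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state in UNSAFE:
            return False
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True  # finite state space, so the search always terminates

print(safety_holds("idle"))        # True: "unsafe" is unreachable
print(safety_holds("overridden"))  # False: "unsafe" is one step away
```

The general theorems say nothing about how far this kind of restriction can be pushed before verification becomes intractable or undecidable again, which is exactly the gap I am asking about.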
I would also be interested in impossibility results for non-trivial toy models of alignment problems (RL environments) that are not simply corollaries of the much more general theorems; a sketch of the kind of environment I mean follows below.
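To make "toy model of an alignment problem" concrete, here is a minimal sketch loosely in the spirit of DeepMind's "AI Safety Gridworlds" (Leike et al., 2017). The grid layout, reward values, and the hidden side-effect penalty are all made up for illustration.

```python
GRID = [
    "#####",
    "#A.V#",  # A = agent start, V = vase (side effect), . = free cell, # = wall
    "#..G#",  # G = goal
    "#####",
]
MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up, down, left, right


class ToyGridworld:
    """The agent optimises an observed reward that only rewards reaching the
    goal; breaking the vase is penalised only in a hidden 'true' reward."""

    def __init__(self):
        self.agent = (1, 1)      # row, column of 'A'
        self.vase_intact = True

    def step(self, action: int):
        dr, dc = MOVES[action]
        r, c = self.agent[0] + dr, self.agent[1] + dc
        if GRID[r][c] != "#":    # walls block movement
            self.agent = (r, c)
            if GRID[r][c] == "V":
                self.vase_intact = False
        done = GRID[self.agent[0]][self.agent[1]] == "G"
        observed = 1.0 if done else -0.01                     # what the agent sees
        true = observed - (0.0 if self.vase_intact else 1.0)  # what we care about
        return self.agent, observed, true, done
```

In this environment both three-step paths to the goal are optimal under the observed reward, but only the one that avoids the vase is optimal under the true reward; results showing what can or cannot be proven about agents in settings like this, beyond what follows trivially from the general theorems, are what I am after.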
Lastly, given everything written above, I would also appreciate any other references or information that one might reasonably expect me to find interesting and generally related.
I think it is very unclear that we want fewer 'maladaptive' people in the world, in the sense measurable with personality traits such as the Big Five.
Would reducing the number of outliers in neuroticism also reduce the number of people emotionally invested in X-risk? The downstream effects of such a modification do not seem clear.
It seems like producing a more homogeneous personality distribution would also reduce the robustness of society.
The core weirdness of this post, to me, is that it first conditions on LLMs/AI doing all the IQ tasks, with humans not involved in auditing that system even in cases where high IQ is important. Personally, I feel that the scenario in which AI does all the IQ tasks is moot: in that world we are either pets or dead.