Indeed, I find it somewhat notable that high-level arguments for AI risk rarely attend in detail to the specific structure of an AI’s motivational system, or to the sorts of detailed trade-offs a not-yet-arbitrarily-powerful AI might face in deciding whether to engage in a given sort of problematic power-seeking. [...] I think my power-seeking report is somewhat guilty in this respect; I tried, in my report on scheming, to do better.
Your 2021 report on power-seeking does not appear to discuss the cost-benefit analysis that a misaligned AI would conduct whe...
Retracted. I apologize for mischaracterizing the report and for the unfair attack on your work.