- Building a superhuman AI focused on a specific task is more economically valuable than building a much more expensive AI that is bad at a large number of things.
It also comes with ~0 risk of paperclipping the world: AlphaZero is godlike at chess without needing to hijack all resources for its purposes.
Yes, I think performance ultimately matters much more than risk preferences. If you really want to take risk into account, you can just define utility as a function of wealth and then maximize the growth of utility instead. But I think risk-aversion has been way overemphasized by academics who weren't thinking about ergodicity and were reasoning along St. Petersburg Paradox lines: any +EV bet must be rational, so when people decline +EV bets they must be irrationally risk-averse.
What you actually want is to maximize the growth rate of your bankroll. You can go broke making +EV bets: the time-average growth rate of a repeated bet can be negative even when each individual bet's expected value is positive. For something like a lottery, the Kelly Criterion is the solution you're looking for: a bet is "rational" iff the Kelly Criterion says you should make it.
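To make that concrete, here is a quick Monte Carlo sketch with hypothetical numbers (a 60%-win, even-money bet, so +EV every round). Betting the whole bankroll each time ruins you almost surely, while the Kelly fraction f* = p - (1 - p)/b maximizes the long-run growth rate:

```python
import random

# Kelly fraction for a b-to-1 payoff bet won with probability p:
#   f* = p - (1 - p) / b
def kelly_fraction(p, b):
    return p - (1 - p) / b

def simulate(frac, p=0.6, b=1.0, rounds=1000, start=1.0):
    """Bet a fixed fraction of the bankroll each round; return final bankroll."""
    bankroll = start
    for _ in range(rounds):
        stake = frac * bankroll
        bankroll += b * stake if random.random() < p else -stake
    return bankroll

random.seed(0)
p, b, trials = 0.6, 1.0, 1000  # hypothetical 60%-win, even-money bet (+EV)
for frac in (1.0, kelly_fraction(p, b)):  # all-in vs. Kelly
    ruined = sum(simulate(frac, p, b) < 1e-9 for _ in range(trials))
    print(f"fraction={frac:.2f}: ruined in {ruined}/{trials} runs of 1000 bets")
```

With these numbers the Kelly fraction is 0.2. The all-in bettor is ruined by the first loss, which arrives almost surely within 1000 rounds, even though every single bet was +EV; the Kelly bettor essentially never goes broke.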
Why wouldn’t AGI build a superhuman understanding of ethics, which it would then use to guide its decision-making?
I think gears-level models are really the key here. Without a gears-level model you are flying blind, and the outside view is very helpful when you're flying blind. But with a solid understanding of the causal mechanisms in a system, you don't need to rely on others' opinions to make good predictions and decisions.
My advice: read Stephen Wolfram on this question. For instance: https://www.wolframscience.com/nks/p315--the-intrinsic-generation-of-randomness/
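If you want to play with the idea, here's a minimal sketch (my code, not Wolfram's) of the Rule 30 cellular automaton he uses as his flagship example of intrinsic randomness generation: a fully deterministic update rule whose center column nevertheless looks statistically random.

```python
# Rule 30: each cell's next state is left XOR (center OR right).
# Started from a single black cell, the center column behaves like a
# pseudorandom bit stream despite the rule being deterministic.
def rule30_center_bits(n_bits, width=257):
    row = [0] * width
    row[width // 2] = 1  # single black cell in the middle
    bits = []
    for _ in range(n_bits):
        bits.append(row[width // 2])
        # Wrap at the edges; width is large enough that wraparound
        # effects never reach the center for n_bits <= 128 steps.
        row = [row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]
    return bits

print("".join(map(str, rule30_center_bits(64))))
```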