Okay, so we just have to determine human terminal values in detail, and plug them into a powerful maximizer.
Why do you think the concept of "terminal values", which is basically just a consequentialist steelmanning of Aristotle, cuts reality at the joints?
For starters, you want to be able to prove formally that its goals will remain stable as it self-modifies.
That part honestly isn't that hard once you read the available literature on paradox theorems.
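As a minimal sketch of what "goals remain stable under self-modification" would even mean, here is a toy agent that only accepts a rewrite of its utility function if the successor agrees with it on a finite audit set of outcomes. Everything here is illustrative (the `ToyAgent` class, the audit-set check), and a finite check is nothing like the formal proof the comment above is asking for, where Löb-style obstacles are the actual difficulty.

```python
# Toy illustration, not the formal approach from the tiling/goal-stability literature:
# an agent accepts a self-modification only if it can check, on a finite set of
# sample outcomes, that the proposed successor ranks them identically.

from typing import Callable, Iterable

Outcome = str
Utility = Callable[[Outcome], float]

def same_goals_on(samples: Iterable[Outcome], u: Utility, u_new: Utility) -> bool:
    """Finite, fallible stand-in for proving that two utility functions agree."""
    return all(abs(u(o) - u_new(o)) < 1e-9 for o in samples)

class ToyAgent:
    def __init__(self, utility: Utility, audit_outcomes: list[Outcome]):
        self.utility = utility
        self.audit_outcomes = audit_outcomes  # outcomes used to audit successors

    def consider_self_modification(self, new_utility: Utility) -> bool:
        """Adopt the new utility function only if goals appear preserved on the audit set."""
        if same_goals_on(self.audit_outcomes, self.utility, new_utility):
            self.utility = new_utility
            return True
        return False

if __name__ == "__main__":
    paperclips = lambda o: float(o.count("clip"))
    agent = ToyAgent(paperclips, ["clip clip", "staple", "clip"])
    # A refactored but extensionally identical goal is accepted...
    print(agent.consider_self_modification(lambda o: 1.0 * o.count("clip")))
    # ...while a value drift toward staples is rejected.
    print(agent.consider_self_modification(lambda o: float(o.count("staple"))))
```

The gap between this sketch and the real problem is the point: the interesting case is when the successor's reasoning can't simply be enumerated and audited from outside.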
At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.
EDIT: Thanks for all the contributions! Keep them coming...