I put "trivial" in quotes because there are obviously some exceptionally large technical achievements that would still need to occur to get here, but suppose we had an AI with a utilitarian utility function of maximizing subjective human well-being (meaning, well-being is not something as simple as physical sensation of "pleasure" and depends on the mental facts of each person) and let us also assume the AI can model this "well" (lets say at least as well as the best of us can deduce the values of another person for their well-being). Finally, we will also assume that the AI does not possess the ability to manually rewire the human brain to change what a human values. In other words, the ability for the AI to manipulate another person's values is limited by what we as humans are capable of today. Given all this, is there any concern we should have about making this AI; would it succeed in being a friendly AI?
One argument I can imagine for why this fails to be a friendly AI is that the AI would wire people up to virtual reality machines. However, I don't think that argument works very well, because a person (Cypher from The Matrix excepted) wouldn't appreciate being wired into a virtual reality machine and having their autonomy forcibly removed, which means the action does not succeed in maximizing their well-being.
But I am curious to hear what arguments exist for why such an AI might still fail as a friendly AI.
The more people in what? Any particular moment in time? The complete timeline of any given Everett branch? The whole multiverse?
Between an Everett branch of 10 billion people, and ten Everett branches of 1 billion people each, which do you prefer?
Between 10 billion people who live in the same century, and 1 billion people per century over a span of ten centuries, which do you prefer?
The whole multiverse.
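As a rough formalization of that answer (weighting branches by measure is an extra assumption on my part, not something implied by "the whole multiverse" alone): count every person who ever lives in any branch,

$$N_{\text{total}} \;=\; \sum_{b \in \text{branches}} \mu_b \, N_b,$$

where $N_b$ is the number of distinct people who ever live in branch $b$ and $\mu_b$ is that branch's measure. On this count, the second comparison comes out equal (10 billion people either way), while the first depends on the relative measures $\mu_b$ of the branches.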