I think there are a couple of interesting elements here.
- If individual AI representatives act on a person's preferences/values, there will be many situations where the optimal move is not what that person believes should be done.
Take a simple example. A large part (basically all?) of the US population wants cheap housing to be available, and for elite housing to be built in a value-maximizing way (aka the elite want to get their money's worth). Yet common individual preferences are "no new housing built near me, where the noise/traffic/sight will affect me," "building new luxury housing won't lower the market price for housing because demand is infinite," and "also, I don't like seeing homeless people."
What a person claims to want is opposed to how they want the government to act.
This will also make it difficult to audit one's AI representative. Decisions will become extremely complex negotiations.
- If a single person's only voice is a vote, then for most issues the preferences of most voters don't matter; they can be ignored on the margin. This is because current democracy "bundles" decisions. Perhaps you had in mind a direct democracy where a person's AI representative votes on every decision.
If you can separate the how from the what, I wonder what people actually disagree on. An enormous number of political conflicts seem to be disputes over the how, where people cannot agree on which policy has the highest probability of achieving a goal.
This is essentially just human ignorance: given a common data set about the world, rational agents cannot agree to disagree. At any instant in time there is exactly one optimal policy, the one with the highest expected value (EV) as measured by backtesting, etc.
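As a toy sketch of what I mean (everything here is hypothetical: the policies, the scenarios, and the payoff functions are made up for illustration), agents sharing the same data and the same scoring rule would all compute the same ranking:

```python
from statistics import mean

# Hypothetical shared data set: each record is a historical scenario
# on which every candidate policy can be scored.
historical_scenarios = [
    {"demand": 1.2, "supply": 0.9},
    {"demand": 0.8, "supply": 1.1},
    {"demand": 1.0, "supply": 1.0},
]

# Hypothetical candidate policies: each maps a scenario to a payoff.
# In reality these would be complex simulations, not one-liners.
policies = {
    "upzone": lambda s: s["supply"] * 1.3 - s["demand"] * 0.2,
    "status_quo": lambda s: s["supply"] - s["demand"] * 0.5,
    "subsidize": lambda s: s["supply"] * 1.1 - s["demand"] * 0.1,
}

def backtested_ev(policy) -> float:
    """Average payoff of a policy across the shared historical data."""
    return mean(policy(s) for s in historical_scenarios)

# Given identical data and an identical scoring rule, every agent
# arrives at the same ranking -- nothing is left to dispute on "the how".
ranking = sorted(policies, key=lambda name: backtested_ev(policies[name]), reverse=True)
print(ranking)  # the top entry is the unique highest-EV policy (modulo ties)
```

The real disagreement would then be confined to the payoff functions themselves, i.e. the what.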
Are you trying to make plebiscites work with AI? Interesting idea.