I would instead ask: "What preferences would this agent have in a counterfactual universe in which they were fully informed and rational, but otherwise identical?"
Quoting a forum post from a couple of years ago:
"The problem with trying to extrapolate what a person would want with perfect information is, perfect information is a lot of fucking information. The human brain can't handle that much information, so if you want your extrapolatory homunculus to do anything but scream and die like someone put into the Total Perspective Vortex, you need to enhance its information processing capabilities. And once you've reached that point, why not improve its general intelligence too, so it can make better decisions? Mayb...