I don't care much to live in a universe where some god-like process has already figured everything out for us, or could in principle but doesn't, because we want to do it the slow way.
If its nature is to follow our preference to avoid spoilers, then in a very real sense it couldn't in principle do otherwise.
The problem is knowing that an oracle exists that could answer any question (or that the knowledge to create one exists), if only humanity wanted it to, if it were our preference. That pretty much destroys any curiosity.
Right now I enjoy learning knowledge that is already known, because it makes me more knowledgeable than other people. In a future where there is a process like CEV, that is completely unnecessary, because the only reason people stay stupid is that they want to. Right now there is also the curiosity involved that learning will ultimate...
Suppose we want to use the convergence of humanity's preferences as the utility function of a seed AI that is about to determine the future of its light cone.
We figured out how to get an AI to extract preferences from human behavior and brain activity. The AI figured out how to extrapolate those values. But my values and your values and Sarah Palin's values aren't fully converging in the simulation running the extrapolation algorithm. Our simulated beliefs are converging because on the path to reflective equilibrium our partially simulated selves have become true Bayesians and Aumann's Agreement Theorem holds. But our preferences aren't converging quite so well.
What to do? We'd like the final utility function in the FOOMed AI to adhere to some common-sense criteria (Arrow's standard conditions):

- Unrestricted domain: the aggregation procedure works for any profile of individual preferences.
- Pareto efficiency: if every agent prefers A to B, the aggregate prefers A to B.
- Independence of irrelevant alternatives: the aggregate ranking of A versus B depends only on how agents rank A versus B.
- Non-dictatorship: no single agent's preferences automatically determine the aggregate.
Now, Arrow's impossibility theorem says that we can only get the FOOMed AI's utility function to adhere to these criteria if the extrapolated preferences of each partially simulated agent are related to each other cardinally ("A is 2.3x better than B!") instead of ordinally ("A is better than B, and that's all I can say").
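The failure mode with purely ordinal preferences is easy to exhibit. Here is a minimal sketch (the three rankings are hypothetical, not from the post) of the classic Condorcet cycle: pairwise majority vote over ordinal ballots yields an intransitive "social preference," so no aggregate ranking satisfying Arrow's criteria can be read off from it.

```python
# Three agents' ordinal rankings over options A, B, C (best first).
# These particular ballots are a hypothetical illustration.
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of agents ranks x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Pairwise majority voting produces a cycle: A beats B, B beats C,
# and C beats A, so the "social preference" is intransitive.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

With only ordinal information, every agent's ballot here is symmetric with every other's, so there is no non-arbitrary way to break the cycle.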
Now, if you're an old-school ordinalist about preferences, you might be worried. Ever since Vilfredo Pareto pointed out that cardinal models of a person's preferences go far beyond our behavioral data, and that as far as we can tell utility has "no natural units," some economists have tended to assume that in our models of human preferences, preferences must be represented ordinally, not cardinally.
But if you're keeping up with the latest cognitive neuroscience, you might not be quite so worried. It turns out that preferences are encoded cardinally after all, and they do have a natural unit: action potentials per second. With cardinally encoded preferences, we can develop a utility function that represents our preferences and adheres to the common-sense criteria listed above.
Whaddya know? The last decade of cognitive neuroscience has produced a somewhat interesting result concerning the plausibility of CEV.