He evocatively termed creatures without second-order desires (other animals, human babies) "wantons."
Can anyone link to research on when and how human babies develop second-order wants?
First- and second-order wants strike me as having to do with near and far modes. (I also suspect near/far explains hyperbolic discounting/procrastination, maybe also why the Wason selection task is hard (abstractness being associated with far mode), and maybe the endowment effect.)
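For concreteness, here is the standard formalization of hyperbolic discounting and the preference reversal it produces. This is a textbook Mazur/Ainslie-style sketch added for illustration, not something from the comment; k and r are fitted discount rates, and the specific numbers are made up.

```latex
% Hyperbolic vs. exponential discounting of a reward of size A at delay D
% (standard forms; k, r > 0 are fitted discount rates):
\[
  V_{\text{hyp}}(A, D) = \frac{A}{1 + kD},
  \qquad
  V_{\text{exp}}(A, D) = A\, e^{-rD}.
\]
% Worked example with k = 1: choose between 5 at delay d and 10 at delay d + 5.
%   At d = 0:   5/(1+0) = 5.00  >  10/(1+5)  \approx 1.67   -> take the smaller-sooner reward.
%   At d = 10:  5/(1+10) \approx 0.45  <  10/(1+15) \approx 0.63   -> plan on the larger-later reward.
% Under exponential discounting the ratio V_exp(5, d) / V_exp(10, d+5) = 0.5 e^{5r}
% does not depend on d, so exponential discounters never reverse.
```

That reversal has the same shape as the smoker case quoted below: from far away the agent favors the larger-later option, and up close the smaller-sooner one wins.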
Nonwantons, however, can represent a model of an idealized preference structure — perhaps, for example, a model based on a superordinate judgment of long-term lifespan considerations... So a human can say: I would prefer to prefer not to smoke. This second-order preference can then become a motivational competitor to the first-order preference. At the level of second-order preferences, I prefer to prefer to not smoke; nevertheless, as a first-order preference, I prefer to smoke.
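To make the two levels concrete, here is a minimal toy model of the smoker example. The Agent class, its field names, and the numbers are my own illustrative choices, not Stanovich's formalism; it assumes nothing beyond the quote above.

```python
# Toy model of first- vs. second-order preferences (illustrative only;
# all names and numbers here are my own, not Stanovich's).

from dataclasses import dataclass

@dataclass
class Agent:
    # First-order preference: utilities over actions (what I want).
    wants: dict[str, float]
    # Candidate first-order preference structures I could have, by label.
    candidates: dict[str, dict[str, float]]
    # Second-order preference: utilities over those candidates
    # (what I want to want).
    wants_to_want: dict[str, float]

    def act(self) -> str:
        # Behavior is driven by the first-order level.
        return max(self.wants, key=self.wants.get)

    def endorsed(self) -> str:
        # The preference structure I prefer to have.
        return max(self.wants_to_want, key=self.wants_to_want.get)

    def rationally_integrated(self) -> bool:
        # Stanovich's "rational integration": the preference I have
        # is the one I prefer to have.
        return self.candidates[self.endorsed()] == self.wants

smoker = Agent(
    wants={"smoke": 1.0, "abstain": 0.2},
    candidates={
        "smoker_prefs": {"smoke": 1.0, "abstain": 0.2},
        "nonsmoker_prefs": {"smoke": 0.1, "abstain": 1.0},
    },
    wants_to_want={"smoker_prefs": 0.1, "nonsmoker_prefs": 1.0},
)

print(smoker.act())                    # smoke: the first-order preference wins
print(smoker.endorsed())               # nonsmoker_prefs: what I prefer to prefer
print(smoker.rationally_integrated())  # False: "I prefer to prefer not to smoke"
```

In the quote's terms, rationally_integrated() comes out false exactly when the second-order preference has become a motivational competitor to the first-order one.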
One problem: How do we distinguish actual second-order preferences ("I would prefer to prefer not to smoke") from improper beliefs about one's own preferences, e.g. belief in belief ("It is good to think that smoking is bad")?
It seems to me that the obvious answer is to ask, "Well, is smoking actually bad?" In other words, we shouldn't expect to find out how good our reflective preferences are without actually asking what sort of world we live in, and whether agents with those preferences tend to do well in that sort of world.
"Actually bad" and "do well" depend on values, right? So that seems like the start to a better approach, but isn't enough.
Keith Stanovich is a leading expert on the cogsci of rationality, but he has also written on a problem related to CEV, that of the "rational integration" of our preferences. Here he is on pages 81-86 of Rationality and the Reflective Mind (currently my single favorite book on rationality, out of the dozens I've read), quoted above.
Also see: The Robot's Rebellion, Higher-order preferences and the master rationality motive, Wanting to Want, The Human's Hidden Utility Function (Maybe), Indirect Normativity