My impression, and my worry, is that calling CEV a 'plan for Friendliness content', while true in a sense, gives CEV-as-written too much credit as a stable conceptual framework. My knee-jerk reading of your phrasing conjures a picture of someone thinking hard for many hours about how to design a really clever meta-level extrapolation process. That would probably be useful work compared to many other research approaches, but I would be somewhat surprised if it turned out to be useful before the development of a significantly more thorough notion of preference: preferences as bounded computations, approximately embodied computation, overlapping computations, et cetera.

I may well be underestimating how much creative juice you can get from informal models of something like extrapolation. It could be that you don't have to get AI-precise to reach an abstract theory whose implementation details aren't prohibitively arbitrary, complex, or model-breaking. But I don't think CEV is at the correct level of abstraction to start such reasoning from, and I'm worried that the first step of research on it wouldn't involve the immediate and total conceptual reframing at a more precise, technical level that I think it needs. That said, there is assuredly less technical but still theoretical research to be done on existing systems of morality and moral reasoning, so I'm not advocating against all research that isn't exploring the foundations of computer science.
I should note that the above are my impressions, and I intend them as evidence more than advice. Someone with experience jumping between, say, original research in condensed matter physics and macroscopic complex-systems modeling (one example from a huge set of such people) would know a lot more about the right way to tackle problems like this.
Your second paragraph is of course valid and worth noting, though it unfortunately doesn't describe the folk I'm talking about, who are normally thinking at the level of humanity rather than the individual; I should have said so specifically. I should also note for posterity that I am incredibly tired and (legally) drugged, and was in my previous message as well, so although I feel sane now I may not think so upon reflection.
(Deleted this minor comment as no longer relevant, so instead: how do you add line breaks with iOS 4? 20 seconds of Google didn't help me.)
I've been working on metaethics/CEV research for a couple of months now (publishing mostly prerequisite material) and figured I'd share some of the sources I've been using.
CEV sources.
Motivation. CEV extrapolates human motivations/desires/values/volition. As such, it will help to understand how human motivation works.
Extrapolation. Is it plausible to think that some kind of extrapolation of human motivations will converge on a single motivational set? How would extrapolation work, exactly? (A toy convergence sketch follows this list.)
Metaethics. Should we use CEV, or something else? What does 'should' mean?
Building the utility function. How can a seed AI be built? How can it learn what to value?
Preserving the utility function. How can the motivations we put into a superintelligence be preserved over time and self-modification?
Reflective decision theory. Current decision theories tell us little about software agents that make decisions to modify their own decision-making mechanisms. (A toy self-modification sketch follows this list.)
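To make the extrapolation question concrete, here is a deliberately crude toy model, not anything CEV-like: whether iterated "extrapolation" converges depends entirely on the update rule you assume. The averaging rule and the "stubbornness" anchor below are both invented for illustration.

```python
import random

def spread(values):
    """Max coordinate-wise distance of any agent's values from the mean."""
    n, dims = len(values), len(values[0])
    mean = [sum(v[d] for v in values) / n for d in range(dims)]
    return max(abs(v[d] - mean[d]) for v in values for d in range(dims))

def extrapolate(values, stubbornness, rate=0.2, steps=500):
    """Nudge each agent toward the group mean, but also back toward the
    agent's original values with weight `stubbornness`."""
    anchors = [list(v) for v in values]
    n, dims = len(values), len(values[0])
    for _ in range(steps):
        mean = [sum(v[d] for v in values) / n for d in range(dims)]
        values = [[v[d] + rate * ((1 - stubbornness) * (mean[d] - v[d])
                                  + stubbornness * (a[d] - v[d]))
                   for d in range(dims)]
                  for v, a in zip(values, anchors)]
    return spread(values)

# Ten agents at random positions on three "value dimensions".
agents = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]
print("initial spread:    ", spread(agents))
print("fully deferential: ", extrapolate(agents, stubbornness=0.0))  # ~0
print("half stubborn:     ", extrapolate(agents, stubbornness=0.5))  # stays > 0
```

The point is only that "does it converge?" is a property of the extrapolation dynamic rather than something you get for free: with any anchoring to original values at all, disagreement persists at a fixed point.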
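And for the reflective decision theory item, a minimal sketch of the shape of the problem, under made-up assumptions: an agent whose decision procedure is just mutable state, and which adopts a rewrite only after spot-checking that the successor does at least as well by its current utility function. A real agent would need something like a proof here rather than spot-checks on a handful of cases; that gap is roughly where the open problems live.

```python
def utility(outcome):
    """The value the agent is supposed to preserve across rewrites."""
    return -abs(outcome - 10)

def cautious_policy(options):
    return min(options)  # initial, deliberately bad decision rule

def greedy_policy(options):
    return max(options, key=utility)

class Agent:
    def __init__(self, policy):
        self.policy = policy  # the decision mechanism is just mutable state

    def act(self, options):
        return self.policy(options)

    def consider_rewrite(self, new_policy, test_situations):
        """Adopt new_policy only if it never scores worse than the current
        policy on the test situations, judged by the CURRENT utility."""
        ok = all(utility(new_policy(s)) >= utility(self.policy(s))
                 for s in test_situations)
        if ok:
            self.policy = new_policy
        return ok

agent = Agent(cautious_policy)
tests = [[1, 5, 9], [2, 10, 30], [7, 8, 50]]
print(agent.consider_rewrite(greedy_policy, tests))  # True -> rewrite adopted
print(agent.act([3, 10, 42]))                        # now picks 10
```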
Additional suggestions welcome. I'll try to keep this page up-to-date.