So, the workshop discussion (plus your "Intelligence Explosion" paper) led to three possible approaches:
It seems really hard to differentially push for FAI. For example, I've mostly stopped working on decision theory because it seems to help UFAI as much as FAI. The only safe topics within FAI that I can see are ethics (normative and meta) and meta-philosophy, which are not really things you can throw resources at. I'm much less familiar with WBE, but naively I would expect more opportunities for WBE research that doesn't contribute too much to neuromorphic AI.
Has anyone been working on these questions?
> For example I've mostly stopped working on decision theory because it seems to help UFAI as much as FAI.
I think there are potential avenues of development in decision theory that might help FAI more than UFAI; maybe you should talk to Steve Rayhawk to see if he has any thoughts about this.
Anyway, I praise your prudence, especially as it seems like a real logical possibility that AGI can't be engineered without first solving self-reference and logical uncertainty.
Here is a short new publication from the Singularity Institute on the two-day workshop that followed Singularity Summit 2011.
Note the new publication design. We are currently porting our earlier publications to this template, too.