What is the practical value (e.g., predicted impact) of the Less Wrong website (and similar public communication regarding rationality) with respect to FAI and/or existential risk outcomes?
(E.g., is there an outreach objective? If so, for what purpose?)
To what extent does the success of your FAI project depend on the reliability of the dominant paradigm in Evolutionary Psychology (à la Tooby & Cosmides)?
An old, perhaps off-the-cuff, and possibly outdated quote (9/4/02): "well, the AI theory assumes evolutionary psychology and the FAI theory definitely assumes evolutionary psychology" (http://www.imminst.org/forum/lofiversion/index.php/t144.html).
Thanks for all your hard work.
Someone with whom establishing a connection might make the difference in getting them to appear at a future Singularity Summit. Also, someone with whom an association would enhance your credibility.
Do you think a cognitive-psychology research program on "moral biases" might be helpful (e.g., for existential risk reduction)?
[The conceptual framework I am working on (philosophy dissertation) targets a prevention-amenable form of "moral error" that requires (a) the perpetrating agent's acceptance of the assessment of moral erroneousness (i.e., individual relativism, to avoid categoricity problems), and (b) that the agent, for moral reasons, would not have committed the error had he been aware of its erroneousness (i.e., sufficiently motivating, vs. moral indifference, laziness, and/or akrasia).]