In the past 10 years, Coefficient Giving (formerly Open Philanthropy) has funded dozens of projects doing important work related to AI safety / navigating transformative AI. And yet, perhaps most activities that would improve expected outcomes from transformative AI have no significant project pushing them forward, let alone multiple projects. This...
Today, Open Philanthropy announced that our Potential Risks from Advanced Artificial Intelligence program will now be called Navigating Transformative AI. Excerpts from the announcement post: "We're making this change to better reflect the full scope of our AI program and address some common misconceptions about our work. While the..."
Cross-post from the EA Forum, a follow-up to "EA needs consultancies." Below is a list of features that make a report on some research question more helpful to me, along with a list of examples. I wrote this post for the benefit of individuals and organizations from whom I might commission reports...
This post adapts some internal notes I wrote for the Open Philanthropy Project, but they are merely at a "brainstorming" stage and do not express my "endorsed" views or the views of the Open Philanthropy Project. This post was also written quickly and is not polished or well explained. My 2017 Report...
Years ago, I wrote an unfinished sequence of posts called "No-Nonsense Metaethics." My last post, Pluralistic Moral Reductionism, said I would next explore "empathic metaethics," but I never got around to writing those posts. Recently, I wrote a high-level summary of some initial thoughts on "empathic metaethics" in section 6.1.2...
(Cross-posted from MIRI's blog. MIRI maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried MIRI staff members.) Thanks to the generosity of several major donors,† every donation made to MIRI between now and August 15th, 2014 will be matched dollar-for-dollar,...
Cross-posted from my blog. Yudkowsky writes: "In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, 'After event W happens, everyone will see the truth of proposition X, leading them to endorse...'"