[Added 02/24/14: After writing this post, I discovered that I had miscommunicated because I had not spelled out my thinking in sufficient detail, and also realized that the post carried unnecessary negative connotations (despite conscious effort on my part to avoid them). See Reflections on a Personal Public Relations Failure: A Lesson in Communication. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]
Follow-up to: Existential Risk and Public Relations, Other Existential Risks, The Importance of Self-Doubt
Over the last few days I've made a string of posts levying strong criticisms against SIAI. This activity is not one that comes naturally to me. In The Trouble With Physics, Lee Smolin writes:
My feelings about and criticisms of SIAI are very much analogous to Smolin's feelings about and criticisms of string theory. Criticism hurts feelings, and I feel squeamish about hurting feelings. I've found the process of presenting my criticisms of SIAI emotionally taxing and exhausting. I fear that if I persist for too long I'll move into the region of negative returns. For this reason I've decided to cut my planned sequence of posts short and explain what my goal has been in posting as I have.
Edit: Removed irrelevant references to VillageReach and StopTB, modifying post accordingly.
As Robin Hanson never ceases to emphasize, there's a disconnect between what humans say they're trying to do and what their revealed goals are. Yvain has written about this topic recently in his post Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model. This problem becomes especially acute in the domain of philanthropy. Three quotes on this point:
(1) In Public Choice and the Altruist's Burden Roko says:
(2) In My Donation for 2009 (guest post from Dario Amodei) Dario says:
(3) In private correspondence about career choice, Holden Karnofsky said:
I believe that the points that Robin, Yvain, Roko, Dario and Holden have made provide a compelling case for the idea that charities should strive toward transparency and accountability. As Richard Feynman has said:
Because it's harder to fool others than it is to fool oneself, I think that the case for making charities transparent and accountable is very strong.
SIAI does not presently exhibit high levels of transparency and accountability. I agree with what I interpret to be Dario's point above: that in evaluating charities which are not transparent and accountable, we should assume the worst. For this reason, together with the concerns that I express in Existential Risk and Public Relations, I believe that saving money in a donor-advised fund with a view toward donating to a transparent and accountable future existential risk organization has higher expected value than donating to SIAI now does.
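To make the expected-value comparison concrete, here is a minimal sketch in Python. Every number in it (probabilities, growth rate, waiting period) is a hypothetical assumption of mine for illustration, not a figure from GiveWell, SIAI, or the discussion above:

```python
# Hypothetical expected-value comparison: donate now to a
# non-transparent charity vs. park money in a donor-advised fund
# and donate later to a transparent existential risk organization.
# All numbers below are illustrative assumptions, not real estimates.

DONATION = 1000.0  # dollars

# "Assume the worst" about a non-transparent charity: assign a low
# probability that the marginal dollar funds effective work.
p_effective_now = 0.05
value_per_effective_dollar = 1.0  # normalized units of risk reduction

ev_donate_now = DONATION * p_effective_now * value_per_effective_dollar

# Donor-advised fund: the money grows while we wait for a transparent,
# accountable existential risk charity to emerge.
years_waiting = 5
annual_growth = 1.03           # modest investment return
p_transparent_org_emerges = 0.5
p_effective_later = 0.5        # transparency raises the odds of effectiveness

ev_save_and_wait = (
    DONATION
    * annual_growth ** years_waiting
    * p_transparent_org_emerges
    * p_effective_later
    * value_per_effective_dollar
)

print(f"EV of donating now:       {ev_donate_now:.1f}")
print(f"EV of saving and waiting: {ev_save_and_wait:.1f}")
```

On these made-up numbers the fund wins by a wide margin, but the point of the sketch is only to show the structure of the argument: the conclusion turns on how far "assume the worst" depresses p_effective_now relative to the prospects of a transparent future organization.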
Because I take astronomical waste seriously and believe in shutting up and multiplying, I believe that reducing existential risk is ultimately more important than developing world aid. I would very much like it if there were a highly credible existential risk charity. At present, I do not feel that SIAI is a credible existential risk charity. One LW poster sent me a private message saying:
I do not believe that Eliezer is consciously attempting to run a scam to live off of donations, but I believe that (like all humans) he is subject to subconscious influences which may lead him to act as though he were consciously running a scam to live off of the donations of nonconformists. In light of Hanson's points, it would not be surprising if this were the case. The very fact that I received such a message is a sign that SIAI has public relations problems.
I encourage LW posters who find this post compelling to visit and read the materials available at GiveWell, which is, as far as I know, the only charity evaluator that places a high emphasis on impact, transparency, and accountability. I encourage LW posters who are interested in existential risk to contact GiveWell and express interest in GiveWell evaluating existential risk charities. I would also note that LW posters who are interested in finding transparent and accountable organizations may find it useful to donate to GiveWell's recommended charities as a way of signaling seriousness to the GiveWell staff.
I encourage SIAI to strive toward greater transparency and accountability. For starters, I would encourage SIAI to follow the example set by GiveWell and put a page on its website called "Mistakes" publicly acknowledging its past errors. I'll also note that GiveWell incentivizes charities to disclose failures by granting a 1-star rating to charities that do so. As Elie Hassenfeld explains:
I believe that the fate of humanity depends on the existence of transparent and accountable organizations. This is both because I believe that transparent and accountable organizations are more effective and because I believe that people are more willing to give to them. As Holden says:
I believe that at present the most effective way to reduce existential risk is to work toward the existence of a transparent and accountable existential risk organization.
Added 08/23: