It has been claimed on this site that the fundamental question of rationality is "What do you believe, and why do you believe it?"
A good question it is, but I claim there is another of equal importance. I ask you, Less Wrong...
What are you doing?
And why are you doing it?
What am I doing?: Working at a regular job as a C++ programmer, and donating as much as possible to SIAI. And sometimes doing other useful things in my spare time.
Why am I doing it?: Because I want to make lots of money to pay for Friendly AI and existential risk research, and programming is what I'm good at.
Why do I want this?: Well, to be honest, the original reason, from several years ago, was "Because Eliezer told me to". Since then I've internalized most of Eliezer's reasons for recommending this, but this process still seems kinda backwards.
I guess the next question is "Why did I originally choose to follow Eliezer?": I started following him back when he still believed in the most basic form of utilitarianism: maximize pleasure and minimize pain, without keeping track of which entity is experiencing the pleasure or pain. Even back then, Eliezer wasn't certain that this was the value system he really wanted, but to me it seemed to fit my own values perfectly. And even after years of thinking about these topics, I still haven't found any other system that more closely matches what I actually believe. Not even Eliezer's current value system. And yes, I am aware that my value system means that an orgasmium shockwave is the best possible scenario for the future. And I still haven't found any logically consistent reason why I should consider that a bad thing, other than "but other people don't want that". I'm still very conflicted about this.
(off-topic: oh, and SPOILER: I found the "True Ending" to Three Worlds Collide severely disturbing. Destroying a whole planet full of people, just to KEEP the human ability to feel pain??? oh, and some other minor human values, which the superhappies made very clear were merely minor aesthetic preferences. That... really shook my "faith" in Eliezer's values...)
Anyway, the reason I started following Eliezer was that even back then, he seemed like one of the smartest people on the planet, he had a mission that I strongly believed in, and he was seriously working towards that mission with more dedication than I had seen in anyone else. And he was seeking followers, though he made it very clear that he wasn't seeking followers in the traditional sense, but people to help him with his mission who were capable of thinking for themselves. And I desperately wanted a belief system better than the only other one I knew of at the time, which was christianity. And so I basically, um... converted directly from christianity to Singularitarianism. (yes, that's deliberate noncapitalization. somehow capitalizing the word "christianity" just feels wrong...)
And now the next question: "Why am I still following Eliezer?": Basically, because I still haven't found anyone to follow who I like better than Eliezer. And I don't dare to try to start my own competing branch of Singularitarianism, staying true to Eliezer's original vision, despite his repeated warnings about why this would be a bad idea... Though, um... if anyone else is interested in the idea... please contact me... preferably privately.
Another question is "What other options are worth considering?": Even if I do decide that it would be a good idea to stop following Eliezer, I definitely don't plan to stop being a transhumanist, and whatever I become instead will still be close enough to Singularitarianism that I might as well continue calling it Singularitarianism. And reducing existential risks would still be my main priority. So far the only reasons I know of to stop giving most of my income to SIAI are that maybe their mission to create Friendly AI really is hopeless, and maybe there's something else I should be doing instead. Or maybe I should be splitting my donations between SIAI and someplace else. But where? The Oxford Future of Humanity Institute? The Foresight Institute? The Lifeboat Foundation? No, definitely not the Venus Project or the Zeitgeist Movement. A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks. Looking at the list of projects they're currently working on, this does sound plausible, but somehow it still feels like a bad idea to give all of the money I can spare exclusively to SIAI.
Actually, there is one other place I plan to donate to, even if SIAI says that I should donate exclusively to SIAI. Armchair Revolutionary is awesome++. Everyone reading this who has any interest at all in having a positive effect on the future, please check out their website right now, and sign up for the beta. I'm having trouble describing it without triggering a reaction of aversion to cliches, or "this sounds too good to be true", but... ok, I won't worry about sounding cliched: They're harnessing the addictive power of social games, where you earn points, and badges, and stuff, to have a significant, positive impact on the future. They have a system that makes it easy, and possibly fun, to earn points by donating small amounts (99 cents) to one or more of several projects, or by helping in other ways: taking quizzes, doing some simple research, writing an email, making a phone call, uploading artwork, and more. And the system of limiting donations to 99 cents, and limiting it to one donation per person per project, provides a way to not feel guilty about not donating more. Personally, I find this extremely helpful. I can easily afford to donate the full amount to all of these projects, and spend some time on the other things I can do to earn points, and still have plenty of money and time left over to donate to SIAI. Oh, and so far it looks like donating small amounts to a wide variety of projects generates more warm fuzzies than donating large amounts to a single project. I like that.
It would be awesome if SIAI or LW or some of the other existential-risk-reducing groups could become partners of ArmRev and get their own projects added to the list. Someone get on this ASAP. (What's that you say? Don't say "someone should", say "I will"? Ok, fine, I'll add it to my to-do list, with all of that other stuff that's really important but that I don't feel at all qualified to do. But please, I would really appreciate it if someone else could help with this, or take charge of it. Preferably someone who's actually in charge at SIAI, or LW, or one of the other groups.)
Anyway, there's probably lots more I could write on these topics, but I guess I had better stop writing now. This post is already long enough.
If you were Bill Gates, that might be a valid concern. (The "exclusively" part, not the "SIAI" part.)
Otherwise, it's most efficient to donate to just one cause, especially if you itemize deductions.
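To make the one-cause argument concrete, here's a minimal sketch with made-up numbers (the charities and impact-per-dollar figures are purely hypothetical): as long as your donation is too small to shift a charity's marginal effectiveness, splitting always does worse than giving everything to whichever option you estimate has the highest marginal impact per dollar.

```python
# Minimal sketch with hypothetical numbers: why a small donor concentrating
# on one charity beats splitting, assuming each charity's marginal impact
# per dollar stays roughly constant at small-donor scale.

donation = 1000.0                          # total yearly budget, in dollars
impact_per_dollar = {"A": 3.0, "B": 2.0}   # hypothetical marginal impact estimates

# Splitting 50/50 between the two charities:
split = (0.5 * donation * impact_per_dollar["A"]
         + 0.5 * donation * impact_per_dollar["B"])

# Giving everything to the charity with the higher estimated marginal impact:
concentrated = donation * max(impact_per_dollar.values())

print(split)         # 2500.0
print(concentrated)  # 3000.0 -- concentrating wins whenever the estimates differ
```

This ignores diminishing returns, which, as noted above, only start to matter at Bill-Gates-scale donations.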