Jack comments on The Fundamental Question - Less Wrong
What am I doing?: Working at a regular job as a C++ programmer, and donating as much as possible to SIAI. And sometimes doing other useful things in my spare time.
Why am I doing it?: Because I want to make lots of money to pay for Friendly AI and existential risk research, and programming is what I'm good at.
Why do I want this?: Well, to be honest, the original reason, from several years ago, was "Because Eliezer told me to". Since then I've internalized most of Eliezer's reasons for recommending this, but this process still seems kinda backwards.
I guess the next question is "Why did I originally choose to follow Eliezer?": I started following him back when he still believed in the most basic form of utilitarianism: Maximize pleasure and minimize pain, don't bother keeping track of which entity is experiencing the pleasure or pain. Even back then, Eliezer wasn't certain that this was the value system he really wanted, but for me it seemed to perfectly fit my own values. And even after years of thinking about these topics, I still haven't found any other system that more closely matches what I actually believe. Not even Eliezer's current value system. And yes, I am aware that my value system means that an orgasmium shockwave is the best possible scenario for the future. And I still haven't found any logically consistent reason why I should consider that a bad thing, other than "but other people don't want that". I'm still very conflicted about this.
(off-topic: oh, and SPOILER: I found the "True Ending" to Three Worlds Collide severely disturbing. Destroying a whole planet full of people, just to KEEP the human ability to feel pain??? oh, and some other minor human values, which the superhappies made very clear were merely minor aesthetic preferences. That... really shook my "faith" in Eliezer's values...)
Anyway, the reason why I started following Eliezer was that even back then, he seemed like one of the smartest people on the planet, and he had a mission that I strongly believed in, and he was seriously working towards this mission, with more dedication than I had seen in anyone else. And he was seeking followers, though he made it very clear that he wasn't seeking followers in the traditional sense, but was seeking people to help him with his mission who were capable of thinking for themselves. And at the time I desperately wanted a belief system that was better than the only other belief system I knew of at the time, which was christianity. And so I basically, um... converted directly from christianity to Singularitarianism. (yes, that's deliberate noncapitalization. somehow capitalizing the word "christianity" just feels wrong...)
And now the next question: "Why am I still following Eliezer?": Basically, because I still haven't found anyone to follow who I like better than Eliezer. And I don't dare try to start my own competing branch of Singularitarianism, staying true to Eliezer's original vision, despite his repeated warnings about why this would be a bad idea... Though, um... if anyone else is interested in the idea... please contact me... preferably privately.
Another question is "What other options are worth considering?": Even if I do decide that it would be a good idea to stop following Eliezer, I definitely don't plan to stop being a transhumanist, and whatever I become instead will still be close enough to Singularitarianism that I might as well continue calling it Singularitarianism. And reducing existential risks would still be my main priority. So far the only reasons I know of to stop giving most of my income to SIAI are that maybe their mission to create Friendly AI really is hopeless, and maybe there's something else I should be doing instead. Or maybe I should be splitting my donations between SIAI and someplace else. But where? The Oxford Future of Humanity Institute? The Foresight Institute? The Lifeboat Foundation? No, definitely not the Venus Project or the Zeitgeist Movement. A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks. Looking at the list of projects they're currently working on, this does sound plausible, but somehow it still feels like a bad idea to give all of the money I can spare exclusively to SIAI.
Actually, there is one other place I plan to donate to, even if SIAI says that I should donate exclusively to SIAI. Armchair Revolutionary is awesome++. Everyone reading this who has any interest at all in having a positive effect on the future, please check out their website right now, and sign up for the beta. I'm having trouble describing it without triggering a reaction of aversion to cliches, or "this sounds too good to be true", but... ok, I won't worry about sounding cliched: They're harnessing the addictive power of social games, where you earn points, and badges, and stuff, to have a significant, positive impact on the future. They have a system that makes it easy, and possibly fun, to earn points by donating small amounts (99 cents) to one or more of several projects, or by helping in other ways: taking quizzes, doing some simple research, writing an email, making a phone call, uploading artwork, and more. And the system of limiting donations to 99 cents, and limiting it to one donation per person per project, provides a way to not feel guilty about not donating more. Personally, I find this extremely helpful. I can easily afford to donate the full amount to all of these projects, and spend some time on the other things I can do to earn points, and still have plenty of money and time left over to donate to SIAI. Oh, and so far it looks like donating small amounts to a wide variety of projects generates more warm fuzzies than donating large amounts to a single project. I like that.
It would be awesome if SIAI or LW or some of the other existential-risk-reducing groups could become partners of ArmRev, and get their own projects added to the list. Someone get on this ASAP. (What's that you say? Don't say "someone should", say "I will"? Ok, fine, I'll add it to my to-do list, with all of that other stuff that's really important but I don't feel at all qualified to do. But please, I would really appreciate if someone else could help with this, or take charge of this. Preferably someone who's actually in charge at SIAI, or LW, or one of the other groups)
Anyway, there's probably lots more I could write on these topics, but I guess I had better stop writing now. This post is already long enough.
Nothing at all against SIAI but
If you're in doubt and seeking expert advice you should pick an expert that lacks really obvious institutional incentives to give one answer over others.
Regarding the rest of the comment, I found it kind of weird, and something freaked me out about it, though I'm not sure quite what. That doesn't mean you're doing anything wrong; I might just have biases or assumptions that make what you're doing seem weird to me. I think it has something to do with your lack of skepticism or cynicism, and the focus on looking for someone to follow that MatthewB mentioned. I guess your comment pattern-matches to things a very religious person would say. I'm just not sure if that means you're doing something wrong, or if I'm having an adverse reaction to a reasonable set of behaviors because I have irrationally averse reactions to things that look religious.
Yeah, I realized that it was silly for me to ask SIAI what they thought about the idea of giving SIAI less money, but I didn't know who else to ask, and I still didn't have enough confidence in my own sanity to try to make this decision on my own. And I was kinda hoping that the people at SIAI were rational enough to give an accurate and reasonably unbiased answer, despite the institutional incentives. SIAI has a very real and very important mission, and I would have hoped that its members would be able to rationally think about what is best for the mission, rather than what is best for the group. And the possibility remains that they did, in fact, give a rational and mostly unbiased answer.
The answer they gave was that donating exclusively to SIAI was the most leveraged way to reduce existential risks. Yes, there are other groups that are doing important work, but SIAI is more critically underfunded than they are, and the projects that we (yes, I said "we", even though I'm "just" a donor) are working on this year are critical for figuring out the optimal strategies for humanity/transhumanity to maximize its probability of surviving into a post-Singularity future.
heh, one of these projects they're finally getting around to working on this year is writing a research paper examining how much existential-risk reduction you get for each dollar donated to SIAI. That's something I've really been wanting to know, and had actually been feeling kinda guilty about not making more of an effort to try to figure out on my own, or at least to try to estimate to within a few orders of magnitude. And I had also been really annoyed that no one more qualified than me had already done this. But now they're finally working on it. yay :)
Someone from SIAI, please correct me if I'm wrong about any of this.
And yes, my original comment seemed weird to me too, and kinda freaked me out. But I think it would have been a bad idea to deliberately avoid saying it, just because it sounds weird. If what I'm doing is a bad idea, then I need to figure this out, and find what I should be doing instead. And posting comments like this might help with that. Anyway, I realize that my way of thinking sounds weird to most people, and I don't make any claim that this is a healthy way to think, and I'm working on fixing this.
And as I mentioned in another comment, it would just feel wrong to deliberately not say this stuff, just because it sounds weird and might make SIAI look bad. But that kind of thinking belongs to the Dark Arts, and is probably just a bad habit I had left over from christianity, and isn't something that SIAI actually endorses, afaik.
And I do, in fact, have lots of skepticism and cynicism about SIAI, and their mission, and the people involved. This skepticism probably would have caused me to abandon them and their mission long ago... if I had had somewhere better to go instead, or a more important mission. But after years of looking, I haven't found any cause more important than existential risk reduction, and I haven't found any group working towards this cause more effectively than SIAI, except possibly for some of the other groups I mentioned, but a preliminary analysis shows that they're not actually doing any better than SIAI. And starting my own group still looks like a really silly idea.
And yes, I'm aware that I still seem to talk and think like a religious person. I was raised as a christian, and I took christianity very seriously. Seriously enough to realize that it was no good, and that I needed to get out. And so I tried to replace my religious fanaticism with what's supposed to be an entirely non-religious and non-fanatical cause, but I still tend to think and act both religiously and fanatically. I'm working on that.
I also have an averse reaction to things that look religious. This is one of the many things causing me to have trouble with self-hatred. Anyway, I'm working on that.
Oh, and one more comment about cynicism: I currently think that 1% is an optimistic estimate of the probability that humanity/transhumanity will survive into a positive post-Singularity future, but it's been a while since I reviewed why I believe this. Another thing to add to my to-do list.