blogospheroid comments on The Fundamental Question - Less Wrong

Post author: MBlume 19 April 2010 04:09PM 43 points


Comment author: PeerInfinity 20 April 2010 04:36:01PM 18 points

What am I doing?: Working at a regular job as a C++ programmer, and donating as much as possible to SIAI. And sometimes doing other useful things in my spare time.

Why am I doing it?: Because I want to make lots of money to pay for Friendly AI and existential risk research, and programming is what I'm good at.

Why do I want this?: Well, to be honest, the original reason, from several years ago, was "Because Eliezer told me to". Since then I've internalized most of Eliezer's reasons for recommending this, but this process still seems kinda backwards.

I guess the next question is "Why did I originally choose to follow Eliezer?": I started following him back when he still believed in the most basic form of utilitarianism: maximize pleasure and minimize pain; don't bother keeping track of which entity is experiencing the pleasure or pain. Even back then, Eliezer wasn't certain that this was the value system he really wanted, but for me it seemed to perfectly fit my own values. And even after years of thinking about these topics, I still haven't found any other system that more closely matches what I actually believe. Not even Eliezer's current value system. And yes, I am aware that my value system means that an orgasmium shockwave is the best possible scenario for the future. And I still haven't found any logically consistent reason why I should consider that a bad thing, other than "but other people don't want that". I'm still very conflicted about this.

(off-topic: oh, and SPOILER: I found the "True Ending" to Three Worlds Collide severely disturbing. Destroying a whole planet full of people, just to KEEP the human ability to feel pain??? oh, and some other minor human values, which the superhappies made very clear were merely minor aesthetic preferences. That... really shook my "faith" in Eliezer's values...)

Anyway, the reason why I started following Eliezer was that even back then, he seemed like one of the smartest people on the planet, and he had a mission that I strongly believed in, and he was seriously working towards this mission, with more dedication than I had seen in anyone else. And he was seeking followers, though he made it very clear that he wasn't seeking followers in the traditional sense, but was seeking people to help him with his mission who were capable of thinking for themselves. And at the time I desperately wanted a belief system that was better than the only other one I knew of, which was christianity. And so I basically, um... converted directly from christianity to Singularitarianism. (yes, that's deliberate noncapitalization. somehow capitalizing the word "christianity" just feels wrong...)

And now the next question: "Why am I still following Eliezer?": Basically, because I still haven't found anyone to follow who I like better than Eliezer. And I don't dare to try to start my own competing branch of Singularitarianism, staying true to Eliezer's original vision, despite his repeated warnings why this would be a bad idea... Though, um... if anyone else is interested in the idea... please contact me... preferably privately.

Another question is "What other options are worth considering?": Even if I do decide that it would be a good idea to stop following Eliezer, I definitely don't plan to stop being a transhumanist, and whatever I become instead will still be close enough to Singularitarianism that I might as well continue calling it Singularitarianism. And reducing existential risks would still be my main priority. So far the only reasons I know of to stop giving most of my income to SIAI are that maybe their mission to create Friendly AI really is hopeless, and maybe there's something else I should be doing instead. Or maybe I should be splitting my donations between SIAI and someplace else. But where? The Oxford Future of Humanity Institute? The Foresight Institute? The Lifeboat Foundation? No, definitely not the Venus Project or the Zeitgeist Movement. A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks. Looking at the list of projects they're currently working on, this does sound plausible, but somehow it still feels like a bad idea to give all of the money I can spare exclusively to SIAI.

Actually, there is one other place I plan to donate to, even if SIAI says that I should donate exclusively to SIAI. Armchair Revolutionary is awesome++. Everyone reading this who has any interest at all in having a positive effect on the future, please check out their website right now, and sign up for the beta. I'm having trouble describing it without triggering a reaction of aversion to cliches, or "this sounds too good to be true", but... ok, I won't worry about sounding cliched: They're harnessing the addictive power of social games, where you earn points, and badges, and stuff, to have a significant, positive impact on the future. They have a system that makes it easy, and possibly fun, to earn points by donating small amounts (99 cents) to one or more of several projects, or by helping in other ways: taking quizzes, doing some simple research, writing an email, making a phone call, uploading artwork, and more. And the system of limiting donations to 99 cents, and limiting it to one donation per person per project, provides a way to not feel guilty about not donating more. Personally, I find this extremely helpful. I can easily afford to donate the full amount to all of these projects, and spend some time on the other things I can do to earn points, and still have plenty of money and time left over to donate to SIAI. Oh, and so far it looks like donating small amounts to a wide variety of projects generates more warm fuzzies than donating large amounts to a single project. I like that.

It would be awesome if SIAI or LW or some of the other existential-risk-reducing groups could become partners of ArmRev, and get their own projects added to the list. Someone get on this ASAP. (What's that you say? Don't say "someone should", say "I will"? Ok, fine, I'll add it to my to-do list, with all of that other stuff that's really important but I don't feel at all qualified to do. But please, I would really appreciate if someone else could help with this, or take charge of this. Preferably someone who's actually in charge at SIAI, or LW, or one of the other groups)

Anyway, there's probably lots more I could write on these topics, but I guess I had better stop writing now. This post is already long enough.

Comment author: blogospheroid 21 April 2010 06:51:42AM 3 points

The way the world is right now, and the incentives of the people in power under the current structure, do not seem optimal to me.

There are so many obvious things that could be done that are not being done right now, e.g. competition in the space of governments. Proposing solutions to many of the world's present problems does not require a superintelligence; economists do that every day. But untangling the entire mess of incentives, power, and leverage so that these formerly simple, but now complicated, solutions could be implemented requires a superintelligence.

This superintelligence needs to be benevolent today and tomorrow. I have not found a better goal structure than CEV for maintaining this benevolence. SingInst has openly written that they are open to better goal systems. If I find something better, I will move my support there.

Comment author: PeerInfinity 21 April 2010 04:52:32PM 2 points

I agree.

I'm aware that there are problems with CEV (mainly: we're probably not going to have enough time to figure out how to actually implement it before the Singularity, and CEV extrapolates only the volition of humanity, which means that there may be a risk of the CEV allowing arbitrary amounts of cruelty to entities that don't qualify as "human").

Anyway, despite these problems, I still don't know of any better plan than CEV.

Because of the extreme difficulty of actually implementing CEV, I am tempted to advocate the backup plan of coding a purely Utilitarian AI, maximizing pleasure and minimizing pain. An orgasmium shockwave is better than a lifeless universe. The idea would be to not release this AI unless it looks like we're running out of time to implement CEV, but if we are running out of time, then we're not likely to get much warning. And then there's the complication that according to my current belief system (which I'm still very conflicted about) the orgasmium shockwave scenario is actually better than the CEV scenario, since it would result in greater total utility. But I'm nowhere near confident enough about this to actually advocate the plan of deliberately releasing a pure Utilitarian AI. And this plan has its own dangers, like... shudder... what if we get the utility formula wrong?
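
A rough way to write this value system down as a formula (just a sketch, not any official formulation):

$$U = \sum_{m \in M} \bigl(\mathrm{pleasure}(m) - \mathrm{pain}(m)\bigr)$$

where M is the set of all experience-moments in the future, with no term for which entity has each experience. Orgasmium maximizes this sum by converting all available matter into maximally pleasurable experience-moments, which is why, judged by this formula alone, it beats any CEV outcome. And "getting the utility formula wrong" just means mis-specifying pleasure(m) or pain(m), which a strong optimizer would then exploit to the fullest.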

Oh, and one random idea I had to make CEV easier to actually implement: remove the restriction of the CEV not being allowed to simulate sentient minds. Just try to make sure that these sentient minds have at least a minimum standard of living. Or, if that's too hard, and you somehow need to simulate minds that are actually suffering, you could save a backup copy of them, rather than deleting them, and after the CEV has finished applying its emergency first-aid to the human condition, you can reawaken these simulated minds and give them full rights as citizens. There should be more than enough resources available in the universe for these minds to live happy, fulfilling lives. They might even be considered heroes, who endured a few moments of discomfort, and existential confusion, in order to help bring about a positive post-Singularity future.

But it still somehow feels wrong for me to suggest a plan that involves the suffering of others. If it makes anyone feel any better about this suggestion, then I, personally, volunteer to experience a playback of a recording of all of the unpleasant experiences that these simulated minds have experienced while the CEV was busy doing its thing. There, now I'm not heartlessly advocating a plan that involves the suffering of others but no harm to myself. And I'm expecting that the amount of this suffering would be small enough that the amount of pleasure I could experience in the rest of my life, after I'm finished experiencing this playback, would vastly outweigh the suffering. It would be nice if there were some way to guarantee this, but that would make the system more complicated, and the whole point of all this was to make the system less complicated.

Comment author: PhilGoetz 23 April 2010 07:28:30PM 2 points

CEV is too vague to call a plan. It bothers me that people are dedicating themselves to pursuing a goal that hasn't yet been defined.

Comment author: Strange7 27 April 2010 02:53:54AM 1 point

That was part of my motivation for proposing an alternative.