This is a very personal account of the thoughts and events that have led me to an interesting point in my life. Please read it as such. I present a lot of points, arguments, and conclusions, but that's not what this is about.

I started reading LW around spring of 2010. I was at the rationality minicamp last summer (2011). The night of February 10, 2012, all the rationality learning and practice finally caught up with me. Like water that had been building up behind a dam, it finally broke through and flooded my poor brain.

"What if the Bayesian Conspiracy is real?" (By Bayesian Conspiracy I just mean a secret group that operates within and around LW and SIAI.) That is the question that set it all in motion. "Perhaps they left clues for those that are smart enough to see it. And to see those clues, you would actually have to understand and apply everything that they are trying to teach." The chain of thoughts that followed (conspiracies within conspiracies, shadow governments and Illuminati) it too ridiculous to want to repeat, but it all ended up with one simple question: How do I find out for sure? And that's when I realized that almost all the information I have has been accepted without as much as an ounce of verification. So little of my knowledge has been tested in the real world. In that moment I achieved a sort of enlightenment: I realized I don't know anything. I felt a dire urge to regress to the very basic questions: "What is real? What is true?" And then I laughed, because that's exactly where The Sequences start.

Through the turmoil of jumbled and confused thoughts came the shock of my most valuable belief propagating through my mind, breaking down the final barriers, reaching its logical conclusion. FAI is the most important thing we should be working on right now! I already knew that. In fact, I had known it for a long time, but I didn't... what? Feel it? Accept it? Visualize it? Understand the consequences? I think I didn't let that belief propagate to its natural conclusion: I should be doing something to help this cause.

I can't say: "It's the most important thing, but..." Yet, I've said it so many times inside my head. It's like hearing other people say: "Yes, X is the rational thing to do, but..." What follows is a defense that allows them to keep the path to their goal that they are comfortable with, that they are already invested in.

Interestingly enough, I've already thought about this. Right after the rationality minicamp, I asked myself the question: Should I switch to working on FAI, or should I continue to make games? I thought about it hard for some time, but I felt like I lacked the necessary math skills to be of much use on the FAI front. Making games was the convenient answer. It's something I've been doing for a long time, and it's something I am good at. I decided to make games that explain various ideas that LW presents in text. This way I could help raise the sanity waterline. It seemed like a very nice, neat solution that allowed me to do what I wanted and feel a bit helpful to the FAI cause.

Looking back, I was dishonest with myself. In my mind, I had already written the answer I wanted. I convinced myself that I hadn't, but part of me certainly sabotaged the whole process. But that's okay, because I was still somewhat helpful, even if maybe not in the most optimal way. Right? Right?? The correct answer is "no". So, now I have to ask myself again: What is the best path for me? And to answer that, I have to understand what my goal is.

Rationality doesn't just help you get what you want better/faster. Increased rationality starts to change what you want. Maybe you wanted the air to be clean, so you bought a hybrid. Sweet. But then you realized that what you actually want is for people to be healthy. So you became a nurse. That's nice. Then you realized that if you did research, you could be making an order of magnitude more people healthier. So you went into research. Cool. Then you realized that you could pay for multiple researchers if you had enough money. So you went out, became a billionaire, and created your own research institute. Great. There was always you, and there was your goal, but everything in between was (and should be) up for grabs.

And if you follow that kind of chain long enough, at some point you realize that FAI is actually the thing right before your goal. Why wouldn't it be? It solves everything in the best possible way!

People joke that LW is a cult. Everyone kind of laughs it off. It's funny because cultists are weird and crazy, but they are so sure they are right. LWers are kind of like that. Unlike other cults, though, we are really, truly right. Right? But, honestly, I like the term, and I think it has a ring of truth to it. Cultists have a goal that's beyond them. We do too. My life isn't about my preferences (I can change those), it's about my goals. I can change those too, of course, but if I'm rational (and nice) about it, I feel that it's hard not to end up wanting to help other people.

Okay, so I need a goal. Let's start from the beginning:

What is truth?

Reality is truth. It's what happens. It's the rules that dictate what happens. It's the invisible territory. It's the thing that makes you feel surprised.

(Okay, great, I won't have to go back to reading Greek philosophy.)

How do we discover truth?

So far, the best method has been the scientific method. It has also proved itself over and over again by producing actual, tangible results.

(Fantastic, I won't have to reinvent the thousands of years of progress.)

Soon enough humans will commit a fatal mistake.

This isn't a question, it's an observation. Technology is advancing on all fronts to the point where it can be used on a planetary (and wider) scale. Humans make mistakes. Making a mistake with something that affects the whole world could result in an injury or death... for the planet (and potentially beyond).

That's bad.

To be honest, I don't have a strong visceral negative feeling associated with all humans becoming extinct. It doesn't feel that bad, but then again I know better than to trust my feelings on such a scale. However, if I had to simply push a button to make one person's life significantly better, I would do it. And I would keep pushing that button for each new person. For something like 222 years, by my rough calculations. Okay, then. Humanity injuring or killing itself would be bad, and I can probably spend a century or so trying to prevent that, while also doing something that's a lot more fun than mashing a button.
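(For the curious, here is a minimal sketch of how a figure like that "222 years" can be reconstructed. The one-press-per-second rate and the ~7 billion population are my assumptions for illustration, not numbers stated above.)

    # Rough reconstruction of the "222 years" estimate. Assumptions (not stated
    # above): one button press per second, one press per person, and a world
    # population of roughly 7 billion.
    SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60   # ~31.6 million seconds
    POPULATION = 7_000_000_000                 # ~7 billion people

    years_of_button_mashing = POPULATION / SECONDS_PER_YEAR
    print(round(years_of_button_mashing))      # -> 222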

We need a smart safety net.

Not only smart enough to know that triggering an atomic bomb inside a city is bad, or that you get the grandma out of a burning building by teleporting her in one piece to a safe spot, but also smart enough to know that if I keep snoozing every day for an hour or two, I'd rather someone stepped in and stopped me, no matter how much I want to sleep JUST FIVE MORE MINUTES. It's something I might actively fight, but it's something that I'll be grateful for later.

FAI

There it is: the ultimate safety net. Let's get to it?

Having FAI will be very, very good, that's clear enough. Getting FAI wrong will be very, very bad. But there are different levels of bad, and, frankly, a universe tiled with paperclips is actually not that high on the list. Having an AI that treats humans as special objects is very dangerous. An AI that doesn't care about humans will not do anything to humans specifically. It might borrow a molecule, or an arm or two from our bodies, but that's okay. An AI that treats humans as special, yet is not Friendly, could be very bad. Imagine 3^^^3 different people being created and forced to live really horrible lives. It's hell on a whole other level. So, if FAI goes wrong, pure destruction of all humans is a pretty good scenario.

Should we even be working on FAI? What are the chances we'll get it right? (I remember Anna Salamon's comparison: "getting FAI right" is like "trying to make the first atomic bomb explode in the shape of an elephant" would have been a century ago.) What are the chances we'll get it horribly wrong and end up in hell? By working on FAI, how are we changing the probability distribution over the various outcomes? Perhaps a better alternative is to seek a decisive advantage like brain uploading, where a few key people can take a century or so to think the problem through?

I keep thinking about FAI going horribly wrong, and I want to scream at the people who are involved with it: "Do you even know what you are doing?!" Everything is at stake! And suddenly I care. Really care. There is curiosity, yes, but it's so much more than that. At LW minicamp we compared curiosity to a cat chasing a mouse. It's a kind of fun, playful feeling. I think we got it wrong. The real curiosity feels like hunger. The cat isn't chasing the mouse to play with it; it's chasing it to eat it because it needs to survive. Me? I need to know the right answer.

I finally understand why SIAI isn't focusing very hard on the actual AI part right now, but is instead pouring most of their efforts into recruiting talent. The next 50-100 years is going to be a marathon for our lives. Many participants might not make it to the finish line. It's important that we establish a community that can continue to carry the research forward until we succeed.

I finally understand why, when I was talking with Carl Shulman about making games that help people be more rational, his value metric was how many academics the games could impact or recruit. That didn't make sense to me. I just wanted to raise the sanity waterline for people in general. I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. Another is to make certain key people a bit more sane, hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.

I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of maybe tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.

I've realized a lot of things lately. A lot of things have been shaken up. It has been a very stressful couple of days. I'll have to re-answer the question I asked myself not too long ago: What should I be doing? And this time, instead of hoping for an answer, I'm afraid of the answer. I'm truly and honestly afraid. Thankfully, I can fight pushing a lot better than pulling: fear is easier to fight than passion. I can plunge into the unknown, but it breaks my heart to put aside a very interesting and dear life path.

I've never felt more afraid, more ready to fall into a deep depression, more ready to scream and run away, retreat, abandon logic, go back to the safe comfortable beliefs and goals. I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it. Armed with my rationality toolkit, I could probably do wonders in that field.

Yet, I've also never felt more ready to take a step of this magnitude. Maximizing utility, all the fallacies, biases, defense mechanisms, etc, etc, etc. One by one they come to mind and help me move forward. Patterns of thought and reasoning whose names I can't even remember. All these tools and skills are right here with me, and using them I feel like I can do anything. I feel like I can dodge bullets. But I also know full well that I am at the starting line of a long and difficult marathon. A marathon that has no path and no guides, but that has to be run nonetheless.

May the human race win.

79 comments

Does everyone else here think that putting aside your little quirky interests to do big important things is a good idea? It seems to me that people who choose that way typically don't end up doing much, even when they're strongly motivated, while people who follow their interests tend to become more awesome over time. Though I know Anna is going to frown on me for advocating this path...

Though I know Anna is going to frown on me for advocating this path...

Argh, no I'm not going to advocate ignoring one's quirky interests to follow one's alleged duty. My impression is more like fiddlemath's, below. You don't want to follow shiny interests at random (though even that path is much better than drifting randomly or choosing a career to appease one's parents, and cousin_it is right that even this tends to make people more awesome over time). Instead, ideally, you want to figure out what it would be useful to be interested in, cultivate real, immediate, curiosity and urges to be interested in those things, work to update your anticipations and urges so that they know more of what your abstract/verbal reasoning knows, and can see why certain subjects are pivotal…

Not "far-mode reasoning over actual felt interests" but "far-mode reasoning in dialog with actual felt interests, and both goals and urges relating strongly to what you end up actually trying to do, and so that you develop new quirky interests in the questions you need to answer, the way one develops quirky interests in almost any question if one is willing to dwell on it patiently for a long time, with staring with intrinsic interest while the details of the question come out to inhabit your mind...

4multifoliaterose
I find this comment vague and abstract, do you have examples in mind?

I think the flowchart for thinking about this question should look something like:

  1. In the least convenient possible world, where following your interests did not maximize utility, are you pretty sure you really would forgo your personal interests to maximize utility? If no, go to 2; if yes, go to 3.

  2. Why are you even thinking about this question? Are you just trying to come up with a clever argument for something you're going to do anyway?

  3. Okay, now you can think about this question.

I can't answer your question because I've never gotten past 2.

9AnnaSalamon
I mostly-agree, except that question 1 shouldn't say: "In a least convenient world, would you utterly forgo all interest in return for making some small difference to global utility". It should say: "… is there any extent to which impact on strangers' well-being would influence your choices? For example, if you were faced with a choice between reading a chapter of a kind-of-interesting book with no external impact, or doing chores for an hour and thereby saving a child's life, would you sometimes choose the latter?" If the answer to that latter question is yes -- if expected impact on others' well-being can potentially sway your actions at some margin -- then it is worth looking into the empirical details, and seeing what bundles of global well-being and personal well-being can actually be bought, and how attractive those bundles are.
4Alex_Altair
I object to this being framed as primarily about others versus self. I pursue FAI for the perfectly selfish reason that it maximizes my expected life span and quality. I think the conflict being discussed is about near interest conflicting with far interest, and how near interest creates more motivation.
3[anonymous]
Because even if we don't have the strength or desire to willingly renounce all selfishness, we recognize that better versions of ourselves would do so, and that perhaps there's a good way to make some lifestyle changes that look like personal sacrifices but are actually net positive (and even more so when we nurture our sense of altruism)?
1Nectanebo
Isn't this statement also a clever argument for why you're not going to do it anyway, at least to an extent?
5Scott Alexander
Not a clever argument, more of an admission of current weakness. Admitting current weakness has the advantage of having the obvious next step of "consider becoming stronger". But saying "Pursuing my interests would increase utility anyway" has the disadvantage of requiring no further actions. Which is fine if it's true, but if you evaluate the truth of the statement while you still have this potential source of bias lurking in the background, it might not be.
0[anonymous]
For what it's worth, I have no trouble answering "yes" to 1, because for me it doesn't have the altruistic connotations it probably has for other people. My utility function is very selfish and I'm okay with that.
-2Viliam_Bur
Maybe the personal interests are the real utility, but we don't want to admit it -- because for our survival as members of a social species it is better to pretend that our utility is aligned with the utility of others, although it is probably just weakly positively correlated. In a more complex society the correlation is probably even weaker, because the choice space is larger. Or maybe the utility-choice mechanism is just broken in this society, because it evolved in an ancient environment. It probably uses some mechanism like "if you are rewarded for doing something, or if you see that someone is rewarded for something, then develop a strong desire... and keep the desire even in times when you are not rewarded (because it may require long-term effort)". In the ancient environment you would see rewards mostly for useful things. Today there are too many exciting things -- I don't say they are mostly bad, just that there are too many of them, so people's utility functions are spread too thin, and sometimes there are not enough people with the desire to do some critical tasks.
[-]jimmy110

That's why it's a very important skill to become interested in what you should be interested in. I made a conscious decision to become interested in what I'm working on now because it seemed like an area full of big low-hanging fruit, and now it genuinely fascinates me.

How does one become really interested in something?

I would suggest spending time with people interested in X, because this would give one's brain the signal "X is socially rewarded", which would motivate them to do X. Any other good ideas?

[-]jimmy110

What worked for me was to spend time thinking about the types of things I could do if it worked right, and feeling those emotions while trying to figure out rough paths to get there.

I also chose to strengthen the degree to which I identify as someone who can do this kind of thing, so it felt natural.

3Karmakaiser
I'm spitballing different ideas I've used:

  • Like you said, talk to people who know the topic and find it interesting.
  • Read non-technical introductory books on the topic. I found the algorithms part of CS interesting, but the EE dimensions of computing were utterly boring until I read Code by Charles Petzold.
  • Research the history of a topic in order to see the lives of the humans who worked on it. Humans, being social creatures, may find a topic more interesting after they have learned about some of the more interesting people who have worked in that field.
9Alexei
First, making games isn't a little quirky interest. Second, I don't necessarily have to put it aside. My goal is to contribute to FAI. I will have to figure out the best way to do that. If I notice that whatever I try, I fail at it because I can't summon enough motivation, then maybe making games is the best option I've got. But the point is that I have to maximize my contribution.
1Risto_Saarelma
Why do you say it's not a little quirky interest? I'm asking this as I've been fixated on various game-making stuff for close to 20 years now, but now feel like I'm mostly going on because it's something I was really interested in at 14 and subsequently tinkered with enough that it now seems like the thing I can do the most interesting things with, but I suspect it's not what I'd choose to have built a compulsive interest in if I could make the choice today. Nowadays I am getting alienated from the overall gaming culture, which is still mostly optimized to appeal primarily to teenagers, and I often have trouble coming up with a justification for why most games should be considered anything other than shiny escapist distractions, or for how the enterprise of game development aspires to anything other than being a pageant for coming up with the shiniest distraction. So I would go for both quirky (gaming has a bunch of weird insider-culture things going for it) and little (most gaming and gamedev has little effect on the big picture of fixing things that make life bad for people, though distractions can be valuable too, and it might have a negative effect if clever people who could make a contribution elsewhere get fixated on gamedev instead). It does translate into constantly growing programming skill for me, so at least there's that good reason to keep at it. But that's more a side effect than the primary value of the interest.
4Alexei
You've committed the mind projection fallacy. :) For me, games started out as a hobby and grew into a full-blown passion. It's something I live and breathe about 10 hours a day (full-time job and then making a game on the side). I'll agree that current games suck, but their focus has extended way past teenagers. And just because they are bad doesn't mean the medium is bad. It's possible to make good games, for almost any definition of good.
0Risto_Saarelma
I was describing my own mind, didn't get around to projecting it yet. Let me put the question this way: You can probably make a case for why people should want to be interested in, say, mathematics, physics or effective reasoning, even if they are not already interested in them. Is there any compelling similar reason why someone not already interested in game development should want to be interested in game development?
0Alexei
Sure, but it would be on a case-by-case basis. I think game development is too narrow (especially when compared to things like math and physics), but if you consider game design in general, that's a useful field to know any time you are trying to design an activity so that it's engaging and understandable.
9fiddlemath
I expect that completely ignoring your quirky interests leads to completely destroying your motivation for doing useful work. On the other hand, I find myself demotivated, even from my quirky interests, when I haven't done "useful" things recently. I constantly question "why am I doing what I'm doing?" and feel pretty awful, and completely destroy my motivation for doing anything at all. But! Picking from "fiddle with shiny things" and "increase global utility" is not a binary decision. The trick is to find a workable compromise between the ethics you endorse and the interests you're drawn to, so that you don't exhaust yourself on either, and gain "energy" from both. Without some sort of deep personal modification, very few people can usefully work 12 hours a day, 7 days a week, at any one task. You can, though, spend about four hours a day on each of three different projects, so long as they're sufficiently varied, and they provide an appropriate mixture of short-term and long-term goals, near and far goals, personal time and social time, and seriousness and fun.
8scientism
Doing what's right is hard and takes time. For a long time I've been of the opinion that I should do what's most important and let my little quirky interests wither on the vine, and that's what I've done. But it took many years to get it right, not because of issues of intrinsic motivation, but because I'm tackling hard problems and it was difficult to even know what I was supposed to be doing. But once I figured out what I'm doing, I was really glad I'd taken the risk, because I can't imagine ever returning to my little quirky interests. I think it involves a genuine leap into the unknown. For example, even if you decide that you should dedicate your life to FAI, there's still the problem of figuring out what you should be doing. It might take years to find the right path and you'll probably have doubts about whether you made the right decision until you've found it. It's a vocation fraught with uncertainty and you might have several false starts; you might even discover that FAI is not the most important thing after all. Then you've got to start over. Should everyone be doing it? Probably not. Is there a good way to decide whether you should be doing it or not? I doubt it. I think what really happens is you start going down that road and there's a point where you can't turn back.
2cousin_it
Just being curious, what are you doing now, and what were your interests before?
2scientism
I work on non-computationalist approaches to cognitive science (i.e., embodied, dynamical, ecological). I used to pursue interests in all manner of things, including art, movies and games.
2cousin_it
Thanks! That sounds interesting, will we see a LW post describing your progress?
0Alexei
Thanks, your statements align very well with my anticipations. I don't expect this to be easy, and I don't know exactly what I should be doing, but I know I really want to figure it out.
8Vladimir_Golovin
A couple of years ago, I'd side with Anna. Today, I'm more inclined to agree with you. As I learned the hard way, intrinsic motivation is extremely important for me. (Long story short: I have a more than decent disposable income, which I earned by following my "little quirky interests". I could use this income for direct regular donations, but instead I decided to invest it, along with my time, in a potentially money-making project I had little intrinsic motivation for. I'm still evaluating the results, but so far it's likely that I'll make intrinsic motivation mandatory for all my future endeavors.)
4[anonymous]
My trouble is that my "little quirky interests" are all I really want to do. It can be a bit hard to focus on all the things that must get done when I'd much rather be working on something totally unrelated to my "work". I'm not sure how to solve that.
2David_Gerard
Indeed. Humans in general aren't good at trying to be utility-function-satisfying machines. The question you're asking is "is there life before death?"
[-]Shmi300

Replace FAI with Rapture and LW with Born Again, and you can publish this "very personal account" full of non-sequiturs on a more mainstream site.

Replace FAI with Rapture and LW with Born Again

And "rational" with "faithful", and "evidence" with "feeling", and "thought about" with "prayed for", etc. With that many substitutions, you could turn it into just about anything.

[-]Alexei130

Thanks, I'm actually glad to see your kind of comment here. The point you make is something I am very wary of, since I've had dramatic swings like that in the past. From Christianity to Buddhism, back to Christianity, then to Agnosticism. Each one felt final, each one felt like the most right and definite step. I've learned not to trust that feeling, to be a bit more skeptical and cautious.

You are correct that my post was full of non-sequiturs. That's because I wrote it in a stream-of-thought kind of way. (I've also omitted a lot of thoughts.) It wasn't meant to be an argument for anything other than "think really hard about your goals, and then do the absolute best to fulfill them."

tl;dr: If you can spot non-sequiturs in your writing, and you put a lot of weight on the conclusion it's pointing at, it's a really good idea to take the time to fill in all the sequiturs.

Writing an argument in detail is a good way to improve the likelihood that your argument isn't somewhere flawed. Consider:

  • Writing allows reduction. By pinning the argument to paper, you can separate each logical step, and make sure that each step makes sense in isolation.
  • Writing gives the argument stability. For example, the argument won't secretly change when you think about it while you're in a different mood. This can help to prevent you from implicitly proving different points of your argument from contradictory claims.
  • Writing makes your argument vastly easier to share. Like in open source software, enough eyeballs make all bugs trivial.

Further, notice that we probably underestimate the value of improving our arguments, and are overconfident in apparently-solid logical arguments. If an argument contains 20 inferences in sequence, and you're wrong about such inferences 5% of the time without noticing the misstep, then you have about a 64% chance of being wrong somewhere in the argument. If you can ... (read more)
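(A quick check of the 64% figure above: a minimal sketch of the arithmetic as I read it, assuming the 5% error rate applies independently to each of the 20 steps.)

    # If each of 20 sequential inferences is silently wrong 5% of the time
    # (independently), the chance that at least one step is wrong is 1 - 0.95^20.
    p_step_correct = 0.95
    n_steps = 20

    p_somewhere_wrong = 1 - p_step_correct ** n_steps
    print(f"{p_somewhere_wrong:.0%}")  # -> 64%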

5David_Gerard
Yes. Even if this one is right, you're still running on corrupt hardware and need to know when to consciously lower your enthusiasm.

The problem with this argument is that you've spent so much emotional effort arguing why the world is screwed without FAI, that you've neglected to hold the claim "The FAI effort currently being conducted by SIAI is likely to succeed in saving the world" to the standards of evidence you would otherwise demand.

Consider the following exercise in leaving a line of retreat: suppose Omega told you that SIAI's FAI project was going to fail. What would you do?

6Alexei
I wasn't arguing that SIAI is likely to succeed in saving the world or even that they are the best option for FAI. (In fact, I have a lot of doubts about it.) That's a really complicated argument, and I really don't have enough information to make a statement like that. As I've said, my goal is to make FAI happen. If SIAI isn't the best option, I'll find another best option. If it turns out that FAI is not really what we need, then I'll work on whatever it is we do.
[-][anonymous]200

There's a lot to process here, but: I hear you. As you investigate your path, just remember that a) paths that involve doing what you love should be favored as you decide what to do with yourself, because depression and boredom do not a productive person make, and b) if you can make a powerful impact in gaming, you can still translate that impact into an impact on FAI by converting your success and labor into dollars. I expect these are clear to you, but bear mentioning explicitly.

These decisions are hard but important. Those who take their goals seriously must choose their paths carefully. Remember that the community is here for you, so you aren't alone.

7Alexei
Thanks! :) I'm fully aware of both points, but I definitely appreciate you bringing them up. You're right, depression and boredom are not good. I sincerely doubt boredom will be a problem, and as for depression, it's something I'll have to be careful about. Thankfully, there are things in life that I like doing aside from making games. Yes, I could convert that success into dollars, but as I've mentioned in my article, that's probably not the optimal way of making money. (It still might be, I'd have to really think about it, but I'd definitely have to change my approach if that's what I decided to do.)

I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it.

Good enough to make billions and/or impact/recruit many (future) academics? Then do it! Use your superpowers to do it better than before.

And if you are not good enough, then what else will you do? Will you be good enough at that other thing? You should not replace one thing with another just for the sake of replacing, but because it increases your utility function. You should be able to do more in the new area, or the new area should be so significant that even if you do less, the overall result is better.

I have an idea, though I am not sure if it is good and if you will like it. From the reviews it seems to me that you are a great storyteller (except for the dialogue-writing part), but your weak point is game mechanics. And since you made a game, you are obviously good at programming. So I would suggest focusing on the mechanical part and, for a moment, forgetting about stories. People at SIAI are preparing a rationality curriculum; they are trying to make exercises that will help people improve some of their skills. I don't know how far they are, but... (read more)

Oh, wow. I was reading your description of your experiences in this, and I was like, "Oh, wow, this is like a step-by-step example of brainwashing. Yup, there's the defreezing, the change while unfrozen, and the resolidification."

3Alexei
It's certainly what it feels like from the inside as well. I'm familiar with that feeling, having gone through several indoctrinations in my life. This time I am very wary of rushing into anything, or claiming that this belief is absolutely right, or anything like that. I have plenty of skepticism; however, not acting on what I believe to be correct would be very silly.

I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of maybe tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.

Eliezer has said that he doesn't know how to usefully spend more than 10 million dollars...

0Alexei
Thankfully there are more people at SIAI than just him. :) As a very simple start, they could make sure that all SIAI researchers can focus on their work, by taking care of their other chores for them.
2Shmi
I am guessing that a part of what EY had in mind was that large organizations tend to lose their purpose and work towards self-preservation as much as towards the original objective. $10M/year translates into less than 100 full-time jobs, which is probably a good rule of thumb for when an organization becomes too big to keep its collective eyes on the ball.
0hairyfigment
What chores did you have in mind? Seems like you wouldn't even need 100 million dollars to hire the world's best mathematicians (if you have any hope of doing so) and give all involved a comfortable lifestyle. Do you mean, billions total by the time we get FAI? Because you started out speaking of donating such-and-such "a year".
0Alexei
My point was that if your goal is to give money to an organization (and it seemed that that's what a lot of people at the rationality minicamp were planning to do for SIAI), thinking in thousands, while a nice gesture, is not very helpful. We should be thinking in billions (not necessarily per year, I'm just talking order of magnitude here). As for chores, they can start as basic as living accommodations and private jets, and get as complex as security guards and private staff. I won't argue for any specifics here, though. I'm just arguing that having lots of money is nice and helpful.
2hairyfigment
See, I'm not sure more is better after a certain point - shminux touched on this. I also think if we hired the world's best mathematicians for a year - assuming FAI would take more than this - some of them would either get interested and work for less money, or find some proof SI's current approach won't work.
0Alexei
Sure, there are diminishing returns after some point. And both of those results would be good.
[-][anonymous]120

I wish you well, but be wary. I would guess that many of us on this site had dreams of saving the world when younger, and there is no doubt that FAI appeals to that emotion. If the claims of SI are true, then donating to them will mean you contributed to saving the world. Be wary of the emotions associated with that impulse. It's very easy for the brain to pick out one train of thought and ignore all others -- those doubts you admit to may not be entirely unreasonable. Before making drastic changes to your lifestyle, give it a while. Listen to skeptical voices. Read the best arguments as to why donating to SI may not be a good idea (there are some on this very site).

If you are convinced, after some time to think, that helping SI is all you want to do with life, then, as Viliam suggests, do something you love to promote it. Donate what you can spare to SI, and keep on doing what makes you happy, because I doubt you will be more productive doing something that makes you miserable. So make those rational board games, but make some populist ones too, because while the former may convert, the latter might generate more income to allow you to pay someone else to convert people.

5Alexei
Yes, I probably need a healthy dose of counter-arguments. Can you link any? (I'll do my own search too.)
1[anonymous]
I have to admit that no particular examples come to mind, but usually in the comment threads on topics such as optimal giving, and in occasional posts arguing against the probability of the singularity. I certainly have seen some, but can't remember where exactly, so any search you do will probably be as effective as my own. To present you with a few possible arguments (which I believe to varying degrees of certainty):

  • A lot of the arguments for becoming committed to donating to FAI are based on "even if there's a low probability of it happening, the expected gains are incredibly huge". I'm wary of this argument because I think it can be applied anywhere. For instance, even now, and certainly 40 years ago, one could make a credible argument that there's a not insignificant chance of a nuclear war eradicating human life from the planet. So we should contribute all our money to organisations devoted to stopping nuclear war.
  • This leads directly to another argument: how effective do we expect SI to be? Is friendly AI possible? Are SI going to be the ones to find it? If SI create friendliness, will it be implemented? If I had devoted all my money to the CND, I would not have had a significant impact on the proliferation of nuclear weaponry.
  • A lot of the claims based on a singularity assume that intelligence can solve all problems. But there may be hard limits to the universe. If the speed of light is the limit, then we are trapped with finite resources, and maybe there is no way for us to use them much more efficiently than we can now. Maybe cold fusion isn't possible, maybe nanotechnology can't get much more sophisticated?
  • Futurism is often inaccurate. The jokes about "where's my hover car" are relevant: the progress over the last 200 years has rocketed in some spheres but slowed in others. For instance, current medical advances have been slowing recently. They might jump forwards again, but maybe not. Predicting which bits of science will advance in a certain time sca
0John_Maxwell
This doesn't sound easy to do a keyword search for; did you have anything in mind you could link us to? Edit: Sorry, I see this has already been asked.

In re FAI vs. snoozing: What I'd hope from an FAI is that it would know how much rest I needed. Assuming that you don't need that snoozing time at all strikes me as a cultural assumption that theories (in this case, possibly about willpower, productivity, and virtue) should always trump instincts.

A little about hunter-gatherer sleep. What I've read elsewhere is that with an average of 12 hours of darkness and an average need for 8 hours of sleep, hunter-gatherers would not only have different circadian rhythms (teenagers tend to run late, old people tend to ... (read more)

8David_Gerard
Leave out "artificial" - what would constitute a "human-friendly intelligence"? Humans don't. Even at our present intelligence we're a danger to ourselves. I'm not sure "human-friendly intelligence" is a coherent concept, in terms of being sufficiently well-defined (as yet) to say things about. The same way "God" isn't really a coherent concept.
3NancyLebovitz
4 + 4 hours of sleep in relatively recent history

Here's what I was thinking as I read this: Maybe you need to reassess cost/benefits. Apply the Dark Arts to games and out-Zynga Zynga. Highly addictive games with in-game purchases designed using everything we know about the psychology of addiction, reward, etc. Create negative utility for a small group of people, yes, but syphon off their money to fund FAI.

I think if I really, truly believed FAI was the only and right option I'd probably do a lot of bad stuff.

7Alex_Altair
Let's start a Singularity Casino and Lottery.
4JenniferRM
You might want to read through some decision theory stuff and ponder it for a while. Also, even before that, please consider the possibility that your political instincts are optimized to get a group of primates to change a group policy in a way you prefer while surviving the likely factional fight. If you really want to be effective here or in any other context requiring significant coordination with numerous people, it seems likely to me that you'll need to adjust your goal directed tactics so that you don't leap into counter-productive actions the moment you decide they are actually worth doing. Baby steps. Caution. Lines of retreat. Compare and contrast your prediction with: the valley of bad rationality. I have approached numerous intelligent and moral people who are perfectly capable of understanding the basic pitch for singularity activism but who will not touch it with a ten-foot pole because they are afraid to be associated with anything that has so much potential to appeal to the worst sorts of human craziness. Please do something other than confirm these bleak but plausible predictions.

I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. Another is to make certain key people a bit more sane, hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.

There's another possible scenario: The AI Singularity isn't far, but it is not very near, either. AGI is a generation or more beyond our current understanding of minds, and FAI i... (read more)

0Alexei
Good point, thanks.

I recently had a very similar realization and accompanying shift of efforts. It's good to know others like you have as well.

A couple of principles I'm making sure to follow (which may be obvious, but I think are worth pointing out):

  1. Happier people are more productive, so it is important to apply a rational effort toward being happy (e.g. by reading and applying the principles in "The How of Happiness"). This is entirely aside from valuing happiness in itself. The point is that I am more likely to make FAI happen if I make myself happier, as a ma

... (read more)
0Alexei
Interesting! I'm also happy to hear I'm not the only one. :) Where did you shift your efforts to? Yes and yes to both points. Seems like everyone is giving the same advice. Must be important.
8PECOS-9
Like you seemed to be looking toward in your post, I'm focusing on making as much money as possible right now. For me, the best path in that direction is probably in the field of tech startups. This has the added benefit of probably being the right choice even if I decide for some reason that SIAI isn't the most important cause. Pretty much any set of preferences seems to me like it will be best served by making a lot of money (even something like "live a simple and humble life and focus only on your own happiness" is probably easier if you can just retire early and then have the freedom to do whatever you want). Edited to add: It's worth noting that making as much money as possible via a tech startup is also good for me personally because I actually do have a fair amount of motivation and interest in that area (if I never heard of FAI, tech entrepreneurship would still be appealing to me, although I probably wouldn't dive into it as much as I did once I decided I wanted to work toward FAI). So I don't necessarily think the best thing for everyone who wants to contribute to FAI is to try to make as much money as possible (and certainly not necessarily via a tech startup), but for me personally it seems like the best path.
1Giles
I've had pretty much the exact same experience. Some random thoughts (despite the surface similarities, I'm describing myself not you here, so the usual caveats apply). For me the trigger was deciding that I wanted to be a utilitarian. At that point the Intelligence Explosion hypothesis (and also GiveWell's message that some charities are orders of magnitude more effective than others) went from being niggling concerns to urgently important concerns. Unlike you, I do have a visceral negative feeling associated with the end of the world. Or rather I did - now it just feels like the new normal (but I'd still put a lot of effort into reducing its probability very slightly). It's entirely nonobvious to me that FAI is the marginally most important thing for humanity to be working on right now. That depends on factors such as whether the approach can be made safe and viable, how different x-risks compare with each other, and some icky stuff to do with social dynamics and information hazards. In other words, it's a question that I have no hope of correctly answering as an individual. But this doesn't suggest that inaction is the best strategy, but rather that I should be supporting smarter people who are trying to answer that kind of question. The most important thing for humanity to be doing right now is probably very different from the most important thing for me to be doing. It depends on where my comparative advantage is. The current top priority for me is actually just sorting my own life out. Other commenters here have made the analogy with religious conversion and/or brainwashing. I think the analogy is valid insofar as something similar may have been happening in my brain; on the other hand, I don't really know what to do with this information. It doesn't mean I'm wrong. I'm still on the lookout for any really good skeptical arguments that address the main issues. I've really been suffering due to a lack of a community of like-minded people. (At least face-to-face
0Alexei
You know, it should be more non-obvious for me that FAI is the right thing to do, too. It's just that the idea clicks with me very well, but I would still prefer to collect more evidence for it. I agree with you on doing what you can do best (and for many of us that will probably be some kind of support role). I agree with the "yeah, it's kind of like brainwashing, but I'm open to counter-arguments." Not having like-minded in-person friends is really bad. Wherever you are, I would urge you to relocate. I only recently moved to the Bay Area, and it's just so much nicer here (I've lived in small towns in Midwest before). This goes double if your career is in technology. I also suffer depression-like symptoms. My head feels like it's about to explode half the time, and every so often I just want to scream. I also often want to just lie down and not do anything. I take all these feelings, and I say: "Yeah, it's normal to feel this. I'm going through a lot right now. It's going to be okay."
0kmacneill
Do not take the frequentist approach!

A question that I'm really curious about: Has anyone (SIAI?) created a roadmap to FAI? Luke talks about granularizing all the time. Has it been done for FAI? Something like: build a self-sustaining community of intelligent rational people, have them work on problems X, Y, Z. Put those solutions together with black magic. FAI.

9PECOS-9
Lukeprog's So You Want to Save the World is sort of like a roadmap, although it's more of a list of important problems with a "strategies" section at the end, including things like raising the sanity waterline.
2Alexei
This looks good. Thanks!
1John_Maxwell
http://singinst.org/files/strategicplan2011.pdf ?
[-]ryjm30

As a relatively new member of this site, I'm having trouble grasping this particular reasoning and motivation for participating in FAI. I've browsed Eliezer's various writings on the subject of FAI itself, so I have a vague understanding of why FAI is important, and such a vague understanding is enough for me to conclude that FAI is one of the most important topics, if not the most important, that currently need to be discussed. This belief may not be entirely my own and is perhaps largely influenced by the amount of comments and posts in support of FAI, in conjunction with my ... (read more)

3Alexei
Even when you understand that FAI is the most important thing to be doing, there are many ways in which you can fail to translate that into action. It seems most people are making the assumption that I'll suddenly start doing really boring work that I hate. That's not the case. I have to maximize my benefit, which means considering all the factors. I can't be productive in something that I'm just bad at, or something that I really hate, so I won't do that. But there are plenty of things that I'm somewhat interested in and somewhat familiar with, that would probably do a lot more to help with FAI than making games. But, again, it's something that has to be carefully determined. That's all I was trying to say in this post. I have an important goal -> I need to really consider what the best way to achieve that goal is.
0ryjm
I see. I wasn't asserting that you are going to do work you hate, however. I was mainly looking at the value of having a seemingly unachievable and incredibly broad goal as one's primary motivation. I'm sure you have a much more nuanced view of how and why you are undertaking this life change, and I don't want to discourage you. Seeing as how the general consensus is that FAI is the most important thing to be doing, I think it would take a lot of effort to discourage you. I just can't help but think that there should be a primary technical interest in the problems presented by FAI motivating these kinds of decisions. If it was me, I would be confused as to what exactly I would be working on, which would be very discouraging.
1Alexei
I'm also confused as to what I should be working on. That was one of the primary reason it took a while for me to get to this point. I still don't know what to do, but I know I have to do my best to figure it out.

A pleasure to read, sir.

Sometimes, it feels like part of me would take over the world just to get people to pay attention to the danger of UFAI and the importance of Friendliness. Figuratively speaking. And part of me wants to just give up and get the most fun out of my life until death, accepting our inevitable destruction because I can't do anything about it.

So far, seems like the latter part is winning.

4Alexei
Both a little extreme. :) There are little things you can do. I've been donating to SIAI for a while, so that's a good start. Take on as much as you can bear responsibly.
0John_Maxwell
What's worked for me is acting on some average of the different parts of my personality. I realize that I do care about myself more than random other people, and that the future is uncertain, and I let that factor into my decisions.
[-][anonymous]10

Interesting. It's good to see that you are at least aware of why you're choosing this path now just as you've chosen other paths (like Buddhism) before.

However, faith without action is worthless, so I am curious, as others below are: what's your next goal, exactly? For all this reasoning, what effect do you hope to accomplish in the real world? I don't mean the pat "raise sanity, etc." answer. I mean: what tangible thing do you hope to accomplish next, under these beliefs and this line of reasoning?