Does everyone else here think that putting aside your little quirky interests to do big important things is a good idea? It seems to me that people who choose that way typically don't end up doing much, even when they're strongly motivated, while people who follow their interests tend to become more awesome over time. Though I know Anna is going to frown on me for advocating this path...
Though I know Anna is going to frown on me for advocating this path...
Argh, no I'm not going to advocate ignoring one's quirky interests to follow one's alleged duty. My impression is more like fiddlemath's, below. You don't want to follow shiny interests at random (though even that path is much better than drifting randomly or choosing a career to appease one's parents, and cousin_it is right that even this tends to make people more awesome over time). Instead, ideally, you want to figure out what it would be useful to be interested in, cultivate real, immediate curiosity and urges to be interested in those things, and work to update your anticipations and urges so that they know more of what your abstract/verbal reasoning knows and can see why certain subjects are pivotal…
Not "far-mode reasoning over actual felt interests" but "far-mode reasoning in dialog with actual felt interests, and both goals and urges relating strongly to what you end up actually trying to do, and so that you develop new quirky interests in the questions you need to answer, the way one develops quirky interests in almost any question if one is willing to dwell on it patiently for a long time, with staring with intrinsic interest while the details of the question come out to inhabit your mind...
I think the flowchart for thinking about this question should look something like:
1. In a least convenient possible world where following your interests did not maximize utility, are you pretty sure you really would forgo your personal interests to maximize utility? If no, go to 2; if yes, go to 3.
2. Why are you even thinking about this question? Are you just trying to come up with a clever argument for something you're going to do anyway?
3. Okay, now you can think about this question.
I can't answer your question because I've never gotten past 2.
That's why it's a very important skill to become interested in what you should be interested in. I made a conscious decision to become interested in what I'm working on now because it seemed like an area full of big, low-hanging fruit, and now it genuinely fascinates me.
How do you become really interested in something?
I would suggest spending time with people interested in X, because this would give one's brain the signal "X is socially rewarded", which would motivate one to do X. Any other good ideas?
What worked for me was to spend time thinking about the types of things I could do if it worked right, and feeling those emotions while trying to figure out rough paths to get there.
I also chose to strengthen the degree to which I identify as someone who can do this kind of thing, so it felt natural.
Replace FAI with Rapture and LW with Born Again, and you can publish this "very personal account" full of non-sequiturs on a more mainstream site.
Replace FAI with Rapture and LW with Born Again
And "rational" with "faithful", and "evidence" with "feeling", and "thought about" with "prayed for", etc. With that many substitutions, you could turn it into just about anything.
Thanks, I'm actually glad to see your kind of comment here. The point you make is something I am very wary of, since I've had dramatic swings like that in the past. From Christianity to Buddhism, back to Christianity, then to Agnosticism. Each one felt final, each one felt like the most right and definite step. I've learned not to trust that feeling, to be a bit more skeptical and cautious.
You are correct that my post was full of non-sequiturs. That's because I wrote it in a stream-of-thought kind of way. (I've also omitted a lot of thoughts.) It wasn't meant to be an argument for anything other than "think really hard about your goals, and then do your absolute best to fulfill them."
tl;dr: If you can spot non-sequiturs in your writing, and you put a lot of weight on the conclusion it's pointing at, it's a really good idea to take the time to fill in all the sequiturs.
Writing an argument in detail is a good way to improve the likelihood that your argument isn't somewhere flawed. Consider:
Further, notice that we probably underestimate the value of improving our arguments, and are overconfident in apparently-solid logical arguments. If an argument contains 20 inferences in sequence, and you're wrong about such inferences 5% of the time without noticing the misstep, then you have about a 64% chance of being wrong somewhere in the argument. If you can ...
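As a sanity check of that 64% figure (a minimal sketch, assuming the 20 inference steps are independent and each carries a 5% chance of an unnoticed error):

$$P(\text{flawed somewhere}) = 1 - (1 - 0.05)^{20} = 1 - 0.95^{20} \approx 0.64$$

So even a chain of individually solid-looking steps is more likely than not to contain a flaw somewhere.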
The problem with this argument is that you've spent so much emotional effort arguing why the world is screwed without FAI that you've neglected to hold the claim "The FAI effort currently being conducted by SIAI is likely to succeed in saving the world" to the standards of evidence you would otherwise demand.
Consider the following exercise in leaving a line of retreat: suppose Omega told you that SIAI's FAI project was going to fail. What would you do?
There's a lot to process here, but: I hear you. As you investigate your path, just remember that a) paths that involve doing what you love should be favored as you decide what to do with yourself, because depression and boredom do not a productive person make, and b) if you can make a powerful impact in gaming, you can still translate that impact into an impact on FAI by converting your success and labor into dollars. I expect these are clear to you, but they bear mentioning explicitly.
These decisions are hard but important. Those who take their goals seriously must choose their paths carefully. Remember that the community is here for you, so you aren't alone.
I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it.
Good enough to make billions and/or impact/recruit many (future) academics? Then do it! Use your superpowers to do it better than before.
And if you are not good enough, then what else will you do? Will you be good enough at that other thing? You should not replace one thing with another just for the sake of replacing it, but because doing so increases your utility. You should be able to do more in the new area, or the new area should be so significant that even if you do less, the overall result is better.
I have an idea, though I am not sure if it is good or if you will like it. From the reviews it seems to me that you are a great storyteller (except for writing dialogue), but your weak point is game mechanics. And since you have made games, you are obviously good at programming. So I would suggest focusing on the mechanical part and, for a moment, forgetting about stories. People at SIAI are preparing a rationality curriculum; they try to make exercises that will help people improve some of their skills. I don't know how far they are, but...
Oh, wow. I was reading your description of your experiences in this, and I was like, "Oh, wow, this is like a step-by-step example of brainwashing. Yup, there's the defreezing, the change while unfrozen, and the resolidification."
I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of maybe tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.
Eliezer has said that he doesn't know how to usefully spend more than 10 million dollars...
I wish you well, but be wary. I would guess that many of us on this site had dreams of saving the world when younger, and there is no doubt that FAI appeals to that emotion. If SI's claims are true, then donating to them will mean you contributed to saving the world. Be wary of the emotions associated with that impulse. It's very easy for the brain to pick out one train of thought and ignore all others; those doubts you admit to may not be entirely unreasonable. Before making drastic changes to your lifestyle, give it a while. Listen to skeptical voices. Read the best arguments as to why donating to SI may not be a good idea (there are some on this very site).
If you are convinced after some time to think that helping the SI is all you want to do with life, then, as Villiam suggests, do something you love to promote it. Donate what you can spare to SI, and keep on doing what makes you happy, because I doubt you will be more productive doing something that makes you miserable. So make those rational board games, but make some populist ones too, because while the former may convert, the latter might generate more income to allow you to pay someone else to convert people.
In re FAI vs. snoozing: What I'd hope from an FAI is that it would know how much rest I needed. Assuming that you don't need that snoozing time at all strikes me as an instance of the cultural assumption that theories (in this case, possibly about willpower, productivity, and virtue) should always trump instincts.
A little about hunter-gatherer sleep. What I've read elsewhere is that with an average of 12 hours of darkness and an average need for 8 hours of sleep, hunter-gatherers would not only have different circadian rhythms (teenagers tend to run late, old people tend to ...
Here's what I was thinking as I read this: Maybe you need to reassess cost/benefits. Apply the Dark Arts to games and out-Zynga Zynga. Highly addictive games with in-game purchases designed using everything we know about the psychology of addiction, reward, etc. Create negative utility for a small group of people, yes, but syphon off their money to fund FAI.
I think if I really, truly believed FAI was the only and right option I'd probably do a lot of bad stuff.
I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. Another is to make certain key people a bit more sane, hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.
There's another possible scenario: The AI Singularity isn't far, but it is not very near, either. AGI is a generation or more beyond our current understanding of minds, and FAI i...
I recently had a very similar realization and accompanying shift of efforts. It's good to know others like you have as well.
A couple of principles I'm making sure to follow (which may be obvious, but I think are worth pointing out):
Happier people are more productive, so it is important to apply a rational effort toward being happy (e.g. by reading and applying the principles in "The How of Happiness"). This is entirely aside from valuing happiness in itself. The point is that I am more likely to make FAI happen if I make myself happier, as a ma
A question that I'm really curious about: Has anyone (SIAI?) created a roadmap to FAI? Luke talks about granularizing all the time. Has it been done for FAI? Something like: build a self-sustaining community of intelligent rational people, have them work on problems X, Y, Z. Put those solutions together with black magic. FAI.
As a relatively new member of this site, I'm having trouble grasping this particular reasoning and motivation for participating in FAI. I've browsed Eliezer's various writings on the subject of FAI itself, so I have a vague understanding of why FAI is important, and such a vague understanding is enough for me to conclude that FAI is one of, if not the, most important topics that currently need to be discussed. This belief may not be entirely my own and is perhaps largely influenced by the number of comments and posts in support of FAI, in conjunction with my ...
Sometimes, it feels like part of me would take over the world just to get people to pay attention to the danger of UFAI and the importance of Friendliness. Figuratively speaking. And part of me wants to just give up and get the most fun out of my life until death, accepting our inevitable destruction because I can't do anything about it.
So far, seems like the latter part is winning.
Interesting. It's good to see that you are at least aware of why you're choosing this path now just as you've chosen other paths (like Buddhism) before.
However, faith without action is worthless, so I am curious, as others below are: what is your next goal, exactly? For all this reasoning, what effect do you hope to have in the real world? I don't mean the pat "raise sanity, etc." answer. I mean: what tangible thing do you hope to accomplish next, under these beliefs and this line of reasoning?
I started reading LW around spring of 2010. I was at the rationality minicamp last summer (2011). On the night of February 10, 2012, all the rationality learning and practice finally caught up with me. Like water that had been building up behind a dam, it finally broke through and flooded my poor brain.
"What if the Bayesian Conspiracy is real?" (By Bayesian Conspiracy I just mean a secret group that operates within and around LW and SIAI.) That is the question that set it all in motion. "Perhaps they left clues for those that are smart enough to see it. And to see those clues, you would actually have to understand and apply everything that they are trying to teach." The chain of thoughts that followed (conspiracies within conspiracies, shadow governments and Illuminati) it too ridiculous to want to repeat, but it all ended up with one simple question: How do I find out for sure? And that's when I realized that almost all the information I have has been accepted without as much as an ounce of verification. So little of my knowledge has been tested in the real world. In that moment I achieved a sort of enlightenment: I realized I don't know anything. I felt a dire urge to regress to the very basic questions: "What is real? What is true?" And then I laughed, because that's exactly where The Sequences start.
Through the turmoil of jumbled and confused thoughts came a shock of my most valuable belief propagating through my mind, breaking down final barriers, reaching its logical conclusion. FAI is the most important thing we should be doing right now! I already knew that. In fact, I knew that for a long time now, but I didn't... what? Feel it? Accept it? Visualize it? Understand the consequences? I think I didn't let that belief propagate to its natural conclusion: I should be doing something to help this cause.
I can't say: "It's the most important thing, but..." Yet, I've said it so many times inside my head. It's like hearing other people say: "Yes, X is the rational thing to do, but..." What follows is a defense that allows them to keep the path to their goal that they are comfortable with, that they are already invested in.
Interestingly enough, I've already thought about this. Right after the rationality minicamp, I asked myself the question: Should I switch to working on FAI, or should I continue to make games? I thought about it heavily for some time, but I felt like I lacked the necessary math skills to be of much use on the FAI front. Making games was the convenient answer. It's something I've been doing for a long time, and it's something I am good at. I decided to make games that explain various ideas that LW presents in text. This way I could help raise the sanity waterline. It seemed like a very nice, neat solution that allowed me to do what I wanted and feel a bit helpful to the FAI cause.
Looking back, I was dishonest with myself. In my mind, I had already written the answer I wanted. I convinced myself that I hadn't, but part of me certainly sabotaged the whole process. But that's okay, because I was still somewhat helpful, even if maybe not in the most optimal way. Right? Right?? The correct answer is "no". So, now I have to ask myself again: What is the best path for me? And to answer that, I have to understand what my goal is.
Rationality doesn't just help you get what you want better/faster. Increased rationality starts to change what you want. Maybe you wanted the air to be clean, so you bought a hybrid. Sweet. But then you realized that what you actually want is for people to be healthy. So you became a nurse. That's nice. Then you realized that if you did research, you could be making an order of magnitude more people healthier. So you went into research. Cool. Then you realized that you could pay for multiple researchers if you had enough money. So you went out, became a billionaire, and created your own research institute. Great. There was always you, and there was your goal, but everything in between was (and should be) up for grabs.
And if you follow that kind of chain long enough, at some point you realize that FAI is actually the thing right before your goal. Why wouldn't it be? It solves everything in the best possible way!
People joke that LW is a cult. Everyone kind of laughs it off. It's funny because cultists are weird and crazy, but they are so sure they are right. LWers are kind of like that. Unlike other cults, though, we are really, truly right. Right? But, honestly, I like the term, and I think it has a ring of truth to it. Cultists have a goal that's beyond them. We do too. My life isn't about my preferences (I can change those), it's about my goals. I can change those too, of course, but if I'm rational (and nice) about it, I feel that it's hard not to end up wanting to help other people.
Okay, so I need a goal. Let's start from the beginning:
What is truth?
Reality is truth. It's what happens. It's the rules that dictate what happens. It's the invisible territory. It's the thing that makes you feel surprised.
(Okay, great, I won't have to go back to reading Greek philosophy.)
How do we discover truth?
So far, the best method has been the scientific method. It has also proved itself over and over again by providing actual tangible results.
(Fantastic, I won't have to reinvent the thousands of years of progress.)
Soon enough, humans will make a fatal mistake.
This isn't a question, it's an observation. Technology is advancing on all fronts to the point where it can be used on a planetary (and wider) scale. Humans make mistakes. Making a mistake with something that affects the whole world could result in injury or death... for the planet (and potentially beyond).
That's bad.
To be honest, I don't have a strong visceral negative feeling associated with all humans becoming extinct. It doesn't feel that bad, but then again I know better than to trust my feelings on such a scale. However, if I had to simply push a button to make one person's life significantly better, I would do it. And I would keep pushing that button for each new person. For something like 222 years, by my rough calculations. Okay, then. Humanity injuring or killing itself would be bad, and I can probably spend a century or so trying to prevent that, while also doing something that's a lot more fun than mashing a button.
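A rough reconstruction of that 222-year figure, assuming one button press per second, no breaks, and roughly 7 billion people with one press each:

$$\frac{7 \times 10^{9}\ \text{presses}}{3.15 \times 10^{7}\ \text{seconds per year}} \approx 222\ \text{years}$$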
We need a smart safety net.
Not only smart enough to know that triggering an atomic bomb inside a city is bad, or that you get the grandma out of a burning building by teleporting her in one piece to a safe spot, but also smart enough to know that if I keep snoozing every day for an hour or two, I'd rather someone stepped in and stopped me, no matter how much I want to sleep JUST FIVE MORE MINUTES. It's something I might actively fight, but it's something that I'll be grateful for later.
FAI
There it is: the ultimate safety net. Let's get to it?
Having FAI will be very very good, that's clear enough. Getting FAI wrong will be very very bad. But there are different levels of bad, and, frankly, a universe tiled with paperclips is actually not that high on the list. Having an AI that treats humans as special objects is very dangerous. An AI that doesn't care about humans will not do anything to humans specifically. It might borrow a molecule, or an arm or two from our bodies, but that's okay. An AI that treats humans as special, yet is not Friendly, could be very bad. Imagine 3^^^3 different people being created and forced to live really horrible lives. It's hell on a whole other level. So, if FAI goes wrong, pure destruction of all humans is a pretty good scenario.
Should we even be working on FAI? What are the chances we'll get it right? (I remember Anna Salamon's comparison: "getting FAI right" is like "trying to make the first atomic bomb explode in the shape of an elephant" would have been a century ago.) What are the chances we'll get it horribly wrong and end up in hell? By working on FAI, how are we changing the probability distribution over various outcomes? Perhaps a better alternative is to seek a decisive advantage like brain uploading, where a few key people can take a century or so to think the problem through?
I keep thinking about FAI going horribly wrong, and I want to scream at the people who are involved with it: "Do you even know what you are doing?!" Everything is at stake! And suddenly I care. Really care. There is curiosity, yes, but it's so much more than that. At LW minicamp we compared curiosity to a cat chasing a mouse. It's a kind of fun, playful feeling. I think we got it wrong. The real curiosity feels like hunger. The cat isn't chasing the mouse to play with it; it's chasing it to eat it because it needs to survive. Me? I need to know the right answer.
I finally understand why SIAI isn't focusing very hard on the actual AI part right now, but is instead pouring most of its efforts into recruiting talent. The next 50-100 years are going to be a marathon for our lives. Many participants might not make it to the finish line. It's important that we establish a community that can continue to carry the research forward until we succeed.
I finally understand why, when I was talking with Carl Shulman about making games that help people be more rational, his value metric was how many academics they could impact/recruit. That didn't make sense to me. I just wanted to raise the sanity waterline for people in general. I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. Another is to make certain key people a bit more sane, hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.
I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of maybe tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.
I've realized a lot of things lately. A lot of things have been shaken up. It has been a very stressful couple of days. I'll have to re-answer the question I asked myself not too long ago: What should I be doing? And this time, instead of hoping for an answer, I'm afraid of the answer. I'm truly and honestly afraid. Thankfully, I can fight pushing a lot better than pulling: fear is easier to fight than passion. I can plunge into the unknown, but it breaks my heart to put aside a very interesting and dear life path.
I've never felt more afraid, more ready to fall into a deep depression, more ready to scream and run away, retreat, abandon logic, go back to the safe comfortable beliefs and goals. I've spent the past 10 years making games and getting better at it. And just recently I've realized how really really good I actually am at it. Armed with my rationality toolkit, I could probably do wonders in that field.
Yet, I've also never felt more ready to make a step of this magnitude. Maximizing utility, all the fallacies, biases, defense mechanisms, etc, etc, etc. One by one they come to mind and help me move forward. Patterns of thoughts and reasoning that I can't even remember the name of. All these tools and skills are right here with me, and using them I feel like I can do anything. I feel that I can dodge bullets. But I also know full well that I am at the starting line of a long and difficult marathon. A marathon that has no path and no guides, but that has to be run nonetheless.
May the human race win.