This is a very personal account of thoughts and events that have led me to a very interesting point in my life. Please read it as such. I present a lot of points, arguments, conclusions, etc., but that's not what this is about.
I started reading LW around spring of 2010. I was at the rationality minicamp last summer (2011). On the night of February 10, 2012, all the rationality learning and practice finally caught up with me. Like water that has been building up behind a dam, it finally broke through and flooded my poor brain.
"What if the Bayesian Conspiracy is real?" (By Bayesian Conspiracy I just mean a secret group that operates within and around LW and SIAI.) That is the question that set it all in motion. "Perhaps they left clues for those who are smart enough to see them. And to see those clues, you would actually have to understand and apply everything that they are trying to teach." The chain of thoughts that followed (conspiracies within conspiracies, shadow governments and Illuminati) is too ridiculous to repeat, but it all ended with one simple question: How do I find out for sure? And that's when I realized that almost all the information I have has been accepted without so much as an ounce of verification. So little of my knowledge has been tested in the real world. In that moment I achieved a sort of enlightenment: I realized I don't know anything. I felt a dire urge to regress to the very basic questions: "What is real? What is true?" And then I laughed, because that's exactly where The Sequences start.
Through the turmoil of jumbled and confused thoughts came a shock of my most valuable belief propagating through my mind, breaking down final barriers, reaching its logical conclusion. FAI is the most important thing we should be doing right now! I already knew that. In fact, I knew that for a long time now, but I didn't... what? Feel it? Accept it? Visualize it? Understand the consequences? I think I didn't let that belief propagate to its natural conclusion: I should be doing something to help this cause.
I can't say: "It's the most important thing, but..." Yet, I've said it so many times inside my head. It's like hearing other people say: "Yes, X is the rational thing to do, but..." What follows is a defense that allows them to keep the path to their goal that they are comfortable with, that they are already invested in.
Interestingly enough, I've already thought about this. Right after rationality minicamp, I asked myself the question: Should I switch to working on FAI, or should I continue to make games? I thought about it heavily for some time, but I felt like I lacked the necessary math skills to be of much use on the FAI front. Making games was the convenient answer. It's something I've been doing for a long time, and it's something I am good at. I decided to make games that explain various ideas that LW presents in text. This way I could help raise the sanity waterline. It seemed like a very nice, neat solution that allowed me to do what I wanted and feel a bit helpful to the FAI cause.
Looking back, I was dishonest with myself. In my mind, I had already written the answer I wanted. I convinced myself that I hadn't, but part of me certainly sabotaged the whole process. But that's okay, because I was still somewhat helpful, even if maybe not in the most optimal way. Right? Right?? The correct answer is "no". So, now I have to ask myself again: What is the best path for me? And to answer that, I have to understand what my goal is.
Rationality doesn't just help you get what you want better/faster. Increased rationality starts to change what you want. Maybe you wanted the air to be clean, so you bought a hybrid. Sweet. But then you realized that what you actually want is for people to be healthy. So you became a nurse. That's nice. Then you realized that if you did research, you could be making an order of magnitude more people healthy. So you went into research. Cool. Then you realized that you could pay for multiple researchers if you had enough money. So you went out, became a billionaire, and created your own research institute. Great. There was always you, and there was your goal, but everything in between was (and should be) up for grabs.
And if you follow that kind of chain long enough, at some point you realize that FAI is actually the thing right before your goal. Why wouldn't it be? It solves everything in the best possible way!
People joke that LW is a cult. Everyone kind of laughs it off. It's funny because cultists are weird and crazy, but they are so sure they are right. LWers are kind of like that. Unlike other cults, though, we are really, truly right. Right? But, honestly, I like the term, and I think it has a ring of truth to it. Cultists have a goal that's beyond them. We do too. My life isn't about my preferences (I can change those), it's about my goals. I can change those too, of course, but if I'm rational (and nice) about it, I feel that it's hard not to end up wanting to help other people.
Okay, so I need a goal. Let's start from the beginning:
What is truth?
Reality is truth. It's what happens. It's the rules that dictate what happens. It's the invisible territory. It's the thing that makes you feel surprised.
(Okay, great, I won't have to go back to reading Greek philosophy.)
How do we discover truth?
So far, the best method has been the scientific principle. It has also proven itself over and over again by providing actual tangible results.
(Fantastic, I won't have to reinvent the thousands of years of progress.)
Soon enough humans will commit a fatal mistake.
This isn't a question, it's an observation. Technology is advancing on all fronts to the point where it can be used on a planetary (and wider) scale. Humans make mistakes. Making a mistake with something that affects the whole world could result in injury or death... for the planet (and potentially beyond).
To be honest, I don't have a strong visceral negative feeling associated with all humans becoming extinct. It doesn't feel that bad, but then again I know better than to trust my feelings on such a scale. However, if I had to simply push a button to make one person's life significantly better, I would do it. And I would keep pushing that button for each new person. For something like 222 years, by my rough calculations. Okay, then. Humanity injuring or killing itself would be bad, and I can probably spend a century or so trying to prevent that, while also doing something that's a lot more fun than mashing a button.
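(For the curious, that "222 years" figure checks out as a back-of-the-envelope calculation. A minimal sketch, assuming a world population of roughly 7 billion and one button press per second with no breaks; both numbers are assumptions, not from the original text:)

```python
# Sanity check of the "222 years" figure: one button press per second,
# one press for each living person.
population = 7_000_000_000          # assumed world population circa 2012
seconds_per_year = 365.25 * 24 * 60 * 60  # Julian year in seconds

years = population / seconds_per_year
print(f"{years:.0f} years")  # prints "222 years"
```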
We need a smart safety net.
Not only smart enough to know that triggering an atomic bomb inside a city is bad, or that you get the grandma out of a burning building by teleporting her in one piece to a safe spot, but also smart enough to know that if I keep snoozing every day for an hour or two, I'd rather someone stepped in and stopped me, no matter how much I want to sleep JUST FIVE MORE MINUTES. It's something I might actively fight, but it's something that I'll be grateful for later.
There it is: the ultimate safety net. Let's get to it?
Having FAI will be very, very good, that's clear enough. Getting FAI wrong will be very, very bad. But there are different levels of bad, and, frankly, a universe tiled with paperclips is actually not that high on the list. Having an AI that treats humans as special objects is very dangerous. An AI that doesn't care about humans will not do anything to humans specifically. It might borrow a molecule, or an arm or two from our bodies, but that's okay. An AI that treats humans as special, yet is not Friendly, could be very bad. Imagine 3^^^3 different people being created and forced to live really horrible lives. It's hell on a whole other level. So, if FAI goes wrong, pure destruction of all humans is a pretty good scenario.
Should we even be working on FAI? What are the chances we'll get it right? (I remember Anna Salamon's comparison: "getting FAI right" is like "trying to make the first atomic bomb explode in a shape of an elephant" would have been a century ago.) What are the chances we'll get it horribly wrong and end up in hell? By working on FAI, how are we changing the probability distribution for various outcomes? Perhaps a better alternative is to seek a decisive advantage like brain uploading, where a few key people can take a century or so to think the problem through?
I keep thinking about FAI going horribly wrong, and I want to scream at the people who are involved with it: "Do you even know what you are doing?!" Everything is at stake! And suddenly I care. Really care. There is curiosity, yes, but it's so much more than that. At LW minicamp we compared curiosity to a cat chasing a mouse. It's a kind of fun, playful feeling. I think we got it wrong. The real curiosity feels like hunger. The cat isn't chasing the mouse to play with it; it's chasing it to eat it because it needs to survive. Me? I need to know the right answer.
I finally understand why SIAI isn't focusing very hard on the actual AI part right now, but is instead pouring most of their efforts into recruiting talent. The next 50-100 years is going to be a marathon for our lives. Many participants might not make it to the finish line. It's important that we establish a community that can continue to carry the research forward until we succeed.
I finally understand why, when I was talking with Carl Shulman about making games that help people be more rational, his value metric was how many academics it could impact/recruit. That didn't make sense to me. I just wanted to raise the sanity waterline for people in general. I think when LWers say "raise the sanity waterline," there are two ideas being presented. One is to make everyone a little bit more sane. That's nice, but overall probably not very beneficial to the FAI cause. The other is to make certain key people a bit more sane, hopefully sane enough to realize that FAI is a big deal, and sane enough to make some meaningful progress on it.
I finally realized that when people were talking about donating to SIAI during the rationality minicamp, most of us (certainly myself) were thinking of maybe tens of thousands of dollars a year. I now understand that's silly. If our goal is truly to make the most money for SIAI, then the goal should be measured in billions.
I've realized a lot of things lately. A lot of things have been shaken up. It has been a very stressful couple of days. I'll have to re-answer the question I asked myself not too long ago: What should I be doing? And this time, instead of hoping for an answer, I'm afraid of the answer. I'm truly and honestly afraid. Thankfully, I can fight pushing a lot better than pulling: fear is easier to fight than passion. I can plunge into the unknown, but it breaks my heart to put aside a very interesting and dear life path.
I've never felt more afraid, more ready to fall into a deep depression, more ready to scream and run away, retreat, abandon logic, go back to the safe, comfortable beliefs and goals. I've spent the past 10 years making games and getting better at it. And just recently I've realized how really, really good I actually am at it. Armed with my rationality toolkit, I could probably do wonders in that field.
Yet, I've also never felt more ready to make a step of this magnitude. Maximizing utility, all the fallacies, biases, defense mechanisms, etc, etc, etc. One by one they come to mind and help me move forward. Patterns of thoughts and reasoning that I can't even remember the name of. All these tools and skills are right here with me, and using them I feel like I can do anything. I feel that I can dodge bullets. But I also know full well that I am at the starting line of a long and difficult marathon. A marathon that has no path and no guides, but that has to be run nonetheless.
May the human race win.
Over the past few months I've been doing a lot of reading about cryonics, and though I agree with the arguments of Eliezer Yudkowsky and Robin Hanson on the issue, I still feel uncomfortable about actually signing up. Upon reflection, my true rejection is my fear of the social cost of cryonics, i.e. being perceived as weird and completely incomprehensible by everyone around me. I've read the "Hostile Wife Phenomenon" article on Depressed Metabolism, the New York Times Magazine article on Robin Hanson's personal situation (as well as Robin's reply), and scores of comments on LessWrong, and it looks like a lot of cryonicists do indeed experience the feeling that Eliezer describes in Lonely Dissent.
My concerns about the social cost of cryonics can be broken down into two categories:
- Loss of existing relationships with family, friends, etc. I value the relationships I currently have with my family and friends, and signing up for cryonics would jeopardize many of these relationships. Most of my friends and family members are not interested in rationality and would be completely baffled if I decided to sign up. Nonetheless, I do not want to lose these relationships, as they are currently an important part of my life; I would consider my life to be significantly worse than it is now if I had to sever a lot of these emotional ties.
- Increased difficulty of forming relationships in the future. I'm not particularly good at forming new relationships, and I'm very worried that signing up for cryonics will create an insurmountable social stigma that will make it nearly impossible for me to do so.
Overall, though, I have very little information about what the social cost of cryonics really is beyond a few scattered anecdotes and secondhand descriptions of cryonicists' lives. Ultimately, I don't really know how many of my fears would actually be realized if I signed up. This makes it difficult for me to make a decision, as I am very risk-averse and I feel reluctant to choose something that could potentially make the next six or seven decades of my life miserable. As a result, I have decided to engage in some data collection.
To do so, I would like to hear about your experiences. If you are currently signed up for cryonics, I would very much appreciate it if you took a minute or two to describe the effects that signing up has had on your relationships and your social life in general. If you are not signed up, your feedback on this topic is still welcome. Links to articles would be good, but discussion of personal experiences would be better.
There are people out there who want to do good in the world, but don't know how.
Maybe you are one of them.
Maybe you kind of feel that you should be into the "saving the world" stuff but aren't quite sure if it's for you. You'd have to be some kind of saint, right? That doesn't sound like you.
Maybe you really do feel it's you, but don't know where to start. You've read the "How to Save the World" guide and your reaction is: okay, I get it, now where do I start? A plan that starts with "first, change your entire life" somehow doesn't sound like a very good plan.
All the guides on how to save the world, all the advice, all the essays on why cooperation is so hard, everything I've read so far, has missed one fundamental point.
If I could put it into words, it would be this:
AAAAAAAAAAAGGGHH WTF CRAP WHERE DO I START EEK BLURFBL
If that's your reaction then you're halfway there. That's what you get when you finally grasp how much pointless pain, misery, risk, and death there is in the world; just how much good could be done if everyone would get their act together; just how little anyone seems to care.
If you're still reading, then maybe this is you. A little bit.
And I want to help you.
How will I help you? That's the easy part. I'll start a community of aspiring rationalist do-gooders. If I can, I'll start it right here in the comments section of this post. If anything about this post speaks to you, let me know. At this point I just want to know whether there's anybody out there.
And what then? I'll listen to people's opinions, feelings and concerns. I'll post about my worldview and invite people to criticize, attack, tear it apart. Because it's not my worldview I care about. I care about making the world better. I have something to protect.
The posts will mainly be about what I don't see enough of on Less Wrong. About reconciling being rational with being human. Posts that encourage doing rather than thinking. I've had enough ideas that I can commit to writing 20 discussion posts over a reasonable timescale, although some might be quite short - just single ideas.
Someone mentioned there should be a "saving the world wiki". That sounds like a great idea and I'm sure that setting one up would be well within my power if someone else doesn't get around to it first.
But how I intend to help you is not the important part. The important part is why.
To answer that I'll need to take a couple of steps back.
Since basically forever, I've had vague, guilt-motivated feelings that I ought to be good. I ought to work towards making the world the place I wished it would be. I knew that others appeared to do good for greedy or selfish reasons; I wasn't like that. I wasn't going to do it for personal gain.
If everyone did their bit, then things would be great. So I wanted to do my bit.
I wanted to privately, secretively, give a hell of a lot of money to a good charity. So that I would be doing good and that I would know I wasn't doing it for status or glory.
I started small. I gave small amounts to some big-name charities, charities I could be fairly sure would be doing something right. That went on for about a year, with not much given in total - I was still building up confidence.
And then I heard about GiveWell. And I stopped giving. Entirely.
WHY??? I can't really give a reason. But something just didn't seem right to me. People who talked about GiveWell also tended to mention that the best policy was to give only to the charity listed at the top. And that didn't seem right either. I couldn't argue with the maths, but it went against what I'd been doing up until that point and something about that didn't seem right.
Also, I hadn't heard of GiveWell or any of the charities they listed. How could I trust any of them? And yet how could I give to anyone else if these charities were so much more effective? Big akrasia time.
It took a while to sink in. But when it did, I realised that my life so far had mostly been a waste of time. I'd earned some money, but I had no real goals or ambitions. And yet, why should I care if my life so far had been wasted? What I had done in the past was irrelevant to what I intended to do in the future. I knew what my goal was now, and from that a whole lot became clear.
One thing mattered most of all. If I was to be truly virtuous, altruistic, world-changing, then I shouldn't deny myself status or make financial sacrifices; I should be completely indifferent to those things. And from that the plan became clear: the best way to save the world would be to persuade other people to do it for me. I'm still not entirely sure why they're not already doing it, but I will use the typical mind prior and assume that for some at least, it's for the same reasons as me: they're confused. And so, to carry out my plan, I won't need to manipulate anyone into carrying out my wishes, but simply help them carry out their own.
I could say a lot more and I will, but for now I just want to know. Who will be my ally?
I have encountered personally in conversations, and also observed in the media over the past couple of decades, a great deal of skepticism, scorn, and ridicule, if not merely indifference or dismissal, from many people in reaction to the est training, which I completed in 1983, and the Myers-Briggs Type Indicator tool, which I first took in 1993 or 1994. I would like to share some concrete examples from my own life where information and perspective that I gained from these two sources have improved my life, both in my own way of conceptualizing and approaching things, and also in my relationships with others. I do this with the hope and intention of showing that est and MBTI have positive value, and encouraging people to explore these and other tools for personal growth.
One important insight that I gained from the est training is the understanding, and the experience, that I am not my opinions, and my opinions are not me. Opinions are neutral things. They may be something I hold, or agree with, but I can separate my self from them; I can discuss them, and I can change or discard them, and I am still the same "me". I am not more or less "myself" in relation to what I think or believe. Before I did the est training, whenever someone would question an opinion I held, I felt personally attacked. I identified my self with my opinion or belief. My emotional response to attack, like that of many other people, is to defend and/or to retreat, so when I perceived my "self" being "attacked", I gave in to the standard fight-or-flight response, and therefore I did not get the opportunity to explore the opinion in question to see if the person who questioned me had some important new information or a perspective that I had not previously considered. It is not that I always remember this or that it is my first response, but once I notice myself responding in the old way, I can then take that step back and remember the separation between self and opinion. That choice is now available to me, where it wasn't before. When I find myself in conversations with another person or people who disagree with me, my response now is to draw them out, to ask them about what they believe and why they believe it. I regard myself as if I were a reporter on a fact-finding mission. I step back and I do not feel attacked. Sometimes I learn from this, and other times I do not, but I no longer feel attacked, and I find that I can more easily become friends with people even if we have disagreements. That was not the case for me prior to doing est.
Another valuable tool that I got from est and still use in my life is the ability to accept responsibility without attaching blame to it, even if someone is trying to heap blame upon me. This is similar to what I said above about not identifying my self with what I think. I do not have to feel or think of myself as a "bad person" because I made a mistake. I have come to the belief that guilt is an emotion that I need not wallow in. If I feel guilt about doing or not doing something, saying or not saying something, I take that feeling of guilt as a sign that I either need to take some action to rectify the situation, and/or I need to apologize to someone about it, and/or I need to learn from the situation so that hopefully I will not repeat it, and then forgive myself and move on. Hanging on to guilt is something I see many people doing, and it not only holds them up and blocks them off from taking action, but they often pull that feeling in and create a scenario or self-definition that involves beating themselves up about it, or they wallow around in feeling guilty in a way that serves as a self-indulgent excuse for not improving things. "I'm so awful, I'm such a screw-up, I can't do anything right." That kind of negative self-esteem can affect a person for their entire life if they allow it to. There are many ways to come to these realizations, and I make no claim that est is some kind of "cure-all". One of the characters on the TV show "SOAP" called est "The McDonald's of Psychiatry". That's amusing, but it denigrates a very useful and powerful experience. I believe in an eclectic approach to life. I look at many things, explore many ideas and experiences, and I take what works and leave the rest. est is only one of many helpful experiences I have had in my 49 years.
I took the Myers-Briggs Type Indicator at a science fiction convention in the early years of my marriage, when I was living in Alexandria, VA, in 1993 and 1994. It was given as part of a panel, and I also took it again when I read "Do What You Are", which is a book about finding employment/a profession based on your MBTI personality type. The basics, if you have not encountered the MBTI before, are: there are four "continuums" in how people tend to interact with the world. Most people use both sides of each continuum, but are most comfortable on one side. The traits are Extrovert/Introvert, Sensing/Intuiting, Thinking/Feeling, and Judging/Perceiving. (The use of these words in the MBTI context is not exactly the same as their dictionary definitions.) I am a strong ENFP. My husband was an ISTP. Understanding the differences between how we approached the world was very helpful to me in learning why we were so different about socializing with other people, and about our communication style with each other. As an "I", John (as they put it in the book) "got his batteries charged" mostly by being alone. I, as an "E", got mine charged by being with other people. We went to conventions and parties, but he often wanted to leave well before I felt ready to go. Once we had two cars, we would each take our own to events. Even though I felt it wasted gas, it gave him the opportunity to "flee" once he had had enough of being with others, while I could then come home at my leisure, and neither of us had to give up on what made us happier and more comfortable. It also explained why he would not always respond immediately to a question. "I" people tend to figure out in their own mind first what they want to say before they say anything aloud. "E" people often start talking right away, and as they speak, what they think becomes clearer to them. This is also a very useful data point for teachers.
If they know about it, they can realize that the "I" kids need more time to come up with their answers, while the "E" kids put their hands in the air more immediately. They can then allow the "I" kids the time they need to respond to questions without thinking they are not good students, or are not as intelligent or knowledgeable as the "E" kids are.
My boyfriend is an ENTJ. The source of some of the friction in our relationship became clear to me after I asked him to find out his Myers-Briggs type, which he had never done before. Gerry often asks me to give him a list of what I want to do in the course of my day, and how much time things will take. These are reasonable requests. However, the rub comes from the fact that as a "J", he is uncomfortable not knowing the answer to these things. I, as a "P", am uncomfortable stating these things in advance, in nailing things down. I prefer to leave things open-ended. He regarded what I said as more concrete, whereas I regarded it more as a guideline, but not a definite plan or promise. In addition, I have always had a hard time judging how long things will take, and as a person with ADD, I also get distracted easily, so it was making me upset when he would come home and ask me what I'd gotten done, and then he would get upset when I hadn't done what I had said I wanted to, or if things took longer than I said they would. Understanding the differences in our types has helped me to understand more about why this has been an area of friction. That leaves room for us to discuss it without feeling the need to blame each other for our preferred method of dealing with things. I feel clearer about stating goals for the day, but not necessarily promising to do specific things, and working on figuring out how to allocate enough time for things. He understands that just because I tell him what I would like to do, it is not necessarily what I will end up doing. It's still a work in progress.
I want to be clear that I am not talking about using the types as excuses to get out of doing things, or for taking what other people feel is "too long" to get things done. It's merely another "tool in my tool box" that helps me to process how I and my loved ones function, and to figure out how to improve.
I am curious to know how other people feel about their experiences, if they have done a personal growth seminar such as est and/or taken the MBTI, if they feel that they have also taken tools from those experiences that have had an ongoing positive impact on their lives and relationships. I look forward to hearing what people have to say in response to this article.
So after reading SarahC's latest post I noticed that she's gotten a lot out of rationality.
More importantly, she got different things out of it than I have.
Off the top of my head, I've learned...
- that other people see themselves differently, and should be understood on their terms (mostly from here)
- that I can pay attention to what I'm doing, and try to notice patterns to make intervention more effective.
- the whole utilitarian structure of having a goal that you take actions to achieve, coupled with the idea of an optimization process. It was really helpful to me to realize that you can do whatever it takes to achieve something, not just what has been suggested.
- the importance/usefulness of dissolving the question/how words work (especially great when combined with previous part)
- that an event is evidence for something, not just what I think it can support
- to pull people in, don't force them. Seriously that one is ridiculously useful. Thanks David Gerard.
- that things don't happen unless something makes them happen.
- that other people are smart and cool, and often have good advice
Where she got...
- a habit of learning new skills
- better time-management habits
- an awesome community
- more initiative
- the idea that she can change the world
I've only recently started making a habit out of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?
What cool/important/useful things has rationality gotten you?