This is the final post in my productivity sequence.
The first post described what I achieved. The next three posts describe how. This post describes why, explaining the sources of my passion and the circumstances that convinced a young Nate to try and save the world. Within, you will find no suggestions, no techniques to emulate, no new ideas to ponder. This is a rationalist coming-of-age story. With luck, you may find it inspiring. Regardless, I hope you can learn from my mistakes.
Never fear, I'll be back to business soon — there's lots of studying to do. But before then, there's a story to tell, a memorial to what I left behind.
I was raised Catholic. On my eighth birthday, having received my first communion about a year prior, I casually asked my priest how to reaffirm my faith and do something for the Lord. The memory is fuzzy, but I think I donated a chunk of allowance money and made a public confession at the following mass.
A bunch of the grownups made a big deal out of it, as grownups are wont to do. "Faith of a child", and all that. This confused me, especially when I realized that what I had done was rare. I wasn't trying to get pats on the head, I was appealing to the Lord of the Heavens and the Earth. Were we all on the same page, here? This was the creator. He was infinitely virtuous, and he had told us what to do.
And yet, everyone was content to recite hymns once a week and donate for the reconstruction of the church. What about the rest of the world, the sick, the dying? Where were the proselytizers, the missionary opportunities? Why was everyone just sitting around?
On that day, I became acquainted with civilizational inadequacy. I realized you could hand a room full of people the literal word of God, and they'd still struggle to pay attention for an hour every weekend.
This didn't shake my faith, mind you. It didn't even occur to me that the grownups might not actually believe their tales. No, what I learned that day was that there are a lot of people who hold beliefs they aren't willing to act upon.
Eventually, my faith faded. The distrust remained.
Gaining Confidence
I grew up in a small village, population ~1200. My early education took place in a one-room schoolhouse. The local towns eventually rolled all their school districts into one, but even then, my graduating class barely broke 50 people. It wasn't difficult to excel.
Ages twelve and thirteen were rough — that was right after they merged school districts, and those were the years I was first put a few grades ahead in math classes. I was awkward and underconfident. I felt estranged and lonely, and it was easy to get shoehorned into the "smart kid" stereotype by all the new students.
Eventually, though, I decided that the stereotype was bogus. Anyone intelligent should be able to escape such pigeonholing. In fact, I concluded that anyone with real smarts should be able to find their way out of any mess. I observed the confidence possessed by my peers, even those who seemed to have no reason for confidence. I noticed the ease with which they engaged in social interactions. I decided I could emulate these.
I faked confidence, and it soon became real. I found that my social limitations had been largely psychological, and that the majority of my classmates were more than willing to be friends. I learned how to get good grades without alienating my peers. It helped that I tended to buck authority (I was no "teacher's pet") and that I enjoyed teaching others. I had a knack for pinpointing misunderstandings and was often able to teach better than the teachers could — as a peer, I could communicate on a different level.
I started doing very well for myself. I got excellent grades with minimal effort. I overcame my social anxieties. I had a few close friends and was on good terms with most everyone else. I participated in a number of extracurriculars where I held high status. As you may imagine, I grew quite arrogant.
In retrospect, my accomplishments were hardly impressive. At the time, though, it felt like everyone else wasn't even trying. It became apparent that if I wanted something done right, I'd have to do it myself.
Shattered Illusions
Up until the age of fourteen I had this growing intuition that you can't trust others to actually get things done. This belief didn't become explicit until the end of ninth grade, when I learned how the government of the United States of America actually works.
Allow me to provide a few pieces of context.
For one thing, I was learning to program computers at the time. I had been programming for maybe a year and a half, and I was starting to form concepts of elegance and minimalism. I had a belief that the best design is a small design, a design forced by nature at every step along the way, a design that requires no arbitrary choices.
For another thing, my religion had died not with a bang, but with a whimper. I'd compartmentalized it, and it had slowly withered away. I didn't Believe any more, but I didn't mind that others did. It was a happy fantasy, a social tool. Just as children are allowed to believe in Santa Claus, grownups were allowed to believe in Gods.
The government, though, was a different matter altogether. I assumed that a lot of very smart people had put a lot of effort into its design — that's what the "Founding Fathers" meme implied, anyway. But maybe it wasn't even that. Maybe I just possessed an unspoken, unchallenged belief that the grownups knew what they were doing, at least at the very highest levels. This was the very fabric of society itself: surely it was meticulously calibrated to maximize human virtue, to protect us from circumstance and evil.
When I was finally told how the US government worked, I couldn't believe my ears. It was a mess. An arbitrary, clunky monstrosity full of loopholes a child could abuse. I could think of a dozen improvements off the top of my head.
To give you an idea of how my teenaged mind worked, it was immediately clear to me that any first-order "improvements" suggested by naïve ninth-graders would have unintended negative consequences. Therefore, improvement number one involved redesigning the system to make it easy to test many different improvements in parallel, adding machinery to adopt the improvements that were actually shown to work.
Yet even these simple ideas were absent in the actual system. Corruption and inefficiency ran rampant. Worse, my peers didn't seem particularly perturbed: they took the system as a given, and merely memorized the machinery for long enough to pass a test. Even the grownups were apathetic: they bickered over who should have power within the system, never suggesting we should alter the system itself.
My childhood illusions fell to pieces. I realized that nothing was meticulously managed, that the smartest people weren't in control, making sure that everything was optimal. All the world problems, the sicknesses and the injustices and the death: these weren't necessary evils, they were a product of neglect. The most important system of all was poorly coordinated, bloated, and outdated — and nobody seemed to care.
Deciding to Save the World
This is the context in which I decided to save the world. I wasn't as young and stupid as you might think — I didn't believe I was going to save the world. I just decided to. The world is big, and I was small. I knew that, in all likelihood, I'd struggle ineffectually for decades and achieve only a bitter, cynical adulthood.
But the vast majority of my peers hadn't made it as far as I had. Even though a few were sympathetic, there was simply no way we could change things. It was outside of our control.
The adults were worse. They smiled, they nodded, they commended my critical thinking skills. Then they went back to what they were doing. A few of them took the time to inform me that it's great to want to change the world and all, but eventually I'd realize that the best way to do that was to settle down and be a teacher, or run a church, or just be kind to others.
I wasn't surprised. I already knew it was rare for people to actually try and fix things.
I had youthful idealism, I had big ambitions, but I knew full well that I didn't actually have a chance. I knew that I wouldn't be able to single-handedly redesign the social contract, but I also knew that if everyone who made it as far as I did gave up just because changing the world is impossible, then the world would never change.
If everybody was cowed by the simple fact that they can't succeed, then that one-in-a-million person who can succeed would never take their shot.
So I was sure as hell going to take mine.
Broadening Scope
Mere impossibility was never a hurdle: The Phantom Tollbooth saw to that at a young age. When grownups say you can't do something, what they mean is that they can't do it. I spent time devising strategies to get leverage and push governments out of their stagnant state and into something capable of growth.
In 2005, a teacher to whom I'd ranted introduced me to another important book: Ishmael. It wasn't the ideas that stuck with me — I disagreed with a few at the time, and I now disagree with most. No, what this book gave me was scope. This author, too, wished to save the world, and the breadth of his ideas exceeded my own. This book gave me no answers, but it gave me better questions.
Why merely hone the government, instead of redesigning it altogether?
More importantly: What sort of world are you aiming for?
"So you want to be an idealist?", the book asked. "Very well, but what is your ideal?"
I refocused, looking to fully define the ideals I strove for in a human social system. I knew I wouldn't be able to institute any solution directly, but I also knew that pushing governments would be much easier if I had something to push them towards.
After all, the Communist Manifesto changed the world, once.
This became my new goal: distill an ideal social structure for humans. The problem was insurmountable, of course, but this was hardly a deterrent. I was bright enough to understand truisms like "no one system will work for everybody" and "you're not perfect enough to get this right", but these were no trouble. I didn't need to directly specify an ideal social structure: a meta-structure, an imperfect system that ratchets towards perfection, a system that is optimal in the limit, would be fine by me.
From my vantage point, old ideas like communism and democracy soon seemed laughable. Interesting ideas in their time, perhaps, but obviously doomed to failure. It's easy to build a utopia when you imagine that people will set aside their greed and overcome their apathy. But those aren't systems for people: People are greedy, and people are apathetic. I wanted something that worked — nay, thrived — when populated by actual humans, with all their flaws.
I devoted time and effort to research and study. This was dangerous, as there was no feedback loop. As soon as I stepped beyond the achievements of history, there was no way to actually test anything I came up with. Many times, I settled on one idea for a few months, mulling it over, declaring it perfect. Time and again, I later found a fatal flaw, a piece of faulty reasoning, and the whole thing came tumbling down. After many cycles, I noticed that the flaws were usually visible in advance. I became cognizant of the fact that I'd been glossing over them, ignoring them, explaining them away.
I learned not to trust my own decrees of perfection. I started monitoring my thought processes very closely. I learned to notice the little ghosts of doubt, to address them earlier and more thoroughly. (I became a staunch atheist, unsurprisingly.) This was, perhaps, the beginning of my rationalist training. Unfortunately, it was all self-directed. Somehow, it never occurred to me to read literature on how to think better. I didn't have much trust in psychological literature, anyway, and I was arrogant.
Communication Failures
It was during this period that I explicitly decided not to pursue math. I reasoned that in order to actually save the world, I'd need to focus on charisma, political connections, and a solid understanding of the machinery underlying the world's major governments. Upon graduating high school, I decided to go to a college in Washington D.C. and study political science. I double majored in Computer Science as a fallback plan, a way to actually make money as needed (and because I loved it).
I went into my Poli Sci degree expecting to learn about the mechanics of society. Amusingly enough, I didn't know that "Economics" was a field. We didn't have any econ classes in my tiny high school, and nobody had seen fit to tell me about it. I expected "Political Science" to teach me the workings of nations, including the world economy, but I quickly realized that it's about the actual politicians, the social peacocking, the façades. Fortunately, a required Intro to Econ class soon remedied the situation, and I quickly changed my major to Economics.
My ideas experienced significant refinement as I received formal training. Unfortunately, nobody would listen to them.
It's not that they were dismissed as childish idealism: I had graduated to larger problems. I'd been thinking long and hard about the problem for a few years, and I'd had some interesting insights. But when I tried to explain them to people, almost everyone had immediate adverse reactions.
I anticipated criticism, and relished the prospect. My ideas were in desperate need of an outside challenger. But the reactions of others were far worse than I anticipated.
Nobody found flaws in my logic. Nobody challenged my bold claims. Instead, they simply failed to understand. They got stuck three or four steps before the interesting points, and could go no further. I learned that most people don't understand basic economics or game theory. Many others were entrenched in blue-greensmanship and reflexively treated my suggestions as attacks. Aspiring politicians balked at the claim that Democracy, while perhaps an important step in our cultural evolution, can't possibly be the end of the line. Still others insisted that it's useless to discuss ideals, because they can never be achieved.
In short, I found myself on the far side of a wide inferential gap.
I learned that many people, after falling into the gap, were incapable of climbing out, no matter how slowly I walked them through the intervening steps. They had already passed judgement on the conclusion, and rejected my attempts to root out their misconceptions, becoming impatient before actually listening. I grew very cautious about whom I shared my ideas with, worrying that exposing people to them too quickly or in the wrong fashion would be a permanent setback.
I had a small number of friends who knew enough economics and other subjects to follow along and who wouldn't discard uncouth ideas outright. I began to value these people highly, as they were among the few who could actually put pressure on me, expose flaws in my reasoning, and help me come up with solutions.
Eventually, I had a few insights that I've yet to find in the literature, a few ideas that I still actually believe are important. You'll excuse me if I don't mention them here: there is a lot of inferential distance. Perhaps one day I'll write a sequence.
Even then, I could see no easy path to public support. Most people lacked the knowledge to understand my claims without effort, and lacked the incentive to put in the effort for some unproven boy.
Phase Two
Fortunately, I had other tricks up my sleeve.
I attempted three different tech startups. Two of them failed. The last was healthier, but we shut it down because the expected gains were lower than an industry salary. In the interim, I honed my programming skills and secured an industry job (I'm a software engineer at Google).
By the time I graduated, my ideas were largely refined and stable. I had settled upon a solid meta social system as an ideal to strive for, and I'm still fairly confident that it's a good one — one where the design is forced by nature at every step, one that requires no arbitrary choices, one that ratchets towards optimality. And even if the ideal was not perfect, the modern world is insane enough that even a small step towards a better-coordinated society would yield gigantic benefits.
The problem changed from one of refining ideas to one of convincing others.
It was clear that I couldn't spread my ideas by merely stating them, due to the inferential distance, so I started working on two indirect approaches in the hours after work.
The first was a book, which went back to my roots: simple, low-cost ideas for how to change the current system of government in small ways that could have large payoffs. The goal of this project was to shake people from the blue-green mindset, to convince them that we should stop bickering within the framework and consider modifying the framework itself. This book was meant to be the first in a series, in which I'd slowly build towards more radical suggestions.
The second project was designed to put people in a more rational frame of mind. I wanted people who could look past the labels and see the things, people who don't just memorize how the world works but see it as mutable, as something they can actually change. I wanted people that I could pull out of inferential gaps, in case they fell into mine.
Upon introspection, I realized that much of my ability came from a specific outlook on the world that I had at a young age. I had a knack for understanding what the teachers were trying to teach me, for recognizing and discarding the cruft in their statements. I saw many fellow students putting stock in historical accidents of explanation, whereas I found it easy to grasp the underlying concepts and drop the baggage. This ability to cull the cruft is important to understanding my grand designs.
This reasoning (and a few other desires, including a perpetual fascination with math and physics) led me to create simplifience, a website that promotes such a mindset.
It never made it to the point where I was comfortable publicizing it, but that hardly matters anymore. In retrospect, it's an unfinished jumble of rationality training, math explanations, and science enthusiasm. It's important in one key respect:
As I was writing simplifience, I did a lot of research for it. During this research, I kept stumbling upon web articles on this one website that articulated what I was trying to express, only better. That website was LessWrong, and those articles were the Sequences.
It took me an embarrassingly long time to actually pay attention. In fact, if you go to simplifience.com, you can watch as the articles grow more and more influenced by the Sequences. My exposure to them was patchy, centered around ideas that I'd already had. It took me a while to realize that I should read the rest of them, that I might learn new things that extended the ideas I'd figured out on my own.
It seemed like a good way to learn how to think better, to learn from someone who had had similar insights. I didn't even consider the possibility that this author, too, had some grand agenda. The idea that Eliezer's agenda could be more pressing than my own never even crossed my mind.
At this point, you may be able to empathize with how I felt when I first realized the importance of an intelligence explosion.
Superseded
It was like getting ten years' worth of wind knocked out of me.
I saw something familiar in the Sequences — the winding, meticulous explanations of someone struggling to bridge an inferential gap. I recognized the need to cover subjects that looked completely tangential to the actual point, just to get people to the level where they wouldn't reject the main ideas out-of-hand. I noticed the people falling to the side, debating issues two or three steps before the actual interesting problems. It was this familiar pattern, above all else, that made me actually pay attention.
Everything clicked. I was already thoroughly convinced of civilizational inadequacy. I had long since concluded that there's not much that can hold a strong intelligence down. I had a sort of vague idea that an AI would seek out "good" values, but such illusions were easily dispelled — I was a moral relativist. And the stakes were as high as stakes go. Artificial intelligence was a problem more pressing than my own.
The realization shook me to my core. It wasn't even the intelligence explosion idea that scared me, it was the revelation of a fatal flaw at the foundation of my beliefs. Poorly designed governments had awoken my fear that society can't handle coordination problems, but I never — not once in nearly a decade — stopped to consider whether designing better social systems was actually the best way to optimize the world.
I professed a desire to save the world, but had misunderstood the playing field so badly that existential risk had never even crossed my mind. Somehow, I had missed the most important problems, and they should have been obvious. Something was very wrong.
It was time to halt, melt, and catch fire.
This was one of the most difficult things I've done.
I was more careful, the second time around. The Sequences shook my foundations and brought the whole tower crashing down, but what I would build in its place was by no means a foregone conclusion.
I had been blind to all existential risks, not just AI risk, and there was a possibility that I had missed other features of the problem space as well. I was well aware of the fact that, having been introduced to AI risk by Eliezer's writings, I was biased towards his viewpoint. I didn't want to make the same mistake twice, to jump for the second big problem that crossed my path just because it was larger than the first. I had to start from scratch, reasoning from the beginning. I knew I must watch out for conjunction fallacies caused by nice narratives, arguments made from high stakes (Pascal's mugging), putting too much stock in inside views, and so on. I had to figure out how to actually save the world.
It took me a long time to deprogram, to get back to neutral. I considered carefully, accounting for my biases as best I could. I read a lot. I weighed the evidence. The process took many months.
By July of 2013, I came to agree with MIRI's conclusions.
Disclaimer
Writing it all out like this, I realize that I've failed to convey the feeling of it all. Depending upon whether you believe that I was actually able to come up with better ways to structure people, you may feel that I'm either pretty accomplished or extremely deluded. Perhaps both.
Really, though, it's neither. This raw story, which omits details from the rest of my life, paints a strange picture indeed. The intensity is distilled.
I was not a zealot, in practice. My attempts to save the world didn't bleed much into the rest of my life. I learned early on that this wasn't the sort of thing that most people enjoyed discussing, and I was wary of inferential gaps. My work was done in parallel with an otherwise normal life. Only a select few people were privy to my goals, my conclusions. The whole thing often felt disconnected from reality, just some unusual hobby. The majority of my friends, if they read this, will be surprised.
There are many holes in this summary, too. It fails to capture the dark spots. It omits the feelings of uncertainty and helplessness, the cycles of guilt at being unproductive followed by lingering depression, the wavering between staunch idealism and a conviction that my goals were nothing but a comfortable fantasy. It skips over the year I burned out, writing the whole idea off, studying abroad and building myself a healthier mental state before returning and picking everything back up.
Nothing in this summary describes the constant doubt about whether I was pursuing the best path or merely the easiest one. I've failed to mention my complete failure to network and my spectacular inability to find people who would actually take me seriously. It's hard to convey the fear that I was just pretending I wanted to save the world, just acting like I was trying, because that's the narrative that I wanted. How could someone 'smart' actually fail to find powerful friends if they were really trying for nine years?
I claim no glory: the journey was messy, and it was poorly executed. I tell the story in part because people have asked me where my passion comes from and how I became aligned with MIRI's mission. Mostly, though, I tell the story because it feels like something I have to tell before moving on. It feels almost dishonest to try to save the world in this new way without at least acknowledging that I walked another path, once.
The Source of My Passion
So to those of you wondering where my passion comes from, I answer this: it has always been there. It was a small flame, when I was young, and it was fed by a deep mistrust of society's capabilities and a strong belief that if anyone could matter, then I had better try.
From my perspective, I've been dedicating my energy towards 'saving the world' since first I realized that the world was in need of saving. This passion was not recently kindled, it was merely redirected.
There was a burst of productivity these past few months, after I refocused my efforts. I was given a new path, and on it the analogous obstacles have already been surmounted. MIRI has already spent years promoting that rational state of mind, bridging its inferential gap, finding people who can actually work on solving the problem instead of arguing about whether there is a problem to be solved. This was invigorating, like skipping ahead ten years in terms of where I wanted to be.
Alongside that, I felt a burning need to catch up. I was late to the party, and I had been foolish for a very long time. I was terrified that I wouldn't actually be able to help — that, after all my work, the most I'd be able to do to solve the big problems was earn to give. I'd have done it, because the actual goal is to save the world, not to satisfy Nate. But the idea scared me, and the desire to keep actively working on the big problems drove me forward.
In a way, too, everything got easier — I needed only to become good at logic and decision theory, to read a bunch of math textbooks, a task that was trivially measurable and joyfully easy compared to trying to convince the entire world to embrace strange, unpolished ideas.
All these factors contributed to my recent productivity. But the passion, the fervor, the desire to optimize the future — that has been there for a long time. People sometimes ask where I get my passion from, and I find it hard to answer.
We hold the entire future of the universe in our hands. Is that not justification enough?
I learned a long time ago that most people are content to accept the way things are. Everyone wants the world to change, but most are cowed by the fact that they can't change it themselves.
But if the chance that one person can save the world is one in a million, then there had better be a million people trying.
It is this knowledge — that the world will only be saved by people who actually try to save it — that drives me.
I still have these strange ideas, this pet inferential gap that I hope to bridge one day. It still hurts, that things important to me were superseded, but they were superseded, and it is better to know than to remain in the dark.
When I was fourteen, I saw many horrors laid out before us: war, corruption, environmental destruction, and the silent tragedies of automobile accidents, courtroom injustices, and death by disease and aging. All around me, I saw a society that couldn't coordinate, full of people resigned to unnecessary fates.
I was told to settle for making a small difference. I resolved to do the opposite.
I made a promise to myself. I didn't promise to fix governments: that was a means to an end, a convenient solution for someone who didn't know how to look further out. I didn't promise to change the world, either: every little thing is a change, and not all changes are good. No, I promised to save the world.
That promise still stands.
The world sure as hell isn't going to save itself.