If everybody was cowed by the simple fact that they can't succeed, then that one-in-a-million person who can succeed would never take their shot. So I was sure as hell going to take mine. But if the chance that one person can save the world is one in a million, then there had better be a million people trying.
I want to upvote about twenty times for this phrase alone. I suspect that your psychology was very different than mine; I think I crave stability and predictability a lot more. One of the reasons that "saving the world" always seemed like an impossible thing to do, like something that didn't even count as a coherent goal, was that I didn't know where to start or even what the ending would look like. That becomes a lot more tractable if you're one of a million people trying to solve a problem, and a lot less scary.
However, idealism still scares me. I remember being a kid and reading about communism and thinking that it really ought to work. I remember thinking that if I'd been a young adult back before communism, I would have bet my time and effort on it working. And...it turned out not to work. Since I probably wasn't any smarter than the people who tried to make ...
Communism definitely serves as a warning to smart optimizers to not get ahead of themselves.
But it also cuts the other way: it lets smart optimizers know how powerful some ideas can be.
In a sociology class, the teacher once mentioned to us that Karl Marx was the only truly applied sociologist. I don't know how true that is, but he is certainly the one who has had the most impact.
I wasn't trying to get pats on the head, I was appealing to the Lord of the Heavens and the Earth. Were we all on the same page, here? This was the creator. He was infinitely virtuous, and he had told us what to do.
I know exactly what you're talking about. I quickly realized as a kid that grown-ups get quite worried if you start taking the religion too seriously.
The more of other LessWrongers' life stories I hear (and taking my own into consideration), the more I realise that one of our defining traits is an inability and/or unwillingness to compartmentalize on important ideas.
The government, though, was a different matter altogether. I assumed that a lot of very smart people had put a lot of effort into its design — that's what the "Founding Fathers" meme implied, anyway.
I've always taken the framing of the US Constitution as a cautionary tale about the importance of getting things exactly right. The founding fathers were highly intelligent (some of them, anyway), well-read and fastidious; after a careful review of numerous different contemporary and historical government systems, from the Iroquois confederacy to ancient Greek city-states, they devised a very clever, highly non-obvious alternative designed to be watertight against any loopholes they could think of, including being self-modifying in carefully regulated ways.
It almost worked. They created a system that came very, very close to preventing dictatorship and oligarchy... and the United States today is a grim testament to what happens when you cleverly construct an optimization engine that almost works.
One of the impressive things about the Constitution is that it was designed to last a few decades and then reset in a new Constitutional Convention when it got too far from optimal. It's gone far beyond spec at this point, and works... relatively well.
When I was finally told how the US government worked, I couldn't believe my ears. It was a mess. An arbitrary, clunky monstrosity full of loopholes a child could abuse. I could think of a dozen improvements off the top of my head.
For what it's worth, the Founding Fathers actually did do quite a bit of research into what kinds of "loopholes" had existed in earlier systems, particularly the one in England, and took steps to avoid them. For example, the Constitution mandates that a census be taken every ten years because, in England, there were "rotten boroughs" which had a member of Parliament even though they had a tiny population. Needless to say, it wasn't easy to get politicians in these districts to approve redistricting laws.
On the other hand, the Founding Fathers didn't anticipate gerrymandering.
...To give you an idea of how my teenaged mind worked, it was immediately clear to me that any first-order "improvements" suggested by naïve ninth-graders would have unintended negative consequences. Therefore, improvement number one involved redesigning the system to make it easy to test many different improvements in parallel, adding machinery to adopt the improvements that were actually shown to work.
To be equally fair, a lot of the more obvious exploits in the American system have been tried at one point or another; one of the clearer examples I can think of offhand is FDR's attempt to pack the US Supreme Court in 1937. Historically most of these have been shot down or rendered more or less toothless by other power centers in the government, although a similar (albeit somewhat unique) situation did contribute to the American Civil War.
There are a lot of bad things I could say about the American system, but the dynamic stability built into it seems to have been quite a good plan.
"Avoid concentrating power, and try to pit power centers against each other whenever possible" seems to have been a fairly successful design heuristic for governments.
How do you know that campaign spending is reduced?
Revealed preferences and margins. By spending on the 'obvious roads', entities reveal that those are the optimal roads for them, their first choice; if they are forced back onto secondary choices, they must in some way be worse off (for example, paying more or getting less), else they would have been using those non-obvious roads in the first place; and then, by supply and demand, less will be spent.
Parties aren't a built-in feature of the American political system as such -- in fact, many of the people involved in setting it up were vociferous about their opposition to factionalism (and then proceeded more or less directly into some rather nasty factional conflict, because humans). The first-past-the-post decision system used in American federal elections is often cited as leading to a two-party system (Duverger's law), and indeed probably contributes to such a state, but it's not a hard rule; the UK for example uses FPTP voting in many contexts but isn't polarized to the extent of the US, though it's more polarized in turn than most continental systems.
You'll excuse me if I don't mention them here: there is a lot of inferential distance. Perhaps one day I'll write a sequence.
What a tease! Why not give us a short bullet-point list of your conclusions? Most readers around here wouldn't dismiss them out of hand, even lacking a chain of arguments leading up to them. It's enjoyable to jump across inferential chasms. Especially if you think of your conclusions as important. Are they reactionary?
It's tempting to say "either I present my conclusions in their most convincing form, as a sequence, or not at all", but remember that in resource-constrained environments, the perfect is the enemy of the good.
Why not give us a short bullet-point list of your conclusions? Most readers around here wouldn't dismiss them out of hand, even lacking a chain of arguments leading up to them.
We sure would. We think we are smart, and the inferential gap the OP mentioned is unfortunately almost invisible from this side. That's why Eliezer had to write all those millions of words.
Easy test: send a summary/bullet point/whatever as a private message to a few select people from LessWrong, and ask them for their reactions. Possible loss: a few select members become biased, due to the large inferential gap, against the ideas you gave up to pursue a more important goal. Possible gains: rational feedback on your ideas, supporters, and an estimate of the number of supporters you could gain by sharing your ideas more widely on this site.
Easy test: send a summary/bullet point/whatever as a private message to a few select people from LessWrong, and ask them for their reactions.
That is an interesting test but it is not testing quite the same thing as whether the conclusions would be dismissed out of hand in a post. "Herding cats" is a very different thing to interacting with a particular cat with whom you have opened up a direct mammalian social exchange.
I'll try, just for fun, to summarize Eliezer's conclusions of the pre-fun-theory and pre-community-building part of the sequence:
We hold the entire future of the universe in our hands. Is that not justification enough?
It's too much justification. Don't assume that this immense savage universe is just a growth medium for whatever microbe wins the game on Earth.
This is not about values, it is about realism. I am protesting this presumption that the cosmos is just a dumb desert waiting for transhumanity to come and make it bloom in our image. If a line of argument tells you that you are a 1-in-10^80 special snowflake from the dawn of time, you should conclude that there is something wrong with the argument, not wallow in the ecstatic dread of your implied cosmic responsibility. It would be far more reasonable to conclude that there is some presently unknown property of the universe which either renders such expansion physically impossible, or which actively suppresses it when it begins to occur.
Would you agree that you are carrying out a Pascal's Muggle line of reasoning using a leverage prior?
http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/
If so, you're using it very controversially, compared to disbelieving in a googolplex or Ackermann number of leverage. A 10^-80 prior is easy for sensory evidence to overcome if your model implies that fewer than a 10^-80 fraction of sentients hallucinate your sensory evidence; this happens every time you flip 266 coins. Conversely, to state that the 10^-80 prior is invincible just restates that you think more than a 10^-80 fraction of sentients are having your experiences, due to Simulation Arguments or some explanation of the Fermi Paradox which involves lots of civilizations like ours within any given Hubble volume. In other words, to say that the 10^-80 prior is not beaten by our sensory experience merely restates that you believe in an alternate explanation for the Fermi Paradox in which our sensory experiences are not rare.
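A quick check of the arithmetic behind that coin-flip figure, using only the fact that $\log_{10} 2 \approx 0.30103$: any specific sequence of 266 fair flips has probability

\[
2^{-266} \;=\; 10^{-266 \log_{10} 2} \;\approx\; 10^{-80.1} \;<\; 10^{-80},
\]

so ordinary sensory experiences routinely carry probabilities small enough to overcome a 10^-80 prior (and 266 is the smallest such number of flips, since $265 \times 0.30103 \approx 79.8$).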
From the "Desk" of: Snooldorp Gastool V
Attention: Eliezer Yudkowsky Machine Intelligence Research Institute
Sir, you will doubtlessly be astonished to be receiving a letter from a species unknown to you, who is about to ask a favor from you.
As fifth rectified knigget of my underclan's overhive, I have recently come into possession of an ancient Andromedan passkey, guaranteeing the owner access to no less than 2^419 intergalactic credits. My own species is a trans-cladistic harmonic agglomerate and therefore does not satisfy the anghyfieithadwy of Andromedan culture-law, which stipulates that the titular beneficiary of the passkey (who has first claim on half the credits) must be a natural sophont species. However, we have inherited a trust relationship with a Voolhari Legacy adjudication system, in the vicinity of what you know as the Orion OB1 association, and we have verified that your species is the nearest natural sophont with the technical capacity and cognitive inclinations needed to be our partners in this venture. In order to earn your share of this account, your species should beam by radio telescope its genome, cultural history, and at least two hundred (200) ...
I directly state that, for other reasons not related to the a priori pre-sensory exclusion of any act which can yield 2^419 credits, it seems to me likely that most of the sentients receiving such a message will not be dealing with a genuine offer.
I see a hilarious and inspiring similarity between your story and mine.
In high school, I realized that I enjoyed reflecting on topics to achieve coherence and discussing the mechanisms behind surface phenomena, and that I wanted everyone to be happy on a deep level. So I created a religion, because, of course, I wanted to save the world. I thought other religions were failed attempts to incorporate the findings of modern positive psychology (which had "solved happiness") into moral theories, but I wanted to use the meme potential of social phenomena like religion ...
I liked this series a lot. Thanks for writing it.
But I couldn't resist this small math nitpick: "But if the chance that one person can save the world is one in a million, then there had better be a million people trying." -> That's a great quote, but we can be more precise:
If these probabilities were indeed independent (which they can't possibly be, but still), and a million people tried with a chance of 1 in a million each, then the chance P that the world is saved is only P = 1-(999999/1000000)^1000000 ≈ 63.2%. If we want the world to be saved w...
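For anyone who wants that 63.2% spelled out, it is the familiar $1 - 1/e$ limit:

\[
P \;=\; 1 - \left(1 - \frac{1}{10^{6}}\right)^{10^{6}} \;\approx\; 1 - e^{-1} \;\approx\; 0.632,
\]

since $(1 - 1/n)^n \to e^{-1}$ as $n \to \infty$.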
This was enjoyable to me because "saving the world", as you put it, is completely unmotivational for me. (Luckily, I have other sources of motivation.) It's interesting to see what drives other people and how the source of their drive changes their trajectory.
I'm definitely curious to see a sequence, or at least a short feature list, about your model for a government that structurally ratchets toward better instead of worse. That's definitely something that's never been achieved consistently in practice.
I'm not sure that dismissing government reform is necessarily the right thing to do, even if AI risk is the larger problem. The timelines on which the solutions pay off may be different - even if you have to save the world from UFAI fifty years from now, there's still a world of good you can do in the meantime by helping people with a better government system.
Also, getting a government that better serves its constituents' values could be relevant progress towards getting a computer program that serves the programmers' values well.
You've probably thought through these exact points, but glossed over them in the summary.
Yep. Trust me, I really wanted the answer to be that I should leverage a comparative advantage in reform instead of working on AI risk. There are a number of reasons why it wasn't. It turns out I don't possess much advantage in reform work -- most of the mental gains are transferable, and I lack key resources, such as powerful friends and political sway, that would have granted an advantage in reform. But yes, this was a difficult realization.
Attending the December MIRI workshop was actually something of a trial run. If it had turned out that I couldn't do the math or otherwise assist with my time, then I would have donated money while spending my time on reform problems. It turns out that I can do FAI research, though, and by now I'm quite confident that I can do more good there.
Yeah, could I take five minutes to stump for more people getting involved in FAI issues? I've read through the MIRI/FHI papers on the subject, and the field of Machine Ethics is currently in its infancy, with a whole lot of open questions, both philosophical and mathematical.
Now, you might despair and say, Google bought DeepMind, there's gonna be UFAI, we're all gonna die because we didn't solve this a decade ago.
I prefer to say: this field is young and growing more serious and respected. This means that the barrier to entry for a successful contribution is relatively low and the comparative advantage of possessing relevant knowledge is relatively high, compared to other things you could be pursuing with similar skills.
Thanks for sharing. I suspect this is a common sort of story amongst LessWrong readers.
Depending upon whether you believe that I was actually able to come up with better ways to structure people, you may feel that I'm either pretty accomplished or extremely deluded. Perhaps both.
I accepted your claim easily as I think it's plausible that a great many people have come up with better ways to structure people. It's in the aggregate form that stupidity prevails.
Nice story.
I notice you don't talk about the interaction between the two big goals you've held. Your beliefs here presumably hinge on timescales? If most existential risk is a long way off, then improving the coordination and decision making of society is likely a better route to long-term safety than anything more direct (though perhaps there is something else better still).
If you agree with that, when historically do you guess the changeover point was?
A decade late to the party, but thank you for reminding me once again what is important, and for rekindling the spark.
Eventually, I had a few insights that I've yet to find in the literature, a few ideas that I still actually believe are important. You'll excuse me if I don't mention them here: there is a lot of inferential distance. Perhaps one day I'll write a sequence.
Did you end up writing them anywhere?
I have to agree with Kawoomba. It would be totally awesome to try and puzzle out the reasons that you have for your ideas with just the ideas given. An hour of your time (to write a post) could prompt people to change their minds on how society should be optimized, and that is an opportunity that you shouldn't miss. Also, changing the way society works is one of my pet causes.
So what is your ideal meta-social system? I'm glad you've turned to a better path, but I would hate for that work to have gone to waste. I know like-minded people from the bitcoin space who are working on social reform and radical changes of governance, and it would be interesting to ferry them across that inferential gap.
Thank you. I can relate to much of what you said, which isn't terribly rare here.
And the most enjoyable of the feelings evoked in me (as has happened on several occasions already) is seeing a young one who is better and more promising than me.
(Though my enjoyment at being superseded is dangerous, in the sense that it may be associated with laziness, so you are very welcome not to enjoy yours -- or to enjoy it, however you wish.)
The actual reason why I started to comment at all, however, is that it's amusing to note how I'm in a sense in the reverse of your situati...
By July of 2013, I came to agree with MIRI's conclusions.
Do you think the orthogonality thesis is intuitively true or have you worked out a way to prove it?
Because in that case I'd like to avoid reinventing the wheel...
Everyone wants the world to change, but most are cowed by the fact that they can't change it themselves.
But what if the best method to save the world involves government after all? After all, government is how humans coordinate our resources toward our goals. Our current government is also working on the AI project, and there are decent odds that it will be solved by either our espionage or military research branches. Meanwhile, the individuals and groups working on the Friendly aspect of the AI project are poorly coordinated and poorly funded. Perhaps there is a way you could use your old expertise on your new goal?
To be clear, I did my best to present each past-Nate's viewpoint from that past-Nate's perspective. I do not necessarily endorse my old beliefs, and I readily admit that 2005!Nate and other Nates nearby were far too arrogant. I aim to explain where I came from, not to justify it.
My arrogance has been tempered in recent years (though I retain some for the Nothing Is Beyond My Grasp compartment). That said, self-evaluation can be difficult, and perhaps I have not been tempered enough. If you notice any specific instances where it seems my model of my abilities/importance is out of sync with reality, I'd very much appreciate your input.
From what I gather, most people don't respond to rational ideas and actions, just to ideas and actions they believe will benefit themselves or their group. This is how bad ideas continue to flourish (Bigger Church = Pleasing the Lord = Better chance of an afterlife). In addition, people do respond to ideas they believe are moral, but what most people define as "good" or "bad", moral or immoral, tends to be what they believe will benefit them or the group they relate to (family, community, country, etc.). As a rule of thumb, to ...
This is the final post in my productivity sequence.
The first post described what I achieved. The next three posts describe how. This post describes why, explaining the sources of my passion and the circumstances that convinced a young Nate to try and save the world. Within, you will find no suggestions, no techniques to emulate, no new ideas to ponder. This is a rationalist coming-of-age story. With luck, you may find it inspiring. Regardless, I hope you can learn from my mistakes.
Never fear, I'll be back to business soon — there's lots of studying to do. But before then, there's a story to tell, a memorial to what I left behind.
I was raised Catholic. On my eighth birthday, having received my first communion about a year prior, I casually asked my priest how to reaffirm my faith and do something for the Lord. The memory is fuzzy, but I think I donated a chunk of allowance money and made a public confession at the following mass.
A bunch of the grownups made a big deal out of it, as grownups are wont to do. "Faith of a child", and all that. This confused me, especially when I realized that what I had done was rare. I wasn't trying to get pats on the head, I was appealing to the Lord of the Heavens and the Earth. Were we all on the same page, here? This was the creator. He was infinitely virtuous, and he had told us what to do.
And yet, everyone was content to recite hymns once a week and donate for the reconstruction of the church. What about the rest of the world, the sick, the dying? Where were the proselytizers, the missionary opportunities? Why was everyone just sitting around?
On that day, I became acquainted with civilizational inadequacy. I realized you could hand a room full of people the literal word of God, and they'd still struggle to pay attention for an hour every weekend.
This didn't shake my faith, mind you. It didn't even occur to me that the grownups might not actually believe their tales. No, what I learned that day was that there are a lot of people who hold beliefs they aren't willing to act upon.
Eventually, my faith faded. The distrust remained.
Gaining Confidence
I grew up in a small village, population ~1200. My early education took place in a one-room schoolhouse. The local towns eventually rolled all their school districts into one, but even then, my graduating class barely broke 50 people. It wasn't difficult to excel.
Ages twelve and thirteen were rough — that was right after they merged school districts, and those were the years I was first put a few grades ahead in math classes. I was awkward and underconfident. I felt estranged and lonely, and it was easy to get shoehorned into the "smart kid" stereotype by all the new students.
Eventually, though, I decided that the stereotype was bogus. Anyone intelligent should be able to escape such pigeonholing. In fact, I concluded that anyone with real smarts should be able to find their way out of any mess. I observed the confidence possessed by my peers, even those who seemed to have no reason for confidence. I noticed the ease with which they engaged in social interactions. I decided I could emulate these.
I faked confidence, and it soon became real. I found that my social limitations had been largely psychological, and that the majority of my classmates were more than willing to be friends. I learned how to get good grades without alienating my peers. It helped that I tended to buck authority (I was no "teacher's pet") and that I enjoyed teaching others. I had a knack for pinpointing misunderstandings and was often able to teach better than the teachers could — as a peer, I could communicate on a different level.
I started doing very well for myself. I got excellent grades with minimal effort. I overcame my social anxieties. I had a few close friends and was on good terms with most everyone else. I participated in a number of extracurriculars where I held high status. As you may imagine, I grew quite arrogant.
In retrospect, my accomplishments were hardly impressive. At the time, though, it felt like everyone else wasn't even trying. It became apparent that if I wanted something done right, I'd have to do it myself.
Shattered Illusions
Up until the age of fourteen I had this growing intuition that you can't trust others to actually get things done. This belief didn't become explicit until the end of ninth grade, when I learned how the government of the United States of America actually works.
Allow me to provide a few pieces of context.
For one thing, I was learning to program computers at the time. I had been programming for maybe a year and a half, and I was starting to form concepts of elegance and minimalism. I had a belief that the best design is a small design, a design forced by nature at every step along the way, a design that requires no arbitrary choices.
For another thing, my religion had died not with a bang, but with a whimper. I'd compartmentalized it, and it had slowly withered away. I didn't Believe any more, but I didn't mind that others did. It was a happy fantasy, a social tool. Just as children are allowed to believe in Santa Claus, grownups were allowed to believe in Gods.
The government, though, was a different matter altogether. I assumed that a lot of very smart people had put a lot of effort into its design — that's what the "Founding Fathers" meme implied, anyway. But maybe it wasn't even that. Maybe I just possessed an unspoken, unchallenged belief that the grownups knew what they were doing, at least at the very highest levels. This was the very fabric of society itself: surely it was meticulously calibrated to maximize human virtue, to protect us from circumstance and evil.
When I was finally told how the US government worked, I couldn't believe my ears. It was a mess. An arbitrary, clunky monstrosity full of loopholes a child could abuse. I could think of a dozen improvements off the top of my head.
To give you an idea of how my teenaged mind worked, it was immediately clear to me that any first-order "improvements" suggested by naïve ninth-graders would have unintended negative consequences. Therefore, improvement number one involved redesigning the system to make it easy to test many different improvements in parallel, adding machinery to adopt the improvements that were actually shown to work.
Yet even these simple ideas were absent in the actual system. Corruption and inefficiency ran rampant. Worse, my peers didn't seem particularly perturbed: they took the system as a given, and merely memorized the machinery for long enough to pass a test. Even the grownups were apathetic: they dickered over who should have power within the system, never suggesting we should alter the system itself.
My childhood illusions fell to pieces. I realized that nothing was meticulously managed, that the smartest people weren't in control, making sure that everything was optimal. All the world's problems, the sicknesses and the injustices and the death: these weren't necessary evils, they were a product of neglect. The most important system of all was poorly coordinated, bloated, and outdated — and nobody seemed to care.
Deciding to Save the World
This is the context in which I decided to save the world. I wasn't as young and stupid as you might think — I didn't believe I was going to save the world. I just decided to. The world is big, and I was small. I knew that, in all likelihood, I'd struggle ineffectually for decades and achieve only a bitter, cynical adulthood.
But the vast majority of my peers hadn't made it as far as I had. Even though a few were sympathetic, there was simply no way we could change things. It was outside of our control.
The adults were worse. They smiled, they nodded, they commended my critical thinking skills. Then they went back to what they were doing. A few of them took the time to inform me that it's great to want to change the world and all, but eventually I'd realize that the best way to do that was to settle down and be a teacher, or run a church, or just be kind to others.
I wasn't surprised. I already knew it was rare for people to actually try and fix things.
I had youthful idealism, I had big ambitions, but I knew full well that I didn't actually have a chance. I knew that I wouldn't be able to single-handedly redesign the social contract, but I also knew that if everyone who made it as far as I did gave up just because changing the world is impossible, then the world would never change.
If everybody was cowed by the simple fact that they can't succeed, then that one-in-a-million person who can succeed would never take their shot.
So I was sure as hell going to take mine.
Broadening Scope
Mere impossibility was never a hurdle: The Phantom Tollbooth saw to that at a young age. When grownups say you can't do something, what they mean is that they can't do it. I spent time devising strategies to get leverage and push governments out of their stagnant state and into something capable of growth.
In 2005, a teacher to whom I'd ranted introduced me to another important book: Ishmael. It wasn't the ideas that stuck with me — I disagreed with a few at the time, and I now disagree with most. No, what this book gave me was scope. This author, too, wished to save the world, and the breadth of his ideas exceeded my own. This book gave me no answers, but it gave me better questions.
Why merely hone the government, instead of redesigning it altogether?
More importantly, what sort of world are you aiming for?
"So you want to be an idealist?", the book asked. "Very well, but what is your ideal?"
I refocused, looking to fully define the ideals I strove for in a human social system. I knew I wouldn't be able to institute any solution directly, but I also knew that pushing governments would be much easier if I had something to push them towards.
After all, the Communist Manifesto changed the world, once.
This became my new goal: distill an ideal social structure for humans. The problem was insurmountable, of course, but this was hardly a deterrent. I was bright enough to understand truisms like "no one system will work for everybody" and "you're not perfect enough to get this right", but these were no trouble. I didn't need to directly specify an ideal social structure: a meta-structure, an imperfect system that ratchets towards perfection, a system that is optimal in the limit, would be fine by me.
From my vantage point, old ideas like communism and democracy soon seemed laughable. Interesting ideas in their time, perhaps, but obviously doomed to failure. It's easy to build a utopia when you imagine that people will set aside their greed and overcome their apathy. But those aren't systems for people: People are greedy, and people are apathetic. I wanted something that worked — nay, thrived — when populated by actual humans, with all their flaws.
I devoted time and effort to research and study. This was dangerous, as there was no feedback loop. As soon as I stepped beyond the achievements of history, there was no way to actually test anything I came up with. Many times, I settled on one idea for a few months, mulling it over, declaring it perfect. Time and again, I later found a fatal flaw, a piece of faulty reasoning, and the whole thing came tumbling down. After many cycles, I noticed that the flaws were usually visible in advance. I became cognizant of the fact that I'd been glossing over them, ignoring them, explaining them away.
I learned not to trust my own decrees of perfection. I started monitoring my thought processes very closely. I learned to notice the little ghosts of doubt, to address them earlier and more thoroughly. (I became a staunch atheist, unsurprisingly.) This was, perhaps, the beginning of my rationalist training. Unfortunately, it was all self-directed. Somehow, it never occurred to me to read literature on how to think better. I didn't have much trust in psychological literature, anyway, and I was arrogant.
Communication Failures
It was during this period that I explicitly decided not to pursue math. I reasoned that in order to actually save the world, I'd need to focus on charisma, political connections, and a solid understanding of the machinery underlying the world's major governments. Upon graduating high school, I decided to go to a college in Washington D.C. and study political science. I double majored in Computer Science as a fallback plan, a way to actually make money as needed (and because I loved it).
I went into my Poli Sci degree expecting to learn about the mechanics of society. Amusingly enough, I didn't know that "Economics" was a field. We didn't have any econ classes in my tiny high school, and nobody had seen fit to tell me about it. I expected "Political Science" to teach me the workings of nations, including the world economy, but I quickly realized that it's about the actual politicians, the social peacocking, the façades. Fortunately, a required Intro to Econ class soon remedied the situation, and I quickly changed my major to Economics.
My ideas experienced significant refinement as I received formal training. Unfortunately, nobody would listen to them.
It's not that they were dismissed as childish idealism: I had graduated to larger problems. I'd been thinking long and hard about the problem for a few years, and I'd had some interesting insights. But when I tried to explain them to people, almost everyone had immediate adverse reactions.
I anticipated criticism, and relished the prospect. My ideas were in desperate need of an outside challenger. But the reactions of others were far worse than I anticipated.
Nobody found flaws in my logic. Nobody challenged my bold claims. Instead, they simply failed to understand. They got stuck three or four steps before the interesting points, and could go no further. I learned that most people don't understand basic economics or game theory. Many others were entrenched in bluegreensmanship and reflexively treated my suggestions as attacks. Aspiring politicians balked at the claim that Democracy, while perhaps an important step in our cultural evolution, can't possibly be the end of the line. Still others insisted that it's useless to discuss ideals, because they can never be achieved.
In short, I found myself on the far side of a wide inferential gap.
I learned that many people, after falling into the gap, were incapable of climbing out, no matter how slowly I walked them through the intervening steps. They had already passed judgement on the conclusion, and rejected my attempts to root out their misconceptions, becoming impatient before actually listening. I grew very cautious about who I shared my ideas with, worrying that exposing someone to them too quickly or in the wrong fashion would cause a permanent setback.
I had a few friends who knew enough economics and other subjects to follow along, and who wouldn't discard uncouth ideas outright. I began to value these people highly, as they were among the few who could actually put pressure on me, expose flaws in my reasoning, and help me come up with solutions.
Eventually, I had a few insights that I've yet to find in the literature, a few ideas that I still actually believe are important. You'll excuse me if I don't mention them here: there is a lot of inferential distance. Perhaps one day I'll write a sequence.
Even then, I could see no easy path to public support. Most people lacked the knowledge to understand my claims without effort, and lacked the incentive to put in the effort for some unproven boy.
Phase Two
Fortunately, I had other tricks up my sleeve.
I attempted three different tech startups. Two of them failed. The last was healthier, but we shut it down because the expected gains were lower than an industry salary. In the interim, I honed my programming skills and secured an industry job (I'm a software engineer at Google).
By the time I graduated, my ideas were largely refined and stable. I had settled upon a solid meta-social system as an ideal to strive for, and I'm still fairly confident that it's a good one — one where the design is forced by nature at every step, one that requires no arbitrary choices, one that ratchets towards optimality. And even if the ideal was not perfect, the modern world is insane enough that even a small step towards a better-coordinated society would yield gigantic benefits.
The problem changed from one of refining ideas to one of convincing others.
It was clear that I couldn't spread my ideas by merely stating them, due to the inferential distance, so I started working on two indirect approaches in the hours after work.
The first was a book, which went back to my roots: simple, low-cost ideas for how to change the current system of government in small ways that could have large payoffs. The goal of this project was to shake people from the blue-green mindset, to convince them that we should stop bickering within the framework and consider modifying the framework itself. This book was meant to be the first in a series, in which I'd slowly build towards more radical suggestions.
The second project was designed to put people in a more rational frame of mind. I wanted people who could look past the labels and see the things, people who don't just memorize how the world works but see it as mutable, as something they can actually change. I wanted people that I could pull out of inferential gaps, in case they fell into mine.
Upon introspection, I realized that much of my ability came from a specific outlook on the world that I had at a young age. I had a knack for understanding what the teachers were trying to teach me, for recognizing and discarding the cruft in their statements. I saw many fellow students putting stock in historical accidents of explanation where I found it easy to grasp the underlying concepts and drop the baggage. This ability to cull the cruft is important to understanding my grand designs.
This reasoning (and a few other desires, including a perpetual fascination with math and physics) led me to create simplifience, a website that promotes such a mindset.
It never made it to the point where I was comfortable publicizing it, but that hardly matters anymore. In retrospect, it's an unfinished jumble of rationality training, math explanations, and science enthusiasm. It's important in one key respect:
As I was writing simplifience, I did a lot of research for it. During this research, I kept stumbling upon web articles on this one website that articulated what I was trying to express, only better. That website was LessWrong, and those articles were the Sequences.
It took me an embarrassingly long time to actually pay attention. In fact, if you go to simplifience.com, you can watch as the articles grow more and more influenced by the sequences. My exposure to them was patchy, centered around ideas that I'd already had. It took me a while to realize that I should read the rest of them, that I might learn new things that extended the ideas I'd figured out on my own.
It seemed like a good way to learn how to think better, to learn from someone who had had similar insights. I didn't even consider the possibility that this author, too, had some grand agenda. The idea that Eliezer's agenda could be more pressing than my own never even crossed my mind.
At this point, you may be able to empathize with how I felt when I first realized the importance of an intelligence explosion.
Superseded
It was like getting ten years worth of wind knocked out of me.
I saw something familiar in the sequences — the winding, meticulous explanations of someone struggling to bridge an inferential gap. I recognized the need to cover subjects that looked completely tangential to the actual point, just to get people to the level where they wouldn't reject the main ideas out-of-hand. I noticed the people falling to the side, debating issues two or three steps before the actual interesting problems. It was this familiar pattern, above all else, that made me actually pay attention.
Everything clicked. I was already thoroughly convinced of civilizational inadequacy. I had long since concluded that there's not much that can hold a strong intelligence down. I had a sort of vague idea that an AI would seek out "good" values, but such illusions were easily dispelled — I was a moral relativist. And the stakes were as high as stakes go. Artificial intelligence was a problem more pressing than my own.
The realization shook me to my core. It wasn't even the intelligence explosion idea that scared me, it was the revelation of a fatal flaw at the foundation of my beliefs. Poorly designed governments had awoken my fear that society can't handle coordination problems, but I never — not once in nearly a decade — stopped to consider whether designing better social systems was actually the best way to optimize the world.
I professed a desire to save the world, but had misunderstood the playing field so badly that existential risk had never even crossed my mind. Somehow, I had missed the most important problems, and they should have been obvious. Something was very wrong.
It was time to halt, melt, and catch fire.
This was one of the most difficult things I've done.
I was more careful, the second time around. The Sequences shook my foundations and brought the whole tower crashing down, but what I would build in its place was by no means a foregone conclusion.
I had been blind to all existential risks, not just AI risk, and there was a possibility that I had missed other features of the problem space as well. I was well aware of the fact that, having been introduced to AI risk by Eliezer's writings, I was biased towards his viewpoint. I didn't want to make the same mistake twice, to jump for the second big problem that crossed my path just because it was larger than the first. I had to start from scratch, reasoning from the beginning. I knew I must watch out for conjunction fallacies caused by nice narratives, arguments made from high stakes (Pascal's mugging), putting too much stock in inside views, and so on. I had to figure out how to actually save the world.
It took me a long time to deprogram, to get back to neutral. I considered carefully, accounting for my biases as best I could. I read a lot. I weighed the evidence. The process took many months.
By July of 2013, I came to agree with MIRI's conclusions.
Disclaimer
Writing it all out like this, I realize that I've failed to convey the feeling of it all. Depending upon whether you believe that I was actually able to come up with better ways to structure people, you may feel that I'm either pretty accomplished or extremely deluded. Perhaps both.
Really, though, it's neither. This raw story, which omits details from the rest of my life, paints a strange picture indeed. The intensity is distilled.
I was not a zealot, in practice. My attempts to save the world didn't bleed much into the rest of my life. I learned early on that this wasn't the sort of thing that most people enjoyed discussing, and I was wary of inferential gaps. My work was done in parallel with an otherwise normal life. Only a select few people were privy to my goals, my conclusions. The whole thing often felt disconnected from reality, just some unusual hobby. The majority of my friends, if they read this, will be surprised.
There are many holes in this summary, too. It fails to capture the dark spots. It omits the feelings of uncertainty and helplessness, the cycles of guilt at being unproductive followed by lingering depression, the wavering between staunch idealism and a conviction that my goals were nothing but a comfortable fantasy. It skips over the year I burned out, writing the whole idea off, studying abroad and building myself a healthier mental state before returning and picking everything back up.
Nothing in this summary describes the constant doubt about whether I was pursuing the best path or merely the easiest one. I've failed to mention my complete failure to network and my spectacular inability to find people who would actually take me seriously. It's hard to convey the fear that I was just pretending I wanted to save the world, just acting like I was trying, because that's the narrative that I wanted. How could someone 'smart' actually fail to find powerful friends if they were really trying for nine years?
I claim no glory: the journey was messy, and it was poorly executed. I tell the story in part because people have asked me where my passion comes from and how I became aligned with MIRI's mission. Mostly, though, I tell the story because it feels like something I have to tell before moving on. It feels almost dishonest to try to save the world in this new way without at least acknowledging that I walked another path, once.
The Source of My Passion
So to those of you wondering where my passion comes from, I answer this: it has always been there. It was a small flame, when I was young, and it was fed by a deep mistrust of society's capabilities and a strong belief that if anyone can matter, then I had better try.
From my perspective, I've been dedicating my energy towards 'saving the world' since first I realized that the world was in need of saving. This passion was not recently kindled, it was merely redirected.
There was a burst of productivity these past few months, after I refocused my efforts. I was given a new path, and on it the analogous obstacles have already been surmounted. MIRI has already spent years promoting that rational state of mind, bridging its inferential gap, finding people who can actually work on solving the problem instead of arguing about whether there is a problem to be solved. This was invigorating, like skipping ahead ten years in terms of where I wanted to be.
Alongside that, I felt a burning need to catch up. I was late to the party, and I had been foolish for a very long time. I was terrified that I wouldn't actually be able to help — that, after all my work, the most I'd be able to do to solve the big problems was earn to give. I'd have done it, because the actual goal is to save the world, not to satisfy Nate. But the idea scared me, and the desire to keep actively working on the big problems drove me forward.
In a way, too, everything got easier — I needed only to become good at logic and decision theory, to read a bunch of math textbooks, a task that was trivially measurable and joyfully easy compared to trying to convince the entire world to embrace strange, unpolished ideas.
All these factors contributed to my recent productivity. But the passion, the fervor, the desire to optimize the future — that has been there for a long time. People sometimes ask where I get my passion from, and I find it hard to answer.
We hold the entire future of the universe in our hands. Is that not justification enough?
I learned a long time ago that most people are content to accept the way things are. Everyone wants the world to change, but most are cowed by the fact that they can't change it themselves.
But if the chance that one person can save the world is one in a million, then there had better be a million people trying.
It is this knowledge — that the world will only be saved by people who actually try to save it — that drives me.
I still have these strange ideas, this pet inferential gap that I hope to bridge one day. It still hurts, that things important to me were superseded, but they were superseded, and it is better to know than to remain in the dark.
When I was fourteen, I saw many horrors laid out before us: war, corruption, environmental destruction, and the silent tragedies of automobile accidents, courtroom injustices, and death by disease and aging. All around me, I saw a society that couldn't coordinate, full of people resigned to unnecessary fates.
I was told to settle for making a small difference. I resolved to do the opposite.
I made a promise to myself. I didn't promise to fix governments: that was a means to an end, a convenient solution for someone who didn't know how to look further out. I didn't promise to change the world, either: every little thing is a change, and not all changes are good. No, I promised to save the world.
That promise still stands.
The world sure as hell isn't going to save itself.