This is the final post in my productivity sequence.

The first post described what I achieved. The next three posts describe how. This post describes why, explaining the sources of my passion and the circumstances that convinced a young Nate to try and save the world. Within, you will find no suggestions, no techniques to emulate, no new ideas to ponder. This is a rationalist coming-of-age story. With luck, you may find it inspiring. Regardless, I hope you can learn from my mistakes.

Never fear, I'll be back to business soon — there's lots of studying to do. But before then, there's a story to tell, a memorial to what I left behind.


I was raised Catholic. On my eighth birthday, having received my first communion about a year prior, I casually asked my priest how to reaffirm my faith and do something for the Lord. The memory is fuzzy, but I think I donated a chunk of allowance money and made a public confession at the following mass.

A bunch of the grownups made a big deal out of it, as grownups are wont to do. "Faith of a child", and all that. This confused me, especially when I realized that what I had done was rare. I wasn't trying to get pats on the head, I was appealing to the Lord of the Heavens and the Earth. Were we all on the same page, here? This was the creator. He was infinitely virtuous, and he had told us what to do.

And yet, everyone was content to recite hymns once a week and donate for the reconstruction of the church. What about the rest of the world, the sick, the dying? Where were the proselytizers, the missionary opportunities? Why was everyone just sitting around? 

On that day, I became acquainted with civilizational inadequacy. I realized you could hand a room full of people the literal word of God, and they'd still struggle to pay attention for an hour every weekend.

This didn't shake my faith, mind you. It didn't even occur to me that the grownups might not actually believe their tales. No, what I learned that day was that there are a lot of people who hold beliefs they aren't willing to act upon.

Eventually, my faith faded. The distrust remained.

Gaining Confidence

I grew up in a small village, population ~1200. My early education took place in a one-room schoolhouse. The local towns eventually rolled all their school districts into one, but even then, my graduating class barely broke 50 people. It wasn't difficult to excel.

Ages twelve and thirteen were rough — that was right after they merged school districts, and those were the years I was first put a few grades ahead in math classes. I was awkward and underconfident. I felt estranged and lonely, and it was easy to get shoehorned into the "smart kid" stereotype by all the new students.

Eventually, though, I decided that the stereotype was bogus. Anyone intelligent should be able to escape such pigeonholing. In fact, I concluded that anyone with real smarts should be able to find their way out of any mess. I observed the confidence possessed by my peers, even those who seemed to have no reason for confidence. I noticed the ease with which they engaged in social interactions. I decided I could emulate these.

I faked confidence, and it soon became real. I found that my social limitations had been largely psychological, and that the majority of my classmates were more than willing to be friends. I learned how to get good grades without alienating my peers. It helped that I tended to buck authority (I was no "teacher's pet") and that I enjoyed teaching others. I had a knack for pinpointing misunderstandings and was often able to teach better than the teachers could — as a peer, I could communicate on a different level.

I started doing very well for myself. I got excellent grades with minimal effort. I overcame my social anxieties. I had a few close friends and was on good terms with most everyone else. I participated in a number of extracurriculars where I held high status. As you may imagine, I grew quite arrogant.

In retrospect, my accomplishments were hardly impressive. At the time, though, it felt like everyone else wasn't even trying. It became apparent that if I wanted something done right, I'd have to do it myself.

Shattered Illusions

Up until the age of fourteen I had this growing intuition that you can't trust others to actually get things done. This belief didn't become explicit until the end of ninth grade, when I learned how the government of the United States of America actually works.

Allow me to provide a few pieces of context.

For one thing, I was learning to program computers at the time. I had been programming for maybe a year and a half, and I was starting to form concepts of elegance and minimalism. I had a belief that the best design is a small design, a design forced by nature at every step along the way, a design that requires no arbitrary choices.

For another thing, my religion had died not with a bang, but with a whimper. I'd compartmentalized it, and it had slowly withered away. I didn't Believe any more, but I didn't mind that others did. It was a happy fantasy, a social tool. Just as children are allowed to believe in Santa Claus, grownups were allowed to believe in Gods.

The government, though, was a different matter altogether. I assumed that a lot of very smart people had put a lot of effort into its design — that's what the "Founding Fathers" meme implied, anyway. But maybe it wasn't even that. Maybe I just possessed an unspoken, unchallenged belief that the grownups knew what they were doing, at least at the very highest levels. This was the very fabric of society itself: surely it was meticulously calibrated to maximize human virtue, to protect us from circumstance and evil.

When I was finally told how the US government worked, I couldn't believe my ears. It was a mess. An arbitrary, clunky monstrosity full of loopholes a child could abuse. I could think of a dozen improvements off the top of my head.

To give you an idea of how my teenaged mind worked, it was immediately clear to me that any first-order "improvements" suggested by naïve ninth-graders would have unintended negative consequences. Therefore, improvement number one involved redesigning the system to make it easy to test many different improvements in parallel, adding machinery to adopt the improvements that were actually shown to work.
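(An aside for the programmers in the audience: that machinery is essentially what we'd now call an explore/exploit loop. Below is a minimal Python sketch of the "trial improvements in parallel, adopt what works" idea. The policy names and effect sizes are invented for illustration; this is not anything my teenaged self actually specified.)

```python
import random

# Toy sketch: trial several policy variants in parallel (epsilon-greedy)
# and adopt whichever one the measurements actually favor.
# The variants and their "true" effects below are invented.
POLICIES = {"status_quo": 0.50, "variant_a": 0.55, "variant_b": 0.40}

def run_trial(policy):
    """One noisy measurement of a policy's outcome."""
    return POLICIES[policy] + random.gauss(0, 0.1)

def adopt_best(rounds=10_000, epsilon=0.1):
    totals = {p: 0.0 for p in POLICIES}
    counts = {p: 0 for p in POLICIES}

    def mean(p):
        return totals[p] / counts[p] if counts[p] else 0.0

    for _ in range(rounds):
        # Mostly exploit the best-looking variant, but keep exploring.
        if random.random() < epsilon:
            policy = random.choice(list(POLICIES))
        else:
            policy = max(POLICIES, key=mean)
        totals[policy] += run_trial(policy)
        counts[policy] += 1
    return max(POLICIES, key=mean)

print(adopt_best())  # usually "variant_a", the variant shown to work
```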

Yet even these simple ideas were absent in the actual system. Corruption and inefficiency ran rampant. Worse, my peers didn't seem particularly perturbed: they took the system as a given, and merely memorized the machinery for long enough to pass a test. Even the grownups were apathetic: they dickered over who should have power within the system, never suggesting we should alter the system itself.

My childhood illusions fell to pieces. I realized that nothing was meticulously managed, that the smartest people weren't in control, making sure that everything was optimal. All the world's problems, the sicknesses and the injustices and the death: these weren't necessary evils, they were a product of neglect. The most important system of all was poorly coordinated, bloated, and outdated — and nobody seemed to care.

Deciding to Save the World

This is the context in which I decided to save the world. I wasn't as young and stupid as you might think — I didn't believe I was going to save the world. I just decided to. The world is big, and I was small. I knew that, in all likelihood, I'd struggle ineffectually for decades and achieve only a bitter, cynical adulthood.

But the vast majority of my peers hadn't made it as far as I had. Even though a few were sympathetic, there was simply no way we could change things. It was outside of our control.

The adults were worse. They smiled, they nodded, they commended my critical thinking skills. Then they went back to what they were doing. A few of them took the time to inform me that it's great to want to change the world and all, but eventually I'd realize that the best way to do that was to settle down and be a teacher, or run a church, or just be kind to others.

I wasn't surprised. I already knew it was rare for people to actually try and fix things.

I had youthful idealism, I had big ambitions, but I knew full well that I didn't actually have a chance. I knew that I wouldn't be able to single-handedly redesign the social contract, but I also knew that if everyone who made it as far as I did gave up just because changing the world is impossible, then the world would never change.

If everybody was cowed by the simple fact that they can't succeed, then that one-in-a-million person who can succeed would never take their shot.

So I was sure as hell going to take mine.

Broadening Scope

Mere impossibility was never a hurdle: The Phantom Tollbooth saw to that at a young age. When grownups say you can't do something, what they mean is that they can't do it. I spent time devising strategies to get leverage and push governments out of their stagnant state and into something capable of growth.

In 2005, a teacher to whom I'd ranted introduced me to another important book: Ishmael. It wasn't the ideas that stuck with me — I disagreed with a few at the time, and I now disagree with most. No, what this book gave me was scope. This author, too, wished to save the world, and the breadth of his ideas exceeded my own. This book gave me no answers, but it gave me better questions.

Why merely hone the government, instead of redesigning it altogether?

More importantly: What sort of world are you aiming for?

"So you want to be an idealist?", the book asked. "Very well, but what is your ideal?"

I refocused, looking to fully define the ideals I strove for in a human social system. I knew I wouldn't be able to institute any solution directly, but I also knew that pushing governments would be much easier if I had something to push them towards.

After all, the Communist Manifesto changed the world, once.

This became my new goal: distill an ideal social structure for humans. The problem was insurmountable, of course, but this was hardly a deterrent. I was bright enough to understand truisms like "no one system will work for everybody" and "you're not perfect enough to get this right", but these were no trouble. I didn't need to directly specify an ideal social structure: a meta-structure, an imperfect system that ratchets towards perfection, a system that is optimal in the limit, would be fine by me.

From my vantage point, old ideas like communism and democracy soon seemed laughable. Interesting ideas in their time, perhaps, but obviously doomed to failure. It's easy to build a utopia when you imagine that people will set aside their greed and overcome their apathy. But those aren't systems for people: people are greedy, and people are apathetic. I wanted something that worked — nay, thrived — when populated by actual humans, with all their flaws.

I devoted time and effort to research and study. This was dangerous, as there was no feedback loop. As soon as I stepped beyond the achievements of history, there was no way to actually test anything I came up with. Many times, I settled on one idea for a few months, mulling it over, declaring it perfect. Time and again, I later found a fatal flaw, a piece of faulty reasoning, and the whole thing came tumbling down. After many cycles, I noticed that the flaws were usually visible in advance. I became cognizant of the fact that I'd been glossing over them, ignoring them, explaining them away.

I learned not to trust my own decrees of perfection. I started monitoring my thought processes very closely. I learned to notice the little ghosts of doubt, to address them earlier and more thoroughly. (I became a staunch atheist, unsurprisingly.) This was, perhaps, the beginning of my rationalist training. Unfortunately, it was all self-directed. Somehow, it never occurred to me to read literature on how to think better. I didn't have much trust in psychological literature, anyway, and I was arrogant.

Communication Failures

It was during this period that I explicitly decided not to pursue math. I reasoned that in order to actually save the world, I'd need to focus on charisma, political connections, and a solid understanding of the machinery underlying the world's major governments. Upon graduating high school, I decided to go to a college in Washington D.C. and study political science. I double majored in Computer Science as a fallback plan, a way to actually make money as needed (and because I loved it).

I went into my Poli Sci degree expecting to learn about the mechanics of society. Amusingly enough, I didn't know that "Economics" was a field. We didn't have any econ classes in my tiny high school, and nobody had seen fit to tell me about it. I expected "Political Science" to teach me the workings of nations, including the world economy, but quickly realized that it's about the actual politicians, the social peacocking, the façades. Fortunately, a required Intro to Econ class soon remedied the situation, and I quickly changed my major to Economics.

My ideas experienced significant refinement as I received formal training. Unfortunately, nobody would listen to them.

It's not that they were dismissed as childish idealism: I had graduated to larger problems. I'd been thinking long and hard about the problem for a few years, and I'd had some interesting insights. But when I tried to explain them to people, almost everyone had immediate adverse reactions.

I anticipated criticism, and relished the prospect. My ideas were in desperate need of an outside challenger. But the reactions of others were far worse than I anticipated.

Nobody found flaws in my logic. Nobody challenged my bold claims. Instead, they simply failed to understand. They got stuck three or four steps before the interesting points, and could go no further. I learned that most people don't understand basic economics or game theory. Many others were entrenched in blue-greensmanship and reflexively treated my suggestions as attacks. Aspiring politicians balked at the claim that Democracy, while perhaps an important step in our cultural evolution, can't possibly be the end of the line. Still others insisted that it's useless to discuss ideals, because they can never be achieved.

In short, I found myself on the far side of a wide inferential gap.

I learned that many people, after falling into the gap, were incapable of climbing out, no matter how slowly I walked them through the intervening steps. They had already passed judgement on the conclusion, and rejected my attempts to root out their misconceptions, becoming impatient before actually listening. I grew very cautious about whom I shared my ideas with, worrying that exposing them too quickly or in the wrong fashion would be a permanent setback.

I had a few friends who knew enough economics and other subjects to follow along and who wouldn't discard uncouth ideas outright. I began to value these people highly, as they were among the few who could actually put pressure on me, expose flaws in my reasoning, and help me come up with solutions.

Eventually, I had a few insights that I've yet to find in the literature, a few ideas that I still actually believe are important. You'll excuse me if I don't mention them here: there is a lot of inferential distance. Perhaps one day I'll write a sequence.

Even then, I could see no easy path to public support. Most people lacked the knowledge to understand my claims without effort, and lacked the incentive to put in the effort for some unproven boy.

Phase Two

Fortunately, I had other tricks up my sleeve.

I attempted three different tech startups. Two of them failed. The last was healthier, but we shut it down because the expected gains were lower than an industry salary. In the interim, I honed my programming skills and secured an industry job (I'm a software engineer at Google).

By the time I graduated, my ideas were largely refined and stable. I had settled upon a solid meta social system as an ideal to strive for, and I'm still fairly confident that it's a good one — one where the design is forced by nature at every step, one that requires no arbitrary choices, one that ratchets towards optimality. And even if the ideal was not perfect, the modern world is insane enough that even a small step towards a better-coordinated society would yield gigantic benefits.

The problem changed from one of refining ideas to one of convincing others.

It was clear that I couldn't spread my ideas by merely stating them, due to the inferential distance, so I started working on two indirect approaches in the hours after work.

The first was a book, which went back to my roots: simple, low-cost ideas for how to change the current system of government in small ways that could have large payoffs. The goal of this project was to shake people from the blue-green mindset, to convince them that we should stop bickering within the framework and consider modifying the framework itself. This book was meant to be the first in a series, in which I'd slowly build towards more radical suggestions.

The second project was designed to put people in a more rational frame of mind. I wanted people who could look past the labels and see the things, people who don't just memorize how the world works but see it as mutable, as something they can actually change. I wanted people that I could pull out of inferential gaps, in case they fell into mine.

Upon introspection, I realized that much of my ability came from a specific outlook on the world that I had at a young age. I had a knack for understanding what the teachers were trying to teach me, for recognizing and discarding the cruft in their statements. I saw many fellow students putting stock in historical accidents of explanation where I found it easy to grasp the underlying concepts and drop the baggage. This ability to cull the cruft is important to understanding my grand designs.

This reasoning (and a few other desires, including a perpetual fascination with math and physics) led me to create simplifience, a website that promotes such a mindset.

It never made it to the point where I was comfortable publicizing it, but that hardly matters anymore. In retrospect, it's an unfinished jumble of rationality training, math explanations, and science enthusiasm. It's important in one key respect:

As I was writing simplifience, I did a lot of research for it. During this research, I kept stumbling upon web articles on this one website that articulated what I was trying to express, only better. That website was LessWrong, and those articles were the Sequences.

It took me an embarrassingly long time to actually pay attention. In fact, if you go to simplifience.com, you can watch as the articles grow more and more influenced by the Sequences. My exposure to them was patchy, centered around ideas that I'd already had. It took me a while to realize that I should read the rest of them, that I might learn new things that extended the ideas I'd figured out on my own.

It seemed like a good way to learn how to think better, to learn from someone who had had similar insights. I didn't even consider the possibility that this author, too, had some grand agenda. The idea that Eliezer's agenda could be more pressing than my own never even crossed my mind.

At this point, you may be able to empathize with how I felt when I first realized the importance of an intelligence explosion.

Superseded

It was like getting ten years' worth of wind knocked out of me.

I saw something familiar in the Sequences — the winding, meticulous explanations of someone struggling to bridge an inferential gap. I recognized the need to cover subjects that looked completely tangential to the actual point, just to get people to the level where they wouldn't reject the main ideas out of hand. I noticed the people falling to the side, debating issues two or three steps before the actual interesting problems. It was this familiar pattern, above all else, that made me actually pay attention.

Everything clicked. I was already thoroughly convinced of civilizational inadequacy. I had long since concluded that there's not much that can hold a strong intelligence down. I had a sort of vague idea that an AI would seek out "good" values, but such illusions were easily dispelled — I was a moral relativist. And the stakes were as high as stakes go. Artificial intelligence was a problem more pressing than my own.

The realization shook me to my core. It wasn't even the intelligence explosion idea that scared me, it was the revelation of a fatal flaw at the foundation of my beliefs. Poorly designed governments had awoken my fear that society can't handle coordination problems, but I never — not once in nearly a decade — stopped to consider whether designing better social systems was actually the best way to optimize the world.

I professed a desire to save the world, but had misunderstood the playing field so badly that existential risk had never even crossed my mind. Somehow, I had missed the most important problems, and they should have been obvious. Something was very wrong.

It was time to halt, melt, and catch fire.

This was one of the most difficult things I've done.


I was more careful, the second time around. The Sequences shook my foundations and brought the whole tower crashing down, but what I would build in its place was by no means a foregone conclusion.

I had been blind to all existential risks, not just AI risk, and there was a possibility that I had missed other features of the problem space as well. I was well aware of the fact that, having been introduced to AI risk by Eliezer's writings, I was biased towards his viewpoint. I didn't want to make the same mistake twice, to jump for the second big problem that crossed my path just because it was larger than the first. I had to start from scratch, reasoning from the beginning. I knew I must watch out for conjunction fallacies caused by nice narratives, arguments made from high stakes (Pascal's mugging), putting too much stock in inside views, and so on. I had to figure out how to actually save the world.

It took me a long time to deprogram, to get back to neutral. I considered carefully, accounting for my biases as best I could. I read a lot. I weighed the evidence. The process took many months.

By July of 2013, I came to agree with MIRI's conclusions.

Disclaimer

Writing it all out like this, I realize that I've failed to convey the feeling of it all. Depending upon whether you believe that I was actually able to come up with better ways to structure people, you may feel that I'm either pretty accomplished or extremely deluded. Perhaps both.

Really, though, it's neither. This raw story, which omits details from the rest of my life, paints a strange picture indeed. The intensity is distilled.

I was not a zealot, in practice. My attempts to save the world didn't bleed much into the rest of my life. I learned early on that this wasn't the sort of thing that most people enjoyed discussing, and I was wary of inferential gaps. My work was done in parallel with an otherwise normal life. Only a select few people were privy to my goals, my conclusions. The whole thing often felt disconnected from reality, just some unusual hobby. The majority of my friends, if they read this, will be surprised.

There are many holes in this summary, too. It fails to capture the dark spots. It omits the feelings of uncertainty and helplessness, the cycles of guilt at being unproductive followed by lingering depression, the wavering between staunch idealism and a conviction that my goals were nothing but a comfortable fantasy. It skips over the year I burned out, writing the whole idea off, studying abroad and building myself a healthier mental state before returning and picking everything back up.

Nothing in this summary describes the constant doubt about whether I was pursuing the best path or merely the easiest one. I've failed to mention my complete failure to network and my spectacular inability to find people who would actually take me seriously. It's hard to convey the fear that I was just pretending I wanted to save the world, just acting like I was trying, because that's the narrative that I wanted. How could someone 'smart' actually fail to find powerful friends if they were really trying for nine years?

I claim no glory: the journey was messy, and it was poorly executed. I tell the story in part because people have asked me where my passion comes from and how I became aligned with MIRI's mission. Mostly, though, I tell the story because it feels like something I have to tell before moving on. It feels almost dishonest to try to save the world in this new way without at least acknowledging that I walked another path, once.

The Source of My Passion

So to those of you wondering where my passion comes from, I answer this: it has always been there. It was a small flame, when I was young, and it was fed by a deep mistrust in society's capabilities and a strong belief that if anyone can matter then I had better try.

From my perspective, I've been dedicating my energy towards 'saving the world' since first I realized that the world was in need of saving. This passion was not recently kindled, it was merely redirected.

There was a burst of productivity these past few months, after I refocused my efforts. I was given a new path, and on it the analogous obstacles have already been surmounted. MIRI has already spent years promoting that rational state of mind, bridging its inferential gap, finding people who can actually work on solving the problem instead of arguing about whether there is a problem to be solved. This was invigorating, like skipping ahead ten years in terms of where I wanted to be.

Alongside that, I felt a burning need to catch up. I was late to the party, and I had been foolish for a very long time. I was terrified that I wouldn't actually be able to help — that, after all my work, the most I'd be able to do to solve the big problems was earn to give. I'd have done it, because the actual goal is to save the world, not to satisfy Nate. But the idea scared me, and the desire to keep actively working on the big problems drove me forward.

In a way, too, everything got easier — I needed only to become good at logic and decision theory, to read a bunch of math textbooks, a task that was trivially measurable and joyfully easy compared to trying to convince the entire world to embrace strange, unpolished ideas.

All these factors contributed to my recent productivity. But the passion, the fervor, the desire to optimize the future — that has been there for a long time. People sometimes ask where I get my passion from, and I find it hard to answer.

We hold the entire future of the universe in our hands. Is that not justification enough?

I learned a long time ago that most people are content to accept the way things are. Everyone wants the world to change, but most are cowed by the fact that they can't change it themselves.

But if the chance that one person can save the world is one in a million, then there had better be a million people trying.

It is this knowledge — that the world will only be saved by people who actually try to save it — that drives me.

I still have these strange ideas, this pet inferential gap that I hope to bridge one day. It still hurts, that things important to me were superseded, but they were superseded, and it is better to know than to remain in the dark.

When I was fourteen, I saw many horrors laid out before us: war, corruption, environmental destruction, and the silent tragedies of automobile accidents, courtroom injustices, and death by disease and aging. All around me, I saw a society that couldn't coordinate, full of people resigned to unnecessary fates.

I was told to settle for making a small difference. I resolved to do the opposite.

I made a promise to myself. I didn't promise to fix governments: that was a means to an end, a convenient solution for someone who didn't know how to look further out. I didn't promise to change the world, either: every little thing is a change, and not all changes are good. No, I promised to save the world.

That promise still stands.

The world sure as hell isn't going to save itself.

Comments

If everybody was cowed by the simple fact that they can't succeed, then that one-in-a-million person who can succeed would never take their shot. So I was sure as hell going to take mine. But if the chance that one person can save the world is one in a million, then there had better be a million people trying.

I want to upvote about twenty times for this phrase alone. I suspect that your psychology was very different from mine; I think I crave stability and predictability a lot more. One of the reasons that "saving the world" always seemed like an impossible thing to do, like something that didn't even count as a coherent goal, was that I didn't know where to start or even what the ending would look like. That becomes a lot more tractable if you're one of a million people trying to solve a problem, and a lot less scary.

However, idealism still scares me. I remember being a kid and reading about communism and thinking that it really ought to work. I remember thinking that if I'd been a young adult back before communism, I would have bet my time and effort on it working. And... it turned out not to work. Since I probably wasn't any smarter than the people who tried to make ...

Communism definitely serves as a warning to smart optimizers to not get ahead of themselves.

But it also cuts the other way: it lets smart optimizers know how powerful some ideas can be.

In a sociology class, the teacher once mentioned to us that Karl Marx was the only truly applied sociologist. I don't know how true this is, but he is certainly the one who has had the most impact.

[anonymous]:

Not coincidentally, Karl Marx was also the first to warn people about unfriendly, overly powerful optimization processes.

It's only a pity he hadn't the words to put it so succinctly!

Kaj_Sotala:
I just ran into an intriguing blog post where the author seems to essentially bring stability and predictability into his life by deliberately pursuing an impossible goal, and remembering this comment, got curious about what you'd think about it:
Error:
The first thing this makes me think of is the Babylon 5 episode "Grail." The concept appeals to me in a romantic sort of way.
Swimmer963 (Miranda Dixon-Luinenburg):
I saw that on your Facebook before I saw it here, so I already had thoughts on it. 1) I can see how it's less scary to think about, as a goal. 2) Picturing it in my head, I can't imagine myself using this and actually feeling motivated to work really hard because of this goal. But that may be less because it's impossible, and more because it's big and vague; my brain has an established problem with big vague goals.
WingedViper:
I have to disagree a bit on the communism part. One of the ways that it went wrong, ending in Totalitarianism, was due to how it was implemented, and was foreseeable to a certain extent. All it really tells us is that we have to take human nature into account when designing a society for humans, not that we shouldn't try out powerful ideas.

I wasn't trying to get pats on the head, I was appealing to the Lord of the Heavens and the Earth. Were we all on the same page, here? This was the creator. He was infinitely virtuous, and he had told us what to do.

I know exactly what you're talking about. I quickly realized as a kid that grown-ups get quite worried if you start taking the religion too seriously.

The more I hear of other LessWrongers' life stories (and taking my own into consideration), the more I realise that one of our defining traits is our inability and/or unwillingness to compartmentalize important ideas.

A1987dM:
Relevant classic LW post
Error:
That's one of the traits that makes me feel at home here.
Error:
I'm somewhat curious what the reaction was. Did they notice the contradiction between wanting the kid to go to church and not wanting him to actually act on what he learned there?
Vaniver:
I suspect the issue is what the kid learned there. They were supposed to be focusing on the pro-social habits and socializing.

The government, though, was a different matter all together. I assumed that a lot of very smart people had put a lot of effort into its design — that's what the "Founding Fathers" meme implied, anyway.

I've always taken the framing of the US Constitution as a cautionary tale about the importance of getting things exactly right. The founding fathers were highly intelligent (some of them, anyway), well-read and fastidious; after a careful review of numerous different contemporary and historical government systems, from the Iroquois confederacy to ancient Greek city-states, they devised a very clever, highly non-obvious alternative designed to be watertight against any loopholes they could think of, including being self-modifying in carefully regulated ways.

It almost worked. They created a system that came very, very close to preventing dictatorship and oligarchy... and the United States today is a grim testament to what happens when you cleverly construct an optimization engine that almost works.

One of the things that is impressive about the Constitution is that it was designed to last a few decades and then reset in a new Constitutional Convention when it got too far from optimal. It's gone far beyond spec at this point, and works... relatively well.

[anonymous]:
Source?
VAuroch:
The source I took this from? My highschool History and Government teacher. Actual source to prove it? Can't find a solid one, though Jefferson certainly endorsed this position (a blog post goes into some detail on one of his letters). Jefferson was extremely suspicious of central government in general (he was the leader of the Republican/states-first faction at the time, as opposed to the Federalist/country-first faction), so I'm not sure how much of the rest would agree. Looking into it further, here's the letter from Jefferson to Madison, and here is Madison's reply. Summary: Nah, 19 years is too short, we're writing law for the "yet unborn" as well as the living. Madison was at the other extreme, obviously; he was one of the most Federalist (though probably not the most; I'd give that spot to Adams). However, the fact that there is a section on the calling of a Constitutional Convention indicates that they expected it to be used. I have no proof, but I'd be willing to bet that Madison, Jefferson, and anyone in between would be very surprised that that provision has never been used in 230 years.
ChristianKl:
Not the most trustworthy source.
VAuroch:
He was a damn good teacher, to be fair. And this was in one of the areas he taught an elective of his own design, so it was something he had studied in more depth than you'd expect.
[anonymous]:
Thanks for the answer.
[anonymous]:
With respect, I think you're giving the American Founders too much credit. Their values were not our values, and their Constitution works extremely well for the kind of society they aimed to create: a republic of white, male, propertied yeoman farmers whose main disagreements were whether to allow slavery and whether this "industrialization" thing would catch on. If the system appears broken today, it is because it is attempting to enforce the norms of a republic of white, male, propertied yeoman farmers on an increasingly urbanized/suburbanized, increasingly post-industrial and networked, increasingly multicultural nation spread across many times the population and land area of the original. Times have actually, really changed, and so have values, but the dead hands of the Founding Fathers are still preserving their norms and values in our time. That is very good engineering.
Nornagest:
My reading suggests that the main disagreements among the framers of the US Constitution (the "Founding Fathers" phrase is a bit too hagiographic for my taste) had to do with regional rivalry and the degree of centralization of power -- concerns which I wouldn't call modern as such, but could fairly be described as perennial. (Compare the modern urban vs. rural distinction, which drives most of the red vs. blue state divide.) Slavery factored into this, but mainly as a factor informing regional differences -- it wouldn't reach its ultimate apocalyptic nation-breaking significance until westward expansion had started in earnest and the abolition movement gained some steam. I'm unaware of any significant disputes over industrialization in early US politics.
gwern:
Hamilton vs Jefferson comes to mind.
Nornagest:
I thought that didn't happen until a decade or so later?
blacktrance:
That doesn't qualify as "early"?
Nornagest:
Should have been more precise. I was talking about the roughly 10-year period between independence and the acceptance of the US Constitution. The 1790s are early in the nation's history, all right, but that was a period of very rapid evolution in US politics.
[anonymous]:
You may know your American history better than I, but I do remember some nascent concerns over whether industry and finance could gain too much power versus the agricultural sector. It's entirely possible I'm just wrong, though.
Lumifer:
...Tolkien..? :-D
Eugine_Nier:
I believe Nornagest counted that under urban versus rural.

When I was finally told how the US government worked, I couldn't believe my ears. It was a mess. An arbitrary, clunky monstrosity full of loopholes a child could abuse. I could think of a dozen improvements off the top of my head.

For what it's worth, the Founding Fathers actually did do quite a bit of research into what kinds of "loopholes" had existed in earlier systems, particularly the one in England, and took steps to avoid them. For example, the Constitution mandates that a census be taken every ten years because, in England, there were "rotten boroughs" which had a member of Parliament even though they had a tiny population. Needless to say, it wasn't easy to get politicians in these districts to approve redistricting laws.

On the other hand, the Founding Fathers didn't anticipate gerrymandering.

To give you an idea of how my teenaged mind worked, it was immediately clear to me that any first-order "improvements" suggested by naïve ninth-graders would have unintended negative consequences. Therefore, improvement number one involved redesigning the system to make it easy to test many different improvements in parallel, adding machinery

...

To be equally fair, a lot of the more obvious exploits in the American system have been tried at one point or another; one of the clearer examples I can think of offhand is FDR's attempt to pack the US Supreme Court in 1937. Historically most of these have been shot down or rendered more or less toothless by other power centers in the government, although a similar (albeit somewhat unique) situation did contribute to the American Civil War.

There's a lot of bad things I could say about the American system, but the dynamic stability built in seems to have been quite a good plan.

"Avoid concentrating power, and try to pit power centers against each other whenever possible" seems to have been a fairly successful design heuristic for governments.

TheAncientGeek:
In this Brit's NSHO, the main problem with the US system is the lack of a limit on campaign spending.
ChristianKl:
You can't really limit campaign spending. If you forbid a billionaire from buying ads, they can go ahead and buy themselves a TV channel or a newspaper. Of course, not everyone can buy a newspaper, so your limit shifts power to those people who are wealthy. You can't create a situation in which nobody can spend money in a way that increases the likelihood that a particular politician gets elected. Money is just too useful for you to be able to pass a law that prevents it from being used to affect public opinion. If you start with hard limits, the money just takes a less obvious road. On the other hand, public funding of elections actually works. You actually need well-funded parties that are funded through government money as actors if you don't want rich people to dominate the political system.
TheAncientGeek:
You can, since this has been done in the UK. And, yes, individuals are limited in how much of the media they can buy up too. You can't cure every disease, but that is no argument for not building hospitals.
Lumifer:
...and did you get a better government as a result?
TheAncientGeek:
We didn't get a choice between two conservative parties.
Eugine_Nier:
True, you appear to have a choice between three left wing parties.
TheAncientGeek:
Socially, maybe, that being where the votes are. However, you will be delighted to hear that the Conservatives are still sufficiently traditional to want to cut welfare to the poor and taxes to the rich.
Moss_Piglet:
And yet not traditional enough to see any problem with the UK's disastrous immigration policy. The BNP exists pretty much entirely because the "conservative" party is more concerned with not being called racists than with doing what the majority of their constituents have been demanding for decades.
TheAncientGeek:
What disaster was that?
ChristianKl:
How do you know that campaign spending is reduced? You don't know the alternative roads that the money travels when you don't allow the obvious roads. Just because you don't see the money flowing anymore doesn't mean that the invisible hand of the market doesn't direct the money to those opportunities where it produces political effects. The loss of transparency of money flow is a big problem with spending limits. Who cares whether individuals are limited when you have corporations? But even if you have antitrust laws that prevent a single corporation from controlling all media that doesn't mean that you can't have 10 corporations with similar agendas controlling all media.

How do you know that campaign spending is reduced?

Revealed preferences and margins. By spending on the 'obvious roads', entities reveal that those are the optimum roads for them and their first choice; by forcing them back onto secondary choices, they must in some way be worse off (for example, be paying more or getting less) else they would have been using those non-obvious roads in the first place; and then by supply & demand, less will be spent.
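To make the margins argument concrete, here is a hypothetical numeric illustration (all numbers invented, not from the comment itself): model a donor as buying "influence" until the marginal value of another unit no longer exceeds the price, then raise the effective price to represent being forced onto a secondary road.

```python
# Toy illustration of the supply-and-demand point above; all numbers invented.
# A donor keeps buying influence while the marginal value exceeds the price.
def units_bought(price, marginal_value=lambda q: 100 - q):
    q = 0
    while marginal_value(q) > price:
        q += 1
    return q

print(units_bought(price=10))  # cheap "obvious road":  90 units
print(units_bought(price=25))  # costlier workaround:   75 units
```

At the higher effective price, less influence gets bought, which is the sense in which the donors forced onto secondary roads are worse off on the margin.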

ChristianKl:
I don't think it's a question of paying more and getting less, but of being less certain about the payout. If you have a policy of giving high-paying jobs to people who furthered the interests of your company once they end their political careers, you aren't certain about the payoff of that spending. On average it will motivate politicians to further your cause, but it's a gamble. It requires a relationship of trust between the politicians and the companies doing the hiring. Only big actors can have those relationships. You might be right that total money spent goes down, but that's not the thing we really care about. We care about the amount that policy gets influenced by special interests.
TheAncientGeek:
Political parties are only allowed limited airtime on mass media: they may be able to spend money on other things, but they would be less effective. If the govt says you can only broadcast for five minutes a year, that isn't a free market. Again, not being able to do something perfectly is not a good reason not to do it at all.
ChristianKl:
Okay, then I don't hire an advertising company to produce advertising. I instead hire them to produce a documentary on my favorite political issue and then sell that documentary for a low price to a TV station, which runs it not as advertising but as a documentary. You know that a lot of the players who produce the documentaries you see on TV also produce advertising for paying clients, right? Is it really an improvement of the political system if you get less political speech that's overtly labeled as advertising?
TheAncientGeek:
There is already a large amount of politics that is not labelled as advertising, whether in the form of songs, movies, or newspaper articles. Since the UK system also limits overall spending by parties, what they are able to do by means other than overt advertising is a drop in the ocean.
ChristianKl:
Then the rich corporation that wants to influence a political party doesn't donate money but things bought with money. You probably do succeed in weakening political parties. If you are a lobbyist and want to influence politics to further the agenda of a corporation, you want weak political parties. If you look at the US, it's a country of very weak parties. The heads of the Republican and Democratic parties don't have much political power. To have a career as a politician in Germany, you mainly have to impress fellow members of your political party. To have a career as a politician in the US, you mainly have to impress corporate donors who fund your campaign. I prefer the incentives of the German system.
V_V:
Which would still be constrained by donation limits, I suppose. IIUC, there are no spending limits by corporations in the US system.
ChristianKl:
No. If I hire a polling firm to gather data about the views held by voters and hand the resulting data over to a politician, that doesn't count against donation limits. If you want to label such acts as donations to be limited, you destroy a lot of free speech rights. There are spending limits as far as corporations donating money to political parties go. Citizens United basically says that anyone can make a Super PAC and that Super PAC is allowed to buy TV ads. It doesn't say that you can just hand over the cash to a political party. The US Democratic party is currently chaired by Debbie Wasserman Schultz. If you made a list of the most influential US politicians, I doubt that Debbie Wasserman Schultz would make the top ten. The institution of the Democratic party is just too weak for heading it to give you a lot of political power. I don't want to say that Debbie Wasserman Schultz has no political power at all, but her power is minuscule compared to that of the head of a German political party.
V_V:
If you publicly disclose the results, then you are helping everybody. If you disclose the results only to a politician, then you are making a donation, by any reasonable meaning of the term. Is there more private funding of politics, per capita or per unit of GDP, in the US or in Germany? I don't have the data at hand, but I'll bet that in the US corporations and wealthy individuals spend more on politics than Germans do. Moreover, the German electoral system is a mix of relative majority and proportional representation, whereas the US one is a mostly pure relative majority system. Pure relative majority systems tend to produce two parties with weak identities, with most political competition happening inside each party, and party chairpersons acting more as senior administrators and mediators than as political leaders, while proportional representation favours political landscapes with multiple parties with strong identities and strong leaders.
Nornagest:
"Donation" conventionally refers to money or tangible resources: you can donate a thousand dollars, the use of a building, or your services in some professional capacity, but the word's usually not used for advocacy, data, or analysis. I'm not sure there's a word for an unsolicited gift of privately held information that you don't intend to publicly disclose; if you did intend to disclose it at some point, it'd be a leak. In this case you're essentially working as a think tank, though, and I don't believe think tank funding is generally counted as a direct political contribution. Might work differently in Europe, though.
V_V:
I suppose that disclosing data bought from a commercial polling service would count as a political donation, though I'm not sure what regulations actually say in various jurisdictions. Anyway, certainly there are ways to perform political activism that don't count as campaign donations; my point is that their effect on the outcome of the election is likely not the same as that of direct donations of money, ads, building use, and other tangible goods or services.
Eugine_Nier:
That's because of differences in the electoral system. In the German system people vote for party lists, which the party heads choose; in the US system people vote directly for politicians. Furthermore, each party's candidate is decided by another election, called a primary. This leaves a lot less for party officials to do.
TheAncientGeek:
Such as?
ChristianKl:
In the US, straightforward things such as TV ads. In the US a lot of the political ads are paid for by Super PACs that aren't allowed to donate money to candidates or parties but are allowed to buy advertising. Apart from ads, modern political campaigns usually depend on polling voters to target messages. A corporation can just pay a polling company to run a poll and then give the resulting data to the political party, so it can better target its messages. Of course, the moment the corporation pays the bills of the polling company instead of the political party, the polling company suddenly has an interest in shaping the poll to the liking of the corporation. A politician can use more personal assistants; if a lobbyist wants to serve as a personal assistant for free, there's often no reason for the politician to just send the lobbyist away. The kid of the politician needs a job? The politician is probably grateful to a lobbyist who makes the necessary connections for the kid to get a good job. It's not easy to calculate how much it costs a corporation to arrange the job for the kid and how big a favor the corporation can ask for later, but I don't see why it would likely be a much worse return on the money than a corporation donating money to a party to run TV ads.
V_V:
You are being hypercritical. Yes, there are loopholes that sufficiently motivated individuals can use to elude regulation to a certain extent, but this doesn't mean that they are as effective as just giving cash. Cash is much more fungible than anything else.
ChristianKl:
Cash gives the person who pays the bill power. If a political party pays money it got donated by a corporation to a polling firm to target ads, then the polling firm serves the interests of the political party. If the person who pays the bill is a corporation that then donates the resulting data, the polling firm has an interest in shaping the data in the interests of the corporation. The political party and politicians prefer receiving cash. The lobbyists, on the other hand, don't prefer to give cash. If you now come and pass a law that makes it harder for politicians to accept cash to use for political purposes, you weaken the politicians and therefore strengthen the lobbyists. Which is of course exactly how we get such laws in a society in which lobbyists hold a lot of political power and want more power. The only way to keep lobbyists from increasing their power is to actually give other political actors more power. That means public funding of elections.
Eugine_Nier:
And that's precisely the problem. The net effect of these regulations is to limit political influence to those who are sufficiently motivated. This is already the mechanism behind things like regulatory capture; these laws just make the effect worse.
V_V:
While allowing donations of millions of dollars extends political influence to the average person?
Eugine_Nier:
My point is that the barrier to entry to donate large amounts of money is lower than the barrier to elude regulations.
V_V:
Possibly. But the point is how much political influence you get. Influencing politics with direct donations is much more efficient than eluding regulation.
Eugine_Nier:
The way an American would phrase it is: To have a career as a politician in Germany you mainly have to impress the party bosses. To have a career as a politician in the US you mainly have to impress your constituents.
ChristianKl:
Not completely. If I live in Berlin and want to be elected into the Bundestag for the SPD, I want to get a high place on the SPD list allocated at the Berlin SPD party convention. The head of the SPD in Berlin is the person who became head because they have a majority of the Berlin SPD behind them, but their power over the convention isn't absolute. It's like the power Nancy Pelosi has over Democratic US congressmen. Yes, constituents weighted by the amount of political donations that they can give.
Nornagest:
In fairness, we can't very well assume without evidence that this is true, either. We're probably best off comparing results; are the laws of the UK notably friendlier or unfriendlier to wealthy individuals? What about monied businesses? Note that friendliness in this sense doesn't necessarily mean deregulation; regulations tend to lower profits but also tend to raise barriers to entry. If a particular business institution is worried about disruption by emerging players, it may be rational for it to accept or even push for regulation. Trade barriers are an especially pure example.
Eugine_Nier:
Then you get into the question of what qualifies as a party for purposes of getting public money. I can see this degenerating into a system for keeping non-established parties out.
ChristianKl:
I think our German system works quite well in that regard. The main reason the Pirate Party didn't enter the Bundestag is that they were largely incompetent. Infighting weakened them. Snowden gave them the perfect topic, but all they did was be reactive and say the establishment is bad, instead of developing policy ideas which they could have pushed into reality. The main problem with establishing a new party is getting competent people together who are willing to think deeply about public policy and who don't destroy each other through infighting.
Vulture:
And while changing the American system would be incredibly difficult, democracies which formed later tended to use better-patched versions of the American system: there's a reason that most western European countries have more than two major parties, for instance.

Parties aren't a built-in feature of the American political system as such -- in fact, many of the people involved in setting it up were vociferous about their opposition to factionalism (and then proceeded more or less directly into some rather nasty factional conflict, because humans). The first-past-the-post decision system used in American federal elections is often cited as leading to a two-party system (Duverger's law), and indeed probably contributes to such a state, but it's not a hard rule; the UK for example uses FPTP voting in many contexts but isn't polarized to the extent of the US, though it's more polarized in turn than most continental systems.

You'll excuse me if I don't mention them here: there is a lot of inferential distance. Perhaps one day I'll write a sequence.

What a tease! Why not give us a short bullet point list of your conclusions? Most readers around here wouldn't dismiss them out of hand, even lacking a chain of arguments leading up to them. It's enjoyable to jump across inferential chasms. Especially if you think of your conclusions as important. Are they reactionary?

It's tempting to say "either I present my conclusions in their most convincing form, as a sequence, or not at all", but remember that in resource constrained environments, the perfect is the enemy of the good.

Why not give us a short bullet point list of your conclusions? Most readers around here wouldn't dismiss them out of hand, even lacking a chain of arguments leading up to them.

We sure would. We think we are smart, and the inferential gap the OP mentioned is unfortunately almost invisible from this side. That's why Eliezer had to write all those millions of words.

Easy test: send a summary/bullet point/whatever as a private message to a few select people from LessWrong, and ask them for their reactions. Possible loss: a few select members become biased, due to the large inferential gap, against the ideas that you gave up to pursue a more important goal. Possible gains: rational feedback on your ideas, supporters, and an estimate of the number of supporters you could gain by sharing your ideas more widely on this site.

Easy test: send a summary/bullet point/whatever as a private message to a few select people from LessWrong, and ask them for their reactions.

That is an interesting test but it is not testing quite the same thing as whether the conclusions would be dismissed out of hand in a post. "Herding cats" is a very different thing to interacting with a particular cat with whom you have opened up a direct mammalian social exchange.

5So8res10y
Perhaps. People, PM me if you're interested. No guarantees.
5Kaj_Sotala10y
In case So8res wants to try this, I'd be quite curious to see the bullet points.
1Yaakov T10y
me too
1MrMind10y
Me too.
1blacktrance10y
And me as well.
0EndlessStrategy10y
I think you underestimate the potential loss. Worst-case scenario: one of the people he PMs his ideas to puts them online and spreads links around this site.
8[anonymous]10y
Do we presume Eliezer had to write all those millions of words?
5shminux10y
Write a bullet-point summary for each sequence and tell me that one would not be tempted to "dismiss them out of hand, even lacking a chain of arguments leading up to them", unless one is already familiar with the arguments.

I'll try, just for fun, to summarize Eliezer's conclusions from the pre-fun-theory and pre-community-building parts of the Sequences:

  • artificial intelligence can self-improve;
  • with every improvement, the rate at which it can improve increases;
  • AGI will therefore experience exponential improvement (AI fooms);
  • even if there's a cap to this process, the resulting agent will be a very powerful agent, incomprehensibly so (singularity);
  • an agent's effectiveness does not constrain its utility function (orthogonality thesis);
  • humanity's utility function occupies a very tiny and fragmented fraction of the set of all possible utility functions (human values are fragile);
  • if we fail to encode the correct human utility function in a self-improving AGI, even tiny differences will result in a catastrophically unpleasant future (UFAI as x-risk);
  • AGI is coming fairly soon, so we had better hurry to figure out how to get the previous point right.
6JamesAndrix10y
Anecdote: I think I've had better responses summarizing LW articles in a few paragraphs without linking than linking to them with short explanations. It does take a lot to cross those inferential distances, but I don't think quite that much. To be fair, my discussions may not cover a whole sequence; I have the opportunity to pick out what is needed in a particular instance.
6[anonymous]10y
That would kind of require that I spend my time reading dozens to hundreds of blog entries espousing a mixture of basic good sense and completely unfalsifiable theories extrapolated from pure mathematics, just so I can summarize them in terms of their most surprising conclusions. EDIT: The previous comment is not meant as personal disrespect; it's just meant to point out that treating Eliezer's Sequences as epistemically superlative, and requiring someone to read them all to have even well-informed views on anything, is... low-utility, especially considering I have read a fair portion.
4shminux10y
I agree with all that, actually. My original point was not that Eliezer was right about everything, or that the Sequences should be canonized into scriptures, but that the conclusions are far enough from the mainstream as to be easily dismissed if presented on their own.
1[anonymous]10y
Which ones?
1EGarrett10y
Eli, I want to +1 this comment because I agree with the awkwardness of expecting people to read such a large amount of information to participate in a conversation, but it looks like you're also suggesting that those articles are "just basic good sense." Unless I misunderstood you, that's "obviousness-in-retrospect" (aka hindsight bias). So I won't go +1 or -1.
6[anonymous]10y
I wouldn't say retrospect, no. Maybe it's because I've mostly read the "Core Sequences" (covering epistemology rather than more controversial subjects), but most of it did seem like basic good sense, in terms of "finding out what is true and actually correcting your beliefs for it". As in, I wasn't really surprised all that much by what was written there, since it was mostly giving me vocabulary for things I had already known on some vaguer level. Maybe I just had an abnormally high exposure to epistemic rationality prior to coming across the Sequences via HPMoR, since I found out about those at age 21 rather than younger and was already of the "read everything interesting in sight" bent? Maybe my overexposure to an abnormally scientific clade of people makes me predisposed to think some degree of rationality is normal? Maybe it was the fact that when I heard about psychics as a kid I bought myself a book on telekinesis, tried it out, and got bitterly disappointed by its failure to work -- indicating an abnormal predisposition towards taking ideas seriously and testing them? Screw it. Put this one down as "destiny at work". Everyone here has a story like that; it's why we're here.
0EGarrett10y
I think we see eye to eye: we both came here with a large amount of pre-existing knowledge and understanding of rationality, and I think for both of us reading all of the sequences is just not a realistic expectation. But by the same token, I can't go with you when you say the ideas are basic. Even if you knew them already, they are still very important and useful ideas that most people don't seem to know or act upon. I have respect for them and the people who write about them, even if I don't have time to go through all of them, and the inability to do that forms a significant barrier to my participation in the site.
7itaibn010y
Personally, I think it is plausible that I would find such a bullet-point list true or mostly true. However, I have already dismissed out of hand the possibility that it would be at once true, important, and novel.
0MTGandP10y
When I read this story, I became emotionally invested in Nate (So8res). I empathized with him. He's the protagonist of the story. Therefore, I have to accept his ideas because otherwise I'd be rejecting his status as protagonist.

We hold the entire future of the universe in our hands. Is that not justification enough?

It's too much justification. Don't assume that this immense savage universe is just a growth medium for whatever microbe wins the game on Earth.

4Vulture10y
Personally, I assume this as a two-place function; I assume that by my values, "basically a growth medium for humanity" is a good and useful way to think about the universe. Someone with a different value system, e.g. placing greater value than I do on non-human life, might prefer that we not think of it that way. Oh well.

This is not about values, it is about realism. I am protesting this presumption that the cosmos is just a dumb desert waiting for transhumanity to come and make it bloom in our image. If a line of argument tells you that you are a 1-in-10^80 special snowflake from the dawn of time, you should conclude that there is something wrong with the argument, not wallow in the ecstatic dread of your implied cosmic responsibility. It would be far more reasonable to conclude that there is some presently unknown property of the universe which either renders such expansion physically impossible, or which actively suppresses it when it begins to occur.

Would you agree that you are carrying out a Pascal's Muggle line of reasoning using a leverage prior?

http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/

If so, you're using it very controversially, compared to disbelieving in a googolplex or Ackermann of leverage. A 10^-80 prior is easy for sensory evidence to overcome if your model implies that fewer than 10^-80 sentients hallucinate your sensory evidence; this happens every time you flip 266 coins. Conversely, to state that the 10^-80 prior is invincible just restates that you think more than 10^-80 sentients are having your experiences, due to Simulation Arguments or some explanation of the Fermi Paradox which involves lots of civilizations like ours within any given Hubble volume. In other words, to say that the 10^-80 prior is not beaten by our sensory experience merely restates that you believe in an alternate explanation for the Fermi Paradox in which our sensory experiences are not rare.
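For what it's worth, the 266-coins figure checks out; a minimal sketch of the arithmetic in Python:

```python
import math

# One specific sequence of 266 fair coin flips has probability 2^-266,
# which is just below the 10^-80 threshold discussed above.
p_sequence = 2.0 ** -266
print(p_sequence)                      # ~8.4e-81 < 1e-80

# Smallest number of flips whose specific outcome falls below 10^-80:
flips = math.ceil(80 * math.log2(10))  # 80 * log2(10) ~ 265.75
print(flips)                           # 266
```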

From the "Desk" of: Snooldorp Gastool V

Attention: Eliezer Yudkowsky Machine Intelligence Research Institute

Sir, you will doubtlessly be astonished to be receiving a letter from a species unknown to you, who is about to ask a favor from you.

As fifth rectified knigget of my underclan's overhive, I have recently come into possession of an ancient Andromedan passkey, guaranteeing the owner access to no less than 2^419 intergalactic credits. My own species is a trans-cladistic harmonic agglomerate and therefore does not satisfy the anghyfieithadwy of Andromedan culture-law, which stipulates that the titular beneficiary of the passkey (who has first claim on half the credits) must be a natural sophont species. However, we have inherited a trust relationship with a Voolhari Legacy adjudication system, in the vicinity of what you know as the Orion OB1 association, and we have verified that your species is the nearest natural sophont with the technical capacity and cognitive inclinations needed to be our partners in this venture. In order to earn your share of this account, your species should beam by radio telescope its genome, cultural history, and at least two hundred (200) ... (read more)

I directly state that, for other reasons not related to the a priori pre-sensory exclusion of any act which can yield 2^419 credits, it seems to me likely that most of the sentients receiving such a message will not be dealing with a genuine offer.

5Kawoomba10y
Best comment I read all week. Thanks!
3MugaSofer10y
OK, that is excellent.
5Mitchell_Porter10y
I want to respond directly now... It seems to me that winning the leverage lottery (by being at the dawn of an intergalactic civilization) is not like flipping a few hundred coins and getting a random bitstring that was not generated in that fashion anywhere else in our Hubble volume. It is like flipping a few hundred coins and getting nothing but heads. The individual random bitstring is improbable, but it is not special, and getting some not-special bitstring through the coin-flipping process is the expected outcome. Therefore I think the analogy fails, and the proper conclusion is that models implying a "cosmic manifest destiny" for present-day Earthlings are wrong. How this relates to the whole Mugging/Muggle dialectic I do not know; I haven't had time to see what's really going on there. I am presently more interested in the practical consequences of this conclusion for our model of the universe than I am in the epistemology.
5private_messaging10y
Yeah, exactly. The issue is not so much the 10^-80 prior as the 10^-80 prior on obtaining it randomly vs. the much, much larger prior of obtaining it because, say, you can't visually discriminate between the coin sides.
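A toy Bayes calculation makes the contrast concrete; the priors below are illustrative assumptions, not numbers from the thread:

```python
# After observing 266 "heads" in a row, compare two hypotheses:
# a fair coin landing on this exact sequence, vs. a coin whose sides
# you can't tell apart (so every flip gets recorded as heads).
prior_trick = 1e-6            # assumed prior for the indistinguishable-sides coin
prior_fair = 1 - prior_trick

likelihood_fair = 0.5 ** 266  # ~8.4e-81: all-heads under a fair coin
likelihood_trick = 1.0        # the trick coin always reads "heads"

posterior_odds = (prior_trick * likelihood_trick) / (prior_fair * likelihood_fair)
print(f"{posterior_odds:.2e}")  # ~1.19e+74 to 1 in favour of the trick coin
```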
3ArisKatsaris10y
My own position regarding this is that we haven't yet even started properly thinking about how to use anthropic evidence. E.g. you're seemingly just treating every single individual consciousness in the history of the universe as of equal probability to have been 'you', but that by itself implies an assumption that there exists a well-defined thing called 'individual consciousness' rather than a confusing combination of different processes in your brain... That they must each be given equal weight is an additional step that I don't think can be properly supported (e.g. if MWI is correct and my consciousness splits into a trillion different people every second, some of which merge back together, what is the anthropic weight assigned to my past self vs my future self?) Another possibility would be that for some reason anthropic evidence is heavily tilted to favour the early universe -- that it's more likely to 'be' someone in the early universe, the earlier the better (e.g. easier to simulate the early than the late universe, hence more Universe-simulators do the former than the latter). Or anthropic evidence could be tilted to favour simple intelligences (e.g. easier to simulate simple intelligences than complex ones). (The above is not meant to imply that I support the simulation hypothesis. I'm just using it as a way of demonstrating how some anthropic calculations may be off.)
3private_messaging10y
You could think of the "utilities" in your utilitarianism. Why would one unit of global utility that you can sacrifice be able to produce 10^80-ish units of utility gain? You're unlikely to come across a unit of utility that you can so profitably sacrifice (if utility is bounded and doesn't just exponentially stack up in influences ad infinitum). This removes the anthropic considerations from the leverage problem.
2ArisKatsaris10y
Since utility isn't an inherent concept in the physical laws of the universe but just a calculation inside our minds, I don't see your meaning here: you don't "come across" a unit of utility to sacrifice; you seek it out. An architect who seeks to design a skyscraper is more likely to succeed in designing a skyscraper than a random monkey doodling. To estimate the architect's chances of success, I see no point in starting out by thinking "how likely is a monkey to be able to randomly design a skyscraper?"
3private_messaging10y
It seems to me that there's considerably less search in "not buy a Porsche" than in "build a skyscraper". Let's suppose you value paperclips. Someone takes 10 paperclips from you, unbends them, but makes 10^90 paperclips later thanks to their use of those 10 paperclips. In this hypothetical universe, these 10 paperclips are very special, and if someone gives you coordinates of a paperclip and claims it's one of those legendary 10 paperclips (that are going to be turned into 10^90 paperclips), you'd be wise to be quite skeptical -- you need evidence that the paperclip you're looking at is so oddly located within the totality of paperclips. edit: or if someone gives you papers with paperclip marks left on them and says they're the papers that were held together by said legendary paperclips. edit2: albeit I do agree -- if we actually seek out something, we may be able to overcome very large priors against. In this case, though, the issue is that we have a claim that our existing intrinsic values are in a necessarily very unusual relation to the vast majority of what's intrinsically valuable.
-2V_V10y
What sensory experience are you talking about?
5[anonymous]10y
When it comes to the Fermi Paradox, we have an easy third option with a high prior for which a small amount of evidence is starting to accumulate: we are simply not that special in the universe. Life, perhaps even sapient life, has happened elsewhere before, and will happen elsewhere after we have either died off or become a permanent fixture of the universe. There may already be other species who are permanent fixtures of the universe, and who have chosen for one reason or another not to interfere with our development. In fact, I would figure that "don't touch life-infested planets" might be a very common moral notion among trans-$SPECIES_NAME races, a kind of intergalactic social contract: any race could have been the one whose existence would have been prevented by strip-mining their planet or paving it over in living quarters or whatever they do with planets, so everyone refrains from messing with worlds that have evolution going on. As to the evidence, well, as time goes on we're finding out that Earth is less and less of an astronomical (ahaha) rarity compared to what we thought it was. Turns out liquid-water planets aren't very common, but they're common enough for there to be large numbers of them in our galaxy. Given two billion planets and billions upon billions of years for evolution to work, I think we should give some weight to the thought that someone else is out there, even though they may be nowhere near us and not be communicating with us at all.
4[anonymous]10y
What, no aliens? But I was really looking forward to meeting them! Or just makes it very quiet, or causes it to happen at the same time to multiple species. There could already be someone out there who "went trans-flooberghian" and has begun expanding, but they do so slowly and quietly to responsibly conserve resources on the opposite side of the galaxy from us. How would we know?
2Vulture10y
Oh, okay, I understand what you mean now. Sorry for the misplaced "rebuttal". I don't understand this topic well enough to have a real opinion about the Great Filter, so I think I'll butt out.
1[anonymous]10y
I would contend that it's the simple, KNOWN attributes of the universe that render expansion past islands of habitability implausible.
4wedrifid10y
Maybe not assume. But I'll most likely conclude that is what it is after analysis of my preferences, philosophy, and the multiverse as best I can understand it.
2[anonymous]10y
THIS so many times over. I can never understand why the idea that replicating systems might just never expand past small islands of clement circumstances (like, say, the surface of the Earth) gets so readily dismissed in these parts.
2yli10y
People in these parts don't necessarily have in mind the spread of biological replicators. Spreading almost any kind of computing machinery would be good enough to count, because it could host simulations of humans or other worthwhile intelligent life. (Note that the question of whether simulated people are actually conscious is not that relevant to the question of whether this kind of expansion will happen. What's relevant is whether the relevant decision makers would come to think they are conscious. For example, even if simulated people aren't actually conscious, after interacting with simulated people integrated into society all their lives, most non-simulated people would probably think they are conscious, and thus worth sending out to colonize space. And the simulated people themselves will definitely think they are conscious.)
3[anonymous]10y
I wasn't limiting myself to biology, hence talking about 'replicating systems'. I was more going for the possibility that the sorts of places where non-biologically-descended replicators can replicate are also very limited, possibly not terribly much wider-ranging than those in which biological replicators can work. We can send one-off things that work for a long time all over the place, but all you need for them not to establish themselves somewhere is for the successful replacement rate to be less than one.

I liked this series a lot. Thanks for writing it.

But I couldn't resist this small math nitpick: "But if the chance that one person can save the world is one in a million, then there had better be a million people trying." -> That's a great quote, but we can be more precise:

If these probabilities were indeed independent (which they can't possibly be, but still), and a million people tried with a chance of 1 in a million each, then the chance P that the world is saved is only P = 1 - (999999/1000000)^1000000 ≈ 63.2%. If we want the world to be saved w... (read more)
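A minimal sketch of that calculation in Python (the 99% target below is an illustrative assumption, since the comment is cut off):

```python
import math

p = 1e-6        # each person's independent chance of saving the world
n = 1_000_000   # number of people trying

# Chance that at least one of them succeeds
P = 1 - (1 - p) ** n
print(f"{P:.1%}")   # 63.2%, i.e. roughly 1 - 1/e

# People needed for a 99% chance (illustrative target)
target = 0.99
n_needed = math.ceil(math.log(1 - target) / math.log(1 - p))
print(n_needed)     # ~4.6 million
```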

3MalcolmOcean9y
I was amused to note that Nate's site (click "More »") now reads:

This was enjoyable to me because "saving the world", as you put it, is completely unmotivating for me. (Luckily, I have other sources of motivation.) It's interesting to see what drives other people and how the source of their drive changes their trajectory.

I'm definitely curious to see a sequence or at least a short feature list about your model for a government that structurally ratchets better instead of worse. That's definitely something that's never been achieved consistently in practice.

I'm not sure that dismissing government reform is necessarily the right thing to do, even if AI risk is the larger problem. The timelines for the good the solutions do may be different -- even if you have to save the world from UFAI fifty years from now, there's still a world of good you can do by helping people with a better government system in the meantime.

Also, getting a government that better serves its constituents' values could be relevant progress towards getting a computer program that serves its programmers' values well.

You've probably thought thr... (read more)

You've probably thought through these exact points, but glossed over them in the summary.

Yep. Trust me, I really wanted the answer to be that I should leverage a comparative advantage in reform instead of working on AI risk. There are a number of reasons why it wasn't. It turns out I don't possess much advantage on reform work -- most of the mental gains are transferable, and I lack key resources such as powerful friends & political sway that would have granted reform advantage. But yes, this was a difficult realization.

Attending the December MIRI workshop was actually something of a trial run. If it had turned out that I couldn't do the math / assist with time, then I would have donated while spending time on reform problems. It turns out that I can do FAI research, though, and by now I'm quite confident that I can do more good there.

[-][anonymous]10y130

Yeah, could I take five minutes to stump for more people getting involved in FAI issues? I've read through the MIRI/FHI papers on the subject, and the field of Machine Ethics is still in its infancy, with a whole lot of open questions, both philosophical and mathematical.

Now, you might despair and say: Google bought DeepMind, there's gonna be UFAI, we're all gonna die because we didn't solve this a decade ago.

I prefer to say: this field is young and growing more serious and respected. This means that the barrier to entry for a successful contribution is relatively low and the comparative advantage of possessing relevant knowledge is relatively high, compared to other things you could be pursuing with similar skills.

4[anonymous]10y
Merely being a charismatic opinion leader and donating a bunch of money can help with reforming government, though. Many of us can do that, and do so.

Thanks for sharing. I suspect this is a common sort of story amongst LessWrong readers.

Depending upon whether you believe that I was actually able to come up with better ways to structure people, you may feel that I'm either pretty accomplished or extremely deluded. Perhaps both.

I accepted your claim easily as I think it's plausible that a great many people have come up with better ways to structure people. It's in the aggregate form that stupidity prevails.

I see a hilarious and inspiring similarity between your story and mine. 

In high school, I realized that I enjoyed reflecting on topics to achieve coherence, discussing the mechanisms of superficial phenomena, and wanted everyone to be happy on a deep level. So I created a religion, because, of course, I wanted to save the world. I thought other religions were failed attempts to incorporate modern positive-psychology findings (which had "solved happiness") into moral theories, but I wanted to use the meme potential of social phenomena like religion ... (read more)

Nice story.

I notice you don't talk about the interaction between the two big goals you've held. Your beliefs here presumably hinge on timescales? If most existential risk is a long way off, then improving the coordination and decision making of society is likely a better route to long-term safety than anything more direct (though perhaps there is something else better still).

If you agree with that, when historically do you guess the changeover point was?

7So8res10y
There are a number of factors here. Timescales are certainly important. I obviously can't re-organize people at will. Even in a best-case scenario, it would take decades or even centuries to transition social systems, to shift away from governments and nations, and so on. If I believed AI would take millennia, then I'd keep addressing coordination problems. However, AI is also on the decades-to-centuries timescale. Furthermore, developing an FAI would (depending upon your definition of 'friendly') address coordination problems. Whether my ideas were flawed or not, developing FAI dominates social restructuring.

I'm not quite sure what you mean. Are you asking for the historical date at which I believe the value of a person-hour spent on AI research overtook the value of a person-hour spent on restructuring people? I'd guess maybe 1850, in hopes that we'd be ready to build an FAI as soon as we were able to build a computer. This seems like a strange counterfactual to me, though.
5owencb10y
Yes, that was the question I was asking (I am not certain we are over the threshold, and certainly suspicious of answers before about 1965, so I wanted to find out how far apart our positions were). I agree that a good enough AI outcome would address coordination problems, but this cuts both ways: a society which deals with coordination problems well must, all else equal, be more likely to achieve AGI safely than one which does not. Early enough work seems hard to target at FAI rather than just accelerating AI in general (though it's possible you could factor out a particular part such as value loading). Given that I think we see long-term trends towards better coordination and decision-making in society, it is not even clear this work would be positive in expectation. There is a counter-consideration that AI might be safer if developed earlier, when less computing power is available, but I guess this is a smaller factor.
5Kurros10y
It would have been kind of impossible to work on AI in 1850, before even modern set theory was developed. Unless by "work on AI" you mean work on mathematical logic in general.

A decade late to the party: thank you for reminding me once again what is important, and for rekindling the spark.

Eventually, I had a few insights that I've yet to find in the literature, a few ideas that I still actually believe are important. You'll excuse me if I don't mention them here: there is a lot of inferential distance. Perhaps one day I'll write a sequence.

Did you end up writing them anywhere?

I didn't have much trust psychological literature, anyway

You're missing a word here

3So8res10y
Fixed, thanks.

Thank you for sharing your story and methods.

I have to agree with Kawoomba. It would be totally awesome to try to puzzle out the reasons you have for your ideas with just the ideas given. An hour of your time (to write a post) could prompt people to change their minds on how society should be optimized, and that is an opportunity that you shouldn't miss. Also, changing the way society works is one of my pet causes.

[-][anonymous]10y10

So what is your ideal meta-social system? I'm glad you've turned to a better path, but I would hate for that work to have gone to waste. I know like-minded people from the bitcoin space who are working on social reform and radical changes of governance, and it would be interesting to ferry them across that inferential gap.

Thank you. I can relate to much of what you said, which isn't terribly rare here.

And the most enjoyable of the feelings evoked in me (as has happened on several occasions already) is seeing a young one being better and more promising than me.

(Though my enjoyment at being superseded is dangerous in the sense that such may be associated with laziness, so you are very welcome to not enjoy yours -- or enjoy, however you wish.)

The actual reason why I started to comment at all, however, is that it's amusing to note how I'm in a sense in the reverse of your situati... (read more)

8TheAncientGeek10y
But it doesn't include bringing back kings. This is why examples are important. You can't conclude "the world just doesn't want to listen to ideas, however good" if the ideas are, in fact, terrible.
1Aleksei_Riikonen10y
As a Brit, you already have a king/queen in your country. Details are important as well as examples, and I'm not in the business of simply bringing back empowered kings. In the system I discussed the role mostly is about being a cool figurehead, not so terribly different from what you have now (though the king would be elected from among the re-invented Aristocracy in a meritocratic way, and therefore be better at the role than what you have now -- and it is of course true that the discussed system would be about bringing back the nobility in a genuinely empowered way).
0TheAncientGeek10y
And it's not paradise on Earth. How does that work? If an aristocrat's offspring are crap, do they get thrown out of the aristocracy? If so, how does that differ from meritocracy?
3Aleksei_Riikonen10y
I believe this comment thread is not the proper place to discuss the details of my proposal. (Also I believe the page linked earlier answers those specific questions.)
-2TheAncientGeek10y
I've read it, and I believe it doesn't.
8Aleksei_Riikonen10y
Regardless, I wish to not take over (a part of) this comment thread by discussing this thing in detail. If further comments from me on the matter are in demand, contacting me through some other means is a better option.
-4V_V10y
A bunch of self-righteous inbreeds running everything. With horses. That's totally going to work...
7ChristianKl10y
I think a lot of things that look like bright political ideas are in part a misunderstanding of the problem.

By July of 2013, I came to agree with MIRI's conclusions.

Do you think the orthogonality thesis is intuitively true, or have you worked out a way to prove it? Because in the latter case I'd like to avoid reinventing the wheel...

Everyone wants the world to change, but most are cowed by the fact that they can't change it themselves.

http://www.youtube.com/watch?v=oBIxScJ5rlY

But what if the best method to save the world involves government after all? Government is, after all, how humans coordinate our resources toward our goals. Our current government is also working on the AI project, and there are decent odds that it will be solved by either our espionage or military research branches. Meanwhile, the individuals and groups working on the Friendly aspect of the AI project are poorly coordinated and poorly funded. Perhaps there is a way you could use your old expertise on your new goal?

5Dias10y
Government, and markets, and religion, and persuasion, and societies, and charities, and mailing lists, and kickstarter projects, and corporations...
-1Strange710y
Name one Kickstarter project that can afford to support a hundred riflemen or a single armored fighting vehicle "just in case we need them later," and one otherwise-credible government that can't.
2So8res10y
It may well. I know not absolutes; I merely play the probabilities. To be clear, though, my goals did not include "become a powerful politician". The goals were more along the lines of convincing lots of people that, hey, remember when people spent a long time thinking about better ways to run a government, and then founded America, and it turned out pretty good? What if we did that again, only on a regular basis, on small scales, preferably non-territorially, all of the time? It's unlikely that I'll be able to convince a few million people to secede from their nations (without invoking the ire of their tax collectors) anytime soon. Hopefully, yeah. Much of my expertise is transferable between domains (resolve, passion, productivity, intelligence, etc.) -- I actually don't have much of a specific advantage in societal reform. That which I do have is trumped by the relative importance of AI risk -- sunk cost fallacy, and all that.
1seed4y
What about the millions of people who are already stateless? I once thought to try and bring about anarcho-capitalism by starting a campaign for stateless people's rights, before I came up with a better plan.
0Strange710y
Strictly speaking, all the people who actually remember that time period are long dead. Accordingly you may be underestimating the amount of work involved in re-thinking literally everything about how to run a government. It's a lot easier to convince somebody to put in as many hours as it takes to assemble a house using only a hand-axe and a forest when the alternative is being rained on while they sleep and eventually eaten by a bear, compared to the situation where they already have a semi-adequate house.
0VAuroch10y
So seasteading?
0BloodyShrimp10y
While this is also what came to my mind, the next thing that came to my mind was that this is exactly what the kind of communication failure So8res was worried about would look like.

This post seems uncomfortably arrogant to me.

To be clear, I did my best to present each past-Nate's viewpoint from that past-Nate's perspective. I do not necessarily endorse my old beliefs, and I readily admit that 2005!Nate and other Nates nearby were far too arrogant. I aim to explain where I came from, not to justify it.

My arrogance has been tempered in recent years (though I retain some for the Nothing Is Beyond My Grasp compartment). That said, self-evaluation can be difficult, and perhaps I have not been tempered enough. If you notice any specific instances where it seems my model of my abilities/importance is out of sync with reality, I'd very much appreciate your input.

8diegocaleiro10y
Though arrogance puts some people off, keep in mind that some people are very excited by it. I love your arrogance; I haven't seen that flavour of arrogance in a while, and it was the trait Yudkowsky had back in 2003-2005 that got me interested in him. We've had some similar growing-up experiences, and we interacted with the same cluster of people and ideas over the last few years. I'd be curious to skype with you over some differences in actions taken and actions endorsed after our different histories. Mostly I'd like to pick out particular examples of smart people with the saving-the-world mindset, see what they concluded and what they ended up doing, and check whether we would have done the same or different, and in which ways. My skype is diegocaleiro; feel free to add it if you think this could be valuable.
5jsteinhardt10y
While either one on its own is suboptimal, arrogance combined with constant self-doubt seems like a pretty good combination to me.

From what I gather, most people don't respond to rational ideas and actions, just ideas and actions they believe will benefit themselves or their group. This is how bad ideas continue to flourish (Bigger Church = Pleasing the Lord = Better chance of an afterlife). In addition, people do respond to ideas they believe are moral, but what most people define as "good" or "bad" actions, moral or immoral, tends to be what people believe will benefit them or the group they relate to (family, community, country, etc.). As a rule of thumb, to ... (read more)

2Lumifer10y
You are conflating rationality and altruism. These are quite different things. What does it mean for the world to be "saved"?
0TheAncientGeek10y
At least, he is using "rational" to mean epistemically rational.
0John_D10y
Yes, that was a little extreme on my part. What I was trying to say is that people don't always respond to rational ideas. "What does it mean for the world to be 'saved'?" I was trying to relate to the author's idea of "saving" the world, which from what I gather means maximizing altruism and minimizing bureaucratic inefficiencies, to start. (Governments are inefficient, wars are bad, etc.)