"Persevere." It's a piece of advice you'll get from a whole lot of high achievers in a whole lot of disciplines. I didn't understand it at all, at first.
At first, I thought "perseverance" meant working 14-hour days. Apparently, there are people out there who can work for 10 hours at a technical job, and then, in their moments between eating and sleeping and going to the bathroom, seize that unfilled spare time to work on a book. I am not one of those people—it still hurts my pride even now to confess that. I'm working on something important; shouldn't my brain be willing to put in 14 hours a day? But it's not. When it gets too hard to keep working, I stop and go read or watch something. Because of that, I thought for years that I entirely lacked the virtue of "perseverance".
In accordance with human nature, Eliezer1998 would think things like: "What counts is output, not input." Or, "Laziness is also a virtue—it leads us to back off from failing methods and think of better ways." Or, "I'm doing better than other people who are working more hours. Maybe, for creative work, your momentary peak output is more important than working 16 hours a day." Perhaps the famous scientists were seduced by the Deep Wisdom of saying that "hard work is a virtue", because it would be too awful if that counted for less than intelligence?
I didn't understand the virtue of perseverance until I looked back on my journey through AI, and realized that I had overestimated the difficulty of almost every single important problem.
Sounds crazy, right? But bear with me here.
When I was first deciding to challenge AI, I thought in terms of 40-year timescales, Manhattan Projects, planetary computing networks, millions of programmers, and possibly augmented humans.
This is a common failure mode in AI-futurism which I may write about later; it consists of the leap from "I don't know how to solve this" to "I'll imagine throwing something really big at it". Something huge enough that, when you imagine it, that imagination creates a feeling of impressiveness strong enough to be commensurable with the problem. (There's a fellow currently on the AI list who goes around saying that AI will cost a quadrillion dollars—we can't get AI without spending a quadrillion dollars, but we could get AI at any time by spending a quadrillion dollars.) This, in turn, lets you imagine that you know how to solve AI, without trying to fill the obviously-impossible demand that you understand intelligence.
So, in the beginning, I made the same mistake: I didn't understand intelligence, so I imagined throwing a Manhattan Project at the problem.
But, having calculated the planetary death rate at 55 million per year or 150,000 per day, I did not turn around and run away from the big scary problem like a frightened rabbit. Instead, I started trying to figure out what kind of AI project could get there fastest. If I could make the Singularity happen one hour earlier, that was a reasonable return on investment for a pre-Singularity career. (I wasn't thinking in terms of existential risks or Friendly AI at this point.)
So I stayed, to see if there was anything I could do.
Fun historical fact: In 1998, I'd written this long treatise proposing how to go about creating a self-improving or "seed" AI (a term I had the honor of coining). Brian Atkins, who would later become the founding funder of the Singularity Institute, had just sold Hypermart to Go2Net. Brian emailed me to ask whether this AI project I was describing was something that a reasonable-sized team could go out and actually do. "No," I said, "it would take a Manhattan Project and thirty years," so for a while we were considering a new dot-com startup instead, to create the funding to get real work done on AI...
A year or two later, after I'd heard about this newfangled "open source" thing, it seemed to me that there was some preliminary development work—new computer languages and so on—that a small organization could do; and that was how the Singularity Institute started.
This strategy was, of course, entirely wrong.
But even so, I went from "There's nothing I can do about it now" to "Hm... maybe there's an incremental path through open-source development, if the initial versions are useful to enough people."
This is back at the dawn of time, so I'm not saying any of this was a good idea. But in terms of what I thought I was trying to do, a year of creative thinking had shortened the apparent pathway: The problem looked slightly less impossible than it did the very first time I approached it.
The more interesting pattern is my entry into Friendly AI. Initially, Friendly AI hadn't been something that I had considered at all—because it was obviously impossible and useless to deceive a superintelligence about what was the right course of action.
So, historically, I went from completely ignoring a problem that was "impossible", to taking on a problem that was merely extremely difficult.
Naturally this increased my total workload.
Same thing with trying to understand intelligence on a precise level. Originally, I'd written off this problem as impossible, thus removing it from my workload. (This logic seems pretty deranged in retrospect—Nature doesn't care what you can't do when It's writing your project requirements—but I still see AIfolk trying it all the time.) To hold myself to a precise standard meant putting in more work than I'd previously imagined I needed. But it also meant tackling a problem that I would have dismissed as entirely impossible not too much earlier.
Even though individual problems in AI have seemed to become less intimidating over time, the total mountain-to-be-climbed has increased in height—just like conventional wisdom says is supposed to happen—as problems got taken off the "impossible" list and put on the "to do" list.
I started to understand what was happening—and what "Persevere!" really meant—at the point where I noticed other AIfolk doing the same thing: saying "Impossible!" of problems that now seemed eminently solvable, relatively straightforward as such things go, even though those same problems would have seemed vastly more intimidating when I first approached AI.
And I realized that the word "impossible" had two usages:
1) Mathematical proof of impossibility conditional on specified axioms;
2) "I can't see any way to do that."
Needless to say, all my own uses of the word "impossible" had been of the second type.
Any time you don't understand a domain, many problems in that domain will seem impossible, because when you query your brain for a solution pathway, it will return null. But there are only mysterious questions, never mysterious answers. If you spend a year or two working on the domain, then, if you don't get stuck in any blind alleys, and if you have the native ability level required to make progress, you will understand it better. The apparent difficulty of its problems may go way down; they won't be as scary as they were to your novice self.
And this is especially likely on the confusing problems that seem most intimidating.
Since we have some notion of the processes by which a star burns, we know that it's not easy to build a star from scratch. Because we understand gears, we can prove that no collection of gears obeying known physics can form a perpetual motion machine. These are not good problems on which to practice doing the impossible.
When you're confused about a domain, problems in it will feel very intimidating and mysterious, and a query to your brain will produce a count of zero solutions. But you don't know how much work will be left when the confusion clears. Dissolving the confusion may itself be a very difficult challenge, of course. But the word "impossible" should hardly be used in that connection. Confusion exists in the map, not in the territory.
So if you spend a few years working on an impossible problem, and you manage to avoid or climb out of blind alleys, and your native ability is high enough to make progress, then, by golly, after a few years it may not seem so impossible after all.
But if something seems impossible, you won't try.
Now that's a vicious cycle.
If I hadn't been in a sufficiently driven frame of mind that "forty years and a Manhattan Project" just meant we should get started earlier, I wouldn't have tried. I wouldn't have stuck to the problem. And I wouldn't have gotten a chance to become less intimidated.
I'm not ordinarily a fan of the theory that opposing biases can cancel each other out, but sometimes it happens by luck. If I'd seen that whole mountain at the start—if I'd realized at the start that the problem was not to build a seed capable of improving itself, but to produce a provably correct Friendly AI—then I probably would have burst into flames.
Even so, part of understanding those above-average scientists who constitute the bulk of AGI researchers is realizing that they are not driven to take on a nearly impossible problem even if it takes them 40 years. By and large, they are there because they have found the Key to AI that will let them solve the problem without such tremendous difficulty, in just five years.
Richard Hamming used to go around asking his fellow scientists two questions: "What are the important problems in your field?", and, "Why aren't you working on them?"
Often the important problems look Big, Scary, and Intimidating. They don't promise 10 publications a year. They don't promise any progress at all. You might not get any reward after working on them for a year, or five years, or ten years.
And not uncommonly, the most important problems in your field are impossible. That's why you don't see more philosophers working on reductionist decompositions of consciousness.
Trying to do the impossible is definitely not for everyone. Exceptional talent is only the ante to sit down at the table. The chips are the years of your life. If wagering those chips and losing seems like an unbearable possibility to you, then go do something else. Seriously. Because you can lose.
I'm not going to say anything like, "Everyone should do something impossible at least once in their lifetimes, because it teaches an important lesson." Most of the people all of the time, and all of the people most of the time, should stick to the possible.
Never give up? Don't be ridiculous. Doing the impossible should be reserved for very special occasions. Learning when to lose hope is an important skill in life.
But if there's something you can imagine that's even worse than wasting your life, if there's something you want that's more important than thirty chips, or if there are scarier things than a life of inconvenience, then you may have cause to attempt the impossible.
There's a good deal to be said for persevering through difficulties; but one of the things that must be said of it is that it does keep things difficult. If you can't handle that, stay away! There are easier ways to obtain glamor and respect. I don't want anyone to read this and needlessly plunge headlong into a life of permanent difficulty.
But to conclude: The "perseverance" that is required to work on important problems has a component beyond working 14 hours a day.
It's strange, the pattern of what we notice and don't notice about ourselves. This selectivity isn't always about inflating your self-image. Sometimes it's just about ordinary salience.
To keep working was a constant struggle for me, so it was salient: I noticed that I couldn't work for 14 solid hours a day. It didn't occur to me that "perseverance" might also apply at a timescale of seconds or years. Not until I saw people who instantly declared "impossible" anything they didn't want to try, or saw how reluctant they were to take on work that looked like it might take a couple of decades instead of "five years".
That was when I realized that "perseverance" applied at multiple timescales. On the timescale of seconds, perseverance means not giving up instantly at the very first sign of difficulty. On the timescale of years, perseverance means keeping at an insanely difficult problem even though it's inconvenient and you could be getting higher personal rewards elsewhere.
To do things that are very difficult or "impossible",
First you have to not run away. That takes seconds.
Then you have to work. That takes hours.
Then you have to stick at it. That takes years.
Of these, I had to learn to do the first reliably instead of sporadically; the second is still a constant struggle for me; and the third comes naturally.
There are two further issues entangled in not merely trying to do the impossible, but actually doing it.
The first issue is trying. To try is to focus on your success or failure, instead of focusing on the problem and a solution. That brings your ego into the equation, and that's one of the best ways to fail. The majority of people's problems are trivial to solve once they stop being personal problems, embedded in hopes, fears, pride, and personal bits of crazy.
I try to get myself and my ego out of my problems. One of my "thinking hats" is a mythical older brother, Jonathan, whom I channel to think about my problems for me. I've tried the trick on other people too: "What would your big sister say you should do?" Presto! The solution is obvious.
The second issue is the evidence for impossibility: how do you claim to know that something is impossible? To believe that something is impossible is to mistake "no one sees how it is possible" for "it is impossible". As the years have gone by, I've been increasingly struck by just how stupid humans are, and by how much of our intelligence is just the painfully slow accumulation of a cultural store of better concepts. Once you keep in mind how often humans have been wrong about most things, the fact that everyone claims something is impossible loses its misperceived predictive force.