Almost always, the people who say “I am going to keep going until this works, and no matter what the challenges are I’m going to figure them out”, and mean it, go on to succeed. They are persistent long enough to give themselves a chance for luck to go their way.
I've seen this quote (and similar ones) before. I believe that this approach is extremely flawed, to the point of being anti-rationalist. In no particular order, my objections are:
I don't think founder/investor class conflict makes that much sense as an explanation for that. It's easy to imagine a world in which investors wanted their money returned when the team updates downwards on their likelihood of success. (In fact, that sometimes happens! I don't know whether Sam would do that but my guess is only if the founders want to give up.)
I also don't think at least Sam glorifies pivots or ignores opportunity cost. For instance the first lecture from his startup course:
And pivots are supposed to be great, the more pivots the better. So this isn't totally wrong, things do evolve in ways you can't totally predict.... But the pendulum has swung way out of whack. A bad idea is still bad and the pivot-happy world we're in today feels suboptimal.... There are exceptions, of course, but most great companies start with a great idea, not a pivot.... [I]f you look at the track record of pivots, they don't become big companies. I myself used to believe ideas didn't matter that much, but I'm very sure that's wrong now.
---
More generally, I agree that this claim clashes strongly with some rationalists' worldviews, and it's plausible that it just increases the variance of outcomes and not the mean. But given that outcomes are power-law distributed (mean is proportional to variance!), the number of people endorsing it from on top of a giant pile of utility, and the perhaps surprisingly low number of highly successful rationalists, I'd recommend rationalists treat it with curiosity instead of dismissiveness.
I do agree that it increases the variance of outcomes. I think it decreases the mean, but I'm less sure about that. Here's one way I think it could work, if it does work: If some people are generally pessimistic about their chances of success, and this causes them to update their beliefs closer to reality, then Altman's advice would help. That is, if some people give up too easily, it will help them, while the outside world (investors, the market, etc) will put a check on those who are overly optimistic. However, I think it's still important to note that "not giving up" can lead not just to lack of success, but also to value destruction (Pets.com; Theranos; WeWork).
Thanks for the "Young Rationalists" link, I hadn't read that before. I think there are a fair number of successful rationalists, but they mostly focus on doing their work rather than engaging with the rationalist community. One example of this is Cliff Asness - here's an essay by him that takes a strongly rationalist view.
I think it's still important to note that "not giving up" can lead not just to lack of success, but also to value destruction (Pets.com; Theranos; WeWork).
If you're going to interpret the original "don't give up" advice so literally and blindly that "no matter what the challenges are I'm going to figure them out" includes committing massive fraud, then yes, it will be bad advice for you. That's a really remarkably uncharitable interpretation.
Sarah Constantin's Errors vs. Bugs and the End of Stupidity remains one of my favorite essays.
I wasn't an exceptional pianist, and when I'd play my nocturne for [my teacher], there would be a few clinkers. I apologized -- I was embarrassed to be wasting his time. But he never seemed to judge me for my mistakes. Instead, he'd try to fix them with me: repeating a three-note phrase, differently each time, trying to get me to unlearn a hand position or habitual movement pattern that was systematically sending my fingers to wrong notes.
I had never thought about wrong notes that way. I had thought that wrong notes came from being "bad at piano" or "not practicing hard enough," and if you practiced harder the clinkers would go away. But that's a myth.
In fact, wrong notes always have a cause. An immediate physical cause. Just before you play a wrong note, your fingers were in a position that made that wrong note inevitable. Fixing wrong notes isn't about "practicing harder" but about trying to unkink those systematically error-causing fingerings and hand motions. That's where the "telekinesis" comes in: pretending you can move your fingers with your mind is a kind of mindfulness meditation that can make it easier to unlearn the calcified patterns of movement that cause mistakes.

Remembering that experience, I realized that we really tend to think about mistakes wrong, in the context of music performance but also in the context of academic performance.
A common mental model for performance is what I'll call the "error model." In the error model, a person's performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, iid Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.
But we could also consider the "bug model" of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you'll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can't be quantified along a single axis as less or more severe. A bug gets everything that it affects wrong. And fixing bugs doesn't improve your performance in a continuous fashion; you can fix a "little" bug and immediately go from getting everything wrong to everything right. You can't really describe the accuracy of a buggy program by the percent of questions it gets right; if you ask it to do something different, it could suddenly go from 99% right to 0% right. You can only define its behavior by isolating what the bug does.
Often, I think mistakes are more like bugs than errors. My clinkers weren't random; they were in specific places, because I had sub-optimal fingerings in those places. A kid who gets arithmetic questions wrong usually isn't getting them wrong at random; there's something missing in their understanding, like not getting the difference between multiplication and addition. Working generically "harder" doesn't fix bugs (though fixing bugs does require work).
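The contrast between the two models is easy to make concrete. Here's a toy simulation (my own illustration, not from the essay) of a quiz-taker whose answers are "correct plus random noise" versus one running a deterministic procedure with a single bug:

```python
import random

def grade(answers, key):
    """Fraction of answers that match the answer key."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

# A mixed quiz of addition and multiplication questions.
quiz = [("+", 3, 4), ("+", 5, 1), ("*", 3, 4), ("*", 5, 6)]
key = [a + b if op == "+" else a * b for op, a, b in quiz]

def error_model_taker(quiz, slip_rate=0.25):
    """Error model: a perfect performance plus iid random slips."""
    return [(a + b if op == "+" else a * b)
            if random.random() > slip_rate else None  # None = a random slip
            for op, a, b in quiz]

def bug_model_taker(quiz):
    """Bug model: deterministically adds, even when asked to multiply
    (the kid who hasn't grasped the difference between the operations)."""
    return [a + b for _op, a, b in quiz]

# The buggy taker's score is not a noisy average: it's exactly 100%
# on addition and 0% on multiplication, every single time. "Practicing
# harder" with the same procedure changes nothing; only fixing the
# bug does.
print(grade(bug_model_taker(quiz), key))  # → 0.5
```

Note that the buggy taker's 50% score is meaningless as a single accuracy parameter: give it an all-addition quiz and it scores 100%, an all-multiplication quiz and it scores 0%.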
Once you start to think of mistakes as deterministic rather than random, as caused by "bugs" (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.
You stop thinking of people as "stupid."
Tags like "stupid," "bad at ____", "sloppy," and so on, are ways of saying "You're performing badly and I don't know why." Once you move it to "you're performing badly because you have the wrong fingerings," or "you're performing badly because you don't understand what a limit is," it's no longer a vague personal failing but a causal necessity. Anyone who never understood limits will flunk calculus. It's not you, it's the bug.
This also applies to "lazy." Lazy just means "you're not meeting your obligations and I don't know why." If it turns out that you've been missing appointments because you don't keep a calendar, then you're not intrinsically "lazy," you were just executing the wrong procedure. And suddenly you stop wanting to call the person "lazy" when it makes more sense to say they need organizational tools.
"Lazy" and "stupid" and "bad at ____" are terms about the map, not the territory. Once you understand what causes mistakes, those terms are far less informative than actually describing what's happening. [...]As a matter of self-improvement, I think it can make sense not to think in terms of "getting better" ("better at piano", "better at math," "better at organizing my time"). How are you going to get better until you figure out what's wrong with what you're already doing? It's really more an exploratory process -- where is the bug, and what can be done to dislodge it?
Another essay that I like is Will Wilkinson's Public Policy After Utopia; it's technically more "politics" than "life advice", but to the extent that people want to devote their lives to pushing society in a better direction, it seems important:
Many political philosophers, and most adherents of radical political ideologies, tend to think that an ideal vision of the best social, economic, and political system serves a useful and necessary orienting function. The idea is that reformers need to know what to aim at if they are to make steady incremental progress toward the maximally good and just society. If you don’t know where you’re headed—if you don’t know what utopia looks like—how are you supposed to know which steps to take next?
The idea that a vision of an ideal society can serve as a moral and strategic star to steer by is both intuitive and appealing. But it turns out to be wrong. [...]
The fact that all our evidence about how social systems actually work comes from formerly or presently existing systems is a huge problem for anyone committed to a radically revisionary ideal of the morally best society. The further a possible system is from a historical system, and thus from our base of evidence about how social systems function, the more likely we are to be mistaken about how it would work if it were realized. And the more likely we are to be mistaken about how it would actually work, the more likely we are to be mistaken that it is more free, or more equal, or more socially just than other systems, possible or actual.
Indeed, there’s basically no way to rationally justify the belief that, say, “anarcho-capitalism” ranks better in terms of libertarian freedom than “Canada 2017,” or the belief that “economic democracy” ranks better in terms of socialist equality than “Canada 2017.” [...]
You may think you can imagine how anarcho-capitalism or economic democracy would work, but you can’t. You’re really just guessing—extrapolating way beyond your evidence. You can’t just stipulate that it works the way you want it to work. Rationally speaking, you probably shouldn’t even suspect that your favorite system comes out better than an actual system. Rationally speaking, your favorite probably shouldn’t be your favorite. Utopia is a guess. [...]
... expert predictions about the likely effects of changing a single policy tend to be pretty bad. I’ll use myself as an example. I’ve followed the academic literature about the minimum wage for almost twenty years, and I’m an experienced, professional policy analyst, so I’ve got a weak claim to expertise in the subject. What do I have to show for that? Not much, really. I’ve got strong intuitions about the likely effects of raising minimum wages in various contexts. But all I really know is that the context matters a great deal, that a lot of interrelated factors affect the dynamics of low-wage labor markets, and that I can’t say in advance which margin will adjust when the wage floor is raised. Indeed, whether we should expect increases in the minimum wage to hurt or help low-wage workers is a question Nobel Prize-winning economists disagree about. Labor markets are complicated! Well, the comprehensive political economies of nation-states are vastly more complicated. And that means that our predictions about the outcome of radically changing the entire system are unlikely to be better than random. [...]
The death of ideal theory implies a non-ideological, empirical, comparative approach to political analysis. That doesn’t mean giving up on, say, the value of freedom. I think I’m more libertarian—more committed to value of liberty—than I’ve ever been. But that doesn’t mean being committed to an eschatology of liberty, a picture of an ideally free society, or a libertarian utopia. We’re not in a position to know what that looks like. The best we can do is to go ahead and try to rank social systems in terms of the values we care about, and then see what we can learn. The Cato Institute’s Human Freedom Index is one such useful measurement attempt. What do we see? [...]
Every highlighted country is some version of the liberal-democratic capitalist welfare state. Evidently, this general regime type is good for freedom. Indeed, it is likely the best we have ever done in terms of freedom.
Moreover, Denmark (#5), Finland (#9), and the Netherlands (#10) are among the world’s “biggest” governments, in terms of government spending as a percentage of GDP. The “economic freedom” side of the index, which embodies a distinctly libertarian conception of economic liberty, hurts their ratings pretty significantly. Still, according to a libertarian Human Freedom Index, some of the freest places on Earth have some of the “biggest” governments. That’s unexpected. [...]
Though libertarianism is of personal interest to me, I want to emphasize again that my larger point has nothing to do with libertarianism. The same lesson applies to alt-right ethno-nationalists dazzled by a fanciful picture of a homogenous, solidaristic ethno-state. The same lesson applies to progressives and socialists in the grip of utopian pictures of egalitarian social justice. Of course, nobody knows what an ideally equal society would look like. If we stick to the data we do have, and inspect the top ranks of the Social Progress Index, which is based on progressive assumptions about basic needs, the conditions for individual health, well-being, and opportunity, you’ll mostly find the same countries that populate the Freedom Index’s leaderboard. [...]
The overlap is striking. And this highlights some of the pathologies of ideal theory: irrational polarization and the narcissism of small differences. [...]
For me, the death of ideal theory has meant adopting a non-speculative, non-utopian perspective on freedom-enhancing institutions. If you know that you can’t know in advance what the freest social system will look like, you’re unlikely to see evidence that suggests that policy A (social insurance, e.g.) is freedom-enhancing, or that policy B (heroin legalization, e.g.) isn’t, as threats to your identity as a freedom lover. Uncertainty about the details of the freest feasible social scheme opens you up to looking at evidence in a genuinely curious, non-biased way. And it frees you from the anxiety that genuine experts, people with merited epistemic authority, will say things you don’t want to hear. This in turn frees you from the urge to wage quixotic campaigns against the authority of legitimate experts. You can start acting like a rational person! You can simply defer to the consensus of experts on empirical questions, or accept that you bear an extraordinary burden of proof when you disagree.
What we need are folks who are passionate about freedom, or social justice (or what have you) who actively seek solutions to domination and injustice, but who also don’t think they already know exactly what ideal liberation or social justice look like, and are therefore motivated to identify our real alternatives and to evaluate them objectively. The space of possibility is infinite, and it takes energy and enthusiasm to want to explore it.
I like the sentiment but I rarely go back to read things I already read. Instead I seek out new things that say similar things in different ways.
A great example of this in my life comes from Zen books. Most of them say the same thing (there's a half joke that there are only three dharma talks a teacher can give), but in different ways. Sometimes the way it's said and where I am connect, so it's proven for me a good strategy to keep hearing similar teaching in new ways.
More quotes from the Sam Altman essay.
It’s useful to focus on adding another zero to whatever you define as your success metric—money, status, impact on the world, or whatever. I am willing to take as much time as needed between projects to find my next thing. But I always want it to be a project that, if successful, will make the rest of my career look like a footnote.
Most people get bogged down in linear opportunities. Be willing to let small opportunities go to focus on potential step changes.
I wonder to what extent the closed vs. open door dichotomy holds. In Deep Work, Cal Newport argues that rather than being 100% open-door (interruptible) or 100% closed-door (uninterruptible), we should mix the two. Knowledge workers obviously need to get feedback on their work and to give feedback on the work of others. You can keep your door open when doing work that doesn't require much focus, while still allowing yourself to get deep into something by shutting yourself off from the world at times. And you might miss out on really important focused time if you always keep your door open.
A book that goes very much in that direction with small but impactful chapters is:
“Chop Wood Carry Water”
https://www.amazon.com/Chop-Wood-Carry-Water-Becoming-dp-153698440X/dp/153698440X
I really enjoyed it (and now that I'm reminded of it, I'll have another look), and I think you might like it too.
(I found it via https://fs.blog/reading-2019/ and the very good review got me interested in it.)
I love this one from the introduction to "Algorithms to Live By":
Even where perfect algorithms haven't been found, however, the battle between generations of computer scientists and the most intractable real-world problems has yielded a series of insights. These hard-won precepts are at odds with our intuitions about rationality, and they don't sound anything like the narrow prescriptions of a mathematician trying to force the world into clean, formal lines.
They say: Don't always consider all your options. Don't necessarily go for the outcome that seems best every time. Make a mess on occasion. Travel light. Let things wait. Trust your instincts and don't think too long. Relax. Toss a coin. Forgive, but don't forget. To thine own self be true.
I start each of my weekly reviews by re-reading one of my favorite essays of life advice—a different one each week. It’s useful for a few different reasons:
It helps me get into the right reflective frame of mind.
The best essays are dense enough with useful advice that I find new interesting bits every time I read them.
Much good advice is easy to understand, but hard to implement. So to get the most benefit from it, you should find whatever version of it most resonates with you and then re-read it frequently to keep yourself on track.
I’ve collected my favorite essays for re-reading below. I’ll keep this updated as I find more great essays, and I’d welcome other contributions—please suggest your own favorites in the comments!
There's a lot of essays here! If you'd like, I can email you one essay every weekend, so you can read it before your weekly review: (sign up on site)
Paul Graham, Life is Short. Inspire yourself never to waste time on bullshit again:
I’ve found that unless I’m vigilant, the amount of bullshit in my life only ever increases. Rereading Life is Short every so often gives me a kick in the pants to figure out what really matters and how to get the bullshit levels back down.
Derek Sivers, There is no speed limit, in which he learns a semester’s worth of music theory in an afternoon:
This was one of the major inspirations for Be impatient. Every time I reread it, I think of at least one thing where I’m setting myself a speed limit for no reason!
Sam Altman, How To Be Successful. Sam might have observed more successful people more closely than anyone else on the planet, and the advice is as good as you’d expect.
There are lots of different points here, so this one especially bears rereading!
R. W. Hamming, You and Your Research. Hamming observed almost as many great scientists as Sam Altman did founders. He had some interesting conclusions:
Hamming is an unusual combination of (a) a great scientist himself, (b) curious and thoughtful about what makes others great, and (c) honest and open about his observations (it seems).
Anonymous, Becoming a Magician—on how to become a person that your current self would perceive as magical:
The exercise they suggest is a really useful activity for weekly (or monthly or yearly) reviews. Highly recommended!
Dan Luu, 95th percentile isn’t that good. Great for cultivating self-improvement mindset by reminding you how easy (in some sense) it is to make huge improvements at something:
It’s not weekly review material, but I also appreciate the bonus section on Dan’s other most ridiculable ideas.
Suggest your own favorite life advice essays in the comments!