Comparative advantage isn't something I actually ever think about when making long-term project decisions, and empirically, when my friends have used it as a justification for their career choices, I've tended not to feel their plans were good. I much prefer people to act on a secret they think they know but that others don't.
Over time I've generally come to put far less trust in my own and others' explicit reasoning for making long-term plans about what to work on (I have an old draft post on this that I should maybe also push). The opening line of this essay by Paul Graham recommends one good way of not following your explicit reasoning when making long-term plans:
The way to get startup ideas is not to try to think of startup ideas. It's to look for problems, preferably problems you have yourself.
I don't know that I'd honestly advise anyone to 'pick a career' these days. I don't think the big institutions (academia, government, news, etc.) that you can rise up in are healthy enough, or will last long enough, for planning around them on 10-20 year timescales to be one of the best ways to achieve tail outcomes with your life. Focusing on projects that make you curious, make you excited, and that have good short-term feedback loops with reality seems better. Most of all, if your project is built around a potential secret, then really go for it.
Your advice seems pretty close to what I'm saying in this post, just with a different framing. Instead of "don't think in terms of comparative advantage" I'm saying "be careful when thinking in terms of comparative advantage because it's probably trickier than you think". I guess my framing is more useful when someone already tends to think in terms of comparative advantage (for example because they learned about it in economics and it seems like a really important insight).
Most of all, if your project is built around a potential secret, then really go for it.
I'd add that in some situations (e.g., if the secret is relevant to some altruistic aim), instead of building a project around the secret (with the connotation of keeping it safe), the aim of your project should be first to disperse the secret amongst others who can help, and then possibly to step out of the way once that's done.
That seems correct.
I'd add that in some situations (e.g., if the secret is relevant to some altruistic aim), instead of building a project around the secret (with the connotation of keeping it safe), the aim of your project should be first to disperse the secret amongst others who can help, and then possibly to step out of the way once that's done.
I do think there's variance in the communicability of such insights. I think that, for example, Holden when thinking of starting GiveWell, or Eliezer when thinking of building MIRI (initially SIAI), both correctly just tried to build the thing they believed could exist, rather than first closing the inferential gap so that a much larger community could understand it. OTOH, EY wrote the Sequences, Holden has put a lot of work into making OpenPhil's and GiveWell's decision-making understandable, and these have both had massive payoffs.
Interestingly, this fits with what Paul Graham replied when I emailed him for career advice a while back:
I would choose whichever you find most interesting. It's more important that you be excited about what you're doing than which particular field you're working in. Just ask yourself what would be cool, in an ambitious way, to know more about. --pg
Over time I've generally come to put far less trust in my own and others' explicit reasoning for making long-term plans about what to work on (I have an old draft post on this that I should maybe also push).
Have you released this post yet? It seems interesting to read and check whether or not it's true.
Sorry if I'm belaboring the obvious, but aiming for secrets is an extreme form of comparative advantage. Maybe the framing is important, either by focusing on the tail or on risk or by some purely psychological effect, but the argument is exactly the same.
Comparative advantage in producing fungible outputs is an important concept. It gets a lot trickier to reason about comparative advantage in less-demanded activities.
This has been sitting in my drafts folder since 2011. Decided to post it today given the recent post about Dunning-Kruger and related discussions.
The standard rationalist answer when someone asks for career advice is "find your comparative advantage." I don't have any really good suggestions about how to make that search easier, but it seems like a good topic to bring up for discussion.
If, 15 years ago (when I was still in college and my initial career choice hadn't been finalized yet), someone had told me that perhaps I ought to consider a career in philosophy, I would have laughed. "You must be joking. Obviously, I'd be really bad at doing philosophy," I would have answered. I thought of myself as a natural-born programmer, and that's the career direction I ended up choosing.
As it turns out, I am a pretty good programmer, and a terrible philosopher, but it also happens to be the case that just about everyone else is even worse at doing philosophy, and getting some philosophical questions right might be really important.
The usual (instinctive) way for someone to choose a career is probably to pick a field that they think they will be particularly good at, using a single standard of goodness across all of the candidate fields. For example, the implicit reasoning behind my own career choice could be something like "Given a typical programming problem, I can solve it in a few hours with high probability. Whereas, given a typical philosophical problem, I can at best solve it after many years with low probability."
On the other hand, comparative advantage says that in addition to your own abilities, you should also consider how good other people are (or will be) at various fields, and how valuable the outputs of those fields are (or will be). Unless you're only interested in maximizing income and the fields you're considering are likely to remain stable over your lifetime (in which case you can just compare current salaries, although apparently many people don't even do that), this can be pretty tricky.
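For concreteness, here's a minimal sketch of the simple static version of the calculation, with made-up numbers (the productivity figures and names are hypothetical, not anything measured). It only captures the trivial static case; the point above is that the dynamic case, where other people's abilities and the value of each field's outputs change over time, is much harder:

```python
# Toy static model of comparative advantage, with made-up numbers.
# "productivity" = units of useful output per year in each field.
productivity = {
    "me":            {"programming": 10, "philosophy": 2},
    "typical rival": {"programming": 8,  "philosophy": 1},
}

def opportunity_cost(person, field, other_field):
    """Units of other_field forgone per unit of field produced."""
    rates = productivity[person]
    return rates[other_field] / rates[field]

for person in productivity:
    cost = opportunity_cost(person, "philosophy", "programming")
    print(f"{person}: 1 unit of philosophy costs {cost:.1f} units of programming")

# Output:
# me: 1 unit of philosophy costs 5.0 units of programming
# typical rival: 1 unit of philosophy costs 8.0 units of programming
#
# In this toy model I'm absolutely better at programming than at
# philosophy, but my opportunity cost for philosophy is lower than
# everyone else's, so philosophy is my comparative advantage.
```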
(There don't appear to be any previous OB/LW posts on comparative advantage. The closest I could find is Eliezer's Money: The Unit of Caring. Most discussions elsewhere seem to focus on simple static examples, like the one above, where finding comparative advantage is relatively trivial.)
Today (in 2018) there's an 80,000 Hours article about comparative advantage, but that is more about how to find one's comparative advantage within a community of people who share a cause, like EA, rather than in the wider economy.
I would also add (in 2018) that besides everyone else lacking skill or talent at something, an even bigger source of comparative advantage is being one of the first people to realize that a problem is a problem, or to realize an important new variant or subproblem of an existing problem. In that case, everyone else is really bad at solving that problem just because they have no idea the problem even exists.