First Puzzle Piece

By and large, the President of the United States can order people to do things, and they will do those things. POTUS is often considered the most powerful person in the world. And yet, the president cannot order a virus to stop replicating. The president cannot order GDP to increase. The president cannot order world peace.

Are there orders the president could give which would result in world peace, or increasing GDP, or the end of a virus? Probably, yes. Any of these could likely even be done with relatively little opportunity cost. Yet no president in history has known which orders will efficiently achieve these objectives. There are probably some people in the world who know which orders would efficiently increase GDP, but the president cannot distinguish them from the millions of people who claim to know (and may even believe it themselves) but are wrong.

Last I heard, Jeff Bezos was the official richest man in the world. He can buy basically anything money can buy. But he can’t buy a cure for cancer. Is there some way he could spend a billion dollars to cure cancer in five years? Probably, yes. But Jeff Bezos does not know how to do that. Even if someone somewhere in the world does know how to turn a billion dollars into a cancer cure in five years, Jeff Bezos cannot distinguish that person from the thousands of other people who claim to know (and may even believe it themselves) but are wrong.

When non-experts cannot distinguish true expertise from noise, money cannot buy expertise. Knowledge cannot be outsourced; we must understand things ourselves.

Second Puzzle Piece

The Haber process combines one molecule of nitrogen with three molecules of hydrogen to produce two molecules of ammonia - useful for fertilizer, explosives, etc. If I feed a few grams of hydrogen and several tons of nitrogen into the Haber process, I’ll get out only a few tens of grams of ammonia. No matter how much more nitrogen I pile in - a thousand tons, a million tons, whatever - I will not get more than those few tens of grams of ammonia. If the reaction is limited by the amount of hydrogen, then throwing more nitrogen at it will not make much difference.

In the language of constraints and slackness: ammonia production is constrained by both hydrogen and nitrogen. When nitrogen is abundant, the nitrogen constraint is slack; adding more nitrogen won’t make much difference. Conversely, when hydrogen is scarce, the hydrogen constraint is taut; adding more hydrogen will make a difference. Hydrogen is the bottleneck.
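
To make the limiting-reagent arithmetic concrete, here is a minimal sketch in Python (the specific quantities are illustrative assumptions, not numbers from the text):

```python
# N2 + 3 H2 -> 2 NH3: output is set by whichever input runs out first.
def ammonia_grams(n2_grams, h2_grams):
    n2_moles = n2_grams / 28.0   # molar mass of N2 is ~28 g/mol
    h2_moles = h2_grams / 2.0    # molar mass of H2 is ~2 g/mol
    reactions = min(n2_moles, h2_moles / 3.0)  # each reaction uses 1 N2 + 3 H2
    return reactions * 2 * 17.0  # each reaction yields 2 NH3 at ~17 g/mol

print(ammonia_grams(5_000_000, 3.0))  # 5 tons of N2, 3 g of H2 -> ~17 g
print(ammonia_grams(1e9, 3.0))        # a thousand tons of N2 -> still ~17 g
```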

Likewise in economic production: if a medieval book-maker requires 12 sheep skins and 30 days’ work from a transcriptionist to produce a book, and the book-maker has thousands of transcriptionist-hours available but only 12 sheep, then he can only make one book. Throwing more transcriptionists at the book-maker will not increase the number of books produced; sheep are the bottleneck.

When some inputs become more or less abundant, bottlenecks change. If our book-maker suddenly acquires tens of thousands of sheep skins, then transcriptionists may become the bottleneck to book-production. In general, when one resource becomes abundant, other resources become bottlenecks.
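
The same logic, sketched as a hypothetical Leontief-style (min) production function for the book-maker; the per-book input requirements are the ones from the example above:

```python
def books_produced(sheep_skins, transcriptionist_days):
    # Each book requires 12 skins and 30 days of transcription.
    return min(sheep_skins // 12, transcriptionist_days // 30)

print(books_produced(12, 3_000))      # 1: skins are the taut constraint
print(books_produced(12, 300_000))    # still 1: extra labor is slack
print(books_produced(30_000, 3_000))  # 100: with skins abundant, labor binds
```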

Putting The Pieces Together

If I don’t know how to efficiently turn power into a GDP increase, or money into a cure for cancer, then throwing more power/money at the problem will not make much difference.

King Louis XV of France was one of the richest and most powerful people in the world. He died of smallpox in 1774, the same year that a dairy farmer successfully immunized his wife and children with cowpox. All that money and power could not buy the knowledge of a dairy farmer - the knowledge that cowpox could safely immunize against smallpox. There were thousands of humoral experts, faith healers, eastern spiritualists, and so forth who would claim to offer some protection against smallpox, and King Louis XV could not distinguish the real solution.

As one resource becomes abundant, other resources become bottlenecks. When wealth and power become abundant, anything wealth and power cannot buy becomes a bottleneck - including knowledge and expertise.

After a certain point, wealth and power cease to be the taut constraints on one’s action space. They just don’t matter that much. Sure, giant yachts are great for social status, and our lizard-brains love politics. The modern economy is happy to provide outlets for disposing of large amounts of wealth and power. But personally, I don’t care that much about giant yachts. I want a cure for aging. I want weekend trips to the moon. I want flying cars and an indestructible body and tiny genetically-engineered dragons. Money and power can’t efficiently buy that; the bottleneck is knowledge.

Based on my own experience and the experience of others I know, I think knowledge starts to become taut rather quickly - I’d say at an annual income level in the low hundred thousands. With that much income, if I knew exactly the experiments or studies to perform to discover a cure for cancer, I could probably make them happen. (Getting regulatory approval is another matter, but I think that would largely handle itself if people knew the solution - there’s a large profit incentive, after all.) Beyond that level, more money mostly just means more ability to spray and pray for solutions - which is not a promising strategy in our high-dimensional world.

So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.

A mindset I recommend trying on from time to time, especially for people with $100k+ income: think of money as an abundant resource. Everything money can buy is “cheap”, because money is “cheap”. Then the things which are “expensive” are the things which money alone cannot buy - including knowledge and understanding of the world. Life lesson from Disney!Rumplestiltskin: there are things which money cannot buy; therefore it is important to acquire such things and use them for barter and investment. In particular, it’s worth looking for opportunities to acquire knowledge and expertise which can be leveraged for more knowledge and expertise.

Investments In Knowledge

Past a certain point, money and power are no longer the limiting factors for me to get what I want. Knowledge becomes the bottleneck instead. At that point, money and power are no longer particularly relevant measures of my capabilities. Pursuing more “wealth” in the usual sense of the word is no longer a very useful instrumental goal. At that point, the type of “wealth” I really need to pursue is knowledge.

If I want to build long-term knowledge-wealth, then the analogy between money-wealth and knowledge-wealth suggests an interesting question: what does a knowledge “investment” look like? What is a capital asset of knowledge, an investment which pays dividends in more knowledge?

Enter gears-level models.

Mapping out the internal workings of a system takes a lot of up-front work. It’s much easier to try random molecules and see if they cure cancer than to map out all the internal signals and cells and interactions which cause cancer. But the latter is a capital investment: once we’ve nailed down one gear in the model - one signal or one mutation or one cell-state - that informs all of our future tests and model-building. If we find that Y mediates the effect of X on Z, then our future studies of the Y-Z interaction can safely ignore X. On the other hand, if we test a random molecule and find that it doesn’t cure cancer, that tells us little-to-nothing; that knowledge does not yield dividends.
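
A toy illustration of that dividend (my own sketch, not the post's): in a causal chain where X affects Z only through Y, a regression that includes Y assigns X no additional effect, so follow-up studies of the Y-Z link can drop X entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # Y mediates all of X's influence
z = -1.5 * y + rng.normal(size=n)  # Z depends on X only through Y

# Regress Z on X and Y jointly: the X coefficient is ~0 once Y is included.
coefs, *_ = np.linalg.lstsq(np.column_stack([x, y]), z, rcond=None)
print(coefs)  # roughly [0.0, -1.5]
```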

Of course, gears-level models aren’t the only form of capital investment in knowledge. Most tools of applied math and the sciences consist of general models which we can learn once and then apply in many different contexts. They are general-purpose gears which we can recognize in many systems.

Once I understand the internal details of how e.g. capacitors work, I can apply that knowledge to understand not only electronic circuits, but also charged biological membranes. When I understand the math of microeconomics, I can apply it to optimization problems in AI. When I understand shocks and rarefactions in nonlinear PDEs, I can see them in action at the beach or in traffic. And the “core” topics - calculus, linear algebra, differential equations, big-O analysis, Bayesian probability, optimization, dynamical systems, etc - can be applied all over. General-purpose models are a capital investment in knowledge.
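
For instance, here is a minimal sketch of the single first-order model behind both a leaky capacitor and (relabeling the constants) a passive patch of cell membrane; all parameter values are arbitrary placeholders:

```python
# dV/dt = (I - V/R) / C: a capacitor charging through a leak resistor.
# Relabel C as membrane capacitance and R as membrane resistance, and the
# same equation describes a neuron's subthreshold membrane potential.
def final_voltage(I, R, C, dt=1e-4, steps=5000):
    V = 0.0
    for _ in range(steps):
        V += dt * (I - V / R) / C  # forward-Euler integration step
    return V

print(final_voltage(I=1e-3, R=1e4, C=1e-6))  # approaches I * R = 10.0 volts
```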

I hope that someday my own research will be on that list. That’s the kind of wealth I’m investing in now.

62 comments

I'm in partial agreement with this general idea, but I think most people who try to "build knowledge" ignore a central element of why money is good: it's a hard-to-fake signal.

I agree with something like:

So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.

A mindset I recommend trying on from time to time, especially for people with $100k+ income: think of money as an abundant resource. Everything money can buy is “cheap”, because money is “cheap”.

To the extent that I basically did the same (not quit my job, but got a less well-paying, less time-consuming job doing a thing that's close to what I'd be doing in my spare time had I had no job).

But this is a problem if your new aim isn't to make "even more money", i.e. say a few million dollars.

***

The problem with money is when it scales linearly: you make 50k this year, 55k the next, 100k ten years later. Because the difference between 50k and 100k is indeed very little.

But the difference between 100m and 100k isn't; 100m would allow me to pursue projects that are far more interesting than what I'm doing right now.

***

Knowledge is hard to anchor. The guy building TempleOS was acquiring something like knowledge in the process of being an unmedicated schizophrenic. Certainly, his programming skills improved as his mental state went downhill. Certainly he was more knowledgeable in specific areas than most (who the **** builds an x86 OS from scratch, kernel and all? There are maybe 5 or 6 of them total; I'd bet fewer than 10,000 people alive could do that, even given a few years!)... and those areas were not "lesbian dance PhD" style knowledge; they were technical applied engineering areas, the kind people get paid millions to work in.

Yet for some reason, poor Terry Davis was becoming insane, not smart, as he went through life.

Similarly, people doing various "blind Inuit pottery-making success gender gap PhD" style learning think they are acquiring knowledge, but many of the people here would agree they aren't. Or at least that they are acquiring knowledge of little consequence, which will not help them live more happily or effect positive change, or really any change, upon the world.

At most, you can see the knowledge you've acquired "fail" once things hit an extreme - once you're poor, sad, and alone in spite of a lifetime of knowledge acquisition.

Money, on the other hand, is very objective: everyone wants it, most need it, and nobody gives theirs up or prints more of it very easily. It's also instant: given 10 minutes, I can tell you within +/-1% how much liquidity I have access to at this very moment. That number will then be honored by millions of businesses and thousands of banks across the world, who will give me services, goods, precious metals, stakes in businesses, and government bonds in exchange for it. I can't get any such validation with knowledge.

So is it not a good "test of your knowledge" to try and acquire some of it?

Even if a 1:1 knowledge-money mapping is harmful, a, say, 0.2:1 knowledge-money mapping isn't. Instead it serves as a guideline: are you acquiring relevant knowledge about the world? Maybe you're just becoming a numerology quack, or a religious preacher, or a self-help guru, or a bullshitter, or whatever.

Which is not to say the knowledge-money test is flawless - it isn't - but it's the best one we have thus far. Maybe one could suggest other tests exchanging knowledge for things that money can't buy (e.g. affection), but most of those things are much harder to quantify, and trying to "game" them would feel dirty and immoral. Trying to "game" money is the name of the game; everyone does it, that's part of its role.

I generally agree with this comment, and I think the vast majority of people underestimate the importance of this factor. Personally, I consider "staying grounded" one of the primary challenges of what I'm currently doing, and I do not think it's healthy to stay out of the markets for extended periods of time.

precious mentals

I like this coinage.

This is a great comment, and I kind of really want to see it get written up as a top-level post. I've made this argument myself a few times, and would love to see it written up in a way that's easy to reference and link to.

I will ping you with the more cohesive, in-depth, syntactically and grammatically correct version when it's done - either this Monday or the next. It's been in draft form ever since I wrote this comment...

Though the main point I'm making here is basically just Taleb's Skin in the Game idea. He doesn't talk about the above specifically, but the idea flows naturally after reading him (granted, I read the book ~3yrs ago, so maybe I'm misremembering).

What was your old job, and what is your current job?

What have you been learning? How has it been working out for you?

A plurality of my effort has gone into studying agency-adjacent problems: how to detect embedded Bayesian models (turns out to be numerically unstable), markets/committees requiring unanimity as a more general model of inexploitable preferences than utility functions, abstraction, how to express world models, and lately ontology translation.

Other things I've spent time on:

  • Financial market models. Some progress there, but mostly I found that my statistical tools just aren't yet up to the task of (reliably) dealing with full-scale market data.
  • Statistical and optimization algorithms. Fair bit of progress there, and many of the insights feed my gears-level modelling posts (though obviously with most of the original math stripped out).
  • More general economic questions. For instance, a couple months ago I was thinking about when and to what extent working as a group outperforms working independently, and I ended up reading a book on theory of the firm. That led to some interesting thoughts about identifiability-in-hindsight as a constraint on incentive/contract design, which will probably be a post eventually.
  • Making the ideas/intuitions underlying gears-level models more explicit.
  • Reading the literature on aging.
  • Several more minor threads which didn't lead anywhere interesting.

thousands of other people who claim to know (and may even believe it themselves) but are wrong

Seems to me the greatest risk of this strategy is becoming one of them.

The risk can be mitigated by studying the textbooks of settled science first, and only trying to push the boundary of human knowledge later. But then, time becomes another bottleneck.

How exactly do people end up knowing little? 

I'd venture it starts with poor mental models, which are then used to address a huge universe of learnable information. This amplifies confirmation bias and leads to consistently learning the wrong lessons. So there's real value in optimizing your mental models before you even try to learn the settled knowledge, but of course the knowledge itself is the basis of most people's models.

Perhaps there's a happy medium in building out a set of models before you start work on any new field, and looking to those you respect in those fields for pointers on what the essential models are.

If you were the President or as rich as Jeff Bezos, you could use your power or money to just throw a lot more darts at the dartboard. There are plenty of research labs using old equipment, promising projects that don't get funding, post-docs who move into industry because they're discouraged about landing that tenure-track position, schools that can't attract competent STEM teachers partly because there's just so little money in it.

And of course, you can build institutions like OpenPhil to help reduce uncertainty about how to spend that money.

Using money or power to fix those problems is do-able. You don't have to know everything. You can be a dart, or, if you're lucky and hard-working, you can be a dart-thrower.

If you were the President or as rich as Jeff Bezos, you could use your power or money to just throw a lot more darts at the dartboard. 

From the OP:

Beyond that level, more money mostly just means more ability to spray and pray for solutions - which is not a promising strategy in our high-dimensional world.

When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.

I would love to see John, or anyone with an interest in the subject, do an explainer on all the ways science organizes and coordinates to solve problems.

In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.

When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.

Tho if you take analyses like Braden's seriously, quite possibly these filtering efforts have negative value, in that they are more likely to favor projects supported by insiders and senior people, who have historically been bad at predicting where the next good things will come from. "Science advances one funeral at a time," in a way that seems detectable from analyzing the literature.

This isn't to say that planning is worthless, and that no one can see the future. It's to say that you can't buy the ability to buy the right things; you have to develop that sort of judgment on your own, and all the hard evidence comes too late to be useful.

I'm starting to read Braden. The thing is, if Braden's analysis is true, then either:

  1. We can filter for the right people, we're just doing it wrong. We need to empower a few senior scientists who no longer have a dog in the fight to select who they think should be endowed with money for unconstrained research. Money can buy knowledge if you do it right.
  2. We truly can't filter for the right ideas. Either rich people need to do research, researchers need to get rich, or we need to just randomly dump money on researchers and hope that a few of them turn out to be the next Einstein.

I think there's a fairly rigorous, step-by-step, logical way to ground this whole argument we're having, but I think it's suffering from a lack of precision somehow...

There seems to be a lack of knowledge in the people who fund science about how to structure the funding in an effective way.

There are some experts who think that they have an alternative proposal that leads to a much better return on investment. Those experts have some arguments for their position, but it's not straightforward to know which expert is right, and that judgement can't be bought.

I suspect being good at finding better scientists is very close to having a complete theory of scientific advancement and being able to automate the research itself.

The extreme form of that idea is If we could evaluate the quality of scientists, then we could fully computerize research. Since we cannot fully computerize research, we therefore have no ability to evaluate the quality of scientists.

The most valuable thing to do would be to observe what's going on right now, and the possibilities we haven't tried (or have abandoned). Insofar as we have credence in the "we know nothing" hypothesis, we should blindly dump money on random scientists. Our credence should never be zero, so this implies that some nonzero amount of random money-dumping is optimal.

I think this is true if you're looking for near-perfect scientists but if you're assessing current science to decide who to invest in there are lots of things you can do to get better at performing such assessments (e.g. here).

In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.

How about a Metaculus-style prediction market for scientific advances given an investment in X person or project, where people put stake in the success of that person or project? Is this susceptible to bad incentives?

I think the greater concern is that it's hard to measure. And yes, you could imagine that owning shares against, say, the efficacy of a vaccine being above a certain level could be read as an incentive to sabotage the effort to develop it.

There are plenty of research labs using old equipment

The people in those research labs probably believe that newer equipment is likely to yield the knowledge that we are seeking. Our labs now have much better equipment and many more people than before the Great Stagnation started.

Expensive equipment has the problem that it forces the researchers to focus on questions that can actually be answered with the expensive equipment and those questions might not be the best to focus on. 

What does the NIH have to show for Bush doubling their budget?

Would philanthropy be better off if people just threw darts, or if they stuck to tried and true ways of giving? Isn't taking a gamble on a possibly great outcome for the overall good a form of genuine altruism?

Well, if you’re a subscriber to mainstream EA, the idea is that neither traditionalism nor dart-throwing is best. We need a rigorous cost-benefit analysis.

If one believes that, yet also that less cost-benefit analysis is needed (or tractable) in science, that needs an explanation.

Again, I think that this post is getting at something important, but the definitions here aren’t precise enough to make it easy to apply to real issues. Like, can a billionaire use his money to buy a cost/benefit analysis of an investment of interest? Definitely.

But how can he evaluate it? Does he have to do it himself? Does he focus on creating an incentive structure for the people producing it? If so, what about Goodhart’s Law - how will he evaluate the incentive structure?

It’s “who will watch the watchmen” all the way down, but that’s a pretty defeatist perspective. My guess is that institutions do best when they adopt a variety of metrics and evaluative methods to make decisions, possibly including some randomization just to keep things spicy.

I imagine most good deeds or true altruism take place on non-measurable scales. It's the thought that counts, right? A smile goes a long way; how can you measure a smile, or positive energy? Whether you throw a dart or follow a non-dart method, maybe the positive energy put out means something, especially now.

Look at all the good Bill Gates does, which I think is effective altruism, and he gets vilified. It's a weird thing. I remember watching a Patriot Act episode: https://www.youtube.com/watch?v=mS9CFBlLOcg

Welcome to LW, by the way :)

You’re doing something (a good thing) that we call Babble. Freely coming up with ideas that all circle around a central question, without worrying too much about whether they’re silly, important, obvious, or any of the other reasons we hold stuff back.

I’d suggest going further. Feel free to use this comment thread (or make a shortform) to throw out ideas about “why philanthropy might benefit from more (or less) cost/benefit analysis”.

We often suggest trying to come up with 50 ideas all in one go. Have at it!

By and large, the President of the United States can order people to do things, and they will do those things.

I think this exaggerates the power of the president. Obama did try to order Gitmo closed. Trump did try to withdraw troops from Afghanistan and failed.

On the specific topic of COVID-19, Trump spoke about having a vaccine quite early, likely because he believed he could just approve it and get it distributed even if the evidence for the vaccine was only a little better than what Benjamin Jesty had.

It turns out that the president doesn't have the power to just approve a vaccine without it having gone through "enough" testing. 

This is one of those posts, like "pain is not the unit of effort," that combines a memorable and informative and very useful and important slogan with a bunch of argumentation and examples to back up that slogan. I think this type of post is great for the LW review.

When I first read this post, I thought it was boring and unimportant: trivially, there will be some circumstances where knowledge is the bottleneck, because for pretty much all X there will be some circumstances where X is the bottleneck.

However, since then I've ended up saying the slogan "when money is abundant, knowledge is the real wealth" probably about a dozen separate times when explaining my career decisions, arguing with others at CLR about what our strategy should be, and even when deliberating to myself about what to do next. I guess longtermist EAs right now do have a surplus of money and a shortage of knowledge (relative to how much knowledge is needed to solve the problems we are trying to solve...) so in retrospect it's not surprising that this slogan was practically applicable to my life so often.

I do think there are ways the post could be expanded and improved. Come to think of it, I'll make a mini-comment right here to gesture at the stuff I would add to it if I could:

1. List of other ideas for how to invest in knowledge. For example, building a community with good epistemic norms. Or paying a bunch of people to collect data / info about various world developments and report on them to you. Or paying a bunch of people to write textbooks and summaries and explainer videos and make diagrams illustrating cutting-edge knowledge (yours and others').

2. Arguments that in fact, right now, longtermist EAs and/or AI-risk-reducers are bottlenecked on knowledge (rather than money, or power/status)

--My own experience doing cost-benefit analyses is that interventions/plans vary in EV by OOMs and that it's common to find new considerations or updated models that flip the sign entirely, or add or subtract a few OOMs, for a given intervention/plan. This sure seems like a situation in which more knowledge is really helpful compared to just having more money & ability to execute on plans.

--Everyone I've talked to in government/policy says that the bottleneck is knowledge. Nobody knows what to advocate for right now because everything is so uncertain and backfirey. (See previous point, lol)

--One might counterargue that it is precisely for the above reasons that we shouldn't invest in knowledge; we aren't getting much knowledge out of our research and we still won't in the future. Instead, our best hope is to accumulate lots of power and resources and then when the crucial crunch time period comes, hopefully it'll be clear what to do. Because the world might hand us knowledge on a silver platter, so to speak, in the form of new evidence. No need to deduce it in advance.

--(I have a couple responses to the above counterargument, but I take it seriously and think that others should too)

--Reasoning by analogy, making AI go well sure seems like the sort of problem where knowledge is the bottleneck rather than money or power. It seems a lot more like figuring out the laws of physics and building a safe rocket before anyone gets killed in an unsafe rocket, or building a secure merchant drone operating system before anyone else builds an unsecure one, than e.g. preventing malaria or reforming US animal welfare laws.


Curated.

If I understood it correctly, the central point of this post is that very often, knowing what to do is a much larger problem than having the ability to do things, i.e., money and power. I often like to say that planning is an information problem for this reason. This post is an excellent articulation of this point, probably the best I've seen.

It's an important point. Ultimately, it is precisely this point that unifies epistemic and practical rationality: the skill of figuring out what's true and the skill of achieving success. When you recognize that success is hard because you don't know what to do, you appreciate why understanding what is actually true is darned important, and why figuring out how to discover truth is among the best ways to accomplish goals whose solution isn't known.

This can all get applied downstream in Value of Information calculations and in knowledge-centric approaches to planning. It's good stuff. Thanks!

Great summary! A nit:

our lizard-brains love politics

It's more likely our monkey (or ape) brains that love politics - e.g. https://www.bbc.co.uk/news/uk-politics-41612352

On the note of monkey business - what about investments in collective knowledge and collaboration? If you've not come across this, you might like it: https://80000hours.org/articles/coordination/

EDIT to add some colour to my endorsement of the 80000hours link: I've personally found it beneficial in a few ways. One such is that although the value of coordination is 'obvious', I nevertheless have recognised in myself some of the traits of 'single-player thinking' described.

This is an idea I've been struggling to wrap my head around for a while now. I usually think of it as "capital vs knowledge" instead of "money vs knowledge". But since knowledge is a form of capital, your phrasing is more accurate.

I also agree the low hundred thousands is about where this happens for someone who lives in the United States and works a full-time job. I wonder how this number changes if you have passive income instead.

Excellent article! I agree with your thesis, and you’ve presented it very clearly.

I largely agree that we cannot outsource knowledge. For example, you cannot outsource the knowledge to play the violin, and you must invest in countless hours of deliberate practice to learn to play the violin.

A rule of thumb I like is only to delegate things that you know how to do yourself. A successful startup founder is capable of comfortably stepping into the shoes of anyone they delegate work to. Otherwise, they would have no idea what high-quality work looks like and how long work is expected to take. The same perspective applies to wanting to cure ageing with an investment of a billion dollars. If you don’t know how to do the work yourself, you have little chance of successfully delegating that work.

Do you think outsourcing knowledge to experts would be more feasible if we had more accurate and robust mechanisms for distinguishing the real experts from the noise?

This post’s claim seems to have a strong and weak version, both of which are asserted at different places in the post.

  1. Strong claim: At some level of wealth and power, knowledge is the most common or only bottleneck for achieving one’s goals.
  2. Weak claim: Things money and power cannot obtain can become the bottleneck for achieving one’s goals.

The claim implied by the title is the strong form. Here is a quote representing the weak form:

“As one resource becomes abundant, other resources become bottlenecks. When wealth and power become abundant, anything wealth and power cannot buy becomes a bottleneck - including knowledge and expertise.”

Of course, knowing arbitrary facts (N values of an infinite sequence of randomly generated numbers) is not what’s meant by “knowledge and expertise.” What is?

I’d suggest “sufficient and necessary knowledge to achieve a given goal.” A person who can achieve a goal, given some reasonable but not excessive amount of time and money, is an expert at achieving that goal.

As others pointed out, just because a person calls themselves an expert in goal G doesn’t mean that they are. John’s point is that being able to identify an expert in goal G, or an expert in identifying experts in goal G, when you yourself wish to achieve goal G, is its own form of expertise.

This in turn suggests that finding, sharing and verifying expertise is a key problem-solving skill. At any given time, we have:

  1. Questions nobody can answer.
  2. Answers nobody can understand.
  3. Answers nobody can verify.

To these short statements, insert the qualifiers “crucial,” “presently,” “efficiently,” and so on. Some of the most important questions are about other questions, such as “what questions should we be asking?”

I expect these problems to be simultaneous and mutually-reinforcing.

Based on my own experience and the experience of others I know, I think knowledge starts to become taut rather quickly - I’d say at an annual income level in the low hundred thousands.

I really appreciate this specific calling out of the audience for this post. It may be limiting, but the audience it limits to likely has strong overlap with LW readership.

Everything money can buy is “cheap”, because money is “cheap”.

I feel like there's a catch-22 here, in that there are many problems that probably could be solved with money, but I don't know how to solve them with money--at least not efficiently. As a very mundane example, I know I could reduce my chance of ankle injury during sports by spending more money on shoes. But I don't know which shoes will actually be cost-efficient for this, and the last time I bought shoes I stopped using two different pairs after just a couple months.

Unfortunately I think that's too broad of a topic to cover and I'm digressing.

 

Overall, coming back to this, I'm realizing that I don't actually have any way to act on this piece. Even though I am in the intended audience, and I have been making a specific effort in my life to treat money as cheap and plentiful, I am not seeing:

  • Advice on which subjects are likely to pay dividends, or why
  • Advice on how to recover larger amounts of time or effort by spending money more efficiently
  • Discussion of when those tradeoffs would be useful

Not having these seems especially silly given, for example, Zvi's Covid posts, which are a pretty clear modern-day example of the Louis XV smallpox problem.

I would be interested in seeing someone work through how it is that people on LW ended up trusting Zvi's posts and how that knowledge was built. But I would expect that to turn into social group dynamics and analysis of scientific reasoning, and I'm not sure that I see where the idea of money's abundancy would even come into it.

Overall, coming back to this, I'm realizing that I don't actually have any way to act on this piece. Even though I am in the intended audience, and I have been making a specific effort in my life to treat money as cheap and plentiful, I am not seeing:

  • Advice on which subjects are likely to pay dividends, or why
  • Advice on how to recover larger amounts of time or effort by spending money more efficiently
  • Discussion of when those tradeoffs would be useful

Not having these seems especially silly given, for example, Zvi's Covid posts, which are a pretty clear modern-day example of the Louis XV smallpox problem.

Sounds like you want roughly the sequence Inadequate Equilibria.

In the space of aging (or models in bioscience research in general), you should contact Alexey Guzey, Jose Ricon, Michael Nielsen, Adam Marblestone, and Laura Deming. You'd particularly click with some of these people, and many of them recognize the low number of independent thinkers in the area.

I think you have a kind of thinking that almost everyone else in aging I know seems to lack (if I showed your writing to most aging researchers, they'd most likely glaze over it), so writing up, say, a physical-principles framework for aging could result in a lot of people wanting to fund you (a la Pascal's wager - there are LOTS of people willing to throw money into the field even if it doesn't have a huge chance of producing results - and a good physical framework can make others want you to make the most out of your time, especially as many richer/older people lack the neuroplasticity to change how aging research is done). Many, many papers have already been written in the field (many by people guessing at what matters most) - a lot of them very messy and not very first-principles (even JP de Magalhaes's work, while important, is kind of "messy" guessing at the factors that matter).

Are you time-limited? Do you have all the money needed to maximize your output on the world? (Note: for making the most of your limited time, I generally recommend being like Mati Roy and trying to create a simulation of yourself that future you/others can search, which generally requires a lot of HD/streaming - though even that is not that expensive.)

It seems that you can understand a broad range of extremely technical fields that few other people do (esp. optimization theory and category theory), and that you get a lot out of what you read (the time investment of other people reading a technical textbook may not be as high as that of you reading one) - thus you may be more suited for theoretical/scalable work than for work that's less generalizable/scalable. (One issue with bioscience research is that most people in it spend a lot of time on busywork that may be automated later, so most biologists aren't as broad or generalizable as you are, and you can put together broad frameworks that improve the efficiency/rigor of the future people who read you - so you should optimize for things that are highly generalizable.)

[You also put it all in a clear/explainable fashion that makes me WANT to return to reading your posts, which is not something I can say for most textbooks.]

There are tradeoffs between spending more time on ONE area vs spending time on ANOTHER area of academic knowledge - though there are areas where good thinking in one area can transfer to another (eg optimization theory => whole cell modeling/systems biology in biology/aging). Building general purpose models (if described well) could be an area you might have unique comparative advantage over others in, where you could guide someone else's thinking on the details even if you did not have the time to look at the individual implementations of your model on the system at hand. 

If you become someone who everyone else in the area wants to follow (eg Laura Deming), you can ask questions and get pretty much every expert swarming over you, wanting to answer them.

You seem good at theory (which is low-cost), but how much would you ideally want to budget for sample lab space and experiments? [The more details you put in your framework - along with how you will measure the deliverables - the easier it would be to get some sort of starter funding for your ideas.] Doing some small cheap study (and putting all the output in an open online format that transcends academic publishing) can help net you attention and funding for more studies. (It certainly seems that with every nascent field it takes a certain something to get noticed, but once you do get noticed, things can get much easier over time, particularly if you're the independent kind of person.) Wrt biology, I do get the impression that you don't interact much with other biologists, which might make the communication problems more difficult for now [like, if I sent your aging posts as-is to most biologists I know, I don't think they would be particularly responsive or excited].

BTW - regarding wealth - fightaging has a great definition at https://www.fightaging.org/archives/2008/02/what-is-wealth/

Wealth is a measure of your ability to do what you would like to do, when you would like to do it - a measure of your breadth of immediately available choice. Therefore your wealth is determined by the resources you presently own, as everything requires resources.

Generally speaking, due to aging [and the loss of potential that comes with it], most people's wealth decreases with age (it's said that the wealthiest people are really those just born) - however, your ability to imagine what you can do with wealth (within an affordance-space framework - what you can imagine doing over the next year if given all the resources you can handle) can increase over time. Mental models are only wealth inasmuch as they actively improve people's decision-making on the margin relative to an alternative model (they are necessary for innovation, but there are now so many mental models that taking time to understand one reduces the time one has to understand another). I do believe that compressible mental models (or network models) that explain a principle elegantly can offload the time investment it takes to use a model to act on a decision (eg superforecasters use elegant models that others believe and can act on - thus knowing when to use the expertise of superforecasters can help decision-making). Not many people can create an elegant mental model, and fewer can create one that is useful on top of all the other models that have been developed (useful in the sense that it makes it more worthwhile for others to read your model than all the confusing renditions used by others) - obviously there is vast space for improvement on this front (as you can see if you read Quantum Country), as most people forget the vast majority of what they read in textbooks or in conversations with others. Presentism is an ongoing issue, as more papers/online content are published than there are total eyeballs to read them (plus all the material published in the past).

The best kind of wealth you can create, in this sense, is a model/framework/tool that everyone uses. Think of how much wealth was created with the invention of a new programming language, for example, or with Stack Exchange/Hacker News, or a game engine, or the wealth that could be created by automating tedious steps in biology, or the kind that makes it far easier for other people to make or write almost anything. The more people cite you, the more wealth and influence (of a certain kind) you get. This generalizes better than putting your entire life into studying a single protein or model organism, especially if you find a model/technique that is easily adoptable and makes it easy to do/automate high-throughput "-omics" of all organisms and interventions at once (making it possible for others to speed up and generalize biology research where it used to be super-slow). Bonus points if you make it machine-readable and put it in a database that can be queried, so that it is useful even if no one reads it at first [as the amount of data generated grows faster than the total mental bandwidth/capacity of all the humans who could read it].

[Btw, attention also correlates with wealth, and money/attention/wealth is competitive in a way that knowledge is not. (Wisdom may be knowing which knowledge to read in which order - wisdom is how you use knowledge to maximize the wealth you can get from that knowledge.)]

[Shaping people's framework by causing them to constantly refer to your list of causes, btw, is another way to create influence/wealth - but this may get in the way of maximizing social wealth over a lifetime if your frameworks end up preventing people from modeling or envisioning how they can discover new anomalies in the data that do not fit within those frameworks. This is also why we just need a better concrete framework with physical observables for measuring aging rate, where our ability to characterize epigenetic aging is a local improvement.]

In the area of aging there is already too much "knowledge" (though not all of it particularly insightful) - but does the sum of all published aging papers constitute knowledge? Laura Deming mentions on her Twitter that she thinks about what not to read, rather than what to read, and recommends students study math/CS/physics rather than biochemistry. There may be a way to compress all this knowledge into a more organized physical-principles format that better helps other people map what counts as knowledge and what doesn't - but at this moment the sum of all aging research is still a disorganized mess, and it may be that the details of much of what we know now will be superseded by new high-throughput publications that release data/metadata rather than prose papers (along with a publicly accessible annotation service that better guides people as to which aging papers represent true progress and which will simply become obsolete quickly). Guiding people to the physical insight of a cell is more important for this kind of true understanding of aging, even though we can still get things done through rudimentary insight-free guesses like more work on rapamycin and calorie restriction.

I think you have the wrong idea about money. No one in the world claims money can solve every problem, but people still write about this for no reason. Also, the people who write about this aren't that rich either. They write as if having more money were a bad thing that doesn't help anyone.

Sure, if you had only money, could you cure cancer? Nope. But what if you knew how to cure cancer but didn't have the money to set up the machines or make the medicines? Your knowledge would be useless.

Money can solve a lot of problems knowledge can't. Imagine your mother is in the hospital and requires immediate surgery that costs around 100,000 dollars. Would you prefer having the knowledge to cure her, or the money to pay for her treatment?

Money can help you set up 100 schools and teach around 100,000 kids. How is having some knowledge going to fare here?

Money can feed 100,000 homeless people every day. How is having some knowledge going to tackle this?

Money is like a car, but people also want it to fly and go underwater, and it can't, so they think it's useless.

The notion that money isn't important or that "knowledge is the real wealth" wasn't intended to be a universal law; it's only applicable in cases where money is sufficiently abundant (as the title says). The scenarios you list do not meet that condition, so they are not situations that the OP intended to address.

100%. I think too many people are unreflectively seeking money past the point where its marginal returns are worth it for them, and they should think about that more explicitly. I do think (and expect you agree, but it's useful to put in the conversation) that it's good that some aligned people work on money, and that doing so is some people's comparative advantage. But I expect the people for whom that is true to know it.

I do indeed agree with that.


The best investments in knowledge are mental models that can be applied across domains (some of which were mentioned in the post) and unchanging/permanent/durable knowledge like that in the STEM fields. This provides both leverage (from the cross-disciplinary latticework of mental models) and allows compounding to work as your knowledge compounds over the years.

The world would undoubtedly be better if more Data Scientists became monks. 

Access to scientific discoveries is not given to those who invent them, but rather to those who buy them on the market from whoever commercializes the technology (probably not the inventor). If there were a known cure for cancer, the rich would have better access to it than those who know a lot about curing cancer. Personal knowledge does nothing for you in terms of getting access to those things in a capitalist society.

Things that money can't buy are by definition outside of the capitalist system. If you are investing in knowledge in order to get access to things, you sorta have to assume that you will hold your knowledge back from the public/market, which is basically unethical. 

There is obviously an issue with free-riding here where the public will gain access to "knowledge capital" that was built by others, and so people aren't incentivized to develop knowledge, and if they do they want to keep it private or at the very least charge a lot of money for it. 

The solution probably isn't to have more people who are independently wealthy quit their jobs and work in science for peanuts. It's an irrational decision by them because they are unlikely to personally develop any major breakthrough, whereas by making more money they could buy access to any breakthrough made by the community. It also displaces those that aren't rich from pursuing science as a career, and the rationale given here has perverse incentives.

Better to invest in scientists generally. Even if "spray and pray" is a bad idea, surely it's better than "invest only in one person who is also the person I'm most irrationally biased towards (myself)". 

Also, gaining "general-purpose knowledge capital" doesn't really help the situation. Progress is disproportionately driven by a few people that are highly knowledgeable and motivated in specific areas, along with being creative and/or lucky. Learning the basics of everything makes you a Renaissance man but isn't going to help with putting a man on the moon. Treating a cell like a capacitor is neat but probably misses a lot of details once you get into it. Digging deeply into a problem is necessary, but inherently risky as a lot of knowledge, unlike actual capital, turns out to be quite useless for practical considerations and you can't exchange it so easily. 


Is it really true that money can't buy knowledge?

We can ask the most knowledgeable person we know to name the most knowledgeable person they know, and do that until we find the best expert. Or alternatively, ask a bunch of people to name a few, and keep walking this graph for a while.

This won't let us buy knowledge that doesn't exist, but seems good enough for learning from experts, given enough money and modern communication technology that Louis XV didn't have.

This is an excellent algorithm for finding people with high status. Unfortunately, the correlation between status and knowledge is unreliable at best.

Caveat: ask each person to name someone they personally worked with.

Hard to get right, but not sure whether it's harder than knowledge investment.

Wouldn't have helped Louis XV. We might need infrastructure in place that would incentivize people to make themselves easy to find.

I agree. But knowledge was abundant for him too. What wasn't abundant was critical thinking. And that was the problem from the start.

The idea of knowledge over money is romantic, and one that, I agree, is at times the right long-run bet. I understand you're arguing for diminishing returns here, but the issue I've found is that some tiers of wealth *on average* give access to tiers of collective knowledge that meaningfully impact your ability to scale with lower effort (and thus, at equal effort, faster).

So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.

Also, this knowledge only matters if you do something useful with it, which I'm convinced that you will. Many other people are not able to create useful knowledge and thus may be better suited for earning-to-give.

This is why money printing by itself doesn't generate more wealth. It's human ingenuity that comes from education/knowledge that actually moves the needle and improves our living standards over time. 


I don't like this post because it ignores that, instead of yachts, you can simply buy knowledge with money. Plenty of research isn't happening because it isn't being funded.

This runs into the same problem: unless you're already an expert, you can't distinguish actually-useful research from the piles of completely useless research (much of it by relatively high-status researchers).

An example close to LW's original goals: imagine an EA five years ago, wanting to donate to research on safe/friendly AI. They hear somebody argue about how important it is for AI research to be open-source so that the benefits of AI can be reaped by everyone. They're convinced, and donate to a group trying to create widely-available versions of cutting-edge algorithms. From an X-risk standpoint, they've probably done close-to-nothing at best, and there's an argument to be made that their impact was net harmful.

One needs to already have some amount of expertise in order to distinguish useful research to fund.


Can you come up with an example that isn't AI? Most fields aren't rife with infohazards, and 20% certainty of funding the best research will just divide your impact by a factor 5, which could still be good enough if you've got millions.

For what it's worth, given the scenario that you've at least got enough to fund multiple AI researchers and your goal is purely to fix AI, I concede your point.

How about cancer research? This page lists success rates of clinical trials in different subfields; oncology clinical trials have a success rate of around 4%. I would also guess that a large chunk of the "successes" in fact do basically nothing and made it through largely by being the one-in-twenty which hit 95% significance by chance, or managed to p-hack, or the like. From an inside view, most cancer research I've seen indeed looks pretty unhelpful, based on my understanding of biology and of how-science-works in general (and this goes double for any cancer research "using machine learning", which is a hot subfield).
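
To spell out the one-in-twenty arithmetic, here is a rough sketch; the base rate and power are assumptions of mine, not figures from the linked page:

```python
# If few candidate treatments truly work, "successes" at the 5% significance
# threshold are dominated by false positives.
base_rate = 0.02  # assumed fraction of candidates with a real effect
power     = 0.8   # assumed chance a real effect passes its trial
alpha     = 0.05  # chance a useless candidate passes by luck

true_hits  = base_rate * power        # 0.016
false_hits = (1 - base_rate) * alpha  # 0.049
print(true_hits + false_hits)                # ~6.5% of trials "succeed"
print(true_hits / (true_hits + false_hits))  # only ~25% of successes are real
```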

More generally: we live in a high-dimensional world. Figuring out "which direction to search in" is usually a much more taut constraint than having the resources to search. Brute-force searching a high-dimensional space requires resources exponential in the dimension of the space.
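
A quick illustration of that exponential blow-up, using a hypothetical grid search:

```python
# Testing just 10 values per dimension needs 10**d trials to cover the space.
for d in (1, 3, 6, 10, 20):
    print(f"{d} dimensions -> {10 ** d:,} trials")
# 20 dimensions already demands 10^20 trials; no research budget covers that.
```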

Combine that with misaligned incentives for researchers, and our default expectation should usually be that finding the right researchers to fund is more of a constraint than resources.