Specializing in Problems We Don't Understand

Most problems can be separated pretty cleanly into two categories: things we basically understand, and things we basically don’t understand. Some things we basically understand: building bridges and skyscrapers, treating and preventing infections, satellites and GPS, cars and ships, oil wells and gas pipelines and power plants, cell networks and databases and websites. Some things we basically don’t understand: building fusion power plants, treating and preventing cancer, high-temperature superconductors, programmable contracts, genetic engineering, fluctuations in the value of money, biological and artificial neural networks. Problems we basically understand may have lots of moving parts and require many people with many specialties, but they’re generally problems which can be reliably solved by throwing resources at them. There usually isn’t much uncertainty about whether the problem will be solved at all, or a high risk of unknown unknowns, or a need for foundational research in order to move forward. Problems we basically don’t understand are the opposite: they are research problems, problems which likely require a whole new paradigm.

In agency terms: problems we basically understand are typically solved via adaptation-execution rather than goal-optimization. Problems we basically don’t understand are exactly those for which existing adaptations fail.

Main claim underlying this post: it is possible to specialize in problems-we-basically-don’t-understand, as a category in its own right, in a way which generalizes across fields. Problems we do understand mainly require relatively-specialized knowledge and techniques adapted to solving particular problems. But problems we don’t understand mainly require general-purpose skills of empiricism, noticing patterns and bottlenecks, model-building, and design principles. Existing specialized knowledge and techniques don’t suffice - after all, if the existing specialized knowledge and techniques were sufficient to reliably solve the problem, then it wouldn’t be a problem-we-basically-don’t-understand in the first place.

So… how would one go about specializing in problems we basically don’t understand? This post will mostly talk about how to choose what to formally study, and how to study it, in order to specialize in problems we don’t understand.

Specialize in Things Which Generalize

Suppose existing models and techniques for hot plasmas don’t suffice for fusion power. A paradigm shift is likely necessary. So, insofar as we want to learn skills which will give us an advantage (relative to existing hot plasma specialists) in finding the new paradigm, those skills need to come from some other area - they need to generalize from their original context to the field of hot plasmas. We want skills which generalize well.

Unfortunately, a lot of topics which are advertised as “very general” don’t actually add much value on most problems in practice. A lot of pure math is like this - think abstract algebra or topology. Yes, they can be applied all over the place, but in practice the things they say are usually either irrelevant or easily noticed by some other path. (Though of course there are exceptions.) Telling us things we would have figured out anyway doesn’t add much value.

There are skills and knowledge which do generalize well. Within technical subjects, think probability and information theory, programming and algorithms, dynamical systems and control theory, optimization and microeconomics, linear algebra and numerical analysis. Systems and synthetic biology generalize well within biology, mechanics and electrodynamics are necessary for Fermi estimates in most physical sciences, and continuum mechanics and PDEs are useful for a wide variety of areas in engineering and science.

But just listing subjects isn’t all that useful - after all, a lot of the most generally-useful skills and techniques don’t explicitly appear in a university course catalogue (or if they do, they appear hidden in a pile of more-specialized information). Many aren’t explicitly taught at all. What we really need is an outside-view criterion or heuristic, some way to systematically steer toward generalizable knowledge and skills.

To Build General Problem-Solving Capabilities, Tackle General Problems

It sounds really obvious: if we want to build knowledge and skills which will apply to a wide variety of problems, then we should tackle a wide variety of problems. Then, steer toward knowledge and skills which address bottlenecks relevant to multiple problems.

Early on in a technical education, this will usually involve fairly basic things, like "how do I do a Fermi estimate for this design?" or "what are even the equations needed to model this thing?" or "how do the systems involved generally work?" - questions typically answered in core classes in physics or engineering, and advanced classes in biology or economics. Propagating back from that, it will also involve the math/programming skills needed to both think about and simulate a wide variety of systems.

But even more important than coursework, having a wide variety of problems in mind is directly useful for learning to actually use the relevant skills and knowledge. A lot of the value of studying generalizable knowledge/skills comes from being able to apply them in new contexts, very different from any problem one has seen before. One needs to recognize, without prompting, situations-in-the-wild in which a model or technique applies.

A toy example which I encountered in the wild: Proset is a variant of the game Set. We draw a set of cards with random dots of various colors, and the goal is to find a (nonempty) subset of the cards such that each color appears an even number of times.

The game Proset: find a subset of cards with an even number of each color.

How can we build a big-O-efficient algorithmic solver for this game? Key insight (spoiler):

Write down a binary matrix in which each column is a card, each row is a color, and the 0/1 in each entry says whether that color is present on that card. Then, the game is to find the nullspace of that matrix, in arithmetic mod 2. We can solve it via row-reduction.

(Read the spoiler text before continuing, but don’t worry if you don’t know what the jargon means.)

If we’re comfortable with linear algebra, then finding a nullspace via row-reduction is pretty straightforward. (Remember the claim from earlier that the things abstract algebra says are “usually either irrelevant or easily noticed by some other path”? The generalization of row reduction to modular arithmetic is the sort of thing you’d see in an abstract algebra class, rather than a linear algebra class, but if you understand row-reduction then it’s not hard to figure out even without studying abstract field theory.) Once we have a reasonable command of linear algebra, the rate-limiting step to figuring out Proset is to notice that it’s a nullspace problem.
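To make the spoiler concrete, here’s a minimal sketch of such a solver (my own illustration for this post, not canonical code). Each card is encoded as an integer bitmask of colors, and the elimination below is Gaussian elimination over GF(2) - the "row-reduction in arithmetic mod 2" described above - while tracking which original cards combine into each reduced vector:

```python
def proset_solve(cards):
    """cards: list of ints, where bit i is set iff color i appears on the card.
    Returns indices of a nonempty subset whose colors all appear an even
    number of times (i.e. whose bitmasks XOR to 0), or None if none exists."""
    basis = {}  # pivot bit -> (reduced mask, set of original card indices)
    for i, card in enumerate(cards):
        mask, used = card, {i}
        while mask:
            pivot = mask.bit_length() - 1    # highest set bit
            if pivot not in basis:
                basis[pivot] = (mask, used)  # card is independent so far
                break
            b_mask, b_used = basis[pivot]
            mask ^= b_mask                   # eliminate the pivot (mod-2 row op)
            used ^= b_used                   # symmetric difference of index sets
        else:
            return sorted(used)  # mask reduced to 0: these cards XOR to 0
    return None

# Toy example with two colors: cards 0 and 2 are identical, so together
# each color appears an even (zero or two) number of times.
print(proset_solve([0b01, 0b11, 0b01]))  # -> [0, 2]
```

As a bonus, the linear-algebra framing explains why a deal always has a solution whenever more cards are dealt than there are colors: more than c vectors in a c-dimensional space over GF(2) must be linearly dependent.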

That noticing step requires its own kind of practice, quite different from the relatively rote exercises which often show up in formal studies.

Keeping around 10 or 20 interesting problems on which to apply new techniques is a great way to practice this sort of thing. In particular, since the point of all this is to develop skills for problems which we don’t understand or know how to solve, it’s useful to keep around 10 or 20 problems which you don’t understand or know how to solve. For me, it used to be things like nuclear fusion energy, AGI, aging, time travel, solving NP-complete problems, government mechanism design, beating the financial markets, restarting Moore's law, building a real-life flying broomstick, genetically engineering a dragon, or factoring large integers. Most classes I took in college were chosen for likely relevance to at least one of these problems (usually more than one), and whenever I learned some interesting new technique or theorem or model I’d try to apply it to one or more of these problems. When I first studied linear algebra, one of the first problems I applied it to was constructing uncorrelated assets to beat the financial markets, and I also tried for quite some time to apply it to integer factorization (and later various NP-complete problems). Those were the sorts of experiences which built the mental lenses necessary to recognize a modular nullspace problem in Proset.
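For a flavor of that first application, here’s a hedged sketch of one way "constructing uncorrelated assets" is a linear algebra problem (an illustration on synthetic data, not necessarily what I actually did back then): portfolios built along the eigenvectors of a covariance matrix have mutually uncorrelated returns.

```python
# A sketch on synthetic data: eigen-portfolios are uncorrelated.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(size=(250, 4))           # 250 days, 4 independent factors
returns = raw @ rng.normal(size=(4, 4))   # hypothetical correlated asset returns

cov = np.cov(returns, rowvar=False)       # sample covariance of the assets
eigvals, eigvecs = np.linalg.eigh(cov)    # symmetric eigendecomposition
portfolios = returns @ eigvecs            # column j: returns of eigen-portfolio j

# The new sample covariance is diagonal (up to floating-point error):
# the eigen-portfolios are uncorrelated, with variances given by eigvals.
print(np.round(np.cov(portfolios, rowvar=False), 8))
```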

Use-Cases of Knowledge and Suggested Exercises

If the rate-limiting step is to notice that a particular technique applies (e.g. noticing that Proset is a nullspace problem), then we don’t even necessarily need to be good at using the technique. We just need to be good at noticing problems where the technique applies, and then we can google it if and when we need it. This suggests exercises pretty different from exercises in a lot of classes - for instance, a typical intro linear algebra class involves a lot of practice executing row reduction, but not as much recognizing linear systems in the wild.

More generally: we said earlier that problems-we-basically-understand are usually solved by adaptation-execution, i.e. executing a known method which usually works. In that context, the main skill-learning problem is to reliably execute the adaptation; rote practice is a great way to achieve that. But when dealing with problems we basically don’t understand, the use-cases for learned knowledge are different, and therefore require different kinds of practice. Some example use-cases for the kinds of things one might formally study:

  • Learn a skill or tool which you will later use directly. Ex.: programming classes.
  • Learn the gears of a system, so you can later tackle problems involving the system which are unlike any you've seen before. Ex.: physiology classes for doctors.
  • Learn how to think about a system at a high level, e.g. enough to do Fermi estimates or identify key bottlenecks relevant to some design problem. Ex.: intro-level fluid mechanics.
  • Uncover unknown unknowns, like pitfalls which you wouldn't have thought to check for, tools you wouldn't have known existed, or problems you didn't know were tractable/intractable. Ex.: intro-level statistics, or any course covering NP-completeness.
  • Learn jargon, common assumptions, and other concepts needed to effectively interface to some field. Ex.: much of law school.
  • Learn enough to distinguish experts from non-experts in a field. Ex.: programming or physiology, for people who don't intend to be programmers/doctors but do need to distinguish good work from quackery in these fields.

These different use-cases suggest different strategies for study, and different degrees of investment. Some require in-depth practice (like skills/tools), others just require a quick first pass (like unknown unknowns), and some can be done with a quick pass if you have the right general background knowledge but require more effort otherwise (like Fermi estimates).

What kind of exercises might we want for some of these use-cases? Some possible patterns for flashcard-style practice (a concrete sketch of a few such cards follows the list):

  • Include some open-ended, babble-style questions. For instance, rather than "What is X useful for?", something like "Come up with an application for X which is qualitatively different from anything you've seen before". (I've found that particular exercise very useful - for instance, trying to apply coherence theorems to financial markets led directly to the subagents post.)
  • Include some pull-style questions, i.e. questions in which you have to realize that X is relevant. For instance "Here's a problem in layman's terms; what keywords should you google?" or "Here's a system, what equations govern it?". These are how problems will show up in real life.
  • Questions of the form "which of these are not relevant?" or "given <situation>, which of these causes can we rule out?" are probably useful for training gearsy understanding, and reflect how the models are used in the real world.
  • Debugging-style questions, i.e. "system X has weird malfunction Y, what's likely going on, and what test should we try next?". This is another one which reflects how gearsy models are used in the real world.
  • For unknown unknowns, questions like "Here's a solution to problem X; what's wrong with it?". (Also relevant for distinguishing experts from non-experts.)
  • For jargon and the like, maybe copy some sentences or abstracts from actual papers, and then translate them into layman's terms or otherwise say what they mean.
  • Similarly, a useful exercise is to read an abstract and then explain why it's significant/interesting (assuming that it is, in fact, significant/interesting). This would mean connecting it to the broader problems or applications to which the research is relevant.
  • For recognizing experts, I'd recommend exercises like "Suppose you want to find someone who can help with problem X, what do you google for?".
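To make these patterns concrete, here is a minimal sketch of what a few such cards might look like as data - all prompts are invented illustrations, not a curated deck:

```python
import random

# A minimal sketch of a deck mixing the exercise patterns above.
flashcards = [
    {"style": "babble",
     "prompt": "Come up with an application of information theory which is "
               "qualitatively different from anything you've seen before."},
    {"style": "pull",
     "prompt": "A factory's output varies with staffing, machine uptime, and "
               "supply delays. What equations govern it? What would you google?"},
    {"style": "debugging",
     "prompt": "A simulation conserves energy at small time steps but blows up "
               "at large ones. What's likely going on, and what test comes next?"},
    {"style": "unknown-unknowns",
     "prompt": "Here's a proposed polynomial-time exact algorithm for set "
               "cover. What's wrong with it?"},
]

print(random.choice(flashcards)["prompt"])  # drill a random card
```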

Cautionary note: I have never made heavy use of exercises-intended-as-exercises (including flashcards), other than course assignments. I brainstormed these exercises to mimic the kinds of things I naturally ended up doing in the process of pursuing the various hard problems we talked about earlier. (This was part of a conversation with AllAmericanBreakfast, where we talked about exercises specifically.) I find it likely that explicitly using these sorts of exercises would build similar skills, faster.

Summary

Problems-we-basically-understand are usually solved by executing specialized strategies which are already known to usually work. Problems-we-basically-don’t-understand are exactly those for which such strategies fail. Because the specialized techniques fail, we have to fall back on more general-purpose methods and models. To specialize in problems-we-basically-don’t-understand, specialize in skills and knowledge which generalize well.

To learn the sort of skills and knowledge which are likely to generalize well to new, poorly-understood problems, it’s useful to have a fairly-wide variety of problems which you basically don’t understand or know how to solve. Then, prioritize techniques and models which seem relevant to multiple such problems. The problems also provide natural applications in which to test new techniques, and in particular to test the crucial skill of recognizing (without prompting) situations-in-the-wild in which the technique applies.

This sort of practice differs from the exercises often seen in classes, which tend to focus more on reliable execution of fixed strategies. Such exercises make sense for problems-we-basically-understand, since reliable execution of a known strategy is the main way we solve such problems. But learned skills and knowledge have different use-cases for problems-we-basically-don’t-understand, and these use-cases suggest different kinds of exercises. For instance, take a theorem, and try to find a system to apply it to which is qualitatively different from anything you've seen before. Or, try to translate the abstract of a paper into layman’s terms.

I’ve ended up doing things like this in the pursuit of a variety of problems in the wild, and I find it likely that explicit exercises could build similar skills faster.

Comments

This looks like expanding on the folklore separation between engineering and research in technical fields like computer science: engineering is solving a problem we know how to solve or know the various pieces needed for solving, whereas research is solving a problem no one ever solved, and such that we don't expect/don't know if the standard techniques apply. Of course this is not exactly accurate, and it generalizes to fields that we wouldn't think of as engineering.

I quite like your approach; it looks like the training for an applied mathematician (in the sense of Shannon, that is, a mathematician who uses the mental tools of maths to think about a variety of problems). I don't intend right now to use the exercises that you recommend, yet this approach seems similar to what I'm trying to do, which is having a broad map of the territory in, say, maths or other fields, such that I can recognize that the problem I'm working on might gain from insight in a specific subfield.

One aspect I feel is not emphasized enough here is the skill of finding/formulating good problems. I don't think you're disregarding this skill, but your concrete example of finding an algorithm is a well-defined problem. We might contrast it with the kind of problems the computer science pioneers were trying to solve, like "what is even a computation/an algorithm?". And your approach of recognizing when a very specific technique applies looks less useful in the second case. Other approaches you highlight probably translate well to this setting, but I think it's valuable to dig deeper into which ones specifically, and why.

One aspect I feel is not emphasized enough here is the skill of finding/formulating good problems.

That's a good one to highlight. In general, there's a lot of different skills which I didn't highlight in the post (in many cases because I haven't even explicitly noticed them) which are still quite high-value.

The outside-view approach of having a variety of problems you don't really understand should still naturally encourage building those sorts of skills. In the case of problem formulation, working on a wide variety of problems you don't understand will naturally involve identifying good subproblems. It's especially useful to look for subproblems which are a bottleneck for multiple problems at once - i.e. generalizable bottleneck problems. That's exactly the sort of thinking which first led me to (what we now call) the embedded agency cluster of problems, as well as the abstraction work.

Fair enough. But identifying good subproblems of well-posed problems is a different skill from identifying good well-posed subproblems of a weird and not formalized problem. An example of the first would be to simplify the problem as much as possible without making it trivial (a classic technique in algorithm analysis and design), whereas an example of the second would be defining the logical induction criterion, which creates the problem of finding a logical inductor (not sure that happened in this order; this is part of what's weird with problem formulation).

And I have the intuition that there are way more useful and generalizable techniques for the first case than the second case. Do you feel differently? If so, I'm really interested in the techniques you have in mind for starting from a complex mess/intuitions and getting to a formal problem/setting.

I disagree with the claim that "identifying good subproblems of well-posed problems is a different skill from identifying good well-posed subproblems of a weird and not formalized problem", at least insofar as we're focused on problems for which current paradigms fail.

P vs NP is a good example here. How do you identify a good subproblem for P vs NP? I mean, lots of people have come up with subproblems in mathematically-straightforward ways, like the strong exponential time hypothesis or P/poly vs NP. But as far as we can tell so far, these are not very good subproblems - they are "simplifications" in name only, and whatever elements make P vs NP hard in the first place seem to be fully maintained in them. They don't simplify the parts of the original problem which are actually hard. They're essentially variants of the original problem, a whole cluster of problems which are probably-effectively-identical in terms of the core principles. They're not really simplifications.

Simplifying an actually-hard part of P vs NP is very much a fuzzy conceptual problem. We have to figure out how-to-carve-up-the-problem in the right way, how to frame it so that a substantive piece can be reduced.

I suspect that your intuition that "there are way more useful and generalizable techniques for the first case than the second case" is looking at things like simplifying-P-vs-NP-to-strong-exponential-time-hypothesis, and confusing these for useful progress on the hard part of a hard problem. Something like "simplify the problem as much as possible without making it trivial" is a very useful first step, but it's not the sort of thing which is going to address the hardest part of a problem when the current paradigm fails. (After all, the current paradigm is usually what underlies our notion of "simplicity".)

If so, I'm really interested with techniques you have in mind for starting from a complex mess/intuitions and getting to a formal problem/setting.

This deserves its own separate response.

At a high level, we can split this into two parts:

  • developing intuitions
  • translating intuitions into math

We've talked about the translation step a fair bit before (the conversation which led to this post). A core point of that post is that the translation from intuition to math should be faithful, and not inject any "extra assumptions" which weren't part of the math. So, for instance, if I have an intuition that some function is monotonically increasing between 0 and 1, then my math should say "assume f(x) is monotonically increasing between 0 and 1", not "let f(x) = x^2"; the latter would be making assumptions not justified by my intuition. (Some people also make the opposite mistake - failing to include assumptions which their intuition actually does believe. Usually this is because the intuition only feels like it's mostly true or usually true, rather than reliable and certain; the main way to address this is to explicitly call the assumption an approximation.)

On the flip side, that implies that the intuition has to do quite a bit of work, and the linked post talked a lot less about that. How do we build these intuitions in the first place? The main way is to play around with the system/problem. Try examples, and see what they do. Try proofs, and see where they fail. Play with variations on the system/problem. Look for bottlenecks and barriers, look for approximations, look for parts of the problem-space/state-space with different behavior. Look for analogous systems in the real world, and carry over intuitions from them. Come up with hypotheses/conjectures, and test them.

building fusion power plants, treating and preventing cancer, high-temperature superconductors, programmable contracts, genetic engineering, fluctuations in the value of money, biological and artificial neural networks.

vs

building bridges and skyscrapers, treating and preventing infections, satellites and GPS, cars and ships, oil wells and gas pipelines and power plants, cell networks and databases and websites.

 

Note that there is a way to split these sets into "problems where we can easily perform experiments, both real and simulated" and "problems where experimentation is extremely expensive and sometimes unethical".

Perhaps the element making these problems less tractable is that we cannot easily obtain a lot of good-quality information about the problem itself.

For fusion, you need giga-dollars to actually tinker with plasmas at the scale where you would get net power. For cancer, you can easily find a way to kill cancer in a lab or a lab rat, but there are no functioning mockups of human bodies (yet) to try your approach on; there are also government barriers that create shortages of workers and slow down any trial of new ideas. For HTSC, the physical models predict these materials poorly, and it is not certain a solution exists under STP. Programmable contracts are easy to write but difficult to prove impervious to assault. Genetic engineering is easy to do on small scales, difficult to do on complex creatures like humans due to the same barriers as cancer treatment. Money fluctuations - there are hostile and irrational agents blocking you from learning clean information about how it works, so your model will be confused by the noise they are injecting [in real economies]. And biological NNs have the information barrier; artificial NNs seem tractable, they are just new.


How is this relevant? Well, to me it sounds like even if we invent a high-end AGI, it'll still be throttled on solving these problems until the right robotics/mockups are made for the AGI to get the information it needs to solve them.

The AGI will not be able to formulate a solution merely by reading human writings and journals on these subjects; we will need to authorize it to build thousands of robotic research systems, where it then generates its own experiments to fill in the gaps in our knowledge and to learn enough to solve them.

Agree. I like to split the empirical problems out using levels of abstraction:

Traversal problems: each experiment is expensive or it isn't clear how to generate a new experiment from old ones because of lack of visibility about controlled variables.

Representation space problems: the search space is too large, our experiments don't reliably screen off large portions of it. So we can't expect to converge in any reasonable time.

Intentional problems: we're not even clear on what we're trying to do or whether our representation of what we're trying to do matches the natural categories of the solution space such that we are even testing real things when we design the experiment.

Implementation problems: we can't build the tooling or control the variable we need to control even if we are pretty sure what it is. Measurement problems mean we can't distinguish between important final or intermediate outcomes (e.g. error bars).

Does the phrase "levels of abstraction" imply that those four problems form some kind of hierarchy?  If so, could you explain how that hierarchy works?

https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)#Levels_of_analysis

It is often characterized as 3 levels, but if you read his book, the algorithmic level is split into traversal and representation (which is highly useful as a way to think about algorithms in general). As four levels it also corresponds to Aristotle's 4 whys: final, formal, efficient, and material.

So intentional problems would be markets, where noise is being injected and any clear pattern is being drained dry by automated systems, preventing you from converging to a model. Or public/private-key encryption, where you aren't supposed to be able to solve it? (But possibly you can.)

Endorsed.  A lot of this article is strongly similar to an unfinished draft of mine about how to achieve breakthroughs on unsolved problems.

I'm not ready to publish the entire draft yet, but I will add one effective heuristic.  When tackling an unsolved problem, try to model how other people are likely to have attacked it and then avoid those approaches.  If they worked, someone else would probably have achieved success with them before you came along. 

Curated.

I think the problem this post grapples with is essentially one of the core rationality problems. Or, one of the core reasons I think it might be useful to have "rationality" as a field.

The particular set of suggestions and exercises here seemed a) plausibly quite useful (although I haven't really tried them), b) pointed towards a useful generator of how to think more about how to develop as "the sort of person who can solve general confusing problems."

One way you can approach this in a scientific context is by picking a broad unsolved or imperfectly solved problem ("cure cancer," "outperform the market," "improve personal productivity") and dividing it into sub-problems at progressively more granular levels of detail.

You'd look for descriptions of things like:

  1. Theory that articulates the general nature of the problem. For cancer, it's things like selection for drug resistance and immune system evasion. For outperforming the market, it's the EMH and analyzing risk. For improving personal productivity, it's the ratio of salesmanship to science in the field.
  2. Understanding how people work on the problem. For cancer, it's divided into areas like screening, treatments, classifying cancers, and identifying pathways of cancer growth and suppression. For outperforming the market, it's analyzing specific markets or evaluating investment strategies. For personal productivity, it's researching products, building habits, improving motivation.

It seems very important to me to work on problems you can actually solve with the tools you have on hand. For example, I am preparing for grad school in bioengineering. As it is, though, I don't have access to a biology lab. I'm just living in my apartment. I have a nice computer, a light microscope, a few pieces of basic chemistry apparatus from the in-home o-chem labs I took during COVID, and that's about it.

Hence, despite my interest in problems like biosecurity, cancer, aging, Alzheimer's, and vaccine development, I don't have much of an ability to work on any of these problems with the tools I have at hand. I don't have access to data, equipment, expertise, funding, or mentorship.

But I can do a couple things.

  1. I can work on projects related to the work I want to do, but that require only what I have on hand. For example, although I can't work on the project "cure Alzheimer's," I can work on the problem "understand how scientists are trying to cure Alzheimer's," a purely scholarly project.
  2. I can find projects that are tractable with the tools I have on hand. For example, I might be able to do some actual bioinformatics or mathematical modeling. I could re-analyze published datasets. I can build software tools.
  3. I can expand my resources, trying to acquire new pieces of equipment, funding opportunities, and building my network.

This is the focus of General Systems, as outlined by Weinberg. That book is very good, by the way - I highly recommend reading it. It's both very dense and very accessible.

It's always puzzled me that the rationalist community hasn't put more emphasis on general systems. It seems like it should fit in perfectly, but I haven't seen anyone mention it explicitly. General Semantics mentioned in the recent historical post is somewhat related, but not the same thing.

More on topic: One thing you don't mention is that there are fairly general problem-solving techniques, which start before, and are relatively independent of, your level of specific technical knowledge. From what I've observed, most people are completely lost when approaching a new problem, because they don't even know where to start. So as well as your suggestion of focusing on learning the existence of techniques and when they apply, you can also directly focus on learning problem-solving approaches.

Also makes me think of TRIZ. I don't really understand how to use it that well or even know if it produces useful results, but I know it's popular within the Russosphere (or at least more popular there than anywhere else).

The impression I always had with general systems (from afar) was that it looked cool, but it never seemed to be useful for doing anything other than "think in systems" (so not useful for doing research in another field or making any concrete applications). So that's why I never felt interested. Note that I'm clearly not knowledgeable at all on the subject; this is just my outside impression.

I assume from your comment you think that's wrong. Is the Weinberg book a good resource for educating myself and seeing how wrong I am?

I'm intrigued by your second paragraph -- perhaps write a post about it?

Some things we basically understand: building bridges and skyscrapers, treating and preventing infections, satellites and GPS, cars and ships, oil wells and gas pipelines and power plants, cell networks and databases and websites. 

If we basically understand building bridges, then why are we building so few new bridges, and why do those that we build end up being so expensive? The fact that building bridges with the advanced technology we have today isn't cheaper but more expensive than building bridges 100 years ago suggests to me that we don't understand the subject well.

As far as treating and preventing infections goes, we actually understand the topic better than a hundred years ago, but we still don't understand it well enough to prevent people from getting new infections in world-class hospitals, some of them even treatment-resistant ones.

The fact that we have existing paradigms for approaching both of those issues doesn't prevent us from thinking of new paradigms to approach them better.

Your last sentence is true and important. I think John's focusing on a different problem. One could use "general" skills to overturn existing paradigms, though for a student that will be hard.

Actually building bridges and Actually preventing infections require not only improvements in applied science, but also human coordination. In the former we've improved; in the latter we've stagnated.

Actually building bridges and Actually preventing infections require not only improvements in applied science, but also human coordination. In the former we've improved; in the latter we've stagnated.

+1 to this. The difficulties we have today with bridges and infections do not seem to me to have anything to do with the physical systems involved; they are all about incentives and coordination problems.

(Yes, I know lots of doctors talk about how we're overly trigger-happy with antibiotics, but I'm pretty sure that hospital infection rates have a lot more to do with things like doctors not consistently washing their hands than with antibiotic-resistant bacteria. In order for antibiotic resistance to be relevant, you have to get infected in the first place, and if hospitals were consistent about sterilization then they wouldn't have infection rates any higher than any other buildings.)

I do agree that some kinds of technical paradigm shifts could plaster over social problems (at least partially), but let's not mistake that for a lack of understanding of the physical systems. The thing-we-don't-understand here is mechanism design and coordination.

In order for antibiotic resistance to be relevant, you have to get infected in the first place, and if hospitals were consistent about sterilization then they wouldn't have infection rates any higher than any other buildings.

It's the sterilization that creates the niche in which those bacteria thrive, because they face less competition than they would face in other, normal buildings, which are populated by diverse bacteria. No matter how much you sterilize, you are not going to get to zero bacteria in a space occupied by humans, and when humans are a primary vector for moving bacteria around the space, you select for bacteria that actually interact with humans.

Where is the selection effect coming from? You'd think that the human body is large enough to host a range of different bacteria, so unless they have some way of competing within the body, sterilization would just remove some bacterial populations rather than select for those resistant to antibiotics.

I'm not talking about sterilization of the human body but sterilization of the hospital environment. It leads to selection effects for bacteria that are adapted to the hospital environment.

If you have plants in a room, then part of the room is filled with bacteria that interact with plants, and that creates a more diverse microbial environment. Having plants in a room makes it more likely that a random bacterium in the room is a plant pathogen rather than a human pathogen.

https://www.frontiersin.org/articles/10.3389/fmicb.2014.00491/full#B7 is a paper that, for example, argues for maintaining microbial diversity in different environments as an important way to avoid pathogen outbreaks.

I would expect that in 50 years you will have plants in hospitals with microbiomes that are selected for increasing the surrounding microbial diversity while not hosting human pathogens. Hospitals will move from the paradigm of "everything should be sterile" to the paradigm of "there should be a lot of microbial diversity without human pathogens".

They will regularly test what bacteria are around and, when there are problems, use a mix of adding new bacteria to the environment that contribute to healthy microbial diversity and phage therapy against those bacteria that are unwelcome.

Having cheap ways to measure the microbial environment via cheaper gene sequencing will lead there, but there will be a lot about how to have a good microbial environment that we have very little understanding of today.

I unfortunately know very little about building bridges, so I can't really tell how a new paradigm might improve the status quo. It might be possible to switch the composition of a bridge in a way where it can be created in a more automated fashion than it currently is.

When it comes to actually preventing infections, I do think there's room for a new paradigm that replaces "let's kill all bacteria" with "let's see that we have an ecosystem of bacteria without those that are problematic".

Moving to that new paradigm for infections has similar problems to improving on cancer treatment and prevention.

I think this post is valuable because it encourages people to try solving very hard problems, specifically by showing them how they might be able to do that! I think its main positive effect is simply in pointing out that it is possible to get good at solving hard problems, and the majority of the concretes in the post are useful for continuing to convince the reader of this basic possibility.

I enjoyed this post a lot but in the weeks since reading it, one unaddressed aspect has been bugging me and I've finally put my finger on it: the recommendation to "Specialize in Things Which Generalize" neglects the all-important question of "how much?" Put a different way, at least in my experience, one can always go deeper into one of these subjects -- probability theory, information theory, etc. -- but doing so takes time away from expanding one's breadth. Therefore, as someone looking to build general knowledge, you're constantly presented with the trade-off of continuing to learn one area deeply vs. switching to the next area you'd like to learn.

If I try to inhabit the mindset of the OP, I can generate two potential answers to this quandary, but neither of them is super satisfying:

  • Learn enough to be able to leverage what you've learned for novel problems.
  • Learn enough to be able to build gears-level models using what you've learned.

The post mentions a few different use-cases of learned knowledge, and those different use-cases require different depth of study. So one reasonable answer is: figure out what use-case(s) we care about, and study enough to satisfy those.

A different angle: it's useful to be lazy. Put off learning things until we need them, assuming that we won't be under too much time pressure later. The problem with that approach is that it won't be obvious that a particular technique or area or frame is relevant until after we've studied it. However, as long as we understand X enough that we can reliably recognize when it applies in the wild, we can safely put off learning more about X until it comes up. So, being able to recognize relevant problems/situations in the wild is the "most important" use-case, in the sense that it's the use-case which we can't put off until later.

This could be the flip-side of the flashcard-set you're selling, and I enjoyed learning that Wikipedia included lists like these.

The relevant list of lists probably covers several more, although most don't seem to be following the "Named" naming convention. At a glance I'm seeing...

Oh yeah, cruising Wikipedia lists is great. Definitely stumbled on multiple unknown unknowns that way. (Hauser's Law is one interesting example, which I first encountered on the list of eponymous laws.) Related: Exercises in Comprehensive Information Gathering.