I spent a long time stalling on this post because I was framing the problem as “how to choose a book (or paper. Whatever)?”. The point of my project is to be able to get to correct models even from bad starting places, and part of the reason for that goal is that assessing a work often requires the same skills/knowledge you were hoping to get from said work. You can’t identify a good book in a field until you’ve read several. But improving your starting place does save time, so I should talk about how to choose a starting place.

One difficulty is that this process is heavily adversarial. A lot of people want you to believe a particular thing, and a larger set don’t care what you believe as long as you find your truth via their Amazon affiliate link (full disclosure: I use Amazon affiliate links on this blog). The latter group fills me with anger and sadness; at least the people trying to convert you believe in something (maybe even the thing they’re trying to convince you of). The link farmers are just polluting the commons.

With those difficulties in mind, here are some heuristics for finding good starting places.

  • Search “best book TOPIC” on Google
    • Most of what you find will be useless listicles. If you want to save time, ignore everything on a dedicated recommendation site that isn’t Five Books.
    • If you want to evaluate a list, look for a list author with deep models on both the problem they are trying to address and why each book in particular helps educate on that problem. Examples:
    • A bad list will typically have a topic rather than a question it is trying to answer, and will talk about why the books it recommends are generically good rather than how they address a particular issue. Quoting consumer reviews is an extremely bad sign; I’ve never seen it done outside of content farming.
  • Search for your topic on Google Scholar
    • Look at highly cited papers. Even if they’re wrong, they’re probably important for understanding what else you read.
    • Look at what they cite or are cited by
    • Especially keep an eye out for review articles
  • Search for web forums on your topic (easy mode: just check Reddit). Sometimes these will have intro guides with recommendations, sometimes they will have where-to-start posts, and sometimes you can ask them directly for recommendations. Examples:
  • Search Amazon for books on your topic. Check related books as well.
  • Ask your followers on social media. Better yet, announce what you are going to read and wait for people to tell you why you are wrong (appreciate it, Ian). Admittedly there’s a lot of prep work that goes into having friends/a following that makes this work, but it has a lot of other benefits, so if it sounds fun to you I do recommend it. Example:
  • Ask an expert. If you already know an expert, great. If you don’t, this won’t necessarily save you any time, because you have to search for and assess the quality of the expert.
  • Follow interesting people on social media and squirrel away their recommendations as they make them, whether they’re relevant to your current projects or not.


9 comments

I like to start by trying to find one author who has excellent thinking and see what they cite — this works for both papers and books with bibliographies, and increasingly for other forms of media too.

For instance, Dan Carlin of the (exceptional and highly recommended) Hardcore History podcast cites all the sources he uses when he does a deep investigation of a historical era, which is a good jumping-off point if you want to go deep.

The hard part is finding that first excellent thinker, especially in a domain where you can't differentiate quality yet. But there are some general conventions of how smart thinkers tend to write and reason that you can learn to spot. There's a certain amount of empathy, clarity, and — for lack of a better word — "good aesthetics"; when those are present, the author tends to be smart and trustworthy.

The opposite isn't necessarily the case — there are good thinkers who don't follow those practices and are hard to follow (say, Laozi or Wittgenstein maybe) — but when those factors are present, I tend to weight the thinking more heavily.

Even if you have no technical background at all, this piece by Paul Graham looks credible (emphasis added) —

https://sep.yimg.com/ty/cdn/paulgraham/acl1.txt?t=1593689476&

"What does addn look like in C?  You just can't write it.

You might be wondering, when does one ever want to do things like this?  Programming languages teach you not to want what they cannot provide.  You have to think in a language to write programs in it, and it's hard to want something you can't describe.  When I first started writing programs-- in Basic-- I didn't miss recursion, because I didn't know there was such a thing.  I thought in Basic. I could only conceive of iterative algorithms, so why should I miss recursion?

If you don't miss lexical closures (which is what's being made in the preceding example), take it on faith, for the time being, that Lisp programmers use them all the time.  It would be hard to find a Common Lisp program of any length that did not take advantage of closures.  By page 112 you will be using them yourself."
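For anyone without a Lisp background: the addn Graham mentions is a function that builds and returns another function, a lexical closure that remembers the n it was created with. A rough sketch of the same idea in Python (Graham's original is in Common Lisp; this is just my illustration):

    def addn(n):
        # The inner function "closes over" n: it keeps access to n
        # even after addn has returned.
        def add(x):
            return x + n
        return add

    add2 = addn(2)
    print(add2(5))  # prints 7

Standard C has no direct equivalent: a plain function pointer can't carry the captured n around with it, which is what Graham means by "you just can't write it."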

When I spot that level of empathy/clarity/aesthetics, I think, "Ok, this person likely knows what they're talking about."

So, me, I start by looking for someone like Paul Graham or Ray Dalio or Dan Carlin, and then I look at who they cite and reference when I want to go deeper.

My experience is that readability doesn't translate much to quality and might even be negatively correlated, because reality is messy and simplifications are easier to read. I do think works that make themselves easy to double check are probably higher quality on average, but haven't rigorously tested this.

The quality I'm describing isn't exactly "readability" — it overlaps, but that's not quite it.

Feynman has it —

http://www.faculty.umassd.edu/j.wang/feynman.pdf

It's hard to nail down; it would probably take a very long essay even to try.

And it's not a perfect predictor, alas — just evidence.

But I believe there's a certain way to spot "good reasoning" and "having thoroughly worked out the problem" from one's writing. It's not the smoothness of the words, nor the simplicity.

It's hard to describe, but it seems somewhat consistently recognizable. Yudkowsky has it, incidentally.

It seems like your approach would work well in fields like programming. It's a practical skill with a lot of people working in it and huge amounts of money at stake in figuring out best practices. Plus, the issue he's addressing doesn't seem to be controversial.

Outside that safe zone, prose quality isn't a proxy for the truth. And I think it's these issues that Elizabeth's worried about.

For example, how many windows are there in your house? If you wanted to answer that question without getting out of your chair, you'd probably form a mental image of the house, then "walk around" and count up the windows.

At least, that's what the picture theorists think. Others think there's some other process underlying this cognition, perhaps linguistic in nature.

Reading their diametrically opposed papers on the same topic, I'm sure I couldn't tell who's right based on their prose. It's formal academic writing, and the issue is nuanced.

I wonder if a good pre-reading strategy is to search for, or ask experts about, the major controversies and challenges/issues related to the topic in question.

Your first step would be to try to understand what those controversies are, and the differences in philosophy or empirical evaluation that generate them. After you've understood what's controversial and why, you'll probably be in a better position to interpret anything you read on the subject.

One way you could potentially further your work on epistemic evaluation is to find or create a taxonomy of sources of epistemic uncertainty. Examples might include:

  • Controversy (some questions have voluminous evidence, but it's either conflicting, or else various factions disagree on how to interpret or synthesize it).
  • Lack of scholarship (some questions may have little evidence or only a handful of experts, so you have limited eyes on the problem)
  • Lack of academic freedom (some questions may be so politicized that it's difficult or impossible for scholars to follow the evidence to its natural conclusion)
  • Lack of reliable methods (some questions may be very difficult to answer via empirical or logical methods, so that the quality of the evidence is inevitably weak).

You can find papers addressing many of these issues with the right Google Scholar search. For example, searching for "controversies economic inequality" turns up a paper titled "Controversies about the Rise of American Inequality: A Survey." And searching for "methodological issues creativity" turns up "Methodological Issues in Measuring Creativity: A Systematic Literature Review."

My guess is that even just a few hours spent working on these meta-issues might pay big dividends in interpreting object-level answers to the research question.

This sure seems like it should work. My experience is that there's either nothing, or whatever quality analyses exist are drowned out by pap reviews (it is possible I should tolerate reading more pap reviews in order to find the gems). However, I think you're right that for issues that have an academic presence, Google Scholar will return good results.

Some questions might seem heavily researched, but in fact are either so hazy that no amount of research will produce clarity, or so huge that even a lot of research is nowhere near enough.

An example of the latter might be “what caused the fall of Rome?”

Ideally, you’d want numerous scholars working on each hypothesis, modeling the complex causal graph, specializing in various levels of detail.

In reality, it sounds like there are some hypotheses that are advanced by just one or a handful of scholars. Without enough eyes on every aspect of the problem, it’s no surprise that you’d have to become an expert to really evaluate the quality of the arguments on each side.


I assume your two "N best books" examples are intended as bad examples. Since your other links are to good examples and the whole bullet-list block is introduced by offering "heuristics for finding good starting places", I think it would be worth making it even more explicit that they are intended as examples of what not to do (rather than e.g. a couple of rare counterexamples to the general pattern you've just mentioned).