This is probably also my response to Hanson's review, which didn't see how the two-books-in-a-book connected up. The first book (on inadequate equilibria) is what it looks like to build domain knowledge about when to use the outside view / when to trust experts / when to trust your own meta-rationality. It is the object level to the second book's meta level.
To me, the most interesting section of Hanson's review was this:
Furthermore, Yudkowsky thinks that he can infer his own high meta-rationality from his details:
I learned about processes for producing good judgments, like Bayes’s Rule, and this let me observe when other people violated Bayes’s Rule, and try to keep to it myself. Or I read about sunk cost effects, and developed techniques for avoiding sunk costs so I can abandon bad beliefs faster. After having made observations about people’s real-world performance and invested a lot of time and effort into getting better, I expect some degree of outperformance relative to people who haven’t made similar investments. … [Clues to individual meta-rationality include] using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning.
Alas, Yudkowsky doesn’t offer empirical evidence that these possible clues of meta-rationality are in fact actually clues in practice, that some correctly apply these clues much more reliably than others, nor that the magnitude of these effects are large enough to justify the size of disagreements that Yudkowsky suggests as reasonable. Remember, to justifiably disagree on which experts are right in some dispute, you’ll have to be more meta-rational than are those disputing experts, not just than the general population. So to me, these all remain open questions on disagreement.
I do take things like the practice of looking for inadequate equilibria as an example of domain-specific knowledge in meta-rationality, and furthermore the practice of using Bayes' rule, betting, debiasing, etc. as Bayesian evidence of the author's strong meta-rationality. However, it would be great to have some clear empirical evidence of this working better, as opposed to merely Bayesian evidence of it working better, and I might spend some time thinking of data for the former.
I think that, if anything in his review set off my instinct that I had to write "You Have the Right to Think" (https://thezvi.wordpress.com/2017/11/27/you-have-the-right-to-think/), it was that section. (The post is sitting at -1 here, which I'm sad about, but it got great discussion on my blog itself, which I think adds up to valuable feedback on many levels that I'm still processing, and I'm thankful for people's candor.)
The first part is an absurd Isolated Demand for Rigor, in violation of any reasonable rules of good writing and of common sense. Experts never seem to need to prove any of that stuff, but suddenly Eliezer's book is supposed to stop and provide expert-approved, outside-view proof for the idea that being better at thinking and avoiding mistakes might make one better at thinking and avoiding mistakes. Magnitude is a legitimate question, but come on. You're not allowed to use evidence of your meta-rationality that isn't approved by the licensing court, or something? And even if the evidence is blatant and outside-view-approved, you need to present all of it yourself explicitly?
The second half then says something that keeps being claimed and simply isn't true: that you have to be 'more meta-rational' than, or in some way superior to, the others who hold beliefs about something in order to have an opinion on it (to think) at all. Otherwise, your object-level evidence needs to be thrown out (in his comments he said multiple times that no, you shouldn't throw out your object-level evidence, that's obviously wrong, but what else would it mean to not be able to judge?). This is insane. You don't need that at all. You just need to be good enough that your observations and analysis aren't zeroed out and fully priced in by the experts, which is a much lower bar. You could easily have different data, apply more compute, or do any number of other things, and even if you don't, your likelihood ratio isn't going to be exactly 1.
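To make that last point concrete, here is a minimal sketch, with made-up numbers of my own, of the odds-form Bayes update behind it: your evidence contributes nothing only if its likelihood ratio is exactly 1, and anything else moves you off the experts' prior, however slightly.

```python
# Minimal sketch (illustrative numbers only) of an odds-form Bayes update:
# evidence is worthless only if its likelihood ratio is exactly 1.

def update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability given a prior and a likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

expert_prior = 0.90        # start by taking the experts' consensus as your prior
my_likelihood_ratio = 0.5  # suppose your own observations are weak evidence against it

print(update(expert_prior, my_likelihood_ratio))  # ~0.82 rather than 0.90
```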
Whole thing is super frustrating.
#TheOppositeOfDeepAdviceIsAlsoDeepAdvice.
Robin Hanson: "You are never entitled to your own beliefs" i.e. there are rules of reasoning about evidence and is you state probabilities that are inconsistent with the evidence you've seen you're lying.
Zvi: "You are entitled to your own beliefs" i.e. there are many many MANY social pressures pushing for you to cast aside the evidence you've noticed for dissenting ideas, in favour of socially modest beliefs. Resist these pressures, for they are taking away your very right to think! (#LetsAssumeRightsExist)
And thus a community deep in the first phrase reacted poorly to the second. I admit that, until I read the comment section of your post, I had not been able to form the charitable and correct reading of it at all.
(And yeah, the comments there are awesome.)
If/when I point to empirical evidence that practising the use of Bayes' theorem does in fact help your meta-rationality, my model of Pat Modesto says "Oh, so you claim that you have 'empirical' evidence and this means you know 'better' than others. Many people thought they too had 'special' evidence that allowed them to have 'different' beliefs." Pssh.
In general I agree with your post, and while Pat's is an argument I could imagine someone making to me, I don't let it overwrite my models. If I think that person X has good meta-rationality, and you suggest my evidence is bad according to one particular outside view, I will not throw away my models, but keep them while I examine the argument. If the argument is compelling I'll update, but the same heuristic that keeps me from making bucket errors will also stop me from immediately saying anything like "Yes, you're probably right that I don't really have evidence of X's meta-rationality being strong".
I would be interested in some good generic techniques here. I felt the book left me dangling in this regard: there is a lot of insight, but it's not as actionable as I would have liked.
Copying over my comment from the SSC review, which otherwise may get lost in the fog of comments there.
Super fun review!
Conversely, I found the book gave short but excellent advice on how to resolve the interminable conflict between the inside and outside views – the only way you can: empiricism. Take each case by hand, make bets, and see how you come out. Did you bet that this education startup would fail because you believed the education market was adequate? And did you lose? Then you should update away from trusting the outside view here. Et cetera. This was the whole point of Chapter 4, giving examples of Eliezer getting closer to the truth with empiricism (including examples where he updated towards using the expert-trusting outside view, because he’d been wrong).
You quote “Eliezer’s four-pronged strategy”, but I feel like his actual proposed methodology was in chapter 4:
This is how you figure out if you’re Jesus – test your models, and build up a track record of predictions.
You might respond “But telling me to bet more isn’t an answer to the philosophical question about which to use”, in which case I repeat: there isn’t a way to know a priori whether to trust experts using the outside view, because you don’t know how good the experts are, and you need to build up domain-specific skills in predicting this.
You might respond “But this book didn’t give me any specific tools for figuring out when to trust the experts over me”, in which case I continue to be baffled and point you to the first book – Moloch’s Toolbox.
Finally, you might respond “Thank you Eliezer, I’d already heard that a bet is a tax on bullsh*t, I didn’t require a whole new book to learn this”, to which I respond that, firstly, I prefer the emphasis that “bets are a way to pay to find out where you’re wrong (and make money otherwise)”, and secondly, that the point of this book is that people are assuming the adequacy of experts way too quickly, so please make more bets in this particular domain. Which I think is a very good direction to push.
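To make the “build up a track record” advice above a bit more concrete, here is a minimal sketch of the bookkeeping it implies, using made-up bets and Brier scoring (my choice of scoring rule, not something the book prescribes): log each bet alongside a rough “just defer to the experts” probability, then compare scores per domain.

```python
# A made-up sketch of "make bets and see how you come out": log each bet with
# the probability you assigned and a rough defer-to-the-experts probability,
# then compare Brier scores (lower is better) to see how much to trust your
# inside view in this domain. Claims and numbers here are illustrative only.

from dataclasses import dataclass

@dataclass
class Bet:
    claim: str
    my_prob: float      # probability I assigned before resolution
    expert_prob: float  # probability implied by deferring to expert consensus
    outcome: bool       # how the bet actually resolved

def brier(prob: float, outcome: bool) -> float:
    """Squared error of a probabilistic forecast; 0 is perfect, 1 is maximally wrong."""
    return (prob - float(outcome)) ** 2

bets = [
    Bet("This education startup fails because the market is adequate", 0.6, 0.8, False),
    Bet("This 'obviously exploitable' inefficiency persists for a year", 0.7, 0.3, True),
]

my_score = sum(brier(b.my_prob, b.outcome) for b in bets) / len(bets)
deference_score = sum(brier(b.expert_prob, b.outcome) for b in bets) / len(bets)

# A consistently lower score than the deference baseline is the empirical
# licence to lean on your inside view here; a higher one says defer more.
print(f"my Brier score: {my_score:.3f}, deference baseline: {deference_score:.3f}")
```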