One of the blog posts I'm most fond of is Things I Don't Know as of 2018. It's by Dan Abramov, one of the more prominent people in the world of front-end web development. He goes through a bunch of relatively basic programming-related things that he doesn't understand, like Unix commands and low-level languages.
I'd like to do something similar, but for rationality-related things. Why?
- For fun.
- To normalize the idea that no one's perfect.
- It'll make it easier to address these knowledge gaps. Or maybe just more likely that I actually do so.
Here's the list:[1]
- Simulacra. I spent some time going through the posts, and it's one of those things that just never manages to click with me.
- Blockchain. I guess the thing I don't understand here is the hype. I get that it's basically a database that can't be edited (see the hash-chain sketch after this list), and I've read through articles talking about the use cases, but it's been around for a while now and doesn't seem to have been that game-changing. Yet there are smart people who are super excited about it, and I suspect that there are things I am failing to appreciate, regardless of whether their excitement is justified.
- Morality. To me it seems like rationality can tell you how to achieve your goals but not what (terminal) goals to pick. Arguments that try to tell you what terminal goals to pick have just never made sense to me. Maybe there's something I'm missing though.
- Quantum physics. I skipped/lightly skimmed the sequence posts on this. They seemed high effort and not particularly important. Well, it is cool to understand how reality works at the most fundamental level. Hm. I would be interested in going through some sort of lower-effort, bigger-picture material on quantum physics. I spent some time messing around with that sort of stuff about 13 years ago, but all that stuck is some vague notion that reality is (fundamentally?) probabilistic and weird.
- Evolution. I get that at a micro level, if something makes an organism more likely to reproduce, it will in fact, err, spread the genes. And then that happens again and again and again. And since mutations are a thing, organisms basically get to try new stuff out, and the stuff that works sticks (see the selection sketch after this list). I guess that's probably the big idea, but I don't know much beyond it, and I remember being confused when I initially skimmed through The Simple Math of Evolution sequence.
- Evolutionary psychology. I hear people make arguments like "X was important to our hunter-gatherer ancestors and so we still find ourselves motivated by it/to do it today because evolution is slow". X might be consuming calories when available, for example. There's gotta be more to evolutionary psychology than that sort of reasoning, but I don't know what the "more" is.
- Bayes math. I actually think I have a pretty good understanding of the big-picture ideas. I wouldn't be able to crunch numbers or do the things they teach you in a stats course, though.[2] Nor do I understand the stuff about log odds and bits of evidence (there's a worked example after this list). I'd have to really sit down, think hard about it, and spend some time practicing using it.
- Solomonoff induction. I never took the time to understand it or related ideas.
- Occam's razor. Is it saying anything other than P(A) >= P(A & B)?[3] (There's a quick numeric check after this list.)
- Moloch. I enjoyed Meditations on Moloch and found it thought-provoking. I'm not sure that I really understand what Moloch actually is/represents, though. I struggle a little with the abstractness of it.
- Double crux. This is another one of those "maybe I actually understand it but it feels like there's something I'm missing" things. I get that a crux is something that would change your mind. And yeah, if you're arguing with someone and you find a shared crux, something that would make you agree with them if you changed your mind about it, and vice versa, that's useful. Then you can focus the discussion on that crux. Is that it, though? Isn't that common sense? Why is this presented as something that CFAR discovered? Maybe there's more to it than I'm describing.
- Turing machines. Off the top of my head I don't really know what they are. Something about a tape with symbols, a head skipping from one cell to the next, and how that is somehow at the core of all computing? I wish I understood this. After all, I am a programmer. I spent a few weeks skimming through a Udacity course on the theory of computation a while ago, but none of it really stuck. (There's a tiny simulator sketch after this list.)
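On the blockchain bullet, here's a minimal sketch in Python of the "database that can't be edited" part as I understand it: each block's hash commits to the previous block's hash, so editing old data invalidates everything after it. This is a toy illustration (no consensus, no mining), and all the names and transactions are made up.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's data together with the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny chain: each block commits to everything before it.
chain = []
prev = "0" * 64  # conventional all-zero "genesis" hash
for data in ["alice pays bob 5", "bob pays carol 2"]:
    h = block_hash(prev, data)
    chain.append({"data": data, "prev": prev, "hash": h})
    prev = h

# Tampering with an old block breaks every hash after it.
chain[0]["data"] = "alice pays bob 500"
recomputed = block_hash(chain[0]["prev"], chain[0]["data"])
print(recomputed == chain[0]["hash"])  # False: the edit is detectable
```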
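On the evolution bullet, here's the micro-level story as a sketch: the textbook haploid selection update, showing how a small fitness edge compounds until the advantageous variant takes over. The specific numbers (a 5% fitness advantage, 400 generations) are arbitrary choices for illustration.

```python
# Frequency p of an advantageous variant under the standard
# haploid selection update: p' = p * w_a / mean_fitness.
p = 0.01               # starting frequency of the advantageous variant
w_a, w_b = 1.05, 1.00  # relative fitnesses (a 5% edge)

for generation in range(400):
    mean_w = p * w_a + (1 - p) * w_b
    p = p * w_a / mean_w

print(round(p, 3))  # ~1.0: a small edge compounds over many generations
```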
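On the log odds and bits of evidence thing, here's the bookkeeping as I understand it, as a worked example: convert the prior to log odds (base 2, so the units are bits), add the log of the likelihood ratio, and convert back. The prior of 0.25 and the likelihood ratio of 4 (i.e. 2 bits of evidence) are made-up numbers.

```python
import math

def prob_to_bits(p: float) -> float:
    """Probability -> log odds in bits."""
    return math.log2(p / (1 - p))

def bits_to_prob(l: float) -> float:
    """Log odds in bits -> probability."""
    return 1 / (1 + 2 ** -l)

prior = 0.25  # prior probability of hypothesis H
lr = 4.0      # likelihood ratio P(E|H) / P(E|not H) = 2 bits of evidence

posterior = bits_to_prob(prob_to_bits(prior) + math.log2(lr))
print(posterior)  # 0.5714..., same answer as applying Bayes' rule directly
```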
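On the Occam's razor inequality, a quick numeric sanity check over a made-up joint distribution: the conjunction A & B can never be more probable than A alone, because it is a subset of the ways A can happen.

```python
# A made-up joint distribution over two binary events A and B.
joint = {("a", "b"): 0.2, ("a", "not_b"): 0.3,
         ("not_a", "b"): 0.4, ("not_a", "not_b"): 0.1}

p_a = sum(p for (a, _), p in joint.items() if a == "a")  # 0.5
p_a_and_b = joint[("a", "b")]                            # 0.2
assert p_a >= p_a_and_b  # A & B is a subset of the ways A can happen
```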
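And on Turing machines, here's a tiny simulator sketch, just my attempt to make the "tape plus transition table" picture concrete; real treatments add accepting/rejecting states and more interesting machines. The example machine walks right, inverting bits, and halts when it hits a blank.

```python
# A transition table maps (state, symbol) -> (symbol_to_write, move, new_state).
TABLE = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),  # "_" is the blank symbol
}

def run(tape: str, state: str = "scan") -> str:
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = TABLE[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

print(run("1011"))  # -> "0100"
```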
If anyone wants to play the role of teacher in the comments I'd love to play the role of student.
To construct it I skimmed through the table of contents of Rationality: From AI to Zombies, the top posts of all time, and the tags page, and also included some other stuff that came to mind. ↩︎
But I would like to. I tried skimming through a couple of textbooks (Doing Bayesian Data Analysis by Kruschke and Bayesian Data Analysis by Gelman) and found them to be horribly written. If anyone has any recommendations, let me know. ↩︎
Well, > instead of >=, since 0 and 1 are not probabilities. But maybe in some contexts it makes sense to treat things as having a probability of 0 or 1. ↩︎
Disclaimer: I am trying to explain the system that someone else invented, to the degree I seem to understand it. I certainly could not reinvent the system.
That said, it seems useful to me to distinguish people who know the factual truth and are lying about it, from people who see reality as just some kind of "social consensus", from people who are merely mechanically saying the words without attaching any meaning to them.
Why levels rather than parallel things? It seems like there is a progression in how detached from reality one is. The liar accepts that objective reality exists; he is just lying about it. The social guy has a map of reality; he just doesn't care whether it matches the territory, only whether his group approves of it. The populist politician probably doesn't even have a coherent map; it's just individual statements.
Can all statements be classified as one of the 4 levels, or are more options needed? It's not my system; if I tried to reinvent it, I might distinguish between the "level 3" people who have one permanent identity (one social consensus they believe), and those who flexibly switch between multiple identities (believing in different social realities in different situations). The latter is still different from having no model at all, such as saying things that contradict each other (to the same audience) just because each statement separately sounds good.
Basically, I treat it as a fake framework, like the Enneagram or MBTI. In some situations, it allows me to express complex ideas concisely. ("She is extraverted" = do not expect her to sit quietly and read books when she has an opportunity to socialize instead. "He made a level-3 statement" = he just signalled his group membership; do not expect him to care whether his statements are technically true.) I am not trying to shoehorn all situations into the model. I actually rarely use this model at all; for me it is in the "insight porn" category (interesting to discuss, not really used for anything important).
How consistent are people at using specific levels? If I saw someone make a level X statement, should I expect their other statements to be on the same level too? On one hand, caring about reality vs. not caring about reality, or having a coherent model vs. just saying random words, seems like a preference/personality trait that I would expect to manifest in other situations too. On the other hand, there is a difference between near mode and far mode. I don't know. Some people may be more flexible between the levels, others less so. There is probably a high risk of fundamental attribution error here: it can be easy to assume that someone used level 3 or level 4 because they are "that kind of person", when they only used that level in a given situation instrumentally (as in, "if I do not care about something, I simply make level 3/4 statements").