Why is LW not about winning?
This is a bit of a rant, but I notice that I am confused. Eliezer said in the original Sequences:

> Rationality is Systematized Winning

But it's pretty obvious that LessWrong is not about winning (and Eliezer provides a more accurate definition of what he means by rationality here). As far as I can tell, LW is mostly about cognitive biases and algorithms/epistemology (the topic of Eliezer's Sequences), self-help, and a lot of AI alignment.

But LW should be about winning! LW has the important goal of solving alignment, so it should care a lot about the most efficient way to go about it, in other words about how to win, right?

So what would it look like if LW had a winning attitude towards alignment? I think this is where the distinction between the two styles of rationality (cognitive algorithm development vs. winning) matters a lot. If you want to solve alignment and want to be efficient about it, there are better strategies than researching the problem yourself: don't spend 3+ years on a PhD (cognitive rationality), but instead get 10 other people to work on the issue (winning rationality). That alone multiplies your efficiency by 10.

My point is that we should consider all strategies when solving a problem. Not only the ones that attack the problem directly (cognitive rationality / researching alignment yourself), but also the ones that involve acquiring a lot of resources and spending them on the problem (winning rationality / getting 10 other people to research alignment). This is especially true when those other strategies give you orders of magnitude more leverage on the problem. To pick an extreme example, who has more capacity to solve alignment, Paul Christiano or Elon Musk? (Hint: Elon Musk can hire a lot of AI alignment researchers.)

I am confused because LW teaches cognitive rationality, so it should notice all this and recognize that epistemology, cognitive biases, and a direct approach are not the most efficient way to solve alignment.
For me, a key benefit of maths is that it answers the question "how much?", turning qualitative intuitions into quantitative models.
For example, if someone tells you "drug X binds to receptor Y, which triggers therapeutic effect Z", the first question that comes to mind is "how much X do I need to take to get that much Z?".
If you don't answer that, the information is not actionable. That's where the math models (pharmacokinetics and pharmacodynamics) come in: they tell you how much, which lets you turn information into action.
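To make that concrete, here is a minimal sketch of how such a model answers "how much?": a one-compartment pharmacokinetic model feeding an Emax pharmacodynamic model. All the numbers (dose, bioavailability, clearance-related rate constant, volume of distribution, EC50) are made up for illustration, not parameters for any real drug.

```python
import math

# --- Hypothetical parameters (illustrative only, not for any real drug) ---
dose_mg = 100.0       # single oral dose of X
F = 0.8               # bioavailability (fraction of the dose absorbed)
Vd_L = 50.0           # volume of distribution
ke_per_h = 0.2        # first-order elimination rate constant (~3.5 h half-life)
EC50_mg_per_L = 0.5   # concentration giving half the maximal effect
Emax = 1.0            # maximal effect Z (normalized to 1)

def concentration(t_h: float) -> float:
    """Pharmacokinetics: plasma concentration (mg/L) t hours after the dose,
    assuming instant absorption and first-order elimination."""
    c0 = dose_mg * F / Vd_L
    return c0 * math.exp(-ke_per_h * t_h)

def effect(c: float) -> float:
    """Pharmacodynamics: Emax model mapping concentration to effect Z."""
    return Emax * c / (EC50_mg_per_L + c)

# "How much Z do I get from this much X, and for how long?"
for t in range(0, 25, 4):
    c = concentration(t)
    print(f"t={t:2d} h  concentration={c:5.2f} mg/L  effect={effect(c):4.0%} of max")
```

With even a toy model like this you can ask quantitative follow-ups, e.g. what dose keeps the effect above 50% of maximum for 12 hours, instead of stopping at the qualitative claim that X triggers Z.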