If you needed a reason to exit this subculture, here are several dozen, including the cult of genius, ingroup overtrust, insularity, out-of-touchness, lack of rigor, and the lack of a sharp culture in the current environments.
Timestamps are included so that you can skip to each topic in the video.
Finally, there are 34 references in the description. I would have included more, but I hit the character limit.
"Most of the sequences are not about rationality, but about things that Eliezer considers cool, such as AI or evolutionary psychology..."
I see that we already disagree a lot at the very beginning. You say that rationality is about overcoming biases. I agree with that, but then I am also curious why those biases exist. As I see it, human biases are either random quirks of human evolution (which evolutionary psychology might explain), or something that happens to intelligences in general (and then we should also expect AI to be prone to them).
Also, what are the biases? A frequent approach I have seen is providing a list of "fallacies" that you are supposed to avoid. That is a thing that can easily be abused; if you know enough fallacies, you can dismiss almost anything you do not like (start by rejecting science as a "fallacy of argument from authority", and then use the rest of the list to shoot down any attempt to rederive the knowledge from scratch).
But maybe more importantly, how does this kind of rationality survive under reflection? Rationality defined as avoiding the list of fallacies on Wikipedia or in some textbook... but why exactly this list, as opposed to making my own list, or maybe using some definition of correct thinking provided by a helpful political or religious institution? Why is Wikipedia or a Cambridge Handbook the correct source of the list of fallacies? Sounds to me like a fallacy of authority, or a fallacy of majority, or one of those other things you want me to avoid.
What if there is a fallacy that hasn't been discovered yet? If I proposed one, how would we know whether it should be added to the list? (Is it okay if I edit Wikipedia to add "fallacy of political correctness"? Just kidding.)
"AI is not rationality. AI is machine learning. Just call it what it is."
The recent debates on LW are often about machine learning, because that is the current hot thing. But other approaches were tried in the past, for example expert systems. Who knows, maybe in hindsight we will laugh at everything that was not machine learning as an obvious dead end. Maybe. Anyway, the artificial intelligence mentioned in the Sequences is defined more broadly.
And if machine learning turns out to be the only way to get artificial intelligence... I think it will be quite important to consider its rationality and biases. Especially when it becomes smarter than humans, or if it starts to control a significant part of the economy or the military.
If a sufficiently smart artificial intelligence becomes widely accessible as a smartphone app, so that you can ask it any question (voice recognition, you do not even have to type) and it will give you a good answer with probability 99%, and when it becomes cheap enough that most people can afford it... at that moment, the question of AI rationality and alignment with human values will be more important than human rationality, because most humans will outsource their thinking to the cloud. (Just like today many people object to learning things, because you can find anything on Google. But on Google, you still need to use the right keywords, separate good information from nonsense, and figure out how to apply it to your current problem. The AI will do all of these things for you.)
I guess the argument is that (a) a superhuman AI will probably be developed soon, (b) whether it is properly aligned with human values or not will have tremendous impact on the future of humanity, and (c) MIRI is one of the organizations that take this problem most seriously.
If you agree with all three parts, then the funding makes sense. If you disagree with any one of them, it does not. At least from a political perspective, it would be better not to talk about funding missions that require belief in severa...