Wei_Dai comments on Outside View(s) and MIRI's FAI Endgame - Less Wrong
Thanks for the link. I don't think I've seen that comment before. Steve raises the examples of Bayesian decision theory and Solomonoff induction to support his position, but to me both of these are examples of philosophical ideas that looked really good at some point but then turned out to be incomplete / not quite right. If the FAI team comes up with new ideas that are in the same reference class as Bayesian decision theory and Solomonoff induction, then I don't know how they can gain enough confidence that those ideas can be the last words in their respective subjects.
Well, I'm human, which means I have multiple conflicting motivations. I'm going because I'm really curious about what direction the participants will take decision theory.