wuwei comments on Intelligence enhancement as existential risk mitigation - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm not sure intelligence enhancement alone is sufficient. It would be better to do rationality enhancement first and then intelligence enhancement. Of course, that's also much harder to implement, but who said it would be easy?
It sounds like you think intelligence enhancement would result in rationality enhancement. I'm inclined to agree that there is a modest correlation but doubt that it's enough to warrant your conclusion.
I suspect you aren't sufficiently taking into account the magnitude of people's irrationality and the non-monotonicity of rationality's rewards. I agree that intelligence enhancement would have greater overall effects than rationality enhancement, but rationality enhancement's effects would be more careful and targeted -- and therefore more likely to work as existential risk mitigation.
Could you elaborate on the shape of the rewards to rationality?
This was covered in some LW posts a while ago (which I cannot be arsed to look up and link); the paradigmatic example in those posts, I think, was a LWer who used to be a theist with a theist girlfriend, but reading OB/LW material convinced him of the irrationality of belief in God. Then his girlfriend left his hell-bound hide for greener pastures, and his life was in general poorer than before he started reading OB/LW and striving to be more rational.
The suggestion is that the relationship between rationality and well-being is U-shaped: you can be well-off as a Bible-thumper, and well-off as a stone-cold Bayesian atheist, but the middle is unhappy.
I'm not sure this is a fair statement. He did say he wouldn't go back if he had the choice.
Increases in rationality can, with some regularity, lead to decreased knowledge or utility (hopefully only temporarily and in limited domains).