The_Jaded_One comments on CFAR’s new focus, and AI Safety - LessWrong

30 Post author: AnnaSalamon 03 December 2016 06:09PM

Comments (88)

Comment author: The_Jaded_One 03 December 2016 03:02:31PM 2 points

Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together. And to do this among the relatively small sets of people tackling existential risk.

I get the impression that 'finding new ways to improve thinking skill' is an area that is mostly saturated. The reasons people don't have great thinking skills might be that:

1) Reality provides extremely sparse feedback on 'the quality of your/our thinking skills', so people don't see those skills as very important.

2) For an individual human, one of roughly 7 billion, thinking rationally is often a worse option than thinking irrationally in the same way as a particular group, because shared opinions facilitate group membership. It's very hard to 'go it alone'.

3) (related to #2) Most decisions a human faces have already been faced by innumerable previous humans and do not require much deep, fundamental-level thought.

These effects seem to present challenges to level-headed, rational thinking about the future of humanity. I see a lot of #2 in bad, broken thinking about AI risk, where the topic is treated as a proxy war for prosecuting various political/tribal conflicts.

In fact, it is possible that the worst is yet to come in terms of political/tribal conflict influencing thinking about AI risk.