Qiaochu_Yuan comments on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

Post author: AnnaSalamon 12 December 2016 07:39PM

Comment author: Qiaochu_Yuan 14 December 2016 07:51:08PM

If there are patterns in your thinking that are consistently causing you to think things that are not true, metacognition is the general tool by which you can notice that and try to correct the situation.

To be more specific, I can very easily imagine AI researchers not believing that AI safety is an issue due to something like cognitive dissonance: if they admitted that AI safety was an issue, they'd be admitting that what they're working on is dangerous and maybe shouldn't be worked on, which contradicts their desire to work on it. The easiest way to resolve the cognitive dissonance, and the most socially acceptable way barring people like Stuart Russell publicly pumping in the other direction, is to dismiss the concern as Luddite fear-mongering. This is the sort of thing you can try to notice and correct about yourself with the right metacognitive tools.

To make another analogy with math, I have never once heard a mathematics graduate student or professor speculate, publicly or privately, about the extent to which pure mathematics is mostly useless and overfunded. This is unsayable among mathematicians, maybe even unthinkable.

Comment author: TheAncientGeek 14 December 2016 08:08:47PM

If there are patterns in your thinking that are consistently causing you to think things that are not true, metacognition is the general tool by which you can notice that and try to correct the situation.

And if there isn't that problem, there is no need for that solution. For your argument to go through, you need to show that the people likely to have an impact on AI safety are also likely to have cognitive problems that affect them when they are doing AI safety. (Saying something like "academics are irrational because some of them believe in God" isn't enough: compartmentalised beliefs have little impact precisely because they are compartmentalised, and instrumental rationality is not epistemic rationality.)

To be more specific, I can very easily imagine AI researchers not believing that AI safety is an issue due to something like cognitive dissonance:

I dare say

if they admitted that AI safety was an issue, they'd be admitting that what they're working on is dangerous and maybe shouldn't be worked on, which contradicts their desire to work on it.

I can easily imagine an AI safety researcher maintaining a false belief that AI safety is a huge deal, because if they didn't they would be a nobody working on a non-problem. Funny how you can make logic run in more than one direction.