
TheAncientGeek comments on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” - Less Wrong

Post author: AnnaSalamon 12 December 2016 07:39PM


Comment author: TheAncientGeek 14 December 2016 08:08:47PM * 1 point

If there are patterns in your thinking that are consistently causing you to think things that are not true, metacognition is the general tool by which you can notice that and try to correct the situation

And if there isn't that problem, there is no need for that solution. For your argument to go through, you need to show that people likely to have an impact on AI safety are likely to have cognitive problems that affect them when they are doing AI safety work. (Saying something like "academics are irrational because some of them believe in God" isn't enough. Compartmentalised beliefs have no impact precisely because they are compartmentalised. Instrumental rationality is not epistemic rationality.)

To be more specific, I can very easily imagine AI researchers not believing that AI safety is an issue due to something like cognitive dissonance:

I dare say

if they admitted that AI safety was an issue, they'd be admitting that what they're working on is dangerous and maybe shouldn't be worked on, which contradicts their desire to work on it.

I can easily imagine an AI safety researcher maintaining a false belief that AI safety is a huge deal, because if they didn't, they would be a nobody working on a non-problem. Funny how you can make logic run in more than one direction.