
Qiaochu_Yuan comments on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” - Less Wrong

35 points | Post author: AnnaSalamon 12 December 2016 07:39PM



Comment author: Qiaochu_Yuan 13 December 2016 10:13:40PM 3 points

Whoever is most likely to impact AI safety is someone who knows a lot about AI. Is such a person likely to be badly lacking in rationality?

Sure. I think selecting for knowing a lot about AI mostly selects for raw intelligence and a particular kind of curiosity, and that neither of these are all that correlated with what one might call "street rationality," except insofar as street rationality requires enough raw intelligence to reliably do metacognition. There are plenty of very intelligent people who do almost no metacognition.

And could we have concrete examples of people in a position to impact AI safety?

Elon Musk, Peter Thiel, people who work or might work at DeepMind and similar groups...

Comment author: TheAncientGeek 14 December 2016 10:52:52AM 0 points

Sure. I think selecting for knowing a lot about AI mostly selects for raw intelligence and a particular kind of curiosity,

The two things you mention add up, minimally, to wanting to know about AI.

There is a third component to actually knowing a lot about AI: having succeeded in learning about AI, which is to say, having "won" in a certain sense. If rationality is winning, or knowing how to use raw intelligence effectively, then a baseline level of rationality is indicated.

and that neither of these are all that correlated with what one might call "street rationality," except insofar as street rationality requires enough raw intelligence to reliably do metacognition. There are plenty of very intelligent people who do almost no metacognition.

Why is metacognition needed for AI safety? I can see how an average person might need to understand that they are, for instance, making anthropomorphic assumptions, but someone with a good understanding of AI would not do that. In fact, someone with hands-on experience of AI would be less biased in their assumptions about AI than someone who merely theorises about AI safety.

Salamon does not use the word "metacognition", but does use the word "philosophical". I can see how AI safety touches on typically philosophical issues like ethics and philosophy of mind, and I can see how people with a tech background might be lacking in that kind of area. I can't see how either raw intelligence or generic thinking skills are going to help with that, since all the evidence is that domain knowledge dominates the raw and the generic. And there is an obvious answer to "AI safety needs an injection of philosophy", and that is to start a joint enterprise with both domain experts in AI and domain experts in the appropriate areas of philosophy. (Compare with the way medical ethics is done, for instance.) This is something MIRI and CFAR have not ... done twice over.

And could we have concrete examples of people in a position to impact AI safety?

Elon Musk, Peter Thiel, people who work or might work at DeepMind and similar groups...

And how do you sell rationality training to them? Presumably not on the basis that they don't know how to win...

Comment author: Vaniver 15 December 2016 12:40:06AM 4 points

There is a third component to actually knowing a lot about AI: having succeeded in learning about AI, which is to say, having "won" in a certain sense. If rationality is winning, or knowing how to use raw intelligence effectively, then a baseline level of rationality is indicated.

Have you heard the anecdote about Kahneman and the planning fallacy? It's from Thinking, Fast and Slow, and deals with him creating a curriculum to teach judgment and decision-making in high school. He puts together a team of experts, they meet for a year, and they have a solid outline. They're talking about estimating uncertain quantities, and he gets the bright idea of having everyone estimate how long it will take until they submit a finished draft to the Ministry of Education. He solicits everyone's probabilities using one of the approved-by-research methods they're including in the curriculum, and their guesses are tightly centered around two years (ranging from about 1.5 to 2.5).

Then he decides to employ the outside view, and asks the curriculum expert how long it took similar teams in the past. That expert realizes that, in the past, about 40% of similar teams gave up and never finished, and of those that finished, none took less than seven years. (Kahneman tries to rescue them by asking about skills and resources, and it turns out that this team is below average, but not by much.)

We should have quit that day. None of us was willing to invest six more years of work in a project with a 40% chance of failure. Although we must have sensed that persevering was not reasonable, the warning did not provide an immediately compelling reason to quit. After a few minutes of desultory debate, we gathered ourselves together and carried on as if nothing had happened. The book was eventually completed eight(!) years later.
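The gap between the two estimation methods in this anecdote can be made concrete with a quick back-of-the-envelope sketch. The base rate and the seven-year floor are the figures from the anecdote; the individual inside-view estimates below are illustrative values consistent with the stated 1.5-2.5 year range:

```python
# Inside view: the team members' own guesses, tightly clustered around two years.
inside_estimates = [1.5, 1.8, 2.0, 2.0, 2.2, 2.5]  # years (illustrative spread)
inside_view = sum(inside_estimates) / len(inside_estimates)

# Outside view: the reference class of similar curriculum teams.
p_abandon = 0.40           # ~40% of similar teams never finished
min_years_if_finished = 7  # no finishing team took less than seven years

print(f"Inside view:  ~{inside_view:.1f} years, implicitly ~100% chance of finishing")
print(f"Outside view: at least {min_years_if_finished} years, "
      f"~{1 - p_abandon:.0%} chance of finishing at all")
```

The two methods don't disagree at the margins; they describe different projects. That is what makes the team's decision to carry on "as if nothing had happened" so striking.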


It seems to me that if the person who discovered the planning fallacy is unable to make basic use of the planning fallacy when plotting out projects, a general sense that experts know what they're doing and are able to use their symbolic manipulation skills on their actual lives is dangerously misplaced. If it is a bad idea to publish things about decision theory in academia (because the costs outweigh the benefits, say), then it will only be bad decision-makers who publish on decision theory!

Comment author: TheAncientGeek 15 December 2016 11:02:54AM 2 points

If we live in a world where the discoverer of the planning fallacy can fall victim to it, we live in a world where teachers of rationality fail to improve anyone's rationality skills.

Comment author: owencb 15 December 2016 11:51:06AM 0 points

This conclusion is way too strong. To just give one way: there's a big space of possibilities where discovering the planning fallacy in fact makes you less susceptible to the planning fallacy, but not immune.

Comment author: TheAncientGeek 15 December 2016 12:49:25PM 2 points

Actually, if CFAR could reliably reduce susceptibility to the planning fallacy, they are wasting their time on AI safety--they could be making a fortune teaching their methods to the software industry, or to engineers in general.

Comment author: alex_zag_al 23 February 2017 06:07:20AM 1 point

Wow, I've read the story, but I didn't quite realize the irony of it being a textbook (not a curriculum, a textbook, right?) about judgment and decision making.

Comment author: Qiaochu_Yuan 14 December 2016 07:32:36PM 3 points

There is a third component to actually knowing a lot about AI: having succeeded in learning about AI, which is to say, having "won" in a certain sense. If rationality is winning, or knowing how to use raw intelligence effectively, then a baseline level of rationality is indicated.

To speak from my own personal experience, I know a lot of math, and mostly the reason I know a lot of math is a combination of raw intelligence and teachers pushing me hard in that direction (for which I'm very grateful). I used almost no metacognition that I can remember; people just shoved topics in my direction and I got curious about and thought about some of them a lot. (But I did not, for example, do any thinking about where my curiosity should be aimed and why, nor did I spend time explicitly brainstorming ways I could be learning math faster or anything like that.)

Why is metacognition needed for AI safety? I can see how an average person might need to understand that they are, for instance, making anthropomorphic assumptions, but someone with a good understanding of AI would not do that. In fact, someone with hands-on experience of AI would be less biased in their assumptions about AI than someone who merely theorises about AI safety.

This is not at all clear to me. I think you underestimate how compartmentalized the thinking of even very intelligent academics can be.

And how do you sell rationality training to them? Presumably not on the basis that they don't know how to win...

You can try convincing them that CFAR teaches skills that they don't have that would help them in some way. In any case, some kind of pitch was good enough for Max Tegmark and Jaan Tallinn, both of whom attended workshops and then played a role in making the Puerto Rico conference happen and founding FLI, along with a few other CFAR alumni. My impression is that this event was more or less responsible for getting Elon Musk on board with AI safety as a cause, which in turn did a lot to normalize AI safety as a topic people could talk about publicly.

Comment author: TheAncientGeek 14 December 2016 07:34:05PM 0 points

Can you answer the question: why is metacognition needed for AI safety?

Comment author: Qiaochu_Yuan 14 December 2016 07:51:08PM 3 points

If there are patterns in your thinking that are consistently causing you to think things that are not true, metacognition is the general tool by which you can notice that and try to correct the situation.

To be more specific, I can very easily imagine AI researchers not believing that AI safety is an issue due to something like cognitive dissonance: if they admitted that AI safety was an issue, they'd be admitting that what they're working on is dangerous and maybe shouldn't be worked on, which contradicts their desire to work on it. The easiest way to resolve the cognitive dissonance, and the most socially acceptable way barring people like Stuart Russell publicly pumping in the other direction, is to dismiss the concern as Luddite fear-mongering. This is the sort of thing you can try to notice and correct about yourself with the right metacognitive tools.

To make another analogy with math, I have never once heard a mathematics graduate student or professor speculate, publicly or privately, about the extent to which pure mathematics is mostly useless and overfunded. This is unsayable among mathematicians, maybe even unthinkable.

Comment author: TheAncientGeek 14 December 2016 08:08:47PM 1 point

If there are patterns in your thinking that are consistently causing you to think things that are not true, metacognition is the general tool by which you can notice that and try to correct the situation.

And if there isn't that problem, there is no need for that solution. For your argument to go through, you need to show that people likely to have an impact on AI safety are likely to have cognitive problems that affect them when they are doing AI safety. (Saying something like "academics are irrational because some of them believe in God" isn't enough. Compartmentalised beliefs have no impact precisely because they are compartmentalised. Instrumental rationality is not epistemic rationality.)

To be more specific, I can very easily imagine AI researchers not believing that AI safety is an issue due to something like cognitive dissonance:

I dare say.

if they admitted that AI safety was an issue, they'd be admitting that what they're working on is dangerous and maybe shouldn't be worked on, which contradicts their desire to work on it.

I can easily imagine an AI safety researcher maintaining a false belief that AI safety is a huge deal, because if they didn't they would be a nobody working on a non-problem. Funny how you can make logic run in more than one direction.

Comment author: ChristianKl 23 February 2017 07:08:51PM 0 points

And how do you sell rationality training to them?

That's why you don't sell them a rationality workshop but a workshop for rationally thinking about AGI risks.