Eliezer_Yudkowsky comments on Will the world's elites navigate the creation of AI just fine? - Less Wrong

Post author: lukeprog 31 May 2013 06:49PM

Comment author: Eliezer_Yudkowsky 01 June 2013 05:06:49PM 3 points

> Many people believe that about climate change (due to global political disruption, economic collapse, etcetera; praising the size of the disaster seems virtuous).

Hm! I cannot recall a single instance of this.

Will keep an eye out for the next citation.

> Still, if the analysis says "you will die of this", and the brain of the person considering the analysis is willing to assign it some credence [...]

This has not happened with AI risk so far among most AIfolk, or anyone the slightest bit motivated to reject the advice. We had a similar conversation at MIRI once, in which I was arguing that, no, people don't automatically change their behavior as soon as they are told that something bad might happen to them personally. As we were breaking up, Anna, on her way out, asked Louie downstairs how he had reasoned about choosing to ride motorcycles.

People only avoid certain sorts of death risks under certain circumstances.

Comment author: Benja 01 June 2013 05:27:13PM 2 points

> Will keep an eye out for the next citation.

Thanks!

> [...] motorcycles. [...]

Point. Need to think.

Comment author: Eugine_Nier 01 June 2013 08:27:03PM 3 points

> We had a similar conversation at MIRI once, in which I was arguing that, no, people don't automatically change their behavior as soon as they are told that something bad might happen to them personally

Being told something is dangerous ≠ believing it is ≠ alieving it is.