Eliezer_Yudkowsky comments on Will the world's elites navigate the creation of AI just fine? - Less Wrong

Post author: lukeprog 31 May 2013 06:49PM




Comment author: Eliezer_Yudkowsky 01 June 2013 12:47:04PM 0 points

Climate change doesn't have the aspect that "if this ends up being a problem at all, then chances are that I (or my family/...) will die of it".

Many people believe that about climate change (due to global political disruption, economic collapse, et cetera; praising the size of the disaster seems virtuous). Many others do not believe it about AI. Many put a sizable climate-change disaster into the far future. Many people will go on believing this about AI independently of any evidence which accrues. Actors with something to gain by minimizing their belief in climate change so minimize. This has also been true in AI risk so far.

Comment author: Benja 01 June 2013 02:09:50PM 1 point

Many people believe that about climate change (due to global political disruption, economic collapse etcetera, praising the size of the disaster seems virtuous).

Hm! I cannot recall a single instance of this. (Well, I can recall one instance of a TV interview with a politician from a non-first-world island nation who took seriously the projections that would put his nation under water, so it would not be much of a stretch to think that he was taking seriously the possibility that people close to him might die from this.) If you have, it is probably because I haven't read that much about what people say about climate change. Could you give me an indication of the extent of your evidence, to help me decide how much to update?

Many others do not believe it about AI.

Ok, agreed, and this still seems likely even if you imagine sensible AI risk analyses being as well-known as climate change analyses are today. I can see how that could lead to an outcome similar to today's situation with climate change... Still, if the analysis says "you will die of this", and the brain of the person considering the analysis is willing to assign it some credence, that seems to align personal selfishness with global interests more than climate change has seemed to so far.

Comment author: Eliezer_Yudkowsky 01 June 2013 05:06:49PM 3 points

Many people believe that about climate change (due to global political disruption, economic collapse, et cetera; praising the size of the disaster seems virtuous).

Hm! I cannot recall a single instance of this.

Will keep an eye out for the next citation.

Still, if the analysis says "you will die of this", and the brain of the person considering the analysis is willing to assign it some credence

This has not happened with AI risk so far among most AI folk, or among anyone the slightest bit motivated to reject the advice. We had a similar conversation at MIRI once, in which I was arguing that, no, people don't automatically change their behavior as soon as they are told that something bad might happen to them personally; and as we were breaking up, Anna, on her way out, asked Louie downstairs how he had reasoned about choosing to ride motorcycles.

People only avoid certain sorts of death risks under certain circumstances.

Comment author: Benja 01 June 2013 05:27:13PM 2 points

Will keep an eye out for the next citation.

Thanks!

[...] motorcycles. [...]

Point. Need to think.

Comment author: Eugine_Nier 01 June 2013 08:27:03PM 3 points

We had a similar conversation at MIRI once, in which I was arguing that, no, people don't automatically change their behavior as soon as they are told that something bad might happen to them personally

Being told something is dangerous =/= believing it is =/= alieving it is.