dxu comments on Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time" - Less Wrong

50 Post author: ciphergoth 22 January 2015 08:21PM




Comment author: dxu 25 January 2015 11:07:55PM *  4 points

The problem is that there's too much stuff to be done. From Gates' perspective, he could spend his time worrying exclusively about AI, or exclusively about global warming, or biological pandemics, and so on. He chooses, of course, the broader route of focusing on more than one risk at a time. As a result, AI being on his radar doesn't necessarily mean he'll do anything about it; if AI is threat #11 on his list of possible x-risks, for instance, he might be too busy worrying about threats #1-10. Whether he prioritizes AI is an entirely separate issue from whether he is actually concerned about it, so the fact that he is apparently aware of AI risk isn't as reassuring as it might look at first glance.

Comment author: tim 27 January 2015 07:01:17AM 0 points

Yeah, but worlds where AI is on his radar probably have a much higher Bill-Gates-intervention rate than worlds where it isn't.

The base rate might be low, but I still like hearing that one of the necessary conditions has been met.