Artaxerxes comments on Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time" - Less Wrong

50 Post author: ciphergoth 22 January 2015 08:21PM


Comment author: Artaxerxes 22 January 2015 10:53:58PM 20 points

Just knowing that this seems to be on Bill's radar is pretty reassuring. The guy has lots of resources to throw at stuff he wants something done about.

Comment author: Gondolinian 23 January 2015 11:49:41AM 12 points

And he has a track record of actually doing things with his money, unlike the hundreds of other people who have plenty of resources to throw at things they want something done about, but never do so in any significant way.

Comment author: dxu 25 January 2015 10:59:45PM 2 points

Name some?

Comment author: dxu 25 January 2015 11:07:55PM 4 points

The problem is that there's too much stuff to be done. From Gates' perspective, he could spend his time worrying exclusively about AI, or exclusively about global warming, or biological pandemics, and so on. He chooses, of course, the broader route of focusing on more than one risk at a time. Because of this, AI being on his radar doesn't necessarily mean he'll do something about it; if AI is threat #11 on his list of possible x-risks, for instance, he might be too busy worrying about threats #1-10. Whether he will act is a separate issue from whether he is actually concerned, so the fact that he is apparently aware of AI risk isn't as reassuring as it might look at first glance.

Comment author: tim 27 January 2015 07:01:17AM 0 points

Yeah, but worlds where AI is on his radar probably have a much higher Bill-Gates-intervention rate than worlds where it isn't.

The base rate might be low, but I still like to hear that one of the necessary conditions has been met.