dxu comments on Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (18)
Just knowing that this seems to be on Bill's radar is pretty reassuring. The guy has lots of resources to throw at stuff he wants something done about.
The problem is that there's too much stuff to be done. From Gates' perspective, he could spend his time worrying exclusively about AI, or exclusively about global warming, or biological pandemics, and so on. He chooses, of course, the broader route of attending to more than one risk at a time. Because of this, AI being on his radar doesn't necessarily mean he'll do something about it; if AI is threat #11 on his list of possible x-risks, for instance, he might be too busy worrying about threats #1-10. How he prioritizes AI is an entirely separate issue from whether he is concerned about it at all, so the fact that he is apparently aware of AI risk isn't as reassuring as it might look at first glance.
Yeah, but worlds where AI is on his radar probably have a much higher Bill-Gates-intervention rate than worlds where it isn't.
The base rate might be low, but I still like hearing that one of the necessary conditions has been met.