"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."

- Bill Gates, from "Africa Needs Aid, Not Flawed Theories"

One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a global catastrophic risk (GCR). It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:

"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."

20 comments

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk

This isn't Bayesianism; this is something closer to caring about expected utility. Not the same thing.

Roko (13y):

It might be the way one phrases Bayesianism in a popular article, where the aim is to argue in favor of the object-level proposal rather than weaken the article by relying explicitly on Bayesianism.

How so? They seem disconnected. Bayesianism is an epistemological approach. There's nothing, for example, that would stop someone from being a Bayesian and a virtue ethicist, or a Bayesian with a deontology based on divine command theory.

Surely Bill Joy is another possibility, and Kurzweil does talk at least a bit about AI x-risk.

Roko (13y):

True. I hadn't thought of that.

Where does Kurzweil talk about AI x-risk? I read the whole of TSIN and there is precisely one tiny paragraph in it about AI risk.

Roko (13y):

By the way, my memory fails me: what exactly does Joy say about AI risk? What is his angle? If I recall correctly, he cites the dangers of robots, not of superintelligence.

E.g. the word "superintelligence" (or "superintelligent") appears only once in Bill Joy's famous essay "Why the future doesn't need us", and that in a Moravec quote. "Robot" (or "robotics") appears 52 times.

He says, of the "robots":

"If they are smarter than us, stronger than us, evolve quicker than us, they are likely to out-evolve us - in the same way that we have taken over the planet and out-evolved most of the other creatures" - source.

Roko (13y):

Still, that doesn't tell me why Gates said "superintelligent computers" rather than "highly-evolved robots".

Give a superintelligence some actuators and it becomes a robot. A superintelligence without actuators is not much use to anyone.

The point is that Gates's turn of phrase is informative about the provenance of his ideas.

It might so inform, but Gates has a brain between his ears and mouth, and these concepts are likely to be old and familiar ones for him, so internal concept processing also seems fairly likely.

An oracle built with solid-state drives and no cooling fans would not be of use to anyone?

One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a global catastrophic risk (GCR). It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).

The idea is also quite common in science fiction.

Or reading OvercomingBias (unlikely), or talking to someone who did (more likely). My impression is that more people may have come into contact with the "Scary Idea" through Eliezer's writing than through that of the other people you list (except probably Kurzweil). Back when Eliezer was posting daily on OB, I'd see mentions of the blog from quite varied sources (all quite geeky).

Of course, still more people have been exposed to a form of the Scary Idea through the Terminator movies and other works of fiction.

Gates could have come up with the idea by himself, too.

Roko (13y):

Very doubtful, since he goes on to reject it.

Unless he actually accepts it, but can't say so in public for fear of being branded a kook.

Kurzweil does say that AGI is a GCR.

Roko (13y):

Where?

In The Singularity Is Near, go to the index and look for "risk," "pathogen," and so on to find the relevant chapter. He says that the best way to reduce AI risk is to be moral, so that our future selves and successors respond well.

Note that "rational optimism" seems rather opposed to an apocalyptic end of civilisation.

Cultural evolution apparently consists of things getting better, since better things are what selection favours.