
Superintelligent AI mentioned as a possible risk by Bill Gates

7 Post author: FormallyknownasRoko 28 November 2010 11:51AM

"There are other potential problems in the future that Mr. Ridley could have addressed but did not. Some would put super-intelligent computers on that list. My own list would include large-scale bioterrorism or a pandemic ... But bioterrorism and pandemics are the only threats I can foresee that could kill over a billion people."

- Bill Gates 

From

Africa Needs Aid, Not Flawed Theories

One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a GCR. It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk:

"Even though we can't compute the odds for threats like bioterrorism or a pandemic, it's important to have the right people worrying about them and taking steps to minimize their likelihood and potential impact. On these issues, I am not impressed right now with the work being done by the U.S. and other governments."

Comments (20)

Comment author: CarlShulman 28 November 2010 01:46:29PM *  4 points [-]

Surely Bill Joy is another possibility, and Kurzweil does talk at least a bit about AI x-risk.

Comment author: FormallyknownasRoko 28 November 2010 02:05:07PM 2 points [-]

True. I hadn't thought of that.

Where does Kurzweil talk about AI x-risk? I read the whole of TSIN and there is precisely one tiny paragraph in it about AI risk.

Comment author: FormallyknownasRoko 28 November 2010 02:09:08PM *  1 point [-]

By the way my memory fails me: what exactly does Joy say about AI risk? What is his angle? If I recall correctly he cites the dangers of robots, not of superintelligence.

E.g. the word "superintelligence(ent)" only appears once in Bill Joy's famous essay "Why the future doesn't need us", and that in a Moravec quote. "Robot(ics)" appears 52 times.

Comment author: timtyler 28 November 2010 02:45:48PM *  1 point [-]

He says - of the "robots":

"If they are smarter than us, stronger than us, evolve quicker than us, they are likely to out-evolve us - in the same way that we have taken over the planet and out-evolved most of the other creatures" - source.

Comment author: FormallyknownasRoko 28 November 2010 03:01:24PM 1 point [-]

Still, that doesn't tell me why Gates said "superintelligent computers" rather than "highly-evolved robots".

Comment author: timtyler 28 November 2010 03:40:42PM *  -1 points [-]

Give a superintelligence some actuators and it becomes a robot. A superintelligence without actuators is not much use to anyone.

Comment author: ciphergoth 30 November 2010 07:46:46AM 2 points [-]

The point is that Gates's turn of phrase is informative about the provenance of his ideas.

Comment author: timtyler 30 November 2010 11:22:29PM 1 point [-]

It might so inform - but Gates has a brain between his ears and mouth - and these concepts are likely to be old and familiar ones for him - so internal concept processing also seems fairly likely.

Comment author: Nic_Smith 28 November 2010 09:42:31PM 1 point [-]

An oracle built with solid-state hard drives and no cooling fans would not be of use to anyone?

Comment author: Emile 28 November 2010 05:51:36PM 3 points [-]

One wonders where Bill Gates read that superintelligent AI could be (but in his estimation, in fact isn't) a GCR. It couldn't have been Kurzweil, because Kurzweil doesn't say that. The only realistic possibilities are that the influence came via Nick Bostrom, Stephen Hawking, Martin Rees, or possibly Bill Joy (see comments).

The idea is also quite common in Science Fiction.

Or reading OvercomingBias (unlikely), or talking to someone who did (more likely) - my impression is that more people may have been in contact with the "Scary Idea" through Eliezer's writing than through that of the other people you list (except probably Kurzweil). Back when Eliezer was posting daily on OB, I'd see mentions of the blog from quite varied sources (all quite geeky).

Of course, still more people have been exposed to a form of the Scary Idea through the Terminator movies and other works of fiction.

Comment author: NancyLebovitz 29 November 2010 01:51:21PM 1 point [-]

Gates could have come up with the idea by himself, too.

Comment author: FormallyknownasRoko 29 November 2010 07:06:09PM 0 points [-]

Very doubtful, since he goes on to reject it.

Unless he actually accepts it, but can't say so in public for fear of being branded a kook.

Comment author: JoshuaZ 28 November 2010 05:08:11PM 5 points [-]

It seems that Bill is also something of a Bayesian with respect to global catastrophic risk

This isn't Bayesianism. This is something closer to caring about expected utility. Not the same thing.

Comment author: FormallyknownasRoko 29 November 2010 07:04:18PM 0 points [-]

It might be the way one phrases Bayesianism in a popular article, where the aim is to argue in favor of the object-level proposal rather than weaken the article by relying explicitly on Bayesianism.

Comment author: JoshuaZ 29 November 2010 11:05:28PM 2 points [-]

How so? They seem disconnected. Bayesianism is an epistemological approach. There's nothing for example that would stop someone from being a Bayesian and a virtue ethicist. Or a Bayesian with a deontology based on divine command theory.

Comment author: MichaelVassar 28 November 2010 04:43:12PM 1 point [-]

Kurzweil does say that AGI is a GCR.

Comment author: FormallyknownasRoko 28 November 2010 04:44:47PM 1 point [-]

Where?

Comment author: CarlShulman 28 November 2010 04:59:29PM 4 points [-]

In The Singularity Is Near, go to the index and look for "risk," "pathogen," and so on to find the relevant chapter. He says that the best way to reduce AI risk is to be moral, so that our future selves and successors respond well.

Comment author: timtyler 28 November 2010 01:29:46PM 0 points [-]

Note that "rational optimism" seems rather opposed to an apocalyptic end of civilisation.

Cultural evolution apparently consists of things getting better - since better things are what selection favours.