Wei_Dai comments on Some Thoughts on Singularity Strategies - Less Wrong

Post author: Wei_Dai, 13 July 2011 02:41AM (26 points)


Comment author: Wei_Dai, 14 July 2011 06:48:22PM, 14 points

I was informed by Justin Shovelain that recently he independently circulated a document arguing for "IA first", and that most of the two dozen people he showed it to agreed with it, or nearly so.

I'm a bit surprised there haven't been more people arguing (or at least stating their intuition) that "AI first" is the better strategy.

But I did find that Eliezer had written an argument explaining why he chose the "AI first" strategy in Artificial Intelligence as a Positive and Negative Factor in Global Risk (pages 31-35). Here's the conclusion from that section:

I would be pleasantly surprised if augmented humans showed up and built a Friendly AI before anyone else got the chance. But someone who would like to see this outcome will probably have to work hard to speed up intelligence enhancement technologies; it would be difficult to convince me to slow down. If AI is naturally far more difficult than intelligence enhancement, no harm done; if building a 747 is naturally easier than inflating a bird, then the wait could be fatal. There is a relatively small region of possibility within which deliberately not working on Friendly AI could possibly help, and a large region within which it would be either irrelevant or harmful. Even if human intelligence enhancement is possible, there are real, difficult safety considerations; I would have to seriously ask whether we wanted Friendly AI to precede intelligence enhancement, rather than vice versa.

I do not assign strong confidence to the assertion that Friendly AI is easier than human augmentation, or that it is safer. There are many conceivable pathways for augmenting a human. Perhaps there is a technique which is easier and safer than AI, which is also powerful enough to make a difference to existential risk. If so, I may switch jobs. But I did wish to point out some considerations which argue against the unquestioned assumption that human intelligence enhancement is easier, safer, and powerful enough to make a difference.

Comment author: Dr_Manhattan, 14 July 2011 09:34:02PM, 4 points

informed by Justin Shovelain that recently he independently circulated a document

Is this a super-secret document, or can we ask Justin to share it?

Comment author: Wei_Dai, 15 July 2011 08:19:10PM, 1 point

Sorry, I should have said that it's a draft document. I didn't see any particularly sensitive information in it, so presumably Justin will release it when it's ready. But the argument is basically along the same lines as my OP.

Comment author: Wei_Dai, 14 July 2011 10:12:25PM, 5 points

If AI is naturally far more difficult than intelligence enhancement, no harm done

I should probably write a more detailed response to Eliezer's argument at some point. But for now it seems worth pointing out that if UFAI is of comparable difficulty to IA, but FAI is much harder (as seems plausible), then attempting to build FAI would cause harm by diverting resources away from IA and contributing to the likelihood of UFAI coming first in other ways.

Comment author: private_messaging, 22 July 2012 04:30:35PM, 1 point

What if UFAI (of the dangerous kind) is incredibly difficult compared to harmless but usable AI, such as a system that can analytically (not by mere brute-forcing) find the inputs to any computable function that maximize its output, and which, for example, understands ODEs?

We could cure every disease, including mortality, with it; we could use it to improve itself, and use it to design the machinery for mind uploading - all with comparatively little effort, as it would take over much of the cognitive workload. But it won't help us make the 'utility function' in the SI sense (paperclips, etc.), as that is a problem of definition.

I feel that the term 'unfriendly AI' is a clever rhetorical technique. The above-mentioned math AI is not friendly, but neither is it unfriendly. Several such units could probably be combined to cobble together a natural language processing system as well - though nothing like 'hearing a statement and then adopting a real-world goal matching its general gist'.

Comment author: Wei_Dai, 22 July 2012 09:46:13PM, 0 points

Cousin_it (who took a position similar to yours) and Nesov had a discussion about this, and I tend to agree with Nesov. But perhaps this issue deserves a more extensive discussion. I will give it some thought and maybe write a post.

Comment author: private_messaging, 23 July 2012 05:58:53AM, -1 points

The discussion you link is purely ideological: pessimistic, narrow-minded cynicism about the human race (on Nesov's side) versus the normal view, without any justification whatsoever for either position.

The magical optimizer allows for space colonization (probably), cures for every disease, solutions to energy problems, and so on. We do not have as much room for intelligent improvement when it comes to destroying ourselves - the components for deadly diseases come pre-made by evolution, nuclear weapons have already been invented, etc. The capacity for destruction is bounded by what we have to lose (and we already have the capacity to lose everything), while the capacity for growth is bounded by the much larger value of what we may gain.

Sure, the magical friendly AI is better than anything else. So is a flying carpet better than a car.

When you focus so much on the notion that others are stupid, you forget how hostile the very universe we live in is, and you neglect how important it is to save ourselves from external factors. As long as viruses like the common cold and flu can exist and be widespread, it is only a matter of time until a terrible pandemic kills an enormous number of people (and potentially cripples the economy). We haven't even gotten rid of dangerous parasites yet - we are not even truly at the top of the food chain, if you count parasites. We are also stuck on a rock hurtling through space full of rocks, and we can't go anywhere.

Comment author: torekp, 22 July 2012 03:30:11PM, 0 points

What if, as I suspect, UFAI is much easier than IA, where IA is at the level you're hoping for? Moreover, what evidence can you offer that researchers of von Neumann's intelligence face a significantly smaller difficulty gap between UFAI and FAI than those of mere high intelligence? For some determinacy, let "significantly smaller difficulty gap" mean that von Neumann level intelligence gives at least twice the probability of FAI, conditional on GAI.

Basically, I think you overestimate the value of intelligence.

Which is not to say that a parallel track of IA might not be worth a try.

Comment author: Wei_Dai, 22 July 2012 08:02:22PM, 1 point

What if, as I suspect, UFAI is much easier than IA, where IA is at the level you're hoping for?

I had a post about this.

Moreover, what evidence can you offer that researchers of von Neumann's intelligence face a significantly smaller difficulty gap between UFAI and FAI than those of mere high intelligence? For some determinacy, let "significantly smaller difficulty gap" mean that von Neumann level intelligence gives at least twice the probability of FAI, conditional on GAI.

If it's the case that even researchers of von Neumann's intelligence cannot attempt to build FAI without creating unacceptable risk, then I expect they would realize that (assuming they are not less rational than we are) and find even more indirect ways of building FAI (or of optimizing the universe for humane values in general), for example by building an MSI-2.