torekp comments on Some Thoughts on Singularity Strategies - Less Wrong

Post author: Wei_Dai | 13 July 2011 02:41AM


Comment author: Wei_Dai | 14 July 2011 10:12:25PM | 5 points

If AI is naturally far more difficult than intelligence enhancement, no harm done

I should probably write a more detailed response to Eliezer's argument at some point. But for now it seems worth pointing out that if UFAI (unFriendly AI) is of comparable difficulty to IA (intelligence amplification), but FAI (Friendly AI) is much harder (as seems plausible), then attempting to build FAI would cause harm: it would divert resources away from IA and contribute in other ways to the likelihood of UFAI coming first.

Comment author: torekp | 22 July 2012 03:30:11PM | 0 points

What if, as I suspect, UFAI is much easier than IA at the level you're hoping for? Moreover, what evidence can you offer that researchers of von Neumann's intelligence face a significantly smaller difficulty gap between UFAI and FAI than those of merely high intelligence? For some determinacy, let "significantly smaller difficulty gap" mean that von Neumann-level intelligence gives at least twice the probability of FAI, conditional on GAI.
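To make that criterion concrete, here is a minimal formalization; the notation is mine, with vN standing for von Neumann-level researchers and H for merely-high-intelligence ones:

$$P(\mathrm{FAI} \mid \mathrm{GAI},\, \mathrm{vN}) \;\ge\; 2 \cdot P(\mathrm{FAI} \mid \mathrm{GAI},\, \mathrm{H})$$

That is, conditional on a research team producing general AI at all, von Neumann-level intelligence would have to at least double the probability that the result is Friendly.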

Basically, I think you overestimate the value of intelligence.

Which is not to say that a parallel track of IA might not be worth a try.

Comment author: Wei_Dai | 22 July 2012 08:02:22PM | 1 point

What if, as I suspect, UFAI is much easier than IA at the level you're hoping for?

I had a post about this.

Moreover, what evidence can you offer that researchers of von Neumann's intelligence face a significantly smaller difficulty gap between UFAI and FAI than those of merely high intelligence? For some determinacy, let "significantly smaller difficulty gap" mean that von Neumann-level intelligence gives at least twice the probability of FAI, conditional on GAI.

If it's the case that even researchers of von Neumann's intelligence cannot attempt to build FAI without creating unacceptable risk, then I expect they would realize that (assuming they are not less rational than we are) and find even more indirect ways of building FAI (or of optimizing the universe for humane values in general), for example by building an MSI-2.