JoshuaZ comments on Reframing the Problem of AI Progress - Less Wrong Discussion

21 points · Post author: Wei_Dai 12 April 2012 07:31PM

Comment author: JoshuaZ 12 April 2012 10:37:28PM 1 point

If someone wants to question the importance of facing this problem, they really instead need to argue that a superintelligence isn't possible (not even a modest one), or that the future will turn out to be close to the best possible just by everyone pushing forward their own research without any concern for the big picture, or perhaps that we really don't care very much about the far future and distant strangers and should pursue AI progress just for the immediate benefits.

False dilemma. For example, someone may think that superintelligences cannot arise quickly. Or they may think that improvement of our own intelligence will make us effectively superintelligences well before we solve the AI problem (because it is just that tricky).

Comment author: Vladimir_Nesov 12 April 2012 10:56:56PM * 3 points

The point is the eventual possibility of an intelligence significantly stronger than that of current humans, with "humans growing up" a special case of that. The latter doesn't resolve the problem, because "growing out of humans" doesn't automatically preserve values; this is a problem that must be solved in any case where vanilla humans are left behind, no matter in what manner or how slowly that happens.

Comment author: Wei_Dai 13 April 2012 01:31:48AM 1 point

False dilemma.

Do you mean that the set of possible objections I gave isn't complete? If so, I didn't mean to imply that it was.

For example, someone may think that superintelligences cannot arise quickly.

And therefore we're powerless to do anything to prevent the default outcome? What about the Modest Superintelligences post that I linked to?

Or they may think that improvement of our own intelligence will make us effectively superintelligences well before we solve the AI problem (because it is just that tricky).

If someone has a strong intuition to that effect, then I'd ask them to consider how to safely improve our own intelligence.