Wei_Dai comments on Reframing the Problem of AI Progress - Less Wrong Discussion

21 points · Post author: Wei_Dai · 12 April 2012 07:31PM

Comment author: Wei_Dai · 13 April 2012 01:31:48AM · 1 point

False dilemma.

Do you mean that the set of possible objections I gave isn't complete? If so, I didn't mean to imply that it was.

For example, someone may think that superintelligences cannot arise quickly.

And therefore we're powerless to do anything to prevent the default outcome? What about the Modest Superintelligences post that I linked to?

Or they may think that improvement of our own intelligence will make us effectively superintelligences well before we solve the AI problem (because it is just that tricky).

If someone has a strong intuition to that effect, then I'd ask them to consider how to safely improve our own intelligence.