JoshuaZ comments on Reframing the Problem of AI Progress - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (47)
False dilemma. For example, someone may think that superintelligences cannot arise quickly. Or they may think that improvement to our own intelligence will make us effective superintelligences well before we solve the AI problem (because it is just that tricky).
The point is the eventual possibility of an intelligence significantly stronger than that of current humans, with "humans growing up" a special case of that. The latter doesn't resolve the problem, because "growing out of humans" doesn't automatically preserve values; this is a problem that must be solved in any case where vanilla humans are left behind, no matter in what manner or how slowly that happens.
Do you mean that the set of possible objections I gave isn't complete? If so, I didn't mean to imply that it was.
And therefore we're powerless to do anything to prevent the default outcome? What about the Modest Superintelligences post that I linked to?
If someone has a strong intuition to that effect, then I'd ask them to consider how to safely improve our own intelligence.