Gabriel comments on Reframing the Problem of AI Progress - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Think of the tool and its human user as a single system. As long as the system is limited by the human's intelligence, it will not be as powerful as a system consisting of the same tool driven by a superhuman intelligence. And if the system isn't limited by the human's intelligence, then the tool is making decisions: it is an AI, and we face the problem of making it follow the operator's will. (And didn't you mean to say "as powerful as any (U)FAI"?)
In general, it doesn't make much sense to draw a sharp distinction between tools and the wills that use them. How do you draw the line in the case of a self-modifying AI?
Reasoning by cooked anecdote? Why speak of tanks and not, for example, automated biochemistry labs? I can imagine such labs existing in the future. And one of them could win the war against all the other biochemistry labs in the world, and the rest of the biosphere too, if it were driven by a superior intelligence.