Bugmaster comments on So You Want to Save the World - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (146)
I read Luke as making three claims there, two explicit and one implicit:
1) Is true as long as there is no infallible outside intervention and recursively self-improving AI is possible in principle; and unless we are talking about things like "there's no such thing as intelligence" or "intelligence is boolean", I don't understand well enough what it would even mean for that to be impossible in principle to assign probability mass to worlds like that.
It makes sense to assign lower probability to the other two claims, but the "inevitable" part referred to the first claim (which was also the one you quoted when you asked), and I answered for that. Even if I disagreed about it being inevitable, that seems to be what Luke meant.
As far as I understand, your point (2) is too weak. The claim is not merely that the AI will be smarter than us humans by some margin; rather, the claim is that (2a) the AI will become so smart that it will be a different category of being entirely, thus ushering in a Singularity. Some people go so far as to claim that the AI's intelligence will be effectively unbounded.
I personally do not doubt that (1) is true (after all, humans are recursively self-improving entities, so we know it's possible), or that your weaker form of (2) is true (some humans are vastly smarter than average, so again, we know it's possible), but I am not convinced that (2a) is true.