Eliezer_Yudkowsky comments on What I Think, If Not Why - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It seems to me that there's a pretty small probability that an AI not designed to self-improve will be the first AI to go FOOM, when there are already many parties known to me who would like to deliberately cause such an event.
A reasonable question from the standpoint of antiprediction; here you would have to refer back to the articles on cascades and recursion, the article on hard takeoff, etcetera.
Re Tim's "suddenly develop the ability to reprogram and improve themselves all-at-once" - the issue is whether something happens efficiently enough to be local, or fast enough to accumulate an advantage between the leading Friendly AI and the leading unFriendly AI, not whether things can happen with zero resources or instantaneously. But the former position seems to be routinely distorted into the latter strawman.
I know this is four years old, but this seems like a damn good time to "shut up and multiply" (thanks for that thoughtmeme, by the way).