Phil_Goetz6 comments on Disjunctions, Antipredictions, Etc. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I thought that you were changing your position; instead, you have used this opening to lead back into concentrating all your strength into one prediction.
I think this characterizes a good portion of the recent debate: Some people (me, for instance) keep saying "Outcomes other than FOOM are possible", and you keep saying, "No, FOOM is possible." Maybe you mean to address Robin specifically; and I don't recall any acknowledgement from Robin that foom is >5% probability. But in the context of all the posts from other people, it looks as if you keep making arguments for "FOOM is possible" and implying that they prove "FOOM is inevitable".
A second aspect is that some people (again, e.g., me) keep saying, "The escalation leading up to the first genius-level AI might be on a human time-scale," and you keep saying, "The escalation must eventually be much faster than human time-scale." The context makes it look as if this is a disagreement, and as if you are presenting arguments that AIs will eventually self-improve out of the human timescale and saying that they prove FOOM.