ciphergoth comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If Goertzel's claim that SIAI's arguments are so unclear he had to reconstruct them himself can't be disproven by the simple expedient of posting a single link to an immediately available, well-structured, top-down argument, then SIAI should regard producing such an argument as an obvious high-priority, high-value task. If the claim can be disproven by such a link, then that link needs to be advertised more widely, since it seems none of us are aware of it.
The nearest thing to such a link is Artificial Intelligence as a Positive and Negative Factor in Global Risk [PDF].
But of course the argument is a little too large to set out entirely in one paper; the next nearest thing is What I Think, If Not Why, and the title shows in what way that's not what Goertzel was looking for.
44 pages. I don't see anything much like the argument being asked for. The lack of an index doesn't help. The nearest thing I could find was this:
He also claims that intelligence could increase rapidly with "dominant" probability.
This all seems pretty vague to me.