Wei_Dai comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It sounds implausible when you put it like that, but suppose the only practical way to build a superintelligence is through some method that severely constrains the possible goals it might have (e.g., evolutionary methods, or uploading the smartest humans around and letting them self-modify), and attempts to build general purpose AIs/oracles/planning tools get nowhere (i.e., fail to be competitive against humans) until one is already a superintelligence.
Maybe when Bostrom/Armstrong/Yudkowsky talk about "possibility" in connection with the orthogonality thesis, they're talking purely about theoretical possibility as opposed to practical feasibility. In fact Bostrom made this disclaimer in a footnote:
But then who are they arguing against? Are there any AI researchers who think that even given unlimited computing power and intelligence on the part of the AI builder, it's still impossible to create AIs with arbitrary (or diverse) goals? This isn't Pei Wang's position, for example.
There are multiple variations on the OT, and the kind that merely says it is possible can't support the UFAI argument. The UFAI argument is conjunctive, and each stage in the conjunction needs to have a non-negligible probability; otherwise it is a Pascal's Mugging.