Wei_Dai comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong

Post author: Stuart_Armstrong 30 April 2012 01:53PM

Comment author: Wei_Dai 03 May 2012 10:46:03PM 6 points

It sounds implausible when you put it like that, but suppose the only practical way to build a superintelligence is through some method that severely constrains the possible goals it might have (e.g., evolutionary methods, or uploading the smartest humans around and letting them self-modify), and attempts to build general-purpose AIs/oracles/planning tools get nowhere (i.e., fail to be competitive against humans) until one is already a superintelligence.

Maybe when Bostrom/Armstrong/Yudkowsky talk about "possibility" in connection with the orthogonality thesis, they're talking purely about theoretical possibility as opposed to practical feasibility. In fact Bostrom made this disclaimer in a footnote:

The orthogonality thesis implies that most any combination of final goal and intelligence level is logically possible; it does not imply that it would be practically easy to endow a superintelligent agent with some arbitrary or human-respecting final goal—even if we knew how to construct the intelligence part.

But then who are they arguing against? Are there any AI researchers who think that even given unlimited computing power and intelligence on the part of the AI builder, it's still impossible to create AIs with arbitrary (or diverse) goals? This isn't Pei Wang's position, for example.

Comment author: TheAncientGeek 30 September 2013 05:39:36PM * 2 points

There are multiple variations on the orthogonality thesis, and the kind that just says it is possible can't support the UFAI argument. The UFAI argument is conjunctive, and each stage in the conjunction needs to have a non-negligible probability, else it is a Pascal's Mugging.