jacob_cannell comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A couple of comments:
Without getting into the likelihood of a 'typical AI researcher' successfully creating a benevolent AI, do you doubt Goertzel's "Interdependency Thesis"? I find both to be rather obviously true. Yes, it's possible in principle for almost any goal system to be combined with almost any type or degree of intelligence, but that's irrelevant, because in practice we can expect the distributions over both to be highly correlated in some complex fashion.
I really don't understand why this Orthogonality idea is still brought up so much on LW. It may be true, but it doesn't lead to much.
The space of all possible minds or goal systems is about as relevant to the space of actual practical AIs as the space of all configurations of a human's molecules is to the space of a particular human's potential children.