Luke_A_Somers comments on Muehlhauser-Wang Dialogue - Less Wrong

Post author: lukeprog, 22 April 2012 10:40PM



Comment author: Luke_A_Somers, 23 April 2012 03:59:50PM

Pei seems to conflate the possibility of erroneous beliefs with the possibility of goals that are unfortunate for us. The Assumption of Insufficient Knowledge and Resources (AIKR) isn't what FAI is about, yet you get statements like:

As I mentioned above, the goal system of an adaptive system evolves as a function of the system’s experience. No matter what initial goals are implanted, under AIKR the derived goals are not necessarily their logical implications, which is not necessarily a bad thing (the humanity is not a logical implication of the human biological nature, neither), though it means the designer has no full control to it (unless the designer also fully controls the experience of the system, which is practically impossible). See “The self-organization of goals” for detailed discussion.

Okay, so no one, not even a superintelligent AI, is infallible. An AI may take on misguided instrumental goals. Yup. No way around that. That's totally, absolutely missing the point.

Unless, of course, you think that in a non-negligible portion of uFAI outcomes, the AI does something horrible to us by accident while wanting only the best for us and having a clear, accurate conception of what that is.