AlexLundborg comments on [link] Essay on AI Safety - Less Wrong

12 Post author: jsteinhardt 26 June 2015 07:42AM


Comment author: AlexLundborg 28 June 2015 08:12:52PM · 3 points

You write that the orthogonality thesis "...states that beliefs and values are independent of each other," whereas Bostrom writes that it states that almost any level of intelligence is compatible with almost any values. Isn't that a deviation? Could you motivate the choice of words here? Thanks.

From The Superintelligent Will: "...the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal."