Juno_Watt comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

20 Post author: Stuart_Armstrong 15 May 2012 10:23AM




Comment author: JGWeissman 16 May 2012 05:13:20PM 2 points

If an agent with goal G1 acquires sufficient "philosophical ability" that it concludes goal G is the right goal to have, that means it has decided that the best way to achieve goal G1 is to pursue goal G. For that to happen, I find it unlikely that goal G is anything other than a clarification of goal G1 in light of some confusion revealed by the "philosophical ability", and I find it extremely unlikely that there is some universal goal G that works for any goal G1.