Luke_A_Somers comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong

Post author: KatjaGrace 11 November 2014 02:00AM




Comment author: Luke_A_Somers 11 November 2014 02:02:17PM 6 points

Among many other things, and most relevantly: we don't know what we want. We have to hack ourselves even to approximate having a utility function. This is predictable from how evolution operates — consistency in complex systems is something evolution is very bad at producing.

An artificial agent would most likely be built to know what it wants, and could easily have a utility function.
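To make the contrast concrete, here is a minimal toy sketch (purely illustrative, not anything from the post): an agent whose preferences are a single explicit, consistent utility function, so its choice among actions is fully determined by maximization. The `utility`, `choose`, and action names are all hypothetical.

```python
# Toy sketch: an agent built to "know what it wants" — its preferences
# are one explicit utility function over states, so choice reduces to argmax.

def utility(state):
    # Assumed example goal: the agent simply values the count it tracks.
    return state["count"]

def choose(actions, state):
    # Pick the action whose successor state has the highest utility.
    return max(actions, key=lambda act: utility(act(state)))

# Two candidate actions over a simple state.
increment = lambda s: {"count": s["count"] + 1}
do_nothing = lambda s: dict(s)

best = choose([do_nothing, increment], {"count": 0})
print(best is increment)  # the explicit utility function settles the choice
```

Humans, by contrast, have no such single function to hand to `choose` — which is the asymmetry the comment is pointing at.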

The consequences of this one difference are profound.