KatjaGrace comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong

Post author: KatjaGrace 11 November 2014 02:00AM


Comment author: KatjaGrace 11 November 2014 02:07:10AM 1 point

In practice, do you expect a system's values to change with its intelligence?

Comment author: JoshuaFox 11 November 2014 10:32:38AM 3 points
  1. Perhaps in resolving internal inconsistencies in the value system.
  2. An increased intelligence might end up min-maxing: if the utility function contains two terms in some sort of weighted balance, the agent might find that it can sacrifice one term entirely to boost the other, and that under the given weighting this still produces much higher total utility. This would not strictly be a change in values, but it could lead to results that certainly look like one (see the sketch below).
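
A minimal numeric sketch of that min-maxing dynamic (the utility shape, weights, and budget here are hypothetical, chosen only to illustrate the point, not anything from the comment above):

```python
# Toy two-term utility: goal A's term grows quadratically with the
# resources devoted to it, goal B's term grows linearly. Even though
# goal B carries 90% of the weight, the optimizer pours the whole
# budget into goal A and drives goal B's term to zero.

def utility(x: float, budget: float = 100.0,
            w_a: float = 0.1, w_b: float = 0.9) -> float:
    """Resources x go to goal A; the remainder goes to goal B."""
    return w_a * x ** 2 + w_b * (budget - x)

best_x = max(range(101), key=utility)
print(best_x, utility(best_x))  # 100 1000.0 -- goal B fully sacrificed
print(0, utility(0))            # 0 90.0 -- everything on heavily
                                # weighted goal B scores far lower
```

The weights never changed, so the values are formally intact, but the optimal behaviour looks as if goal B had been dropped.
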
Comment author: TheAncientGeek 11 November 2014 01:55:50PM 1 point

I expect a system to face a trade-off between self-improvement and goal stability.

http://johncarlosbaez.wordpress.com/2013/12/26/logic-probability-and-reflection/
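
The linked post discusses the Löbian obstacle to self-trust and the probabilistic reflection result of Christiano, Yudkowsky, Herreshoff, and Barasz. Roughly, as a gloss on that result (my paraphrase, not part of the original comment): there is a coherent probability assignment P that almost trusts its own judgments, in the sense that

```latex
% Probabilistic reflection (rough statement): for every sentence
% \varphi and all rationals a < b,
\forall \varphi,\;\; \forall a, b \in \mathbb{Q}:\qquad
a < P(\varphi) < b
\;\Longrightarrow\;
P\bigl(\ulcorner\, a < P(\varphi) < b \,\urcorner\bigr) = 1
```

Exact self-knowledge of the form P(⌜P(φ) = p⌝) = 1 is still blocked by a liar-style argument, which is one way of cashing out the tension between a system improving itself and keeping its goals provably stable.
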