Russell_Wallace comments on Value is Fragile - Less Wrong

41 Post author: Eliezer_Yudkowsky 29 January 2009 08:46AM


Comment author: Russell_Wallace 01 February 2009 02:48:40AM 0 points

Specifically, the point of utility theory is to predict the actions of complex agents by dividing them into two layers:

1. A simple list of values
2. Complex machinery for attaining those values

The idea is that even if you can't know the details of the machinery, successful prediction might be possible by plugging the values into your own equivalent machinery.
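The two-layer scheme above can be sketched in code. This is a hypothetical toy model, not anything from the post: the function, values, and options are all illustrative, with the shared "machinery" reduced to a bare argmax.

```python
# Toy sketch of the two-layer model: the observer predicts an agent's
# action by plugging the agent's (assumed) values into the observer's
# own optimization machinery. All names here are illustrative.

def shared_machinery(values, options):
    """Layer 2: generic machinery for attaining values --
    pick the option the value function scores highest."""
    return max(options, key=values)

# Layer 1: a simple list of values, expressed as a scoring function.
chess_values = {"win": 2, "draw": 1, "loss": 0}.get

options = ["win", "draw", "loss"]

# Prediction: plug the agent's values into our own machinery.
predicted = shared_machinery(chess_values, options)
print(predicted)  # -> win
```

The prediction succeeds here precisely because the context is simple enough that one argmax over three outcomes stands in for the agent's real machinery, which is the point the comment goes on to press.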

Does this work in real life? In practice it works well for simple agents, or for complex agents in simple/narrow contexts. It works well for Deep Blue, or for Kasparov on the chessboard. It doesn't work for Kasparov in life. If you try to predict Kasparov's actions away from the chessboard using utility theory, it ends up as epicycles: every time you see him take a new action, you can write a corresponding clause into your model of his utility function, but the model has no particular predictive power.
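The "epicycles" failure mode described above is essentially overfitting, and can be illustrated with a toy sketch (all names and actions here are invented for illustration): a utility model built by adding one clause per observed action reproduces the past perfectly but says nothing about unseen actions.

```python
# Toy illustration of the "epicycles" failure mode: fitting one
# utility clause per observed action. Illustrative only.

def fit_utility(observed_actions):
    """One clause per observation: assign high utility to exactly
    the actions already seen."""
    return {action: 1.0 for action in observed_actions}

history = ["plays chess", "writes book", "enters politics"]
model = fit_utility(history)

# Perfect fit on the data it was built from...
print(all(model.get(a, 0.0) == 1.0 for a in history))  # True

# ...but zero predictive power on anything new.
print(model.get("learns painting", 0.0))  # 0.0
```

Each new observation can always be accommodated by another clause, which is exactly why the resulting model never constrains what the agent will do next.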

In hindsight we shouldn't really have expected otherwise; simple models in general have predictive power only in simple/narrow contexts.

Comment author: timtyler 13 February 2013 12:11:33AM 2 points

In hindsight we shouldn't really have expected otherwise; simple models in general have predictive power only in simple/narrow contexts.

Counter-example 1: gene-frequency maximization in biology. A tremendously simple principle with enormous explanatory power.

Counter-example 2: Entropy maximization. Another tremendously simple principle with enormous explanatory power.
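One concrete instance of the entropy-maximization principle can be checked numerically (a minimal sketch, assuming the standard Shannon entropy and no constraints beyond normalization): among distributions over a fixed set of outcomes, the uniform distribution has maximal entropy.

```python
# Numeric check: the uniform distribution over 4 outcomes has higher
# Shannon entropy than any skewed alternative (here, one example).

import math

def entropy(p):
    """Shannon entropy in nats of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

uniform = [0.25] * 4          # maximal entropy: ln(4) ~ 1.386 nats
skewed = [0.7, 0.1, 0.1, 0.1] # ~ 0.94 nats

print(entropy(uniform) > entropy(skewed))  # True
```

The simple principle (entropy never decreases toward the constrained maximum) is what gives the model its predictive power across an enormous range of physical systems, despite the underlying micro-dynamics being intractably complex.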

Note that both are maximization principles - the very type of principle whose limitations you are arguing for.