
David_Gerard comments on Against utility functions - Less Wrong Discussion

Post author: Qiaochu_Yuan 19 June 2014 05:56AM (40 points)



You are viewing a single comment's thread.

Comment author: Wei_Dai 19 June 2014 09:24:50PM 10 points

It seems worth reflecting on the fact that the point of the foundational LW material discussing utility functions was to make people better at reasoning about AI behavior and not about human behavior.

I think part of Eliezer's point was also to introduce decision theory as an ideal for human rationality. (See http://lesswrong.com/lw/my/the_allais_paradox/ for example.) Without talking about utility functions, we can't talk about expected utility maximization, so we can't define what it means to be ideally rational in the instrumental sense (and we also can't justify Bayesian epistemology based on decision theory).

So I agree with the problem stated here, but "let's stop talking about utility functions" can't be the right solution. Instead we need to emphasize more that having the wrong values is often worse than being irrational, so until we know how to obtain or derive utility functions that aren't wrong, we shouldn't try to act as if we have utility functions.
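To make the Allais point concrete, here is a minimal sketch of what expected utility maximization requires, using the classic Allais payoffs (the specific numbers and candidate utility functions are illustrative assumptions, not taken from the linked post):

```python
import math

def expected_utility(gamble, u):
    """Expected utility of a gamble given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in gamble)

# Classic Allais gambles (illustrative payoffs, not the numbers in the linked post).
gamble_1A = [(1.00, 1_000_000)]
gamble_1B = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
gamble_2A = [(0.11, 1_000_000), (0.89, 0)]
gamble_2B = [(0.10, 5_000_000), (0.90, 0)]

# The common human pattern is 1A over 1B *and* 2B over 2A.  No single
# utility function can reproduce both preferences, because 1A > 1B and
# 2B > 2A impose contradictory inequalities on u.
candidates = [
    ("linear", lambda x: x),
    ("log", lambda x: math.log(1 + x)),
    ("sqrt", lambda x: x ** 0.5),
    ("very risk-averse", lambda x: 1 - math.exp(-x / 100_000)),
]

for name, u in candidates:
    prefers_1A = expected_utility(gamble_1A, u) > expected_utility(gamble_1B, u)
    prefers_2B = expected_utility(gamble_2B, u) > expected_utility(gamble_2A, u)
    print(f"{name:16s} 1A over 1B: {prefers_1A!s:5s}  2B over 2A: {prefers_2B}")
```

Whatever u you pick, you can get at most one of the two preferences, never both; that is the sense in which expected utility maximization serves as the standard of instrumental rationality that the Allais pattern violates.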

Comment author: David_Gerard 21 June 2014 09:38:42PM 3 points

The trouble is the people who read the Sequences and went "EY said it, it's probably right, I'll internalise it." This is an actual hazard around here. (Even Eliezer can't make people think, rather than just believe in thinking.)