Wei_Dai comments on The Domain of Your Utility Function - Less Wrong

24 Post author: Peter_de_Blanc 23 June 2009 04:58AM




Comment author: Wei_Dai 24 June 2009 02:51:29AM 3 points

The second question is: if we're constructing an artificial utility function for an AI, or just to make a philosophical point, how should it work? I think your answer is spot on. I would hope that people don't really disagree with you here, and are just getting bogged down by confusion about real brains and some map-territory distinctions, importing epistemology where it isn't really necessary.

Where I've seen people use PDUs in AI or philosophy, they weren't confused, but rather chose to make the assumption of perception-determined utility functions (or even more restrictive assumptions) in order to prove some theorems. See these examples:

Here's a non-example, where the author managed to prove theorems without the PDU assumption:

Comment author: Wei_Dai 09 June 2011 01:56:15AM 2 points

I wrote earlier:

Where I've seen people use PDUs in AI or philosophy, they weren't confused, but rather chose to make the assumption of perception-determined utility functions (or even more restrictive assumptions) in order to prove some theorems.

Well, here's a recent SIAI paper that uses perception-determined utility functions, but apparently not in order to prove theorems (since the paper contains no theorems). The author was advised by Peter de Blanc, who two years ago wrote the OP arguing against PDUs. Which makes me confused: does the author (Daniel Dewey) really think that PDUs are a good idea, and does Peter now agree?

Comment author: Peter_de_Blanc 11 June 2011 01:34:05PM 0 points

I don't think that human values are well described by a PDU. I remember Daniel talking about a hidden reward tape at one point, but I guess that didn't make it into this paper.

Comment author: timtyler 11 June 2011 12:36:06PM 0 points

An adult agent has access to its internal state and its perceptions. If we model its access to its internal state as being via internal sensors, then sense data are all it has access to - its only way of knowing about the world, outside of its genetic heritage.

In that case, utility functions can only accept sense data as inputs - since that is the only thing that any agent ever has access to.

If you have a world-determined utility function, then - at some stage - the state of the world would first need to be reconstructed from perceptions before the function could be applied. That makes the world-determined utility functions an agent can actually compute a subset of the perception-determined ones.
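The composition timtyler describes can be sketched in a few lines of Python. This is a hypothetical toy, not anything from the paper under discussion: the function and variable names (`infer_world_state`, `world_utility`, and so on) are invented for illustration, and the "world" is reduced to a single counter. The point is only that chaining a state estimator with a world-determined utility yields a function of perceptions alone.

```python
# Toy sketch (hypothetical names): a world-determined utility function,
# once composed with a world-state estimator, becomes a function that
# takes only sense data as input -- i.e., a perception-determined one.

def world_utility(world_state):
    # World-determined utility: scores an (estimated) world state.
    return world_state.get("paperclips", 0)

def infer_world_state(percept_history):
    # Toy estimator: reconstructs a world state from raw perceptions.
    return {"paperclips": percept_history.count("clip_made")}

def induced_pdu(percept_history):
    # Composing estimator and world-utility gives a function of
    # perceptions alone -- the "subset" claim in the comment above.
    return world_utility(infer_world_state(percept_history))

print(induced_pdu(["clip_made", "noise", "clip_made"]))
```

Running this prints `2`: the agent's computable utility is determined entirely by its percept history, even though the utility was defined over world states.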