Wei_Dai comments on A Problem About Bargaining and Logical Uncertainty - Less Wrong

Post author: Wei_Dai | 21 March 2012 09:03PM | 23 points


Comment author: Vladimir_Nesov 21 March 2012 09:50:02PM 19 points

To make this more relevant to real life, consider two humans negotiating over the goal system of an AI they're jointly building.

"To give a practical down to earth example, ..."

Comment author: Wei_Dai 21 March 2012 11:34:32PM 1 point

Perhaps a more down-to-earth example would be value conflict within an individual. Absent this problem of logical uncertainty, your conflicting selves should just merge into one agent with a weighted average of their utility functions. This problem suggests that maybe you should instead keep those conflicting selves around until you know more logical facts.
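A minimal sketch of the merge being described (all names and the toy outcomes here are illustrative, not from the post): two conflicting "selves" combine into one agent whose utility function is a fixed weighted average of theirs. The comment's point is that the weight must be fixed now; keeping the selves separate instead leaves room for the effective weight to respond to logical facts learned later.

    # Hypothetical sketch: merging two conflicting "selves" into one agent
    # via a weighted average of their utility functions. The weight is
    # fixed at merge time -- which is exactly what the comment suggests
    # may be premature under logical uncertainty.

    def merge_utilities(u_a, u_b, weight_a):
        """Return the merged utility: weight_a * u_a + (1 - weight_a) * u_b."""
        return lambda outcome: weight_a * u_a(outcome) + (1 - weight_a) * u_b(outcome)

    # Toy conflicting values over two outcomes, "work" and "rest".
    u_work_self = lambda outcome: 1.0 if outcome == "work" else 0.0
    u_rest_self = lambda outcome: 1.0 if outcome == "rest" else 0.0

    merged = merge_utilities(u_work_self, u_rest_self, weight_a=0.6)
    print(merged("work"), merged("rest"))  # 0.6 0.4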

Comment author: Vladimir_Nesov 21 March 2012 11:42:35PM 0 points

Right. But this is also the default safety option: you don't throw away information unless you have a precise understanding of its irrelevance (given that keeping it isn't that costly), and we don't have such an understanding.