
nyan_sandwich comments on Brief Question about FAI approaches - Less Wrong Discussion

3 Post author: Dolores1984 19 September 2012 06:05AM




Comment author: [deleted] 20 September 2012 11:37:54PM *  1 point

Agent utility and utilitarian utility (this renormalization/combining business) are two entirely separate things. There's no reason the former has to impact the latter; in fact, as we can see, combining them causes utility monsters and the like.

I can't comment further. Every way I look at it, combining preferences (utilitarianism) is utterly incoherent. Game theory/cooperation seems the only tractable path. I don't know the context here, though...
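The utility-monster worry about naively combining preferences can be made concrete with a toy sketch (my own illustration, not from this thread, with made-up agents and numbers): if the aggregator just sums reported utilities, the result is sensitive to each agent's arbitrary scale, so an agent reporting on a huge scale dominates every choice.

```python
# Toy sketch (assumed example, not the thread's formalism): summing raw
# utilities across agents lets an agent with an inflated scale -- a
# "utility monster" -- dominate the aggregate choice, because utility
# functions are only defined up to positive affine rescaling.

def aggregate_choice(agents, outcomes):
    """Pick the outcome maximizing the unweighted sum of reported utilities."""
    return max(outcomes, key=lambda o: sum(u[o] for u in agents))

outcomes = ["cake", "broccoli"]

# Two ordinary agents mildly prefer broccoli.
a = {"cake": 0.0, "broccoli": 1.0}
b = {"cake": 0.0, "broccoli": 1.0}

# The monster prefers cake and reports utilities on a 1000x scale.
monster = {"cake": 1000.0, "broccoli": 0.0}

print(aggregate_choice([a, b], outcomes))           # -> broccoli
print(aggregate_choice([a, b, monster], outcomes))  # -> cake
```

Any renormalization scheme meant to fix this has to pick a canonical scale per agent, and there is no obviously principled way to do so, which is one reading of the "utterly incoherent" complaint above.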

if A's highest preference has no chance of being an outcome then isn't the solution to fix A's utility function

Solution for whom? A certainly doesn't want you mucking around in its utility function, as that would cause it to not do good things in the universe (from its perspective).

Comment author: Pentashagon 21 September 2012 12:17:35AM 0 points

Solution for whom? A certainly doesn't want you mucking around in its utility function, as that would cause it to not do good things in the universe (from its perspective)

If A knows that a preferred outcome is completely unobtainable, and it knows that some utilitarian theorist is going to discount its preferences relative to another agent's, isn't it rational for A to adjust its own utility function? Perhaps it's not; striving for unobtainable goals is somehow a human trait.

Comment author: [deleted] 21 September 2012 12:21:36AM 0 points

In pathological cases like that, sure, you can blackmail it into adjusting its post-op utility function. But only if it becomes convinced that doing so gives it a higher chance of getting the things it currently wants.

A lot of those pathological cases go away with reflectively consistent decision theories, but perhaps not that one. I don't feel like working it out.