Stuart_Armstrong comments on Gains from trade: Slug versus Galaxy - how much would I give up to control you? - Less Wrong
I don't believe this is possible in a useful way. However, having a utility solution may mean we can generalise to other situations...
Do you mean not possible for humans with current tools, or theoretically impossible? (It seems to me that in principle human preferences can be mapped to something like a utility function in a way that is at least useful, even if not ideal.)
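To make the "in principle" part concrete: for any finite set of outcomes, a complete, transitive preference ordering can always be represented by a utility function, e.g. by ranking each outcome by how many others it beats. A minimal sketch (the outcomes and the preference ordering here are invented purely for illustration):

```python
# Any complete, transitive preference over a finite outcome set can be
# represented by a utility function: score each outcome by the number
# of outcomes it is strictly preferred to.
outcomes = ["tea", "coffee", "water"]

def prefers(a, b):
    # Toy preference relation assumed for the example: coffee > tea > water.
    order = {"coffee": 2, "tea": 1, "water": 0}
    return order[a] > order[b]

# Utility of an outcome = number of outcomes it strictly beats.
utility = {o: sum(prefers(o, other) for other in outcomes) for o in outcomes}
print(utility)  # {'tea': 1, 'coffee': 2, 'water': 0}
```

The hard part, of course, is whether actual human preferences are complete and transitive to begin with; the sketch only shows that the representation step itself is cheap once they are.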
That's a whole conversation! I probably shouldn't start talking about this, since I don't have the time to do it justice.
In the main, I feel that humans are not easily modelled by a utility function, and moreover we have meta-preferences that make us hate facing the kind of trade-offs that utility functions imply. I'd bet most people would pay to not have their preferences replaced with a utility function, no matter how well defined it was.
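To illustrate the kind of trade-off being pointed at: a utility function assigns an explicit exchange rate between every pair of values, so the agent must accept any deal that beats that rate, however sacred one of the values feels. A minimal sketch with a linear utility whose weights are assumed purely for illustration:

```python
# A utility function makes every trade-off explicit. With
# u(safety, money) = 3 * safety + 1 * money (weights assumed for
# illustration), the agent must accept any deal paying more than
# 3 units of money per unit of safety given up.
def u(safety, money):
    return 3 * safety + 1 * money

before = u(safety=10, money=0)
after = u(safety=9, money=4)   # sell one unit of safety for 4 money
print(after > before)          # True: the agent takes the trade
```

It's exactly this forced commensurability that, I'd guess, most people would pay to avoid.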
But that's a conversation for after the baby!