Kaj_Sotala comments on Gains from trade: Slug versus Galaxy - how much would I give up to control you? - Less Wrong

Post author: Stuart_Armstrong 23 July 2013 07:06PM




Comment author: Kaj_Sotala 22 July 2013 08:44:49AM · 1 point

It's interesting, but it assumes that human desires can be meaningfully mapped into something like a utility function, which makes me skeptical about its usefulness. (Though I have a hard time articulating my objection more clearly than that.)

Comment author: scaphandre 25 July 2013 01:04:47PM · 0 points

I recognise that argument, but surely we can still use utility functions in our models to make progress in thinking about these things?

Even if we crudely imagine a typical human who is ticking all of Maslow's boxes, with access to happiness, meaning, and resources, tending towards our (current...) normalised '1', and someone in solitary confinement under psychological torture tending towards our normalised '0', even then the concept is surely coherent and grokable enough to allow the use of these kinds of models?
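(For concreteness, here is a minimal sketch of that anchoring idea, assuming it amounts to min-max normalisation between two reference outcomes; the raw scores and outcome labels are invented for illustration, not taken from the thread.)

```python
# Minimal sketch: pin two anchor outcomes at 0 and 1, then rescale any
# other raw utility score linearly between them. All numbers here are
# hypothetical, chosen only to illustrate the normalisation.

def normalise(raw, worst, best):
    """Map a raw score onto [0, 1] relative to the two anchor outcomes."""
    return (raw - worst) / (best - worst)

# Hypothetical raw scores on some arbitrary internal scale.
outcomes = {
    "solitary confinement, psychological torture": -100.0,  # anchor -> 0
    "an ordinary day": 40.0,
    "Maslow's boxes all ticked": 100.0,                     # anchor -> 1
}

worst = outcomes["solitary confinement, psychological torture"]
best = outcomes["Maslow's boxes all ticked"]
for name, score in outcomes.items():
    print(f"{name}: {normalise(score, worst, best):.2f}")
```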

Do you disagree? I am curious – I have encountered this point several times and I'd like to see where we differ.

Comment author: Stuart_Armstrong 22 July 2013 05:19:27PM · 0 points

> human desires can be meaningfully mapped into something like a utility function

I don't believe this is possible in a useful way. However, having a utility solution may mean we can generalise to other situations...

Comment author: wedrifid 22 July 2013 05:42:33PM · 3 points

> human desires can be meaningfully mapped into something like a utility function

> I don't believe this is possible in a useful way.

Do you mean not possible for humans with current tools or theoretically impossible? (It seems to me that in principle human preferences can be mapped to something like a utility function in a way that is at least useful, even if not ideal.)

Comment author: Stuart_Armstrong 22 July 2013 06:06:22PM · 3 points

That's a whole conversation! I probably shouldn't start talking about this, since I don't have the time to do it justice.

In the main, I feel that humans are not easily modelled by a utility function, and that we have meta-preferences that cause us to hate facing the kinds of trade-offs that utility functions imply. I'd bet most people would pay to not have their preferences replaced with a utility function, no matter how well defined it was.
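(One concrete way preferences can resist utility representation, for what it's worth: any utility function forces transitive preferences, so a strict preference cycle has no utility representation at all. A small sketch; the three options and the cycle are invented for illustration.)

```python
# Sketch: a strict preference cycle A > B > C > A cannot be represented by
# any utility function, since u(A) > u(B) > u(C) > u(A) is impossible.
# With finitely many options, it suffices to test every ranking of them.

from itertools import permutations

# (x, y) present means "x is strictly preferred to y".
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # last pair closes the cycle

def representable_by_utility(options, prefers):
    """True if some assignment of numbers satisfies u[x] > u[y] for every
    strict preference (x, y)."""
    for ranking in permutations(options):
        u = {opt: -i for i, opt in enumerate(ranking)}  # earlier = higher utility
        if all(u[x] > u[y] for (x, y) in prefers):
            return True
    return False

print(representable_by_utility(["A", "B", "C"], prefers))  # prints False
```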

But that's a conversation for after the baby!