DaFranker comments on Logical Pinpointing - Less Wrong

62 Post author: Eliezer_Yudkowsky 02 November 2012 03:33PM




Comment author: DaFranker 01 November 2012 05:55:16PM *  0 points

Because valuing others' subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.

If one posits that by working together we can achieve a utopia where each individual's values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others' values, would it not follow that it's in everyone's best interests for everyone to build and follow such models?

The free-rider problem is an obvious downside of the above simplification, but that and other issues don't seem to be part of the present discussion.
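The game-theoretic claim can be sketched with a toy model (the payoff numbers, the weight `alpha`, and all function names here are illustrative assumptions, not anything from the comment): in a one-shot Prisoner's Dilemma, a selfish agent defects, but an agent whose effective utility adds a weighted term for the other player's payoff finds cooperation dominant, and two such agents each end up with a higher *raw* payoff than two selfish agents.

```python
# One-shot Prisoner's Dilemma: raw payoffs to the (row, column) player.
# These numbers are a standard illustrative choice, not from the comment.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(alpha, opponent_move):
    """Pick the move maximizing own payoff + alpha * opponent's payoff."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)][0]
                                   + alpha * PAYOFF[(m, opponent_move)][1])

def play(alpha_a, alpha_b):
    """Raw payoffs when each agent plays its dominant strategy."""
    move_a = best_response(alpha_a, "C")
    assert move_a == best_response(alpha_a, "D")  # strategy is dominant
    move_b = best_response(alpha_b, "C")
    assert move_b == best_response(alpha_b, "D")
    return PAYOFF[(move_a, move_b)]

print(play(0.0, 0.0))  # selfish pair: both defect -> (1, 1)
print(play(1.0, 1.0))  # other-regarding pair: both cooperate -> (3, 3)
```

So under these assumed payoffs, acting as if one valued the other's values (here, `alpha = 1`) is literally a winning strategy: (3, 3) beats the selfish equilibrium (1, 1) for both players.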

Comment author: Peterdjones 01 November 2012 06:11:28PM 1 point

Because valuing others' subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.

That doesn't make them beholden, i.e. obligated. They can opt not to play that game. They can opt not to value winning.

If one posits that by working together we can achieve a utopia where each individual's values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others' values, would it not follow that it's in everyone's best interests for everyone to build and follow such models?

Only if such models satisfy individuals better than behaving selfishly would. A utopia that is better on average or in total need not be better for everyone individually.

Comment author: DaFranker 01 November 2012 06:21:18PM *  -1 points

Could you taboo "beholden" in that first sentence? I'm not sure the "feeling of moral duty born of guilt" I associate with the word "obligated" is quite what you have in mind.

They can opt not to play that game. They can opt not to value winning.

Within context, you cannot opt to not value winning. If you wanted to "not win", and the preferred course of action is to "not win", this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.

In other words, you didn't truly value what you thought you valued, but some other thing instead, and you have in fact won at your objective of not winning that sub-game. (The decision to play the game or not is itself a separate, higher-tier game, which you won by deciding to not-win the lower-tier game.)

A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.

(sorry if I'm arguing a bit by definition with the utopia thing, but my premise was that the utopia brings each individual agent's utility to its maximum possible value if there exists a maximum for that agent's function)

Comment author: Peterdjones 01 November 2012 06:43:57PM 1 point

Within context, you cannot opt to not value winning. If you wanted to "not win", and the preferred course of action is to "not win", this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.

Games emerge where people have things other people value. If someone doesn't value those sorts of things, they are not going to play the game.

A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.

I don't see where higher-tier functions come in.

You are assuming that a utopia will maximise everyone's value individually AND that values diverge. That's a tall order.

Comment author: chaosmosis 01 November 2012 07:59:43PM *  0 points

I wouldn't let my values be changed if doing so would thwart my current values. I think you're contending that the utopia would satisfy my current values better than the status quo would, though.

In that case, I would only resist the utopia if I had a deontic prohibition against changing my values (I don't have very strong ones, but I think they're in here somewhere, for some things). You would call this a hidden utility function; I don't think that adequately models the idea that humans are satisficers and not perfect utilitarians. Deontology is sometimes a way of identifying satisficing conditions for human behavior; in that sense I think it can be a much stronger argument.
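The satisficer-versus-maximizer distinction in the paragraph above can be sketched in code (the utilities, the threshold, and the prohibition flags are all hypothetical, chosen only for illustration): a maximizer takes the highest-utility option unconditionally, while a satisficer treats a deontic prohibition as a hard condition and takes the first permitted option that is "good enough."

```python
# Hypothetical options: (utility, violates_a_deontic_prohibition)
OPTIONS = [(0.2, False), (0.95, True), (0.75, False), (0.9, False)]

def maximizer(options):
    """A perfect utilitarian: takes the highest-utility option, full stop."""
    return max(u for u, _ in options)

def satisficer(options, threshold=0.7):
    """Takes the first permitted option that clears the 'good enough' bar.

    The deontic prohibition acts as a satisficing condition: a forbidden
    option is skipped no matter how much utility it offers."""
    for u, forbidden in options:
        if not forbidden and u >= threshold:
            return u
    # Nothing clears the bar: fall back to the best permitted option.
    return max(u for u, forbidden in options if not forbidden)

print(maximizer(OPTIONS))   # 0.95 -- grabs the forbidden option
print(satisficer(OPTIONS))  # 0.75 -- first permitted option above 0.7
```

The two agents diverge exactly where the comment says they should: the maximizer's behavior can only be rationalized as a "hidden utility function," whereas the satisficer's constraint is simply a condition on acceptable actions.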

Even supposing that we were perfect utilitarians, if I placed more value on maintaining my current values than I do on anything else, I would still reject modifying myself and moving towards your utopia.

Comment author: Peterdjones 01 November 2012 08:08:02PM 1 point

Do you think the utopia is feasible?

Comment author: chaosmosis 01 November 2012 08:09:54PM 0 points

Naw. But even if it was, if I placed value on maintaining my current values to a high degree, I wouldn't modify.