Edit: for reasons given in the comments, I don't think the question of what circular preferences actually do is well defined, so this is an answer to a wrong question.
Suppose I like Y more than X at an exchange rate of 0.9Y for 1X, I like Z more than Y at an exchange rate of 0.9Z for 1Y, and I like X more than Z at an exchange rate of 0.9X for 1Z. Given 1X and the ability to trade X for Y at 0.95Y for 1X, Y for Z at 0.95Z for 1Y, and Z for X at 0.95X for 1Z, you might think I would trade in a circle until I had nothing left.
But if I know I have circular preferences, and I know that once I hold 0.95Y I will trade it for (0.95^2)Z and then for (0.95^3)X, then the first trade is really a trade of 1X for (0.95^3)X ≈ 0.857X, which I'm obviously not going to make.
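Here is a minimal sketch of that loop in Python, using the 0.95 rates from the example (the rate table and function names are mine, purely for illustration):

```python
# One full circuit of trades at the 0.95 rates from the example.
RATES = {("X", "Y"): 0.95, ("Y", "Z"): 0.95, ("Z", "X"): 0.95}

def trade_full_circle(amount: float) -> float:
    """Trade X -> Y -> Z -> X once around and return the final amount of X."""
    path = ["X", "Y", "Z", "X"]
    for src, dst in zip(path, path[1:]):
        amount *= RATES[(src, dst)]
    return amount

print(trade_full_circle(1.0))  # 0.95**3 ≈ 0.857, strictly less than the 1X I started with
```

Seen end to end, each circuit just multiplies my holdings by 0.95^3, no matter how favorable each individual trade looks.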
Similarly, suppose the exchange rates are all 1:1, but each trade costs a penny, and I care about a penny much, much less than I care about any of 1X, 1Y, or 1Z. If I trade my X for Y, I know I'm actually going to end up with X minus 3 cents, so I won't make the trade (see the sketch below).
Unless I can set a Schelling fence, in which case I will end up trading once.
So if, instead of being given X, I have a 1/3 chance of each of X, Y, and Z, I would hope not to set a Schelling fence, because then my 1/3 chance of each thing becomes a 1/3 chance of each thing minus the trading penalty. So maybe I'd want to be bad at precommitments... or would I precommit not to precommit?
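Here is that flat-fee loop as a quick sketch, with 1:1 rates and the penny modeled as 0.01 units (my choice of scale, just to make the arithmetic concrete):

```python
# 1:1 exchange rates with a flat fee per trade; the penny is modeled
# as 0.01 units purely for illustration.
FEE = 0.01

def trade_circle_flat_fee(amount: float, trades: int = 3) -> float:
    for _ in range(trades):
        amount -= FEE  # the 1:1 exchange itself leaves the amount unchanged
    return amount

print(trade_circle_flat_fee(1.0))  # 0.97: back to X, minus 3 cents
```

Called with trades=1 it gives the Schelling-fence outcome: 0.99 units of the next good around the circle, i.e. the "minus the trading penalty" result above.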
I'm not saying "any thinking being knows that utility is a function"; I'm saying that this creature with a broken brain prefers more X to less X. Instead of having a utility function, they have a system for comparing quantities of X, Y, and Z.
I was thinking they would make a comparison between what they have at the beginning and what they would have at the end, whereas it looks like you are making a chain of favorable comparisons to find your way back to X with less of it.
I'm not really sure what algorithm I would write into a robot to decide which path of comparisons to make. Maybe the shortest one (in number of comparisons) that compares the present state to a state as far in the future as the robot can predict? But this seems kind of like deducing from contradictory premises.
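For what it's worth, here is one way that rule might look in code: simulate the chain of trades out to the robot's prediction horizon, then make a single direct comparison between the present state and the furthest predicted state. Everything here (the names, the types, the trade model) is my illustration, not a worked-out proposal:

```python
from typing import Callable, Tuple

Bundle = Tuple[str, float]  # (good, amount), e.g. ("X", 1.0)

def should_start_trading(start: Bundle,
                         next_trade: Callable[[Bundle], Bundle],
                         prefers: Callable[[Bundle, Bundle], bool],
                         horizon: int) -> bool:
    """Compare the present bundle to the bundle `horizon` trades ahead.

    Deliberately avoids chaining the local comparisons (which circular
    preferences make unreliable) in favor of one direct comparison:
    start vs. predicted end.
    """
    state = start
    for _ in range(horizon):
        state = next_trade(state)
    return prefers(state, start)
```

With the 0.95-rate trades above and a horizon that is a multiple of 3, this rule refuses the first trade, since ("X", 0.857...) loses a direct more-X-is-better comparison to ("X", 1.0). But a horizon that stops mid-circle inherits all the arbitrariness of choosing which comparison to make, which is the "contradictory premises" worry.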
It looks like an example might help connect this to what I was talking about.
Imagine sqrt(X). Normally people just pick the positive square root or the negative square root - but imagine the whole thing, the parabola-turned-sideways, the thing that isn't a function.
Now. Is it a valid question to ask whether sqrt(5) is greater or less than sqrt(6)?
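To put the analogy in symbols (my notation, not anything from the thread): treat the whole sideways parabola as a relation,

```latex
S(x) = \{\, y \in \mathbb{R} : y^{2} = x \,\}, \qquad
S(5) = \{+\sqrt{5},\, -\sqrt{5}\}, \quad S(6) = \{+\sqrt{6},\, -\sqrt{6}\}
```

Then "is S(5) less than S(6)?" has no single answer: +sqrt(5) < +sqrt(6), but +sqrt(5) > -sqrt(6). Which answer you get depends on which branch you pick, just as the circular-preference creature's verdict depends on which chain of comparisons it picks.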
-
What a decision-maker with circular preferences can have is local preferences - th...