Question regarding: If you don't know the name of the game, just tell me what I mean to you
I've been thinking about what it means to prefer that someone else achieve their preferences. In particular, what happens if you and I both prefer to adopt and chase after each other's preferences to some extent? This has clear good points, like cooperation and making resources more fungible, and thus probably being more efficient and achieving more preferences overall, and clear failure modes, like "What do you want to do? -- I don't know, I want to do whatever you want to do. Repeat."
My first thought: okay, simple, I'll just define my utility function U' to be U + aV, where U was my previous utility function, V is your utility function, and a is an appropriate scaling factor as per Stuart's post, and then I can follow U'![1]
This has a couple problems. First, if you're also trying to change your actions based on what I want, there's a circular reference issue. Second, U already contains part of V by definition, or something[2].
My second thought: Fine, first we'll both factor our preferences into U = U1 + aV, where U1 is my preference without regard to what you want (yours is V = V1 + bU). Basically what I want to say is "What do you want to do, 'cause ignoring you I want burgers a bit more than Italian, which I want significantly more than sandwiches from home," and then you could say "well, ignoring you I want sandwiches more than Italian more than burgers, but it's not a big thing, so since you mean b to me, let's do Italian." It's that "ignoring you" bit that I don't know how to correctly intuit. And by intuit I mean put into math.
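To make that dialogue concrete, here's a rough sketch of what the elicitation might look like (all numbers invented, and ignoring the circularity for the moment by just adding a weighted copy of the other person's standalone ranking): I maximize U1 + aV1, you maximize V1 + bU1, and we see whether we land on the same choice.

```python
# Rough sketch of the burgers/Italian/sandwiches dialogue (all numbers made
# up). Ignoring the circularity for now, I score outcomes by U1 + a*V1 and
# you score them by V1 + b*U1, and we check whether we agree on a choice.

outcomes = ["burgers", "italian", "sandwiches"]
U1 = {"burgers": 2.8, "italian": 2.5, "sandwiches": 1.0}  # mine, ignoring you
V1 = {"burgers": 1.3, "italian": 2.2, "sandwiches": 2.5}  # yours, ignoring me

def pick(own, others, weight):
    """Argmax of own + weight * others over the shared outcome list."""
    return max(outcomes, key=lambda o: own[o] + weight * others[o])

for a, b in [(0.5, 0.5), (0.1, 0.1)]:
    mine, yours = pick(U1, V1, a), pick(V1, U1, b)
    print(f"a={a}, b={b}: I pick {mine}, you pick {yours}")
# With a = b = 0.5 we both land on italian; with a = b = 0.1 we're back to
# disagreeing (I pick burgers, you pick sandwiches), so the coefficients are
# doing the work of turning two selfish rankings into a shared decision.
```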
Assuming it means something coherent to factor U into U1 + aV, there's still a problem. Watch what happens when we remove the self-reference. First scale U and V to something you and I can agree is approximately fungible. Maybe marginal hours, maybe marginal dollars, whatever. Now U = U1 + a(V1 + bU), so U - abU = U1 + aV1, i.e. (1 - ab)U = U1 + aV1, and as long as ab < 1, you can maximize U by maximizing U1 + aV1 (the 1/(1 - ab) factor is the same for every option, so it can't change which one wins). Which sounds great, except that my intuition screams that maximizing U should depend on b. So what's up there? My guess is that somewhere I snuck a dependence on b into a...
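Here's the same toy example with the self-reference solved exactly, mostly to confirm the worrying part: for a fixed a, changing b rescales U but never changes which outcome maximizes it (made-up numbers, same caveats as above).

```python
# Solve the self-referential definitions U = U1 + a*V, V = V1 + b*U exactly,
# which gives U = (U1 + a*V1) / (1 - a*b) whenever a*b < 1 (made-up numbers).

outcomes = ["burgers", "italian", "sandwiches"]
U1 = {"burgers": 2.8, "italian": 2.5, "sandwiches": 1.0}
V1 = {"burgers": 1.3, "italian": 2.2, "sandwiches": 2.5}

def combined_U(a, b):
    """Exact fixed point of U = U1 + a*(V1 + b*U), per outcome (needs a*b < 1)."""
    assert a * b < 1
    return {o: (U1[o] + a * V1[o]) / (1 - a * b) for o in outcomes}

a = 0.5
for b in (0.0, 0.3, 0.9):
    U = combined_U(a, b)
    print(f"b={b}: argmax={max(outcomes, key=U.get)}, U={U}")
# Every entry of U scales up by 1/(1 - a*b) as b grows, but the ranking (and
# so the argmax) never changes: b rescales U without reordering the outcomes.
```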
(I like that the ab < 1 constraint appears... intuitively I think it should mean that if we both try to care too much about what the other person wants, neither of us will get anywhere making a decision. "I don't know, what do you want to do?" In general if no one ever lets a >= 1 then things should converge.)
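And a throwaway simulation of that loop (again my own toy construction, not anything principled): each round we each re-derive our utility from the other's latest one, and it settles to the fixed point above exactly when ab < 1.

```python
# Simulate the "what do you want to do?" loop: start from the "ignoring you"
# utilities and repeatedly fold in the other person's latest utility. Two
# rounds compose to U <- (U1 + a*V1) + a*b*U, so it converges iff a*b < 1.

def defer(U1, V1, a, b, rounds=50):
    U, V = dict(U1), dict(V1)
    for _ in range(rounds):
        U, V = ({o: U1[o] + a * V[o] for o in U1},
                {o: V1[o] + b * U[o] for o in V1})
    return U, V

U1 = {"burgers": 2.8, "italian": 2.5, "sandwiches": 1.0}
V1 = {"burgers": 1.3, "italian": 2.2, "sandwiches": 2.5}

U_ok, _ = defer(U1, V1, a=0.5, b=0.5)    # a*b = 0.25 < 1: converges
U_bad, _ = defer(U1, V1, a=1.2, b=1.0)   # a*b = 1.2 >= 1: grows without bound
print({o: round(u, 3) for o, u in U_ok.items()})   # matches (U1 + 0.5*V1)/0.75
print({o: round(u, 1) for o, u in U_bad.items()})  # still growing, no fixed point
```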
I feel like the obvious next step is to list some simple outcomes and play pretend with two people trying to care about each other and fake-elicit their preferences and translate that into utility functions and just check to see how those functions factor. But I've felt like that for a week and haven't done it yet, so here's what I've got.
[1] Of course I know humans don't work like this, I just want the math.
2"Or something" means I have an idea that sounds maybe right but it's pretty hand-wavy and maybe completely wrong and I certainly can't or don't want to formalize it.
There's something a bit odd about the formulation U = U1 + aV = U1 + aV1 + abU = U1 + aV1 + abU1 + ... The term abU1 amplifies your "autologous" utility U1 by adding the value you place on the value the other gets from knowing that you are getting U1. And there will be additional terms ababU1, abababU1, etc., like a series of reflections in a pair of mirrors. If ab is close to 1 then both of your autologous utilities get hugely amplified. (BTW, this is where the dependence on b shows up: the larger b is, the greater the utility you get over U1 + aV1, by a factor of 1/(1-ab).)
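For completeness, writing the whole series out (each pass through the mirrors multiplies everything accumulated so far, the aV1 terms included, by ab):

```latex
\begin{align*}
U &= U_1 + aV = U_1 + a(V_1 + bU) = (U_1 + aV_1) + abU \\
  &= (U_1 + aV_1) + ab(U_1 + aV_1) + (ab)^2 U = \dots \\
  &= (U_1 + aV_1)\bigl(1 + ab + (ab)^2 + \dots\bigr)
   = \frac{U_1 + aV_1}{1 - ab} \qquad (ab < 1).
\end{align*}
```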
Would U = U1 + aV1, V = V1 + bU1 be more realistic? You're still trying to maximise U1 + aV1, but without the echo chamber of multiple orders of vicarious utility.
Or you could carry it on one term further, allowing two orders of vicarious utility: U = U1 + aV1 + abU1 = (1+ab)U1 + aV1, and V = (1+ab)V1 + bU1.
I am not sure there is a principled way to decide among these.
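One crude way to compare them: run all three on some made-up numbers and see when they disagree. A quick sketch (nothing principled here either):

```python
# Compare the three formulations on made-up numbers, just checking when they
# can recommend different outcomes.

outcomes = ["burgers", "italian", "sandwiches"]
U1 = {"burgers": 2.8, "italian": 2.5, "sandwiches": 1.0}
V1 = {"burgers": 1.3, "italian": 2.2, "sandwiches": 2.5}

def best(score):
    return max(outcomes, key=score)

a, b = 0.4, 0.9
full      = best(lambda o: (U1[o] + a * V1[o]) / (1 - a * b))  # U = U1 + aV, V = V1 + bU
one_order = best(lambda o: U1[o] + a * V1[o])                  # U = U1 + aV1
two_order = best(lambda o: (1 + a * b) * U1[o] + a * V1[o])    # U = (1+ab)U1 + aV1
print(full, one_order, two_order)
# The full and one-order versions always agree (they differ by the positive
# factor 1/(1-ab)), but the two-order version weights U1 more heavily and can
# flip the choice: here it picks burgers while the other two pick italian.
```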