Yep, I see what you mean; I've changed the setup back to what you wrote with V_1 and V_0. My main concern is the part where we quotient V_1 by an equivalence relation to get V: I found this not super intuitive to follow, and I'd ideally love a simpler way to express it.
The main part I don't get right now: I see that 1/(c(v^+ + w^-)) * (v^+ + w^-) and 1/(c(v^+ + w^-)) * (v^- + w^+) are convex combinations of elements of L and are therefore in L. However, it seems to me that these two things being the same corresponds to v^+ + w^- = v^- + w^+, which is equivalent to v ...
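Spelling out the step I mean, assuming the scalar c(v^+ + w^-) is strictly positive (so that multiplying by 1/c(v^+ + w^-) can be cancelled from both sides):

$$\frac{1}{c(v^+ + w^-)}\,(v^+ + w^-) \;=\; \frac{1}{c(v^+ + w^-)}\,(v^- + w^+) \quad\Longleftrightarrow\quad v^+ + w^- = v^- + w^+.$$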
...You recognise this in the post and so set things up as follows: a non-myopic optimiser decides the preferences of a myopic agent. But this means your argument doesn't vindicate coherence arguments as traditionally conceived. Per my understanding, the conclusion of coherence arguments was supposed to be: you can't rely on advanced agents not to act like expected-utility-maximisers, because even if these agents start off not acting like EUMs, they'll recognise that acting like an EUM is the only way to avoid pursuing dominated strategies. I think that's false...
Epistemic Status: Really unsure about a lot of this.
It's not clear to me that the randomization method here is sufficient to satisfy the condition of not missing out on sure gains with probability 1.
Scenario: B is preferred to A, but there is a preference gap between A & C and between B & C, as in the post.
Suppose both your subagents agree that the only trades that will ever be offered are A->C and C->B. These trades occur with a Poisson distribution, with λ = 1 for the first trade and λ = 3 for the second. Any trade that is offered must be immedia...
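To make the setup concrete, here's a rough Monte Carlo sketch of how I'm picturing it. The unit time horizon, the coin-flip policy for gap-crossing trades, and the particular "missed a sure gain" check are my own illustrative assumptions, not anything from the post:

```python
import random

def poisson_arrival_times(rate, horizon=1.0):
    """Arrival times of a Poisson process with the given rate on [0, horizon]."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

def simulate_once(p_accept_gap=0.5):
    # Offers of A->C arrive at rate 1 and offers of C->B at rate 3.
    offers = [(t, "A->C") for t in poisson_arrival_times(1.0)]
    offers += [(t, "C->B") for t in poisson_arrival_times(3.0)]
    offers.sort()

    state = "A"
    seen_AC_offer = False
    could_have_reached_B = False
    for _, trade in offers:
        if trade == "A->C":
            seen_AC_offer = True
            # A vs C is a preference gap, so the agent flips a coin.
            if state == "A" and random.random() < p_accept_gap:
                state = "C"
        else:  # "C->B"
            if seen_AC_offer:
                # Accepting both offers in time order would have reached B.
                could_have_reached_B = True
            # B vs C is also a preference gap, so the agent flips a coin.
            if state == "C" and random.random() < p_accept_gap:
                state = "B"

    # Count it as missing a sure gain if the agent ends at A even though a
    # path to B (which it strictly prefers to A) was available.
    return state == "A" and could_have_reached_B

N = 100_000
missed = sum(simulate_once() for _ in range(N))
print(f"Fraction of runs that end at A despite a path to B: {missed / N:.3f}")
```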
Something I have a vague inkling about, based on what you and Scott have written, is that the same method by which we can rescue the Completeness axiom, i.e. via contracts/commitments, may also doom the Independence axiom. As in, you can have one of them (under certain premises) but not both?
This may follow rather trivially from the post I linked above, so it may just come back to whether that post is 'correct', but it might also be a question of trying to marry/reconcile these two frameworks by some means. I'm hoping to do some research on this area in the next few weeks; let me know if you think it's a dead end, I guess!
Really enjoyed this post. My question is: how does this intersect with issues stemming from other VNM axioms, e.g. Independence, as referenced by Scott Garrabrant?
https://www.lesswrong.com/s/4hmf7rdfuXDJkxhfg/p/Xht9swezkGZLAxBrd
It seems to me that you don't get expected utility maximizers solely from not-strong-Incompleteness, as there are other conditions that are necessary to support that conclusion.
Furthermore, human values are over the “true” values of the latents, not our estimates - e.g. I want other people to actually be happy, not just to look-to-me like they’re happy.
I'm not sure that I'm convinced of this. I think when we say we value reality over our perception, it's because we have no faith that our perception will stay optimistically detached from reality. If I think about how I want my friends to be happy, not just appear happy to me, it's because of a built-in assumption that if they appear happy to me but are actually depressed, the illusion ...
Would it perhaps be helpful to think of agent-like behavior as that which takes abstractions as inputs, rather than only raw physical inputs? For example, an inanimate object such as a rock only interacts with the world on the level of matter, not on the level of abstraction. A rock is affected by wind currents according to the same laws regardless of the type of wind (breeze, tornado, hurricane), while an agent may take different actions or assume different states depending on the abstractions the wind has been reduced to in its world model.
Example One: Good Cop / Bad Cop
The classic interrogation trope involves a suspect being interrogated by two police officers: one who is friendly and offers to help the suspect, and one who is aggressive and threatens them with the consequences of not cooperating. We can think of the two cops as components of the Police Agent, pursuing two goals: Trust and Fear. Both Trust and Fear are sub-goals of the ultimate goal, which is likely a confession, a plea deal, or something of that nature, but the officers have uncertainty about how ...
Sorry if this is a stupid question, but is it true that ρ(X,Y)^2 has the degrees of freedom you described? If X = Y is a uniform variable on [0,1], then ρ(X,Y)^2 = 1, but ρ(f(X), g(Y))^2 ≠ 1 for (most) non-linear f and g.
In other words, I thought Pearson correlation is specifically for linear relationships, so it's not invariant under non-linear transformations.
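A quick numerical check of what I mean (the particular transformations x^2, exp, and (y - 0.5)^2 are just arbitrary examples I picked):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)
y = x.copy()  # X = Y, uniform on [0, 1]

def r2(a, b):
    """Squared Pearson correlation of two samples."""
    return np.corrcoef(a, b)[0, 1] ** 2

print(r2(x, y))                  # ~1.0: X and Y are identical
print(r2(x ** 2, np.exp(y)))     # a bit below 1: monotone but non-linear f, g
print(r2(x, (y - 0.5) ** 2))     # ~0: a non-linear g can destroy the correlation entirely
```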
I've tried to give (see it on the post) a different description of an equivalence relation that I find intuitive and that I think gives the space V we want, but it may not be fully correct.