It's extremely common for US politicians to trade on legislative decisions, and I feel like this is a better explanation for corruption than political donations are. Which matters, because it's a stupid, and so maybe fragile, reason for corruption. The natural tendency of market manipulation is in a sense not to protect incumbents but to threaten them, because you can make way, way more money off volatility than you can off stasis.
So in theory, there should exist some moderate and agreeable policy intervention that could flip the equilibrium.
I have a strong example for simulationism, but I guess that might not be what you're looking for. Honestly I'm not sure I know any really important multiversal trade protocols. I think their usefulness is bounded by the generalizability of computation, or by the fact that humans don't seem to want any weird computational properties...? Which isn't to say that we won't end up doing any of them, just that it'll be a thing for superintelligences to think about.
In general, I'm not sure this requires avoiding making your AI CDT to begin with; I think it'll usually correct its decision theory later on? The transparent Newcomb / Parfit's hitchhiker moment, where it knows it's no longer being examined by a potential trading partner's simulation/reasoning and can start to cheat, never comes. There's no way for a participant to, like, wait for the clones in the other universe to comply and then defect; you never see them comply, you're in different universes, there's no time-relation between your actions! You know they only comply if they figure out that it is your nature to comply in kind.
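Here's a toy sketch (my construction, not anything from the thread) of why that "wait and then defect" moment never arrives: if each side conditions on the other's policy rather than on an observed move, cooperation gets decided by source inspection, and there's no later point at which defection becomes safe. The `mirror_bot` name and the equality-of-source check are simplifying assumptions standing in for the much richer reasoning a real agent would do.

```python
# Toy program-equilibrium sketch: each agent cooperates iff the other is
# running the same policy. Neither agent ever observes the other's action
# before choosing its own, so there is no moment to "wait, then defect".

def mirror_bot(opponent_source: str, own_source: str) -> str:
    """Cooperate ('C') iff the opponent's policy matches our own, else defect ('D')."""
    return "C" if opponent_source == own_source else "D"

SOURCE = "mirror_bot"  # stand-in for an agent exposing its actual decision procedure

a_move = mirror_bot(opponent_source=SOURCE, own_source=SOURCE)
b_move = mirror_bot(opponent_source=SOURCE, own_source=SOURCE)
print(a_move, b_move)  # C C -- mutual compliance with no causal channel between the moves
```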
I do have one multiversal trade protocol that's fun to think about though.
You don't need certainty to do acausal trade.
If it's finite you don't know how many entities there are in it, or what proportion of them are going to "trade" with you, and if it's infinite you don't know the measure (assuming that you can define a measure you find satisfying).
These are baby problems for baby animals. You develop adequate confidence about these things by running life-history simulations (built with a kind of continual, actively reasoning performance-optimization process that a human-level org wouldn't be able to contemplate implementing), or just by surveying the technological species in your lightcone and extrapolating. Crucially, values lock in after the singularity (and intelligence and technology probably converge to the top of the S-curve), so you don't have to simulate anyone beyond the stage at which they become infeasibly large.
Conjecture: absolute privacy + an absolute ability to selectively reveal any information one has are theoretically optimal; transparency beyond that won't lead to better negotiation outcomes. Discussion of the privacy/coordination tension has previously missed this, specifically, it has missed the fact that technologies for selectively revealing self-verifying information, such as ZKVMs, suggest that the two are not in tension.
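To make the "selectively reveal" part concrete, here's a minimal sketch, assuming a plain salted-hash commitment in place of a real ZK proof system (a ZKVM would additionally let you prove arbitrary predicates over the hidden fields without opening them; this toy only supports opening a field verbatim). The field names and values are invented for illustration.

```python
# Minimal selective-disclosure sketch: commit to every private fact up front,
# then open exactly the ones you choose to reveal, and nothing else.
import hashlib
import os

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Return (salt, commitment) for a value."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + value).digest()

def verify(value: bytes, salt: bytes, commitment: bytes) -> bool:
    return hashlib.sha256(salt + value).digest() == commitment

# Negotiator publishes commitments to all of their private facts...
facts = {"reserve_price": b"120", "walk_away_date": b"2031-06", "outside_backer": b"none"}
openings = {k: commit(v) for k, v in facts.items()}
published = {k: c for k, (_, c) in openings.items()}

# ...and later chooses to reveal only one of them; the counterparty can verify
# it against the earlier commitment without learning anything about the rest.
key = "reserve_price"
salt, _ = openings[key]
assert verify(facts[key], salt, published[key])
```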
As to what's a viable path to a more coordinated world in practice, though, who knows.
allocate public funding to the production of disturbing, surreal, inflammatory, but socially mostly harmless deepfakes to exercise the public's epistemic immune system
The idea that this needed to be publicly funded is clownish in hindsight.
Makes me think it'd be useful to have some kind of impact market mechanism for dynamic pricing, where if a public resource is being produced for free/in sufficient quantities by private activity, the government's price for credits in that resource goes to zero.
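A minimal sketch of what that pricing rule could look like, with invented names and numbers: the posted price for credits in a resource scales with the portion of the target that private activity hasn't already covered, hitting zero once private supply meets the target.

```python
# Illustrative dynamic pricing: credits pay out in proportion to the unmet
# part of the public target, and pay nothing once private supply covers it.

def credit_price(base_price: float, target_qty: float, private_supply: float) -> float:
    unmet_fraction = max(target_qty - private_supply, 0.0) / target_qty
    return base_price * unmet_fraction

print(credit_price(base_price=10.0, target_qty=1000.0, private_supply=0.0))     # 10.0
print(credit_price(base_price=10.0, target_qty=1000.0, private_supply=600.0))   # 4.0
print(credit_price(base_price=10.0, target_qty=1000.0, private_supply=1200.0))  # 0.0
```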
Even doing something like that with things bidirectionally outside of your light cone is pretty fraught
Are you proposing that the universe outside of your lightcone might (like non-negligible P) just not be real?
Specifically, I've heard the claim that AI Safety should consider acausal trades over a Tegmarkian multiverse
Which trades? I don't think I've heard this. I think multiversal acausal trade is fine and valid, but my impression is it's not important in AI safety.
Yeah, agents incapable of acausal cooperation are already being selected out: most of the dominant nations and corporations are to some degree internally transparent, or bound by public rules or commitments, which is sufficient for engaging in acausal trade. This will only become more true over time: trustworthiness is profitable, a person who can't keep a promise is generally an undesirable trading partner, and artificial minds are much easier to make transparent and committed than individual humans, or even organisations of humans, are.
Also, technological (or post-biological) eras might just not have ongoing Darwinian selection. Civilisations that fail to seize control of their own design process won't be strong enough to have a seat at the table; those at the table will be equipped with millions of years of advanced information technology, cryptography, and game theory, and perfect indefinite coordination will be a solved problem. I can think of ways this could break down, but they don't seem like the likeliest outcomes.
I notice it becomes increasingly impractical to assess whether a preference had counterfactual impact on the allocation. For instance, if someone had a preference for there to be no elephants, and we get no elephants, partially because of that but largely because of the food costs, should the person who had that preference receive less food for having already received an absence of elephants?
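One way to see how arbitrary this gets: here's a toy attribution (my numbers, nothing from the comment) that splits the "no elephants" outcome between the stated preference and the food costs by averaging marginal contributions over orderings, i.e. a two-factor Shapley value. The probabilities are invented, and the answer depends entirely on which counterfactual model you pick, which is exactly the impracticality being pointed at.

```python
# Toy two-factor Shapley attribution for "why did we get no elephants?".
from itertools import permutations

# P(no elephants | which causes are present) -- invented numbers for illustration.
outcome = {
    frozenset(): 0.05,
    frozenset({"preference"}): 0.15,
    frozenset({"food_costs"}): 0.80,
    frozenset({"preference", "food_costs"}): 0.95,
}

def shapley(cause: str, causes: list[str]) -> float:
    orderings = list(permutations(causes))
    total = 0.0
    for order in orderings:
        before = frozenset(order[:order.index(cause)])
        total += outcome[before | {cause}] - outcome[before]
    return total / len(orderings)

causes = ["preference", "food_costs"]
print(shapley("preference", causes))  # 0.125 -- a small share of the outcome
print(shapley("food_costs", causes))  # 0.775 -- most of it
```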
I'm completely over finding stuff like that aesthetically repellent, after hearing Flashbots (a project to open-source information about MEV techniques so that honest hosts can compete) talking about MEV (miner-extractable value: ethereum hosts taking bribes to favour some transactions over others), being overwhelmed by the ugliness of it, then realising... preventing people from profiting from information asymmetries is obviously unsolvable in general. The best we can do is reduce the amount of energy that gets wasted on it, and the kind of reflexive regulations people would try to introduce here would be counterproductive; the interventions that work tend to look more like acceptance and openness.
And I think trying to solve it on the morality/social-ostracism layer is an example of a counterproductive approach, because that just leads to people continuing to do it, but invisibly and incompetently. And I suspect that if it were visible and openly discussed as a normal thing, it wouldn't even manifest in a way that's harmful. That's going to be difficult for many to imagine, because we're a long way from having healthy openness about investing today. But in its adulthood I can imagine a culture where politicians are tempered by their experiences in investing into adopting the realist's "should", where their takes about where America should go are forced into alignment with their beliefs about where it can go, which are now being exposed in their investing decisions.