Update:
I went and read the background material on acausal trade and narrowed down further exactly where I'm confused. It's this paragraph:
> Another objection: Can an agent care about (have a utility function that takes into account) entities with which it can never interact, and about whose existence it is not certain? However, this is quite common even for humans today. We care about the suffering of other people in faraway lands about whom we know next to nothing. We are even disturbed by the suffering of long-dead historical people, and wish that, counterfactually, the suffering had not happened. We even care about entities that we are not sure exist. For example: We might be concerned by a news report that a valuable archaeological artifact was destroyed in a distant country, yet at the same time read other news reports stating that the entire story is a fabrication and the artifact never existed. People even get emotionally attached to the fate of a fictional character.
My problem is the lack of evidence that genuine caring about entities with which one can never interact really is "quite common even for humans today", once you factor out indirect benefits/costs and social signalling.
How common, sincerely felt, and motivating should caring about such entities be for acausal trade to work?
Can you still use acausal trade to resolve various game-theory scenarios with agents whom you might later contact, while putting zero priority on agents that are completely causally disconnected from you? If so, why so much emphasis on permanently un-contactable agents? What does it add?
> Acausally separate civilizations should obtain our consent in some fashion before invading our local causal environment with copies of themselves or other memes or artifacts.
Aha! Finally, there it is, a statement that exemplifies much of what I find confusing about acausal decision theory.
1. What are acausally separate civilizations? Are these civilizations we cannot directly talk to, so we model their utility functions (and their modelling of our utility functions, etc.) and treat that as a proxy for interviewing them?
2. Are these civilizations we haven't met yet but might someday, or ones that are impossible for us to meet even in theory (parallel universes, far future, far past, outside our Hubble volume, etc.)? Other acausal material I've read seems to imply the latter, in which case...
2a. If I don't care what civilizations do (to include "simulating" me) unless it's possible for me or people I care about to someday meet them, do I have any reason to care about acausal trade?
3. Can you give any specific examples of what it would be like for an acausally separate civilization to invade our local causal environment which do NOT depend in any way on simulations?
4. I heard that acausal decision theory has practical applications in geopolitics, though unfortunately without any real-world examples. Do you know any concrete examples of using acausal trade or acausal norms to improve outcomes when dealing with ordinary physical people with whom you cannot directly communicate?
I realize you probably have better things to do than educating an individual noob about something that seems to be common knowledge on LW. For what it's worth, I might be representative of a larger group of people who are open to the idea of acausal decision theory but who cannot understand existing explanations. You seem like an especially down-to-earth and accessible proponent of acausal decision theory, and you seem to care about it enough to have written extensively about it. So if you can help me bridge the gap to fully getting what it's about, it may help both of us become better at explaining it to a wider audience.
What is meant by 'reflecting'?
- reflecting on {reflecting on whether to obey norm x, and if that checks out, obeying norm x} and if that checks out, obeying norm x
Is this the same thing as saying "Before I think about whether to obey norm x, I will think about whether it's worth thinking about it and if both are true, I will obey norm x"?
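If it helps to pin down my reading, here is the nesting written out as code. This is only my paraphrase of the bullet above; `norm_checks_out` and `worth_deliberating` are hypothetical placeholder predicates standing in for the real deliberation:

```python
# Hypothetical placeholders for the actual deliberation steps:
def norm_checks_out(norm):
    return True  # "obeying norm x checks out"

def worth_deliberating(norm, depth):
    return True  # "it's worth running the next level of deliberation"

def obey_if_endorsed(norm, depth):
    """My reading of the nested reflection: at each level, first check whether
    the next-lower level of deliberation checks out, and only then run it.
    At depth 0, evaluate the norm directly."""
    if depth == 0:
        return norm_checks_out(norm)
    if worth_deliberating(norm, depth - 1):
        return obey_if_endorsed(norm, depth - 1)
    return False

# Two levels of nesting, matching the quoted bullet:
print(obey_if_endorsed("norm x", depth=2))
```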
I've been struggling to understand acausal trade and related concepts for a long time. Thank you for a concise and simple explanation that almost gets me there, I think...
Am I roughly correct in the following interpretation of what I think you are saying?
Acausal norms amount to extrapolating the norms of people/aliens/AIs/whatever whom we haven't met yet and know nothing about, other than what can be inferred from the possibility of someday meeting them. If we can identify norms that are likely to generalize to any intelligent being capable of contact and negotiation, and that are not contingent on any specific culture/biology/happenstance, then we can pre-emptively obey those norms to maximize the probability of a good outcome when we do meet these people/aliens/AIs/whatever?
Would you mind sharing how you chose the allocation ratio between these positions?
Maybe the key is not to assume the entire economy will win, but to make some attempt to distinguish winners from losers and then find ETFs and other instruments that approximate those sectors.
So, some wild guesses...
As the effects ripple out and more and more workers are displaced...
Though what I really would like to do is create some sort of rough model of an individual non-AI company with the following parameters:
...and then be able to make a principled guess about where on the AI-winners vs AI-losers spectrum a given company is. I even started sketching out a model like this until I realized that someone with relevant expertise must have already written a general-purpose model of this sort and I should find it and adapt it to the AI-automation scenario instead of making up my own.
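To give a flavor of what I had in mind, here is a toy version of such a model. Every parameter name and weight below is a made-up illustration, not a claim about what the right inputs are:

```python
def ai_exposure_score(labor_cost_share, task_automatability,
                      moat_strength, ai_adoption_capacity):
    """Toy linear score for where a non-AI company might sit on the
    AI-winners vs AI-losers spectrum. All inputs are rough guesses in [0, 1];
    positive output suggests AI-winner, negative suggests AI-loser."""
    # Upside: automatable labor cost the company can actually capture as savings.
    upside = labor_cost_share * task_automatability * ai_adoption_capacity
    # Downside: automatable work plus a weak moat invites AI-native competitors.
    downside = task_automatability * (1.0 - moat_strength)
    return upside - downside

# A labor-heavy services firm with a weak moat and little capacity to adopt AI:
print(ai_exposure_score(0.6, 0.7, 0.2, 0.3))  # negative -> likely AI-loser
# The same cost structure with a strong moat and aggressive AI adoption:
print(ai_exposure_score(0.6, 0.7, 0.9, 0.9))  # positive -> likely AI-winner
```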
I'm trying out this strategy on Investopedia's simulator (https://www.investopedia.com/simulator/trade/options).
The January 15 2027 call options on QQQ look like this as of posting (current price 481.48):
Strike | Black-Scholes | Ask |
---|---|---|
485 | 64.244 | 77.4 |
500 | 57.796 | 69.83 |
... | ... | ... |
675 | 14.308 | 14 |
680 | 13.693 | 13.5 |
685 | 13.077 | 12.49 |
... | ... | ... |
700 | 11.446 | 10.5 |
... | ... | ... |
720 | 9.702 | 8.5 |
So, if you were following this strategy and buying today, would you buy 485 because it has the lowest out-of-the-money strike price? Would you buy 675 because it's the lowest strike price where the ask is lower than the theoretical Black-Scholes fair price? Would you go for 720 because it's the cheapest available? Would you look for the out-of-the-money option with the largest difference between the Black-Scholes price and the ask?
What would be your thought process? I'm definitely hoping to hear from @lc but am interested in hearing from anybody who found this line of reasoning worth investigating and has opinions about it.
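In case anyone wants to sanity-check the Black-Scholes column, here is a minimal sketch of the standard formula (European call, no dividends). The time to expiry, risk-free rate, and volatility below are placeholder guesses, so the outputs won't exactly reproduce the table above:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(spot, strike, t_years, rate, vol):
    """Standard Black-Scholes price of a European call on a non-dividend asset."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    N = NormalDist().cdf  # standard normal CDF
    return spot * N(d1) - strike * exp(-rate * t_years) * N(d2)

# Spot price from the table; t_years, rate, and vol are placeholder guesses.
for strike in (485, 675, 720):
    print(strike, round(bs_call(481.48, strike, t_years=2.0, rate=0.04, vol=0.20), 2))
```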
So, how can we improve this further?
Some things I'm going to look into, please tell me if it's a waste of time:
A risk I see is China blockading Taiwan and/or limiting trade with the US, thus slowing AI development until a new equilibrium is reached through onshoring (and maybe recycling, or novel sources of materials, or something?).
On the other hand, maybe even the current LLMs already have the potential to eliminate millions of jobs, and it's just going to take companies a while to do the planning and integration work necessary to actually do it.
So one question is, will the resulting increase in revenue offset the revenue losses from a proxy war with China?
I always found that aspect weak. It is clearly and sadly evident that utility pessimization (I assume roughly synonymous with coercion?) is effective and stable, both on Golarion and Earth. Yet half the book seems to be gesturing at what a suboptimal strategy it is without actually spelling out how you can defeat an agent who pursues such a strategy (without having magic and some sort of mysterious meta-gods on your side).