Maybe the key is not to assume the entire economy will win, but to make some attempt to distinguish winners from losers and then find ETFs and other instruments that approximate those sectors.
So, some wild guesses...
As the effects ripple out and more and more workers are displaced...
Though what I really would like to do is create some sort of rough model of an individual non-AI company with the following parameters:
...and then be able to make a principled guess about where on the AI-winners vs AI-losers spectrum a given company is. I even started sketching out a model like this until I realized that someone with relevant expertise must have already written a general-purpose model of this sort and I should find it and adapt it to the AI-automation scenario instead of making up my own.
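In the meantime, here's a minimal sketch of the shape such a model could take. To be clear, the parameters below are illustrative placeholders I made up for this comment, not a vetted framework (and not necessarily the parameter list I'd actually use):

```python
from dataclasses import dataclass

@dataclass
class Company:
    labor_cost_share: float   # fraction of total costs paid as wages
    automatable_share: float  # fraction of those wages AI could plausibly replace
    revenue_at_risk: float    # fraction of revenue AI products could substitute away
    competition: float        # 0..1: how quickly cost savings get competed away

def ai_winner_score(c: Company) -> float:
    """Crude score: cost savings the firm keeps, minus revenue it stands
    to lose to AI substitutes. Positive suggests winner, negative loser."""
    savings = c.labor_cost_share * c.automatable_share
    kept = savings * (1 - c.competition)
    return kept - c.revenue_at_risk

# e.g. a staffing agency: large automatable wage bill, but its core
# product (human labor) is itself at risk:
print(ai_winner_score(Company(0.7, 0.6, 0.8, 0.5)))  # ~ -0.59 -> likely loser
```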
I'm trying out this strategy on Investopedia's simulator (https://www.investopedia.com/simulator/trade/options)
The January 15, 2027 call options on QQQ look like this as of posting (current price 481.48):

Strike | Black-Scholes fair value | Ask
---|---|---
485 | 64.244 | 77.4 |
500 | 57.796 | 69.83 |
... | ... | ... |
675 | 14.308 | 14 |
680 | 13.693 | 13.5 |
685 | 13.077 | 12.49 |
... | ... | ... |
700 | 11.446 | 10.5 |
... | ... | ... |
720 | 9.702 | 8.5 |
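(For anyone who wants to sanity-check the Black-Scholes column, here's a minimal pricer. I don't know exactly what inputs the simulator uses, so the volatility, rate, and time-to-expiry in the example call are guesses on my part:)

```python
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(S, K, T, r, sigma, q=0.0):
    """Black-Scholes price of a European call.
    S: spot, K: strike, T: years to expiry,
    r: risk-free rate, sigma: implied volatility, q: dividend yield."""
    d1 = (log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Guessed inputs; gives ~13, in the ballpark of the 675 row above,
# but the table's actual vol/rate/expiry inputs are unknown to me:
print(bs_call(S=481.48, K=675, T=1.2, r=0.045, sigma=0.26))
```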
So, if you were following this strategy and buying today, would you buy 485 because it has the lowest out-of-the-money strike price? Would you buy 675 because it's the lowest strike where the ask is below the theoretical Black-Scholes fair price? Would you go for 720 because it's the cheapest available? Would you look for the out-of-the-money option with the largest gap between the Black-Scholes price and the ask?
What would be your thought process? I'm definitely hoping to hear from @lc but am interested in hearing from anybody who found this line of reasoning worth investigating and has opinions about it.
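To make the tradeoff concrete for myself, I've been comparing hypothetical per-share payoffs under an arbitrary, optimistic price target (800 here is just a number I picked, not a forecast):

```python
def call_outcome(strike, ask, target):
    """Net profit per share and return multiple on premium paid, if the
    underlying ends at `target` at expiry (ignoring fees and taxes)."""
    profit = max(target - strike, 0) - ask
    return profit, profit / ask

for strike, ask in [(485, 77.40), (675, 14.00), (720, 8.50)]:
    profit, mult = call_outcome(strike, ask, target=800)
    print(f"strike {strike}: {profit:+8.2f}/share, {mult:+.0%} on premium")
```

Deeper out-of-the-money multiplies the upside if the bull case lands (~+840% at 720 vs ~+310% at 485 under this target), but of course expires worthless otherwise.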
So, how can we improve this further?
Some things I'm going to look into, please tell me if it's a waste of time:
A risk I see is China blockading Taiwan and/or limiting trade with the US, thus slowing AI development until a new equilibrium is reached through onshoring (and maybe recycling, or novel sources of materials, or something?).
On the other hand, maybe even current LLMs already have the potential to eliminate millions of jobs, and it's just going to take companies a while to do the planning and integration work necessary to actually do it.
So one question is, will the resulting increase in revenue offset the revenue losses from a proxy war with China?
I guess I mean scenarios where humans occupy a niche analogous to that of animals we don't value but either cannot or choose not to exterminate.
Parfit's Hitchhiker and transparent Newcomb: So is the interest in UDT motivated by the desire for a rigorous theory that explains human moral intuitions? Like, it's not enough that feelings of reciprocity must have conveyed a selective advantage at the population level; we need to know whether/how they are also net beneficial to the individuals involved?
What should one do if in a Newcomb's paradox situation but Omega is just a regular dude who thinks they can predict what you will choose, by analysing data from thousands of experiments on e.g. Mechanical Turk?
Do UDT and CDT differ in this case? If they differ, does it depend on how inaccurate Omega's predictions are and in what direction they are biased?
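To partially answer my own question: the naive evidential expected-value calculation (which, as I understand it, is not a full UDT treatment) does depend on the accuracy. With the standard $1,000,000 / $1,000 payoffs, one-boxing wins whenever the predictor is right just over half the time:

```python
def newcomb_evs(p, big=1_000_000, small=1_000):
    """Expected values when the predictor is right with probability p,
    treating the prediction as correlated with your actual choice.
    (CDT ignores this correlation and always two-boxes.)"""
    ev_one_box = p * big                # opaque box is full iff predicted correctly
    ev_two_box = (1 - p) * big + small  # full only if the predictor got you wrong
    return ev_one_box, ev_two_box

for p in (0.5, 0.5005, 0.6, 0.9):      # breakeven: p = (big + small) / (2 * big)
    print(p, newcomb_evs(p))
```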
Thank you for answering.
I'm excluding simulations by construction.
Amnesia: So does UDT, roughly speaking, direct you to weigh your decisions based on your guesstimate of what decision-relevant facts apply in each scenario? And then choose among available options randomly, but weighted by how likely each option is to be optimal in whatever scenario you have actually found yourself in?
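My (possibly wrong) mental model here is the classic absent-minded driver problem: you optimize over whole policies up front, weighted by your credences about which decision point you're at, rather than deciding fresh each time:

```python
import numpy as np

# Classic absent-minded driver: exit at the first intersection -> payoff 0,
# exit at the second -> 4, continue past both -> 1. Amnesia makes the two
# intersections indistinguishable, so one exit probability q covers both.
q = np.linspace(0, 1, 1001)
expected = (1 - q) * q * 4 + (1 - q) ** 2 * 1
print(f"best q = {q[np.argmax(expected)]:.3f}, payoff = {expected.max():.3f}")
# -> q = 1/3, payoff = 4/3: the randomization is chosen once, at the
#    policy level, not re-derived at each (indistinguishable) decision point.
```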
Identical copies (non-identical but very similar players?), players with aligned interests: I guess this is a special case of dealing with a predictor agent, where our predictions of each other's decisions are likely enough to be accurate that they should be taken into account? So UDT might direct you to disregard causality because you're confident that the other party will do the same on their own initiative?
But I don't understand what this has in common with amnesia scenarios. Is it about disregarding causality?
Non-perfect predictors: Most predictors of anything as complicated as behaviour are VERY imperfect, both at the model level and the data-collection level. So wouldn't the optimal thing be to downweight your confidence in what the other player will do when deciding your own course of action? Unless you have information about how they model you, in which case you could try to predict your own behaviour from their perspective?
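Here's the kind of thing I mean, as a toy calculation (standard one-shot prisoner's dilemma payoffs; the conditional probabilities are my way of encoding how well we predict each other):

```python
def pd_evs(p_coop_if_c, p_coop_if_d, R=3, S=0, T=5, P=1):
    """Evidential EVs in a one-shot prisoner's dilemma where the other
    player's choice is (imperfectly) correlated with mine.
    p_coop_if_c = P(they cooperate | I cooperate); p_coop_if_d likewise."""
    ev_coop   = p_coop_if_c * R + (1 - p_coop_if_c) * S
    ev_defect = p_coop_if_d * T + (1 - p_coop_if_d) * P
    return ev_coop, ev_defect

print(pd_evs(1.0, 0.0))  # perfect mutual prediction: (3.0, 1.0) -> cooperate
print(pd_evs(0.6, 0.4))  # noisy prediction:          (1.8, 2.6) -> defect
```

i.e. once the predictive correlation is weak enough, the disregard-causality reasoning stops paying, which is what I mean by downweighting.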
Are there any practical applications of UDT that don't depend on uncertainty as to whether or not I am a simulation, nor on stipulating that one of the participants in a scenario is capable of predicting my decisions with perfect accuracy?
Would you mind sharing the ratio in which you allocated these positions?