Scott Alexander ended A LessWrong Crypto Autopsy with these words:

And for those of us who failed - well, the world is getting way too weird to expect there won't be similarly interesting challenges ahead in the future.

This line led me to update towards the current gameboard still being winnable, if we act creatively.
I was lying in bed this morning wondering what odd, creative actions could significantly alter the playing field -- and I remembered that I had thought of a potential one, emailed it off to an EA investing group, and then forgot about it. I never got a reply. On the other hand, the two friends I discussed it with both thought it was worth telling others about (even though one was skeptical).

Maybe it's a dumb idea, but given the upside potential if it's not, I think I should share it again, this time publicly: short-selling stocks likely to lose to A(G)I. For example, the stock of Chegg, a homework cheating site, dropped ~45% after a quarterly report stated it was losing signups to ChatGPT. I think this wouldn't have been hard to predict in advance.

Here's how I argued for this idea in the email (I sketch the arithmetic just after the excerpt):

It seems that the market, right now, is largely unaware of the impact AI will have (this comment on 'AGI and the EMH' seems true to me). If so, given that many EAs have inside views on AI's trajectory, looking for and shorting stocks likely to rapidly lose value from being outcompeted by AI could be a really effective strategy. For example, starting with $1m, it would only take successfully predicting 19 similar drops to reach above $1b (but even predicting one seems significant compared to normal return rates). Moreover, a failed prediction doesn't incur much loss in comparison: e.g., if a stock's value doesn't change or only shifts a little, then on average one roughly breaks even on failed predictions.

[...] I'm a little hesitant to post this idea publicly, because I don't really have a presence in the EA community yet, but I think that if this idea seems good to you, there should definitely be some coordinated effort among EAs to fund an attempt at this. (Another reason is that I'm not sure whether it might be slightly infohazardous, since I'd guess there are only a few years remaining before the market catches up, and posting publicly could help it catch up sooner.)

[...] Even if it only had a 10% chance of success, the expected gain/loss ratio would probably still make it worth attempting.
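
To make the compounding claim in the email concrete, here is a minimal sketch of the arithmetic. The assumptions are mine, not the email's: each short is opened with the full bankroll, each target falls about 45% (as Chegg did), gains are fully reinvested, and borrow fees, margin requirements, and taxes are ignored.

```python
# Count how many consecutive successful ~45% shorts it takes to turn $1m into $1b,
# assuming full reinvestment each time and ignoring fees, margin, and borrow costs.

def shorts_needed(start=1_000_000, target=1_000_000_000, drop=0.45):
    capital, n = start, 0
    while capital < target:
        capital *= 1 + drop  # a 45% drop yields roughly 45% on the shorted amount
        n += 1
    return n, capital

n, final = shorts_needed()
print(n, f"${final:,.0f}")  # 19 successful shorts -> roughly $1.16b
```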
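
And to illustrate the gain/loss point, here is a rough expected-value sketch. The payoff numbers are hypothetical placeholders of my own (a 10x return on success, a 30% loss on failure), not figures from the email:

```python
# Expected multiple on the stake under placeholder payoffs:
# 10% chance of a 10x return, 90% chance of losing 30% of the stake.

p_success = 0.10
multiple_if_success = 10.0   # hypothetical upside
loss_if_failure = 0.30       # hypothetical downside

expected_multiple = (p_success * multiple_if_success
                     + (1 - p_success) * (1 - loss_if_failure))
print(expected_multiple)  # 1.63 -> positive in expectation under these assumptions
```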

Here's another argument for this: most of us probably expect AI to have a large economic impact in the coming years. One obvious way to exploit this, creating an AI-based company, is still hard, because you need to outcompete other companies, including ones founded by people attempting the same thing and larger companies integrating AI into their existing businesses. In contrast, short-selling only requires predicting areas where AI will win out and identifying non-AI companies in those areas, especially ones that don't seem likely to integrate AI, or that wouldn't have an advantage over other AI companies if they did (or over existing AIs themselves, as in Chegg's case).

Please discuss, bring this up with knowledgeable friends, etc., since this could be important.

I'll continue looking for other creative actions as well. If I come up with any, I'll try not to repeat the mistake of not sharing them with the right audience from the start (in this case, I probably should have posted publicly sooner).

(A note for those who might want to cite the EMH: the EA Forum post 'AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years' and its comments have some good discussion.)

Comments (2)

How does this strategy lead to winning the gameboard? I take "winning" to mean "not dying due to AGI". It looks like the sort of strategy that might make some money[1], but has essentially zero impact on things that matter.

[1] If you are extremely experienced in investing, can avoid all the associated risks that have nothing to do with the company's performance, and can time everything just right.

AI safety is funding-constrained; we win in more timelines if there are a bunch of people successfully investing to give.