As my timelines have been shortening, I've been rethinking my priorities, as have many of my colleagues. It occurs to us that there are probably general considerations that should cause us to weight towards short-timelines plans or towards long-timelines plans (besides, of course, the probability of short vs. long timelines). For example, if timelines are short then maybe AI safety is more neglected, and therefore higher-EV for me to work on, so maybe I should be systematically more inclined to act as if timelines are short.
We are at this point very unsure what the most important considerations are, and how they balance. So I'm polling the hive mind!
I think it's time to commit to a particular "we". Let's talk about you, and I'll throw in some of my personal considerations which may or may not generalize.
The existence/nonexistence of warning shots is probably not in your control, unless I'm missing something. What is the thing within your control which is different in these two worlds?
For me, I think I'm a hell of a lot better at insights and new paradigms than the deep learning crowd, so far and away the most influence I'm likely to have is in the scenario where a new insight leads to fast takeoff, or at least a large advantage in a slow-takeoff world. I expect that finding the magic insight first is more tractable than moving a social/economic equilibrium.
(More generally, I think solving technical problems is a lot easier than moving social/economic equilibria, and "transform the social problem into a technical one" is a useful general-purpose technique.)
I'm gonna have to be a little bit rude here, so apologies in advance.
Unless you have some large social media following that I didn't know about, your social influence over both Asia and the West seems pretty negligible, including in most deep learning researcher circles. At least from where you are now, the only way your social influence is likely to matter much is if people in this specific community end up with disproportionate influence over AI. That is the major variable which matters, if we're asking how your current social influence will impact AI. So the question is: will this specific community end up with more influence in a short-timeline or a long-timeline world? And, given that this community ends up with disproportionate influence, how does your influence within the community impact the outcome?
(Of course, it's also possible that your influence will grow or shift over time, possibly along different dimensions, and that would change the calculation.)
I would also add that, more generally, the path by which most of your influence will operate is non-obvious, and figuring that out (as well as which actions would change that path) seems useful. Value of information is high here.
TBH, I'm not really trying to convince you of anything in particular. You work on different sorts of things than I do. I'm relaying parts of my own reasoning, but I do not expect my own conclusions to apply to everyone, even given similar reasoning.