Setting aside AI specifically, here are some considerations relevant to short-term vs long-term influence in general.
In general, we should expect to have more influence further in the future, just because a longer timescale means there are more possible things we can do. However, the longer the timescale, the harder it is to know what specifically to do, and the more attractive generic resource acquisition becomes as a strategy. Two conceptual models here:
Note that "resource acquisition" in this context does not necessarily mean money - this is definitely an area where knowledge is the real wealth. Rather, it would mean building general-purpose models and understanding the world, rather than something more specific to whatever AI trajectory we expect.
Thanks. While it's true that shorter timescales mean less ability to shift the system, what I'm talking about is shorter timelines, in which we have plenty of ability to shift the system, because all the important stuff is happening in the next few years.
Roughly, I was thinking that conditional on long timelines, the thing to do is acquire resources (especially knowledge, as you say) and conditional on short timelines, the thing to do is... well, also a lot of that, but with a good deal more direct action of various sorts as well. And so I'...
How much influence and ability you expect to have as an individual in that timeline.
For example, I don't expect to have much influence/ability in extremely short timelines, so I should focus on timelines longer than 4 years, with more weight to longer timelines and some tapering off starting around when I expect to die.
How relevant thoughts and planning now will be.
If TAI arrives late in my life or after my death, my thoughts, research, and planning now will be much less relevant to the trajectory of AI going well, so at this moment I should weight timelines in the 4-25 year range more heavily.
There are various things that could happen prior to TAI that cause extinction or catastrophe, and various things that would massively reduce our ability to steer the world, like a breakdown of collective epistemology or a new world war: things that push us past the point of no return. And probably a bunch of them are unknowns.
This effectively works as a discount rate, and is a reason to favor short timelines.
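To make the discount-rate point concrete, here's a toy sketch in Python. All the numbers are invented for illustration (the 2% annual derailment rate, and an influence curve loosely following the "focus on 4-25 years, tapering later" idea above); none of them are claims about actual probabilities.

```python
# Toy model of how a pre-TAI derailment rate discounts longer timelines,
# combined with a personal-influence curve. All numbers are made up.

p_derail_per_year = 0.02   # hypothetical annual chance of a pre-TAI catastrophe
                           # or other point-of-no-return event

def influence(t):
    """Hypothetical personal influence over a scenario where TAI arrives in t years:
    low for very short timelines, tapering off far in the future."""
    ramp_up = min(t / 4, 1.0)                  # little leverage if TAI is <4 years out
    taper = max(0.0, 1 - max(0, t - 25) / 30)  # tapering off after ~25 years
    return ramp_up * taper

for t in [2, 5, 10, 20, 40]:
    survival = (1 - p_derail_per_year) ** t   # chance we reach year t without derailing
    weight = survival * influence(t)          # multiply by P(TAI at year t) for a full EV
    print(f"TAI in {t:2d} years: survival={survival:.2f}, "
          f"influence={influence(t):.2f}, combined weight={weight:.2f}")
```

With these made-up numbers the survival factor alone cuts the weight on a 40-year scenario to roughly half that of a 10-year one, which is the sense in which unknown derailments act like a discount rate favoring short timelines.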
Neglectedness is probably correlated with short and very long timelines. In medium-timelines scenarios AI will be a bigger deal and AI safety will have built up a lot more research and researchers. In very-long-timelines scenarios there will have been an AI winter, people will have stopped thinking about AI, and AI safety advocates may have been discredited as doomsayers or something.
Maybe money is really important. We'll probably have more money the longer we wait, as our savings accounts accumulate, our salaries rise, and our communities grow. This is a reason to favor long timelines... but a weak one IMO since I don't think we are bottlenecked by money.
Maybe we are bottlenecked by knowledge though! Knowledge is clearly very important, and we'll probably have more of it the longer we wait.
However, there are some tricky knots to untangle here. It's true that we'll know more about how to make TAI go well the closer we are to TAI, and thus no matter what our timelines are, we'll be improving our knowledge the longer we wait. However, I feel like there is something fishy about this... On short timelines, TAI is closer, and so we have more knowledge of what it'll be like, whereas on long timelines TAI is farther, so our current level of knowledge is less, and we'd need to wait a while just to catch up to where we would be if timelines were short.
I feel like these considerations roughly cancel out, but I'm not sure.
Tractability is correlated with how much influence and status we have in the AI projects that are making TAI. This consideration favors short timelines, because (1) We have a good idea which AI projects will make TAI conditional on short timelines, and (2) Some of us already work there, they seem already at least somewhat concerned about safety, etc. In the longer term, TAI could be built by a less sympathetic corporation or by a national government. In both cases we'd have much less influence.
"This consideration favors short timelines, because (1) We have a good idea which AI projects will make TAI conditional on short timelines, and (2) Some of us already work there, they seem already at least somewhat concerned about safety, etc."
I don't see how we can have a good idea whether a certain small set of projects will make TAI first conditional on short timelines (or whether the first project will be one in which people are "already at least somewhat concerned about safety"). Like, why not some arbitrary team at Facebook/Alphabet/Am...
Tractability is correlated with whether we use prosaic AI methods (hard to make safe) or more principled, transparent architectures (not as hard). Maybe we are more likely to use prosaic AI methods the shorter the timelines. OTOH, on long timelines we'll have awesome amounts of compute at our disposal and it'll be easier to brute-force the solution by evolving AI etc.
I think this is overall a weak consideration in favor of longer timelines being more tractable.
The long run is strictly the sum of the sequence of short runs that it comprises. The way to influence long timelines is to have influence over pivotal sections of the shorter timelines.
That's true, but I'm not sure it's always useful to frame things that way. "To have influence over pivotal sections of the shorter timelines" you need to know which sections those are, know what type of influence is useful, and be in a position to exert influence when they arrive. If you don't have that knowledge and can't guarantee you'll have that power, and don't know how to change those things, then what you need right now is a short-term plan to fix those shortcomings. However, if you are in a position to influence the short-term but not long term fut...
As my timelines have been shortening, I've been rethinking my priorities, as have many of my colleagues. It occurs to us that there are probably general considerations that should cause us to weight towards short-timelines plans, or long-timelines plans (besides, of course, the probabilities of short and long timelines). For example, if timelines are short then maybe AI safety is more neglected, and therefore higher EV for me to work on, so maybe I should be systematically more inclined to act as if timelines are short.
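To illustrate that neglectedness/EV logic with a toy calculation (the probabilities and marginal values below are invented purely for illustration, not my actual estimates):

```python
# Toy EV comparison: P(scenario) times the marginal value of one more person
# working on safety in that scenario. All numbers are invented for illustration.

scenarios = {
    #                  (P(scenario), marginal value of my work, arbitrary units)
    "short timelines": (0.3, 10),   # fewer researchers -> more neglected -> higher marginal value
    "long timelines":  (0.7, 3),    # field is much more crowded by then
}

for name, (prob, marginal_value) in scenarios.items():
    print(f"{name}: EV of focusing here = {prob * marginal_value:.1f}")

# With these invented numbers, the short-timelines focus wins (3.0 vs 2.1)
# even though short timelines are the less likely scenario.
```

The point is just that the probability of a scenario isn't the only input; differences in neglectedness-driven marginal value can dominate it.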
We are at this point very unsure what the most important considerations are, and how they balance. So I'm polling the hive mind!