Musing:
Consider a graph with "Performance, i.e. how much diverse valuable stuff you can accomplish" on the Y-axis, and "Time budget, i.e. how long you are allowed to operate as an agent with a computer" on the X-axis.
Suppose that initially, frontier AIs are broadly superhuman when given very small time budgets, but subhuman when given large time budgets. That is, humans are better able to 'scale time budget' than AIs; humans are better at 'long-horizon agency skills.'
This would be represented as two lines on the graph, a human line and an AI line, both going up and to the right, that intersect: the AI line starts higher but has a lower slope.
Suppose further that as AI progress continues, frontier AIs gradually get better at converting time budget into performance / scaling time budget / long-horizon agency skills. That is, the slope of the AI line increases.
Perhaps it gradually gets closer to the theoretical limits, e.g. each time step the slope of the AI line gets 0.1% closer to the slope of the theoretical-limits-of-agency-skills line. (Which must of course have a higher slope than the human line; let's be conservative and say 2x higher.)
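The setup above can be sketched in a few lines of Python. All parameter values here are illustrative assumptions, not numbers from the post: human line with slope 1.0, AI line starting higher (intercept 10 vs 0) but with slope 0.2, a theoretical-limit slope of 2x the human slope, and the AI slope closing 0.1% of its gap to the limit each step. The crossover point is just where the two lines intersect.

```python
# Toy model of the crossover horizon: a human line and an AI line,
# where the AI's slope creeps geometrically toward a limit of 2x the
# human slope. All parameter values are illustrative assumptions.

def crossover_horizons(h0=0.0, m_h=1.0, a0=10.0, m_a=0.2,
                       rate=0.001, steps=1000):
    """Return the crossover time budget t* at each step, i.e. where
    a0 + m_a * t = h0 + m_h * t, or None once m_a >= m_h (the AI line
    is then steeper than the human line and there is no crossover)."""
    m_lim = 2.0 * m_h  # theoretical-limits-of-agency-skills slope, 2x human
    horizons = []
    for _ in range(steps):
        if m_a < m_h:
            horizons.append((a0 - h0) / (m_h - m_a))  # intersection point
        else:
            horizons.append(None)  # AI slope now exceeds human slope
        m_a += rate * (m_lim - m_a)  # close 0.1% of the gap to the limit
    return horizons
```

Printing the step-to-step growth ratio of the horizon shows it is roughly constant early on (looks exponential) and then climbs, which is the superexponential behavior discussed below.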
What happens? See this vibecoded graph:
tl;dr: As AI progress continues and AIs gradually get better at agency skills, the "crossover point" moves to larger and larger time horizons. That is, the time budget below which AIs outperform humans, and above which they underperform, keeps growing. (See the green line on the right graph, which is the actual data plus some added noise.)
Specifically it gets longer exponentially... wait no, superexponentially! It only looks like an exponential initially. But as the slope of the AI line starts to get close to the slope of the human line, the curve bends up a bit and then shoots up to infinity, and then it's over: the AI line is steeper than the human line, so there is no longer any crossover point. AIs have "infinite horizon length" now.
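The moment the crossover shoots to infinity can even be computed in closed form under the toy parameterization sketched earlier (these numbers are assumptions for illustration): the AI's slope gap to the limit shrinks by 0.1% per step, i.e. geometrically, so it reaches the human slope at a finite, predictable step.

```python
import math

# Illustrative assumptions: human slope 1.0, initial AI slope 0.2,
# limit slope 2x the human slope, and each step the AI slope closes
# 0.1% of its remaining gap to the limit.
m_h, m_a0, rate = 1.0, 0.2, 0.001
m_lim = 2.0 * m_h

# Gap to the limit after n steps: (m_lim - m_a0) * (1 - rate)**n.
# The crossover horizon diverges when that gap falls to m_lim - m_h,
# i.e. when the AI slope equals the human slope.
n_star = math.log((m_lim - m_a0) / (m_lim - m_h)) / -math.log(1 - rate)
```

With these numbers n_star is on the order of a few hundred steps; the crossover horizon grows faster and faster as n approaches n_star, then ceases to exist, matching the "shoots up to infinity" behavior.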
(Bonus: The dot