The article only touches on it briefly, but it suggests that faster AI takeoffs are worse — yet "fast" is only relative to the fastest human minds.
Has there been much examination of the benefits of slow takeoff scenarios, or takeoffs that happen after human enhancements become available? I vaguely recall a MIRI fundraiser saying that they would start putting marginal resources toward investigating a possible post-Age of EM takeoff, but I have no idea if they got to that funding goal.
Personally, I don't see Brain-Computer Interfaces as useful for AI takeoffs, at least in the near term. We can type ~100 words per minute, so typing out a 40,000-word novel would take only ~400 minutes — yet writing one takes far longer than that. So, contrary to what Elon believes, we aren't actually I/O bound; we're limited by the number of neurons devoted to a given task.
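The arithmetic behind that claim, as a quick sanity check (the typing speed and word count are the figures from above; the "months" comparison is my own rough assumption about how long novels actually take):

```python
# Back-of-the-envelope check of the "I/O bound" claim.
words = 40_000       # length of a short novel
typing_wpm = 100     # rough touch-typing speed

io_minutes = words / typing_wpm
print(f"Pure typing time: {io_minutes:.0f} minutes (~{io_minutes / 60:.1f} hours)")

# Novels typically take months of work, not ~7 hours of typing,
# so the bottleneck is composition (cognition), not output bandwidth.
```

If output bandwidth were the constraint, a novel would take under a day of typing; since it doesn't, speeding up the output channel alone buys little.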
Early BCIs might make some tasks much faster, like long division. And since some other tasks really are I/O bound, BCIs would help somewhat with those. But we wouldn't be able to fully keep up with AI unless we had full-fledged upgrades to all of our cognitive architecture.
So, is almost keeping up with AI likely to be useful, or are slow takeoffs just as bad? Are the odds of throwing together a FAI in the equivalent of a month any better than in a day? What % of those panicked emergency FAI activities could be sped up by better computer user interfaces/text editors, personal assistants, a device that zapped your brain every time it detected akrasia setting in, or by a RAM upgrade to the brain's working memory?
(sorry to spam. I'm separating questions out to keep the discussion tidy.)
Perhaps Elon doesn't believe we are I/O bound, but that he is I/O bound. ;]
There's a more serious problem which I've not seen most of the Neuralink-related articles talk about* — which is that layering intelligence augmentations around an overclocked baboon brain will probably actually increase the risk of a non-friendly takeoff.