Today, we have both machine-amplified human intelligence and machine intelligence - and that situation is likely to persist until we have intelligent machines that are roughly as smart as humans.
I think the natural way to classify that is to look at when the pure machine intelligences exceed the augmented humans in aggregate intelligence/power/wealth. If it happens at significantly higher than baseline human level intelligence, then I'd classify that as IA first, otherwise I'd classify it as upload or code first depending on the nature of the machine intelligences. (And of course there will always be "too close to call" cases.)
So: by far the most important human augmentation in the future is going to involve preprocessing sensory inputs using machines, post-processing motor outputs by machines, and doing processing that bypasses the human brain entirely. Not drugs, or education, or anything else.
In such scenarios, the machines won't ever really "overtake" the augmented humans; they will just catch up with them. So, for instance, a human with a robo...
Suppose we could look into the future of our Everett branch and pick out those sub-branches in which humanity and/or human/moral values have survived past the Singularity in some form. What would we see if we then went backwards in time and looked at how that happened? Here's an attempt to answer that question, or in other words to enumerate the not completely disastrous Singularity scenarios that seem to have non-negligible probability. Note that the question I'm asking here is distinct from "In what direction should we try to nudge the future?" (which I think logically ought to come second).
Sorry if this is too cryptic or compressed. I'm writing this mostly for my own future reference, but perhaps it could be expanded more if there is interest. And of course I'd welcome any scenarios that may be missing from this list.