I've been returning to my "reduced impact AI" approach, and am currently working on some ideas.
What I need are some ideas on features that might distinguish between an excellent FAI outcome and a disaster. The more abstract and general the ideas, the better. Anyone got some suggestions? Don't worry about quality at this point; originality is prized more!
I'm looking for something generic that is easy to measure. At a crude level, if the only options were "paperclipper" vs FAI, then we could distinguish those worlds by counting steel content.
So basically, I want some more or less objective measure such that worlds scoring well on it have a higher proportion of good outcomes than the baseline.
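To make the "higher proportion of good outcomes than the baseline" idea concrete, here is a minimal toy sketch in Python. Everything in it is an assumption for illustration: the proxy (`steel_fraction`), the made-up likelihood shapes, and all the numbers. It just treats an observable proxy as evidence and updates a prior over "good vs bad outcome" with Bayes' rule, which is the kind of discrimination a crude measure like steel content would have to support.

```python
# Toy sketch only: the proxy, the likelihood shapes, and all numbers are invented.
# Idea: an observable proxy (fraction of accessible matter turned into steel)
# serves as evidence about which outcome class a world belongs to.

def posterior_good(steel_fraction: float, prior_good: float = 0.5) -> float:
    """Posterior probability of a 'good' (FAI-like) outcome after observing
    the proxy, under made-up likelihood models for the two outcome classes."""

    def likelihood(fraction, mean, spread):
        # Crude triangular likelihood, purely illustrative.
        return max(0.0, 1.0 - abs(fraction - mean) / spread)

    # Assumed: a paperclipper converts most matter to steel; an FAI leaves
    # steel usage near today's industrial levels.
    l_good = likelihood(steel_fraction, mean=0.01, spread=0.1)   # FAI-like world
    l_bad = likelihood(steel_fraction, mean=0.9, spread=0.5)     # paperclipper world

    evidence = l_good * prior_good + l_bad * (1.0 - prior_good)
    return (l_good * prior_good / evidence) if evidence > 0 else prior_good


if __name__ == "__main__":
    # A world where 2% of matter is steel looks much more like the FAI baseline
    # than the paperclipper one, so the posterior shifts toward 'good'.
    for f in (0.02, 0.5, 0.95):
        print(f"steel fraction {f:.2f} -> P(good) ~ {posterior_good(f):.2f}")
```

The point of the sketch is only that a useful distinguishing feature needs likelihoods that differ sharply between outcome classes; the hard part, of course, is finding a proxy for which we can actually justify those likelihoods.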
Unfortunately, our brains lack the capacity to think about superior intelligence. As I understand it, you want to describe particular examples of what lies between scenario 0 (human extinction) and scenario 1 (mutual cooperation and a new, better level of everything).
First, there are scenarios where the human race stands on the edge of extinction but somehow manages to fight back and survive; call that the Skynet scenario. Analogously, you can think of a scenario where the emergence of FAI doesn't do any great harm, but also doesn't provide many new insights and never really gets far beyond human-level intelligence.
Skynet is not a realistic scenario.