Suppose we could look into the future of our Everett branch and pick out those sub-branches in which humanity and/or human/moral values have survived past the Singularity in some form. What would we see if we then went backwards in time and looked at how that happened? Here's an attempt to answer that question, or in other words to enumerate the not completely disastrous Singularity scenarios that seem to have non-negligible probability. Note that the question I'm asking here is distinct from "In what direction should we try to nudge the future?" (which I think logically ought to come second).
- Uploading first
  - Become superintelligent (self-modify or build FAI), then take over the world
  - Take over the world as a superorganism
    - Self-modify or build FAI at leisure
    - (Added) Stasis
  - Competitive upload scenario
    - (Added) Subsequent singleton formation
    - (Added) Subsequent AGI intelligence explosion
    - No singleton
- IA (intelligence amplification) first
  - Clone a million von Neumanns (probably a government project)
  - Gradual genetic enhancement of offspring (probably market-based)
  - Pharmaceutical enhancement
  - Direct brain/computer interface
  - What happens next? Upload or code?
- Code (de novo AI) first
  - Scale of project
    - International
    - National
    - Large Corporation
    - Small Organization
  - Secrecy - a spectrum between
    - Totally open
    - Totally secret
  - Planned Friendliness vs. "emergent" non-catastrophe
    - If planned, what approach?
      - "Normative" - define the decision process and utility function manually
      - "Meta-ethical" - e.g., CEV
      - "Meta-philosophical" - program the AI to do philosophy
    - If emergent, why?
      - Objective morality
      - Convergent evolution of values
      - Acausal game theory
      - Standard game theory (e.g., Robin's idea that AIs in a competitive scenario will respect human property rights due to standard game-theoretic considerations)
  - Competitive vs. local FOOM
- (Added) Simultaneous/complementary development of IA and AI
Sorry if this is too cryptic or compressed. I'm writing this mostly for my own future reference, but it could be expanded further if there is interest. And of course I'd welcome any scenarios that may be missing from this list.
Since I see nowhere better to put it, here's a thought I had about the agent in the story, specifically whether the proposed system works if not all other entities subscribe to it.
There is a non-zero probability that there exists, or could come to exist, an AI that does not subscribe to the outlined system of respecting other AIs' values. This AI may have been created before me, or may be created after me. If it already exists, I can have no defence against it. If it does not yet exist, I am safe from it for now, but I must act as far as possible to prevent its creation, since it would prevent my values from being established. Therefore I should eliminate all other potential sources of AI.
[I may retract this after reading up on some of the acausal game theory stuff if I haven't understood it correctly. So apologies if I have missed something obvious]
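The argument in the comment above can be put as a toy expected-utility comparison. This is only a sketch of the reasoning as I read it; every number and parameter name below (`p_exists`, `p_before`, `efficacy`, `cost`) is an illustrative assumption of mine, not anything from the original discussion.

```python
# Toy model of the "eliminate other AI sources" argument.
# A "defector" AI ignores the value-respecting system. With probability
# p_exists one eventually arises; with probability p_before it predates me,
# in which case nothing I do helps. If it would arise after me, preemptively
# eliminating other AI sources reduces its chance by `efficacy`, at some cost.

def expected_value(preempt: bool,
                   p_exists: float = 0.2,   # assumed chance a defector AI arises at all
                   p_before: float = 0.5,   # assumed chance it predates me
                   efficacy: float = 0.9,   # assumed effectiveness of preemption
                   cost: float = 0.05) -> float:
    """Expected value, where keeping my values established is worth 1."""
    p_defeated = p_exists * p_before           # already exists: no defence
    p_after = p_exists * (1.0 - p_before)      # would arise after me
    if preempt:
        # Some later defectors slip through despite preemption.
        p_defeated += p_after * (1.0 - efficacy)
        return (1.0 - p_defeated) - cost
    else:
        # Later defectors arise unopposed.
        p_defeated += p_after
        return 1.0 - p_defeated

# Under these arbitrary numbers, preemption comes out ahead.
print(expected_value(preempt=True), expected_value(preempt=False))
```

Under these assumptions preemption wins, which is the comment's conclusion; the acausal game theory reply, as I understand it, is essentially that the payoffs change once AIs can condition their behaviour on each other's decision procedures rather than only on who was built first.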
I think you might be right; it is very unlikely that all civilizations get AI right enough for all the AIs to understand acausal considerations. I don't know why you were downvoted.