Related Posts: A cynical explanation for why rationalists worry about FAI, A belief propagation graph
Lately I've been pondering the fact that while SIAI and its plan to form a team to build FAI have many critics, few of us critics seem to agree on what SIAI, or we ourselves, should do instead. Here are some of the alternative suggestions offered so far:
- work on computer security
- work to improve laws and institutions
- work on mind uploading
- work on intelligence amplification
- work on non-autonomous AI (e.g., Oracle AI, "Tool AI", automated formal reasoning systems, etc.)
- work on academically "mainstream" AGI approaches or trust that those researchers know what they are doing
- stop worrying about the Singularity and work on more mundane goals
> ideal reasoners are not supposed to disagree
My ideal thinkers do disagree, even with themselves, especially about areas as radically uncertain as this one.