Related Posts: A cynical explanation for why rationalists worry about FAI, A belief propagation graph
Lately I've been pondering the fact that while there are many critics of SIAI and its plan to form a team to build FAI, few of us seem to agree on what SIAI or we should do instead. Here are some of the alternative suggestions offered so far:
- work on computer security
- work to improve laws and institutions
- work on mind uploading
- work on intelligence amplification
- work on non-autonomous AI (e.g., Oracle AI, "Tool AI", automated formal reasoning systems, etc.)
- work on academically "mainstream" AGI approaches or trust that those researchers know what they are doing
- stop worrying about the Singularity and work on more mundane goals
Holden presumably thinks that many academic AGI approaches are too risky because they are agent designs.
Nick Szabo thinks working on mind uploading is a waste of time.
I personally promoted intelligence amplification and argued that working on security is of little utility.
Robin Hanson thinks the Singularity will be an important event that we can help make better by improving laws/institutions or advancing certain technologies ahead of others, and presumably would disagree that we should stop worrying about it.
He's an example of biased selection among critics. His detailed critique would never have been heard if he hadn't taken the issue seriously enough in the first place.
You don't work on mind uploading today; you work on neurology, which solves a lot of practical problems, including treatments for disorders, and which may lead to uploading, or not. I am rather sceptical that the future mind uploadi...