If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
A thought occurred to me a while back. Call it the "Ghostbusters" approach to the existential risk of AI research. The basic idea is that rather than trying to make the best FAI on the first try, you hedge your bets: work to make an AI that is (a) unlikely to disrupt human civilization in any permanent way, and (b) available for study.
Part of the stress of the 'one big AI' picture of the intelligence explosion is the sense that we'd better get it right the first time. But the space of all nonthreatening superintelligences is surely larger than the space of all helpful ones, and therefore a comparatively easier target to hit on our first shot. You're still taking a gamble, but minimizing that risk seems much easier when you are not simultaneously trying to change human experience in positive ways. And once the attempt had been made, there would be a wealth of new information to inform later choices.
So I'm trying to decide whether this is obviously true or obviously false: p(destroyed by a first-shot FAI attempt) > p(destroyed by a "Ghostbusters" attempt) + [1 − p(destroyed by a "Ghostbusters" attempt)] × p(destroyed by a subsequent, better-informed FAI attempt). That is, the single-shot risk exceeds the combined risk along the two-stage path: catastrophe at the study stage, or surviving it and then being destroyed by the later attempt.
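As a toy illustration of that comparison, here is a minimal sketch. Every probability in it is a made-up placeholder, not an estimate of anything; the point is only how the two risks compose.

```python
# Toy comparison of existential risk under two strategies.
# All probabilities below are hypothetical placeholders, not estimates.

p_direct_fai_failure = 0.30    # assumed P(catastrophe | single direct FAI attempt)
p_ghostbusters_failure = 0.05  # assumed P(catastrophe | deliberately limited "study" AI)
p_informed_fai_failure = 0.15  # assumed P(catastrophe | later FAI attempt informed by study)

# Risk along the two-stage path: catastrophe at step one, or survive step one
# and then catastrophe at step two.
p_two_stage_failure = (p_ghostbusters_failure
                       + (1 - p_ghostbusters_failure) * p_informed_fai_failure)

print(f"Direct attempt risk: {p_direct_fai_failure:.3f}")
print(f"Two-stage path risk: {p_two_stage_failure:.3f}")
print("Two-stage path looks safer" if p_two_stage_failure < p_direct_fai_failure
      else "Direct attempt looks safer")
```

Whether the inequality actually holds depends entirely on what numbers you plug in, which is exactly the question at issue.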
If you're making an AI for study, it shouldn't be superintelligent at all; ideally it should be dumber than you. I can imagine an AGI that can usefully perform some tasks but is too stupid to self-modify its way into a foom if constrained. You can let it be in charge of opening and closing doors!