How do you expect to prove anything about an FAI without even knowing what an AGI would look like? I don't think current AI researchers have a clear idea of what AGI will eventually look like...
It will be (and look) the way we make it. And we should make it right, which requires first figuring out what that is.
An AGI is an extremely complex entity; you don't get to decide arbitrarily how to make it. If nothing else, there are fundamental computational limits on Bayesian inference that are not even well understood yet. So if you were planning to make your FAI a Bayesian, then you should probably be at least somewhat familiar with these issues, and of course working toward their resolution will help you better understand your constraints. I personally strongly suspect there are also fundamental computational limits on utility maximization, so if you were planning ...
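To make the computational-limits point concrete, here is a minimal sketch (my own illustration, not from the original post) of why exact Bayesian inference is intractable in general: with a discrete hypothesis space of bit-vectors, exact updating must enumerate every hypothesis, and the space doubles with each added bit. The function name and setup are hypothetical.

```python
# Illustrative sketch: exact Bayesian updating over a hypothesis space
# that grows exponentially with problem size.
from itertools import product

def posterior(data, n_bits):
    # Hypotheses: every bit-vector of length n_bits (2**n_bits of them).
    hypotheses = list(product([0, 1], repeat=n_bits))
    # Uniform prior; likelihood is 1 iff the hypothesis agrees with every datum.
    weights = {h: 1.0 for h in hypotheses}
    for index, bit in data:  # each observation fixes one coordinate
        for h in hypotheses:
            if h[index] != bit:
                weights[h] = 0.0
    total = sum(weights.values())
    # Normalize over the surviving hypotheses.
    return {h: w / total for h, w in weights.items() if w > 0}

# Exact inference enumerates all 2**n_bits hypotheses, so adding one bit
# doubles the work -- the exponential blow-up behind the intractability
# of exact Bayesian inference in general.
post = posterior([(0, 1)], n_bits=4)
print(len(post))  # 8 of the 16 hypotheses remain consistent
```

This is of course a toy: real intractability results concern inference in general graphical models, but the enumeration blow-up is the basic obstacle.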
One of the reasons I am skeptical of contributing money to the SIAI is that I simply don't know what they would do with more money; the SIAI currently seems to be viable. Another reason is that I believe an empirical approach is required: we need to learn more about the nature of intelligence before we can even attempt to solve something like friendly AI.
I bring this up because I just came across an old post (2007) on the SIAI blog:
Some questions:
I also have some questions regarding the hiring of experts. Is there a way to find out what exactly the current crew is working on in terms of friendly AI research? Peter de Blanc seems to be the only person who has done actual work related to artificial intelligence.
I am aware that preparatory groundwork has to be done and capital has to be raised. But why is there no timeline? Why is there no progress report? What is missing before the SIAI can actually start working on friendly AI? The Singularity Institute is 10 years old; what is planned for the decade ahead?