that a small amount of information when requested is significantly better than no information
That's assuming the information is correct. It could also be wrong or misleading, in which case it would be better not to receive it. An exchange like "Do you mean whole brain emulation?" "No" doesn't fall into this category, but claims like "we know how to build AGI" certainly could be wrong, and are indeed generally considered to be wrong.
Unless you provide a reasonable argument for why we should believe such a claim, anyone maintaining basic epistemic hygiene (or common sense, for that matter) will be forced to ignore it. Such comments therefore serve only as a distraction, providing no useful value while taking up space and attention.
If I were to comment on conversations and tell people that 2 + 2 = 5 and then refuse to provide any justification when asked, people would quite reasonably conclude that I was a troll, too.
One of the reasons I am skeptical of contributing money to the SIAI is that I simply don't know what they would do with more money. The SIAI currently seems to be viable. Another reason is that I believe an empirical approach is required: we need to learn more about the nature of intelligence before we can even attempt to solve something like friendly AI.
I bring this up because I just came across an old post (2007) on the SIAI blog:
Some questions:
I also have some questions regarding the hiring of experts. Is there a way to find out what exactly the current team is working on in terms of friendly AI research? Peter de Blanc seems to be the only person who has done actual work related to artificial intelligence.
I am aware that preparatory groundwork has to be done and capital has to be raised. But why is there no timeline? Why is there no progress report? What is missing for the SIAI to actually start working on friendly AI? The Singularity Institute is 10 years old; what is planned for the decade ahead?