Because it's bad tactics to endorse it in the open, or because sabotaging unfriendly AI research is a case of "not even if it's the right thing to do"?
I assume you'd slow down or put the kibosh on a not-proved-to-be-Friendly AGI project if you had the authority to do so. But you wouldn't interfere if you didn't have legitimate authority over the project? There are plenty of opportunities for sabotage that are legal but still break ethical norms (denying the people on the project tenure if you're at a university, hiring the best researchers away, etc.).
Do you think this shouldn't be discussed out of respect for the law, out of respect for the autonomy of researchers, or a mix of both?
Intelligence does not imply benevolence. Surely there are already people who will try to sabotage unFriendly projects.
I know people have talked about this in the past, but now seems like an important time for some practical brainstorming. Hypothetical: the recent $15mm Series A funding of Vicarious by Good Ventures and Founders Fund sets off a wave of $450mm in funded AGI projects of approximately the same scope over the next ten years. Let's estimate that a third of that goes to paying for man-years of actual, low-level, basic AGI capabilities research; at roughly $100k per man-year, that's about 1,500 man-years. Anything that can show something resembling progress can easily secure another few hundred man-years to continue making progress.
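A quick back-of-envelope check of that estimate (the ~$100k fully-loaded cost per man-year is my own assumption, inferred from the figures above rather than stated anywhere):

```python
# Back-of-envelope check of the man-year estimate above.
# Assumption: ~$100k fully-loaded cost per researcher man-year
# (not given in the hypothetical; inferred from the figures).

total_funding = 450e6          # $450mm in AGI project funding over ten years
research_fraction = 1 / 3      # share spent on low-level capabilities research
cost_per_man_year = 100e3      # assumed fully-loaded cost per man-year

man_years = total_funding * research_fraction / cost_per_man_year
print(man_years)  # -> 1500.0
```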
Now, if this scenario comes to pass, it seems like one of the worst-case scenarios -- if AGI is possible today, that's a lot of highly incentivized, funded research to make it happen, without strong safety incentives. It seems to depend on VCs realizing the high potential impact of an AGI project, and on the companies having access to good researchers.
The Hacker News thread suggests that some people (VCs included) probably already realize the high potential impact, without much consideration for safety.
Is there any way to reverse this trend in public perception? Is there any way to reduce the number of capable researchers? Are there any other angles of attack for this problem?
I'll admit to being very scared.