Analogizing AGI mainly to existing software projects probably isn't a good starting point for a useful contribution. The big problems are mostly tied to the unique capabilities an actual AGI would have, not to getting a generic software project with some security implications done right.
For a different analogy, imagine a piece of software that fits on a floppy disk and somehow turns any laptop into an explosive device with a nuclear-bomb-level yield (maybe it turns out you can set up a very specific oscillation pattern in multicore CPU silicon that triggers a localized false vacuum collapse). I'm not sure I'd be happy to settle for "code gets stolen anyway, so let's make sure everyone gets access to it". An actual working AGI could be weaponized very cheaply into something far more dangerous than any software engineering analogy gives reason to suppose, and it would be significantly less useful as a defensive measure than as an offensive one.
> For a different analogy, imagine a piece of software that fits on a floppy disk and somehow turns any laptop into an explosive device with a nuclear-bomb-level yield.
Okay. I get that AGI would be this powerful. What I don't get is that the code for it would fit onto a floppy disk. When you say I am making a mistake analogizing AGI to existing software projects, what precisely do you mean to say? Is it that it really wouldn't need very many programmers? Is it that problems with sloppy, rushed coding would be irrelevant? I'm not sure exactly how this co...
I know people have talked about this in the past, but now seems like an important time for some practical brainstorming here. Hypothetical: the recent $15mm Series A funding of Vicarious by Good Ventures and Founders Fund sets off a wave of $450mm in funded AGI projects of approximately the same scope over the next ten years. Let's estimate a third of that, $150mm, goes to paying for man-years of actual, low-level, basic AGI capabilities research. At roughly $100k per fully loaded man-year, that's about 1500 man-years. Anything that can show something resembling progress can easily secure another few hundred man-years to continue making progress.
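A quick back-of-envelope sketch of that arithmetic (the $100k-per-man-year figure is my own assumption, not something from the funding announcements):

```python
# Back-of-envelope: how many man-years of AGI capabilities research
# would the hypothetical funding wave buy? All inputs are assumptions.

total_funding = 450e6        # hypothetical $450mm wave over ten years
research_fraction = 1 / 3    # share assumed to go to low-level capabilities work
cost_per_man_year = 100e3    # assumed fully loaded cost per researcher-year

research_budget = total_funding * research_fraction   # $150mm
man_years = research_budget / cost_per_man_year       # ~1500 man-years
print(f"~{man_years:.0f} man-years of capabilities research")
```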
Now, if this scenario comes to pass, it looks like one of the worst cases: if AGI is possible today, that's a lot of highly incentivized, funded research aimed at making it happen, with no strong safety incentives. It depends on VCs recognizing the high potential impact of an AGI project, and on the companies having access to good researchers.
The Hacker News thread suggests that some people (VCs included) probably already realize the high potential impact, without much consideration for safety.
Is there any way to reverse this trend in public perception? Is there any way to reduce the number of capable researchers? Are there any other angles of attack for this problem?
I'll admit to being very scared.