I know people have talked about this in the past, but now seems like an important time for some practical brainstorming. Hypothetical: the recent $15mm Series A funding of Vicarious by Good Ventures and Founders Fund sets off a wave of $450mm in funded AGI projects of approximately the same scope over the next ten years. Let's estimate a third of that goes to paying for man-years of actual, low-level, basic AGI capabilities research; at roughly $100k per man-year, that's about 1500 man-years. Any project that can show something resembling progress can easily secure another few hundred man-years to keep going.
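To make the arithmetic explicit, here's the back-of-envelope version in a few lines of Python; the ~30 Vicarious-scale projects and the ~$100k fully loaded cost per man-year are the assumptions behind the totals above, not data:

```python
# Back-of-envelope sketch of the hypothetical above. All figures are
# assumptions chosen to match the scenario, not real data.

projects = 30                 # hypothetical wave of Vicarious-scale projects
funding_per_project = 15e6    # dollars, matching the $15mm Series A
research_fraction = 1 / 3     # assumed share spent on basic capabilities work
cost_per_man_year = 100e3     # assumed fully loaded cost per researcher-year

total_funding = projects * funding_per_project       # $450mm
research_spend = total_funding * research_fraction   # $150mm
man_years = research_spend / cost_per_man_year       # ~1500 man-years

print(f"Total funding:  ${total_funding / 1e6:.0f}mm")
print(f"Research spend: ${research_spend / 1e6:.0f}mm")
print(f"Man-years:      {man_years:.0f}")
```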
Now, if this scenario comes to pass, it looks like one of the worst cases: if AGI is possible today, that's a lot of highly incentivized, well-funded research to make it happen, with no strong safety incentives. Whether it comes to pass seems to depend on VCs recognizing the high potential impact of an AGI project, and on those companies having access to good researchers.
The Hacker News thread suggests that some people (VCs included) probably already realize the high potential impact, without much consideration for safety:
...I think this [is] exactly the sort of innovation timeline real venture capitalists should be considering - funding real R&D that could have a revolutionary impact even if the odds are against it.
The company to get all of this right will be the first two trillion dollar company.
Is there any way to reverse this trend in public perception? Is there any way to reduce the number of capable researchers? Are there any other angles of attack for this problem?
I'll admit to being very scared.
Convince programmers to refuse to work on risky AGI projects:
Please provide constructive criticism.
We're in an era where the people required to make AGI happen are in so much demand that if they refused to work on an AGI that wasn't safe, they'd still have plenty of jobs left to choose from. You could convince programmers to adopt a policy of refusing to work on unsafe AGI. These specifics would be required:
Make sure that programmers at all levels have a good way to determine whether the AGI they're working on has proper safety mechanisms in place. Sometimes employees get such a narrow view of their job, and are told such confident fluff by management, that they have no idea what is going on. I am not qualified to do this, but if someone reading this post is, it could be very important to write some guidelines for how programmers can tell, from within their employment position, whether the AGI they're working on might be unsafe. It may be more effective to give them a confidential hotline. Things can get complicated, both in the code and in the corporate culture, and employees may need help sorting out what's going on.
You could create resources to help programmers organize a strike or walkout. Things like: an anonymous web interface where people interested in striking can post their intent, which would help momentum build, and a place for people to post stories about how they took action against unsafe AI projects. People might not know how to organize otherwise (especially on large projects) or might need the inspiration to get moving. A minimal sketch of what the anonymous intent board might look like follows below.
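To make that first idea concrete, here's a minimal sketch of an anonymous intent board. I'm assuming Flask, and the endpoint and field names are purely illustrative; a real service would also need persistent storage, abuse filtering, and care about what ends up in server logs:

```python
# Minimal sketch (not a real service) of an anonymous strike-intent board:
# visitors post an intent-to-strike statement with no identifying details,
# and anyone can see the running total and the anonymous statements.

from flask import Flask, request, jsonify

app = Flask(__name__)
pledges = []  # in-memory only; a real service needs privacy-preserving persistence


@app.post("/pledge")
def add_pledge():
    # Accept only a free-text statement of intent; deliberately store no
    # names, emails, or IP addresses so posting stays anonymous.
    statement = (request.get_json(silent=True) or {}).get("statement", "").strip()
    if not statement:
        return jsonify(error="empty statement"), 400
    pledges.append(statement)
    return jsonify(count=len(pledges)), 201


@app.get("/pledges")
def list_pledges():
    # Publishing the count and the anonymous statements is what lets
    # momentum build: people can see how many others intend to act.
    return jsonify(count=len(pledges), statements=pledges)


if __name__ == "__main__":
    app.run()
```

The design point is that only the running count and the anonymous statements are published, so momentum can build without anyone having to identify themselves first.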
If a union is formed around technological safety, the union could demand that outside agencies be allowed to audit the project, and that the company be forthcoming with all safety-related information.
On the feasibility of getting through to the programmers
See also "Sabotage would not work".
Gwern responded to my comment in his Moore's Law thread. I don't know why he responded over there instead of over here, but it seemed more organized to relocate the conversation to the comment it is about, so I have put my response to him here.
Do you have evidence one way or the other of what proportion of programmers get the existential risk posed by AGI? In ...