Organizations concerned about future Snowdens will be less likely to hire someone who takes such a pledge. Indeed, since I would expect these organizations to be putting in place mechanisms to identify future Snowdens, I would expect them to be the biggest supporters of getting lots of people to signal their likelihood of becoming another Snowden.
Instead, how about people (such as myself) who will never have the technical skills to help create an AGI take a pledge that we will provide financial support to anyone who suffers great personal loss because he exposed AGI development risks?
My guess is that the most likely result would be that (1) by no means all AI researchers would take the pledge (because such things never get universally adopted), and then (2) the government would preferentially hire people who hadn't taken it or who would credibly undertake to break it in the name of National Security, and then (3) the first-order effect of the pledge would be to reduce the fraction of people in any government AI effort who are at all concerned about Friendliness.
That doesn't seem to be a win.
I suppose that if you could get it near-universally adopted it might work, but that would require a degree of buy-in I see no way of getting.
I'm glad the US gov angle is getting more discussion on LessWrong, but I question this part:
When the government AGI project starts rolling, will it have Snowdens who can warn internally about Unfriendly AI (UFAI) risks? They will probably be ignored and suppressed--that's how it goes in hierarchical bureaucratic organizations.
Seems often not true. The team on the Manhattan Project, for example, rather thoroughly investigated the possibility that a nuclear test could "ignite" (set off a nuclear chain reaction in) the atmosphere before pro...
I'm not getting the impression that the actual Snowden's actions are going to succeed in stopping universal surveillance.
Equally, if someone leaks information about an unsafe AI being developed by a superpower (or by a cabal of many countries, like the other intelligence services that cooperate with the NSA), that would likely only make the backing governments try to speed up the project (now that it's public, it might have lost some of its lead in a winner-take-all game) and to hide it better.
If someone is smart enough to build the first ever AGI, and the...
I suspect that this pledge would be just as "effective" as other pledges, such as the pledge of abstinence, for all the same reasons. The person taking the pledge does not really know him or herself (current or future) well enough to reliably precommit.
I don't really like seeing this type of topic, because most of the interesting things that can be said on the matter shouldn't be discussed in public.
Just a minor nitpick: The intelligence community had nothing to do with the Manhattan Project; work on nuclear weapons was initiated by direct presidential executive order, creating an entirely new committee for nuclear weapons research under the control of the US Army (the Army's own intelligence service had little to do with the program itself).
"I hereby promise to fight unsafe AGI development in whatever way I can, through internal channels in my organization, by working with outside allies, or even by revealing the risks to the public."
Hmmmm, now potential ethical AGI researchers have a page with that text in their browser history.
a truly ethical consequentialist would understand that exposing unsafe projects is good, while exposing safer projects is bad
Hardly. Sabotaging unsafe projects is good. But exposing them may not be, if it creates other unsafe projects.
Indeed, it seems implausible that exposing unsafe projects to the general public is the best way to sabotage them - anyone who already had the clout to create such a secret project is unlikely to stop, either in the wake of a leak or because secrecy precautions to prevent one are too cumbersome.
Mind you, I haven't thought ...
The AGI will be unfriendly, unless friendliness is a primary goal from the start.
Not proven.
Here is a suggestion for slowing down future secretive and unsafe UFAI projects.
Take the American defense and intelligence community as a case in point. They are a top candidate for the creation of Artificial General Intelligence (AGI): They can get the massive funding, and they can get some top (or near-top) brains on the job. The AGI will be unfriendly, unless friendliness is a primary goal from the start.
The American defense and intelligence community created the Manhattan Project, which is the canonical example for a giant, secret, leading-edge science-technology project with existential-risk implications.
David Chalmers (2010): "When I discussed [AI existential risk] with cadets and staff at the West Point Military Academy, the question arose as to whether the US military or other branches of the government might attempt to prevent the creation of AI or AI+, due to the risks of an intelligence explosion. The consensus was that they would not, as such prevention would only increase the chances that AI or AI+ would first be created by a foreign power."
Edward Snowden broke the intelligence community's norms by reporting what he saw as tremendous ethical and legal violations. This requires an exceptionally well-developed personal sense of ethics (even if you disagree with those ethics). His actions have drawn a lot of support from those who share his values. Many who condemn him as a traitor are still criticizing government intrusions on the basis of his revelations.
When the government AGI project starts rolling, will it have Snowdens who can warn internally about Unfriendly AI (UFAI) risks? They will probably be ignored and suppressed--that's how it goes in hierarchical bureaucratic organizations. Will these future Snowdens have the courage to keep fighting internally, and eventually to report the risks to the public or to their allies in the Friendly AI (FAI) research community?
Naturally, the Snowden scenario is not limited to the US government. We can seek ethical dissidents, truthtellers, and whistleblowers in any large and powerful organization that does unsafe research, whether a government or a corporation.
Should we start preparing budding AGI researchers to think this way? We can do this by encouraging people to take consequentialist ethics seriously, which by itself can lead to Snowden-like results, and LessWrong is certainly working on that. But another approach is to start talking more directly about the "UFAI Whistleblower Pledge."
I hereby promise to fight unsafe AGI development in whatever way I can, through internal channels in my organization, by working with outside allies, or even by revealing the risks to the public.
If this concept becomes widespread, and all the more so if people sign on, the threat of ethical whistleblowing will hover over every unsafe AGI project. Even with all the oaths and threats they use to make new employees keep secrets, the notion that speaking out on UFAI is deep in the consensus of serious AGI developers will cast a shadow on every project.
To be clear, the beneficial effect I am talking about here is not the leaks themselves--it is the atmosphere of potential leaks, the lack of trust by management that researchers are completely committed to keeping any secret. For example, post-Snowden, the intelligence agencies are requiring that sensitive files be accessed only by two people working together, and they are probably tightening their approval guidelines and so rejecting otherwise suitable candidates. These changes make everything more cumbersome.
In creating the OpenCog project, Ben Goertzel advocated total openness as a way of accelerating the progress of those researchers who are willing to expose any dangerous work they might be doing--even if this means that the safer researchers are giving their ideas to the unsafe, secretive ones.
On the other hand, Eliezer Yudkowsky has suggested that MIRI keep its AGI implementation ideas secret, to avoid handing them to an unsafe project. (See "Evaluating the Feasibility of SI's Plans," and, if you can stomach some argument from fictional evidence, "Three Worlds Collide.") Encouraging openness and leaks could endanger Eliezer's strategy. But if we follow Eliezer's position, a truly ethical consequentialist would understand that exposing unsafe projects is good, while exposing safer projects is bad.
So, what do you think? Should we start signing as many current and upcoming AGI researchers as possible to the UFAI Whistleblower Pledge, or work to make this an ethical norm in the community?