Series: How to Purchase AI Risk Reduction
Another method for purchasing AI risk reduction is to raise the safety-consciousness of researchers doing work related to AGI.
The Singularity Institute is conducting a study of scientists who either (1) stopped researching a topic after realizing it might be dangerous, or (2) shifted their careers toward advocacy, activism, or ethics because they became concerned about the potential negative consequences of their work. From this historical inquiry we hope to learn what causes scientists to become so concerned about the consequences of their work that they take action. Some examples we've found so far: Michael Michaud (resigned from SETI in part over worries about the safety of trying to contact ET), Joseph Rotblat (resigned from the Manhattan Project before the end of the war out of concern about the destructive impact of nuclear weapons), and Paul Berg (joined a self-imposed moratorium on recombinant DNA research back when it was still unknown how dangerous the new technology could be).
What else can be done?
- Academic outreach, in the form of conversations with AGI researchers and "basics" papers like *Intelligence Explosion: Evidence and Import* or *Complex Value Systems are Required to Realize Valuable Futures*.
- A scholarly AI risk wiki.
- Short primers on crucial topics.
- Whatever is suggested by our analysis of past researchers who took action in response to their concerns about the ethics of their research, and by other analyses of human behavior.
Naturally, these efforts should be directed toward researchers who are highly competent and whose work is most relevant to progress toward AGI: researchers like Josh Tenenbaum, Shane Legg, and Henry Markram.