Series: How to Purchase AI Risk Reduction
Another method for purchasing AI risk reduction is to raise the safety-consciousness of researchers doing work related to AGI.
The Singularity Institute is conducting a study of scientists who either (1) stopped researching a topic after realizing it might be dangerous, or (2) shifted their careers toward advocacy, activism, or ethics because they became concerned about the potential negative consequences of their work. From this historical inquiry we hope to learn what causes scientists to become concerned enough about the consequences of their work to take action. Some of the examples we've found so far: Michael Michaud (resigned from SETI in part due to worries about the safety of trying to contact ET), Joseph Rotblat (resigned from the Manhattan Project before the end of the war due to concerns about the destructive impact of nuclear weapons), and Paul Berg (joined a self-imposed moratorium on recombinant DNA research back when it was still unknown how dangerous the new technology could be).
What else can be done?
- Academic outreach, in the form of conversations with AGI researchers and "basics" papers like "Intelligence Explosion: Evidence and Import" or "Complex Value Systems are Required to Realize Valuable Futures."
- A scholarly AI risk wiki.
- Short primers on crucial topics.
- Whatever else is suggested by our study of past researchers who acted on ethical concerns about their work, and by other analyses of human behavior.
Naturally, these efforts should be directed toward researchers who are highly competent and whose work is highly relevant to AGI development: researchers like Josh Tenenbaum, Shane Legg, and Henry Markram.