I would like to interject for a moment and say that I would be very willing to volunteer a substantial portion of my time, provided I'm taught specifically what I should do and given sufficient time to learn.
Although this is purely anecdotal, I believe there is a significant number of people, including myself, who would like to see the Singularity happen and would be willing to volunteer vast amounts of time to increase the probability of a friendly intelligence explosion. You might be underestimating the number of people willing to volunteer for free.
This article rings very true to me:
...In our experience, valuable volunteers are rare. The people who email us about volunteer opportunities generally seem enthusiastic about GiveWell’s mission, and motivated by a shared belief in our goals to give up their free time to help us. Yet, the majority of these people never complete useful work for us.
...almost 80% of people who take the initiative to seek us out and ask for unpaid work fail to complete a single assignment. But maybe this shouldn’t be surprising. Writing an email is quick and exciting; spending a...
Series: How to Purchase AI Risk Reduction
One large project proposal currently undergoing cost-benefit analysis at the Singularity Institute is a scholarly AI risk wiki. Below I will summarize the project proposal, because:
The Idea
Think Scholarpedia:
But the scholarly AI risk wiki would differ from Scholarpedia in these respects:
Example articles: Eliezer Yudkowsky, Nick Bostrom, Ben Goertzel, Carl Shulman, Artificial General Intelligence, Decision Theory, Bayesian Decision Theory, Evidential Decision Theory, Causal Decision Theory, Timeless Decision Theory, Counterfactual Mugging, Existential Risk, Expected Utility, Expected Value, Utility, Friendly AI, Intelligence Explosion, AGI Sputnik Moment, Optimization Process, Optimization Power, Metaethics, Tool AI, Oracle AI, Unfriendly AI, Complexity of Value, Fragility of Value, Church-Turing Thesis, Nanny AI, Whole Brain Emulation, AIXI, Orthogonality Thesis, Instrumental Convergence Thesis, Biological Cognitive Enhancement, Nanotechnology, Recursive Self-Improvement, Intelligence, AI Takeoff, AI Boxing, Coherent Extrapolated Volition, Coherent Aggregated Volition, Reflective Decision Theory, Value Learning, Logical Uncertainty, Technological Development, Technological Forecasting, Emulation Argument for Human-Level AI, Evolutionary Argument for Human-Level AI, Extensibility Argument for Greater-Than-Human Intelligence, Anvil Problem, Optimality Notions, Universal Intelligence, Differential Intellectual Progress, Brain-Computer Interfaces, Malthusian Scenarios, Seed AI, Singleton, Superintelligence, Pascal's Mugging, Moore's Law, Superorganism, Infinities in Ethics, Economic Consequences of AI and Whole Brain Emulation, Creating Friendly AI, Cognitive Bias, Great Filter, Observation Selection Effects, Astronomical Waste, AI Arms Races, Normative and Moral Uncertainty, The Simulation Hypothesis, The Simulation Argument, Information Hazards, Optimal Philanthropy, Neuromorphic AI, Hazards from Large-Scale Computation, AGI Skepticism, Machine Ethics, Event Horizon Thesis, Acceleration Thesis, Singularitarianism, Subgoal Stomp, Wireheading, Ontological Crisis, Moral Divergence, Utility Indifference, Personhood Predicates, Consequentialism, Technological Revolutions, Prediction Markets, Global Catastrophic Risks, Paperclip Maximizer, Coherent Blended Volition, Fun Theory, Game Theory, The Singularity, History of AI Risk Thought, Utility Extraction, Reinforcement Learning, Machine Learning, Probability Theory, Prior Probability, Preferences, Regulation and AI Risk, Godel Machine, Lifespan Dilemma, AI Advantages, Algorithmic Complexity, Human-AGI Integration and Trade, AGI Chaining, Value Extrapolation, 5 and 10 Problem.
Most of these articles would contain previously unpublished research (not published even in blog posts or comments), because most of the AI risk research done so far has never been written up in any form; it sits in the brains and Google docs of people like Yudkowsky, Bostrom, Shulman, and Armstrong.
Benefits
More than a year ago, I argued that SI would benefit from publishing short, clear, scholarly articles on AI risk. More recently, Nick Beckstead expressed the point this way:
Chris Hallquist added:
Of course, SI has long known it could benefit from clearer presentations of its views, but the cost of producing them has been too high: scholarly authors of Nick Bostrom's skill and productivity are extremely rare, and almost none of them care about AI risk. With that said, let's be clear about what a scholarly AI risk wiki could accomplish:
There are some benefits to the wiki structure in particular:
Costs
This would be a large project with significant costs. I'm still estimating them, but here are some ballpark numbers for a scholarly AI risk wiki containing all the example articles above: