I do not see the point in an exhaustive list of failure scenarios before the existence of any AI is established.
Yeah, I'm not going to care about reading it, and I really don't think it's possible for anyone to get close to AI without it dawning on them what the thing might be capable of. I mean, why don't we get at least /one/ made before we invest our time and effort into something that, in my belief, won't have been relevant, and in all likelihood won't reach the people it needs to reach, even if they cared about it.
We have to sometimes be allowed to have the second discussion. There sometimes has to be a discussion among those who agree that X is an issue, about what to do about it. We can't always return to the discussion of whether X is an issue at all, because there's always someone who dissents. Save it for the threads which are about your dissent.
Series: How to Purchase AI Risk Reduction
One large project proposal currently undergoing cost-benefit analysis at the Singularity Institute is a scholarly AI risk wiki. Below I will summarize the project proposal.
The Idea
Think Scholarpedia:
But the scholarly AI risk wiki would differ from Scholarpedia in these respects:
Example articles: Eliezer Yudkowsky, Nick Bostrom, Ben Goertzel, Carl Shulman, Artificial General Intelligence, Decision Theory, Bayesian Decision Theory, Evidential Decision Theory, Causal Decision Theory, Timeless Decision Theory, Counterfactual Mugging, Existential Risk, Expected Utility, Expected Value, Utility, Friendly AI, Intelligence Explosion, AGI Sputnik Moment, Optimization Process, Optimization Power, Metaethics, Tool AI, Oracle AI, Unfriendly AI, Complexity of Value, Fragility of Value, Church-Turing Thesis, Nanny AI, Whole Brain Emulation, AIXI, Orthogonality Thesis, Instrumental Convergence Thesis, Biological Cognitive Enhancement, Nanotechnology, Recursive Self-Improvement, Intelligence, AI Takeoff, AI Boxing, Coherent Extrapolated Volition, Coherent Aggregated Volition, Reflective Decision Theory, Value Learning, Logical Uncertainty, Technological Development, Technological Forecasting, Emulation Argument for Human-Level AI, Evolutionary Argument for Human-Level AI, Extensibility Argument for Greater-Than-Human Intelligence, Anvil Problem, Optimality Notions, Universal Intelligence, Differential Intellectual Progress, Brain-Computer Interfaces, Malthusian Scenarios, Seed AI, Singleton, Superintelligence, Pascal's Mugging, Moore's Law, Superorganism, Infinities in Ethics, Economic Consequences of AI and Whole Brain Emulation, Creating Friendly AI, Cognitive Bias, Great Filter, Observation Selection Effects, Astronomical Waste, AI Arms Races, Normative and Moral Uncertainty, The Simulation Hypothesis, The Simulation Argument, Information Hazards, Optimal Philanthropy, Neuromorphic AI, Hazards from Large-Scale Computation, AGI Skepticism, Machine Ethics, Event Horizon Thesis, Acceleration Thesis, Singularitarianism, Subgoal Stomp, Wireheading, Ontological Crisis, Moral Divergence, Utility Indifference, Personhood Predicates, Consequentialism, Technological Revolutions, Prediction Markets, Global Catastrophic Risks, Paperclip Maximizer, Coherent Blended Volition, Fun Theory, Game Theory, The Singularity, History of AI Risk Thought, Utility Extraction, Reinforcement Learning, Machine Learning, Probability Theory, Prior Probability, Preferences, Regulation and AI Risk, Gödel Machine, Lifespan Dilemma, AI Advantages, Algorithmic Complexity, Human-AGI Integration and Trade, AGI Chaining, Value Extrapolation, 5 and 10 Problem.
Most of these articles would contain previously unpublished research (not published even in blog posts or comments), because most of the AI risk research done so far has never been written up in any form; it sits in the brains and Google docs of people like Yudkowsky, Bostrom, Shulman, and Armstrong.
Benefits
More than a year ago, I argued that SI would benefit from publishing short, clear, scholarly articles on AI risk. More recently, Nick Beckstead expressed the point this way:
Chris Hallquist added:
Of course, SI has long known it could benefit from clearer presentations of its views, but the cost of producing them has been too high. Scholarly authors of Nick Bostrom's skill and productivity are extremely rare, and almost none of them care about AI risk. With that said, let's be clear about what a scholarly AI risk wiki could accomplish:
There are some benefits to the wiki structure in particular:
Costs
This would be a large project with significant costs. I'm still estimating those costs, but here are some ballpark numbers for a scholarly AI risk wiki containing all the example articles above: