Back in May, Luke suggested creating a scholarly AI risk wiki that would include a large set of summary articles on topics related to AI risk, mapped out in terms of how they relate to the field's central debates. In response, Wei Dai suggested that, among other things, the existing Less Wrong wiki could be improved instead. As a result, the Singularity Institute has massively improved the LW wiki in preparation for a more ambitious scholarly AI risk wiki. The outcome was the creation or dramatic expansion of the following articles:
- 5-and-10
- Acausal Trade
- Acceleration thesis
- Agent
- AGI chaining
- AGI skepticism
- AGI Sputnik moment
- AI advantages
- AI arms race
- AI Boxing
- AI-complete
- AI takeoff
- AIXI
- Algorithmic complexity
- Anvil problem
- Astronomical waste
- Bayesian decision theory
- Benevolence
- Ben Goertzel
- Bias
- Biological Cognitive Enhancement
- Brain-computer interfaces
- Carl Shulman
- Causal decision theory
- Church-Turing thesis
- Coherent Aggregated Volition
- Coherent Blended Volition
- Coherent Extrapolated Volition
- Computing overhang
- Computronium
- Consequentialism
- Counterfactual mugging
- Creating Friendly AI
- Cyc
- Decision theory
- Differential intellectual progress
- Economic consequences of AI and whole brain emulation
- Eliezer Yudkowsky
- Empathic inference
- Emulation argument for human-level AI
- EURISKO
- Event horizon thesis
- Evidential Decision Theory
- Evolutionary algorithm
- Evolutionary argument for human-level AI
- Existential risk
- Expected utility
- Expected value
- Extensibility argument for greater-than-human intelligence
- FAI-complete
- Fallacy
- Fragility of value
- Friendly AI
- Fun Theory
- Future of Humanity Institute
- Game theory
- Gödel machine
- Great Filter
- History of AI risk thought
- Human-AGI integration and trade
- Induction
- Infinities in ethics
- Information hazard
- Instrumental convergence thesis
- Intelligence
- Intelligence explosion
- Jeeves Problem
- Lifespan dilemma
- Machine ethics
- Machine learning
- Malthusian Scenarios
- Metaethics
- Moore's law
- Moral divergence
- Moral uncertainty
- Nanny AI
- Nanotechnology
- Neuromorphic AI
- Nick Bostrom
- Nonperson predicate
- Observation selection effect
- Ontological crisis
- Optimal philanthropy
- Optimization power
- Optimization process
- Oracle AI
- Orthogonality thesis
- Paperclip maximizer
- Pascal's mugging
- Prediction market
- Preference
- Prior probability
- Probability theory
- Recursive self-improvement
- Reflective decision theory
- Regulation and AI risk
- Reinforcement learning
- Scoring rule
- Search space
- Seed AI
- Self Indication Assumption
- Self Sampling Assumption
- Simulation argument
- Simulation hypothesis
- Singleton
- Singularitarianism
- Singularity
- Subgoal stomp
- Superintelligence
- Superorganism
- Technological forecasting
- Technological revolution
- Terminal value
- Timeless decision theory
- Tool AI
- Unfriendly AI
- Universal intelligence
- Utility
- Utility extraction
- Utility indifference
- Value extrapolation
- Value learning
- Whole brain emulation
- Wireheading
In managing the project, I focused on content over presentation, so a number of articles still have minor issues; the grammar and style, for instance, leave room for improvement. It's our hope that, with the largest part of the work already done, the LW community will help improve the articles even further.
Thanks to everyone who worked on these pages: Alex Altair, Adam Bales, Caleb Bell, Costanza Riccioli, Daniel Trenor, João Lourenço, Joshua Fox, Patrick Rhodes, Pedro Chaves, Stuart Armstrong, and Steven Kaas.
LW wiki articles I wish LWers would write/expand: