This is a list of journals and conferences which have previously published papers on the topics of superintelligence, AI safety, AI risk, the AI alignment problem, etc., and which may accept papers related to these topics in the future.
This list covers only publications related to artificial general intelligence; concerns about the safety of narrow-AI systems are not considered topical.
AI & Society
Examples: Racing to the Precipice: a Model of Artificial Intelligence Development; Social choice ethics in artificial intelligence; Reconciliation between factions focused on near-term and long-term artificial intelligence; On the promotion of safe and socially beneficial artificial intelligence; The problem of superintelligence: political, not technological
Analysis and Metaphysics
Examples: General purpose intelligence: arguing the orthogonality thesis
Ethics and Information Technology
Examples: Human-aligned artificial intelligence is a multiobjective problem
Global Policy
Examples: Strategic Implications of Openness in AI Development
Informatica
Examples: Special issue on Superintelligence, including Superintelligence As a Cause or Cure For Risks of Astronomical Suffering; Modeling and Interpreting Expert Disagreement About Artificial Superintelligence; Conceptual-Linguistic Superintelligence; Mammalian Value Systems; Robust Computer Algebra, Theorem Proving, and Oracle AI
Notes: there appear to be two different journals called Informatica; make sure to get the right one.
International Journal of Machine Consciousness
Examples: Advantages of Artificial Intelligences, Uploads, and Digital Minds; Consciousness and Ethics: Artificially Conscious Moral Agents
Journal of Consciousness Studies
Examples: The Singularity: A Philosophical Analysis; Motivational Defeaters of Self-Modifying AGIs; Superintelligence as Moral Philosopher
Journal of Experimental & Theoretical Artificial Intelligence
Examples: A model of pathways to artificial superintelligence catastrophe for risk and decision analysis
Minds and Machines
Examples: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents; Thinking inside the box: Controlling and using an oracle AI; Why AI Doomsayers are like Sceptical Theists and Why it Matters
Physica Scripta
Examples: Responses to Catastrophic AGI Risk: A Survey; How Feasible is the Rapid Development of Artificial Superintelligence?
Topics in Cognitive Science
Examples: A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents

Related topics
Journals and conferences that have published on related topics, such as AI timeline forecasting or existential risks in general, should also be listed here.
(todo)
Note: this page was imported from the LessWrong 1.0 wiki and has not been updated since 2018.