I had three new papers either published or accepted for publication last year; all of them are now available online:
How Feasible is the Rapid Development of Artificial Superintelligence? Physica Scripta 92 (11), 113001.
Abstract: What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: 1) How much more capable could AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and pattern recognition. We find that although there are very real limits to prediction, it seems like AI could still substantially improve on human intelligence.
Disjunctive Scenarios of Catastrophic AI Risk. AI Safety and Security (Roman Yampolskiy, ed.), CRC Press. Forthcoming.
Abstract: Artificial intelligence (AI) safety work requires an understanding of what could cause AI to become unsafe. This chapter seeks to provide a broad look at the various ways in which the development of AI sophisticated enough to have general intelligence could lead to it becoming powerful enough to cause a catastrophe. In particular, the present chapter seeks to focus on the way that various risks are disjunctive—on how there are multiple different ways by which things could go wrong, any one of which could lead to disaster. We cover different levels of strategic advantage an AI might acquire, alternatives for the point where an AI might decide to turn against humanity, different routes by which an AI might become dangerously capable, ways by which the AI might acquire autonomy, and scenarios with varying numbers of AIs. Whereas previous work has focused specifically on risks from superintelligent AI, this chapter also discusses crucial capabilities that could lead to catastrophic risk and that could emerge anywhere on the path from near-term “narrow AI” to full-blown superintelligence.
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Informatica 41 (4).
(with Lukas Gloor)
Abstract: Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are comparable to extinction risks in both severity and probability. Preventing them is the common interest of many different value systems. Furthermore, we argue that just as superintelligent AI both contributes to existential risk and can help prevent it, superintelligent AI can both be a suffering risk and help avoid one. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may also be a class of safeguards for AI that helps specifically against s-risks.
In addition, my old paper Responses to Catastrophic AGI Risk (with Roman Yampolskiy) was republished, with some minor edits, as the book chapters “Risks of the Journey to the Singularity” and “Responses to the Journey to the Singularity”, in The Technological Singularity: Managing the Journey (Victor Callaghan et al., eds.), Springer-Verlag.