Series: How to Purchase AI Risk Reduction
Here's another way we might purchase existential risk reduction: the production of short primers on crucial topics.
Resources like The Sequences and NickBostrom.com have been incredibly effective at creating and gathering a community engaged in x-risk reduction (whether through direct action or, perhaps more importantly, through donations), but most people who could make a difference probably won't take the time to read The Sequences or academic papers.
One solution? Short primers on crucial topics.
Facing the Singularity is one example. I'm waiting for some work from remote researchers before I write the last chapter, but once it's complete we'll produce a PDF version and a Kindle version. Already, several people (including Jaan Tallinn) use it as a standard introduction they send to AI risk newbies.
Similar documents (say, 10 pages in length) could be produced for topics like Existential Risk, AI Risk, Friendly AI, Optimal Philanthropy, and Rationality. These would be concise, fun to read, and emotionally engaging, while also being accurate and thoroughly hyperlinked/referenced to fuller explanations of each section and major idea (on LessWrong, in academic papers, etc.).
These could even be printed and left lying around wherever we think is most important: say, at the top math, computer science, and formal philosophy departments in the English-speaking world.
The major difficulty in executing such a project would be finding good writers with the relevant knowledge. Eliezer, Yvain, and I might qualify, but right now the three of us are otherwise occupied. The time investment of the primary author(s) could be minimized by outsourcing as much of the work as possible to SI's team of remote researchers, writers, and editors.
Estimated cost per primer:
- 80 hours from the primary author. (Well, if it's me. I've put about 60 hours into writing Facing the Singularity so far, which is of similar length to the proposed primers, but I'm adding some padding to the estimate.)
- $4,000 on remote research. (Tracking down statistics and references, etc.)
- $1,000 on book design, Kindle version production, etc.
Looking at the page of Facing the Singularity, I just realized again how wrong it is from the perspective of convincing people who are not already inclined to believe that stuff. The header, the title, the text... wrong, wrong, wrong!
The advent of an advanced optimization process and its global consequences
The speed of technological progress suggests a non-negligible probability that advanced general-purpose optimization processes will be invented sometime this century, exhibiting many features of general intelligence as envisioned by proponents of strong AI (artificial intelligence that matches or exceeds human intelligence) while lacking other important characteristics.
This paper will give a rough overview of 1) the expected power of such optimization processes; 2) the lack of important characteristics intuitively associated with intelligent agents, such as the consideration of human values when optimizing the environment; 3) the associated negative consequences and their expected scale; 4) the importance of research in preparation for such a possibility; and 5) a bibliography of advanced supplementary material.
I see the problem you're pointing out, but I disagree with your solution. If the title and intro are that technical, then it's not off-putting to skeptics, it's just... boring.
Unless you're being sarcastic?