Without doing any cost-benefit analysis, I can tell you that, of the three so far, this one gives me by far the most fuzzies, just thinking about it. A scholarly wiki? Boring. Research? Boring. Short primers on crucial topics??? That sounded less boring in my head.
I couldn't tell you why this happened. Maybe I just really liked Facing the Singularity more than I realized. Does anyone else have a similar reaction?
Passing out flyers seems superior to leaving books around. It more closely resembles the awareness-raising methods used by most charities, and I think a flyer can be a more effective sales pitch (with a pointer to a website where you can read more) than a book cover. Additionally, it should be far cheaper per person reached, and could give Less Wrong users practice with rejection therapy.
I have a friend who passed out flyers with some success for his life extension charity, and claims to have a contact in the Berkeley area who will pass out flyers for cheap. He tried to get Michael Anissimov to design an SI flyer for this guy to pass out, but Anissimov didn't end up going for it. Get in touch with me if you want.
If anyone feels they know the issues well enough to co-write a succinct, informative, and punchy SI flyer with me, I encourage them to get in contact: michael@intelligence.org. My other assignments prevent me from following through on this alone, I'm afraid. I do appreciate being encouraged to do this; I just feel it's too much responsibility to take on alone. Such a flyer would need to be of high quality to give a favorable impression.
Is the cover design shown here (1) just for fun here on LW, or (2) something you're thinking of actually doing on actual kinda-book-like entities?
If the latter, then you might want to reconsider the merits of making it quite so blatant a ripoff of the famous "Very Short Introduction" series of books. That seems like it might ring some readers' confidence-trick alarm bells. (It certainly does mine.)
Looking at the Facing the Singularity page, I just realized again how wrong it is from the perspective of convincing people who are not already inclined to believe that stuff. The header, the title, the text...wrong, wrong, wrong!
Facing the Singularity
The advent of an advanced optimization process and its global consequences
Sometime this century, machines will surpass human levels of intelligence and ability. This event — the “Singularity” — will be the most important event in our history, and navigating it wisely will be the most important thing we can ever do.
The speed of technological progress suggests a non-negligible probability that, sometime this century, advanced general-purpose optimization processes will be invented that exhibit many features of general intelligence as envisioned by proponents of strong AI (artificial intelligence that matches or exceeds human intelligence), while lacking other important characteristics.
This paper will give a rough overview of: 1) the expected power of such optimization processes; 2) their lack of important characteristics intuitively associated with intelligent agents, such as the consideration of human values when optimizing the environment; 3) the associated negative consequences and their expected scale; 4) the importance of research in preparation for such a possibility; and 5) a bibliography of advanced supplementary material.
As of 1997, more than 95% of research articles in the Science Citation Index were written in English. Being able to read and write in English is a hard requirement for participation in the community of scholars in STEM disciplines, and somewhere between a hard requirement and very, very useful elsewhere. I doubt there are any top-level philosophers who can't read English well enough to parse extremely complicated arguments. Whether they can write, speak, or listen as well, I don't know.
Facing the Singularity is approximately 14,000 words. The hypothetical 10-page primers would probably be even shorter, maybe 3,000 words, although hoping to get them down to 10 pages might be optimistic. So if translations to other languages are similarly priced, you're looking at around $600 for all four translations of Facing the Singularity, or around $100 for the shorter primers.
This doesn't include "checks and improvements by multiple translators", but I imagine those can probably be obtained more cheaply than an actual translation, and it seems like $2000 is far too high an estimate for the cost.
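The scaling behind these figures can be sketched in a few lines. This is only a back-of-the-envelope model assuming cost is proportional to word count; the per-word rate is not stated in the thread and is back-solved here from the quoted ~$600 total for four translations of a 14,000-word document:

```python
# Rough translation-cost scaling, assuming cost ∝ word count.
# The per-word rate is an illustrative assumption, derived from the quoted
# figure of ~$600 for four translations of a ~14,000-word document.

WORDS_FULL = 14_000    # Facing the Singularity, approximate length
WORDS_PRIMER = 3_000   # hypothetical short primer
NUM_LANGUAGES = 4

RATE_PER_WORD = 600 / (WORDS_FULL * NUM_LANGUAGES)  # ≈ $0.0107 per word

def translation_cost(words: int, languages: int, rate: float = RATE_PER_WORD) -> float:
    """Estimated total cost to translate a document into several languages."""
    return words * languages * rate

print(round(translation_cost(WORDS_FULL, NUM_LANGUAGES)))    # 600
print(round(translation_cost(WORDS_PRIMER, NUM_LANGUAGES)))  # 129, i.e. "around $100"
```

At this assumed rate, the four primer translations come out to roughly $130, consistent with the "around $100" estimate above.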
Before you continue with this, you should maybe try to get someone important to read 'Facing the Singularity' without trying too hard. If that doesn't work, then...
I have my doubts that someone like Terence Tao would read your primer.
For some time now I have been watching the real-time stats for my homepage, especially when I post links in places where people of similar calibre to Terence Tao are chatting. And I seldom get more than 2 clicks, even when more than 20 people are conversing in that thread.
Now, it is true that I am a nobody; why would they read a post written on my personal blog? But how would they know that something called 'Facing the Singularity' is more worthy of their attention?
If I really wanted to, I could probably get them to read my stuff. But that's difficult, and would probably take a middleman who shares a link to it on his blog/Google+/Facebook page and whose stuff is subsequently read by top-notch people.
Series: How to Purchase AI Risk Reduction
Here's another way we might purchase existential risk reduction: the production of short primers on crucial topics.
Resources like The Sequences and NickBostrom.com have been incredibly effective at gathering and creating a community engaged in x-risk reduction (either through direct action or, perhaps more importantly, through donations), but most people who could make a difference probably won't take the time to read The Sequences or academic papers.
One solution? Short primers on crucial topics.
Facing the Singularity is one example. I'm waiting for some work from remote researchers before I write the last chapter, but once it's complete we'll produce a PDF version and a Kindle version. Already, several people (including Jaan Tallinn) use it as a standard introduction they send to AI risk newbies.
Similar documents (say, 10 pages in length) could be produced for topics like Existential Risk, AI Risk, Friendly AI, Optimal Philanthropy, and Rationality. These would be concise, fun to read, and emotionally engaging, while also being accurate and thoroughly hyperlinked/referenced to fuller explanations of each section and major idea (on LessWrong, in academic papers, etc.).
These could even be printed and left lying around wherever we think is most important: say, at the top math, computer science, and formal philosophy departments in the English-speaking world.
The major difficulty in executing such a project would be finding good writers with the relevant knowledge. Eliezer, Yvain, and I might qualify, but right now the three of us are otherwise occupied. The time investment of the primary author(s) could be minimized by outsourcing as much of the work as possible to SI's team of remote researchers, writers, and editors.
Estimated cost per primer: