Series: How to Purchase AI Risk Reduction

Here's another way we might purchase existential risk reduction: the production of short primers on crucial topics.

Resources like The Sequences and NickBostrom.com have been incredibly effective at building a community engaged in x-risk reduction (either through direct action or, perhaps more importantly, through donations), but most people who could make a difference probably won't take the time to read The Sequences or academic papers.

One solution? Short primers on crucial topics.

Facing the Singularity is one example. I'm waiting for some work from remote researchers before I write the last chapter, but once it's complete we'll produce a PDF version and a Kindle version. Already, several people (including Jaan Tallinn) use it as a standard introduction they send to AI risk newbies.

Similar documents (say, 10 pages in length) could be produced for topics like Existential Risk, AI Risk, Friendly AI, Optimal Philanthropy, and Rationality. These would be concise, fun to read, and emotionally engaging, while also being accurate and thoroughly hyperlinked/referenced to fuller explanations of each section and major idea (on LessWrong, in academic papers, etc.).

These could even be printed and left lying around wherever we think is most important: say, at the top math, computer science, and formal philosophy departments in the English-speaking world.

The major difficulty in executing such a project would be finding good writers with the relevant knowledge. Eliezer, Yvain, and I might qualify, but right now the three of us are otherwise occupied. The time investment of the primary author(s) could be minimized by outsourcing as much of the work as possible to SI's team of remote researchers, writers, and editors.

Estimated cost per primer:

  • 80 hours from the primary author. (Well, if it's me: I've put about 60 hours into writing Facing the Singularity so far, which is of similar length to the proposed primers, but I'm adding some padding to the estimate.)
  • $4,000 on remote research. (Tracking down statistics and references, etc.)
  • $1,000 on book design, Kindle version production, etc.

Translations to other languages could also be produced, for an estimated cost of $2,000 per translation (this includes checks and improvements by multiple translators).

 

24 comments

Thanks for laying out these specific proposals, Luke.

Whatever you guys decide to go with, I am much more tempted to donate to SI because of these.

The AI Risk wiki still seems worth doing first, because by rounding out the details of the argument it increases the proportion of Luke Muehlhausers to Holden Karnofskys among the people it creates.

Without doing any cost-benefit analysis, I can tell you that, of the three so far, this one gives me by far the most fuzzies, just thinking about it. A scholarly wiki? Boring. Research? Boring. Short primers on crucial topics??? That sounded less boring in my head.

I couldn't tell you why this happened. Maybe I just really liked Facing the Singularity more than I realized. Does anyone else have a similar reaction?

I like this idea overall, but SI might want to think twice before it risks becoming known as a group that leaves books promoting its ideas lying around.

Also, I'm doubtful of the claim that such introductory books can only be usefully produced by top Less Wrong contributors.

Passing out flyers seems superior to leaving books around. It more closely resembles awareness raising methods used by most charities, and I think a flyer can be a more effective sales pitch (with a pointer to a website where you can read more) than a book cover. Additionally it should be cheaper per person reached by far, and could give Less Wrong users practice with rejection therapy.

I have a friend who passed out flyers with some success for his life extension charity, and claims to have a contact in the Berkeley area who will pass out flyers for cheap. He tried to get Michael Anissimov to design an SI flyer for this guy to pass out, but Anissimov didn't end up going for it. Get in touch with me if you want.

I would like to see the LW sandwich board.

It feels like the best sandwich board would combine a provocative, intriguing claim with some sort of insider signal that the board user is conversant with advanced math, programming, or what have you.

The nearest thing I've seen so far:

If anyone feels that they know the issues (extremely) well and could co-write a succinct, informative, and punchy SI flyer with me, I encourage them to get in contact: michael@intelligence.org. My other assignments prevent me from following through on this alone, I'm afraid. I do appreciate being encouraged to do this; I just feel that it's too much responsibility to take on alone. Such a flyer would need to be of high quality to give a favorable impression.

Also, I'm doubtful of the claim that such introductory books can only be usefully produced by top Less Wrong contributors.

I never said "only." If you have suggestions, I would love to hear them!

Unlike the wiki idea, this is something that I can wholeheartedly endorse. Even shorter summaries of the primer (flyer or one-page sizes) would be good too.

$1,000 on book design, Kindle version production, etc.

I would recommend at least doubling this budget, I think (with the understanding that you don't have to spend it all). These should look really appealing, and it might be beneficial for them to be illustrated on the interior.

Yeah, not a bad idea.


Is the cover design shown here (1) just for fun here on LW, or (2) something you're thinking of actually doing on actual kinda-book-like entities?

If the latter, then you might want to reconsider the merits of making it quite so blatant a ripoff of the famous "Very Short Introduction" series of books. That seems like it might ring some readers' confidence-trick alarm bells. (It certainly does mine.)

Looking at the page of Facing the Singularity, I just realized again how wrong it is from the perspective of convincing people who are not already inclined to believe that stuff. The header, the title, the text... wrong, wrong, wrong!

Facing the Singularity

The advent of an advanced optimization process and its global consequences

Sometime this century, machines will surpass human levels of intelligence and ability. This event — the “Singularity” — will be the most important event in our history, and navigating it wisely will be the most important thing we can ever do.

The speed of technological progress suggests a non-negligible probability of the invention of advanced general-purpose optimization processes, sometime this century, exhibiting many features of general intelligence as envisioned by the proponents of strong AI (artificial intelligence that matches or exceeds human intelligence) while lacking other important characteristics.

This paper will give a rough overview of (1) the expected power of such optimization processes, (2) the lack of important characteristics intuitively associated with intelligent agents, such as the consideration of human values in optimizing the environment, (3) the associated negative consequences and their expected scale, (4) the importance of research in preparation for such a possibility, and (5) a bibliography of advanced supplementary material.

I see the problem you're pointing out, but I disagree with your solution. If the title and intro are that technical, then it's not off-putting to skeptics, it's just... boring.

Unless you're being sarcastic?

...say, at the top math, computer science, and formal philosophy departments in the English-speaking world.

People at top academic departments everywhere in the world speak English... (which is probably true even for the janitor when it comes to some western countries).

How well do they, though? I've seen a few academics around me who have enough command of English to get by, but they might still miss some of the subtler points. They just can't reason as well in English as they do in their mother tongue.

As of 1997, more than 95% of research articles in the Science Citation Index were written in English. Being able to read and write in English is a hard requirement for participation in the community of scholars in STEM disciplines, and somewhere between a hard requirement and very, very useful elsewhere. I doubt there are any top-level philosophers who can't read English well enough to parse extremely complicated arguments. Whether they can write, speak, or listen as well, dunno.

A guy who works for a book publisher once told me that they pay about 8 euros per 1,000 words to a good translator for books they translate from foreign languages. By that rate, you could have a 100,000-word text translated into Romanian for 800 euros.

Facing the Singularity is approximately 14,000 words. The hypothetical 10-page primers would probably be even shorter, maybe 3,000 words, although hoping to get them down to 10 pages might be optimistic. So if translations to other languages are similarly priced, you're looking at around $600 for all four translations of Facing the Singularity, or around $100 for the shorter primers.

This doesn't include "checks and improvements by multiple translators", but I imagine those can probably be obtained more cheaply than an actual translation, and it seems like $2,000 is far too high an estimate for the cost.
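For concreteness, here is a minimal Python sketch of that back-of-the-envelope calculation. The 8-euro rate and the word counts come from the comments above; the dollar-euro exchange rate and the helper function name are illustrative assumptions, not figures from the post.

```python
# Back-of-the-envelope check of the translation estimates above.
# The 8 EUR per 1,000 words rate and the word counts are from the comments;
# the exchange rate is an assumed figure (circa 2012) for illustration only.

RATE_EUR_PER_1000_WORDS = 8    # rate quoted by the publisher contact
USD_PER_EUR = 1.30             # assumed exchange rate
NUM_TRANSLATIONS = 4           # "all four translations" mentioned above


def translation_cost_usd(word_count: int) -> float:
    """Estimated USD cost to translate a text into NUM_TRANSLATIONS languages."""
    cost_eur = (word_count / 1000) * RATE_EUR_PER_1000_WORDS * NUM_TRANSLATIONS
    return cost_eur * USD_PER_EUR


print(round(translation_cost_usd(14_000)))  # Facing the Singularity: ~582, i.e. "around $600"
print(round(translation_cost_usd(3_000)))   # a shorter primer: ~125, i.e. roughly $100
```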

There are always unanticipated costs. I find that I generally need 1.5-2 times the amount of money I can identify a use for beforehand.

The wiki, research, and short primer proposals signal a slight strategic turn in SI. To me, the short primer proposal sounds more academic, even if the primers don't come with "Oxford University Press" on the front.

Before you continue with this, you should maybe try to get someone important to read 'Facing the Singularity' without trying too hard. If that doesn't work, then...

I have my doubts that someone like Terence Tao would read your primer.

For some time now I have been watching the real-time stats for my homepage, especially when I post links in places where people of a similar calibre to Terence Tao are chatting. And I seldom get more than two clicks, even if more than 20 people are conversing in that thread.

Now, it is true that I am a nobody; why would they read a post written on my personal blog? But how would they know that something called 'Facing the Singularity' is more worthy of their attention?

If I really wanted to, I would probably be able to get them to read my stuff. But that's difficult, and would probably take a middleman who shares a link to it on his blog/Google+/Facebook page and whose stuff is subsequently read by top-notch people.