Series: How to Purchase AI Risk Reduction
One more project SI is considering...
When I was hired as an intern for SI in April 2011, one of my first proposals was that SI create a technical document called Open Problems in Friendly Artificial Intelligence. (Here is a preview of what the document would be like.)
When someone becomes persuaded that Friendly AI is important, their first question is often: "Okay, so what's the technical research agenda?"
So You Want to Save the World maps out some broad categories of research questions, but it doesn't explain what the technical research agenda is. In fact, SI hasn't yet explained much of the technical research agenda at all.
Much of the technical research agenda should be kept secret for the same reasons you might want to keep secret the DNA for a synthesized supervirus. But some of the Friendly AI technical research agenda is safe to explain so that a broad research community can contribute to it.
This research agenda includes:
- Second-order logical version of Solomonoff induction.
- Non-Cartesian version of Solomonoff induction.
- Constructing utility functions from psychologically realistic models of human decision processes.
- Formalizations of value extrapolation. (Like Christiano's attempt.)
- Microeconomic models of self-improving systems (e.g. takeoff speeds).
- ...and several other open problems.
The goal would be to define the open problems as formally and precisely as possible. Some will be more formalizable than others at this stage. (As a model for this kind of document, see Marcus Hutter's Open Problems in Universal Induction and Intelligence.)
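To give a rough sense of the level of formality being aimed at (this is only an illustration, not material from the proposed document), the first two items above ask how to generalize classical Solomonoff induction, whose universal prior over binary strings is standardly written as:

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}$$

where $U$ is a universal prefix Turing machine and the sum ranges over all programs $p$ whose output begins with the string $x$. The open problems ask, roughly, how to restate something like this for agents that reason in second-order logic, and for agents embedded in (rather than separated from) the environment they are predicting.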
Nobody knows the open problems in Friendly AI research better than Eliezer, so it would probably be best to approach the project this way:
- Eliezer spends a month writing an "Open Problems in Friendly AI" sequence for Less Wrong.
- Luke organizes a (fairly large) research team to present these open problems with greater clarity and thoroughness, in mainstream academic form.
- These researchers collaborate for several months to put together the document, involving Eliezer when necessary.
- SI publishes the final document, possibly in a journal.
Estimated cost:
- 2 months of Eliezer's time.
- 150 hours of Luke's time.
- $40,000 for contributed hours from staff researchers, remote researchers, and perhaps domain experts (as consultants) from mainstream academia.
I'm uncomfortable with this.
Since 2012, has MIRI updated its stance on self-censoring its AI research agenda, and can this be demonstrated with reference to formerly censored material or otherwise?
If not, are there alternative Friendly-AI-focused organisations that accept donations and censor differently, or don't censor at all?
Thanks for your disclosures, Lukeprog; I appreciate the general candor and accountability. It was also nice to read that you were an SI intern in 2011 - quickly you rose to the top! :)