Less Wrong is a community blog devoted to refining the art of human rationality.

Proposal for "Open Problems in Friendly AI"

Post author: lukeprog | 01 June 2012 02:06AM | 26 points

Series: How to Purchase AI Risk Reduction

One more project SI is considering...

When I was hired as an intern for SI in April 2011, one of my first proposals was that SI create a technical document called Open Problems in Friendly Artificial Intelligence. (Here is a preview of what the document would be like.)

When someone becomes persuaded that Friendly AI is important, their first question is often: "Okay, so what's the technical research agenda?"

So You Want to Save the World maps out some broad categories of research questions, but it doesn't explain what the technical research agenda is. In fact, SI hasn't yet explained much of the technical research agenda.

Much of the technical research agenda should be kept secret for the same reasons you might want to keep secret the DNA for a synthesized supervirus. But some of the Friendly AI technical research agenda is safe to explain so that a broad research community can contribute to it.

This research agenda includes:

  • Second-order logical version of Solomonoff induction.
  • Non-Cartesian version of Solomonoff induction.
  • Constructing utility functions from psychologically realistic models of human decision processes.
  • Formalizations of value extrapolation. (Like Christiano's attempt.)
  • Microeconomic models of self-improving systems (e.g. takeoff speeds).
  • ...and several other open problems.

The goal would be to define the open problems as formally and precisely as possible. Some will be more formalizable than others, at this stage. (As a model for this kind of document, see Marcus Hutter's Open Problems in Universal Induction and Intelligence.)
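As rough background for the first two items above (my gloss, not part of the original proposal): classical Solomonoff induction defines a universal prior over observation sequences by summing over all programs that reproduce them. For a universal prefix Turing machine $U$, the prior probability of a string $x$ is conventionally written:

```latex
% Solomonoff's universal prior: sum over all programs p whose
% output on the universal prefix machine U begins with x
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

Roughly, the "second-order logical" item asks for an analogue in which hypotheses range over sentences of second-order logic rather than computable programs, and the "non-Cartesian" item asks how to adapt the framework for an agent that is embedded in the environment it is predicting, rather than separated from it by a clean input channel.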

Nobody knows the open problems in Friendly AI research better than Eliezer, so it would probably be best to approach the project this way:

  1. Eliezer spends a month writing an "Open Problems in Friendly AI" sequence for Less Wrong.
  2. Luke organizes a (fairly large) research team to present these open problems with greater clarity and thoroughness, in mainstream academic form.
  3. These researchers collaborate for several months to put together the document, involving Eliezer when necessary.
  4. SI publishes the final document, possibly in a journal.

Estimated cost:

  • 2 months of Eliezer's time.
  • 150 hours of Luke's time.
  • $40,000 for contributed hours from staff researchers, remote researchers, and perhaps domain experts (as consultants) from mainstream academia.


Comments (14)

Comment author: Vladimir_Nesov 01 June 2012 09:32:44PM 14 points [-]

The implicit analogy drawn in the introduction between Eliezer Yudkowsky and both Henri Poincaré and David Hilbert gives a bad arrogance vibe.

Comment author: Normal_Anomaly 01 June 2012 12:18:12PM 9 points [-]

This is the sort of thing I want to see more of from SI: both the technical research agenda info and the regular posts by Luke about what SI is doing right now. Thanks!

Comment author: RobertLumley 01 June 2012 02:59:59AM 6 points [-]

Might it be worth tagging each of these potential proposals with a certain tag so we could look at them all and evaluate them comparatively?

Comment author: lukeprog 01 June 2012 03:15:17AM *  4 points [-]

Here is a table of contents.

Comment author: Wei_Dai 06 June 2012 12:31:22AM *  5 points [-]
  • Second-order logical version of Solomonoff induction.

I'm not sure this is the right problem. See this post I just made.

  • Non-Cartesian version of Solomonoff induction.

UDT seems to solve this well enough that I no longer consider it a major open problem. Is this not your or Eliezer's evaluation?

Comment author: Nisan 02 June 2012 03:49:32PM 4 points [-]

These problems sound really really interesting.

Comment author: asr 01 June 2012 05:28:19AM 2 points [-]

Interesting direction.

Couple small questions:

  • Who is the intended audience for this document? Would you be able to name specific researchers who you are hoping to influence?

  • As an alternative formulation, what's the community in which you hope to publish this?

  • Why is Eliezer-time measured in months, and Luke-time in hours?

  • Do you expect to involve folks who haven't previously been involved with SIAI? If so, when?

  • How large a research team / author list would you expect the final version to have? Is fairly large "5" or "15"?

Comment author: Solvent 01 June 2012 06:24:03AM *  3 points [-]

Why is Eliezer-time measured in months, and Luke-time in hours?

That's a good question, especially considering that 250 hours is on the order of months (6 weeks at 40 hours/week, or 4 weeks at 60 hours/week).

EDIT: Units confusion

Comment author: lukeprog 01 June 2012 09:45:25AM 3 points [-]

Oops, I meant 150 hours for me.

Eliezer's time is measured in months because he tracks his time in days, not hours, so I have an easier time predicting how many days (which I can convert to months) something will take Eliezer to complete, rather than how many hours.

Comment author: faul_sname 01 June 2012 05:32:13PM 1 point [-]

I get 6 weeks at 40 hours/week.

Comment author: Solvent 03 June 2012 11:33:35PM 0 points [-]

Yep, thanks for that.

Comment author: lukeprog 01 June 2012 04:16:19PM 1 point [-]

Who is the intended audience for this document?

Every smart person who is fairly persuaded to care about AI risk and then asks us, "Okay, so what's the technical research agenda?" This is a lot of people.

what's the community in which you hope to publish this?

It doesn't matter much. It's something we would email to particular humans who are already interested.

Why is Eliezer-time measured in months, and Luke-time in hours?

See here.

Do you expect to involve folks who haven't previously been involved with SIAI? If so, when?

Possibly, e.g. domain experts in micro-econ. When we need them.

How large a research team / author list would you expect the final version to have? Is fairly large "5" or "15"?

My guess is 10-ish.

Comment author: [deleted] 29 July 2015 10:03:44AM 0 points [-]

Much of the technical research agenda should be kept secret for the same reasons you might want to keep secret the DNA for a synthesized supervirus. But some of the Friendly AI technical research agenda is safe to explain so that a broad research community can contribute to it.

I'm uncomfortable with this.

Since this 2012 post, has MIRI updated its stance on self-censoring the AI research agenda, and can this be demonstrated with reference to formerly censored material or otherwise?

If not, are there alternative Friendly-AI-focused organisations that accept donations and censor differently, or don't censor at all?

Thanks for your disclosures, lukeprog; I appreciate the general candor and accountability. It was also nice to read that you were an SI intern in 2011 - quickly you rose to the top! :)