Tetronian comments on Singularity Institute Executive Director Q&A #2 - Less Wrong

Post author: lukeprog | 06 January 2012 03:40AM | 20 points


Comment author: [deleted] 06 January 2012 04:05:30AM 20 points

Bugmaster said:

You followed that up by saying that SIAI is more interested in "technical problems in mathematics, computer science, and philosophy" than in experimental AI research...you either aren't doing any AGI research, or are keeping it so secret that no one knows about it (which makes it impossible to gauge your progress, if any), and you aren't developing any practical applications of AI, either

The only part of this Q&A that is relevant to Bugmaster's question is:

Our researchers have done a ton of work that hasn't been written up and published yet

Bugmaster asked specifically what SI is doing to solve open technical FAI/AGI problems, but in this Q&A you still haven't unpacked "research" and "work." People want to know what the hell you guys do all day. Yes, there are large inferential distances involved, and yes, most of the research must be kept secret, but you haven't even told us which subproblems you've made progress on. This is a major credibility issue--your mission clearly states that you will perform "AI Reflectivity & Friendly AI Research," yet you seem to be unable to provide any concrete examples.

Comment author: Dr_Manhattan 10 January 2012 10:09:12PM 3 points

Our researchers have done a ton of work that hasn't been written up and published yet

Perhaps a solution would be to publish at least the titles of the "in the works" papers? If it's really a ton, this should be an impressive list and should increase credibility.

Comment author: lukeprog 06 January 2012 06:30:36AM 3 points

Bugmaster asked "what does the SIAI actually do?" and "what is it that you are actually working on, other than growing the SIAI itself?"

Paragraphs 2, 4, 5, 6, and 7 are lists of things that SIAI has been doing.

As for progress on FAI subproblems: that's precisely the part we mostly haven't written up yet, except for material forthcoming in the publications I mentioned. I see this as a big problem and am working to solve it.

Also, I don't think it's the case that "most" of the research must be kept secret.

Comment author: [deleted] 06 January 2012 01:58:05PM 13 points

I am satisfied with the level of detail you provided for SI's other projects. But you haven't given even the roughest outline of SI's progress on the thing that matters most, actual FAI research. Are these problems so complicated that you can't even summarize them in a few sentences or paragraphs? Frankly, I don't understand why you can't (or won't) say something like, "We've made progress on this, this, and this. Details in forthcoming publications." Even if you were only willing to say something as detailed as, "We fixed some of the problems with timeless decision theory" or "We worked on the AI reflection problem," that would be much more informative than what you've given us. Saying that you've done "a ton of work" isn't really communicating anything.

Comment author: lukeprog 06 January 2012 03:50:26PM * 21 points

Fair enough. I'll share a few examples of progress, though these won't be surprising to people who are on every mailing list, read every LW post, or are in the Bay Area and have regular conversations with us.

  • Much progress on the strategic landscape, e.g. differential technological development analyses, which you'll see in the forthcoming Anna/Luke chapter and in Nick's forthcoming monograph, and which you've already seen in several papers and talks over the past couple of years (most of them involving Carl).
  • Progress on decision theory, largely via the decision theory workshop mailing list, in particular on UDT.
  • Progress in outlining the sub-problems of singularity research, which I've started to write up here.
  • Progress on the value-loading problem, explained here and in a forthcoming paper by Dewey.
  • Progress on the reflectivity problem, in the sense of identifying lots of potential solutions that probably won't work. :)
  • Progress on the preference extraction problem, via incorporating the latest from decision neuroscience.

Still, I'd say more of our work has been focused on movement-building than on cutting-edge research, because we think the most immediate concern is not cutting-edge research but building a larger community of support, funding, and researchers to work on these problems. Three researchers can have more of an impact if they create a platform by which 20 researchers can work on the problem than if they merely do research by themselves.

Comment author: [deleted] 06 January 2012 05:39:15PM 5 points

Thank you, this is exactly the kind of answer I was hoping for.

Comment author: steven0461 07 January 2012 09:35:12PM 3 points

Is the value-loading or value-learning problem the same thing as the problem of moral uncertainty? If no, what am I missing; if yes, why are the official solution candidates different?

Comment author: Bugmaster 06 January 2012 09:15:00PM 3 points

Thanks, this is quite informative, especially your closing paragraph:

Still, I'd say more of our work has been focused on movement-building than on cutting-edge research, because we think the most immediate concern is not cutting-edge research but building a larger community of support, funding, and researchers to work on these problems.

This makes sense to me; have you considered incorporating this paragraph into your core mission statement? Also, what are your thresholds for deciding when to transition from (primarily) community-building to (primarily) doing research?

Also, you mentioned (in your main post) that the SIAI has quite a few papers in the works, awaiting publication; and apparently there are even a few books waiting for publishers. Would it not be more efficient to post the articles and books in question on Less Wrong, or upload them to Pirate Bay, or something to that extent -- at least, while you wait for the meat-space publishers to get their act together? Sorry if this is a naive question; I know very little about the publishing world.

Comment author: lukeprog 06 January 2012 09:36:04PM 1 point

what are your thresholds for deciding when to transition from (primarily) community-building to (primarily) doing research?

We're not precisely sure. It's also a matter of funding. Researchers who can publish "platform research" for academic outreach, problem space clarification, and community building are less expensive than researchers who can solve decision theory, safe AI architectures, etc.

Would it not be more efficient to post the articles and books in question on Less Wrong, or upload them to Pirate Bay, or something to that extent -- at least, while you wait for the meat-space publishers to get their act together?

Like many academics, we generally do publish early drafts of forthcoming articles long before the final version is written and published. Examples: 1, 2, 3, 4.

Comment author: amcknight 07 January 2012 12:34:56AM 2 points

progress on the preference extraction problem via incorporating the latest from decision neuroscience

I'd love to hear more about what areas you're looking into within decision neuroscience.

For those who are also interested and somehow missed these:
Crash Course in Neuroscience of Motivation
and these two neuroeconomics book reviews.

Comment author: lukeprog 07 January 2012 12:54:50AM 1 point

An example: The subject matter of the second chapter of this book (the three competing systems of motivation) looks to have some implications for the value extraction problem. This is the kind of information about how our preferences work that I imagine we'll use to extrapolate our preferences — or that an AI would use to do the extrapolation for us.

Comment author: XiXiDu 06 January 2012 03:12:15PM * 7 points

But you haven't given even the roughest outline of SI's progress on the thing that matters most, actual FAI research.

From what I understand, they can't do that yet. They don't have enough people to make real progress on the important problems, and they don't have enough money to hire more. So they are concentrating on raising awareness of the issue and persuading people to work on it or to contribute money to SI.

The real problem I see is the lack of formalized problems. I think it is very important to formalize some actual problems: doing so would help raise money and would allow others to work on them. To be more specific, I don't think that writing a book on rationality is worth the time it takes when its author is one of the few people who might be capable of formalizing some important problems, especially since there are already many books on rationality. Even if Eliezer Yudkowsky is able to put everything the world knows about rationality together in a concise manner, that is not going to impress the important academics enough to make them believe him on AI issues. He would have done better to write a book on decision theory, where he seems to have some genuine ideas.

Comment author: timtyler 06 January 2012 08:10:20PM * 0 points

The real problem I see is the lack of formalized problems.

There was a list of problems posted recently.

To be more specific, I don't think that writing a book on rationality is worth the time it takes when its author is one of the few people who might be capable of formalizing some important problems, especially since there are already many books on rationality. Even if Eliezer Yudkowsky is able to put everything the world knows about rationality together in a concise manner, that is not going to impress the important academics enough to make them believe him on AI issues.

Rationality is probably a moderately important factor in planetary collective intelligence. Pinker claims that rational thinking + game theory have also contributed to recent positive moral shifts. Though there are some existing books on the topic, it could well be an area where a relatively small effort could produce a big positive result.

However, I'm not entirely convinced that hpmor.com is the best way to go about it...

Comment author: lukeprog 10 January 2012 04:08:13PM * 4 points

It turns out that HPMOR has been great for SI recruiting and networking. IMO winners apparently read HPMOR. So do an absurd number of Googlers.