Will_Newsome comments on Friendly AI Research and Taskification - Less Wrong

22 points - Post author: multifoliaterose - 14 December 2010 06:30AM




Comment author: Will_Newsome 15 December 2010 10:37:36AM 6 points

I'm not speaking for SIAI as this is more of a Visiting Fellows thing than an SIAI thing, but there are people working on Friendliness, and creating a Friendliness roadmap. We have lists of hundreds of problems, and lists of potentially relevant fields or concepts. Work is getting started on combining these lists into a real roadmap despite the uncertainty and differences of emphasis among researchers. Obviously we'd rather not release things for the public to see unless there are rather good reasons for doing so -- less output means less chance of screwing up public relations, which is important because SIAI Visiting Fellows output is easy to conflate with SIAI output in ways that might be misleading.

I've started a blog where I'll put my own thoughts on something-like-Friendliness that I feel are not at all dangerous, and I might encourage other Friendliness researchers to do the same. I'll link to my blog in a discussion post once I have a few more posts seeded. At some point you might see summaries of collaborative research somewhere, but until we have a better idea of who our audience is and what security precautions are sane, we'd like to work quietly. Again, I'm mostly speaking for myself, kind of speaking for a group of partially-SIAI-affiliated folk, and not at all for SIAI as an organization.

(There aren't that many people that can speak for SIAI, unfortunately. Like, two maybe. If you're an Oppenheimer (strong rationality and remarkable ability to get uber-nerds to work like a well-oiled machine), please consider applying for Visiting Fellowship. We're a bright group, but that has more to do with being bright than it has to do with being a group, and we'd like to change that.)

Comment author: cousin_it 15 December 2010 12:20:58PM * 7 points

You have hundreds of subproblems that need to be solved? And you're making a special effort to keep them secret from people on LW? Just... wow. How dumb would you have to be? Excuse me while I beat my head against the wall for a while.

Comment author: Will_Newsome 15 December 2010 12:42:11PM * 5 points

They're not lists of hundreds of subproblems that need to be solved, they're lists of hundreds of subproblems. Large difference. We don't know which ones need to be solved, nor whether the problems are phrased correctly, et cetera. Nor are we making a special effort to keep them secret; we're just not making an effort to publicize quarter-done brainstorming sessions.

If you want an example of some ideas that I wasn't bothering to publicize yet, then here: http://willnewsome.wordpress.com/ . I just made that today in response to this discussion post. They're not polished, but they might be thought-provoking. At some point we'll probably try to start listing open problems for LW or e.g. mathoverflow people to work on, but we're not at that stage yet. A few months maybe?

Comment author: cousin_it 15 December 2010 12:49:30PM * 2 points

Thanks for the link.

ETA: the angry tone of my first reply was prompted by your use of the words "dangerous" and "security precautions". Your second comment doesn't mention these, which is nice :-)

Comment author: Kaj_Sotala 18 December 2010 11:12:05AM * 5 points

> I'm not speaking for SIAI as this is more of a Visiting Fellows thing than an SIAI thing, but there are people working on Friendliness, and creating a Friendliness roadmap. We have lists of hundreds of problems, and lists of potentially relevant fields or concepts.

Meh. Now I'm a bit annoyed, in that I did try to poke people in a direction where they'd do something like that when I was there as a Visiting Fellow, but mostly the reaction seemed to be "we should leave all thinking about Friendliness to Eliezer". But upon reflection, I realize that I may not have been as vocal about it as I thought I was (illusion of transparency and all that), so I guess I only have myself to blame for you guys only starting on all the really interesting stuff after I left. ;p

Comment author: Vladimir_Nesov 18 December 2010 11:24:59AM 4 points

> Meh. Now I'm a bit annoyed, in that I did try to poke people in a direction where they'd do something like that when I was there as a Visiting Fellow, but mostly the reaction seemed to be "we should leave all thinking about Friendliness to Eliezer".

That's... disturbing, although expected. Why isn't the Visiting Fellows program being used to strengthen this line of research? There could be practical difficulties in moving in that direction quickly, but it's confusing if it's not even a goal.

Comment author: wedrifid 18 December 2010 11:17:22AM 3 points

> but mostly the reaction seemed to be "we should leave all thinking about Friendliness to Eliezer".

Huh? But, but... surely there's room for at least half a dozen people in that particular basement!

Comment author: multifoliaterose 15 December 2010 08:15:37PM * 5 points
  1. I'm encouraged by what you say here. The doubt as to the value of Friendliness research that I express above is doubt as to the value of researching Friendly AI without a taskification, rather than doubt as to the value of researching what a taskification might look like.

  2. If you haven't done so, I think it would be worthwhile to ask the SIAI staff whether they might be comfortable with classifying (some of?) the output of the SIAI Visiting Fellows as part of SIAI's output. As I said in response to a comment by WrongBot, I've gathered that the SIAI Visiting Fellows program is a good thing, but there's been relatively little public documentation of what the Visiting Fellows have been doing. I would guess that a policy of such public documentation would improve SIAI's credibility.

  3. While I didn't read your comment in the way that cousin_it did, I can see why he would do so. I've gotten a vague impression from talking to a number of people loosely or directly connected with SIAI that SIAI has been keeping their research secret on the grounds that releases to the public could be dangerous on account of speeding unfriendly AI research. In view of how primitive the study of AGI looks, the apparent infeasibility of SIAI unilaterally building the first AGI, and the fact that Friendliness research would not seem to significantly speed the creation of unfriendly AI, such a policy seems highly dubious to me. So I was happy to hear that you and your collaborators are planning on putting some of what you've been doing out in the open in roughly a few months.

  4. Thanks for the link to your blog posts.