Wei_Dai comments on A Scholarly AI Risk Wiki - Less Wrong

22 Post author: lukeprog 25 May 2012 08:53PM

Comment author: Wei_Dai 27 May 2012 07:30:59AM 19 points [-]

I vote for spending the resources in one or more of the following ways instead:

  1. Write down any previously unpublished ideas in SIAI people's heads, as concisely and completely as possible, as blog posts or papers.
  2. Incrementally improve the LW Wiki. Add entries for any of the topics on your list that are missing, and link to existing blog posts and papers.
  3. Make a push for "AI Risk" (still don't like the phrase, but that's a different issue) to become a proper academic discipline (i.e., one that's studied by many academics outside FHI and SIAI). I'm not sure how this is usually done, but I think hosting an academic conference and calling for papers would help accomplish this.

(I've noticed a tendency in Luke's LW writings (specifically the Metaethics and AI Risk Strategy sequences) to engage in scholarship and systematically write down the basics in preparation for "getting to the good stuff," and then peter out before actually getting to the good stuff. I don't want to see this repeated by SIAI as a whole, on a larger scale.)

Comment author: lukeprog 30 May 2012 11:01:07PM 4 points [-]

Thanks for your suggestions! All these are, in fact, in the works.

Write down any previously unpublished ideas in SIAI people's heads, as concisely and completely as possible, as blog posts or papers.

Carl is writing up some of these on his blog, Reflective Disequilibrium.

Incrementally improve the LW Wiki. Add entries for any of the topics on your list that are missing, and link to existing blog posts and papers.

This is in my queue of things for remote researchers to do.

Make a push for "AI Risk" to become a proper academic discipline

I'm working with Sotala and Yampolskiy on a paper that summarizes the problem of AI risk and the societal & technical responses to it that have been suggested so far, to give some "form" to the field. I already published my AI Risk Bibliography, which I'll update each year in January. More importantly, AGI-12 is being paired with a new conference called AGI-Impacts. Also, SIAI is developing a "best paper" prize specific to AGI impacts or AI risk or something (we're still talking it through).

Comment author: ghf 27 May 2012 11:36:47PM 4 points [-]

I definitely agree.

For (3), now is the time to get this moving. Right now, machine ethics (especially regarding military robotics) and medical ethics (especially in terms of bio-engineering) are hot topics. Connecting AI Risk to either of these trends would allow you to extend and, hopefully, bud it off as a separate focus.

Unfortunately, academics are pack animals, so if you want to communicate with them, you can't just stake out your own territory and expect them to do the work of coming to you. You have to pick some existing field as a starting point. Then, knowing the assumptions of that field, you point out the differences in what you're proposing and slowly push out and extend towards what you want to talk about (the pseudopod approach). This fits well with (1) since choosing what journals you're aiming at will determine the field of researchers you'll be able to recruit from.

One note: if you hold a separate conference, you are dependent on whatever academic credibility SIAI brings to the table, which at present is none (besides, you already have the Singularity Summit to work with). But if you are able to get a track started at an existing conference, suddenly you can define this as the spot where the cool researchers are hanging out. Convince DARPA to put a little money towards this and suddenly you have yourselves a research area. The DOD already funds things like risk analyses of climate change and other 30-100 year forward threats, so it's not even a stretch.

Comment author: wedrifid 27 May 2012 08:40:22AM 1 point [-]

This exactly.

Wikis just aren't a practical place to put original, ongoing research.