
Steve_Rayhawk comments on Q&A with experts on risks from AI #2 - Less Wrong Discussion

Post author: XiXiDu 09 January 2012 07:40PM




Comment author: Steve_Rayhawk 10 January 2012 01:22:11PM

In fact, I'd prefer it if Q8 started out with the less-shibbolethy "How much have you read about, or used the concepts of..." or something like that, which replaces a dichotomy with a continuum.

Yeah... I wanted to make the suggested question less loaded, but that would have required more words, and I was preoccupied with worry about a limit on the permitted complexity of a single-sentence question. Maybe I should have split the question across several sentences.

The signaling uses of Q8 seem like a bad idea to me, although it seems a worthwhile thing to ask for Steve Rayhawk's reasons.

My reasons for suggesting Q8 were mostly:

  • First, I wanted to make it easier to narrow down hypotheses about the relationship between respondents' opinions about AI risk and their awareness of progress toward formal, machine-representable concepts of optimal AI design (also including, I guess, progress toward practically efficient mechanized application of those concepts, as in Schmidhuber's Speed Prior and Hutter's AIXI-tl).

  • Second, I was imagining that many respondents would be AI practitioners who think mostly in terms of architectures with a machine-learning flavor. Those architectures usually have, by construction, such a specific and limited structure in their hypothesis space or policy space that it would be clearly silly to imagine a system with such an architecture representing or improving itself. These researchers might have a conceptual myopia by which they imagine "progress in AI" to mean only "creation of more refined machine-learning-style architectures", a kind of progress which of course wouldn't lead toward a threshold of self-improvement capability anytime soon. I wanted to put in something of a conceptual speed bump against that kind of thinking, to reduce unthinking dismissiveness in the answers and to counter part of the polarizing/consistency effects that merely receiving and thinking about the survey might have on recipients' opinions. (Of course, if this had been a survey meant to be scientific and formally reportable, the presence of such a potentially leading question should itself have been an experimentally controlled variable.)

With those reasons on the table, someone else might be able to come up with a question that fulfills them better. I also agree with paulfchristiano's comment.