lukeprog comments on [Template] Questions regarding possible risks from artificial intelligence - Less Wrong

Post author: XiXiDu 10 January 2012 11:59AM




Comment author: lukeprog 10 January 2012 03:58:17PM 6 points

My preferred rewrite, without spending too much time on it:

Q1a: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer 'never' if you believe such a milestone will never be reached. Reason: this matches question #1 of FHI's [machine intelligence survey].

Q1b: Once we build AIs that are as skilled at technology design and general reasoning as humans are, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better than humans at technology design and general engineering?

Q2a: Do you ever expect AIs to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Q2b: [delete to make questions list less dauntingly long, and increase response rate]

Q2c: What probability do you assign to the possibility that an AI with initially (professional) human-level competence at technology design and general reasoning will use its capacities to self-modify its way up to vastly superhuman general capabilities within a matter of hours/days/< 5 years? ('Self-modification' may include the first AI creating improved child AIs, which create further-improved child AIs, etc.)

Q3a: How important is it to figure out how to make superhuman AI provably friendly to us and our values (non-dangerous), before attempting to build AI that is good enough at technology design and general reasoning to undergo radical self-modification?

Q3b: What probability do you assign to the possibility of human extinction as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)?

Q3c: [delete to reduce length of questions list]

Q4: [delete to reduce length of questions list]

Q5: [delete to reduce length of questions list; AI experts are unlikely to be experts on other x-risks]

Q6: [delete to reduce length of questions list; I haven't seen, and don't anticipate, useful answers here]

Q7: [delete to reduce length of questions list]

Comment author: timtyler 10 January 2012 05:35:56PM 4 points

I endorse the "question deletion" idea.

Comment author: fubarobfusco 10 January 2012 10:03:52PM 1 point

"human-level machine intelligence"

"AIs that are as skilled at technology design and general reasoning as humans are"

Are these two expressions supposed (or assumed) to be equivalent?

Comment author: XiXiDu 10 January 2012 08:41:19PM 1 point

I updated the original post. Maybe we could agree on those questions. Be back tomorrow.

Comment author: lukeprog 11 January 2012 01:55:54AM 1 point

I stand by my preferred rewrites above, but of course it's up to you.

Comment author: jhuffman 11 January 2012 03:53:47PM 0 points

I agree with deleting Q5 and Q6, not only because I wouldn't expect useful responses, but also because those questions may come off as "extremist" to any respondents who are not already familiar with UFAI concepts (or who are familiar with them and overtly dismissive).