Kutta comments on What I would like the SIAI to publish - Less Wrong

27 Post author: XiXiDu 01 November 2010 02:07PM


Comment author: Emile 01 November 2010 03:38:53PM *  7 points

This might be an opportunity to use one of those Debate Tools, see if one of them can be useful for mapping the disagreement.

I would like to have a short summary of where various people stand on the various issues.

The people:

  • Eliezer

  • Ben

  • Robin Hanson

  • Nick Bostrom

  • Ray Kurzweil ?

  • Other academic AGI types?

  • Other vocal people on the net like Tim Tyler ?

The issues:

  • How likely is a human-level AI to go FOOM?

  • How likely is an AGI developed without "friendliness theory" to have values incompatible with those of humans?

  • How easy is it to make an AGI (really frickin' hard, or really really really frickin' hard)?

  • How likely is it that Ben Goertzel's "toddler AGI" would succeed, if he gets funding etc.?

  • How likely is it that Ben Goertzel's "toddler AGI" would be dangerous, if he succeeded?

  • How likely is it that some group will develop an AGI before 2050? (Or more generally, estimated timelines of AGI)

Comment author: Kutta 01 November 2010 03:53:25PM 5 points

Add Nick Bostrom to the list.

Also, what exactly is Bostrom's take on AI? The OP says Bostrom disagrees with Eliezer. Could someone provide a link or reference for that? I read most of Bostrom's papers some time ago, and at the moment I can't recall any such disagreement.

Comment author: CarlShulman 01 November 2010 04:32:53PM *  4 points

I think Nick was near Anders, with an x-risk of 20% conditional on AI development by 2100, and near 50% for AI by 2100. So AI is the most likely known x-risk, although unknown x-risks get a big chunk of his probability mass.