Kutta comments on What I would like the SIAI to publish - Less Wrong
This might be an opportunity to use one of those Debate Tools, see if one of them can be useful for mapping the disagreement.
I would like to have a short summary of where various people stand on the various issues.
The people:
Eliezer
Ben
Robin Hanson
Nick Bostrom
Ray Kurzweil?
Other academic AGI types?
Other vocal people on the net, like Tim Tyler?
The issues:
How likely is a human-level AI to go FOOM?
How likely is an AGI developed without "friendliness theory" to have values incompatible with those of humans?
How easy is it to make an AGI (really frickin' hard, or really really really frickin' hard)?
How likely is it that Ben Goertzel's "toddler AGI" would succeed, if he gets funding etc.?
How likely is it that Ben Goertzel's "toddler AGI" would be dangerous, if it succeeded?
How likely is it that some group will develop an AGI before 2050? (Or more generally, estimated timelines of AGI)
Add Nick Bostrom to the list.
Also, what exactly is Bostrom's take on AI? The OP says Bostrom disagrees with Eliezer. Could someone provide a link or reference for that? I read most of Bostrom's papers some time ago, and at the moment I can't recall any such disagreement.
I think Nick was near Anders, with an x-risk of 20% conditional on AI development by 2100, and near 50% probability of AI by 2100. That makes AI the most likely known x-risk in his view, although unknown x-risks get a big chunk of his probability mass.