
XiXiDu comments on [link] [poll] Future Progress in Artificial Intelligence - Less Wrong Discussion

8 Post author: Pablo_Stafforini 09 July 2014 01:51PM




Comment author: XiXiDu 10 July 2014 12:42:11PM *  0 points

...you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years

Hit the brakes on that line of reasoning! That's not what the question asked. It asked WILL it, not COULD it.

If I have a statement "X will happen" and ask people to assign a probability to it, then when the assigned probability is <=50%, I believe it isn't too much of a stretch to paraphrase "X will happen with a probability <=50%" as "It could be that X will happen". Looking at the survey data, of the 163 people who gave a probability estimate, only 15 assigned a probability >50% to the possibility that there will be a superhuman intelligence that greatly surpasses the performance of humans within 2 years after the creation of a human-level intelligence.
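The arithmetic behind these figures can be checked in a few lines (a sketch; the counts of 15 and 163 are taken from the survey data as cited above):

```python
# Fraction of survey respondents who assigned >50% probability to
# strongly superhuman AI within 2 years of human-level AI.
respondents = 163          # people who gave a probability estimate
assigned_over_half = 15    # those answering with probability > 50%

fraction = assigned_over_half / respondents
print(f"{fraction:.1%}")   # prints "9.2%"
```

This is consistent with the roughly 10% figure quoted in the parent comment.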

That said, my use of the word "could" in my comment was not deliberate; it was just an unintentional inaccuracy. If you think that is a big deal, then I am sorry. I'll try to be more careful in the future.

Comment author: Luke_A_Somers 10 July 2014 02:16:24PM *  1 point

The difference here is that you considered this position to strictly imply being against the possibility of intelligence explosion.

One can consider an intelligence explosion a real risk and then take steps to prevent it, with the result that the estimated probability ends up low.