Stuart_Armstrong comments on SHRDLU, understanding, anthropomorphisation and hindsight bias - Less Wrong

10 Post author: Stuart_Armstrong 07 April 2014 09:59AM




Comment author: Gunnar_Zarncke 07 April 2014 11:38:01AM 4 points [-]

I have trouble upvoting this because I do not see what it is driving at. What is its point? You don't say anything wrong, and it is clearly applicable to LW, but nonetheless I do not get anything out of it. Maybe because I have heard/read this too often?

Comment author: Stuart_Armstrong 07 April 2014 03:38:08PM *  5 points [-]

The point of this post was to illustrate how the GOFAI people could have got so much wrong and yet still been confident in their beliefs, by looking at what the results of one experiment - SHRDLU - must have felt like to its developers at the time. The post is partly meant to help avoid hindsight bias: at the time, it was not obvious that they were going wrong.

Comment author: ESRogs 11 April 2014 01:02:30AM 0 points [-]

As I was reading, I was wondering whether there was a modern application you were hinting at -- some specific case where you think we might be overconfident today. Do you see specific applications today, or is this just something you think we should keep in mind in general?

Comment author: Stuart_Armstrong 17 April 2014 11:28:43AM 2 points [-]

In general. When making predictions about AI, no matter how convincing they seem to us, we should remember all the wrong predictions that felt just as convincing to people in the past, for reasons that were very reasonable at the time.