IlyaShpitser comments on AI prediction case study 1: The original Dartmouth Conference - Less Wrong Discussion

7 points. Post author: Stuart_Armstrong 11 March 2013 06:09PM

Comments (13)

Comment author: IlyaShpitser 12 March 2013 12:04:56PM, 5 points

The most general lesson is perhaps on the complexity of language and the danger of using human-understandable informal concepts in the field of AI. The Dartmouth group seemed convinced that because they informally understood certain concepts and could begin to capture some of this understanding in a formal model, then it must be possible to capture all this understanding in a formal model. In this, they were wrong.

I think this is a great lesson to draw. I think another lesson is that the Dartmouth folks either hadn't noticed, or thought they could get around, the fact that much of what they were trying to do is covered by statistics, and statistics is difficult. In fact, there turned out to be no royal road to learning from data.


Here's my attempt to translate these lessons for folks who worry about foom:

(a) Taboo informal discussions of powerful AI and/or implications of such. If you can't discuss it in math terms, it's probably not worth discussing.

(b) Pay attention to where related fields are stuck. If, for example, coordination problems are hard, or getting existing optimization processes (corporations, governments, etc.) to do what we want is hard, that is food for thought as far as getting a constructed optimization process to do what we want.

Comment author: Stuart_Armstrong 12 March 2013 12:41:30PM, 3 points

I'd add "initial progress in a field does not give a good baseline for estimating ultimate success".

Comment author: PECOS-9 12 March 2013 08:57:01PM, 1 point

Here's my attempt to translate these lessons for folks who worry about foom:

(a) Taboo informal discussions of powerful AI and/or implications of such. If you can't discuss it in math terms, it's probably not worth discussing.

I'm not sure how this follows from the previous lesson. Analysing the impact of a new technology seems mostly distinct from the research needed to develop it.

For example, suppose somebody looked at progress in chemistry and declared that the dreams of alchemy would soon be realized and we'd be able to easily synthesize any element from any other. I'd call this a similar error to the one made by the Dartmouth group, but I don't think it then follows that we can't discuss what the impacts would be of being able to easily synthesize any element from any other.

It might be good advice nonetheless, but I don't think it follows from the lesson.