IlyaShpitser comments on AI prediction case study 1: The original Dartmouth Conference - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (13)
I think this is a great lesson to draw. I think another lesson is that the Dartmouth folks either hadn't noticed, or thought they could get around, the fact that much of what they were trying to do is covered by statistics, and statistics is difficult. In fact, there turned out to be no royal road for learning from data.
Here's my attempt to translate these lessons for folks who worry about foom:
(a) Taboo informal discussions of powerful AI and/or its implications. If you can't discuss it in mathematical terms, it's probably not worth discussing.
(b) Pay attention to where related fields are stuck. If, e.g., coordination problems are hard, or getting optimization processes (corporations, governments, etc.) to do what we want is hard, this is food for thought as far as getting a constructed optimization process to do what we want.
I'd add "initial progress in a field does not give a good baseline for estimating ultimate success".
I'm not sure how this follows from the previous lesson. Analysing the impact of a new technology seems mostly distinct from the research needed to develop it.
For example, suppose somebody looked at progress in chemistry and declared that soon the dreams of alchemy will be realized and we'd be able to easily synthesize any element we wanted out of any other. I'd call this a similar error to the one made by the Dartmouth group, but I don't think it then follows that we can't discuss what the impacts would be of being able to easily synthesize any element out of any other.
It might be good advice nonetheless, but I don't think it follows from the lesson.