Gunnar_Zarncke comments on AlphaGo versus Lee Sedol - Less Wrong

17 Post author: gjm 09 March 2016 12:22PM




Comment author: cousin_it 10 March 2016 12:10:52PM 3 points

As far as I can tell, Paul's current proposal might still suffer from blackmail, like his earlier proposal, which I commented on. I vaguely remember discussing the problem with you as well.

One big lesson for me is that AI research seems to be more incremental and predictable than we thought, and garage FOOM probably isn't the main danger. It might be helpful to study the strengths and weaknesses of modern neural networks and get a feel for their generalization performance. Then we could try to predict which areas will see big gains from neural networks in the next few years, and which parts of Friendliness become easy or hard as a result. Is anyone at MIRI working on that?

Comment author: Gunnar_Zarncke 10 March 2016 06:41:57PM 3 points

> One big lesson for me is that AI research seems to be more incremental and predictable than we thought, and garage FOOM probably isn't the main danger.

That may be true, but that is hindsight bias. MIRI's (or EY's, for that matter) approach of hedging against that being true was nonetheless a very reasonable one, and maybe, given the knowledge available at the time, the only reasonable one.