cousin_it comments on AlphaGo versus Lee Sedol - Less Wrong

17 Post author: gjm 09 March 2016 12:22PM


Comments (183)


Comment author: turchin 09 March 2016 03:58:33PM 8 points [-]

I think MIRI made a mistake when it decided not to be involved in actual AI research, but only in AI safety research. In retrospect the nature of this mistake is clear: MIRI was not recognised inside the AI community, and its safety recommendations are not connected with actual AI development paths.

It is like a person deciding not to study nuclear physics but only nuclear safety. It may even work up to a point, as safety laws are similar across many systems. But he will not be the first to learn about the surprises in the new technology.

Comment author: cousin_it 09 March 2016 06:58:49PM *  6 points [-]

Agreed on all points.

LW was one handshake away from DeepMind; we interviewed Shane Legg and referred to his work many times. But I guess we didn't have the right attitude, and maybe still don't. Now is probably a good time to "halt, melt and catch fire" as Eliezer puts it.

Comment author: hg00 12 March 2016 07:09:42PM 1 point [-]

I'm confused about what you would have done with the benefit of hindsight (beyond having people like Jaan Tallinn and Elon Musk who were concerned with AI safety become investors in DeepMind, which was in fact done).

Comment author: Larks 10 March 2016 03:10:01AM 1 point [-]

What do you mean by "one handshake"?

Comment author: James_Miller 09 March 2016 07:26:44PM 1 point [-]

Google bought DeepMind for, reportedly, more than $500 million. Other than possibly Eliezer, MIRI probably doesn't have the capacity to employ people that the market places such a high value on.

Comment author: turchin 09 March 2016 07:33:33PM *  5 points [-]

EY could command such a price if he had invested more time in studying neural networks rather than in writing science fiction. LessWrong is also full of clever minds who could probably be employed on any small AI project.

Comment author: V_V 09 March 2016 10:22:52PM 7 points [-]

EY could command such a price if he had invested more time in studying neural networks rather than in writing science fiction.

Has he ever demonstrated any ability to produce anything technically valuable?

Comment author: turchin 09 March 2016 10:27:03PM 0 points [-]

He has the ability to attract groups of people and write interesting texts. So he could attract good programmers for any task.

Comment author: V_V 09 March 2016 11:45:11PM *  7 points [-]

He has the ability to attract groups of people and write interesting texts. So he could attract good programmers for any task.

He has the ability to attract self-selected groups of people by writing texts that those people find interesting. He has shown no ability to attract, organize, and lead a group of people to solve any significant technical task. The research output of SIAI/SI/MIRI has been relatively limited, and most of the interesting stuff came out when he was no longer at the helm.

Comment author: Gunnar_Zarncke 10 March 2016 06:38:21PM 1 point [-]

While this may be formally right, the question is what it shows (or is supposed to show). On the other hand, MIRI does have quite some research output, as well as impact on AI safety - and that is what it set out to achieve.

Comment author: V_V 10 March 2016 10:39:10PM 2 points [-]

Most MIRI research output (papers, in particular the peer-reviewed ones) was produced under the direction of Luke Muehlhauser or Nate Soares. Under the direction of EY the prevalent outputs were the LessWrong sequences and Harry Potter fanfiction.

The impact of MIRI research on the work of actual AI researchers and engineers is more difficult to measure; my impression is that it has not been large so far.

Comment author: gjm 11 March 2016 12:55:41AM 1 point [-]

Was Eliezer ever in charge? I thought that during the OB, LW and HP eras his role was something like "Fellow" and other people (e.g., Goertzel, Muehlhauser) were in charge.

Comment author: Gunnar_Zarncke 10 March 2016 11:19:51PM 1 point [-]

That looks like a judgment from availability bias. How do you think MIRI went about getting researchers and those better directors? And funding? And all those connections that seem to have led to AI safety being a thing now?

Comment author: cousin_it 09 March 2016 09:21:38PM 2 points [-]

I'm not saying MIRI should've hired Shane Legg. It was more of a learning opportunity.