gjm comments on AlphaGo versus Lee Sedol - Less Wrong Discussion

17 Post author: gjm 09 March 2016 12:22PM

Comments (183)

You are viewing a single comment's thread.

Comment author: turchin 09 March 2016 03:58:33PM 8 points

I think that MIRI made a mistake when it decided not to be involved in actual AI research, but only in AI safety research. In retrospect the nature of this mistake is clear: MIRI was not recognized inside the AI community, and its safety recommendations are not connected with actual AI development paths.

It is as if a person decided not to study nuclear physics but only nuclear safety. It may even work up to a point, since safety laws are similar across many systems. But he will not be the first to learn about the surprises in a new technology.

Comment author: gjm 09 March 2016 04:54:38PM 10 points

I think that MIRI made a mistake when it decided not to be involved in actual AI research [...] MIRI was not recognized inside the AI community

Being involved in actual AI research would have helped with that only if MIRI had been able to do good AI research, and it would have been a net win only if the gain from greater recognition in the AI community (and whatever other benefits doing AI research might have brought) had outweighed the cost to their AI safety research.

I think you're probably correct that MIRI would be more effective if it did AI research, but it's not at all obvious.

Comment author: turchin 09 March 2016 05:45:20PM 4 points

Maybe it should do some AI research that is relevant to safety, like small self-evolving agents, or an AI agent that inspects other agents. That would also generate some profit.