
V_V comments on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning

14 Post author: ESRogs 27 January 2016 09:04PM




Comment author: V_V 29 January 2016 03:28:48PM 0 points [-]

When EY says that this news shows we should put a significant amount of our probability mass on human-level AI arriving before 2050, that doesn't contradict expert opinions.

The point is how much we should update our AI future timeline beliefs (and associated beliefs about whether it is appropriate to donate to MIRI and how much) based on the current news of DeepMind's AlphaGo success.

There is a difference between "Gib moni plz because the experts say that there is a 10% probability of human-level AI by 2022" and "Gib moni plz because of AlphaGo".

Comment author: ChristianKl 07 February 2016 10:09:48PM -1 points [-]

I understand IlyaShpitser to be claiming that there are people who update their AI timeline beliefs in a way that isn't appropriate because of EY's statements. I don't think that's true.