
RaelwayScot comments on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning - Less Wrong Discussion

14 Post author: ESRogs 27 January 2016 09:04PM




Comment author: ChristianKl 28 January 2016 09:19:18PM 6 points [-]

I don't think that's a fair criticism on that point. As far as I understand, MIRI conducted the largest survey of AI experts asking when those experts predict AGI to arrive:

A recent set of surveys of AI researchers produced the following median dates:

for human-level AI with 10% probability: 2022
for human-level AI with 50% probability: 2040
for human-level AI with 90% probability: 2075

When EY says that this news shows we should put a significant amount of our probability mass before 2050, that doesn't contradict expert opinions.

Comment author: IlyaShpitser 29 January 2016 05:54:10AM *  1 point [-]

Sure, but it's not just about what experts say on a survey about human-level AI. It's also about how much information a good Go program actually provides for this question, and whether MIRI's program makes any sense (and whether it should take people's money). People here didn't say "oh, experts said X, I am updating," they said "EY said X on Facebook, time for me to change my opinion."

Comment author: Kaj_Sotala 29 January 2016 11:11:58AM 6 points [-]

People here didn't say "oh experts said X, I am updating," they said "EY said X on facebook, time for me to change my opinion."

My reaction was more "oh, EY made a good argument about why this is a big deal, so I'll take that argument into account".

Presumably a lot of others felt the same way; attributing the change in opinion to just a deference for tribal authority seems uncharitable.

Comment author: IlyaShpitser 29 January 2016 11:13:19PM 2 points [-]

Say I am worried about this tribal thing happening a lot -- what would put my mind more at ease?

Comment author: Kaj_Sotala 30 January 2016 07:07:01PM *  0 points [-]

I don't know your mind, you tell me? What exactly is it that you find worrying?

My possibly-incorrect guess is that you're worried about something like "the community turning into an echo chamber that only promotes Eliezer's views and makes its members totally ignore expert opinion when forming their views". But if that was your worry, the presence of highly upvoted criticisms of Eliezer's views should do a lot to help, since it shows that the community does still take into account (and even actively reward!) well-reasoned opinions that show dissent from the tribal leaders.

So since you still seem to be worried despite the presence of those comments, I'm assuming that your worry is something slightly different, but I'm not entirely sure of what.

Comment author: Risto_Saarelma 31 January 2016 08:41:56AM 2 points [-]

One problem is that the community has few people actually engaged enough with cutting-edge AI / machine learning / whatever-the-respectable-people-call-it-this-decade research to have opinions grounded in where the actual research is right now. So a lot of the discussion is going to consist of people either staying quiet or giving uninformed opinions to keep the conversation going. And what incentive structures there are here mostly work for a social club, so there aren't really many checks and balances keeping things grounded in actual reality rather than the local social reality.

Ilya actually is working with cutting edge machine learning, so I pay attention to his expressions of frustration and appreciate that he persists in hanging out here.

Comment author: Kaj_Sotala 31 January 2016 10:12:04AM 0 points [-]

Agreed both with this being a real risk, and it being good that Ilya hangs out here.

Comment author: ChristianKl 29 January 2016 11:16:36AM 0 points [-]

"EY said X on facebook, time for me to change my opinion."

Who do you think said that in this case?

Just to be clear about your position, what do you think are reasonable values for human-level AI with 10% probability/ human-level AI with 50% probability and human-level AI with 90% probability?

Comment author: IlyaShpitser 29 January 2016 01:56:04PM 1 point [-]

I think the question in this thread is about how much the deep learning Go program should move my beliefs about this, whatever they may be. My answer is "very little in a sooner direction" (just because it is a successful example of getting a complex thing working). The question wasn't "what are your belief about how far human level AI is" (mine are centered fairly far out).

Comment author: ChristianKl 07 February 2016 10:06:38PM 0 points [-]

I think this debate is quite hard with vague terms like "very little" and "far out". I really do think it would be helpful for other people trying to understand your position if you put down your numbers for those predictions.

Comment author: V_V 29 January 2016 03:28:48PM 0 points [-]

When EY says that this news shows that we should put a significant amount of our probability mass before 2050 that doesn't contradict expert opinions.

The point is how much we should update our AI future timeline beliefs (and associated beliefs about whether it is appropriate to donate to MIRI and how much) based on the current news of DeepMind's AlphaGo success.

There is a difference between "Gib moni plz because the experts say that there is a 10% probability of human-level AI by 2022" and "Gib moni plz because of AlphaGo".

Comment author: ChristianKl 07 February 2016 10:09:48PM -1 points [-]

I understand IlyaShpitser to be claiming that there are people who update their AI-timeline beliefs in a way that isn't appropriate because of EY's statements. I don't think that's true.