You also need to figure out who the actual experts are and what they actually say. That's a non-trivial task -- reading reports on science in mainstream media will just stuff your head with nonsense.
It's true, reading/scholarship is hard (even for scientists).
"EY said X on facebook, time for me to change my opinion."
Who do you think said that in this case?
Just to be clear about your position, what do you think are reasonable values for human-level AI with 10% probability, human-level AI with 50% probability, and human-level AI with 90% probability?
I think the question in this thread is about how much the deep learning Go program should move my beliefs about this, whatever they may be. My answer is "very little in a sooner direction" (just because it is a successful example of getting a complex thing working). The question wasn't "what are your beliefs about how far human-level AI is" (mine are centered fairly far out).
I'm not going to argue that you should pay attention to EY. His arguments convince me, but if they don't convince you, I'm not gonna do any better.
What I'm trying to get at is, when you ask "is there any evidence that will result in EY ceasing to urgently ask for your money?"... I mean, I'm sure there is such evidence, but I don't wish to speak for him. But it feels to me that by asking that question, you possibly also think of EY as the sort of person who says: "this is evidence that AI risk is near! And this is evidence that AI risk is near! Everything is evidence that AI risk is near!" And I'm pointing out that no, that's not how he acts.
While we're at it, this exchange between us seems relevant. ("Eliezer has said that security mindset is similar, but not identical, to the mindset needed for AI design." "Well, what a relief!") You seem surprised, and I'm not sure what about it was surprising to you, but I don't think you should have been surprised.
Basically, even if you're right that he's wrong, I feel like you're wrong about how he's wrong. You seem to have a model of him which is very different from my model of him.
(Btw, his opinion seems to be that AlphaGo's methods are what makes it more of a leap than a self-driving car or than Deep Blue, not the results. Not sure that affects your position.)
"Well, what a relief!"
I think you misunderstood me (but that's my fault for being opaque; cadence is hard to convey in text). I was being sarcastic. In other words, I don't need EY's opinion, I can just look at the problem myself (as you guys say, "argument screens off authority").
I feel like you're wrong about how he's wrong.
Look, I met EY and chatted with him. I don't think EY is "evil," exactly, the way L. Ron Hubbard was. I think he mostly believes his line (but humans are great at self-deception). I think he's a flawed person, like everyone else. It's just that he has an enormous influence on the rationalist community, which immensely magnifies the damage his normal human flaws and biases can do.
I always said that the way to repair human frailty issues is to treat rationality as a job (rather than a social club), and fellow rationalists as coworkers (rather than tribe members). I also think MIRI should stop hitting people up for money and get a normal funding stream going. You know, let their ideas of how to avoid UFAI compete in the normal marketplace of ideas.
In what specific areas do you think LWers are making serious mistakes by ignoring or not accepting strong enough priors from experts?
As I said, the ideal is to use expert opinion as a prior unless you have a lot of good info, or you think something is uniquely dysfunctional about an area (it's rationalist folklore that a lot of areas are dysfunctional -- "the world is mad" -- but I think people are being silly about this). Experts really do know a lot.
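A toy numerical sketch of what "use expert opinion as a prior" cashes out to, in odds form (all numbers here are invented for illustration):

# Toy sketch of "expert opinion as prior", in odds form.
# All numbers are invented for illustration.
prior_odds = 0.9 / 0.1          # experts at ~90% on some claim: 9:1 odds
likelihood_ratio = 0.8          # your own weak, mildly contrary evidence
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1.0 + posterior_odds)
print(round(posterior, 3))      # 0.878 -- barely moved from the 0.9 prior

It takes a likelihood ratio of about 9:1 against -- i.e. "a lot of good info" -- before the posterior even drops below 50%.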
If, despite lots of effort, we couldn't create a program that could beat any human in go, wouldn't this be evidence that we were far away from creating smarter-than-human AI?
Are you asking me if I know what the law of iterated expectations is? I do.
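(The identity being referenced, for anyone else reading: applied to the indicator of a hypothesis $H$ and evidence $E$, the law of iterated expectations is just conservation of expected evidence,

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E),$$

so if failing to get a strong Go program would have been evidence that human-level AI is far away, then getting one must be evidence that it is nearer. The two updates can still be very different in size.)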
I don't think that's a fair criticism on that point. As far as I understand, MIRI did run the biggest survey of AI experts asking when those experts predict AGI to arrive:
A recent set of surveys of AI researchers produced the following median dates:
for human-level AI with 10% probability: 2022
for human-level AI with 50% probability: 2040
for human-level AI with 90% probability: 2075
When EY says that this news shows that we should put a significant amount of our probability mass before 2050, that doesn't contradict expert opinion.
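For instance, a crude linear interpolation between the 2040 and 2075 medians (my arithmetic, not a figure from the survey) gives

$$P(\text{human-level AI by } 2050) \approx 0.5 + \frac{2050 - 2040}{2075 - 2040}\,(0.9 - 0.5) \approx 0.61,$$

so "significant probability mass before 2050" is roughly the median expert position.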
Sure, but it's not just about what experts say on a survey about human level AI. It's also about what info a good Go program has for this question, and whether MIRI's program makes any sense (and whether it should take people's money). People here didn't say "oh experts said X, I am updating," they said "EY said X on facebook, time for me to change my opinion."
What would make you worry that strong AI is near?
This is a good question. I think lots of funding incentives to build integrated systems (like self-driving cars, but for other domains), and enough of a talent pipeline to start making that stuff happen and create incremental improvements. People in general underestimate the systems engineering aspect of getting artificially intelligent agents to work in practice, even in fairly limited settings like car driving.
Go is a hard game, but it is a toy problem in a way that dealing with the real world isn't. What would worry me is economic incentives making it worth people's while to keep throwing money and people at, and iterating on, real actual systems that do intelligent things in the world. Even fairly limited things at first.
Does it sway you at all that EY points at self-driving cars and says "these could be taken as a sign as well, but they're not"?
I actually think self-driving cars are more interesting than strong Go-playing programs (but they don't worry me much either).
I guess I am not sure why I should pay attention to EY's opinion on this. I do ML-type stuff for a living. Does EY have an unusual track record for predicting anything? All I see is a long tail of vaguely silly things he says online that he later renounces (e.g. "ignore stuff EY_2004 said"). To be clear: moving away from bad opinions is great! That is not what the issue is.
edit: In general I think LW really, really doesn't listen to experts enough (I don't even mean myself, I just mean the sensible Bayesian thing to do is to go with the expert-opinion prior on almost everything). EY et al. take great pains to try to move people away from that behavior, talking about how the world is mad, about civilizational inadequacy, etc. In other words: don't trust experts, they are crazy anyway.
No, I just remember my AI history (TD-Gammon, etc.). The question you should be asking is: "is there any evidence that will result in EY ceasing to urgently ask for your money?"
Statistical tools rely on such proofs.
Statistics is an applied science, similar to engineering. It has to deal with the messy world where you might need to draw conclusions from a small data set of uncertain provenance where some outliers might be data entry mistakes (or maybe not), you are uncertain of the shape of the distributions you are dealing with, have a sneaking suspicion that the underlying process is not stable in time, etc. etc. None of the nice assumptions underlying nice proofs of optimality apply. You still need to analyse this data set.
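As a toy illustration of that kind of mess (hypothetical numbers; median/MAD is just one standard robust fallback, not a universal fix):

import numpy as np

# Hypothetical small sample with one suspected data-entry error (the 510.0).
# Unknown distribution shape, possibly unstable process -- the textbook
# optimality assumptions don't hold, but the data still needs analysing.
data = np.array([4.8, 5.1, 5.3, 4.9, 5.0, 510.0, 5.2, 4.7])

# Estimators that are optimal under textbook assumptions get wrecked:
print(data.mean())                       # ~68.1, dragged up by one point
print(data.std(ddof=1))                  # enormous, for the same reason

# Robust summaries still describe the bulk of the data sensibly:
median = np.median(data)
mad = np.median(np.abs(data - median))   # median absolute deviation
print(median)                            # 5.05
print(mad)                               # 0.2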
Except for all that pesky theoretical statistics.