multifoliaterose comments on The Importance of Self-Doubt - Less Wrong
I certainly do not assign a probability as high as 70% to the conjunction of all three of those statements.
And in case it wasn't clear, the problem I was trying to point out was simply with having forbidden conclusions - not forbidden by observation per se, but forbidden by forbidden psychology - and using that to make deductions about empirical premises that ought simply to be evaluated by themselves.
I s'pose I might be crazy, but you all are putting your craziness right up front. You can't extract milk from a stone!
To be quite clear about which of Unknowns' points I object to, my main objection is to the point:
where 'I' is replaced by "Eliezer." I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you're working on. (Maybe even much less than that - I would have to spend some time calibrating my estimate to make a judgment on precisely how low a probability I assign to the proposition.)
My impression is that you've greatly underestimated the difficulty of building a Friendly AI.
I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.
Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?
And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?
On the other hand, assuming he knows what it means to assign something a 10^-9 probability, it sounds like he's offering you a bet at 1000000000:1 odds in your favour. It's a good deal, you should take it.
Indeed. I do not know how many people are actively involved in FAI research, but I would guess that it is only in the dozens to hundreds. Given the small pool of competition, it seems likely that at some point Eliezer will, or already has, made a unique contribution to the field. Get Multi to put some money on it: offer him 1 cent if you do not make a useful contribution in the next 50 years, and if you do, he can pay you 10 million dollars.
I agree it's kind of ironic that multi has such an overconfident probability assignment right after criticizing you for being overconfident. I was quite disappointed with his response here.
Why does my probability estimate look overconfident?
One could offer many crude back-of-envelope probability calculations. Here's one: let's say there are four independent requirements for Eliezer playing such a role, each with roughly a 1 in 10 chance of holding.
That seems conservative to me and implies at least a 1 in 10^4 chance. Obviously there's lots of room for quibbling here, but it's hard for me to see how such quibbling could account for five orders of magnitude. And even if post-quibbling you think you have a better model that does imply 1 in 10^9, you only need to put little probability mass on my model or models like it for them to dominate the calculation. (E.g., a 9 in 10 chance of a 1 in 10^9 chance plus a 1 in 10 chance of a 1 in 10^4 chance is close to a 1 in 10^5 chance.)
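To make the mixture arithmetic concrete, here is a minimal Python sketch; the two model probabilities and the 9-in-10 / 1-in-10 weights are the illustrative figures from this paragraph, not anyone's actual estimates:

```python
# Arithmetic mixture of two models of the probability in question:
# weight each model's answer by your credence in that model, then sum.
p_pessimistic = 1e-9   # probability under the pessimistic model
p_optimistic = 1e-4    # probability under the back-of-envelope model above
w_pessimistic = 0.9    # credence that the pessimistic model is right
w_optimistic = 0.1     # credence that the optimistic model is right

p_mixed = w_pessimistic * p_pessimistic + w_optimistic * p_optimistic
print(f"{p_mixed:.2e}")  # ~1.00e-05: the optimistic model dominates the sum
```

Note how the final figure is pinned near 10^-5 by the optimistic model alone; driving it down to 10^-9 would require near-total confidence that no model like the one above is right.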
I don't find these remarks compelling. I feel similar remarks could be used to justify nearly anything. Of course, I owe you an explanation. One will follow later on.
Unless you've actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident. Even Eliezer said that he couldn't assign a probability as low as one in a billion for the claim "God exists" (although Michael Vassar criticized him for this, showing himself to be even more overconfident than Eliezer.)
I'm afraid I have to take severe exception to this statement.
You give the human species far too much credit if you think that our mere ability to dream up a hypothesis automatically raises its probability above some uniform lower bound.
I am aware of your disagreement, for example as expressed by the absurd claims here. Yes, my basic idea is, unlike you, to give some credit to the human species. I think there's a limit on how much you can disagree with other human beings-- unless you're claiming to be something superhuman.
Did you see the link to this comment thread? I would like to see your response to the discussion there.
At least for epistemic meanings of "superhuman", that's pretty much the whole purpose of LW, isn't it?
My immediate response is as follows: yes, dependency relations might concentrate most of the improbability of a religion to a relatively small subset of its claims. But the point is that those claims themselves possess enormous complexity (which may not necessarily be apparent on the surface; cf. the simple-sounding "the woman across the street is a witch; she did it").
Let's pick an example. How probable do you think it is that Islam is a true religion? (There are several ways to take care of logical contradictions here, so saying 0% is not an option.)
Suppose there were a machine--for the sake of tradition, we can call it Omega--that prints out a series of zeros and ones according to the following rule. If Islam is true, it prints out a 1 on each round, with 100% probability. If Islam is false, it prints out a 0 or a 1, each with 50% probability.
Let's run the machine... suppose on the first round, it prints out a 1. Then another. Then another. Then another... and so on... it's printed out 10 1's now. Of course, this isn't so improbable. After all, there was a 1/1024 chance of it doing this anyway, even if Islam is false. And presumably we think Islam is more likely than this to be false, so there's a good chance we'll see a 0 in the next round or two...
But it prints out another 1. Then another. Then another... and so on... It's printed out 20 of them. Incredible! But we're still holding out. After all, million to one chances happen every day...
Then it prints out another, and another... it just keeps going... It's printed out 30 1's now. Of course, it did have a chance of one in a billion of doing this, if Islam were false...
But for me, this is my lower bound. At this point, if not before, I become a Muslim. What about you?
You've been rather vague about the probabilities involved, but you speak of "double digit negative exponents" and so on, even saying that this is "conservative," which implies possibly three digit exponents. Let's suppose you think that the probability that Islam is true is 10^-20; this would seem to be very conservative, by your standards. According to this, to get an equivalent chance, the machine would have to print out 66 1's.
If the machine prints out 50 1's, and then someone runs in and smashes it beyond repair before it has a chance to continue, will you walk away, saying, "There is a chance of at most 1 in 60,000 that Islam is true"?
If so, are you serious?
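For readers who want to check the arithmetic of the thought experiment, here is a minimal Python sketch of the Bayes update it implies; the 10^-20 prior is the figure hypothesized above:

```python
def posterior_after_ones(prior: float, n: int) -> float:
    """Posterior probability that Islam is true after n 1's in a row.

    Likelihoods: P(n ones | true) = 1, P(n ones | false) = 0.5**n.
    """
    return prior / (prior + (1 - prior) * 0.5 ** n)

prior = 1e-20
for n in (30, 50, 66):
    print(n, posterior_after_ones(prior, n))
# n=30: ~1.1e-11 -- still negligible
# n=50: ~1.1e-5, i.e. about 1 in 90,000, satisfying "at most 1 in 60,000"
# n=66: ~0.42 -- roughly even odds, matching the 66-1's figure above
```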
The product of two probabilities above your threshold-for-overconfidence can be below your threshold-for-overconfidence. Have you at least thought this through before?
For instance, the claim "there is a God" is not that much less spectacular than the claim "there is a God, and he's going to make the next 1000 times you flip a coin turn up heads." If one-in-a-billion is a lower bound for the probability that God exists, then one-in-a-billion-squared is a generous lower bound for the probability that the next 1000 times you flip a coin will turn up heads. (A billion squared is about 2-to-the-sixty.) You're OK with that?
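As a sanity check on these figures - a minimal sketch, assuming a fair coin and independent flips:

```python
# Compare the proposed probability floor with the fair-coin probability.
floor = (1e-9) ** 2      # one in a billion squared: 1e-18, about 2**-60
fair_coin = 0.5 ** 1000  # chance of 1000 fair, independent heads: ~9.3e-302
print(floor, fair_coin)  # floor exceeds the fair-coin figure by ~283 orders of magnitude
```

So accepting the floor means assigning the coincidence roughly 10^283 times the probability that the fair-coin model gives it.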
Yes. As long as you think of some not-too-complicated scenario where the one would lead to the other, that's perfectly reasonable. For example, God might exist and decide to prove it to you by effecting that prediction. I certainly agree this has a probability of at least one in a billion squared. In fact, suppose you actually get heads the next 60 times you flip a coin, even though you are choosing different coins, it is on different days, and so on. By that point you will be quite convinced that the heads are not independent, and that there is quite a good chance that you will get 1000 heads in a row.
It would be different of course if you picked a random series of heads and tails: in that case you still might say that there is at least that probability that someone else will do it (because God might make that happen), but you surely cannot say that it had that probability before you picked the random series.
This is related to what I said in the torture discussion, namely that explicitly describing a scenario automatically makes it far more probable to actually happen than it was before you described it. So it isn't a problem if the probability of 1000 heads in a row is greater than 1 in 2-to-1000. Any series you can mention would be more likely than that, once you have mentioned it.
Also, note that there isn't a problem if the probability of 1000 heads in a row is lower than one in a billion, because when I made the general claim, I said "a claim that a significant number of people accept as likely true," and no one expects to get the 1000 heads.
Probabilities should sum to 1. You're saying, moreover, that probabilities should not be lower than some threshold. Can I get you to admit that there's a math issue here that you can't wave away, without trying to fine-tune my examples? If you claim you can solve this math issue, great, but say so.
Edit: -1 because I'm being rude? Sorry if so, the tone does seem inappropriately punchy to me now. -1 because I'm being stupid? Tell me how!
I set a lower bound of one in a billion on the probability of "a natural language claim that a significant number of people accept as likely true". The number of such mutually exclusive claims is surely far less than a billion, so the math issue will resolve easily.
Yes, it is easy to find more than a billion claims, even ones that some people consider true, but they are not mutually exclusive claims. Likewise, it is easy to find more than a billion mutually exclusive claims, but they are not ones that people believe to be true, e.g. no one expects 1000 heads in a row, no one expects a sequence of five hundred successive heads-tails pairs, and so on.
I didn't downvote you.
My estimate does come from some effort at calibration, although there's certainly more that I could do. Maybe I should have qualified my statement by saying "this estimate may be a gross overestimate or a gross underestimate."
In any case, I was not being disingenuous or flippant. I have carefully considered the question of how likely it is that Eliezer will be able to play a crucial role in an FAI project if he continues with a strategy qualitatively similar to his current one. My main objection to SIAI's strategy is that I think it extremely unlikely that Eliezer will be able to have an impact if he proceeds as he has up until this point.
I will detail why I don't think that Eliezer's present strategy for working toward an FAI is a fruitful one in a later top-level post.
It sounds, then, like you're averaging probabilities geometrically rather than arithmetically. This is bad!
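To illustrate the difference - a minimal sketch, reusing the two illustrative model probabilities from the earlier back-of-envelope exchange, under equal weights:

```python
import math

# Two candidate models assign very different probabilities to the same event.
p_models = [1e-9, 1e-4]

arithmetic = sum(p_models) / len(p_models)        # ~5.0e-5
geometric = math.sqrt(p_models[0] * p_models[1])  # ~3.2e-7

print(arithmetic, geometric)
# The arithmetic mixture is what a Bayesian with equal credence in the two
# models should report; the geometric mean lets the pessimistic model drag
# the answer down by two orders of magnitude, and with more extreme inputs
# the gap grows without bound.
```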
I understand your position and believe that it's fundamentally unsound. I will have more to say about this later.
For now I'll just say that the arithmetical average of the probabilities that I imagine I might ascribe to Eliezer's current strategy resulting in an FAI is 10^(-9).
I don't understand this remark.
What probability do you assign to your succeeding in playing a critical role on the Friendly AI project that you're working on? I can engage with a specific number. I don't know if your objection is that my estimate is off by a single order of magnitude or by many orders of magnitude.
I should clarify that my comment applies equally to AGI.
I think that I know the scientific community better than you do, and I am confident that if creating an AGI were as easy as you seem to think it is (how easy, I don't know, because you didn't give a number), then there would be people in the scientific community working on AGI.
Yes, this possibility has certainly occurred to me. I just don't know what your different non-crazy beliefs might be.
Why do you think that AGI research is so uncommon within academia if it's so easy to create an AGI?
This question sounds disingenuous to me. There is a large gap between "10^-9 chance of Eliezer accomplishing it" and "so easy for the average machine learning PhD." Whatever else you think about him, he's proved himself to be at least one or two standard deviations above the average PhD in ability to get things done, and some dimension of rationality/intelligence/smartness.
My remark was genuine. Two points:
I think that the chance that any group of the size of SIAI will develop AGI over the next 50 years is quite small.
Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done. As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.
He actually stated that himself several times.
Yes, OK - this does not call his intellectual power into question, only his ability to function in an academic environment.
Well...
Most things can be studied through the use of textbooks. Some familiarity with AI is certainly helpful, but it seems that most AI-related knowledge is not on the track to FAI (and most current AGI stuff is nonsense or even madness).
The reason that I see familiarity with narrow AI as a prerequisite to AGI research is to get a sense of the difficulties involved in designing machines to complete even mundane tasks. My thinking is the same as that of Scott Aaronson in his posting The Singularity Is Far: "there are vastly easier prerequisite questions that we already don’t know how to answer."
FAI research is not AGI research, at least not at present, when we still don't know what it is exactly that our AGI will need to work towards, how to formally define human preference.
So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer's goal is for SIAI to actually build an AGI unilaterally. That's where my low probability was coming from.
It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.
As I've said, I find your position sophisticated and respect it. I have to think more about your present point - reflecting on it may indeed alter my thinking about this matter.
Um, and there aren't?
Give some examples. There may be a few people in the scientific community working on AGI, but my understanding is that basically everybody is doing narrow AI.
What is currently called the AGI field will probably bear no fruit, perhaps except for the end-game when it borrows then-sufficiently powerful tools from more productive areas of research (and destroys the world). "Narrow AI" develops the tools that could eventually allow the construction of random-preference AGI.
The folks here, for a start.
Why are people boggling at the 1-in-a-billion figure? You think it's not plausible that there are three independent 1-in-a-thousand events that would have to go right for EY to "play a critical role in Friendly AI success"? Not plausible that there are nine 1-in-10 events that would have to go right? Don't I keep hearing "shut up and multiply" around here?
Edit: Explain to me what's going on. I say that it seems to me that events A, B are likely to occur with probability P(A), P(B). You are allowed to object that I must have made a mistake, because P(A) times P(B) seems too small to you? (That is leaving aside the idea that 10-to-the-minus-nine counts as one of these too-small-to-be-believed numbers, which is seriously making me physiologically angry, ha-ha.)
The 1-in-a-billion follows not from it being plausible that there are three such events, but from it being virtually certain. Models without such events will end up dominating the final probability. I can easily imagine that if I magically happened upon a very reliable understanding of some factors relevant to future FAI development, the 1 in a billion figure would be the right thing to believe. But I can easily imagine it going the other way, and absent such understanding, I have to use estimates much less extreme than that.
I'm having trouble parsing your comment. Could you clarify?
A billion is not so big a number. Its reciprocal is not so small a number.
Edit: Specifically, what's "it" in "it being virtually certain." And in the second sentence -- models of what, final probability of what?
Edit 2: -1 now that I understand. +1 on the child, namaste. (But I just disagree about how big one billion is. So what do we do?)
"it being virtually certain that there are three independent 1 in 1000 events required, or nine independent 1 in 10 events required, or something along those lines"
Models of the world that we use to determine how likely it is that Eliezer will play a critical role through a FAI team. Final probability of that happening.
A billion is big compared to the relative probabilities we're rationally entitled to have between models where a series of very improbable successes is required, and models where only a modest series of modestly improbable successes is required.
Yes, this is of course what I had in mind.
Replied to this comment and the other (seeming contradictory) one here.