taw comments on The Importance of Self-Doubt - Less Wrong
I agree with your principle but not with the particular expression or figures. A relative, not absolute, measure seems more appropriate. I think Eliezer has been careful never to give figures for success probabilities. But see 'Shut Up and Do the Impossible'.
I would perhaps change the claim to 'doing more than anyone else to save the world'. I'm not certain what self-evaluated probability Eliezer could claim there. I would accept as credible something far higher than 10^-4, probably higher than 10^-3. Even at 10^-2 I wouldn't sneer. But the figure is sufficiently far below 1 that Eliezer's reductio argument just strikes me as absurd.
Haven't there been a lot more than a million people in history who claimed to be saving the world, with 0 successes? Without further information, reasonable estimates are from 0 to 1/million.
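To spell out where that range could come from (a sketch, assuming a uniform prior on the success rate, which the comment doesn't actually specify): with zero successes in n trials, the maximum-likelihood estimate of the success rate is 0, while Laplace's rule of succession gives

```latex
\hat{p} = \frac{s + 1}{n + 2} = \frac{0 + 1}{10^{6} + 2} \approx 10^{-6}
```

so "from 0 to 1/million" brackets the two standard point estimates.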
Can you name ten who claimed to do so via non-supernatural/extraterrestrial means? Even counting claims of the supernatural I would be surprised to learn there had been a million.
But there is further information. We must expect Eliezer to make use of all the information available to him when making such an implied estimate, and we should similarly use everything available to us when evaluating the credibility of any expressed claims.
Nitpick: Do you mean credulity or credibility?
The one that makes sense. Thanks. :)
Unfortunately the Internet doesn't let me guess whether you meant this humorously or not.
Entirely seriously. I also don't see anything particularly funny about it.
Why does it apply to Eliezer and not to every other person claiming to be a messiah?
It does apply to every person. The other information you have about the claimed messiahs may allow you to conclude that they are not worthy of further consideration (or any consideration); the low prior makes that easy. But if you do consider other arguments for some reason, you have to take them into account. And some surface signals can be enough to give you grounds for seeking/considering more data. Also, you are never justified in actively arguing from ignorance: if you expend effort on the argument, you must consider the question sufficiently important, which should drive you to learn more about it whenever you believe yourself ignorant of potentially conclusion-changing details.
See also: How Much Thought, Readiness Heuristics.
I am confused. Something has gone terribly wrong with my inferential distance prediction model.
And I have no idea what you refer to.
Approximately this entire comment branch.
I don't know what you refer to either, but I can guess. The thing is, my guesses haven't been doing very well lately, so I would appreciate some feedback. Were you suggesting that you would have thought taw should have more easily understood your point, but he didn't (because the inferential distance between you was greater than expected)?
I admit I was being obscure, so I'm rather impressed that you followed my reasoning - especially since it included a reference that you may not be familiar with. I kept it obscure because I wanted the focus to be on my confusion while minimising slight to taw.
Actually this whole thread has been eye-opening and/or confusing and/or surprising to me. I've been blinking and double-taking all over the place: "people think?", "that works?", etc. What surprised me most (and in a good way) was the degree to which all the comments have been a net positive. Political and personal topics so often become negative-sum, but this one didn't seem to.
The reference class of "people who claimed to be saving the world and X" has exactly the same number of successes as the reference class of "people who claimed to be saving the world and not X", for every X.
The class with X will be smaller, so you could argue that the evidence against Eliezer is weaker (0 successes in 1000 tries vs 0 successes in 1000000 tries), but every such X needs to be supported by evidence, by Occam's razor (or your favourite equivalent). Otherwise you can take X = "wrote Harry Potter fanfiction" and ignore pretty much all past failures.
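A minimal sketch of the "weaker evidence" point, using the illustrative counts above (the function name and exact numbers are mine, not from the thread), with the same Laplace estimator as before:

```python
def laplace_estimate(successes: int, trials: int) -> float:
    """Posterior mean success rate under a uniform prior
    (Laplace's rule of succession)."""
    return (successes + 1) / (trials + 2)

# Narrow class "claimed to save the world and X": 0 successes in 1000 tries.
print(laplace_estimate(0, 1_000))       # ~1.0e-3

# Broad class "claimed to save the world": 0 successes in 1,000,000 tries.
print(laplace_estimate(0, 1_000_000))   # ~1.0e-6
```

The narrower class yields an upper bound a thousand times looser, which is exactly why the choice of X itself needs evidential support.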
A million? The only source of that quantity of would-be saviours I can think of is One True Way proselytising religions, but those millions are not independent -- Christianity and Islam are it.
There has been at least one technological success, so that's a success rate of 1 out of 3, not 0 out of a million.
But the whole argument is wrong. Many claimed to fly and none succeeded -- until someone did. Many claimed transmutation and none succeeded -- until someone did. Many failed to resolve the problem of Euclid's 5th postulate -- until someone did. That no-one has succeeded at a thing is a poor argument for saying the next person to try will also fail (and an even worse one for saying the thing will never be done). You say "without further information", but presumably you think this case falls within that limitation, or you would not have made the argument.
So there is no short-cut to judging the claims of a messianic zealot. You have to do the leg-work of getting that "further information": studying his reasons for his claims.
Just for a starter:
And for every notable prophet or peace activist or whatever there are thousands forgotten by history.
And if you count Petrov (it's not obvious why, since he didn't save the world), in any case he wasn't claiming beforehand that he was going to save the world, so P(saved the world | claimed to be a world-savior) is less than P(saved the world | didn't claim to be a world-savior).
You seem to be horribly confused here. I'm not arguing that nobody will ever save the world, just that a particular person claiming to is extremely unlikely.
Given how low the chance is, I'll pass.
You should count Bacon, who believed himself -- accurately -- to be taking the first essential steps toward understanding and mastery of nature for the good of mankind. If you don't count him on the grounds that he wasn't concerned with existential risk, then you'd have to throw out all prophets who didn't claim that their failure would increase existential risk.
Accurately? Bacon doesn't seem to have had any special impact on anything, or on existential risks in particular.
Man, I hope you don't mean that.
He believed that the scientific method he developed and popularized would improve the world in ways that were previously unimaginable. He was correct, and his life accelerated the progress of the scientific revolution.
The claim may be weaker than a claim to help with existential risk, but it still falls into your reference class more easily than a lot of messiahs do.
This looks like a drastic overinterpretation. He seems like just another random philosopher: he didn't "develop the scientific method"; empiricism was far older than Bacon and modern science far more recent; and there's little basis for even claiming a radically discontinuous "scientific revolution" around Bacon's time.
I'll give you more than two, but that still doesn't amount to millions, and not all of those claimed to be saving the world. But now we're into reference class tennis. Is lumping Eliezer in with people claiming to be god more useful than lumping him in with people who foresee a specific technological existential threat and are working to avoid it?
Of course, but the price of the Spectator's Argument is that you will be wrong every time someone does save the world. That may be the trade you want to make, but it isn't an argument for anyone else to do the same.
Unlike Eliezer, I refuse to see this as a bad thing. Reference classes are the best tool we have for thinking about rare events.
You mean like people protesting nuclear power, GMOs, and LHC? Their track record isn't great either.
How so? I'm not saying it's entirely impossible that Eliezer or someone else who looks like a crackpot will actually save the world, just that it's extremely unlikely.
This is ambiguous.
The most likely parse means: It's nearly certain that not one person in the class [*] will turn out to actually save the world.
This is extremely shaky.
Or, you could mean: take any one person from that class. That one person is extremely unlikely to actually save the world.
This is uncontroversial.
[*] the class of all the people who would seem like crackpots if you knew them when (according to them) they're working to save the world, but before they actually get to do it (or fail, or die first without the climax ever coming).
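A toy computation of why the two parses come apart (the numbers are purely illustrative, not anyone's actual estimates): with a million independent would-be world-savers, each with a one-in-a-million chance, any single one almost certainly fails, yet the chance that at least one succeeds is large.

```python
n = 10**6   # hypothetical size of the "crackpot" class
p = 1e-6    # hypothetical chance that any single member succeeds

# Parse 2 (uncontroversial): a given individual almost certainly fails.
print(1 - p)               # 0.999999

# Parse 1 (shaky): at least one success in the whole class is ~63% likely.
print(1 - (1 - p) ** n)    # ~0.632, i.e. about 1 - 1/e
```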
I agree, but Eliezer strongly rejects this claim. Probably by making a reference class for just himself.
Because you are making a binary decision based on that estimate ("Given how low the chance is, I'll pass").
With that rule, you will always make that decision, always predicting that the unlikely will not happen, until the bucket goes to the well once too often.
Let me put this the other way round: on what evidence would you take seriously someone's claim to be doing effective work against an existential threat? Of course, first there would have to be an existential threat, and I recall from the London meetup I was at that you don't think there are any, although that hasn't come up in this thread. I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not whether you eventually updated from that position.
Eliezer's claim is not that he's doing effective work; his claim is pretty much to be a messiah saving humanity from super-intelligent paperclip optimizers. That requires far more evidence. Ridiculously more, because you not only have to show that his work reduces some existential threat, but also that it doesn't increase some other threat to a larger degree (pro-technology and anti-technology crowds both suffer from this - it's not obvious who's increasing and who's decreasing existential threats). You might as well ask me what evidence I would need to take seriously someone's claim to be the second coming of Jesus - in both cases it would need to be truly extraordinary evidence.
Anyway, the best-understood kind of existential threat is asteroid impacts, and there are people who try to do something about them, some even in the US Congress. I see a distinct lack of messiah complexes and personality cults there, very much unlike the AI crowd, which seems to consist mostly of people with delusions of grandeur.
Is there any other uncontroversial case like that?
The outcome showed that Aumann was wrong, mostly.
Yes, if you accept religious lunatics as your reference class.
Try peak-oil/anti-nuclear/global-warming/etc. activists then? They tend to claim that their movement saves the world, not that they do so personally, but I'm sure I could find a sufficient number of them who also had some personality cult thrown in.
Sure, but that would 1) reduce your 1/100000 figure, especially if you take only the leaders of said movements. And I would not find claims of saving the world by anti-nuke scientists in, say, the 1960s preposterous.
I think that if you accept that AGI is "near", that FAI is important to attempt in order to prevent disaster from it, and that EY was at the very least the person who brought the spotlight to the problem (which is a fact), you can end up thinking that he might actually make a difference.
Yeah, I'm tickled by the estimate that so far 0 people have saved the world. How do we know that? The world is still here, after all.
Eliezer has already placed a Go stone on that intersection, it turns out.
As the comments discuss, that was not an extinction event, barring further burdensome assumptions about nuclear winter or positive feedbacks of social collapse.
In any case Wikipedia disagrees with this story.
No, the Permanent Mission of the Russian Federation to the United Nations disagrees with this story, and Wikipedia quotes that disagreement. The very next section explains why that disagreement may be incorrect.
Do you have any candidates in mind, or some plausible scenario how the world might have been saved by a single person without achieving due prominence?
I already did: there was a huge number of such movements, most of them highly obscure (not unlike Eliezer). I'd expect some power-law distribution in prominence, so for every one we've heard about there'd be far more we haven't.
I don't, and the link from AGI to FAI is as weak as the link from oil-production statistics to the civilizational collapse peak-oilers promised.
Ok, how close we are to AGI is a prior I do not care to argue about, but don't you think AGI is a concern? What do you mean by a weak link?
The part where development of AGI fooms immediately into superintelligence and destroys the world. The evidence for it is not even circumstantial; it is fictional.
Ok, of course it's fictional - hasn't happened yet!
Still, when I imagine something that is smarter than the man who created it, it seems it would be able to improve itself. I would bet on that; I do not see a strong reason why this would not happen. What about you? Are you with Hanson on this one?