wedrifid comments on Best career models for doing research? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Repeating "But I say so!" with increasing emphasis until it works. Been taking debating lessons from Robin?
It seems to me that the natural effect of a group leader persistently arguing from his own authority is Evaporative Cooling of Group Beliefs. This is of course conducive to confirmation bias and a corresponding epistemological skewing for the leader; things which seem undesirable for somebody in Eliezer's position. I really wish Eliezer were receptive to taking this consideration seriously.
The thing is he usually does. That is one thing that has in the past set Eliezer apart from Robin and impressed me about Eliezer. Now it is almost as though he has embraced the evaporative cooling concept as an opportunity instead of a risk and gone and bought himself a blowtorch to force the issue!
Maybe, given the credibility he has accumulated on all these other topics, you should be willing to trust him on the one issue on which he is asserting this authority and on which it is clear that if he is right, it would be bad to discuss his reasoning.
The well known (and empirically verified) weakness in experts of the human variety is that they tend to be systematically overconfident when it comes to judgements that fall outside their area of exceptional performance - particularly when the topic is one just outside the fringes.
When it comes to blogging about theoretical issues of rationality Eliezer is undeniably brilliant. Yet his credibility specifically when it comes to responding to risks is rather less outstanding. In my observation he reacts emotionally and starts making rookie mistakes of rational thought and action. To the point where I've very nearly responded 'Go read the sequences!' before remembering that he was the flipping author and so should already know better.
Also important is the fact that elements of the decision are about people, not game theory. Eliezer hopefully doesn't claim to be an expert when it comes to predicting or eliciting optimal reactions in others.
We were talking about his credibility in judging whether this idea is a risk, and that is within his area of expertise.
Was it not clear that I do not assign particular credence to Eliezer when it comes to judging risks? I thought I expressed that with considerable emphasis.
I'm aware that you disagree with my conclusions - and perhaps even my premises - but I can assure you that I'm speaking directly to the topic.
I took wedrifid's point as being that whether EY is right or not, the bad effect described happens. This is part of the lose-lose nature of the original problem (what to do about a post that hurt people).
I do not consider this strong evidence as there are many highly intelligent and productive people who hold crazy beliefs:
I could continue this list with people like Ted Kaczynski or Roger Penrose. I just wanted to show that intelligence and rational conduct do not rule out the possibility of being wrong about some belief.
How extensively have you searched for experts who made correct predictions outside their fields of expertise? What would you expect to see if you just searched for experts making predictions outside their field of expertise and then determined if that prediction were correct? What if you limited your search to experts who had expressed the attitude Eliezer expressed in Outside the Laboratory?
"Rule out"? Seriously? What kind of evidence is it?
You extracted the "rule out" phrase from the sentence:
From within the common phrase 'do not rule out the possibility' no less!
You then make a reference to '0 and 1s not probabilities' with exaggerated incredulity.
To put it mildly this struck me as logically rude and in general poor form. XiXiDu deserves more courtesy.
None of this affects my point that ruling out the possibility is the wrong (in fact, impossible) standard.
Not exaggerated. XiXiDu's post did seem to be saying: here are these examples of experts being wrong so it is possible that an expert is wrong in this case, without saying anything useful about how probable it is for this particular expert to be wrong on this particular issue.
You have made an argument accusing me of logical rudeness that, quite frankly, does not stand up to scrutiny.
Better evidence than I've ever seen in support of the censored idea. I have these well-founded principles, free speech and transparency, and weigh them against the evidence I have in favor of censoring the idea. That evidence is merely (1) Yudkowsky's past achievements, (2) his output, and (3) his intelligence. That intelligent people have been and are wrong about certain ideas while still being productive and right about many other ideas is evidence to weaken #3. That people lie and deceive to get what they want is evidence against #1 and #2 and in favor of transparency and free speech, which are both already more likely to have a positive impact than the forbidden topic is to have a negative impact.
And what are you trying to tell me with this link? I haven't seen anyone stating numeric probability estimates regarding the forbidden topic. And I won't state one either; I'll just say that it is subjectively improbable enough to ignore, because there are possibly too many very-very-low-probability events to take into account (for every being that will harm me if I don't do X, there is another being that will harm me if I do X, and they cancel each other out). But if you'd like to pull some number out of thin air, go ahead. I won't, because I don't have enough data to even calculate the probability of AI going FOOM versus a slow development.
You have failed to address my criticisms of your points: that you are seeking out only examples that support your desired conclusion, and that you are ignoring details that would allow you to construct a narrower, more relevant reference class for your outside view argument.
I was telling you that "ruling out the possibility" is the wrong (in fact, impossible) standard.
Only now I understand your criticism. I do not seek out examples to support my conclusion but to weaken your argument that one should trust Yudkowsky because of his previous output. I'm aware that Yudkowsky can very well be right about the idea but do in fact believe that the risk is worth taking. Have I done extensive research on how often people in similar situations have been wrong? Nope. No excuses here, but do you think there are comparable cases of predictions that proved to be reliable? And how much research have you done in this case and about the idea in general?
I don't, I actually stated a few times that I do not think that the idea is wrong.
Seeking out just examples that weaken my argument, when I never predicted that no such examples would exist, is the problem I am talking about.
What made you think that supporting your conclusion and weakening my argument are different things?
You shouldn't seek to "weaken an argument", you should seek what is the actual truth, and then maybe ways of communicating your understanding. (I believe that's what you intended anyway, but think it's better not to say it this way, as a protective measure against motivated cognition.)
Thank you for pointing this out.
Taleb quote doesn't qualify. (I won't comment on others.)
I should have made it clearer that it is not my intention to indicate that I believe that those people, or crazy ideas in general, are wrong. But there are a lot of smart people out there who'll advocate opposing ideas. Using their reputation for high intelligence to follow through on their ideas is, in my opinion, not a very good idea in itself. I could just believe Freeman Dyson that existing simulation models of climate contain too much error to reliably predict future trends. I could believe Peter Duesberg that HIV does not cause AIDS; after all, he is a brilliant molecular biologist. But I just do not think that any amount of reputation is enough evidence to believe extraordinary claims uttered by such people. And in Yudkowsky's case there isn't even much reputation, and no great achievements at all, that would justify some strong belief in his infallibility. What does exist in Yudkowsky's case seems to be strong emotional commitment. I just can't tell if he is honest. If he really believes that he's working on a policy for some future superhuman intelligence that will rule the universe, then I'm going to be very careful. Not because it is wrong, but because such beliefs imply huge payoffs. Not that I believe he is the disguised Dr. Evil, but can we be sure enough to just trust him with it? Censorship of certain ideas bears more evidence against his honesty than in favor of it.
Huh, so there was a change? Curious. Certainly looking over some of Eliezer's past writings there are some that I identify with a great deal.
Far be it from me to be anything but an optimist. I'm going with 'exceptions'. :)
I don't think this rhetoric is applicable. Several very intelligent posters have deemed the idea dangerous; a very intelligent you deems it safe. You argue they are wrong because it is 'obviously safe'.
Eliezer is perfectly correct to point out that, on the whole of it, 'obviously it is safe' just does not seem like strong enough evidence when it's up against a handful of intelligent posters who appear to have strong convictions.
Pardon? I don't believe I've said any such thing here or elsewhere. I could of course be mistaken - I've said a lot of things and don't recall them all perfectly. But it seems rather unlikely that I did make that claim because it isn't what I believe.
This leads me to the conclusion that...
... This rhetoric isn't applicable either. ;)
I should have known I wouldn't get away with that, eh? I actually don't know if you oppose the decision because you think the idea is safe, or because you think that censorship is wronger than the idea is dangerous, or whether you even oppose the decision at all and were merely pointing out appeals to authority. If you could fill me in on the details, I could re-present the argument as it actually applies.
Thank you, and yes, I can see the point behind what you were actually trying to say. It's just important to me that I am not misrepresented (even though you had no malicious intent).
There are obvious (well, at least theoretically deducible, based on the kind of reasoning I tend to discuss or that used by Harry!MoR) reasons why it would be unwise to give a complete explanation of all my reasoning.
I will say that 'censorship is wronger' is definitely not the kind of thinking I would use. Indeed, I've given examples of things that I would definitely censor. Complete with LOTR satire if I recall. :)