Vladimir_Nesov comments on Best career models for doing research? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Here, I'm talking about factual explanation, not normative estimation. The actions are explained by holding a certain belief, better than by alternative hypotheses. Whether they were correct is a separate question.
You'd need to explain this step in more detail. I was discussing a communication protocol, where does "testing against reality" enter that topic?
Ah, I thought you were talking about whether the decision solved the problem, not whether the failed decision was justifiable in terms of the theory.
I do think that if a decision theory leads to quite as spectacular a failure in practice as this one did, then the decision theory is strongly suspect.
As such, whether the decision was justifiable is less interesting except in terms of revealing the thinking processes of the person doing the justification (clinginess to pet decision theory, etc).
"Belief in the decision being a failure is an argument against adequacy of the decision theory", is simply a dual restatement of "Belief in the adequacy of the decision theory is an argument for the decision being correct".
This statement appears confusing to me: you appear to be saying that if I believe strongly enough in the forbidden post having been successfully suppressed, then censoring it will not in fact have caused it to propagate widely, nor will it have become an object of fascination and caused a reputational hit to LessWrong and hence SIAI. This, of course, makes no sense.
I do not understand how this matches with the effects observable in reality, where these things do in fact appear to have happened. Could you please explain how one tests this result of the decision theory, if not by matching it against what actually happened? That being what I'm using to decide whether the decision worked or not.
Keep in mind that I'm talking about an actual decision and its actual results here. That's the important bit.
No, I think you're nitpicking to dodge the question, and looking for a more convenient world.
I think at this point it's clear that you really can't be expected to give a straight answer. Well done, you win!
Have you tried?
I read your comment, understood the error you made, and it was about not seeing the picture clearly enough. If you describe the situation in terms of the components I listed, I expect you'll see what went wrong. If you don't oblige, I'll probably describe the solution tomorrow.
Edit in response to severe downvoting: Seriously? Is it not allowed to entertain exercises about a conversational situation? (Besides, I was merely explaining an exercise given in another comment.) Believe me, an argument can be a puzzle to understand, and not a fight. If clumsy attempts to understand are discouraged, how am I supposed to develop my mastery?
If you're genuinely unaware of the status-related implications of the way you phrased this comment, and/or of the fact that some people rate those kinds of implications negatively, let me know and I'll try to unpack them.
If you're simply objecting to them via rhetorical question, I've got nothing useful to add.
If it matters, I haven't downvoted anyone on this thread, though I reserve the right to do so later.
I understand that status-grabbing phrasing can explain why downvotes were in fact made, but I object to their being made for that reason here, on Less Wrong. If I turn out to be wrong, then sure. There could be other reasons besides that.
Likely this, but it's not completely clear to me what you mean.
Not as an affiliation signal, since the question is about properties of my comments, not of the people who judge them. But since you are not one of the downvoters, this says that you have less access to the reasons behind their actions than if you were one of them.
I am not one of the downvoters you are complaining about but the distinction is a temporal one, not one of differing judgement. I have since had the chance to add my downvote. That suggests my reasoning may have a slightly higher correlation at least. :)
Something I have observed is that people can often get away with status grabbing ploys but they will be held to a much higher standard while they are doing so. People will extend more grace to you when you aren't insulting them, bizarrely enough.
I often observe that the one state of mind that leads me to sloppy thinking is contempt. Contempt is also the signal you were laying on thickly in your comments here, and the thinking displayed therein was commensurately shoddy. Not in the sense that it was internally inconsistent, but in that it didn't relate at all well to the comments you were presuming to reply to. (Whether the 'contempt' causality is, in fact, at play is not important - it is the results that get the votes.)
I wouldn't normally make such critiques but rhetorically or not you asked for one and this is a sincere reply.
I meant that if you were asking the question as a way of expressing your objections I had nothing useful to add.
Yes. Of course, if the question isn't about the people who judge the comments, then access to those people's motivations isn't terribly relevant to the question.
To be fair, I think the parent of the downvoted comment also has status implications:
It's a serious accusation hurled at the wrong type of guy IMO - Vladimir probably takes the objectivity award on this forum. I think his response was justified and objective, as usual.
When someone says "look, here is this thing you did that led to these clear problems in reality" and the person they're talking to answers "ah, but what is reality?" then the first person may reasonably consider that dodging the question.
If you believe that "decision is a failure" is evidence that the decision theory is not adequate, you believe that "decision is a success" is evidence that the decision theory is adequate.
Since a decision theory's adequacy is determined by how successful its decisions are, you appear to be saying "if a decision theory makes a bad decision, it is a bad decision theory" which is tautologically true.
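The symmetry being pointed at here is just conservation of expected evidence: if observing a failure lowers your credence in the theory's adequacy, observing a success must raise it, and the prior is the expectation over both outcomes. A minimal sketch, with purely illustrative numbers (none of these probabilities come from the actual situation):

```python
# Illustrative (made-up) numbers: prior that the decision theory is adequate,
# and assumed likelihoods of a successful decision under each hypothesis.
p_adequate = 0.5
p_success_given_adequate = 0.8
p_success_given_inadequate = 0.4

# Marginal probability of a successful decision.
p_success = (p_success_given_adequate * p_adequate
             + p_success_given_inadequate * (1 - p_adequate))  # 0.6

# Posterior after observing a success (Bayes' rule): credence goes up.
post_success = p_success_given_adequate * p_adequate / p_success  # 2/3

# Posterior after observing a failure: credence goes down.
post_failure = ((1 - p_success_given_adequate) * p_adequate
                / (1 - p_success))  # 0.25

# Conservation of expected evidence: the posteriors average back to the prior.
expected_posterior = (p_success * post_success
                      + (1 - p_success) * post_failure)  # 0.5
```

So "failure is evidence against adequacy" and "success is evidence for adequacy" are not two separate commitments; either one entails the other.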
Correct me if I'm wrong, but Vladimir_Nesov is not interested in whether the decision theory is good or bad, so restating an axiom of decision theory evaluation is irrelevant.
The decision was made by a certain decision theory. The factual question "was the decision-maker holding to this decision theory in making this decision?" is entirely unrelated to the question "should the decision-maker hold to this decision theory given that it makes bad decisions?". To suggest otherwise blurs the prescriptive/descriptive divide, which is what Vladimir_Nesov is referring to when he says
I believe that if the decision theory clearly led to an incorrect result (which it clearly did in this case, despite Vladimir Nesov's energetic equivocation), then it is important to examine the limits of the decision theory.
As I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist), and is therefore more robust. If so, this failure in a non-hypothetical situation suggests a flaw in that robustness, and TDT should be regarded as less reliable than it was previously.
Assuming the decision was made by robust TDT.
The decision you refer to here... I'm assuming this is still the Eliezer->Roko decision? (This discussion is not the most clearly presented.) If so, for your purposes you can safely consider 'TDT/CDT' irrelevant. While acausal (TDT-ish) reasoning is at play in establishing a couple of the important premises, it is not relevant to the reasoning that you actually seem to be criticising.
i.e. The problems you refer to here are not the fault of TDT or of abstract reasoning at all - just plain old human screw-ups with hasty reactions.
That's the one, that being the one specific thing I've been talking about all the way through.
Vladimir Nesov cited acausal decision theories as the reasoning here and here - if not TDT, then a similar local decision theory. If that is not the case, I'm sure he'll be along shortly to clarify.
(I stress "local" to note that they suffer a lack of outside review or even notice. A lack of these things tends not to work out well in engineering or science either.)
Good, that had been my impression.
Independently of anything that Vladimir may have written, it is my observation that the 'TDT-like' stuff was mostly relevant to the question "is it dangerous for people to think X?" Once that had been established, the rest of the decision making - what to do after already having reached that conclusion - was for the most part just standard unadorned human thinking. From what I have seen (including your references to reputational self-sabotage by SIAI), you were more troubled by the latter parts than the former.
Even if you do care about the more esoteric question "is it dangerous for people to think X?" I note that 'garbage in, garbage out' applies here as it does elsewhere.
(I just don't like to see TDT unfairly maligned. Tarnished by association as it were.)
See section 7 of the TDT paper (you'll probably have to read from the beginning to familiarize yourself with concepts). It doesn't take Omega to demonstrate that CDT errs, it takes mere ability to predict dispositions of agents to any small extent to get out of CDT's domain, and humans do that all the time. From the paper:
I wouldn't use this situation as evidence for any outside conclusions. Right or wrong, the belief that it's right to suppress discussion of the topic entails also believing that it's wrong to participate in that discussion or to introduce certain kinds of evidence. So while you may believe that it was wrong to censor, you should also expect a high probability of unknown unknowns that would mess up your reasoning if you tried to take inferential steps from that conclusion to somewhere else.
I haven't been saying I believed it was wrong to censor (although I do think that it's a bad idea in general). I have been saying I believe it was stupid and counterproductive to censor, and that this is not only clearly evident from the results, but should have been trivially predictable (certainly to anyone who'd been on the Internet for a few years) before the action was taken. And if the LW-homebrewed Timeless Decision Theory, which lacks outside review, was used to reach this bad decision, then TDT was disastrously inadequate (not just slightly inadequate) for application to a non-hypothetical situation, and this lessens the expectation that TDT will be adequate for future non-hypothetical situations. And that this should also be obvious.
Yes, the attempt to censor was botched and I regret the botchery. In retrospect I should have not commented or explained anything, just PM'd Roko and asked him to take down the post without explaining himself.
This is actually quite comforting to know. Thank you.
(I still wonder WHAT ON EARTH WERE YOU THINKING at the time, but you'll answer as and when you think it's a good idea to, and that's fine.)
(I was down the pub with ciphergoth just now and this topic came up ... I said the Very Bad Idea sounded silly as an idea, he said it wasn't as silly as it sounded to me with my knowledge. I can accept that. Then we tried to make sense of the idea of CEV as a practical and useful thing. I fear if I want a CEV process applicable by humans I'm going to have to invent it. Oh well.)
And I would have taken it down. Most importantly, my bad for not asking first.
It is evidence for said conclusions. Do you mean, perhaps, that it isn't evidence that is strong enough to draw confident conclusions on its own?
To follow from the reasoning the embedded conclusion must be 'you should expect a higher probability'. The extent to which David should expect higher probability of unknown unknowns is dependent on the deference David gives to the judgement of the conscientious non-participants when it comes to the particular kind of risk assessment and decision making - ie. probably less than Jim does.
(With those two corrections in place the argument is reasonable.)
I agree, and in this comment I remarked that we were assuming this statement all along, albeit in a dual presentation.