Vladimir_Nesov comments on Best career models for doing research? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's a difficult question, and a discussion in comments won't do it justice (too much effort required from you as well, not just me). Read the posts if you will (see the "Decision theory" section).
Also keep in mind this comment: we are talking about what justifies attaining a very weak belief, and this isn't supposed to feel like agreement with a position, in fact it should feel like confident disagreement. Most of the force of the absurd decision is created by moral value of the outcome, not by its probability.
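The point that the force comes from the moral value of the outcome rather than its probability can be sketched numerically. All figures below are made-up assumptions for illustration, not anything from the discussion:

```python
# Toy illustration: a tiny probability attached to an enormous stake can
# dominate an expected-value comparison, even while the belief itself
# feels like confident disagreement. All numbers are invented.
p_catastrophe = 1e-9          # very weak belief that the bad outcome occurs
value_catastrophe = -1e15     # assumed moral value assigned to the outcome
cost_of_caution = -100        # assumed cost of the cautious action

ev_ignore = p_catastrophe * value_catastrophe   # roughly -1e6
ev_cautious = cost_of_caution                   # -100

# Caution wins the comparison despite the near-zero probability.
print(ev_ignore < ev_cautious)  # True
```

The structure, not the specific numbers, is the point: scaling the outcome's value moves the comparison in a way that no honest description of the probability as "very weak" undoes.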
(I obviously recognize that this decision aligns with what you'd expect from a motivation to cover one's incorrect reasons. Sometimes that happens by accident.)
I'm slowly moving through the sequences, I'll comment back here if/when I finish the posts as well.
In the meantime, I've been told that if you can't explain something simply then you don't really understand it... wanna take a fast and loose whack?
edit: did you drastically edit your comment?
I believe this is utter nonsense, a play on the meaning of the word "explain". If explaining is to imply understanding by the recipient, then clearly fast explanation of a great many things is not possible; otherwise education wouldn't be necessary. Creating an illusion of understanding, or equivalently a shallow understanding, may of course be manageable, and the less educated and rational the victim, the easier it is.
Interesting. I've found that intuitive explanations for relatively complex things are generally easier than long, exact explanations.
Basically fast explanations use hardware accelerated paths to understanding (social reasoning, toy problems that can be played with, analogies), and then leave it to the listener to bootstrap themselves. If you listen to the way that researchers talk, it's basically analogies and toy problems, with occasional black-board sessions if they're mathy.
It's hard to understand matrix inversion by such a route, which I think you're saying is roughly what's required to understand why you believe this censorship to be rational.
But, for the record, it ain't no illusory understanding when I talk fast with a professor or fellow grad student.
Certainly easier, but they don't give comparable depth of understanding or justify comparable certainty in statements about the subject matter. Also, the dichotomy is a false one, since detailed explanations are ideally accompanied by intuitive ones to improve understanding.
What we were talking about instead is when you have only a fast informal explanation, without the detail.
It's because they already have the rigor down. See this post by Terence Tao.
Yes, I do that, sorry. It's what I consider an improvement over the original.
I understand your argument re: very weak belief... but it seems silly.
How is this different from positing a very small chance that a future dictator will nuke the planet unless I mail a $10 donation to Greenpeace?
Do you have any reason to believe that it's more likely that a future dictator, or anyone else, will nuke the planet if you don't send a donation to Greenpeace than if you do?
I agree that you are not justified in seeing a difference, unless you understand the theory of acausal control to some extent and agree with it. But when you are considering a person who agrees with that theory and makes a decision based on it, agreement with the theory fully explains that decision; this is a much better explanation than most of the stuff people are circulating here. At that point, disagreement about the decision must be resolved by arguing about the theory, but that isn't easy.
You appear to be arguing that a bad decision is somehow a less bad decision if the reasoning used to get to it was consistent ("carefully, correctly wrong").
No, because the decision is tested against reality. Being internally consistent may be a reason for doing something that others can see is just going to be counterproductive, as in the present case, but it doesn't grant a forgiveness pass from reality.
That is: in practical effects, sincere stupidity and insincere stupidity are both stupidity.
You even say this above ("There is only one proper criterion to anyone's actions, goodness of consequences"), making your post here even stranger.
(In fact, sincere stupidity can be more damaging, as in my experience it's much harder to get the person to change their behaviour or the reasoning that led to it - they tend to cling to it and justify it when the bad effects are pointed out to them, with more justifications in response to more detail on the consequences of the error.)
Think of it as a trolley problem. Leaving the post is a bad option; the consequences of removing it are then the question: which is actually worse and results in the idea propagating further? If you can prove in detail that a decision theory concludes removing it will make it propagate less, you've just found where the decision theory fails.
Removing the forbidden post propagated it further, and made both the post itself and the circumstances of its removal objects of fascination. It has also diminished the perceived integrity of LessWrong, as we can no longer be sure posts are not being quietly removed as well as loudly; this also diminished the reputation of SIAI. It is difficult to see either of these as working to suppress the bad idea.
More importantly it removed lesswrong as a place where FAI and decision theory can be discussed in any depth beyond superficial advocacy.
The problem is more than the notion that secret knowledge is bad - it's that secret knowledge increasingly isn't possible, and increasingly isn't knowledge.
If it's science, you almost can't do it on your own and you almost can't do it as a secret. If it's engineering, your DRM or other constraints will last precisely as long as no-one is interested in breaking them. If it's politics, your conspiracy will last as long as you aren't found out and can insulate yourself from the effects ... that one works a bit better, actually.
I don't believe this is true to any significant extent. Why do you believe that? What kind of questions are not actually discussed that could've been discussed otherwise?
You are serious?
... just from a few seconds' brainstorming. These are the kinds of questions that cannot be discussed without, at the very least, significant bias due to the threat of personal abuse and censorship if you are not careful. I am extremely wary of even trivial inconveniences.
Yes.
This doesn't seem like an interesting question, where it intersects the forbidden topic. We don't understand decision theory well enough to begin usefully discussing this. Most directions of discussion about this useless question are not in fact forbidden and the discussion goes on.
We don't formally understand even the usual game theory, let alone acausal trade. It's far too early to discuss its applications.
It wasn't Vladimir_Nesov's interest that you feigned curiosity in, nor is it your place to decide what things others are interested in discussing. They are topics at least as relevant as 'Sleeping Beauty', which people have merrily prattled on about for decades.
That you support the censorship of certain ideas by no means requires you to exhaustively challenge every possible downside of said censorship. Even if the decision were wise and necessary, there are allowed to be disappointing consequences. That's just how things are sometimes.
The zeal here is troubling.
What do you mean by "decide"? Whether they are interested in that isn't influenced by my decisions, and I can well think about whether they are, or whether they should be (i.e. whether there is any good to be derived from that interest).
I opened this thread by asking,
You answered this question, and then I said what I think about that kind of questions. It wasn't obvious to me that you didn't think of some other kind of questions that I find important, so I asked first, not just rhetorically.
What you implied in this comment seems very serious, and it was not my impression that something serious was taking place as a result of the banning incident, so of course I asked. My evaluation of whether the topics excluded (that you've named) are important is directly relevant to the reason your comment drew my attention.
The other way around. I don't "support censorship", instead I don't see that there are downsides worth mentioning (besides the PR hit), and as a result I disagree that censorship is important. Of course this indicates that I generally disagree with arguments for the harm of the censorship (that I so far understood), and so I argue with them (just as with any other arguments I disagree with that are on topic I'm interested in).
No zeal, just expressing my state of belief, and not willing to yield for reasons other than agreement (which is true in general, the censorship topic or not).
(Should I lump everything in one comment, or is the present way better? I find it more clear if different concerns are extracted as separate sub-threads.)
That some topics are excluded is tautological, so it's important what kind of topics were. Thus, stating "nor is it your place to decide what things others are interested in discussing" seems to be equivalent to stating "censorship (of any kind) is bad!", which is not very helpful in the discussion of whether it's in fact bad. What's the difference you intended?
You do see the irony there I hope...
Would you have censored the information? If not, do you think it would be a good idea to discuss the subject matter on an external (public) forum? Would you be interested to discuss it?
No irony. You don't construct complex machinery out of very weak beliefs, but caution requires taking very weak beliefs into account.
Here, I'm talking about factual explanation, not normative estimation. The actions are explained by holding a certain belief, better than by alternative hypotheses. Whether they were correct is a separate question.
You'd need to explain this step in more detail. I was discussing a communication protocol, where does "testing against reality" enter that topic?
Ah, I thought you were talking about whether the decision solved the problem, not whether the failed decision was justifiable in terms of the theory.
I do think that if a decision theory leads to quite as spectacular a failure in practice as this one did, then the decision theory is strongly suspect.
As such, whether the decision was justifiable is less interesting except in terms of revealing the thinking processes of the person doing the justification (clinginess to pet decision theory, etc).
"Belief in the decision being a failure is an argument against adequacy of the decision theory", is simply a dual restatement of "Belief in the adequacy of the decision theory is an argument for the decision being correct".
This statement appears confusing to me: you appear to be saying that if I believe strongly enough in the forbidden post having been successfully suppressed, then censoring it will not have in fact caused it to propagate widely, nor will it have become an object of fascination and caused a reputational hit to LessWrong and hence SIAI. This, of course, makes no sense.
I do not understand how this matches with the effects observable in reality, where these things do in fact appear to have happened. Could you please explain how one tests this result of the decision theory, if not by matching it against what actually happened? That being what I'm using to decide whether the decision worked or not.
Keep in mind that I'm talking about an actual decision and its actual results here. That's the important bit.
No, I think you're nitpicking to dodge the question, and looking for a more convenient world.
I think at this point it's clear that you really can't be expected to give a straight answer. Well done, you win!
If you believe that "decision is a failure" is evidence that the decision theory is not adequate, you believe that "decision is a success" is evidence that the decision theory is adequate.
Since a decision theory's adequacy is determined by how successful its decisions are, you appear to be saying "if a decision theory makes a bad decision, it is a bad decision theory" which is tautologically true.
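The evidence symmetry invoked in this exchange is just ordinary Bayesian updating over a success/failure partition: if a failure lowers your credence that the theory is adequate, a success must raise it. A numeric sketch, where the prior and the likelihoods are illustrative assumptions:

```python
# Bayesian sketch of the duality: "failure is evidence against adequacy"
# and "success is evidence for adequacy" are the same claim.
# All probabilities below are assumed for illustration.
p_adequate = 0.5                    # prior that the theory is adequate
p_success_given_adequate = 0.9      # adequate theories usually succeed
p_success_given_inadequate = 0.4    # inadequate ones succeed less often

p_success = (p_adequate * p_success_given_adequate
             + (1 - p_adequate) * p_success_given_inadequate)

post_given_success = p_adequate * p_success_given_adequate / p_success
post_given_failure = (p_adequate * (1 - p_success_given_adequate)
                      / (1 - p_success))

# Success raises the posterior exactly when failure lowers it.
print(post_given_success > p_adequate)  # True
print(post_given_failure < p_adequate)  # True
```

This holds for any likelihoods with `p_success_given_adequate > p_success_given_inadequate`; the specific numbers only set the sizes of the two updates.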
Correct me if I'm wrong, but Vladimir_Nesov is not interested in whether the decision theory is good or bad, so restating an axiom of decision theory evaluation is irrelevant.
The decision was made by a certain decision theory. The factual question "was the decision-maker holding to this decision theory in making this decision?" is entirely unrelated to the question "should the decision-maker hold to this decision theory given that it makes bad decisions?". To suggest otherwise blurs the prescriptive/descriptive divide, which is what Vladimir_Nesov is referring to when he says
I believe that if the decision theory clearly led to an incorrect result (which it clearly did in this case, despite Vladimir Nesov's energetic equivocation), then it is important to examine the limits of the decision theory.
As I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Omega (who does not exist) and is therefore more robust. If so, this failure in a non-hypothetical situation suggests a flaw in its robustness, and it should be regarded as less reliable than it may have been regarded previously.
Assuming the decision was made by robust TDT.
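For context, the sense in which TDT is said to beat CDT against Omega comes from the standard Newcomb payoffs. A sketch, with the predictor's accuracy as an assumed parameter:

```python
# Newcomb's problem payoffs: box A is transparent ($1,000); box B holds
# $1,000,000 iff the predictor foresaw one-boxing. CDT two-boxes regardless;
# a theory that conditions on the predictor's accuracy one-boxes.
accuracy = 0.99  # assumed predictor accuracy

# Expected payoff if you one-box: the predictor usually foresaw it.
ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

# Expected payoff if you two-box: the predictor usually foresaw that too.
ev_two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000

# One-boxing dominates whenever accuracy exceeds roughly 0.5005.
print(ev_one_box > ev_two_box)  # True
```

Whether any of this transfers to a real decision made by a real person, as the comment above notes, depends on whether the decision was in fact an instance of the theory.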
If you're interested, we can also move forward as I did over here by simply assuming EY is right, and then seeing if banning the post was net positive.
It's not "moving forward", it's moving to a separate question. That question might be worth considering, but isn't generally related to the original one.
Why would the assumption that EY was right be necessary to consider that question?
I agree that it was net negative, specifically because the idea is still circulating, probably with more attention drawn to it than would happen otherwise. Which is why I started commenting on my hypothesis about the reasons for EY's actions, in an attempt to alleviate the damage, after I myself figured it out. But that it was in fact net negative doesn't directly argue that given the information at hand when the decision was made, it had net negative expectation, and so that the decision was incorrect (which is why it's a separate question, not a step forward on the original one).
More than enough information about human behavior was available at the time. Negative consequences of the kind observed were not remotely hard to predict.
Yes, quite likely. I didn't argue with this point, though I myself don't understand human behavior enough for that expectation to be obvious. I only argued that the actual outcome isn't a strong reason to conclude that it was expected.
I like the precision of your thought.
All this time I thought we were discussing if blocking future censorship by EY was a rational thing to do -- but it's not what we were discussing at all.
You really are in it for the details -- if we could find a way of estimating around hard problems to solve the above question, that's only vaguely interesting to you -- you want to know the answers to these questions.
At least that's what I'm hearing.
It sounds like the above was your way of saying you're in favor of blocking future EY censorship, which gratifies me.
I'm going to do the following things in the hope of gratifying you:
Writing up a post on Less Wrong for developing political muscles. I've noticed several other posters seem less than savvy about social dynamics, so perhaps a crash course is in order. (I know there are certainly several in the archives; I guarantee I'll bring several new insights [with references] to the table.)
Reread all your comments, and come back at these issues tomorrow night with a more exact approach. Please accept my apology for what I assume seemed a bizarre discussion, and thanks for thinking like that.
Night!
I didn't address that question at all, and in fact I'm not in favor of blocking anything. I came closest to that topic in this comment.
Your answer actually includes "you should try reading the sequences."
The reference to the sequences is not the one intended (clarified by explicitly referring to "Decision theory" section in the grandparent comment).