Vladimir_Nesov comments on What I've learned from Less Wrong - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (232)
I wonder if the main reason a post like Yvain's is upvoted is not that it is great but that everyone who reads it instantly agrees. Of course it is great in the sense that it sums up the issue in a very clear and concise manner. But has it really changed your mind? It seems natural to me to think that way; the post states what I always thought but was never able to express that clearly, and that's why I like it. The problem is, how do we get people who disagree to read it? I recently introduced a neuroscientist to Less Wrong via that post. He read it and agreed with everything. Then he said it's naive to think that this will be adopted any time soon. What he meant is that all this wit is useless if we don't get the right people to digest it, not people like us who agree anyway, probably before ever reading that post in the first place.
Regarding Eliezer's post, I even have my doubts that it is very useful for confused nerdy folks. The gist of that post seems to be that people should pinpoint their disagreements before they talk at cross-purposes. But it gives the impression that propositional assertions do not yield sensory experience. Yet human agents are physical systems, just as trees are. If you tell them certain things you can expect certain reactions. I believe that article might be inconsistent with other assertions made in this community, like taking the logical implications of general beliefs seriously. The belief that the decimal expansion of Pi is infinite will never pay rent in future anticipations.
I'm also skeptical about another point in the original post, namely that most people's beliefs aren't worth considering. This, I believe, might be counterproductive. Consider that most people express this attitude toward existential risks from artificial intelligence. So if you link people to that one post, out of context, and then they hear about the SIAI, what might they conclude if they take that post seriously?
The point about truth is another problematic idea. I really enjoyed The Simple Truth, but in the light of all else I've come across I'm not convinced that truth is a useful term to adopt anywhere but in the most informal discussions. If you are like me and grew up in a religious environment, you were told that there exists absolute truth. Then, if you have your doubts and start to learn more, you are told that skepticism is an epistemological position, and 'there is no truth'/'there is truth' are metaphysical/linguistic positions. When you learn even more and come across concepts like the uncertainty principle, Gödel's incompleteness theorems, the halting problem, or Tarski's undefinability theorem, the nature of truth becomes even more uncertain. Digging even deeper won't revive the naive view of truth either. And that is just the tip of the iceberg, as you will see once you learn about Solomonoff induction and Minimum Message Length.
ETA: Fixed the formatting. My last paragraph was eaten before!
That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally). Progress is made by putting such arguments into words, so that other people can follow them faster and more reliably than they were originally arrived at, even if arriving at them is in some contexts almost inevitable.
Additionally, clarity offered by a carefully thought-through exposition isn't something to expect without a targeted effort. This clarity can well serve as the enabling factor for making the next step.
And to avoid people giving in to their motivated cognition, you present the steps in order, with the conclusion at the end. To paraphrase Yudkowsky's explanation of Bayes' Theorem:
This method of presenting great arguments is probably the most important thing I learned from philosophy, incidentally.
"That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally)."
Also how great propaganda works.
If you are going to describe a "great argument", I think you need to put more emphasis on its being tied to the truth rather than being agreeable. I would say truly great arguments tend not to be agreeable, because the real world is so complex that descriptions without lots of nuance and caveats are pretty much always wrong, whereas simplicity is highly appealing and has a low cognitive processing cost.
Oh. I only agree with argument steps that are truthful.
There are nevertheless also conclusions that you agreed with all along. Sometimes hindsight bias makes you think you agreed all along when you really didn't. But other times you genuinely agreed all along.
You can skip to the end of Yvain's post (the one referenced here) and read the summary, assuming you haven't read the post already. Specifically, this statement: "We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective." If you agree with this statement without first reading Yvain's argument for it, then that's evidence that you already agreed with Yvain's conclusions without needing to be led gradually, step by step, through his long argument.