TimS comments on Our Phyg Is Not Exclusive Enough - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (513)
Let's be explicit here - your suggestion is that people like me should not be here. I'm a lawyer, and my mathematics education ended at Intro to Statistics and Advanced Theoretical Calculus. I'm interested in the cognitive bias and empiricism stuff (raising the sanity line), not AI. I've read most of the core posts of LW, but haven't gone through most of the sequences in any rigorous way (i.e. read them in order).
I agree that there seem to be a number of low quality posts in Discussion recently (in particular, Rationally Irrational should not be in Main). But people willing to ignore the local social norms will ignore them however we choose to enforce them. By contrast, I've had several ideas for posts (in Discussion) that I don't post, because I don't think they meet the community's expected quality standard.
Raising the standard for membership in the community will exclude me or people like me. That will improve the quality of technical discussion, at the cost of the "raising the sanity line" mission. That's not what I want.
If you're interested in concrete feedback, I found your engagement in discussions with hopeless cases a negative contribution, which is a consideration unrelated to the quality of your own contributions (including in those discussions). Basically, a violation of "Don't feed the clueless (just downvote them)" (this post suggests widening the sense of "clueless"), which is one policy that could help with improving the signal/noise ratio. Perhaps this policy should be publicized more.
I support not feeding the clueless, but I would like to emphasize that that policy should not bleed into a lack of explaining downvotes of otherwise clueful people. There aren't many things more aggravating than participating in a discussion where most of my comments get upvoted, but one gets downvoted and I never find out what the problem was--or seeing some comment I upvoted be at -2, and not knowing what I'm missing. So I'd like to ask everyone: if you downvote one comment for being wrong, but think the poster isn't hopeless, please explain your downvote. It's the only way to make the person stop being wrong.
Case in point: this discussion currently includes a 30-comment argument with a certain Clueless, most of whose contributions are downvoted-to-hidden. That discussion shouldn't have taken place; its existence is a Bad Thing. I just went through it and downvoted most of those who participated, except for the Clueless, who was already downvoted Sufficiently.
I expect a tradition of discouraging both sides of such discussions would significantly reduce their impact.
While I usually share a similar sentiment, upon consideration I disagree with your prediction when it comes to the example conversation in question.
People explaining things to the Clueless is useful, both to the person doing the explaining and to anyone curious enough to read along. This is conditional on the people in the interaction having the patience to try to decipher the nature of the inferential distance and to break down the ideas into effective explanations of the concepts, including links to relevant resources. (This precludes cases where the conversation degenerates into bickering and excessive expressions of frustration.)
Trying to explain what is usually simply assumed - to a listener who is at least willing to communicate in good faith - can be a valuable experience to the one doing the explaining. It can encourage the re-examination of cached thoughts and force the tracing of the ideas back to the reasoning from first principles that caused you to believe them in the first place.
There are many conversations where downvoting both sides of a discussion is advisable, yet it isn't conversations with the "Clueless" that are the problem. It is conversations with Trolls, Dickheads and Debaters of Perfect Emptiness that need to go.
Startlingly, Googling "Debaters of Perfect Emptiness" turned up no hits. This is not the best of all possible worlds.
Think "Lawyer", "Politician" or the bottom line.
Sorry, I wasn't clear. I understood perfectly well what you meant by the phrase and was delighted by it. What I meant to convey was that I was saddened to discover that I lived in a universe where it was not a phrase in common usage, which it most certainly ought to be.
Oh, gotcha. I'm kind of surprised we don't have a post on it yet. Lax of me!
I accept your criticism in the spirit it was intended - but I'm not sure you are stating a local consensus instead of your personal preference. Consider the recent exchange I was involved in. It doesn't appear to me that the more wrong party has been downvoted to oblivion, and he should have been by your rule. (Specifically, the Main post has been downvoted, but not the comment discussion)
Philosophically, I think it is unfortunate that the people who believe that almost all terminal values are socially constructed are the same people who think empiricism is a useless project. I don't agree with the latter point (i.e. I think empiricism is the only true cause of human advancement), but the former point is powerful and has numerous relevant implications for Friendly AI and raising the sanity line generally. So when anti-empiricism social construction people show up, I try to persuade them that empiricism is worthwhile so that their other insights can benefit the community. Whether this persuasion is possible is a distinct question from whether the persuasion is a "good thing."
Note that your example is not that pattern, and I haven't responded to Clueless. C is anti-empiricism, but he hasn't shown anything that makes me think that he has anything valuable to contribute to the community - he's 100% confused. So it isn't worth my time to try to persuade him to be less wrong.
I'm stating an expectation of a policy's effectiveness.
I think Monkeymind is deliberately trying to gather lots of negative karma as fast as possible. Maybe for a bet?
If the goal was -100, then writing should stop now (prediction).
I, for one, would like to see discussion of LW topics from the perspective of someone knowledgeable about the history of law; after all, law is humanity's main attempt to formalize morality, so I would expect some overlap with FAI.
I don't mind people who haven't read the sequences, as long as they don't start spouting garbage that's already been discussed to death and act all huffy when we tell them so; common failure modes are "Here's an obvious solution to the whole FAI problem!", "Morality all boils down to X", and "You people are a cult, you need to listen to a brave outsider who's willing to go against the herd like me".
No martyrs allowed.
I don't propose disallowing people who haven't read everything from being taken seriously, so long as they don't say anything stupid. It's fine if you haven't read the sequences and don't care about AI or heavy philosophy stuff; I just don't want to read dumb posts on those topics from someone who hasn't read the material.
As a matter of fact, I was careful to not propose much of anything. Don't confuse "here's a problem that I would like solved" with "I endorse this stupid solution that you don't like".
Fair enough. But I think you threw a wide net over the problem. To the extent you are unhappy that noobs are "spouting garbage that's been discussed to death" and aren't being sufficiently punished for it, you could say that instead. If that's not what you are concerned about, then I have failed to comprehend your message.
Exclusivity might solve the problem of noobs rehashing old topics from the beginning (and I certainly agree that needing to tell everyone that beliefs must make predictions about the future gets old very fast). But it would have multiple knock-on effects that you have not even acknowledged. My intuition is that evaporative cooling would be bad for this community, but your sense may differ.
I'm not the one who downvoted you, but if I were to hazard a guess, I'd say you were downvoted because when you start off by saying "people like me", it immediately sets off a warning in my head. That warning says that you have not separated personal identity from your judgment process. At the very least, by establishing yourself as a member of "people like me", you signify that you have already given up on trying to be less wrong, and resigned yourself to being more wrong. (I strongly dislike using the terms "less wrong" and "more wrong" to describe elites and peasants of LW, but I'm using them to point out to you the identity you've painted for yourself.)
Also, there is /always/ something you can do about a problem. The answer to this particular problem is not, "Noobs will be noobs, let's give up".
If by "giving up on trying to be less wrong," you mean I'm never going to be an expert on AI, decision theory, or philosophy of consciousness, then fine. I think that definition is idiosyncratic and unhelpful.
Raising the sanity line does not require any of those things.
Don't put up straw men; I never said that to be less wrong, you had to do all those things. "Less wrong" represents an attitude towards the world, not an endpoint.
Then I do not understand what you mean when you say I am "giving up on trying to be less wrong."
Could I get an explanation for the downvotes?