timtyler comments on Harry Potter and the Methods of Rationality discussion thread, part 4 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Speech seems like an evolved adaptation that makes human society work better.
Why are people voting Tim's comment down so hard? Are there actually three people out there, let alone a majority of LWers, who do not believe it is correct?
I was just thinking how there's a weird hivemind thing going on with the downvotes. Well-written and cordial posts arguing against the site's preferred positions are being summarily downvoted to invisibility.
This doesn't look like a very healthy discussion dynamic.
I haven't seen any recent examples of this (apart from the last time cryonics evangelism came up, of course). I suspect that instead you do not recognise the kinds of errors in reasoning that have been detected and responded to.
I have been using the Kibitzer since I started posting, and my handle on this matter is that well-written, cordial posts that don't use LW techniques are downvoted. That is, they argue against the preferred position, and they are downvoted because they argue badly. Small corroborations: the posts that get summarily upvoted are ones that point out lack-of-rationality in the arguments, and posts on the same topics are upvoted when they aren't flawed.
If that seems like an unhealthy discussion dynamic then you should review the LW techniques for rationality and make a top level post explaining how using these techniques, or how requiring everyone to use these techniques, could result in unhealthy discussions.
Possibility: Well-written, cordial posts are your criteria for upvotes because cordiality and well-writtenness usually correlate with clear thinking and good reasoning. This is true over most of the blog, except for the edge cases. These cases have their roots in subtle cognitive biases, not gross emotional biases, and it's possible that lack of writing skill and cordiality points out gross emotional biases but not subtler ones.
I think the problem is more a mismatch between the subtlety of the problem and the bluntness of the tool. Downvotes are a harsh and low-signal way of pointing out problems in arguments, and seem more suited to punishing comments which can be identified as crap at a glance. Since this site isn't doing the free-for-all comedy club thing Slashdot and Reddit have going, I'm not sure that the downvote mechanic quite belongs here to begin with. Users posting downright nonsense and noise don't even belong on the site, and bad arguments can be ignored or addressed instead of just being anonymously downvoted.
And yes, this probably should go to a toplevel post, but I don't have the energy for that scale of meta-discussion right now.
The downvoting mechanism is one way of making sure that obvious nonsense-posting gets visibly and quickly discouraged. Without it, there would be more nonsense.
I don't think that's actually true. There are very few nonsense posts (or at least, very few that get voted down); and when there are, downvoting doesn't always discourage the poster. When I see a post with a negative score, it's more often one that is controversial, or that disagrees with LessWrong dogma, or that was made by someone unpopular here, or that is in the middle of a flamewar between two users, or that is part of a longer conversation where one poster has triggered an "omega wolf" reaction from the rest of the pack by acting conciliatory.
Downvoting wrong comments may be harsh for the person being downvoted, but hopefully in the long run it can encourage better comments, or at least make it easier to find good comments.
There may be some flaws in the karma system or the way it's used by the community, but I don't see any obvious improvements, or any other systems that would obviously work better.
Look at mwaser: he complains a lot about being downvoted, but he also got a lot of feedback for what people found lacking in his post. Yes, a portion of the downvotes he gets may be due to factors unrelated to the quality of his arguments (he repeatedly promotes his own blog, and complains about the downvotes being a proof of community irrationality - both can get under people's skin), which is a bit unfortunate, but not a fatal flaw of the karma system.
I've never made the claim that the downvotes are "proof" of community irrationality. In fact, given what I believe to be the community's goals, I see them as entirely rational.
I have claimed that certain upvotes are irrational (i.e. those without any substance). The consensus reply seems to be that they still fulfill a purpose/goal for a large percentage of the regulars here. By definition, that makes those upvotes rational (yes, I AM reversing my stand on that issue because I have been "educated" on what the community's goals apparently are).
I am very appreciative of the replies that have substance. I am currently of the opinion, however, that the karma system actually reduces the amount of replies since it allows someone to be easily and anonymously dismissed without good arguments/cause.
By curiosity, what do you consider to be the community's goals?
1) In itself, reducing the amount of replies is a feature, not a bug; I expect most readers would prefer a few comments of high quality to many comments of varying quality.
2) The only instances of "someone being dismissed without good arguments/cause" have been obvious spam and cranks. I don't think it's a fair description of the reaction to your comments, however; you've had plenty of detailed criticism.
The stated goal of the community is to refine the art of human rationality. Unfortunately, rationality is an instrumental goal dependent upon the next-level-up or terminal goal. Most people, including me (initially, at least), assume that the next goal up is logical argumentation or discovery of how to reason better.
Most of the practices here are rational in terms of a specific individual's goals (mostly in terms of maintaining beliefs) but are strictly contrary to good argumentation techniques. The number of ridiculous strawmen, arguments by authority, arguments by pointing to something long and rambling that has nothing to do with the initial argument, etc. is nothing short of overwhelming.
So the next goal up clearly isn't rational argumentation. Assuming that it was is the mistake I made in the post Irrational Upvotes (and why I subsequently retracted my stand that they were irrational). They are rational in relation to another goal. My error was in my assumption of the goal.
One of Eliezer's main points is learning how to learn where you go wrong. This community is far worse at that than most I've seen. Y'all know how to argue/debate "logically" much better -- but it's normally to the purpose of retaining your views, not discovering where you might have gone wrong or where you might do better.
(I'll cover 1 and 2 in subsequent comments -- thanks for a high-quality response)
Some things to consider on these points (mostly because I have not noticed a prevalence of these issues)
Lastly, this comment:
is definitely a concern for ALL LWers. I suspect you have stumbled onto a case analogous to theism: it is not the case that we wish to retain our atheism and therefore we argue to keep that view - we really, truly, have considered all the evidence and all the arguments, and we reject it on those grounds.
Has it got to the point where replying to this would be a violation of the 'Do not feed the trolls' convention? I had written a brief response but realize it may be better to ignore instead. But I will defer to the judgement of others here... if there are people who are still taking mwaser seriously then I will engage as appropriate.
I did. The feedback that actually told him something came as replies. I'm not seeing how the use of downvotes actually helped there, and it did seem to add unnecessary nastiness to the exchange.
I agree it's a bit harsh, and not always useful. It's a bit of a pity that the karma system doesn't allow one to distinguish between "5 people found this post not-that-great" and "5 people found this post absolutely terrible".
Maybe it would be nice to have a system that would allow for more nuance, but it would also have to be easy enough to understand and use, and not be easy to game.
Also, I would say that the downvotes did have some utility, by expressing "you should pay more attention to criticism, most people here disagree with you".
For example, make 'terrible' votes cost karma.
What about the ability to mark a comment as obsolete if you changed your mind? It will then be under the fold but people won't be able to downvote it anymore. Or should people who changed their mind be punished infinitely? I noticed that I often delete comments that get downvoted if I changed my mind, e.g. understood where I was wrong, because they keep getting downvoted long after the discussion ended. By deleting it I destroy the context and consistency of the discussion. But I also do not want to be downvoted anymore for something I don't believe and I want to signal that I changed my mind.
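The suggested "obsolete" mechanism could be sketched like this (a minimal sketch; the class and method names are hypothetical, not an actual LW feature): marking a comment obsolete folds it under the fold and makes further downvotes no-ops, while keeping the text in place so the discussion stays readable.

```python
class Comment:
    """A comment under a hypothetical karma system where an author
    can retract a position without deleting the comment."""

    def __init__(self, text):
        self.text = text
        self.score = 0
        self.obsolete = False  # author has signalled a changed mind

    def mark_obsolete(self):
        # Fold the comment but preserve it, keeping the
        # context and consistency of the discussion intact.
        self.obsolete = True

    def downvote(self):
        # Obsolete comments can no longer lose karma.
        if not self.obsolete:
            self.score -= 1
```

This preserves the thread's context while removing the incentive to delete downvoted comments after changing one's mind.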
Not a bad idea; having all votes public may also be an improvement.
Still, I suspect that whatever the system, there would be someone to argue that it sucks, which isn't an excuse to not improve it, but a reason to be cautious.
I haven't read the above thread. But here's an idea I had about the Karma system: If you want to downvote someone you're asked to provide a reply explaining why you downvoted the comment. If you downvote 5 times without explaining yourself you'll lose 1 Karma point.
It always really bothers me if I get downvoted without getting feedback, because without feedback I'm unable to improve or refine my writing skills or rationality. What's the point then? Merely losing karma score will lead people to conclude (unjustifiably) that they are downvoted for various reasons, but not that they may be wrong or that their comment simply does not add anything valuable to the debate. Negative karma without feedback causes resentment in all people except those who have already acquired enough rationality skills to infer that there might be something wrong with their comment and not with the person downvoting it. The karma system as it is will therefore discourage newcomers and make them conclude that LW is merely an echo chamber that does not tolerate their precious critique.
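The proposed rule above (each downvote should come with an explanatory reply; every 5 unexplained downvotes cost the voter 1 karma point) could be sketched as follows. This is a hypothetical implementation, not an actual site feature, and all names are invented for illustration:

```python
class Voter:
    """Tracks a user's downvotes under the proposed rule:
    downvoting without an explanatory reply is allowed, but
    every 5 unexplained downvotes cost the voter 1 karma point."""

    UNEXPLAINED_LIMIT = 5

    def __init__(self, karma=0):
        self.karma = karma
        self.unexplained_downvotes = 0

    def downvote(self, explanation=None):
        if explanation is None:
            self.unexplained_downvotes += 1
            # Deduct 1 karma for every 5 unexplained downvotes.
            if self.unexplained_downvotes % self.UNEXPLAINED_LIMIT == 0:
                self.karma -= 1
```

The design choice here is a soft penalty rather than a hard requirement: voters can still downvote silently, but doing so habitually costs them karma.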
A notion for a slightly more informative karma system. Each person can apply 1, 2, or 3 karma points (plus or minus).
Instead of just giving the number of points, the slot after the date has total points, number of plus points, number of minus points, and number of voters.
I realize there's a little redundancy, but I think that's alright if it makes things more convenient for anyone who doesn't want to be constantly doing routine arithmetic.
The ideal would probably be a little graph showing point accumulation over time, but that seems like too much added work for the site.
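The proposed display could be computed as follows (a sketch under the assumptions above: each vote is an integer weight from 1 to 3 points, plus or minus; the function name is hypothetical):

```python
def karma_summary(votes):
    """Summarize weighted votes as the proposed display tuple:
    (total points, plus points, minus points, voter count).
    Each vote is an int in -3..-1 or 1..3."""
    plus = sum(v for v in votes if v > 0)
    minus = sum(-v for v in votes if v < 0)
    return (plus - minus, plus, minus, len(votes))
```

For example, votes of +3, -1, +2, -2 would display as total 2, plus 5, minus 3, from 4 voters — more informative than the bare total of 2, which could equally come from two mild upvotes.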
The kibitzer does nothing to protect people from groupthink.
What exactly do you mean by groupthink? Let's taboo the word a bit:
Those last two are important parts of groupthink. Without that last one, mathematicians are guilty of groupthink, because they all apply the same (sometimes flawed) processes and get the same answers. Maths isn't groupthink because attempts are made to discover and fix flaws, and these attempts aren't dismissed out of hand.
The kibitzer blocks out names and karma scores; so you can't tell what the group consensus is (either by the person's name; "the community thinks this guy is a troll" or by vote; "-5? this post must be bad"). I follow the same process as everyone else in evaluating a comment, but I don't know if I've gotten the same answer as them. In practice, when I've checked, I do get the same answer, so it satisfies the first two conditions. But is the process flawed? And is meeting the group's consensus more important than fixing these flaws?
That would be a systemic problem that deserves its own top level post.
Speech, sexual selection rituals, sex itself, cooperation in the social insects ... There are many things which seem to require a more complex and subtle narrative for their explanation than the usual simple Darwinian story of a mutant individual doing better than his conspecific competitors and then passing on his genes.
But that doesn't mean that a dyed-in-the-wool neo-Darwinian needs to accept the group-selection explanation any more than an Ayn Rand fan confronted with a skyscraper has to admit that Kropotkin was right after all.
However, I am taking your implicit advice and dutifully upvoting Tim's comment.
I wasn't suggesting that speech evolved via group selection - just that it evidently did evolve - and so proposing the existence of "evolved adaptations that make human society work better" is not an error.
Tim's comment doesn't say that speech evolved via group selection. It could be that it did not; in that case, Tim's comment would be pointing out that Eliezer was unjustified in calling out a belief in "evolved adaptations that make human society work better" as an error.
I have seen people observe that they tend to be inclined to downvote tim readily, having long since abandoned giving him the benefit of the doubt. (This is not my position.)
Absolutely - when considering what it means in the context of the multi-level passage which Tim explicitly quoted, he is wrong on a group-selection-caps level of wrongness. (I was not someone who had voted, but I just added mine.)
I thought your (PhilGoetz) post on group selection was a good one, particularly with the different kinds of (subscripted) group selection that you mentioned and mentions of things like ants. But now that I see what prompted the post and what position you were trying to support I infer that you actually are confused about group selection, not merely presenting a more nuanced understanding.
... is spot on.
It surely is an unsympathetic reading to conclude from "What if some of our cognitive biases are evolved adaptations that make human society work better?" that those adaptations did not also benefit social human individuals, and may have evolved for that purpose.
You may note that I took care to emphasize that my reply was to what you were conveying in the context. Phil's comment does postulate group selection. While as a standalone sentence your comment is literally correct I downvoted it because it constitutes either a misunderstanding of the conversation or a flawed argument for an incorrect position.
What is the incorrect position? If you say "that group selection is possible", please state your reasons for being so certain about it.
In any case, my comment does not postulate group selection. It wasn't even on my mind when I wrote it.
I do not. That would be a bizarre position to take (or assume, for that matter). I elsewhere indicated my appreciation for your post on the subject, with particular emphasis on an example you gave where group selection does apply. My support does not extend to the position your comment here conveys and I instead (obviously) repeat Eliezer's objection.
(Equally obviously there is nothing to be gained by continuing this conversation. It is based on nothing more than what meaning some unimportant comments convey and whether or not people have cause to accede to your demand (implied request?) to up-vote Tim.)
Thanks for clarifying that. Not just an unsympathetic interpretation, an inaccurate one.
You are reading in too much context. You only have to look at the portion reproduced in Tim's comment. Eliezer asserted that there is no such thing as evolved adaptations that make human society work better. Tim provided an example, proving Eliezer wrong.
If you think I'm confused, try to say why. So far, no one has presented any evidence that I am "confused" about anything in <EDIT>the group selection post</EDIT>. There is some disagreement about definitions; but that is not confusion.
Close, but not exactly correct. My interpretation of what Eliezer EMOTED is that there are no adaptations which evolved because they make human society work better. That would be group selection by Eliezer's definition. Eliezer might well accept the existence of adaptations which evolved because they make humans work better and that incidentally also make society work better.
ETA. Ok, it appears that a literal reading of what EY wrote supports your interpretation. But I claim my interpretation matches what he meant to say. That is, he was objecting to what he thought you meant to say. Oh, hell. Why did I even decide to get involved in this mess?
I believe this to be a correct representation of Eliezer's meaning, and that meaning to be an astute response to the parent.
Even though I wrote the parent, and already told you that's not what I meant?
Claiming that the parent invoked group selection means claiming that human societies can't evolve adaptations that make society work better except via group selection. Claiming that the parent should thus be criticized means claiming both that, and that group selection is not a viable hypothesis. Tim provided a counter example to the first claim; my later post on group selection provided a counterexample to the second.
FWIW, I agree that a careful reading of your comment suggests the possibility that group selection was not in your mind and therefore that EY jumped to a conclusion. I believe your claim now that group selection was not on your mind. But, I have to say, it certainly appeared to me at first that your point was group-selectionist. I almost responded along those lines even before EY jumped in with both feet.
I do not agree. In particular I don't accept your premises.
It is not necessary for you to persuade me because this conversation is not important. I observe that the likelihood of either of us succeeding in persuading the other of anything here is beyond negligible.
Using "because" on evolution is tricky -- particularly when co-evolution is involved -- and society and humans are definitely co-evolving. Which evolved first -- the chicken or the chicken egg? (Dinosaur-egg-type arguments explicitly excluded.)