niceguyanon comments on Inferential silence - Less Wrong Discussion

44 Post author: Kaj_Sotala 25 September 2013 12:45PM

Comments (58)

Comment author: niceguyanon 25 September 2013 02:59:57PM 4 points [-]

Many times I find a post or comment interesting and decisive but feel as though I lack the technical knowledge required, so I refrain from voting. I find myself upvoting comments and posts that are interesting and relatively easy to digest; however, there are posts that I think are interesting and important but, because I have a long way to go in grasping the concepts, I don't vote.

I happen to really like Wei Dai's comments and posts, but I don't believe I have ever upvoted anything, because I feel like I'm just a guy on the internet, not an AI researcher — what do I know about what is interesting or important enough to upvote? Maybe I should change my mentality about upvoting things.

Comment author: TheOtherDave 25 September 2013 04:16:25PM 11 points [-]

One position on voting which often gets endorsed here is "upvote what I want more of; downvote what I want less of."
By that standard, if you think the post is interesting and important, you should upvote it, whether you feel adequate to judge its technical content or not.

Comment author: BaconServ 25 September 2013 06:08:21PM 5 points [-]

I feel that the policy you state is read once and ignored for whatever reason. A mere reminder on an individual basis seems unlikely to effectively address this issue: There are too many humble users who feel their mere vote is irrelevant and would cause undue bias.

I feel that this entire topic is one of critical importance, because a failure to communicate on the part of rationalists is a failure to refine the art of rationality itself. While we want to foster discussion, we don't want to become a raving mass not worth the effort of interacting with (à la reddit). If we are who we claim to be — that is, if we consider ourselves rationalists and consider the art of rationality worth practicing at all — then I would task any rationalist with participating to the best of their ability in these comments: this is an important discussion we cannot afford to allow to pass by into obscurity.

Comment author: satt 26 September 2013 01:17:59AM 1 point [-]

Huh, interesting. I'd already noticed I try to

  1. upvote what I want more of, and downvote what I want less of

  2. avoid voting on technical material I don't think I can adequately judge

but I never noticed the tension between those two heuristics before. In practice I guess I prioritize #2. If I can't (or can't be bothered to) check over a technical comment, I usually don't vote on it, however much I (dis)like it.

Comment author: TheOtherDave 26 September 2013 05:15:14AM 2 points [-]

Well, it makes some sense not to vote if you genuinely don't know whether you want more comments like that one (e.g. "I do if it's accurate, I don't if it isn't, and I don't know which is true").

Comment author: Vaniver 26 September 2013 04:06:01AM 8 points [-]

I happen to really like Wei Dai's comments and posts, but I don't believe I have ever upvoted anything, because I feel like I'm just a guy on the internet, not an AI researcher — what do I know about what is interesting or important enough to upvote? Maybe I should change my mentality about upvoting things.

So, I know I often feel disappointed that my technical comments (as well as technical comments others make) get approximately a tenth of the karma that, say, my HPMOR comments get. I have a policy of being much more willing to upvote comments that display technical prowess just because they display technical prowess.

But... I also generally feel like a technical expert able to determine the prowess of the comment, and I get frustrated when I see wrong comments upvoted highly. So I don't think I would endorse a policy of "upvote things that look technical because they look technical."

Comment author: jsteinhardt 26 September 2013 07:43:15AM 1 point [-]

Agree with both of these, although I'll also sometimes upvote things that you (and a small set of other users) have commented positively on, even if it is not something I understand well enough to verify the validity of.

Basically, I try to amplify the votes of technically literate users. Although this does require being at least technically literate enough to know who is technically literate. (I'll also, for instance, upvote any comment paulfchristiano makes about cryptography, since I already know he's an expert in that area.)

Comment author: ESRogs 25 September 2013 06:34:59PM 2 points [-]

As another random guy on the internet, I find Wei Dai's comments and posts interesting and important and I don't refrain from upvoting them.

One excuse you can use to give yourself permission to upvote what looks good to you is that even if you're wrong, you'll at least have given increased visibility to a viewpoint that is attractive but needs correcting. I think of my upvotes as not necessarily saying, "I endorse this as correct and the final word on the matter," but rather, "this seems like a good point and if it's wrong I'd like someone to explain why."

Comment author: BaconServ 25 September 2013 06:56:21PM *  1 point [-]

As someone who tends towards wanting to provide such explanations, even as the Devil's advocate, I feel that a significant number of upvotes makes a reply more "dangerous" to attempt to dismantle, in terms of the potential for downvotes received. For example: I feel as though your opinion is widely held and there is a significant potential for my current comment to be downvoted. I may be wrong, but I feel as though the perception that my perception is incorrect will itself tend towards downvoting, even in the presence of an explanation written for the very purpose of trying to disable that memetic response. I can easily see a potential for openly challenging the meme to increase the downvote potential of my post, and I suspect that the irony of having predicted well will not be apparent until my comment has already received a significant ratio of downvotes. The logic as I understand it is that this will be interpreted as a plea not to be downvoted, resulting in an aggressive attitude extremely suggestive of downvoting.

By the same metric, if my present comment does not receive the downvotes I predict, then my analysis is shown to be effectively "wrong." I now have the choice to backspace and avoid the conflict, but I have talked myself into an intense curiosity about the result of this experiment.

Reviewing now, I place a high probability on a general apathy towards deconstructing any part of this message.