See also: Warnock's Dilemma http://en.wikipedia.org/wiki/Warnock%27s_dilemma
The problem with no response is that there are five possible interpretations:

1. The post is correct, well-written information that needs no follow-up commentary. There's nothing more to say except "Yeah, what he said."
2. The post is complete and utter nonsense, and no one wants to waste the energy or bandwidth to even point this out.
3. No one read the post, for whatever reason.
4. No one understood the post, but won't ask for clarification, for whatever reason.
5. No one cares about the post, for whatever reason.
—Bryan C. Warnock
The existing karma system does a good job of addressing the first two possibilities, but the last three cases are still pretty hard to distinguish. Kaj_Sotala seems to be talking about cases 4 and 5, more or less.
As long as we're talking about a technical solution, it seems like the relevant dimension that Kaj is talking about is difficulty/understandability as opposed to agreement or general quality, and I can imagine a few different solutions to this[1]. That said, I'm not convinced that this would tell you the information you're after, since readers who have a strong technical background will be more likely to read difficult, technical posts and may vote them as relatively easy to understand. And the situation is the same with less technical readers and easier posts.
The third case could be addressed to a certain extent just by tracking number of views and displaying that somewhere. Maybe also with summary statistics like (upvotes - downvotes) / views. Views could be triggered when a post is clicked on, voted on, or just visible for greater than, say, 10 seconds, which will produce some false positives and false negatives but might be better than nothing.
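A minimal sketch of that statistic, assuming hypothetical per-post counters (the click/vote/10-second view triggers would live client-side and simply increment `views`):

```python
from dataclasses import dataclass

@dataclass
class PostStats:
    upvotes: int = 0
    downvotes: int = 0
    views: int = 0

    def register_view(self) -> None:
        # Called when the post is clicked on, voted on, or visible for
        # more than ~10 seconds; deduplicating per user would reduce the
        # false positives mentioned above.
        self.views += 1

    def engagement(self) -> float:
        # The summary statistic: (upvotes - downvotes) / views.
        if self.views == 0:
            return 0.0
        return (self.upvotes - self.downvotes) / self.views
```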
–––––
[1] E.g. two new voting buttons ("too basic", "too difficult"), a larger set of radio buttons, or a slider. Not sure what the icons would be, but maybe something like "1,2,3" for basic and integral signs for difficult.
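A minimal sketch of how this difficulty dimension could be stored separately from karma, so it doesn't pollute the agreement signal (all names here are hypothetical; a two-button scheme maps to the endpoints, a slider to the whole interval):

```python
from collections import defaultdict

# post_id -> list of difficulty ratings in [-1.0, +1.0],
# where -1.0 = "too basic" and +1.0 = "too difficult".
difficulty_votes: dict[str, list[float]] = defaultdict(list)

def rate_difficulty(post_id: str, rating: float) -> None:
    # Clamp so button presses and slider positions share one scale.
    difficulty_votes[post_id].append(max(-1.0, min(1.0, rating)))

def mean_difficulty(post_id: str) -> float:
    votes = difficulty_votes[post_id]
    return sum(votes) / len(votes) if votes else 0.0
```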
Another possible interpretation:
Disagree with the post; can't personally refute it, but believe that someone who shares my views (and is more knowledgeable) could.
Point one is addressed with "like", "favorite", "star", or any number of "upvote" mechanisms; it should be a relatively solved problem in our present context. The equivalent downvote button likewise addresses point two.
The third point is the one that seems most relevant to our present context, as discussed in http://lesswrong.com/r/discussion/lw/iq6/inferential_silence/9ssf
The fourth point seems largely relevant to the comment directly preceding yours chronologically: http://lesswrong.com/r/discussion/lw/iq6/inferential_silence/9ssp
I am unsure about the fifth point. Reviewing this comment as I draft it, I realize that it can form a sort of branching point for discussion within this comment section, and I wonder whether I should ask you to edit your post to serve as the main branching point, so that I might delete mine and tidy up the discussion, or else edit out the redundant parts and leave certain others, this paragraph in particular, for later analysis. It then seems proper to openly encourage upvoting your post, so that it appears at the top of the thread for having outlined the best known catalog and for being a branching point for discussion.
This problem is rampant in lectures and academic talks. Many times, a speaker will be saying something which to her is of course obvious, and she'll put it up on the board and ask if everyone gets it. There'll be a lot of blank faces and maybe one guy nodding reflexively. And she'll be like: "OK, if everyone gets it we'll move on"; when actually no one would've got it.
This is mitigated if there's a small subset of people who try to engage deeply with the speaker and constantly ask for clarifications. Often they are senior professors, since they are in no danger of losing status by asking stupid questions; but they need not be. This small subset energizes the rest and forces the speaker to be clear and engaging.
Similarly, if you're on LW, and if some post or comment is confusing to you, don't skip it and leave it to people who might have the right background. Ask for clarifications. Tell yourself that you're doing a huge favor to the poster because her ideas are clarified and absorbed into the community.
If you feel something is just obviously right and nothing need be said: well, with high probability you're wrong. In academic talks, I've realized that the feeling that something is obvious is almost always misplaced; in fact, this is the illusion of explanatory depth. To counteract this during talks, I often tell myself that this Obvious Thing is probably the result of years of debate within some community, and probably there are still holes that need patching. And if I'm not seeing the subtleties, then I'm not looking closely enough.
if some post or comment is confusing to you, don't skip it and leave it to people who might have the right background. Ask for clarifications.
I am hereby publicly committing to doing this at least once per day for the next week.
What if one day you don't read any comment you find confusing? Do you keep reading until you find one?
This problem is rampant in lectures and academic talks. Many times, a speaker will be saying something which to her is of course obvious, and she'll put it up on the board and ask if everyone gets it. There'll be a lot of blank faces and maybe one guy nodding reflexively. And she'll be like: "OK, if everyone gets it we'll move on"; when actually no one would've got it.
Many times the lecture is just a duty amongst pursuits more interesting, and the speaker secretly hopes that nobody asks, and it shows.
Uhm, I don't know if this is relevant, but how soon after the article is published do you make the comment? If it is two days or more, most people probably just don't read the discussion anymore. (Or even if they read it, they may feel that it's too late to start an interesting debate there.)
That's definitely a factor, but not always (e.g. it wasn't the case with the Wei Dai article I gave as an example).
I think this might be the most salient point of all.
What would the data look like if we were to look at the correlation between the timing of a comment (compared to the timing of the original post) and its vote rank?
I'd guess it is likely that the first "good" comment drives the discussion, as it will be read by more people, voted on more often, responded to more frequently, etc.
What if no comments were displayed on articles for 48 hours after publication? At the 48-hour mark, all the comments received would be displayed simultaneously and made available for vote ranking & response... say you have ~20 comments, displayed in random order (is it possible to code for unranked comments to display in random order on a per-login basis?). Would this help remove the (hypothesized) early-comments-win bias?
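Per-login random ordering should be easy to code; a sketch, assuming user and post IDs are available, is to seed a shuffle deterministically from the (user, post) pair, so each reader sees a stable but personal ordering:

```python
import hashlib
import random

def randomized_order(comment_ids: list[str], user_id: str, post_id: str) -> list[str]:
    # Derive a per-(user, post) seed so a given reader always sees the
    # same "random" order, while different readers see different orders.
    seed = hashlib.sha256(f"{user_id}:{post_id}".encode()).hexdigest()
    rng = random.Random(seed)
    shuffled = list(comment_ids)
    rng.shuffle(shuffled)
    return shuffled
```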
This sounds wonderfully non-useful, but the thought came to mind upon reading your post: There could be an option to flag a post as, "I really think I solved this."
The immediate objection is that it will be overused. However, the comments just on this post indicate that many LWers are unwilling to be outgoing, giving a strong indication that it very much would not be overused. I myself am finding several reasons I would hesitate to select that option, even if I were very much of the opinion that I was posting the kind of definitive reply on which this topic is built. For one, I would expect an excessively critical eye to be applied to my post, and I might be downvoted even further than I might have been had I posted without the "audacity" to think I had "solved the debate."
While writing this reply, I further abstracted that there are two distinct types of discussion happening in comments: Idle thoughts and attempts at definitive resolution. To further help understand this issue, I will be replying to the comments that appear to me to be of the latter variety.
Re-reading this comment, I realize that I didn't specify why I had the initial thought I considered worthy of sharing in the first place: I was trying to think of ways to increase time-unbounded discussion. It seems to me that the time-bound nature of discussion here is a primary hurdle in overcoming this issue.
Many times I find a post or comment interesting and decisive but feel as though I lack the technical knowledge required, so I refrain from voting. I find myself upvoting comments and posts that are interesting and relatively easy to digest; however, there are posts that I think are interesting and important, but because I have a long way to go in grasping the concepts, I don't vote.
I happen to really like Wei Dai's comments and posts, but I don't believe I have ever upvoted anything, because I feel like I'm just a guy on the internet, not an AI researcher; what do I know about what is interesting or important enough to upvote? Maybe I should change my mentality about upvoting things.
One position on voting which often gets endorsed here is "upvote what I want more of; downvote what I want less of."
By that standard, if you think the post is interesting and important, you should upvote it, whether you feel adequate to judge its technical content or not.
I feel that the policy you state is read once and ignored for whatever reason. A mere reminder on an individual basis seems unlikely to effectively address this issue: There are too many humble users who feel their mere vote is irrelevant and would cause undue bias.
I feel that this entire topic is one of critical importance, because a failure to communicate on the part of rationalists is a failure to refine the art of rationality itself. While we want to foster discussion, we don't want to become a raving mass not worth the effort of interacting with (à la reddit). If we are who we claim to be, that is, if we consider ourselves rationalists and consider the art of rationality worth practicing at all, then I would task any rationalist with participating to the best of their ability in these comments: this is an important discussion that we cannot afford to let pass into obscurity.
Huh, interesting. I'd already noticed I try to
1. upvote what I want more of, and downvote what I want less of
2. avoid voting on technical material I don't think I can adequately judge
but I never noticed the tension between those two heuristics before. In practice I guess I prioritize #2. If I can't (or can't be bothered to) check over a technical comment, I usually don't vote on it, however much I (dis)like it.
Well, it makes some sense not to vote if you genuinely don't know whether you want more comments like that one (e.g. "I do if it's accurate, I don't if it isn't, and I don't know which is true").
I happen to really like Wei Dai's comments and posts, but I don't believe I have ever upvoted anything, because I feel like I'm just a guy on the internet, not an AI researcher; what do I know about what is interesting or important enough to upvote? Maybe I should change my mentality about upvoting things.
So, I know I often feel disappointed that my technical comments (as well as technical comments others make) get approximately a tenth of the karma that, say, my HPMOR comments get. I have a policy of being much more willing to upvote comments that display technical prowess simply because they display technical prowess.
But... I also generally feel like a technical expert able to determine the prowess of the comment, and I get frustrated when I see wrong comments upvoted highly. So I don't think I would endorse a policy of "upvote things that look technical because they look technical."
Agree with both of these, although I'll also sometimes upvote things that you (and a small set of other users) have commented positively on, even if it is not something I understand well enough to verify the validity of.
Basically, I try to amplify the votes of technically literate users. Although this does require at least being technically literate enough to know who is technically literate. (I'll also, for instance, upvote any comment paulfchristiano makes about cryptography, since I already know he's an expert in that area.)
As another random guy on the internet, I find Wei Dai's comments and posts interesting and important and I don't refrain from upvoting them.
One excuse you can use to give yourself permission to upvote what looks good to you is that even if you're wrong, you'll at least have given increased visibility to a viewpoint that is attractive but needs correcting. I think of my upvotes as not necessarily saying, "I endorse this as correct and the final word on the matter," but rather, "this seems like a good point and if it's wrong I'd like someone to explain why."
As someone who tends toward wanting to provide such explanations, even as the Devil's advocate, I feel that a significant number of upvotes makes a reply more "dangerous" to attempt to dismantle, in terms of the potential downvotes received. For example: I feel as though your opinion is widely held and there is a significant potential for my current comment to be downvoted. I may be wrong, but I suspect the mere perception that my perception is incorrect will itself tend toward downvoting, even in the presence of an explanation written for the very purpose of trying to disable that memetic response. I can easily see openly challenging the meme increasing the downvote potential of my post, and I suspect that the irony of having predicted well will not be apparent until my comment has already received a significant number of downvotes. The logic, as I understand it, is that this will be interpreted as a plea not to be downvoted, an aggressive attitude extremely suggestive of being downvoted.
By the same metric, if my present comment does not receive the quantity of downvotes I predict, then my analysis is shown to be effectively "wrong." I now have the choice to backspace and avoid the conflict, but I have talked myself into an intense curiosity about the result of this experiment.
Reviewing now, I place a high probability on a general apathy towards deconstructing any part of this message.
Generally, LW's comment section, as opposed to some other forums/blogs, doesn't seem to me to really encourage comprehensive discussion. It is a place to read an article and the comments on the article, and then post your comment in response to what has been said so far. Anytime you "engage" someone on the internet, it can get a bit long-winded to follow that conversation to its conclusion. I "ignore" most comments (no response) as a matter of decorum, even when I'd enjoy personally engaging on a topic.
Other times, I'm satisfied the comments have pretty much summed it all up. Or, the whole damned topic is above my head.
Is your observation a recent one? Or has LW always been like this?
LW's comment section, as opposed to some other forums/blogs, doesn't seem to me to really encourage comprehensive discussion
Out of curiosity, what about it seems to discourage extended discussion -- is it something like the threading structure or more like the tone and social norms? Or looking at it another way, what other forums seem to encourage fuller discussion and what's the difference between those and LessWrong?
(For context, when I think of other discussion forums, what comes to mind is reddit, facebook and google+ posts, and the comments sections on e.g. scottaaronson.com/blog or marginalrevolution.com. I find the quality of discussions on LessWrong to be superior to all of these. So I'm wondering if you would make a different evaluation, or if you're comparing to different forums.)
Out of curiosity, what about it seems to discourage extended discussion -- is it something like the threading structure
I think the threading structure makes group discussion of a point of contention unproductive. The latest replies are dispersed throughout the page, and there is limited nesting in practice. Lengthy back and forths are left to the small handful of those posting and getting reply notifications.
As far as technology for discussion goes, The Web Zillion Point O has miles to go to catch up to the technology of Usenet and trn.
It is, perhaps, a little bit format and a little bit social norm. Please don't mistake me: I generally like LW's style, as I think forum discussions tend to go on ad nauseam, end up having little to do with the original topic, and devolve into little more than a pissing match between two people who don't recognize they are largely debating definitions.
Well, there is a balance between saying too much and not saying enough. The internet (and forums in particular) is notoriously guilty of the former.
I like LW because it seems to avoid the aforementioned pissing matches that are common to the forums I've looked at. I get good information and do my own research.
If I want to beat a topic to death (and I sometimes do), I've observed LW is not the sort of place where that is welcome (or at least it just doesn't seem to happen).
Ah, reading back again, I realize I have committed several types of inferential silence in relation to your comment; in particular, I failed to upvote it despite considering it discussion-relevant, even going so far as to use it as evidence for points I made in other comments without direct citation. I think the question at the end of your comment threw me off: not feeling that an answer was necessary, I concluded I should not comment on or otherwise interact with your post.
To answer your question, I refer to MarkL's comment: http://lesswrong.com/r/discussion/lw/iq6/inferential_silence/9sss I'm unsure if MarkL should rather have replied to your post instead of commenting separately, or if it makes much of a difference.
I've noticed this phenomenon. I always attributed it to the idea that most people gain their motivation to post via some sort of competitive attitude*, and once something ends the debate, there's nothing much more to do but shrug it off and move on. I sometimes imagine the person struggling to respond, and then coming up with some unrelated reason it's not worth their time, and then forgetting about it.
Also, anything sufficiently new as to be groundbreaking is often met with "meh" much, much earlier than the time anybody starts appreciating it. It's much less common for some big insight to be recognized right away than for it to drift around in the dust for a long time, until people dig it back up later on and give it credit.
*I'm not saying their real motivation is competition. I'm just saying that's the visceral thing that impels the action often. For example, there may be a lot of these 'debate-ending' posts that aren't responded to simply because people had akrasia and didn't get around to responding the way they would have if it was still a heated competition.
I'd like to think that this community can recognize and overcome that, accepting reasoning more readily, and realizing Singularity-type thoughts much sooner. Rather, I'll believe this until proven otherwise. Not sure how else to motivate myself to bother trying.
If we had developer resources, we'd have buttons on each comment that allowed us to register one of several standard opinions, like "broadly agree" etc. I once again wonder whether hacking on LW would be a high-value activity.
I have been practicing intense automation for a considerable amount of time now, and I feel as though I could easily provide advice (effectively: lead a team) to make the task of hacking on LessWrong relatively trivial. In such a case, there would be no question as to the value of the experiment, because it could be conducted so readily.
However, we already have buttons for what you describe in the generalized upvoting and downvoting options. The issue is not that we lack such functions, but that we are not utilizing them optimally; replies that may be exceptionally relevant are being ignored entirely.
What about a wider variety of options for buttons?
For example:
Also, is there any current incentive to vote at all?
I do not like the "This is too obvious" button. I think people will press it often on things that are not actually obvious, and this will discourage stating the obvious.
I think a single extra "duh" button, that does not reduce karma but does affect the percentage of positive votes, would do.
Also maybe turn the thumbs up into a "thanks!" button and the thumbs down into a "waste of time" one, or something.
Precedent: the science subreddit got way better after they renamed the upvote button into "solid science" and the downvote button into "not science".
Because the karma is numeric, those buttons should all also have a numeric value of +1 (want more of this) or -1 (want less of this). It would probably be good to preserve the old-style "upvote" and "downvote" for people who just want to vote and not analyze why exactly they felt that way (because having to analyze everything would be a trivial inconvenience that discourages voting).
So perhaps the good mechanism would be to first show the choice of downvote and upvote, and after the vote is made, display the relevant subcategories and let the voter optionally choose the one that fits best.
Upvote -- nice, interesting, informative...
Downvote -- trivial, wrong, off-topic, flamebait, trolling...
Then the total score of the comment could also display the most frequent flag or two, such as "5 points, interesting" or "-10, trolling". Maybe display the flag only if an overwhelming majority of voters agrees on it; otherwise just show the numeric value without a flag.
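A sketch of that two-stage mechanism (the flag names come from the lists above; the 75% supermajority threshold and all identifiers here are assumptions, not an existing LW API):

```python
from collections import Counter
from typing import Optional

class CommentScore:
    def __init__(self) -> None:
        self.score = 0          # plain numeric karma, +1/-1 per vote
        self.flags = Counter()  # optional subcategories chosen after voting

    def vote(self, direction: int, flag: Optional[str] = None) -> None:
        # Step 1: the familiar upvote/downvote, always counted.
        self.score += direction
        # Step 2: the optional subcategory shown after the vote is cast.
        if flag is not None:
            self.flags[flag] += 1

    def display(self, supermajority: float = 0.75) -> str:
        # Attach the dominant flag only if an overwhelming majority of
        # the voters who flagged agree on it.
        if self.flags:
            flag, count = self.flags.most_common(1)[0]
            if count / sum(self.flags.values()) >= supermajority:
                return f"{self.score} points, {flag}"
        return f"{self.score} points"
```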
Would such buttons be used or useful at all? Is expressing your opinion in general useful? Won't such buttons be even more susceptible to the humility issue? Is it just a problem of showing up?
There is an argument to be made, I think, that merely the fact of your wanting such features is evidence of their being likely to be used, but I do feel the case should be made on grounds beyond "This would be cool for me."
I do not deny that such a thing is useful, and I have thought about it considerably, even to an implementation level, but I question if it is the solution here. I don't think it is.
One possible value of more diverse buttons would be more specific feedback to the comment author, as well as providing data to all participants about what types of comments "do well" here.
As it stands, highly upvoted comments appear to me to mean the group agrees, strongly downvoted comments just disappear, and comments with low votes are ??? (possible answers: ignored, too technical, wrong, redundant, obvious, disagreed with but not strongly, poorly written but so poorly written that no one wanted to make fun of that guy, late to the discussion so no one saw it because no one reads that thread anymore, some combination of the aforementioned, other group in-fighting and politicking I'm not privy to, etc.)
Of course the buttons would only be valuable to the extent readers will participate and use them. Is there an incentive to voting beyond the general good & welfare of the forum?
As the premise of the topic, I think so, yes. If relevant and definitive comments such as those described are being ignored, I think that poses a tangible problem. I've actually thought this for a while, and I thank Kaj for having the guts to say so. I'm not concerned with what kind of posts do well; I feel confident I could play the part if there were motivation for me to do so. What I am personally interested in is optimizing the ongoing discussion. The votes I receive on the comments I'm making here will give me a deeper quality of information about LessWrong's understanding of the value of conversation.
The general good and welfare of this forum is not protected, in my opinion, by vote rating. Rather, if we had that kind of problem, I would allocate none of my effort to trying to resolve the issue of the topic. I see potential far beyond such trivial apprehensions.
Isn't the point to have a karma system that incentivizes posts & comments that contribute to an optimized ongoing discussion? Wouldn't that system simultaneously reward (1) the poster & (2) the community?
As it is, and as the OP notes, decisive, relevant comments often escape the mechanism that is supposed to rank them. We have no idea how well we, as a community, are objectively doing in the pursuit of resolving the issue of the topic. Popular opinions and posters may be strongly biasing the discussion and steering the whole group into dubious positions. Dissenters (who are less altruistic than you) may grow tired of a format that tends to push their comments to the bottom of the pile and provides no feedback.
If the karma system does nothing to protect the general good and welfare of the forum, then get rid of it. And then test that hypothesis. (I posit you will see a decrease, both quantitatively and qualitatively, in contributions.) After eliminating karma, if LW indeed has "that kind of problem", then you will know definitively and can move on to the pursuit of less trivial matters.
If little numbers are all that motivates contribution, I don't think the motivations lost are or were rational.
I've written more in this single comment section in a few hours than I have in my entire experience on LessWrong outside of this topic. If I were looking for upvotes, I would not act in nearly the same manner or with the same motivation for quality of reasoning. Really, if not for this topic, I would sacrifice principles and reasoning I consider rational for the sake of blending in and seeming more acceptable in the eyes of the voters, for the express purpose of reaching 20 karma points in case I ever decide to post something at the discussion level that I would expect to be immediately downvoted. If not for this discussion, I'd be here solely to blend in until I have something nasty to say loudly.
As cold as it sounds, no, I wouldn't consider it a loss. It really is a Boolean for me, which I feel isn't giving your argument the feedback it deserves, but the individual contributors to LessWrong either have the potential to seriously address and rationally discuss this issue, or else there will be a stasis of severely suboptimal discussion practices that carries no value to me (beyond what amounts to advertising).
In this case I want an FML button, a "Here's a cute kitten" button, and a "Mothers Against Drunk Posting" button. To start with. :-D
I have an alternate perspective. If something doesn't attract a lot of attention, I don't really want to put a lot of effort into thinking about it, because I have other things to do, and it seems no one cares either way.
But if I put some thought into something, and make a fairly detailed post, and get 20 net upvotes but no comments, while the next lower post in the thread has four net upvotes, I don't necessarily know what it was about what I posted that made it not just good, but so much better than everything else, and it seems like knowing that would be helpful so I could repeat it.
This thread here is a good example of this (at the time that I posted it; obviously, more people may add discussion/upvotes/downvotes):
http://lesswrong.com/r/discussion/lw/inm/signaling_of_what_precisely/9rfm
If I had made a simpler, smaller post that I felt a bit more certain about, I would just assume the upvotes meant "Yes, I agree." But in this case, people were strongly upvoting something I myself was uncertain about, which seems to leave me with "I'm glad you approve, but I don't know what you approve of because I myself don't feel confident about what the answer is, I'm just trying to break the question apart into separate possibilities and detailing some of the limited evidence I do have."
That's probably exactly it. The process of thinking about the question and analyzing it for me is something I'd want to encourage, particularly in subjects in which I have little to no expertise, so I'd upvote it.
(This has the obvious dangers, of course, but there's usually enough expert-ish people on any given topic on LW that controversial assumptions get brought out into the open fast. )
Really brainstorming now, I entertain the premise that focusing on features of single posts is orthogonal to the issue; the core of the topic is not in any given comment, but in the relations between comments themselves. Let's see what my brain turns up...
Reviewing the comments here, I am reminded that sub-comments often have higher vote counts than the comments they reply to: are we valuing the answers to the questions posed in the first-order comments? Moreover, if the base of a comment tree is not voted highly but the comments within it are, what should appear when sorted by leading?

Destructuring the apparatus, and from writing the "branching point" comment earlier, I wonder about free-form connection of comments in reply to other comments. (To say nothing of the implementation details.) What if we were to allow users other than the author of a post to mark it as reply-relevant to another post? Which posts would end up deserving of first-order leading status? Moreover, what about comments between articles? Are articles themselves not comments, of a sort? Should definitive-solution-to-the-topic comments more properly be classified as articles unto themselves? Should an article be written instead? What is the optimal method of communication and collaboration for a community of rationalists?

If a "comment cloud" is deemed a useful structural apparatus, what implementations could be constructed to make interfacing with the ever-evolving live discussion realistically usable? What dynamics and habits would interfere with any given interface? Is "thumbs up/down" a proper branching point for human psychology? Is there a better metric a community of rationalists could use to order posts?

I myself vastly prefer sorting by controversial on reddit, but I've found myself questioning whether that is the best way to sort LessWrong as well. On reddit, to increase the controversy rating of a comment, you vote it in the direction of zero. I honestly advocate this to anyone who understands the concept behind the idea, at least for the majority of reddit. This of course alters the meaning and implications of "karma" entirely, but I think it a useful heuristic: if people aren't disagreeing about it, I question how much it will interest me.
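For reference, the controversy sort in reddit's open-source code computes essentially the following; a comment needs both upvotes and downvotes to register at all, which is why voting toward zero raises its rank:

```python
def controversy(ups: int, downs: int) -> float:
    # No controversy without votes on both sides.
    if ups <= 0 or downs <= 0:
        return 0.0
    magnitude = ups + downs
    # Balance approaches 1.0 as the vote split approaches 50/50.
    balance = downs / ups if ups > downs else ups / downs
    return magnitude ** balance
```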
I'm having difficulty not finding the continuous comment-cloud idea worthy of experimentation in terms of usefulness, but it would require many different viewpoints assessing things in their own preferred manner to produce a result. Unsure of the exact mechanics of such a structure, I would have advocated providing a programmable interface, so that each individual could view comments in whatever way they find most useful, had we found ourselves in a civilization more apt to scripting. Still, I would advocate many flavors of interface optimization, with each user simultaneously trying to provide as much data to the different paradigms being tested as possible. Though perhaps I'm dreaming too far ahead on that front.
Even without an immediate implementation, I still find the idea useful: What can we do with the existing implementation to make it behave more optimally? In trying to reference other comments within this section, I've found that the hyperlink method is limited: I'm unsure how many users would look at the URL to see if it was a comment within this document, or if most users are liable to simply click the link regardless. Should I be clicking all the links, to ensure total coverage of the discussion on my part? Can we expect useful reformation if only a few participants are willing to change/override their commenting habits?
That all said, I feel as though I'm missing the point that I was aiming for: Am I still focusing on the comments rather than the discussions they communicate? Is the implementation irrelevant? Would trying to adopt a new method of browsing comments be of any usefulness?
What makes discussions useful? How do we detect that usefulness? Was my time spent brainstorming aloud useless to all others but myself in determining how not to solve this problem?
And most importantly: Which of those three questions is the one most worth trying to provide feedback on?
I managed to bake these thoughts:
There are three things worth upvoting:
- the content of a comment itself,
- the author of the comment, and
- the discussion resulting from the comment.
There's no way for me to upvote the discussion resulting from a comment without upvoting the root comment or the author of that comment. Given that those two side effects don't seem important to me, I've opted to upvote comments in which interesting discussions have occurred, though I'm honestly not sure sorting by leading doesn't already serve that purpose. Honestly, I sort by new: if I'm going to participate, I might as well read all the comments to see whether what I want to say has already been said. I have severe doubts that a "karma" system ultimately makes sense for anything other than articles. If titles and feedback meant nothing, we might as well list only the author and karma of articles in the index. We could get rid of the date too, for that matter.
Now that I think about it, I can't remember the names of half the authors who have written posts I've updated on. I'd like to dream up a system to help me track authors who tend to write comments I personally consider worth reading, but then I'm going to end up reading all the comments anyway if I am to participate, regardless of "quality."
But then again, as per my comments in A game of angels and devil, maybe I'm focusing on higher-order abstract qualities of discussion when the majority of readers simply need to be beaten into epistemic submission. Am I the only one who had already well understood, independently formalized, and completely internalized a supermajority of the articles on LessWrong prior to even discovering the site? Honestly, I'd like to wait and see, but there's been plenty of time already. The fix is in for this moment in the system's evolution: this article was an abysmal failure. I'm about ready to collect the names of the individuals worth the effort of continued discussion and create a safe haven away from the mass all too willing to divide itself out of the picture.
If there already exists such a cabal, it has done an abysmal job of improving matters here. I hope my recognition of the epistemic arrogance of others isn't itself mistaken for epistemic arrogance, but if such a cabal were at all worthy of being inducted into, it would already have more than enough epistemic humility to recognize that. I guess the fix is already in on that, too: this entire cluster of organizations and individuals is failing to accomplish its own goals. I can already model the paranoia of any existing cabals here. I guess if I'm the least paranoid among those capable, I really don't have any solution other than to keep lurking the IRC and grabbing up the individuals that exhibit the readily detectable qualities necessary. Hint hint. Hint. IRC. Hint.
I can't wait until my cabal has enough members to actively search. ~_~
I suppose this isn't the best place to ask, given the inherent reception lag, but it occurs to me that I might be writing vague comments, and being downvoted for not being clear enough in my expression of ideas.
Can anyone help verify if this is (strongly) the case?
Every now and then, I write an LW comment on some topic and feel that the contents of my comment pretty much settles the issue decisively. Instead, the comment seems to get ignored entirely - it either gets very few votes or none, nobody responds to it, and the discussion generally continues as if it had never been posted.
Similarly, every now and then I see somebody else make a post or comment that they clearly feel is decisive, but which doesn't seem very interesting to me. Either it seems to be saying something obvious, or I don't get its connection to the topic at hand in the first place.
This seems like it would be about inferential distance: either the writer doesn't know the things that make the reader experience the comment as uninteresting, or the reader doesn't know the things that make the writer experience the comment as interesting. So there's inferential silence - a sufficiently long inferential distance that a claim doesn't provoke even objections, just uncomprehending or indifferent silence.
But "explain your reasoning in more detail" doesn't seem like it would help with the issue. For one, we often don't know beforehand when people don't share our assumptions. Also, some of the comments or posts that seem to encounter this kind of a fate are already relatively long. For example, Wei Dai wondered why MIRI-affiliated people don't often respond to his posts that raise criticisms, and I essentially replied that I found the content of his post relatively obvious so didn't have much to say.
Perhaps people could more often explicitly comment if they notice that something that a poster seems to consider a big thing doesn't seem very interesting or meaningful to them, and briefly explain why? Even a sentence or two might be helpful for the original poster.