ChrisHallquist comments on Only You Can Prevent Your Mind From Getting Killed By Politics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I kind of want to respond "what hyporational said," but let me see if I can say it more clearly:
That's certainly true. I just think you can get a lot more information much faster directly examining how someone's beliefs change in response to new evidence.
Well, it definitely isn't: the test isn't specific enough to provide (much) information, given the vast number of people in the world who believe in climate change because it is a tribal signifier. The existence of God is pretty unique in being both insanely improbable and widely believed. Incidentally, Stuart's post doesn't actually argue otherwise. His argument actually doesn't even fit his thesis: what he's trying to say is that disbelief in anthropogenic climate change is indicative of a higher degree of irrationality than theism, not that it is more indicative. That might actually be true just based on the average denier of climate change, but it's hard to apply that standard universally when the certainty of climate scientists is only at 95%. 5% uncertainty leaves a little room for intelligent, rational skepticism among people who already tend to be suspicious of many established scientific theories. Conversely, the median probability assigned to God's existence in these parts is 0.
In other words: yes, the median climate change denier might indeed be less rational than the median theist. But the probability of anthropogenic climate change being wrong is much higher than the probability that God exists -- which makes it unreliable as a test. Also, that's clearly the quote my opponent will discover if I ever decide to run for public office.
Eh. Here was his thesis:
I sort of feel like the determination that theism is irrational, and its role as the Plimsoll line for participating at Less Wrong, is pretty central to the brand. In a lot of ways the community grew out of the atheist blogosphere, and we don't even really let theists argue here. I know some Right-leaning posters are already leery of a leftward tilt to Less Wrong: I can imagine them being annoyed by how his proposal sounds.
But at this point I think we're over-analyzing the post.
I don't think Stuart's test is particularly useful by itself, so don't take this as me defending it. His post is also vague and short enough to allow for several interpretations.
What do you mean by "directly examine"? What if you can't interact with the person but want to determine, for example, whether reading their book is worthwhile? Using a few belief litmus tests could be a great way to prevent wasting your time. There are other similar situations.
If there's anything good about a belief litmus test, it's that it's simpler to apply than anything else. Probing someone's belief structure might take a lot of time, and might be socially unacceptable in certain situations. It might not be easy to assess why a person fails to update, as they might have other conflicting beliefs you're not aware of. Like any test, there will be false positives and false negatives. I think it's a matter of personal preference how many you're willing to accept, and it depends on how much effort you're willing to put into testing.
A default test, not the default test. I think we're both nitpicking here and it's pretty pointless.
Please define Plimsoll line. Is there a reason you didn't use a more readily understandable word? I've seen theists stepping out of the closet and being upvoted here. It's just when they come here with the default arguments we've seen a million times that they get downvoted to oblivion.
That's truly bizarre, considering that I basically managed to lose 100 karma points for arguing fairly typical social-democratic positions on LessWrong just yesterday.
Now, yes, "politics is the mind-killer", but people get mind-killed in a direction, and the direction here is very definitely neoliberal, i.e., economically market-populist proprietarian, culturally liberal.
Have you considered that you lost your karma not because you argued typical social-democratic positions, but because you argued them badly?
That is entirely possible. However, in that case, I would expect that other people would argue social-democratic positions well (assuming we hold that social-democratic positions have the same prior probability as those of any other ideology of equivalent complexity), and receive upvotes for it. Instead, I just saw an overwhelmingly neoliberal consensus in which I was actually one of the two or three people explaining or advocating left-wing positions at all.
Think of the Talmud's old heuristic for a criminal court: a clear majority ruling is reliable, but a unanimous or nearly unanimous ruling indicates a failure to consider alternatives.
Now, admittedly, neoliberal positions often appear appealingly simple, even when counterintuitive. The problem is that they appear simple because the complexity is hiding in unexamined assumptions, assumptions often concealed in neat little parables like "money, markets, and businesses arise as a larger-scale elaboration of primitive barter relations". These parables are simple and sound plausible, so we give them very large priors. The problem is, they are also completely ahistorical, and only sound simple for anthropic reasons (that is: any theory about history which neatly leads to us will sound simpler than one that leads to some alternative present, even if real history was in fact more complicated and our real present less genuinely probable).
So overall, it seems that for LessWrong, any non-neoliberal position (i.e., a position based on refuting those parables) is going to have a larger inferential distance and take a nasty complexity penalty compared to simply accepting the parables and not going looking for historical evidence. This may be a fault of anthropic bias, or even possibly a fault of Bayesian thinking itself (i.e., large priors lead to very confident belief even in the absence of definite evidence).
This particular example doesn't seem troublesome to me, because I'm comfortable with the idea of bartering for debt. That is, my neighbor gives me a cow, and now I owe him one- then I defend his home from raiders, and give him a chicken, and then we're even. A tinker comes to town, and I trade him a pot of alcohol for a knife because there's no real trust of future exchanges, and so on. Coinage eventually makes it much easier to keep track of these things, because then we don't have my neighbor's subjective estimate of how much I owe him versus my subjective estimate of how much I owe my neighbor, we can count pieces of silver.
Now, suppose I'm explaining to a child how markets work. There are simply fewer moving pieces to tell it as "twenty chickens for a cow" than "a cow now for something roughly proportional to the value of the cow in the future," and so that's the explanation I'll use, but the theory still works for what actually happened. (Indeed, no doubt you can explain the preference for debt over immediate bartering as having lower frictional costs for transactions.)
In general, it's important to keep "this is an illustrative example" separate from "this is how it happened," which I don't know if various neoliberals have done. Adam Smith, for example, claims that barter would be impractical, and thus people immediately moved to currency, which was sometimes things like cattle but generally something metal.
In this particular thread or on LW in general?
In the particular thread, it's likely that such people didn't have time or inclination to argue, or maybe just missed this whole thing altogether. On LW in general, I don't know -- I haven't seen enough to form an opinion.
In any case the survey results do not support your thesis that LW is dominated by neoliberals.
Haven't seen much unanimity on sociopolitical issues here.
On the other hand there is that guy Bayes... hmm... what did you say about unanimity? :-D
Graeber's views are not quite mainstream consensus ones. And, as you say, *any* historical narrative will sound simple for anthropic reasons -- it's not something specific to neo-liberalism.
Not sure what you are proposing as an alternative to historical narratives leading to what actually happened. Basing theories of reality on counterfactuals doesn't sound like a good idea to me.
The survey results are out? Neat!
I'm not saying we should base theories on counterfactuals. I'm saying that we should account for anthropic bias when giving out complexity penalties. The real path reality took to produce us is often more complicated than the idealized or imagined path.
The question is: are they non-mainstream in economics, anthropology, or both? I wouldn't trust him to make any economic predictions, but if he tells me that the story of barter is false, I'm going to note that his training, employment, and social proof are as an academic anthropologist working with pre-industrial tribal cultures.
Previous years' survey results: 2012, 2011, 2009. The 2013 survey is currently ongoing.
How would that work?
I am not sure what the mainstream consensus in anthropology looks like, but I have the impression that Graeber's research is quite controversial.
At minimum, it does seem like many anthropologists see Graeber's work as much more tied into his politics than is typical even in that field, and that's a field that has serious issues with that as a whole.
Considering how many of their comments have been downvoted, including inquiries like this one, and other recent events, such as those discussed by Ialdabaoth and others here, my guess is that's not what is going on here.
I hope you realize the epistemic dangers of automatically considering all negative feedback as malicious machinations of your dastardly enemies...
While I take your point, it seems unlikely that that's what's motivating the response here. eli_sennesh and Eugine_Nier are about as far apart from each other politically as you can get without going into seriously fringe positions, with ialdabaoth in the middle, but there's evidence of block downvoting for all of them. You'd need a pretty dastardly enemy to explain all of that.
(I don't think block downvoting's responsible for most of eli's recent karma loss, though.)
Block, meaning organized effort? Definitely not. But I definitely find a -100 karma hit surprising, considering that even very hiveminded places like Reddit are very slow to accumulate comment votes in one direction or the other.
EDIT: And now I'm at +13 karma, which from -48 is simply absurd again. Is the system intended to produce dramatic swings like that? Have I invoked the "complain about downvoting, get upvoted like mad" effect seen normally on Reddit?
There's a fairly common pattern where someone says something that a small handful of folks downvote, then other folks come along and upvote the comment back to zero because they don't feel it deserves to be negative, even though they would not have upvoted it otherwise. You've been posting a lot lately, so getting shifts of several dozen karma back and forth due to this kind of dynamic is not unheard of, though it's certainly extreme.
Concerted, not necessarily organized. It's possible for one person to put a pretty big dent in someone else's karma if they're tolerant of boredom and have a reasonable amount of karma of their own; you get four possible downvotes to each upvote of your own (upvotes aren't capped), which is only rate-limiting if you're new, downvoting everything you see, or heavily downvoted yourself.
This just happens to have been a sensitive issue recently, as the links in JoshuaZ's ancestor comment might imply.
Well, I'm sorry for kvetching, then.
I understand block downvoting as a user (one, but possibly more) just going through each and every post by a certain poster and downvoting each one without caring about what it says.
It is not an "organized effort" in the sense of a conspiracy.
Blockvoting may or may not be going on in this case, but at this point I also assign a high probability that there are people here who downvote essentially all posts that seem to be arguing for positions generally seen as being on the left end of the political spectrum. That seems to include posts which are purely giving data and statistics.
Ah, well. I blame Clippy, then.
As I mentioned, I accept the block downvoting exists, it's pretty obvious. However the question is what remains after you filter it out. And as you yourself point out, in this case the remainder is still negative.
Of course that would be epistemically dangerous. Dare I say it, as dangerous as assuming that all language used by people one doesn't like is adversarial?
More to the point, I haven't made any such assumption. There are contexts where negative feedback and discussion is genuine and useful, and some of eli's comments have been unproductive, and I've actually downvoted some of them. That doesn't alter the fact that there's nothing automatic going on: in the here and now, we have a problem involving at least one person, and likely more, downvoting primarily due to disagreement rather than anything substantive, and that downvoting is coming from a specific end of the political spectrum. That doesn't say anything about "dastardly enemies" -- it simply means that karma results on these specific issues are, in this context, highly likely to be unrepresentative, especially when people are apparently downvoting Eli's comments that are literal answers to questions that they don't like, such as here.
The possibilities that Eli's comments were downvoted "politically" and that they were downvoted "on merits" are not mutually exclusive. It's likely that both things happened.
Block down- and up-voting certainly exists. However, as has been pointed out, you should treat this as noise (or, rather, the zero-information "I don't like you" message) and filter it out to the degree that you can.
Frankly, I haven't looked carefully at votes in that thread, but some of Eli's posts were silly enough to downvote on their merits, IMHO. I have a habit of not voting on posts in threads that I participate in, but if I were just an observer, I would have probably downvoted a couple.
I agree that both likely happened. But if a substantial fraction was happening to the first, what does that suggest?
And how do you suggest one do so in this context?
Look at short neutral "utility" posts and add back the missing karma to all the rest.
For example if somewhere in the thread there were a post "Could you clarify?" and that post got -2 karma, you would just assume that two people block-downvoted everything and add 2 karma to every post in the thread.
If you want to be more precise about it, you can look at the "% positive" number which will help you figure out how much karma to add back.
I am not sure it's worth the bother, though.
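For what it's worth, the adjustment described above can be sketched in a few lines of Python. This is just a toy illustration of the heuristic; the function name and numbers are invented, not anything LessWrong actually implements:

```python
def corrected_karma(post_scores, neutral_post_score):
    """Estimate block-downvote-free karma for posts in a thread.

    neutral_post_score is the observed score of a short, neutral
    "utility" post (e.g. "Could you clarify?") that would normally
    sit at 0. If it shows -2, we assume two people block-downvoted
    everything in the thread and add 2 back to every post.
    """
    # Inferred number of block-downvoters (never negative).
    offset = max(0, -neutral_post_score)
    return [score + offset for score in post_scores]

# A thread where the "Could you clarify?" post sits at -2:
print(corrected_karma([-5, -3, 1], neutral_post_score=-2))
# → [-3, -1, 3]
```

Note that the remainder can still be negative after the correction, which is exactly the point being made here.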
To be clear, I don't think someone's net-stalking me. That would be ridiculous. But I do think there's a certain... tone and voice that's preferred in a LessWrong post, and I haven't learned it yet. There's a way to "sound more rational", and votes are following that.
Well, one possibility is that fairly typical social-democratic positions are "left" of LW's earlier position according to those "Right-leaning posters," and therefore constitute a left-ward tilt from their perspective.