
Only You Can Prevent Your Mind From Getting Killed By Politics

38 Post author: ChrisHallquist 26 October 2013 01:59PM

Follow-up to: "Politics is the mind-killer" is the mind-killer, Trusting Expert Consensus

Gratuitous political digs are to be avoided. Indeed, I edited my post on voting to keep it from sounding any more partisan than necessary. But the fact that writers shouldn't gratuitously mind-kill their readers doesn't mean that, when they do, the readers' reaction is rational. The rules for readers are different from the rules for writers. And it especially doesn't mean that when a writer talks about a "political" topic for a reason, readers can use "politics!" as an excuse for attacking a statement of fact that makes them uncomfortable.

Imagine an alternate history where Blue and Green remain important political identities into the early stages of the space age. Blues, for complicated ideological reasons, tend to support trying to put human beings on the moon, while Greens, for complicated ideological reasons, tend to oppose it. But in addition to the ideological reasons, it has become popular for Greens to oppose attempting a moonshot on the grounds that the moon is made of cheese, and any landing vehicle put on the moon would sink into the cheese.

Suppose you're a Green, but you know perfectly well that the claim the moon is made of cheese is ridiculous. You tell yourself that you needn't be too embarrassed by your fellow Greens on this point. On the whole, the Green ideology is vastly superior to the Blue ideology, and furthermore some Blues have begun arguing we should go to the moon because the moon is made of gold and we could get rich mining the gold. That's just as ridiculous as the assertion that the moon is made of cheese.

Now imagine that one day, you're talking with someone who you strongly suspect is a Blue, and they remark on how irrational it is for so many people to believe the moon is made of cheese. When you hear that, you may be inclined to get defensive. Politics is the mind-killer, arguments are soldiers, so the point about the irrationality of the cheese-mooners may suddenly sound like a soldier for the other side that must be defeated.

Except... you know the claim that the moon is made of cheese is ridiculous. So let me suggest that, in that moment, it's your duty as a rationalist to not chastise them for making such a "politically charged" remark, and not demand they refrain from saying such things unless they make it perfectly clear they're not attacking all Greens or saying it's irrational to oppose a moon shot, or anything like that.

Quoth Eliezer:

Robin Hanson recently proposed stores where banned products could be sold.  There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals.  But even so (I replied), some poor, honest, not overwhelmingly educated mother of 5 children is going to go into these stores and buy a "Dr. Snakeoil's Sulfuric Acid Drink" for her arthritis and die, leaving her orphans to weep on national television.

I was just making a simple factual observation.  Why did some people think it was an argument in favor of regulation?

Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation, you shouldn't assume the suspected Blue's observation is a pro-moon shot or anti-Green argument.

The above parable was inspired by some of the discussion of global warming I've seen on LessWrong. According to the 2012 LessWrong readership survey, the mean confidence of LessWrong readers in human-caused global warming is 79%, and the median confidence is 90%. That's more or less in line with the current scientific consensus.

Yet references to anthropogenic global warming (AGW) in posts on LessWrong often elicit negative reactions. For example, last year Stuart Armstrong wrote a post titled "Global warming is a better test of irrationality than theism." His thesis was non-obvious, yet on reflection, I think, probably correct. AGW-denialism is a closer analog to creationism than to theism. As bad as theism is, it isn't a rejection of a generally accepted (among scientists) scientific claim with a lot of evidence behind it just because the claim clashes with your ideology. Creationism and AGW-denialism do fall under that category, though.

Stuart's post was massively downvoted: currently at -2, though at one point I think it went as low as -7. Why? Judging from the comments, not because people were saying, "yeah, global warming denialism is irrational, but it's not clear it's worse than theism." Here's the most-upvoted comment (currently at +44), which was also cited as the "best reaction I've seen to discussion of global warming anywhere" in the comment thread on my post Trusting Expert Consensus:

Here's the main thing that bothers me about this debate. There's a set of many different questions involving the degree of past and current warming, the degree to which such warming should be attributed to humans, the degree to which future emissions would cause more warming, the degree to which future emissions will happen given different assumptions, what good and bad effects future warming can be expected to have at different times and given what assumptions (specifically, what probability we should assign to catastrophic and even existential-risk damage), what policies will mitigate the problem how much and at what cost, how important the problem is relative to other problems, what ethical theory to use when deciding whether a policy is good or bad, and how much trust we should put in different aspects of the process that produced the standard answers to these questions and alternatives to the standard answers. These are questions that empirical evidence, theory, and scientific authority bear on to different degrees, and a LessWronger ought to separate them out as a matter of habit, and yet even here some vague combination of all these questions tends to get mashed together into a vague question of whether to believe "the global warming consensus" or "the pro-global warming side", to the point where when Stuart says some class of people is more irrational than theists, I have no idea if he's talking about me. If the original post had said something like, "everyone whose median estimate of climate sensitivity to doubled CO2 is lower than 2 degrees Celsius is more irrational than theists", I might still complain about it falling afoul of anti-politics norms, but at least it would help create the impression that the debate was about ideas rather than tribes.

If you read Stuart's original post, it's clear this comment is reading ambiguity into the post where none exists. You could argue that Stuart was a little careless in switching between talking about AGW and global warming simpliciter, but I think his meaning is clear: he thinks rejection of AGW is irrational, which entails that he thinks the stronger "no warming for any reason" claim is irrational. And there's no justification whatsoever for suggesting Stuart's post could be read as saying, "if your estimate of future warming is only 50% of the estimate I prefer you're irrational"—or as taking a position on ethical theories, for that matter. 

What's going on here? Well, the LessWrong readership is mostly on board with the scientific view on global warming. But many identify as libertarians, and they're aware that in the US many conservatives and libertarians reject that scientific consensus (and no, that's not just a stereotype). So hearing someone say AGW denialism is irrational is really uncomfortable for them, even if they agree. This leaves them wanting some kind of excuse to complain; one commenter offers "this is ambiguous and too political" as that excuse, and a bunch of people upvote it.

(If you still don't find any of this odd, think of the "skeptic" groups that freely mock ufologists or psychics or whatever, but which are reluctant to say anything bad about religion, even though in truth the group is dominated by atheists. Far from a perfect parallel, but it's still worth thinking about.)

When the title for this post popped into my head, I had to stop and ask myself if it was actually true, or just a funny Smokey the Bear reference. But in an important sense it is: the broader society isn't going to stop spontaneously labeling various straightforward empirical questions as Blue or Green issues. If you want to stop your mind from getting killed by whatever issues other people have decided are political, the only way is to control how you react to that.

Comments (143)

Comment author: blacktrance 30 October 2013 03:26:46AM 9 points [-]

To contribute a "trick" that, in my experience, makes this easier, when you hear a political point, disentangle the empirical claims from the normative claims, and think to yourself, "Even if their empirical claims are correct, that doesn't necessarily mean I should accept their normative claims. I should examine the two separately."

Comment author: Lumifer 30 October 2013 02:41:57PM 5 points [-]

Yep, good advice. Disentangling descriptive from normative is a useful habit in general, not only in politics.

Comment author: eli_sennesh 26 November 2013 05:47:14PM 3 points [-]

In general, your internal type-checker should reject any and all mixing of descriptive and normative claims. It doesn't matter if the domain is politics or chess.

Comment author: Ishaan 27 October 2013 08:20:17PM *  16 points [-]

Now imagine that one day, you're talking with someone who you strongly suspect is a Blue, and they remark on how irrational it is for so many people to believe the moon is made of cheese.

I'm a big fan of "Agree Denotationally But Object Connotationally" when this is the case.

Or, when talking to your fellow Greens about the moon, you would "agree connotationally but object denotationally". I find that for me this is actually even more common than the reverse.

think of the "skeptic" groups that freely mock ufologists or psychics or whatever, but which are reluctant to say anything bad about religion, even though in truth the group is dominated by atheists.

Okay, let's run with that example. If someone says something like "Theists are stupid"... I agree denotationally, in that I think theism is foolish and I'm aware that holding theistic beliefs is negatively correlated with intelligence. I disagree connotationally with the disdain and patronizing attitude implicit in the statement, and I dislike the motivations the person probably had for making it. If the same person had said "religiosity is negatively correlated with intelligence", then I would have no objections: it's the exact same information, but the tone indicates that they are simply stating a fact. For particularly charged topics, explicit disclaimers voiding the connotations which normally occur are helpful.

I'm not sure it's practical, as a reader, to read writing and extract purely the denotative information, simply because of the sheer volume of useful information which is embedded within the connotations. If language is about communicating mental states and inferring the mental states of others, you can't communicate nearly as effectively if you toss out connotation.

TL;DR for Yvain's post: "Your statement is technically true, but I disagree with the connotations. If you state them explicitly, I will explain why I think they are wrong."

Comment author: Will_Newsome 28 October 2013 05:34:37AM 4 points [-]
Comment author: Douglas_Knight 26 October 2013 04:47:48PM 8 points [-]

a minor typo:

median confidence ... is 79%, and the mean confidence is 90%.

That is impossible with confidence bounded by 100%. Take an extreme case: just over half the population puts 79%, half 100%. Then the mean is just under 89.5. I checked that you switched the mean and median.
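Douglas's impossibility argument can be checked with a few lines of Python. This is just an illustrative sketch (the respondent count is invented), not a computation on the actual survey data:

```python
# A numeric check of Douglas_Knight's argument (illustrative only; the
# survey size below is made up): with answers capped at 100, a median
# of 79 forces the mean strictly below 89.5. The most extreme case has
# half the answers at the median and the other half at the maximum.
from statistics import mean, median

n = 1000                              # hypothetical number of respondents
answers = [79] * (n // 2 + 1) + [100] * (n // 2 - 1)

assert median(answers) == 79          # median matches the reported 79%
assert mean(answers) < 89.5           # so a mean of 90% is impossible
print(mean(answers))                  # → 89.479
```

Any distribution bounded by 100 with median 79 is at most this extreme, so its mean can only be lower, which is why the reported mean of 90% had to be a transposition.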

Comment author: ChrisHallquist 26 October 2013 04:51:18PM 1 point [-]

Fixed.

Comment author: Jack 26 October 2013 10:13:14PM *  16 points [-]

The whole idea of having a belief as a litmus test for rationality seems totally backward. The whole point is how you change your beliefs in response to new evidence.

Meanwhile, if a lot of people have a belief that isn't true it is almost necessarily politically salient. The existence of God isn't an issue that is debated in the halls of government: but it is still hugely about group identity which means that people can get mind-killed about it. The only reason it works as any kind of litmus test is that everyone here is/was already a part of the same group when it comes to theism.

I think the true objection to Stuart's post was less about climate change and more about branding Less Wrong with an issue that has ideological salience. And that seems totally fair to me. If you have a one-issue litmus test, it's sort of weird to make it one that isn't specific enough to screen out even the most irrational liberals. At the very least, add a sub-test asking whether a person thinks carbon emissions are responsible for the Hurricane Sandy disaster, their confidence that climate change causes more hurricanes, and what (if any) existential risk they assign to it. Catch the folks who think the moon is made out of gold in the filter.

Comment author: hyporational 27 October 2013 07:27:02AM *  10 points [-]

The whole idea of having a belief as a litmus test for rationality seems totally backward. The whole point is how you change your beliefs in response to new evidence.

I think this is a very uncharitable interpretation of what the post in question is trying to say. First, the post isn't proposing a litmus test, but a test that is better than theism in identifying irrationality. Second, how would you know if someone changes their beliefs in response to new evidence without assessing their beliefs in relation to shared evidence? There's no way Stuart was stupid enough to think evidence shouldn't be shared for this to work.

ETA: I'm not a native speaker, and I'm not sure how people use the word litmus test anymore.

Comment author: TheOtherDave 27 October 2013 05:34:31PM 8 points [-]

"Litmus test" in common U.S. usage means a quick and treated-as-reliable proxy indicator for whether a system is in a given state. To treat X as a litmus test for rationality, for example, is to be very confident that a system is rational if the system demonstrates X, and (to a lesser extent) to be very confident that a system is irrational if the system fails to demonstrate X.

Comment author: Jack 28 October 2013 07:26:11AM 0 points [-]

This is how I meant it.

Comment author: hyporational 28 October 2013 01:56:49AM *  0 points [-]

That's what I thought first too, but it seems to also have a political meaning.

treated-as-reliable

You mean the test can be completely unreliable, like many political litmus tests probably are?

Comment author: TheOtherDave 28 October 2013 02:25:14AM 1 point [-]

Yes, I do mean that.

Comment author: hyporational 28 October 2013 02:32:13AM 0 points [-]

What a sadly disfigured figure of speech. Chemists would disapprove :(

I wonder if there are many more like it.

Comment author: Nornagest 28 October 2013 01:59:56AM 0 points [-]

That's pretty much the same meaning; just read "person or policy" for "system", and "ideologically acceptable" for "in a given state".

Comment author: ChrisHallquist 28 October 2013 04:34:09AM 2 points [-]

I kind of want to respond "what hyporational said," but let me see if I can say it more clearly:

  • Yes, the point of rationality is how you change your beliefs in response to new evidence, but some beliefs are evidence that the person who holds the belief isn't doing a very good job of that.
  • Admittedly, any single belief is just one bit of information about a person's rationality, and maybe Stuart should have acknowledged that. But it still makes sense to talk about which bits are more informative.
  • I doubt Stuart meant to suggest AGW should be "the" litmus test for LessWrong, or a central part of LessWrong's branding, or anything like that. Again, the question is just which bit is more informative.
Comment author: Jack 28 October 2013 08:17:14AM 3 points [-]

Yes, the point of rationality is how you change your beliefs in response to new evidence, but some beliefs are evidence that the person who holds the belief isn't doing a very good job of that.

That's certainly true. I just think you can get a lot more information much faster directly examining how someone's beliefs change in response to new evidence.

Admittedly, any single belief is just one bit of information about a person's rationality, and maybe Stuart should have acknowledged that. But it still makes sense to talk about which bits are more informative.

Well, it's definitely not the bit that isn't specific enough to provide (much) information about the vast number of people in the world who believe in climate change because it is a tribal signifier. The existence of God is pretty unique in being both insanely improbable and widely believed. Incidentally, Stuart's post doesn't actually argue otherwise. His argument doesn't even fit his thesis: what he's trying to say is that disbelief in anthropogenic climate change is indicative of a higher degree of irrationality than theism, not that it is more indicative. That might actually be true just based on the average denier of climate change, but it's hard to apply that standard universally when the certainty of climate scientists is only at 95%. 5% uncertainty leaves a little room for intelligent, rational skepticism among people who already tend to be suspicious of many established scientific theories. Conversely, the median probability assigned to God's existence in these parts is 0.

In other words: yes, the median climate change denier might indeed be less rational than the median theist. But the probability of anthropogenic climate change being wrong is much higher than the probability that God exists -- which makes it unreliable as a test. Also, that's clearly the quote my opponent will discover if I ever decide to run for public office.

I doubt Stuart meant to suggest AGW should be "the" litmus test for LessWrong, or a central part of LessWrong's branding, or anything like that. Again, the question is just which bit is more informative.

Eh. Here was his thesis:

Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate.

I sort of feel like the determination that theism is irrational, and its role as the Plimsoll line for participating at Less Wrong, is pretty central to the brand. In a lot of ways the community grew out of the atheist blogosphere, and we don't even really let theists argue here. I know some Right-leaning posters are already leery of a leftward tilt to Less Wrong: I can imagine them being annoyed by how his proposal sounds.

But at this point I think we're over-analyzing the post.

Comment author: hyporational 28 October 2013 10:29:20AM 2 points [-]

I don't think Stuart's test is particularly useful by itself, so don't take this as me defending it. His post is also vague and short enough to allow for several interpretations.

That's certainly true. I just think you can get a lot more information much faster directly examining how someone's beliefs change in response to new evidence.

What do you mean by "directly examine"? What if you can't interact with the person but want to determine whether reading their book is worthwhile for example? Using a few belief litmus tests could be a great way to prevent wasting your time. There are other similar situations.

If there's anything good about a belief litmus test, it's that it's simpler to apply than anything else. Probing someone's belief structure might take a lot of time, and might be socially unacceptable in certain situations. It might not be easy to assess why a person fails to update, as they might have other conflicting beliefs you're not aware of. Like any test, there will be false positives and false negatives. I think it's a matter of personal preference how many you're willing to accept, and depends on how much effort you're willing to put into testing.

Theism is often a default test of irrationality on Less Wrong, but I propose that global warming denial would make a much better candidate.

A default test, not the default test. I think we're both nitpicking here and it's pretty pointless.

I sort of feel like the determination that theism is irrational and it's role as the Plimsoll line for participating at Less Wrong is pretty central to the brand.

Please define Plimsoll line. Is there a reason you didn't use a more readily understandable word? I've seen theists stepping out of the closet and being upvoted here. It's just when they come here with the default arguments we've seen a million times that they get downvoted to oblivion.

Comment author: eli_sennesh 26 November 2013 07:06:22PM *  0 points [-]

I know some Right-leaning posters are already leery of a left-ward tilt to Less Wrong:

That's truly bizarre, considering that I basically managed to lose 100 karma points for arguing fairly typical social-democratic positions on LessWrong just yesterday.

Now, yes, "politics is the mind-killer", but people get mind-killed in a direction, and the direction here is very definitely neoliberal, ie: economically market-populist proprietarian, culturally liberal.

Comment author: Lumifer 26 November 2013 07:26:20PM 3 points [-]

considering that I basically managed to lose 100 karma points for arguing fairly typical social-democratic positions on LessWrong just yesterday.

Have you considered that you lost your karma not because you argued typical social-democratic positions, but because you argued them badly?

Comment author: eli_sennesh 26 November 2013 08:24:36PM 1 point [-]

That is entirely possible. However, in that case, I would expect that other people would argue social-democratic positions well (assuming we hold that social-democratic positions have the same prior probability as those of any other ideology of equivalent complexity), and receive upvotes for it. Instead, I just saw an overwhelmingly neoliberal consensus in which I was actually one of the two or three people explaining or advocating left-wing positions at all.

Think of the Talmud's old heuristic for a criminal court: a clear majority ruling is reliable, but a unanimous or nearly unanimous ruling indicates a failure to consider alternatives.

Now, admittedly, neoliberal positions often appear appealingly simple, even when counterintuitive. The problem is that they appear simple because the complexity is hiding in unexamined assumptions, assumptions often concealed in neat little parables like "money, markets, and businesses arise as a larger-scale elaboration of primitive barter relations". These parables are simple and sound plausible, so we give them very large priors. Problem is, they are also completely ahistorical, and only sound simple for anthropic reasons (that is: any theory about history which neatly leads to us will sound simpler than one that leads to some alternative present, even if real history was in fact more complicated and our real present less genuinely probable).

So overall, it seems that for LessWrong, any non-neoliberal position (ie: position based on refuting those parables) is going to have a larger inferential distance and take a nasty complexity penalty compared to simply accepting the parables and not going looking for historical evidence. This may be a fault of anthropic bias, or even possibly a fault of Bayesian thinking itself (ie: large priors lead to very-confident belief even in the absence of definite evidence).

Comment author: Vaniver 26 November 2013 09:22:30PM *  7 points [-]

Now, admittedly, neoliberal positions often appear appealingly simple, even when counterintuitive. The problem is that they appear simple because the complexity is hiding in unexamined assumptions, assumptions often concealed in neat little parables like "money, markets, and businesses arise as a larger-scale elaboration of primitive barter relations". These parables are simple and sound plausible, so we give them very large priors. Problem is, they are also completely ahistorical, and only sound simple for anthropic reasons (that is: any theory about history which neatly leads to us will sound simpler than one that leads to some alternative present, even if real history was in fact more complicated and our real present less genuinely probable).

This particular example doesn't seem troublesome to me, because I'm comfortable with the idea of bartering for debt. That is, my neighbor gives me a cow, and now I owe him one- then I defend his home from raiders, and give him a chicken, and then we're even. A tinker comes to town, and I trade him a pot of alcohol for a knife because there's no real trust of future exchanges, and so on. Coinage eventually makes it much easier to keep track of these things, because then we don't have my neighbor's subjective estimate of how much I owe him versus my subjective estimate of how much I owe my neighbor, we can count pieces of silver.

Now, suppose I'm explaining to a child how markets work. There are simply fewer moving pieces in telling it as "twenty chickens for a cow" than as "a cow now for something roughly proportional to the value of the cow in the future," and so that's the explanation I'll use, but the theory still works for what actually happened. (Indeed, no doubt you can explain the preference for debt over immediate bartering as having lower frictional costs for transactions.)

In general, it's important to keep "this is an illustrative example" separate from "this is how it happened," which I don't know if various neoliberals have done. Adam Smith, for example, claims that barter would be impractical, and thus people immediately moved to currency, which was sometimes things like cattle but generally something metal.

Comment author: Lumifer 26 November 2013 08:40:01PM *  3 points [-]

I would expect that other people would argue social-democratic positions well

In this particular thread or on LW in general?

In the particular thread, it's likely that such people didn't have time or inclination to argue, or maybe just missed this whole thing altogether. On LW in general, I don't know -- I haven't seen enough to form an opinion.

In any case the survey results do not support your thesis that LW is dominated by neoliberals.

but a unanimous or nearly unanimous ruling indicates a failure to consider alternatives.

Haven't seen much unanimity on sociopolitical issues here.

On the other hand there is that guy Bayes... hmm... what did you say about unanimity? :-D

Problem is, they are also complete ahistorical, and only sound simple for anthropic reasons

Graeber's views are not quite mainstream consensus ones. And, as you say, *any* historical narrative will sound simple for anthropic reasons -- it's not something specific to neo-liberalism.

Not sure what you are proposing as an alternative to historical narratives leading to what actually happened. Basing theories of reality on counterfactuals doesn't sound like a good idea to me.

Comment author: eli_sennesh 26 November 2013 08:48:26PM -1 points [-]

In any case the survey results do not support your thesis that LW is dominated by neoliberals.

The survey results are out? Neat!

Not sure what you are proposing as an alternative to historical narratives leading to what actually happened. Basing theories of reality on counterfactuals doesn't sound like a good idea to me.

I'm not saying we should base theories on counterfactuals. I'm saying that we should account for anthropic bias when giving out complexity penalties. The real path reality took to produce us is often more complicated than the idealized or imagined path.

Graeber's views are not quite mainstream consensus ones.

The question is: are they non-mainstream in economics, anthropology, or both? I wouldn't trust him to make any economic predictions, but if he tells me that the story of barter is false, I'm going to note that his training, employment, and social proof are as an academic anthropologist working with pre-industrial tribal cultures.

Comment author: Vaniver 26 November 2013 09:20:20PM *  4 points [-]

The survey results are out? Neat!

Previous years' survey results: 2012, 2011, 2009. The 2013 survey is currently ongoing.

Comment author: Lumifer 26 November 2013 09:26:31PM 3 points [-]

I'm saying that we should account for anthropic bias when giving out complexity penalties.

How would that work?

The question is: are they non-mainstream in economics, anthropology, or both?

I am not sure what the mainstream consensus in anthropology looks like, but I have the impression that Graeber's research is quite controversial.

Comment author: JoshuaZ 26 November 2013 09:30:51PM 0 points [-]

At minimum, it does seem like many anthropologists see Graeber's work as much more tied into his politics than is typical even for that field, and that's a field that has serious issues with that as a whole.

Comment author: JoshuaZ 26 November 2013 07:47:03PM 1 point [-]

Considering how many of their comments have been downvoted, including inquiries like this one, and other recent events, such as those discussed by Ialdabaoth and others here, my guess is that's not what is going on here.

Comment author: Lumifer 26 November 2013 07:58:41PM 4 points [-]

I hope you realize the epistemic dangers of automatically considering all negative feedback as the malicious machinations of your dastardly enemies...

Comment author: Nornagest 26 November 2013 08:04:38PM *  2 points [-]

While I take your point, it seems unlikely that that's what's motivating the response here. eli_sennesh and Eugine_Nier are about as far apart from each other politically as you can get without going into seriously fringe positions, with ialdabaoth in the middle, but there's evidence of block downvoting for all of them. You'd need a pretty dastardly enemy to explain all of that.

(I don't think block downvoting's responsible for most of eli's recent karma loss, though.)

Comment author: eli_sennesh 26 November 2013 08:26:21PM *  0 points [-]

(I don't think block downvoting's responsible for most of eli's recent karma loss, though.)

Block, meaning organized effort? Definitely not. But I definitely find a -100 karma hit surprising, considering that even very hiveminded places like Reddit are very slow to accumulate comment votes in one direction or the other.

EDIT: And now I'm at +13 karma, which from -48 is simply absurd again. Is the system intended to produce dramatic swings like that? Have I invoked the "complain about downvoting, get upvoted like mad" effect seen normally on Reddit?

Comment author: TheOtherDave 26 November 2013 08:37:48PM 5 points [-]

There's a fairly common pattern where someone says something that a small handful of folks downvote, then other folks come along and upvote the comment back to zero because they don't feel it deserves to be negative, even though they would not have upvoted it otherwise. You've been posting a lot lately, so getting shifts of several dozen karma back and forth due to this kind of dynamic is not unheard of, though it's certainly extreme.

Comment author: Nornagest 26 November 2013 08:30:41PM *  3 points [-]

Concerted, not necessarily organized. It's possible for one person to put a pretty big dent in someone else's karma if they're tolerant of boredom and have a reasonable amount of karma of their own; you get four possible downvotes to each upvote of your own (upvotes aren't capped), which is only rate-limiting if you're new, downvoting everything you see, or heavily downvoted yourself.

This just happens to have been a sensitive issue recently, as the links in JoshuaZ's ancestor comment might imply.

Comment author: Lumifer 26 November 2013 08:54:02PM 2 points [-]

Block, meaning organized effort?

I understand block downvoting as a user (one, but possibly more) just going through each and every post by a certain poster and downvoting each one without caring about what it says.

It is not an "organized effort" in the sense of a conspiracy.

Comment author: JoshuaZ 26 November 2013 08:29:29PM 0 points [-]

Blockvoting may or may not be going on in this case, but at this point I also assign a high probability that there are people here who downvote essentially all posts that seem to be arguing for positions generally seen as being on the left end of the political spectrum. That seems to include posts which are purely giving data and statistics.

Comment author: Lumifer 26 November 2013 08:24:56PM 1 point [-]

As I mentioned, I accept that block downvoting exists; it's pretty obvious. However, the question is what remains after you filter it out. And as you yourself point out, in this case the remainder is still negative.

Comment author: JoshuaZ 26 November 2013 08:03:42PM *  0 points [-]

I hope you realize the epistemic dangers of automatically considering all negative feedback as malicious machinations of your dastardly enemies...

Of course that would be epistemically dangerous. Dare I say it, just as dangerous as assuming that all language used by people one doesn't like is adversarial?

More to the point, I haven't made any such assumption. There are contexts where negative feedback and discussion are genuine and useful, and some of eli's comments have been unproductive; I've actually downvoted some of them. That doesn't alter the fact that there's nothing automatic going on: in the here and now, we have a problem involving at least one person, and likely more, downvoting primarily due to disagreement rather than anything substantive, and that downvoting is coming from a specific end of the political spectrum. That doesn't say anything about "dastardly enemies"; it simply means that karma results on these specific issues are highly likely in this context to be unrepresentative, especially when people are apparently downvoting comments of Eli's that are literal answers to questions, simply because they dislike the answers, such as here.

Comment author: Lumifer 26 November 2013 08:15:20PM 4 points [-]

The possibilities that Eli's comments were downvoted "politically" and that they were downvoted "on merits" are not mutually exclusive. It's likely that both things happened.

Block down- and up-voting certainly exists. However, as has been pointed out, you should treat this as noise (or, rather, the zero-information "I don't like you" message) and filter it out to the degree that you can.

Frankly, I haven't looked carefully at votes in that thread, but some of Eli's posts were silly enough to downvote on their merits, IMHO. I have a habit of not voting on posts in threads that I participate in, but if I were just an observer, I would have probably downvoted a couple.

Comment author: JoshuaZ 26 November 2013 08:16:26PM *  0 points [-]

The possibilities that Eli's comments were downvoted "politically" and that they were downvoted "on merits" are not mutually exclusive. It's likely that both things happened.

I agree that both likely happened. But if a substantial fraction was due to the first, what does that suggest?

However, as has been pointed out, you should treat this as noise (or, rather, the zero-information "I don't like you" message) and filter it out to the degree that you can.

And how do you suggest one do so in this context?

Comment author: eli_sennesh 26 November 2013 08:25:48PM 1 point [-]

To be clear, I don't think someone's net-stalking me. That would be ridiculous. But I do think there's a certain... tone and voice that's preferred in a LessWrong post, and I haven't learned it yet. There's a way to "sound more rational", and votes are following that.

Comment author: TheOtherDave 26 November 2013 07:56:43PM 1 point [-]

That's truly bizarre, considering that I basically managed to lose 100 karma points for arguing fairly typical social-democratic positions

Well, one possibility is that fairly typical social-democratic positions are "left" of LW's earlier position according to those "Right-leaning posters," and therefore constitute a left-ward tilt from their perspective.

Comment author: Watercressed 27 October 2013 12:04:51AM 0 points [-]

I generally agree with this post, but since people's beliefs are evidence for how they change their beliefs in response to evidence, I would call it bias-inducing and usually tribal cheering instead of totally backwards.

Comment author: Jack 27 October 2013 12:11:56AM 3 points [-]

If not "totally backwards", surely "orthogonal". Why not a test that supplies its own evidence and asks the one being tested to come to a conclusion? Like the Amanda Knox case was for people who hadn't heard of it before reading about it here.

Comment author: hyporational 28 October 2013 10:31:30AM 1 point [-]

There are several situations where that's not possible. Also it takes effort to test someone like that.

Comment author: Watercressed 27 October 2013 12:27:24AM 1 point [-]

I wouldn't call it orthogonal either. Rationality is about having correct beliefs, and I would label a belief-based litmus test rational to the extent it's correct.

Writing a post about how $political_belief is a litmus test is probably a bad idea because of the reasons you mentioned.

Comment author: Jack 27 October 2013 01:09:34AM 3 points [-]

Rationality is about having correct beliefs. But a single belief that has only two possible answers is never going to stand in for the entirety of a person's belief structure. That's why you have to look at the process by which a person forms beliefs to have any idea whether they are rational.

Comment author: Viliam_Bur 28 October 2013 11:12:55AM *  4 points [-]

a single belief that has only two possible answers is never going to stand in for the entirety of a person's belief structure.

Exactly. If there is any hope in using a list of beliefs as a test of rationality, it will need multiple items.

You know, IQ tests also don't have a single question. Neither do any other personality tests.

Comment author: army1987 28 October 2013 07:39:34PM 3 points [-]

OTOH the Cognitive Reflection Test has a shockingly low three questions and I've been told it's surprisingly accurate.

Comment author: Viliam_Bur 29 October 2013 09:44:04AM *  1 point [-]

I'd call it the "Paying-Good-Attention-While-Doing-Simple-Math Test". :D

But yeah... I can imagine that something similarly simple could be an important part of rationality. Some simple task that predicts the ability to do more complex tasks of a similar type.

However, in that case the test will resemble a kind of puzzle, instead of pattern-matching "Do you agree with Greens?"

Specifically for updating, I can imagine a test where the person is gradually given more and more information; the initial information is evidence for an outcome "A", but most of the later information is evidence for an outcome "B". The person is informally asked to make a guess soon after the beginning (when the reasonable answer is "A"), and at the end they are asked to provide a final answer. Some people would probably get stuck at "A", and some would update to "B". But the test would involve small numbers, shapes, coins, etc.; not real-life examples.

Comment author: Vaniver 03 November 2013 06:33:57PM 4 points [-]

Specifically for updating, I can imagine a test where the person is gradually given more and more information; the initial information is evidence for an outcome "A", but most of the later information is evidence for an outcome "B". The person is informally asked to make a guess soon after the beginning (when the reasonable answer is "A"), and at the end they are asked to provide a final answer. Some people would probably get stuck at "A", and some would update to "B". But the test would involve small numbers, shapes, coins, etc.; not real-life examples.

I've seen experiments that tested this; I thought they were mentioned in Thinking and Deciding or Thinking Fast and Slow, but I didn't see it in a quick check of either of those. If I recall the experimental setup correctly (I doubt I got the numbers right), they began with a sequence that was 80% red and 20% blue, which switched to being 80% blue and 20% red after n draws. The subjects' estimate that the next draw would be red stayed above 50% for significantly longer than n draws from the second distribution, and some took until 2n or 3n draws from the second distribution to assign 50% chance to each, at which point almost two thirds of the examples they had seen were blue!
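For comparison, here is a minimal sketch (my own illustration with made-up numbers, not the actual experimental code) of how an ideal Bayesian observer would update in that setup, assuming the observer models the draws as coming from a single fixed urn that is either "mostly red" (80% red) or "mostly blue" (80% blue), with even prior odds:

```python
def posterior_mostly_red(draws):
    """P(urn is mostly red | draws), starting from a 50/50 prior."""
    odds = 1.0  # odds in favor of "mostly red"
    for d in draws:
        # a red draw is 0.8/0.2 = 4x likelier under "mostly red";
        # a blue draw is 4x likelier under "mostly blue"
        odds *= 4.0 if d == "red" else 0.25
    return odds / (1.0 + odds)

n = 10
draws = ["red"] * 8 + ["blue"] * 2    # first n draws: 80% red
draws += ["blue"] * 8 + ["red"] * 2   # next n draws: 80% blue

p = posterior_mostly_red(draws)  # back to roughly even odds
```

Under this fixed-urn model, each draw multiplies the odds by a constant likelihood ratio, so the ideal observer is back to 50/50 after exactly n draws from the second distribution; subjects who take 2n or 3n draws to get there are updating conservatively relative to that benchmark.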

Comment author: army1987 02 November 2013 08:10:40PM 0 points [-]

But the test would involve some small numbers, shapes, coins, etc.; not real-life examples.

I dunno... people who do fine at the Wason selection task with ages and drinks get it wrong with numbers and colours. (I'm not sure whether that's a bug or a feature.)

Comment author: Viliam_Bur 03 November 2013 04:32:16PM *  4 points [-]

That seems to me like a reason not to test the skill on real-life examples.

We wouldn't want a rationality test that a person can pass with the original wording, but will fail if we replace "Republicans" with "Democrats"... or with Green aliens. We wouldn't want the person to merely recognize logical fallacies when spoken by Republicans. This is in my opinion a risk with real-life examples. Is the example with drinking age easier because it is easier to imagine, or because it is something we already agree with?

Okay, I am curious here... what exactly would happen if we replaced the Wason selection task with something that uses words from real life (is less abstract), but is not an actual rule (therefore it cannot be answered using only previous experience)? For example: "Only dogs are allowed at jumping competitions, cats are not allowed. We have a) a dog going to unknown competition; b) a cat going to unknown competition; c) an unknown animal going to swimming competition, and d) an unknown animal going to jumping competition -- which of these cases do you have to check thoroughly to make sure the rule is not broken?"

Comment author: ChristianKl 27 October 2013 02:39:00AM 1 point [-]

I generally agree with this post, but since people's beliefs are evidence for how they change their beliefs in response to evidence, I would call it bias-inducing and usually tribal cheering instead of totally backwards.

If I wanted to estimate people's rationality from their beliefs, I would look at whether the beliefs are nuanced. There are a lot of people who say irrational things, such as that the evidence we have for global warming is comparable to the evidence we have for evolution. In reality the p value doesn't even approach the 5 sigma level that you need to validate a new result in particle physics.

It's just as irrational as being a global warming denier who thinks that p(global warming)<0.5.

Yet we do see smart people making both mistakes. You have smart people who claim that the evidence for global warming is comparable to evolution and you have smart people who are global warming deniers.

People don't get mindkilled by political issues because they are dumb. It might be completely rational for them, because signaling is more important to them. If you want a useful metric to judge someone's rationality, don't take something where group identities matter a good deal.

The metric is just too noisy, because the person might get something from signaling group identity. I think the only reason to choose such a metric is that you got yourself mindkilled, want to label people who don't belong to your tribe as irrational, and seek some rationalisation for it.

As far as empirics go, college-educated Republicans actually have a higher rate of climate change denial than Republicans who didn't go to college.

While we can discuss whether college causes people to be more rational, it certainly correlates with it.

If you want to use beliefs to judge people's rationality, calibrate the test. Give people rationality quizzes and quiz them on their beliefs. If you get strong correlations, you have something you can use. Don't just intellectually analyse the content of the beliefs and think about what rational people should believe if you want an effective metric.

Comment author: hyporational 27 October 2013 08:07:05AM *  -2 points [-]

RETRACTED: It wasn't my intention to start another global warming debate.

If I wanted to estimate people's rationality from their beliefs, I would look at whether the beliefs are nuanced.

Lots of insane beliefs are nuanced.

In reality the p value doesn't even approach the 5 sigma level that you need to validate a new result in particle physics.

Requiring the same strength of evidence from climate science as from particle physics would be insane.

There are a lot of people who say irrational things, such as that the evidence we have for global warming is comparable to the evidence we have for evolution.

From Stuart's post: "Of course, reverse stupidity isn't intelligence: simply because one accepts AGW, doesn't make one more rational."

People don't get mind killed by political issues because they are dumb. It might be completely rational for them because signaling is more important for them.

Choosing to signal wouldn't be mindkill as it's understood here.

I think the only reason to choose such a metric is that you got yourself mindkilled, want to label people who don't belong to your tribe as irrational, and seek some rationalisation for it.

Labeling people seems to be exactly what you're doing yourself here. I can think of at least three more reasons.

I think Stuart simply underestimated the local mindkill caused by the global warming debate in other people, or failed to understand that local mindkill isn't necessarily a good metric for irrationality. Neither of those requires him to be mindkilled about the topic himself. One possibility is that he failed to evaluate the evidence on global warming himself and overestimated the probability of the relevant propositions.

You seem to be conflating intelligence and rationality in this comment. You probably know they're not the same thing.

All this being said, I don't agree with what Stuart was saying in his post. I have no opinion on global warming and haven't read much about it.

Comment author: ChristianKl 27 October 2013 02:31:56PM *  1 point [-]

Requiring the same strength of evidence from climate science as from particle physics would be insane.

What do you mean by "require"? If I say that climate science has the same strength of evidence as evolution, then we can debate whether climate change fulfills the 5 sigma criterion.

I don't think it does, and therefore the strength of evidence for climate change is not the same as the strength of evidence for evolution.

Why does it matter? It's an X-risk that global warming doesn't really exist and we do geoengineering that seriously wrecks our planet. That risk might be something like p=0.001, but it does exist. It's greater than the risk of an asteroid destroying our civilisation in the next 100 years.

To the extent that one cares about X-risks, it's important to distinguish claims with 2-3 sigma from those that pass 5 sigma. It's just not the same level of evidence.

If we want to stay alive over the next hundred years, it's important that decision makers in our society don't maneuver us into an X-risk because they treat 2-3 sigma the same way as they treat 5 sigma.

You seem to be conflating intelligence and rationality in this comment.

I don't use the word intelligence in the comment you quote. I use it in another post as a proxy variable. I equate rationality with the ability to update your beliefs in order to win.

Comment author: hyporational 27 October 2013 03:19:21PM *  0 points [-]

You used the words smart and dumb, I suppose that counts. I failed to understand most of your reply.

What do you mean with "require"?

I mean you don't need to be even nearly that certain for the findings to be actionable.

It's an X-risk that global warming doesn't really exist and we do geoengineering

What's the expected utility of that compared to the expected utility of AGW? If you're too uncertain, why not just try to drastically reduce emissions instead of do major geoengineering? What's the expected utility of reducing emissions?

Comment author: Moss_Piglet 27 October 2013 06:28:37PM 1 point [-]

What's the expected utility of that compared to the expected utility of AGW? If you're too uncertain, why not just try to drastically reduce emissions instead of do major geoengineering? What's the expected utility of reducing emissions?

The current understanding of climate sensitivity is that since Carbon Dioxide gas will remain in the upper atmosphere for decades (and possibly centuries) even a complete halt on emissions will not avert warming predicted for the next century or so. And the models currently favored have pretty dire predictions for that level of warming, even if they're less severe than the alternative.

The only realistic solution, and naturally the one most strongly opposed by environmental groups, is solar radiation management. This would be very expensive, about $700M a year according to David Keith, and has potential risks which should be tested before any implementation plan. So not a silver bullet, but still much cheaper and safer in the long run than the standard environmental agenda even according to their own data.

(Note: I am assuming for the sake of argument that current climate models are accurate, but that is an assumption which should be questioned. Climate modeling is still in its infancy and most existing models have difficulty with predictions even as close as a decade out. Warming is probably happening, but that does not mean that any given prediction of warming is accurate, for reasons which should be obvious.)

Comment author: army1987 28 October 2013 09:07:16AM 0 points [-]

The current understanding of climate sensitivity is that since Carbon Dioxide gas will remain in the upper atmosphere for decades (and possibly centuries) even a complete halt on emissions will not avert warming predicted for the next century or so.

Methane has a shorter lifetime, though (though my five minutes' research tells me we've already stopped increasing methane emissions).

Comment author: Jack 27 October 2013 08:20:02PM *  0 points [-]

Are you saying that solar radiation management is an alternative to long-term emissions reduction? Or that, in addition to eventually tapering off greenhouse gas emissions, we're going to have to do something to keep temperatures down, and the best option is solar radiation management?

(edit: apparently I wrote social radiation management)

Comment author: Moss_Piglet 27 October 2013 10:53:41PM *  6 points [-]

Reducing emissions is a good goal, but energy needs will continue to increase even as we decrease the number of tons of carbon dioxide per kWh. As the population increases and becomes more wealthy there's not much we can do but put out more carbon dioxide; that's one of the reasons people bent on lowering world population and wealth have attached themselves to the environmental movement.

If the stigma against nuclear power goes away, or the technological issues which make speculative energy sources like wind/solar/fusion unprofitable are resolved, we could see a bigger dip but even then the century-long trend will probably be one of increase. SRM is the most realistic way I can think of to head off serious disasters until then.

Comment author: ChristianKl 27 October 2013 04:37:50PM *  1 point [-]

I mean you don't need to be even nearly that certain for the findings to be actionable.

If I ask "What's the evidence for global warming being real?", I'm searching for an accurate description of the world. Having accurate maps of the world is useful.

In the above example, saying that the evidence for global warming is like that for evolution is like claiming the moon is made of cheese.

The belief might help you to convince people to reduce emissions. Believing that the moon is made of cheese might help you to discourage people from going to the moon.

If the reason someone advocates the ridiculous claim that the evidence for global warming is comparable to that for evolution is that it helps him convince people to lower emissions, that person is mindkilled by his politics.

What's the expected utility of that compared to the expected utility of AGW? If you're too uncertain, why not just try to drastically reduce emissions instead of do major geoengineering? What's the expected utility of reducing emissions?

Right, because our political leaders excel at making rational expected utility comparisons... Memes exist in the real world. They have effects. Promoting false beliefs about the certainty of science has dangers.

I'm not in a position to choose whether the world drastically reduces emissions or does major geoengineering, and scientists aren't either. Scientists do have a social responsibility to promote accurate beliefs about the world.

Whether or not we should reduce emissions is a different question. If you can't mentally separate "Should we reduce emissions?" from "What's the evidence for global warming?", you are likely mindkilled about the second question and hold beliefs that aren't accurate descriptions of reality.

Comment author: TheOtherDave 26 October 2013 05:32:24PM 8 points [-]

Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation,

But did they assume it?
Or did they conclude it based on inferences from Eliezer's comment and the broader context?

To recast that in more local-jargon, Bayesian terms... how high was their prior probability that Eliezer was making an argument in favor of regulation, and how much evidence in favor of that proposition was the comment itself, and did they over-weight that evidence?

Beats me, I wasn't there.
I might not be able to tell, even if I had been there.
But saying they "assumed" it in this context connotes that their priors were inappropriately high.

I'm not sure that connotation is justified, either in the specific case you quote Eliezer as discussing, or in the general case you and he treat it as illustrative of.

Maybe, instead, they were overweighting the evidence provided by the comment itself.

Or maybe they were weighting the evidence properly and arriving at, say, a .7 confidence that Eliezer was making an argument in favor of regulation, and (quite properly) made their bet as though that was the case... and turned out, in this particular case, to be wrong, as they should expect in 3 out of 10 cases.

you shouldn't assume the suspected Blue's observation is a pro-moon shot or anti-Green argument.

Sure, agreed. But here again, not assuming it doesn't preclude me from concluding it.

When I choose to make an utterance, I am not only providing you with the utterance's propositional content. I am also providing you with the information entailed by the fact that I chose to utter it.

When you make inferences about my motives from that information, you might of course be mistaken. But that doesn't mean you shouldn't make such inferences.

The same goes for your hypothetical Blue.

Comment author: Douglas_Knight 26 October 2013 06:55:06PM 2 points [-]

You weren't there. You can't reconstruct what it was like to be there. But you can read his comment. It contains the word "tradeoff" four times. Can you suggest what disclaimers he should have used instead?

(but the comments responding to Eliezer seem pretty reasonable to me.)

Comment author: TheOtherDave 26 October 2013 07:30:43PM *  0 points [-]

Can you suggest what disclaimers he should have used instead?

Let's assume for the sake of comity that I can't.
What follows?

To address your broader question, though: it seems likely to me that there is no wording which reliably causes observers to believe that I'm genuinely just making a factual observation and that I'm not covertly implying any arguments, since I can't think of any way of preventing people who are covertly implying arguments from using the same wording, which will shortly thereafter cause clever observers to stop trusting that wording.

This certainly includes bald assertions like "Hey, guys, I'm genuinely just making a factual observation here and totally NOT covertly implying any arguments, OK?" which even unsophisticated deceivers know enough to use, but it also covers more sophisticated variations.

That said, it also seems likely to me that for any given audience there exists wording that will manipulate that audience into believing I'm genuinely just making a factual observation, and a sufficiently skilled manipulator can find that wording. I don't claim to be such a manipulator. (Of course, if I were, it would probably be in my best interests not to claim to be.)

Then again, such a manipulator could presumably do this even when that belief is false.

The approach I usually endorse in such cases is to not worry about it and concentrate on more generally behaving in a trustworthy way, counting on observant members of the community to recognize that and to consequently trust me to not be playing rhetorical games. (That's not to say I always succeed, nor that I never play rhetorical games.) In other words, I count on the cultivation of personal reputation over iterated trials.

Of course, deceivers of all stripes similarly count on the cultivation of personal reputation over iterated trials.

Expensive signaling helps here, of course, but isn't always an option.

Comment author: fubarobfusco 26 October 2013 05:46:15PM *  2 points [-]

But did they assume it?
Or did they conclude it based on inferences from Eliezer's comment and the broader context?

People often say "assume" when they mean "jump to a conclusion" or "invalidly or incorrectly infer". That seems to be what's meant here.

Comment author: TheOtherDave 26 October 2013 06:08:22PM *  1 point [-]

Agreed. But as I said, it's not clear to me that inferring the propositions under discussion is invalid or incorrect, so to the extent that "invalidly or incorrectly infer" is what's meant, I'm skeptical of the claim. Ditto for "jump to a conclusion" for the most common connotations of that phrase.

When I wrote the comment it seemed more charitable to give the claim the reading under which I agree with it, and then point out the more complicated reality of which it is a narrow slice, than to give the claim the reading under which I simply doubt that it's true. In retrospect, though, I'm not sure it was.

Either way, though, my main point is that inferring that someone is making a covert argument while seeking to maintain the social cover of just making a factual observation is not necessarily unjustified in cases like these.

Comment author: hyporational 27 October 2013 09:41:25AM *  0 points [-]

The more important question is whether people should state hostile inferences based on usually flimsy evidence. I think vocally pointing out intentions behind factual claims is a very effective way to discourage rational discussion and cause mindkill because the rate of false positives is so high. Manufacturing plausible deniability by just stating facts works precisely because deniability in such a case should be plausible to have any relevant discussion at all.

Comment author: TheOtherDave 27 October 2013 05:29:09PM 2 points [-]

I don't think I agree.

To take your comment as an example... on one level, it's a series of claims. "X is the more important question." "Y is an effective way to discourage rational discussion." "The rate of false positives in Y is very high." Etc. And I could respond to it on that level, discussing whether those claims are accurate or not. And that seems to be the kind of discussion you're encouraging.

Had you instead responded by saying "The average rainfall in Missouri is 3.5 inches per year" I could similarly discuss whether that claim is accurate or not.

But that would be an utterly bizarre response. Why would it be bizarre? Because I would have no idea what the intention behind citing that fact could possibly be. Your comment, by contrast, seems to have a fairly clear intention behind it, so it's not bizarre at all.

So far, I don't think I've said anything in the least bit controversial. (If you disagree with any of the above, probably best to pause here and resolve that disagreement before continuing.)

Continuing... so, OK. You have certain intentions in making the comment you made... call those intentions I1. I have inferred certain intentions on your part... call those I2. And, as above, were I to lack a plausible I2, I would be utterly bewildered by the whole conversation, as in the Missouri rainfall example... which I'm not.

Now... if I understand your view correctly, you believe that if I articulate I2 I will effectively discourage rational discussion and cause mindkill, because I'm likely to be mistaken... that is, I2 is not likely to equal I1. It's better, on your view, for me to continue holding I2 without articulating it.

Yes? Or have I misunderstood your view?

If I've understood your view correctly, I disagree with it completely.

Comment author: hyporational 28 October 2013 11:10:38AM 2 points [-]

I tried to focus on people attacking negative intentions/connotations. I was expressing myself poorly and my comment had a lot of hidden assumptions. My comment was not even wrong. Your response is clear and helpful, thanks. I'm not sure I can improve upon my original comment, but here are some thoughts on the matter:

I think it would be useful to categorize intentions/connotations further. I see no problem in articulating hostile intentions behind a comment rudely stating that someone is fat for example. I think the reason for this is that the connotations of that kind of a statement are common knowledge and high probability. If you disapprovingly point out such connotations, nobody can claim that you're trying to sneak them into the other person's comment to dismiss it unfairly.

Then again I think there's this category of statements where it seems to me that connotations can vary wildly. Even if you have a good reason to think that some particular connotation is the most probable, it's just one option among many. Here the rate of false positives will be high. I feel in such situations attacking one connotation over another seems like a dishonest way to dismiss a statement.

I acknowledge that situational factors complicate matters further.

Comment author: TheOtherDave 28 October 2013 01:46:05PM 1 point [-]

Even if you have a good reason to think that some particular connotation is the most probable, it's just one option among many. Here the rate of false positives will be high.

Sure, that's true. We might disagree about how high my confidence in a particular most-probable-interpretation of the motives behind a particular statement can legitimately be, but it's clear that for some statements that confidence will be fairly low.

I feel in such situations attacking one connotation over another seems like a dishonest way to dismiss a statement.

Do you have any sense of why you feel this way?

For example, do you believe it is a dishonest way to dismiss a statement? Or just that it seems that way? (Seems that way to whom?)

Comment author: somervta 27 October 2013 12:32:41AM 2 points [-]

typo:

where Blue and Green remain important remain important political identities

Comment author: Vladimir_Nesov 26 October 2013 06:00:21PM *  2 points [-]

You shouldn't assume the suspected Blue's observation is a pro-moon shot or anti-Green argument.

("Shouldn't assume", taken literally, sounds like an endorsement of forming beliefs for reasons other than their correctness. I think I agree with the intended point, but I'd put it somewhat differently.)

Rather than focusing on the factual question of whether a remark is motivated by identity signaling, it's sufficient to disapprove of participation in any moves that are clearly motivated by signaling or engage with the question of whether other moves are motivated by signaling (when that's not clear). It's the same principle as with not engaging with attention-seeking trolling: there is no "assuming" that someone isn't acting in bad faith, but engagement in that mode is discouraged.

Comment author: Vaniver 27 October 2013 09:22:19PM 3 points [-]

Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation

Eliezer's response there always struck me as odd. Was he making a simple factual observation? The comment in question reads to me as the summary of an argument that regulation is necessary. Eliezer doesn't endorse that argument (he doesn't think that regulation should be necessary), but he's making the claim "society will require regulation because of argument X." Unsurprisingly, people respond to X as an argument for regulation, but a cursory glance doesn't show me any comments where people attribute to Eliezer an endorsement of that argument.

Comment author: falenas108 28 October 2013 03:51:00AM 3 points [-]

That isn't how it read to me. He says, "Some poor, honest, well-intentioned, stupid mother of 5 kids will shop at a banned store and buy a Snake's Sulfuric Acid Drink for her arthritis and die, leaving her orphaned children to cry on national television. Afterward the banned stores will be immediately closed down, based on that single case, regardless of their net benefit."

That sounds to me like he's saying this will happen regardless, and it still might be a net plus but it's something proponents will have to address.

Comment author: Vaniver 28 October 2013 02:39:23PM *  0 points [-]

That sounds to me like he's saying this will happen regardless

The bolded section means that Eliezer doesn't endorse the argument, not that it is not an argument.

it still might be a net plus but it's something proponents will have to address.

Why would the proponents have to address it, unless it was an argument against their position? Otherwise it would be a non sequitur.

[Edit] To be clear, I agree that policy debates should not be one-sided. But the way I interpret that is that there are both positive and negative consequences for any policy, and the positive consequences are arguments for and the negative consequences are arguments against.

Comment author: falenas108 28 October 2013 07:17:05PM -1 points [-]

Okay, seems like it was mostly a semantics disagreement then.

Though I am a bit caught up on your saying Eliezer doesn't endorse the argument. Using your terminology, I think he does endorse the argument, meaning he thinks that's a legitimate point against having "banned stores." But, he also endorses other arguments for them, and to him, those weigh more.

Comment author: Vaniver 28 October 2013 07:58:39PM *  2 points [-]

I believe Eliezer endorses the decision principle "choose the option with largest net benefit," but predicts that democratic societies will operate under the decision principle "choose the option which can be best defended publicly."

That is, his comment as a whole makes three related points: first, a consequence of having stores where banned products are sold is that unintelligent customers will kill or seriously injure themselves with the products sold therein, second, this consequence is sad, and third, democratic societies are unwilling to allow consequences that are visibly that sad. For me to say he endorses the argument, I would require that he say or imply "and those societies are right," when I think he heavily implies that he understands but disagrees with their argument.

Comment author: ChristianKl 26 October 2013 11:23:00PM *  3 points [-]

He thinks Stuart is factually wrong and that the global warming question isn't a good predictor. Fortunately, that's something we can test.

Before we run the numbers, what's your confidence interval for the IQ difference in the LessWrong poll of 2012 between the people who believe that p(global warming)>0.9 and the people with p(global warming)<0.5?

If you just correlate p values with IQ, what's your confidence interval for the resulting correlation coefficient?

Since IQ might not be the same thing as rationality, how well do you think the global warming answer will predict whether someone gives rational answers to the CFAR questions?
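The proposed test is straightforward to sketch in code. The rows below are made-up stand-ins for census responses (IQ, reported p(global warming)); the real analysis would load the actual survey data, but the group comparison and correlation would be computed the same way:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical (IQ, p(global warming)) pairs, purely for illustration.
responses = [
    (135, 0.95), (128, 0.40), (142, 0.97), (120, 0.30),
    (138, 0.92), (125, 0.85), (131, 0.45), (140, 0.99),
]

# IQ gap between the p > 0.9 group and the p < 0.5 group.
high = [iq for iq, p in responses if p > 0.9]
low = [iq for iq, p in responses if p < 0.5]
iq_gap = sum(high) / len(high) - sum(low) / len(low)

# Correlation between IQ and the raw p values.
r = pearson([iq for iq, _ in responses], [p for _, p in responses])
print(iq_gap, r)
```

With real census data one would also want a confidence interval on r (e.g. via bootstrap), since the point estimate alone doesn't settle the bet being proposed here.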

Comment author: Nornagest 27 October 2013 03:07:33AM *  3 points [-]

I'll bite.

My 90% confidence interval for the correlation between IQ and p(global warming) is orgjrra ebhtuyl artngvir mreb cbvag bar naq cbfvgvir mreb cbvag gjb, jvgu n crnx pybfr gb mreb. V'q or yrff fhecevfrq gb frr n pbeeryngvba orgjrra c(tybony jnezvat) naq gur PSNE dhrfgvbaf (gubhtu V'q whfg hfr 5-7, nf gur bguref frrz gb unir zber cbgragvny pbasbhaqref), ohg V'q fgvyy rkcrpg dhvgr n ybj bar.

(ROT13ed to avoid anchoring future readers.)
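For readers unfamiliar with the device: ROT13 rotates each letter 13 places, so applying it twice recovers the original text. Python's standard codecs module includes it as a text transform, so decoding the estimate above takes one call:

```python
import codecs

def rot13(text: str) -> str:
    """Apply ROT13; the transform is its own inverse."""
    return codecs.encode(text, "rot13")

print(rot13("Uryyb, jbeyq!"))  # decodes to "Hello, world!"
```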

Comment author: Lumifer 27 October 2013 03:52:11AM 3 points [-]

You need to specify the "global warming" part better. "The global climate has warmed since the beginning of the 20th century" is a different claim from "Human emissions of CO2 caused the warming of the global climate", which is a different claim from "The current warming is unprecedented in known history", which is a different claim from "We need to reduce CO2 emissions".

Comment author: ChristianKl 27 October 2013 05:02:54AM *  2 points [-]

In this post I intend to reference the LessWrong census. In it, the question was worded:

P(Warming)
What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions?

Hopefully we will have another census this year. If you think there is a better question to get at the hard core of the global warming issue, I also invite you to make a prediction about how such a question would correlate. The question could be added to the next poll, and we could then see how its results correlate.

Comment author: Lumifer 27 October 2013 05:18:09AM 3 points [-]

The way the question was worded it asked two different questions (maybe even three) and I'm not sure the respondents treated it as a logical expression along the lines of is.true((A OR B) AND C)...

I don't know what you mean by the "hard core of the global warming issue".

Comment author: army1987 27 October 2013 04:31:05PM 2 points [-]

The way the question was worded it asked two different questions (maybe even three) and I'm not sure the respondents treated it as a logical expression along the lines of is.true((A OR B) AND C)...

That would probably correlate with rationality too.

Comment author: ChristianKl 28 October 2013 02:22:19PM 1 point [-]

I'm not responsible for the question being worded the way it is. I don't think the wording is optimal.

If you think the question gets interpreted by different people in different ways, propose a better question to measure global warming beliefs for the next census.

Comment author: Lumifer 28 October 2013 03:51:07PM 3 points [-]

propose a better question to measure global warming beliefs

The first question is what is it that you want to measure.

Comment author: JoshuaZ 28 October 2013 02:28:42PM 1 point [-]

Whether you are responsible or not is distinct from whether it will do a good job measuring what you want it to measure.

Comment author: ChristianKl 28 October 2013 04:07:19PM 0 points [-]

Whether you are responsible or not is distinct from whether it will do a good job measuring what you want it to measure.

Responsibility changes the meaning of the word 'good'. If I design something to measure Y I have a higher standard for 'good' than when I search for an already existing measure of Y.

If people who read the post say, "I don't think IQ correlates with the answer to that question," that's an answer that moves the discussion forward.

If they say, "I think IQ correlates with the answer to a differently worded question about global warming," that also moves the discussion forward. We can test that hypothesis in the next census.

If you don't like IQ as a proxy, we had the CFAR questions in the last census to measure rationality. They're also not perfect, and we can think up a better metric for the next census.

Comment author: roystgnr 27 October 2013 04:40:48PM 1 point [-]

For that matter, "I estimate human emissions of CO2 caused 49% of the warming of the global climate" is a different question from "I estimate human emissions of CO2 caused 51% of the warming of the global climate". Is it really a fantastic expression of rationality to say that people making the first claim are basically creationists, but people making the second claim are upstanding rationalists whose numbers help to demonstrate how much popular support I have?

If you try to lump people into discrete categories over a continuously varying question then you are inherently introducing ambiguity; the first step toward setting up a Worst Argument in the World is the creation of overly-broad categories, after all. If you demand that Turquoise people self-identify as Blues or Greens, you shouldn't be surprised when you get suspected of having motives other than the pure refinement of rational thought.

Comment author: [deleted] 27 October 2013 05:26:30PM 1 point [-]

Well, you can probably say that anyone who thinks humans are entirely responsible, or not responsible at all, is irrational on that question.

Comment author: eli_sennesh 26 November 2013 07:13:32PM -2 points [-]

Scientists have already found p(null hypothesis) < 0.05 on AGW. It's time we stopped haggling over probability estimates for nuanced versions of possible positions and accepted the proposition supported by statistically significant evidence and a consensus of experts behind that evidence.

(Side note: Yes, I know I just blasphemed against the Great God Bayes by invoking frequentist statistics. Too bad.)

Comment author: katydee 27 November 2013 06:53:48PM *  2 points [-]

You know, when I first read this post I thought "You have some interesting points, but this is obviously just a clever argument that's going to be used to justify posting stupid bullshit to LessWrong," so I downvoted. I didn't make that remark in public, though, because it would be rude and maybe I would end up being wrong.

Now that I see what this post is being used to justify, it seems clear that my prediction was correct.

Comment author: ChrisHallquist 28 November 2013 03:41:52AM -1 points [-]

Why are people upvoting a comment that doesn't actually object to anything in this post, and just refers to another post I wrote as "stupid bullshit"?

Comment author: katydee 28 November 2013 10:45:14PM *  1 point [-]

Why are people upvoting a comment that doesn't actually object to anything in this post, and just refers to another post I wrote as "stupid bullshit"?

Perhaps they agree with me?

I honestly didn't want to post any of this, and indeed withheld my objection at first, because I (like Eliezer) think "that's just a clever argument" can quickly become a fully general debating tactic. But it's striking to me how quickly my prediction was, in my view, proven correct, so I thought it was worth drawing attention to.

Comment author: Douglas_Knight 26 October 2013 04:44:28PM 0 points [-]

If you read Stuart's original post, it's clear

I hate this rhetoric. I did read Stuart's post.

If you'd read Vaniver's comment, you'd agree that Stuart was acting in bad faith. So you didn't read it, but then you responded to it! It is extremely rude to respond to a comment you haven't read.

Comment author: ChrisHallquist 26 October 2013 04:51:58PM 5 points [-]

Do you have an actual argument that there was ambiguity in Stuart's post?

Comment author: Vaniver 27 October 2013 09:49:33PM 5 points [-]

How about Stuart_Armstrong's response to satt's comment? It looks to me like Stuart agrees there was ambiguity there.

(And, to be clear, by "ambiguity there" I am using ambiguity as a one-place word by choosing the maximum of the two-place ambiguity among the actual readers of the post. Stuart has no ambiguity about what Stuart meant, but Steven does, and so the one-place ambiguity is Steven's ambiguity.)

Comment author: Douglas_Knight 27 October 2013 02:04:20PM *  3 points [-]

If you'd read my comment, it's clear that I am objecting to your rhetoric. Only you can prevent the jump to the assumption that I have a dog in the fight.

Comment author: BaconServ 27 October 2013 07:52:29AM 0 points [-]

Christ, is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.

I've voted on the article, read a few comments, cast a few votes, and made a few replies myself. I'm precommitting to never returning to this thread and going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.

Damn, I hope nobody replies to my comments...

Comment author: [deleted] 27 October 2013 01:36:11AM 0 points [-]

the broader society isn't going to stop spontaneously labeling various straightforward empirical questions as Blue or Green issues. If you want to stop your mind from getting killed by whatever issues other people have decided are political, the only way is to control how you react to that.

This is true and embodies the quality of Tsuyoku naritai.

Comment author: JoshuaZ 28 October 2013 02:06:56AM 1 point [-]

and no, that's no just a stereotype)

Typo- "no" should be "not".

Comment author: ChrisHallquist 28 October 2013 04:19:09AM -1 points [-]

Thanks. Fixed.

Comment author: army1987 27 October 2013 10:34:26AM 1 point [-]

Typo: “a rejection of and” should be “a rejection of a”.

Comment author: ChristianKl 27 October 2013 12:25:01AM 1 point [-]

If you read Stuart's original post, it's clear this comment is reading ambiguity into the post where none exists. You could argue that Stuart was a little careless in switching between talking about AGW and global warming simpliciter, but I think his meaning is clear: he thinks rejection of AGW is irrational, which entails that he thinks the stronger "no warming for any reason" claim is irrational. And there's no justification whatsoever for suggesting Stuart's post could be read as saying, "if your estimate of future warming is only 50% of the estimate I prefer you're irrational"—or as taking a position on ethical theories, for that matter.

No. It's not about the binary choice of whether or not global warming is real. Someone who thinks that there's a great amount of uncertainty in the science of global warming wouldn't be labeled irrational by steven0461's criteria, as long as he admits to a certain median estimate.

There are various rational arguments that indicate that climate scientists are overconfident in their own knowledge. But even if you believe them, the conclusion that the median estimate of climate sensitivity to doubled CO2 is lower than 2 degrees Celsius is still irrational.

Comment author: eli_sennesh 26 November 2013 07:15:40PM -2 points [-]

There are various rational arguments that indicate that climate scientists are overconfident in their own knowledge.

Really? It's always possible to make plausible-sounding, rational-sounding arguments for almost any proposition, especially when you can formulate them as conditional probabilities. It's much harder to actually gather the statistics to back those up. I'd like to see these, please.

Comment author: ChristianKl 26 November 2013 09:27:34PM 2 points [-]

As a start, there are plenty of studies showing that most humans are overconfident most of the time.

Long-Term Capital Management sank because of what its principals considered to be a 10-sigma event. I would guess that climate models quite often use normal distributions as proxies for things that behave like normal distributions in 99% of cases.

A second issue is that climate scientists generally validate their models through "hindcasts". They think making accurate hindcasts is nearly the same as making accurate forecasts.
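The "10 sigma" point can be made concrete. Under a normal distribution, a 10-sigma deviation is essentially impossible; under a fat-tailed distribution it is merely uncommon. The sketch below uses a standard Cauchy distribution purely as an illustrative fat-tailed stand-in, not as a model of anything LTCM or climate scientists actually used:

```python
from math import erfc, sqrt, atan, pi

def normal_tail(x: float) -> float:
    """P(Z > x) for a standard normal variable, via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2))

def cauchy_tail(x: float) -> float:
    """P(X > x) for a standard Cauchy variable (fat tails, no finite variance)."""
    return 0.5 - atan(x) / pi

# A "10 sigma" event: astronomically rare if the world is normal,
# a few-percent event if the tails are fat.
print(normal_tail(10.0))  # ~7.6e-24
print(cauchy_tail(10.0))  # ~0.032
```

The gap of about 21 orders of magnitude between the two tail probabilities is the sense in which "the distribution is normal" is a load-bearing assumption.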

Comment author: JoshuaZ 26 November 2013 09:57:53PM *  -1 points [-]

As a start, there are plenty of studies showing that most humans are overconfident most of the time.

Beware fully general counterarguments. In this case, the issue of overconfidence applies just as well to people who aren't professional climate scientists as to those who are, and then other cognitive biases, such as the Dunning-Kruger effect, start becoming relevant.

Comment author: ChristianKl 27 November 2013 03:23:31AM *  1 point [-]

Overconfidence shouldn't lead us to believe that p = 0.5. However, it would make sense to deduct a few percentage points from the result.

If a climate scientist tells you something is 0.99 likely to be true, maybe it makes sense to treat the event as 0.95 likely to be true.

You don't need to fully understand how something works to know that someone doesn't have 0.99 certainty for a claim.

Comment author: JoshuaZ 27 November 2013 03:46:39AM 0 points [-]

Ok. That seems like a reasonable argument. So how much of a reduction is warranted may be up in the air then. There's also a serious denotative v. connotative issue here, since one needs to carefully distinguish the actual statement "Climate scientists are likely overconfident, just as almost everyone is" from all the statements made doubting climate science, anthropogenic global warming, etc. If you are only talking about a drop from .99 to .95 (or even from say .99 to .9), that isn't going to impact policy considerations much.

Comment author: ChristianKl 27 November 2013 04:01:27AM 0 points [-]

If you are only talking about a drop from .99 to .95 (or even from say .99 to .9) that isn't going to impact policy considerations much.

I think it matters when it comes to geoengineering policy making. If the policy community thinks that climate scientists are really good at predicting climate, I think there's a good chance that they will sooner or later go for geoengineering.

If we want to stay alive in the next century it would be good if policy makers can distinguish events with 0.9, 0.99 and 0.999 certainty.

Even a 0.001 chance that a given asteroid will extinguish humanity is too high. It's valuable to keep in mind that small chances happen from time to time and that you have to do scenario planning that integrates them.

Comment author: JoshuaZ 27 November 2013 04:28:03AM *  0 points [-]

Sure, but right now, almost no one is talking about geoengineering as a serious solution. The policy focus right now is much more on carbon dioxide production reduction. So in the context of where these discussions are occurring, these differences will matter. Right now, I'd focus much more on getting policy makers to be able to reliably distinguish something like 0.9 from something like 0.1. In this particular issue, even that is apparently difficult. Getting an ability to appreciate an extremely rough estimate is a much higher policy priority.

Comment author: ChristianKl 27 November 2013 01:07:54PM 2 points [-]

Sure, but right now, almost no one is talking about geoengineering as a serious solution.

That's a very poor perspective when you care about existential risk. Memes have effects 10 or 20 years down the road.

It's bad to say things that are clearly false, like claiming that the evidence for climate change is comparable to that for evolution. Evolution being true is something with much better evidence than p=0.999.

The point of LessWrong isn't to focus on ideas with short-term considerations. It's rather to focus on finding methods for thinking rationally about issues, and to let a lot of people in their twenties learn those methods. Then, when those smart people are in positions of authority in their thirties or forties, you get a payoff.

If scientists lie to the world to get policy makers to make good short-term policy decisions, that's expensive over the long term. Scientists shouldn't orient themselves toward short-term decision making but should stick with the truth.

Comment author: Lumifer 26 November 2013 07:28:11PM *  -1 points [-]

I'd like to see these, please.

Go, start reading

http://wattsupwiththat.com/
http://climateaudit.org/

Comment author: AspiringRationalist 26 October 2013 04:16:24PM 1 point [-]

If you still don't find any of this odd, think of the "skeptic" groups that freely ufologists or psychics or whatever

Is that statement missing a word?

Comment author: ChrisHallquist 26 October 2013 04:34:33PM 0 points [-]

Yup. Fixed.