What are your contrarian views?
As per a recent comment, this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.
Comments (806)
Dualism is a coherent theory of mind and the only tenable one in light of our current scientific knowledge.
Do you mean that, without strong evidence that we don't have, we should assume dualism, or that we have strong evidence for dualism?
If it's the second one, can you give me an example of such a piece of evidence?
Which dualism?
I upvoted because I disagree (strongly) with the second conjunct, but I do agree that certain varieties of dualism are coherent, and even attractive, theories of mind.
There is no territory, it's maps all the way down.
"The territory" is just whatever exists. It may well be an infinite series of entities, each more refined than the last. It's still a territory.
If there is no territory, what is a map?
There are no maps, it's reality all the way up.
That sounds awfully like social constructionism.
Can you unpack this? At the moment it seems nonsensical, in a "throwing together random words and hoping people read profound insights into it" way.
Sure. Have you actually seen "the territory"? Of course not. There are plenty of unexplained observations out there. We assume that these come from some underlying "reality" which generates them. And it's a fair assumption. It works well in many cases. But it is still an assumption, a model. To quote Brienne Strohl on noticing:
To most people the map/territory picture is just such a given. I'm suggesting that it's only a hypothesis. It gives way when making a map changes the territory (hello, QM). It is also unnecessary, because the useful essence of the map/territory model is that "the future is partially predictable", in the sense that it is possible to take our past experiences, meditate on them for a while, figure out what to expect in the future, and see our expectations at least partially confirmed. There is no need to attach the notion of some objective reality causing this predictability, though admittedly it does feel good to pretend that we stand on solid ground, and not on some nebulous figment of imagination.
If you extract this essence, that future experiences are predictable from the past ones, and that we can shape our future experiences based on the knowledge of the past, it is enough to do science (which is, unsurprisingly, designing, testing and refining models). There is no indication that this model building will one day be exhausted. In fact, there is plenty of evidence to the contrary. It has happened many times throughout human history that we thought that our knowledge was nearly complete, there was nothing more to discover, except for one or two small things here and there. And then those small things became gateways to more surprising observations.
Yet we persist in thinking that there are ultimate laws of the universe, and that some day we might discover them all. I posit that there are no such laws, and we will continue digging deeper and deeper, without ever reaching the bottom... because there is no bottom.
I think this post should win the thread for blowing the most minds. (I'll upvote even though I think your position is tenable, since I only assign it 20% probability or so.)
[Please read the OP before voting. Special voting rules apply.]
The replication initiative (the push to replicate the majority of scientific studies) is reasonably likely to do more harm than good. Most of the points raised by Jason Mitchell in The Emptiness of Failed Replications are correct.
Imagine a physicist arguing that replication has no place in physics, because it can damage the careers of physicists whose experiments failed to replicate! Yet that's precisely the argument that the article makes about social psychology.
I read this trying to keep as open a mind as possible, and I think there is SOME value to SOME of what he said (i.e. no two experiments are ever exactly the same, and replicators are often motivated to prove the first study wrong)... But one thing that really set me off is that he genuinely treats a study that doesn't prove its hypothesis as a failure, without even acknowledging that, IN PRINCIPLE, such a study has proven the hypothesis wrong, which is valuable knowledge all the same.
Which is so jarring with what I consider the very basis of science that I find it difficult to take Mitchell seriously.
Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.
Open borders is a terrible idea and could possibly lead to the collapse of civilization as we know it.
EDIT: I should clarify:
Whether you want open borders and whether you want the immigration status quo are different questions. I happen to be against both, but it is perfectly consistent for somebody to be against open borders but be in favor of the current level of immigration. The claim is specifically about completely unrestricted migration as advocated by folks like Bryan Caplan. Please direct your upvotes/downvotes to the former claim, rather than the latter.
Current levels of immigration are also terrible, and will significantly speed up the collapse of the Western world.
Citation required.
I'm not clear on whether it's actually a good idea, but if Bryan Caplan's arguments are the best available, it's definitely a horrible idea. He sidesteps all the potential problems without addressing them, or in some cases draws analogies that, when actually considered properly, indicate that it would be a bad idea.
I particularly like how he manages to switch between deontology and consequentialism in the same argument.
As a first approximation, people get what they deserve in life. Then add the random effects of luck.
Max L.
Why do Africans deserve so much less than Americans? Why did people in the past deserve so much less than current people? Why do people with poor parents deserve less than people with rich parents?
I count "the circumstances into which you are born" as luck. I'd guess it is the biggest component of luck, along with being struck by a disabling genetic condition or exposed to a pandemic. The first observation therefore has more salience within similar groups of people: for example, the people I hang out with or work with are roughly similar enough for desert to have more salience than luck.
But perhaps that means that birth-luck should be the first approximation, then desert, then additional luck.
Max L.
Can you give me an example of something that is neither desert nor luck?
Very nice question; better, in fact, than the statement to which you responded. Examples I have in mind:
- Personal-level injustice.
- Social injustice.
- How other people treat you.
But my primary point was whether the things for which we are personally responsible are a bigger or smaller influence than luck. That is, if I am guessing with little knowledge, I am going to guess desert before luck for most groups with which I'd be interacting.
(Also, I am thinking that variation in luck, when the fact of variation is predictable and bad luck can be insured against or mitigated, is desert, not luck.)
Particular applications might make it more clear. If you don't have a job in America, and you appear physically able to work, my first guess is that you are the biggest contributor to your unemployment. If you are unhealthy in America, and weren't born with it, my first approximation will be that you contributed mightily to your poor health. And so on.
Max L.
What ethical theory are you using for your definition of "deserve"?
It is a fine question, since the word "deserve" is the link between an observation and a judgment about the person. I don't think I need an answer to it to make the observation that most people here don't hold that view. Which is a good thing, because I don't think I have a satisfactory answer beyond rough moral intuition.
Max L.
Feminism is a good thing. Privilege is real. Scott Alexander is extremely uncharitable towards feminism over at SSC.
Yes, Yes, No. Still upvoting, because "Scott Alexander" and "uncharitable" in the same sentence does not compute.
I consider him a modern G.K. Chesterton. He's eloquent, intelligent, and wrong.
Do you mind telling me how you think he's being uncharitable? I agree mostly with your first two statements. (If you don't want to put it on this public forum because hot debated topic etc I'd appreciate it if you could PM; I won't take you down the 'let's argue feminism' rabbit-hole.)
(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)
Fortunately, LW is not an appropriate forum for argument on this subject, but for an example of an uncharitable post, see Social Justice and Words, Words, Words.
For a very quick example, see this Tumblr post. Mr. Alexander finds an example of a neoreactionary leader trying to be mean to a transgender woman inside the NRx sphere, and then shows the vast majority response of (non-vile) neoreactionaries to at least be less exclusionary than that, even though they have ideological issues with the diagnosis or treatment of gender dysphoria. Then he describes a feminist tumblr which develops increasingly misgendering and rude ways to describe disagreeing transgender men.
I don't know that this is actually /wrong/. All the actual facts are true, and if anything understate the relevant aspects -- I expect Ozy's understated the level of anti-transmale bigotry floating around the 'enlightened' side of Tumblr. I don't find NRx very persuasive, but there are certainly worse things that could be done than using it as a blunt "you must behave at least this well to ride" test. I don't know that feminism really needs external heroes: it's certainly a large enough group that it should be able to present internal speakers with strong and well-grounded beliefs. And I can certainly empathize with holding feminists to a higher standard than neoreactionaries hold themselves.
The problem is that it's not very charitable. Scott's the person who /came up/ with the term "Lizardman's Constant" to describe how a certain percentage of any population will give terrible answers to really obvious questions. He's a strong advocate of steelmanning opposing viewpoints, and he's written an article about the dangers of judging a group only by its worst members.
But he's looking at a viewpoint shown primarily in the <5% margin feminist tumblr, and comparing them to a circle of the more polite neoreactionaries (damning with faint praise as that might be, still significant), and, uh, I'm not sure that we should be surprised if the worst of the best said meaner things than the best of the worst.
I'm not sure he /needs/ to be charitable, again -- feminism should have its own internal speakers, I think mainstream modern feminism could use better critics than whoever's on Fox News next, so on -- but it's an understandable criticism.
((Upvoting the thread starter, but more because one and two are mu statements; either closed questions or not meaningful. Weakly agree on third.))
How would you define "privilege"?
Easier difficulty setting for your life in some context through no fault or merit of your own.
So would you describe someone tall as having "height privilege" because they're better at basketball?
I'd argue that height privilege (up to a point, typically around 6'6") is a real thing, having nothing to do with being good at sports. There is a noted experiment, which my google-fu is currently failing to turn up, in which participants were shown a video of an interview between a man and a woman. In one group, the man was standing on a footstool behind his podium, so that he appeared markedly taller than the woman. In the other group, the man was standing in a depression behind his podium, so that he appeared shorter. The content of the interview was identical.
Participants rated the man in the "taller" condition as more intelligent and more mature than the same man in the "shorter" condition. That's height privilege.
There's also a large established correlation between height and income, though not enough to completely rule out a potential common cause like "good genes" or childhood nutrition.
This is a good definition. In particular, "Anti-oppressionists use "privilege" to describe a set of advantages (or lack of disadvantages) enjoyed by a majority group, who are usually unaware of the privilege they possess. ... A privileged person is not necessarily prejudiced (sexist, racist, etc) as an individual, but may be part of a broader pattern of *-ism even though unaware of it."
No, this is not a motte.
Why the "majority group" qualifier? Privilege has been historically associated with minorities, like aristocracy.
Does it have to be a majority group? For example, does this compared with this count as an example of "black privilege"? Would you describe the fact that some people are smarter (or stronger) than others as "intelligence privilege" (or "strength privilege")?
Why focus only on specific majority groups, and thereby ignore things like men in domestic violence situations getting a lot less help from society than women?
Nearly everyone has some advantages and disadvantages. It's often not helpful to conflate that huge bag of advantages and disadvantages into a single variable.
That's in the bailey, because of "enjoyed by a majority group."
According to the 2013 LW survey, when asked their opinion of feminism on a scale from 1 (low) to 5 (high), respondents' mean response was 3.8, and social justice got a 3.6. So it seems that "feminism is a good thing" is actually not a contrarian view.
If I might speculate for a moment, it might be that LW is less feminist than most places, while still having an overall pro-feminist bias.
Like a few others, I agree with the first two but emphatically disagree with the last. And if you were right about it, I'd expect Ozy to have taken Scott to task about it, and him to have admitted to being somewhat wrong and updated on it.
EDIT: This has, in fact, happened.
See this tumblr post for an example of Ozy expressing dissatisfaction with Scott's lack of charity in his analysis of SJ (specifically in the "Words, Words, Words" post). My impression is that this is a fairly regular occurrence.
You might be right about him not having updated. If anything it seems that his updates on the earlier superweapons discussion have been reverted. I'm not sure I've seen anything comparably charitable from him on the subject since. I don't follow his thoughts on feminism particularly closely, so I could easily be wrong (and would be glad to find I'm wrong here).
OK, those things have indeed happened, to some degree. Above comment corrected.
I still don't understand what is uncharitable about the Wordsx3 post specifically. It accurately describes the behavior of a number of people I know (as in, have met in person and interacted with socially, in several cases extensively and in a friendly manner), and I have no reason to consider them weak examples of feminist advocacy and every reason to consider them typical (their demographics match the stereotype). I have carefully avoided ending up on the receiving end of it myself, because friends of mine have honestly challenged aspects of this kind of thing and been ostracized for their trouble.
There's something wrong with the first link (I guess you typed the URL on a smartphone autocorrecting keyboard or similar).
EDIT: I think this is the correct link.
Yeah, that happened when I edited a different part from my phone. Thanks, fixed.
Superintelligence is an incoherent concept. Intelligence explosion isn't possible.
How smart does a mind have to be to qualify as a "superintelligence"? It's pretty clear that intelligence can go a lot higher than current levels.
What do you predict would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer? What do you predict would happen if we selectively bred humans for intelligence for a couple million years? "Impractical" would be understandable, but I don't see how you can believe superintelligence is "incoherent".
As for "Intelligence explosion isn't possible", that's a lot more reasonable, e.g. see the entire AI foom debate.
Buying a lottery ticket every now and then is not irrational. Unless you have thoroughly optimized the conversion of every dollar you own into utility-yielding investments and expenses, the exposure to large positive tail risk netted by spending a few dollars on lottery tickets can still be rational.
Phrased another way, when you buy a lottery ticket you aren't buying an investment, you're buying a possibility that is not available otherwise.
If one lottery ticket is worthwhile, why not two? Are you assigning a nonlinear value to the probability of winning the lottery? That causes a number of problems.
At the risk of looking even more like an idiot: Buying one $1 lottery ticket earns you a tiny chance - 1 in 175,000,000 for the Powerball - of becoming absurdly wealthy. The Powerball jackpot gets as high as $590,500,000 pretax. NOT buying that one ticket gives you a chance of zero. So buying one ticket is "infinitely" better than buying no tickets. Buying more than one ticket, by comparison, doesn't make much of a difference.
I like to play with the following scenario. A LessWrong reader buys a lottery ticket. They almost certainly don't win. They have one dollar less to donate to MIRI and because they're not wealthy they may not have enough wealth to psychologically justify donating anything to MIRI anyway. However, in at least one worldline, somewhere, they win a half a billion dollars and maybe donate $100,000,000 to MIRI. So from a global humanity perspective, buying that lottery ticket made the difference between getting FAI built and not getting it built. The one dollar spent on the ticket, in comparison, would have had a totally negligible impact.
I fully realize that the number of universes (or whatever) where the LessWrong reader wins the lottery is so small that they would be "better off" keeping their dollar according to basic economics, but the marginal utility of one extra dollar is basically zero.
edit: Digging myself in even deeper, let me attempt to simplify the argument.
You want to buy a Widget. The difference in net utility, to you, between owning a Widget and not owning a Widget is 3^3^3^3 utilons. Widgets cost $100,000,000. You have no realistic means of getting $100,000,000 through your own efforts because you are stuck in a corporate drone job and you have lots of bills and a family relying on you. So the only way you have of ever getting a Widget is by spending negligible amounts of money buying "bad" investments like lottery tickets. It is trivial to show that buying a lottery ticket is rational in this scenario: (Tiny chance) x (Absurdly, unquantifiably vast utility) > (Certain chance) x ($1).
Replace Widget with FAI and the argument may feel more plausible.
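The inequality above can be checked with a quick sketch. The numbers below are hypothetical stand-ins for the "tiny chance" and the "absurdly vast utility" (I'm borrowing the Powerball odds mentioned upthread):

```python
# Hypothetical stand-ins: jackpot-like odds and an arbitrarily huge utility gap.
p_widget = 1 / 175_000_000   # the "tiny chance" of winning
u_widget = 1e20              # stand-in for the vast Widget utility
cost = 1.0                   # utility of the dollar spent

ev_buy = p_widget * u_widget - cost   # expected utility of buying one ticket
ev_skip = 0.0                         # expected utility of keeping the dollar

print(ev_buy > ev_skip)  # True: a sufficiently vast payoff swamps the $1 cost
```

Of course, the same arithmetic endorses a second ticket just as strongly, which is the standard objection raised in the replies.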
So your utility function is nonlinear with respect to probability. You don't use expected utility. That results in certain inconsistencies. This is discussed in the article on the Allais paradox, but I'll give a lottery example here.
Suppose I offer you a choice between paying one dollar for a one in a million chance of winning $500,000, and paying two dollars for a one in a million chance of winning $500,001 plus a one in two million chance of winning $500,000. You figure that what's basically a 0.00015% chance of winning vs. a 0.0001% chance isn't worth paying another dollar for, so you just pay the one dollar.
On the other hand, suppose I only offer you the first option, but once you see whether you've won, you get a second chance: one more dollar for a one in two million shot at $500,000. If you win the first draw, you don't really want another ticket, since it's not a big deal anymore. So you buy a ticket, and if you lose, you buy another. This results in a 0.0001% chance of ending up with $499,999, a 0.00005% chance of ending up with $499,998, and a 99.99985% chance of ending up with -$2. That is exactly the same set of probabilities as the second option before.
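The bookkeeping in the sequential scenario is easy to fumble, so here is a small sketch of it. The one-in-two-million odds on the second draw are my reading of the numbers in this exchange:

```python
from fractions import Fraction

# First draw: $1 for a 1-in-1,000,000 chance of $500,000.
p1, prize1 = Fraction(1, 1_000_000), 500_000
# Second draw, taken only after a loss: $1 for a 1-in-2,000,000
# chance of $500,000 (my reading of the odds used above).
p2, prize2 = Fraction(1, 2_000_000), 500_000

dist = {
    prize1 - 1: p1,               # win the first draw, having spent $1
    prize2 - 2: (1 - p1) * p2,    # lose, then win the second, having spent $2
    -2: (1 - p1) * (1 - p2),      # lose both draws
}

assert sum(dist.values()) == 1
print(dist[499_999])  # 1/1000000, i.e. the 0.0001% outcome
print(dist[499_998])  # just under 1 in 2 million, i.e. the 0.00005% outcome
```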
No it would not. Or at least, it's highly unlikely for you to know that.
Suppose MIRI's probability of success is increased by 50 percentage points if they get a 100 million dollar donation. This means that, if 100 million people all donate a dollar, their probability of success goes up by 50 percentage points. Each successive donation will change the probability by a different amount, but on average, each donation increases the chance of success by one in 200 million. Furthermore, it's expected that the earlier donations make a bigger difference, due to the law of diminishing returns. This means that donating one dollar improves MIRI's probability of success by more than one in 200 million, and is therefore better than a one in 100 million chance of donating 100 million dollars.
Even if MIRI does end up needing a minimum amount of money or something and becomes an exception to the law of diminishing returns, they know more about their financial situation, and since they're dealing with large amounts of money all at once, they can be more efficient about it. They can make a bet precisely tailored to their interests and with odds that are more fair.
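That marginal-returns argument can be illustrated with a toy model. The concave curve below is purely an assumption for illustration (p(x) = 0.5 * sqrt(x / 10^8), which reaches the +50-percentage-point figure at $100 million); under any concave curve of this kind, a sure $1 beats a 1-in-100-million shot at the full $100 million:

```python
import math

TOTAL = 100_000_000  # dollars at which the full +50pp boost is reached

def success_boost(dollars):
    """Hypothetical concave returns: steep at first, +0.5 at $100M."""
    return 0.5 * math.sqrt(min(dollars, TOTAL) / TOTAL)

direct = success_boost(1)                      # donate the dollar now
lottery = (1 / TOTAL) * success_boost(TOTAL)   # 1-in-100M shot at donating it all

print(direct > lottery)   # True
print(direct / lottery)   # ~10000: the early dollar matters far more
```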
You're looking at the (potential) benefits and ignoring the costs. The costs are not negligible: "Thirteen percent of US citizens play the lottery every week. The average household spends around $540 annually on lotteries and poor households spend considerably more than the average." (source).
Buying a second ticket doubles your chances, obviously.
For each timeline where you buy a lottery ticket there is one where you don't. Under MWI you don't make any choices -- you choose everything, always.
You've never been poor, have you? :-/
It is just as trivial to show that you should spend all your disposable income and maybe more on lottery tickets in this scenario.
There are ways to win a lottery without buying a ticket. For example, someone may buy you a ticket as a present, without your knowledge, which then wins.
No, it is much more likely that you'll win the lottery by buying tickets than by not buying tickets (assuming it's unlikely to be gifted a ticket), but the cost of being gifted a ticket is zero, which makes not buying tickets an "infinitely" better return on investment.
The dangers of UFAI are minimal.
Do you think that it is unlikely for a UFAI to be created, that if a UFAI is created it will not be dangerous, or both?
For many smart people, academia is one of the highest-value careers they could pursue.
Highest value for the person, for society, or both?
Also, by "high value" do you mean purely monetary or do you mean other benefits?
Society. For the second question, not quite sure what it would mean to provide monetary value to society, since money is how people trade for things within society rather than some extrinsic good.
Clarify "many"?
Utilitarianism is a moral abomination.
AI boxing will work.
EDIT: Used to be "AI boxing can work." My intent was to contradict the common LW positions that AI boxing is either (1) a logical impossibility, or (2) more difficult or more likely to fail than FAI.
"Can" is a very weak claim. With what probability will it work?
It would be of significant advantage to the world if most people started living on houseboats.
Fossil fuels will remain the dominant source of energy until we build something much smarter than ourselves. Efforts spent on alternative energy sources are enormously inefficient and mostly pointless.
Related claim: the average STEM-type person has no gut-level grasp of the quantity of energy consumed by the economy and this leads to popular utopian claims about alternative energy.
It isn't very hard to do a little digging here. http://en.wikipedia.org/wiki/Electricity_generation#mediaviewer/File:Annual_electricity_net_generation_in_the_world.svg
China's aggressive nuclear strategy seems reasonable.
Not exactly sure what you mean by "digging." I already comprehend the quantities of energy being consumed because of my education and experience in related fields, it's the average person who I think does not, since I hear them saying things about how a small increase in solar panel efficiency is going to completely and rapidly "cure us of our fossil fuel addiction."
Also, your figure only reflects electricity generation, not total energy consumption, which is a much higher figure. Non-hydrocarbon fuel sources for transportation are currently very fringe.
The truth is that the price of fossil fuels has always fluctuated, and will continue to fluctuate, in accord with simple supply-and-demand economics for a long time to come; the cheaper it gets to make energy via alternative methods, the cheaper fossil fuels will become in order to undercut those alternative sources.
I looked through the numbers and the trend line. I updated in your direction. Even nuclear can't make a big dent without true mass production of reactors, which almost certainly will not happen.
Having political beliefs is silly. Movements like neoreaction or libertarianism or whatever will succeed or fail mostly independently of whether their claims are true. Lies aren't threatened by the truth per se, they're threatened by more virulent lies and more virulent truths. Various political beliefs, while fascinating and perhaps true, are unimportant and worthless.
Arguing for or against various political beliefs functions mostly (1) to signal intelligence or allegiance or whatever, and (2) as mental masturbation, like playing Scrabble. "I want to improve politics" is just a thin veil that system 2 throws over system 1's urges to achieve (1) and (2).
If you actually think that improving politics is a productive thing to do, your best bet is probably something like "ensure more salt gets iodized so people will be smarter", or "build an FAI to govern us". But those options don't sound nearly as fun as writing political screeds.
(While "politics is the mind-killer" is LW canon, "believing political things is stupid" seems less widely-held.)
While I mostly agree, trying to devise political systems that would encourage a smarter populace (e.g. SSC's Graduation Speech, with its guaranteed universal income and abolition of public schools) seems like a potentially worthwhile enterprise.
I agree that forming political beliefs is not a productive use of my time in the same way that earning a salary to donate to SCI to cure people of parasites is. I disagree that this makes it silly. The reasons you gave may not be the most noble of reasons, but they are still perfectly valid.
There probably exists - or has existed at some time in the past - at least one entity best described as a deity.
Frequentist statistics are at least as appropriate as, if not more appropriate than, Bayesian statistics for approaching most problems.
Reductionism as a cognitive strategy has proven useful in a number of scientific and technical disciplines. However, reductionism as a metaphysical thesis (as presented in this post) is wrong. Verging on incoherent, even. I'm specifically talking about the claim that in reality "there is only the most basic level".
The notion of freedom is incoherent. People would be better off abandoning the pursuit of it.
What do you think of Free Will Is as Real as Baseball?
Freedom meaning what?
Free choice? I don't believe in that.
The right to make any choice which doesn't impair the choices of others? I strongly agree with that.
Causal connections should not be part of our most fundamental model of the Universe. Everything that is useful about causal narratives is a consequence of the Second Law of Thermodynamics, which is irrelevant when we're talking about microscopic interactions. Extrapolating our macroscopic fascination with causation into the microscopic realm has actually impeded the exploration of promising possibilities in fundamental physics.
That would explain why it took so long for someone to discover timeless physics.
That sentence has the same air of paradox about it as "Many solipsists believe ...". (Perhaps deliberately?)
Roko's Basilisk legitimately demonstrates a problem with LW. "Rationality" that leads people to believe such absurd ideas is messed up, and 1) the presence of a significant number of people psychologically affected by the basilisk and 2) the fact that Eliezer accepts that basilisk-like ideas can be dangerous are signs that there is something wrong with the rationality practiced here.
Are you sure you have pinpointed the right culprit? Why exactly "rationality"? "Zooming in" and "zooming out" would lead to potentially different conclusions. E.g. G. K. Chesterton would probably blame atheism[1]. Zooming out even more, someone immersed in Eastern thought might even blame Western thought in general. Despite receiving a vastly disproportionate share of media attention, it was such a small part of LessWrong history and thought (by the way, is anything that any LWer ever came up with a part of LW thought?) that it seems wrong to put the blame on LessWrong or rationality in general.
Furthermore, which would you say is better: the ability to formulate an absurd idea and then find its flaws (or, for e.g. mathematical ideas, exactly under what strange conditions they hold), or the inability to formulate absurd ideas at all? The ability to come up with various absurd ideas is an unavoidable side effect of having an imagination. What is important is not to start believing an idea immediately, because in the history of any really new and outlandish idea there is, at the very beginning, an important asymmetry (which arises because coming up with any complicated idea takes time): the idea itself has already been invented, but the good counterarguments do not yet exist (this is similar to a new species being introduced to an island where it has no natural predators, which are introduced only later). The same applies to the moment when a new outlandish idea is introduced to your mind: if you haven't heard any counterarguments by that moment, you must nevertheless exercise caution. Especially if the new idea is elegant and thought-provoking, whereas all the counterarguments are comparatively ugly and complicated and thus might feel unsatisfactory even after you have heard them.
Was there really a significant number of people or is this just, well, an urban legend? The fact that some people are affected is not particularly surprising - it seems to be consistent with the existence of e.g. OCD. Again, one must remember that not everyone thinks the same way and the common thing between people affected might have been something other than acquaintance with LW and rationality which you seem to imply (correct me if my impression was wrong).
I think it is better to give Eliezer a chance to explain why he did what he did. My understanding is that whenever someone introduces a person to a new variant of this concept without explaining the proper counterarguments, it takes time for that person to acquaint themselves with them. In specific instances that might lead to unnecessary worrying and potentially even some actions (most people would regard the idea as too outlandish and too weird whether or not it was correct, and compartmentalize everything even if it was). A clever devil's advocate could potentially come up with more and more elaborate versions of the idea which take more and more time to take down. As you can see, no form of the idea needs to be correct for this gap to expand.
Personally, I understand (and share) the appeal of various interesting speculative ideas, and the frustration that someone thinks they are supposedly bad for some people, which goes against my instincts and the highly valuable norm of a free marketplace of ideas.
At this point in time, however, the basilisk seems to be brought up more often to dismiss all of LW rather than only this specific idea, so it is no wonder that many people get defensive even if they do not believe in it.
All of this does not touch the question whether the whole situation was handled the way it should have been handled.
[1] Although the source says that famous quote is misattributed. Huh. I remember reading a similar idea in one of "Father Brown" short stories. I'll have to check it.
(Excuse my English; feel free to correct mistakes.)
The quotes indicate that I'm not blaming rationality, I'm blaming something that's called rationality. You're replying as if I'm blaming real rationality, which I'm not.
Censoring substantial references to the basilisk was partly done in the name of protecting the people affected. This requires that there be a significant number of people, not just that there be the normal number of people who can be affected by any unusual idea.
His explanations have varied. The explanation you linked to is fairly innocuous; it implies that he is only banning discussion because people get harmed when thinking about it. Someone else linked a screengrab of Eliezer's original comment which implies that he banned it because it can make it easier for superintelligences to acausally blackmail us, which is very different from the one you linked.
Does "rolling my eyes and reading something else" count as "psychologically affected"?
My contrarian idea: Roko's basilisk is no big deal, but intolerance of making, admitting, or accepting mistakes is cultish as hell.
[Please read the OP before voting. Special voting rules apply.]
An AI which followed humanity's CEV would make most people on this site dramatically less happy.
Do you mean that, if shown the results, we would decide that we don't like humanity's CEV, or that humanity desires that we be unhappy?
What Nancy said, so 1, and instrumentally but not terminally 2.
Or possibly that if the majority of people got what they want, most people at LW would be incidentally made unhappy.
My intuition is in agreement with this, but I would love a more worked out description of your own thoughts (in part because my own thoughts aren't clear).
Most of humanity hates deviants and I don't think there's anything incoherent about that value.
I don't think you could get enough of humanity to agree on what should be considered "deviant" to make that value cohere.
What cross-section of humanity are you familiar with?
[opening post special voting rules yadda yadda]
Biological hominids descended from modern humans will be the keystone species of biomes loosely descended from farms, pastures, and cities, optimized for symbiosis and matter/energy flow between organisms, covering large fractions of the Earth's land, for tens of millions of years. In special cases there may be sub-biomes in which non-biological energy is converted into biomass, and it is possible that human-keystone ocean-based biomes might appear as well. Living things will continue to be the driving force of non-geological activity on Earth, with hominid-driven symbiosis (of which agriculture is an inefficient first draft) producing interesting new patterns, materials, and ecosystems.
Upvoted because it is much too specific (too many conjunctions) to be true. Even if many of them sound plausible.
Bah, I'm always doing that. I have clusters of related suspicions which I put down in one big chunk rather than as separate possibly independent points.
If I had to extract a main point it would be the first bit, biological hominids descended from modern humans existing tens of millions of years from now with their most obvious alterations to the world being an extension of what we have begun with agriculture.
Meta-comment: I'm not sure that structure or voting scheme is particularly useful. The hope would be to allow conversation about contrarian viewpoints which are actually worth investigating. I'm not sure how you separate the wheat from the chaff, but that should be the goal...
Yes. Contrarian position: This thread would be better if we upvoted contrarian positions that are interesting or caused updates, not those that we disagree with.
A word of advice: Perhaps anyone posting a comment here with the intention of voicing a contrarian opinion and getting upvotes for disagreement should indicate the fact explicitly in their comment. Otherwise I predict that the upvote/downvote signal will be severely corrupted by people voting "normally". (Especially if these comments produce discussion -- if A posts something you strongly disagree with and B posts a very good and clearly-explained reason for disagreeing, what are you supposed to do? I suggest the right thing here is to upvote both A and B, but it's liable to be easy to get confused...)
[EDITED to add: 1. For the avoidance of doubt, of course the above is not intended to be a controversial opinion and if you vote on it you should do so according to the normal conventions, not the special ones governing this discussion. 2. It is possible to edit your own comments; if you read the above and think it's sensible, but have already posted a contrarian opinion here, you can fix it.]
The universe we perceive is probably a simulation of a more complex Universe. In breaking with the simulation hypothesis, however, the simulation is not originated by humans. Instead, our existence is simply an emergent property of the physics (and stochasticity) of the simulation.
English has a pronoun that can be used for either gender and, as an accident of history not some hidden agenda, said pronoun in English is "he/him/&c."
Edited: VAuroch is the best kind of correct on "neuter" pronouns. Changed, though that might make a view less controversial than I thought (all but 2 readers agree, really?) even less so :)
I consider this an incoherent claim. "A neuter pronoun", inherently, is one that can be applied to individuals regardless of gender (actual or grammatical). That's what people want when they wish English had a neuter pronoun. 'He/him/his' is not such a pronoun. "They/them/their" is.
Nope. "Of all the men and women here, one will prove his worth" is grammatical and does not imply a man IMO. I'm not defining myself right of course, just clarifying why my contrarian claim is coherent.
That was historically true, but many women and nonbinary people disagree with the statement that it is still true. And it was never neuter; it used to be the case that using male pronouns for an unspecified person was grammatically valid.
[Please read the OP before voting. Special voting rules apply.]
Artificial intelligences are overrated as a threat, and institutional intelligences are underrated.
Overrated in LessWrong, or globally? I upvoted assuming the former, although I agree that institutional intelligences are an underrated threat.
Social problems are nearly impossible to solve. The methods we have developed in the hard sciences and engineering are insufficient to solve them.
Would you disagree with the claim that several significant social problems have in fact been solved over the history of human civilization, at least in parts of the world? Or are you saying that those were the low-hanging fruit and the social problems that remain are nearly impossible to solve?
What would you say about the progress that has been made towards satisfying the Millennium Development Goals?
Looking at the list, I would say that to the extent progress has been made towards them (and to the extent they're worthy goals, the "sustainable development" one is trying to solve the wrong problem and the "gender equality" one is just incoherent) it is incidental to the efforts of the UN.
There are some I hold:
These are 10 different propositions. Fortunately I disagree with most of them so can upvote the whole bag with a clear conscience, but it would be better for this if you separated them out.
I agree with this meta-comment. Should I downvote it?
See my earlier comment on this.
Care to explain this one?
[Contrarian thread special voting rules]
I bite the bullet on the repugnant conclusion
[Contrarian thread special voting rules]
I would not want to be cryonically frozen and resurrected as my sense of who I am is tied into social factors that would be lost
Would you be willing to freeze if your family did? Your friends and family? Your whole country? Or even if everyone in the world was preserved, would you expect the structure of society post-resurrection be different enough that you would refuse preservation?
I'm not sure about the friends and family examples; it would depend on what I thought that future society would be like. If cryonics were the norm I probably wouldn't opt out, because I would have a reasonable expectation that, if resurrection were successful, there would be other people in the same situation, so there would be infrastructure to support us.
The social factors I'm thinking of include the skills, qualifications and experience that I have developed in my life, which would likely be irrelevant in a world that can resurrect me. At best I would be a historical curiosity with nothing to contribute.
Developing a rationalist identity is harmful. Promoting an "-ism" or group affiliation with the label "rational" is harmful.
Meta
I think LW is already too biased towards contrarian ideas - we don't need to encourage them more with threads like this.
Treated as a "contrarian opinion" and upvoted.
I think this thread is for opinions that are contrarian relative to LW, and not to the mainstream.
e.g. my opinion on open borders is something that a great majority of people share but is contrarian here, shown by the fact that as of the time of writing it is currently tied for highest-voted in the thread.
[META]
Previous incarnations of this idea: Closet survey #1, The Irrationality Game (More, II, III)
[Please read the OP before voting. Special voting rules apply.]
The SF Bay Area is a lousy place to live.
Max L.
[Please read the OP before voting. Special voting rules apply.]
Politically, the traditional left is broadly correct.
Correct meaning what? I'm interpreting "the traditional left" as a value system instead of a set of statements about the world.
Correct meaning that we would prefer the outcomes of their policy suggestions to the outcomes of other policies, or I guess generically that their values are an effective mechanism for generating good policies.
"Traditional" left meaning what? Communism? Socialism? Democrats?
Traditional as in not the radical left or any post-neocon positions. Socialism. Approximately the position of the leftmost of the two biggest political parties in a typical western-european country.
In the typical Western-European country, the leftmost of the two parties has abandoned Socialism and instead espouses the politics of Social Democracy or the Third Way.
[Please read the OP before voting. Special voting rules apply.]
American intellectual discourse, including within the LW community, is informed to a significant extent by folk beliefs existing in the culture at large. One of these folk beliefs is an emphasis on individualism -- both methodological and prescriptive. This is harmful: methodological individualism ignores the existence of shared cultures and coordination mechanisms that can be meaningfully abstracted across groups of individuals, and prescriptive individualism deprives those who take it seriously of community, identity, and ritual, all of which are basic human needs.
[meta]
Is there some way to encourage coherence in people's stated views? For some of the posts in this thread I can't tell whether I agree or disagree because I can't understand what the view is. I feel an urge to downvote such posts, although this could easily be a bad idea, since extreme contrarian views will probably seem less coherent. On the other hand, if I can't even understand what is being claimed in the first place then it's hard for me to get much benefit out of it.
That's to be expected when it comes to contrarian views. A lot of positions are not widely held because they are complicated to understand or require certain background knowledge.
If you gave me a bunch of academic math problems, I wouldn't understand them. In math it's fairly easy to say: hey, math is complicated, it's okay that I don't know enough about the topic to understand the claim. But the same applies in other areas. Understanding what other people think is often hard when they differ substantially from yourself.
This thread is mixed up. A top-level meta comment (like in the Irrationality Game threads) is missing, for example.
[Please read the OP before voting. Special voting rules apply.]
There is nothing morally wrong about eating meat, and vegetarianism/veganism aren't morally superior to meat-eating.
That looks like a mainstream position, not contrarian.
It's contrarian among LWers, which is what the OP asked for.
Is that so? I know there are some vocal vegetarians on LW, I am not sure that makes them the local mainstream.
I think there are more LW members who are meat-eating and feel hypocritical/gulity about it than there are actual vegetarians.
Looking at the 2013 poll:
I can't speak to the feeling of guilt, but vegetarians are a small minority here.
My current understanding of U.S. laws on cryonics is that you have to be legally pronounced brain-dead before you can be frozen. I think that defeats the entire purpose of cryonics; I can't trust attempts to reverse-engineer my brain if I'm already brain-dead; that is, if my brain cells are already damaged beyond resuscitation. I don't live in the U.S. anyway, but sometimes I consider moving there just to be close to cryonics facilities. However, as long as I can't freeze my intact brain, I can't trust the procedure.
Brain-dead does not necessarily refer to damaged brain cells; it often refers to electrical activity. As people have been resuscitated after the cessation of brain activity (i.e. humans are cold-bootable) without loss of personality, it seems reasonable to still give cryonics a go.
[Please read the OP before voting. Special voting rules apply.]
Moral realism is true.
[Please read the OP before voting. Special voting rules apply.]
You can expect to have about as much success effectively and systematically teaching rationality as you could in effectively and systematically teaching wisdom. Attempts for a systematic rationality curriculum will end up as cargo cultism and hollow ingroup signaling at worst and heuristics and biases research literature scholarship at best. Once you know someone's SAT score, knowing whether they participated in rationality training will give very little additional predictive power on whether they will win at life.
I'd like to hear a more substantive argument if you've got one. Do you think there are few general-purpose life skills (e.g. those purportedly taught in Getting Things Done, How to Win Friends and Influence People, etc.)? What's your best evidence for this?
[Please read the OP before voting. Special voting rules apply.]
The humanities are not only a useful way of knowing about the world - properly interfaced, they ought to be able to significantly speed up science.
(I have a large interval for how controversial this is, so pardon me if you think it's not.)
Do you mean humanities in the abstract or the people currently occupying humanities departments?
This seems pretty similar to the irrationality game. That's not necessarily a bad thing, but personally I would try the following formula next time (perhaps this should be a regular thread?):
Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.
Ask people to avoid upvoting views they already agree with. This is to prevent the thread from becoming an echo chamber of edgy "contrarian" views that are in fact pretty widespread already.
Ask people to vote up only those comments that cause them to update or change their mind on some topic. Increased belief accuracy is what we want; let's reward that.
Ask people to downvote spam and trolling only. Through this restriction on the use of downvotes, we lessen the anticipated social punishment for sharing an unpopular view that turns out to be incorrect (which is important counterfactually).
Encourage people to make contrarian factual statements rather than contrarian value statements. If we believe different things about the world, we have a better chance of having a productive discussion than if we value different things in the world.
Not sure if these rules should apply to top-level comments only or every comment in the thread. Another interesting question: should playing devil's advocate be allowed, i.e. presenting novel arguments for unpopular positions you don't actually agree with, and in under what circumstances (are disclaimers required, etc.)
You could think of my proposed rules as being about halfway between irrationality game and a normal LW open thread. Perhaps by doing binary search, we can figure out what the optimal degree to facilitate contrarianism is, and even make every Nth open thread a "contrarian open thread" that operates under those rules.
Another interesting way to do contrarian threads might be to pick particular views that seem popular on Less Wrong and try to think of the best arguments we can for why they might be incorrect. Kind of like a collective hypothetical apostasy. The advantage of this is that we generate potentially valuable contrarian positions no one is holding yet.
This has the problem that beliefs with a large inferential distance won't get stated.
The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.
I sense this opinion is not that marginal here, but it does go against the established orthodoxy: I'm pro-specks.
[Please read the OP before voting. Special voting rules apply.]
The study and analysis of human movement is very underfunded. There is a lot of research into obtaining static information, such as DNA sequences or X-rays, but very little into obtaining dynamic information about how humans move.
I agree with this, so I'm telling you instead of upvoting.
Except for the purpose of making CGI actors' movements look realistic.
I think raising the sanity waterline is the most important thing we can do, and we do too little of it because our discussions tend to happen amongst ourselves, i.e. with people who are far from that waterline.
Any attempt to educate people, including the attempt to educate them about rationality, should focus on teens, or where possible on children, in order to create maximum impact. HPMOR does that to some degree, but Less Wrong usually presupposes cognitive skills that the very people who'd benefit most from rationality do not possess. It is very much in-group discussion. If "refining the art of human rationality" is our goal, we should be doing a lot more outreach and a lot more production of very accessible rationality materials. Simplified versions of the sequences, with more pictures and more happiness. CC licensed leaflets and posters. Classroom materials. Videos (compare the SciShow video on Bayes' Theorem), because that's how many curious young minds get their extracurricular knowledge these days.
In fact, if we crowdfunded somebody with education materials production experience to do that (or better yet, crowdfund two or three and let them compete for the next round), I'd contribute significantly.
Is this supposed to be a contrarian view on LW? If it is, I am going to cry.
Unless we reach a lot of young people, we risk that in 30-40 years the "rationalist movement" will be mostly a group of old people spending most of their time complaining about how things were better when they were young. And the change will come so gradually we may not even notice it.
I don't think anybody has explicitly spoken out against it, but it seems to me everyone acts quite opposed to the idea.
I think videos are the wrong medium. Videos have the problem of making people think they understand something when they don't. People learn all the right buzzwords, but that doesn't mean that they actually are more rational.
Kaj Sotala, for example, is designing a game for his master's thesis that's intended to teach Bayesian methods. I think such a game would be much more valuable than a video that explains Bayes' theorem.
We have PredictionBook and the Credence game as tools to teach people to be more rational. They aren't yet at a quality level where the average person will use them. Focusing more energy on updating those concepts and making them work better is more valuable than producing videos.
CFAR also develops teaching materials. A core feature of CFAR is that it actually focuses on producing quality instead of just producing videos and hoping that those videos will have an impact. I know there is someone in Germany who teaches a high school class based on CFAR-inspired material.
Seems pretty sensible to me. I'm not that worried about a 30-40 year old "rationalist" movement, however... in the same way the ideas on LW appealed to me as a teen, it seems likely that they will end up appealing to other teens, if they end up hearing about them (stuff like e.g. HPMOR makes it likely that they will).
[Please read the OP before voting. Special voting rules apply.]
The necessary components of AGI are quite simple, and have already been worked out in most cases. All that is required is a small amount of integrative work to build the first UFAI.
[Contrarian thread, special voting rules apply]
Engaging in political processes (and learning how to do so) is a useful thing, and is consistently underrated by the LW consensus.
Just a reminder, the local meme "politics is the mind killer" is an injunction not against discussing politics, but against using political examples in a non-political argument.
Agreed. But there is also a generally negative attitude towards politics.
[Please read the OP before voting. Special voting rules apply.]
Somewhere between 1950 and 1970 too many people started studying physics, and now the community of physicists has entered a self-sustaining state where writing about other people's work is valued much, much more than forming ideas. Many modern theories (string theory, AdS/CFT correspondence, renormalisation of QFT) are hard to explain because they do not consist of an idea backed by a mathematical framework but solely of this mathematical framework.
Changing minds is usually impossible. People will only be shifted on things they didn't feel confident about in the first place. Changes in confidence are only weakly influenced by system 2 reasoning.
Dollars and utilons are not meaningfully comparable.
Edited to restate: Dollars (or any physical, countable object) cannot stand in for utilons.
Utilons do not exist. They are abstractions defined out of idealized, coherent preferences. To the extent that they are meaningful, though, their whole point is that anything one might have a preference over can be quantified in utilons--including dollars.
Can you explain what is wrong with the following comparison?
The value of a dollar in utilons is equal to the increase in expected utilons brought by being given another dollar.
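As a toy illustration of that proposed definition (the logarithmic utility function below is purely an assumption for the sketch, not anything either commenter commits to):

```python
import math

def utilons(wealth):
    # Hypothetical utility function, assumed logarithmic in wealth
    # purely for illustration.
    return math.log(wealth)

def dollar_value_in_utilons(wealth):
    # The proposed definition: the increase in utilons brought by
    # being given another dollar, at the current wealth level.
    return utilons(wealth + 1) - utilons(wealth)

# Under this (assumed) utility function the exchange rate is not
# constant: a marginal dollar is worth more utilons to someone
# holding $100 than to someone holding $100,000.
assert dollar_value_in_utilons(100) > dollar_value_in_utilons(100_000)
```

Note that even under this sketch the dollar-to-utilon exchange rate depends on who holds the dollar, which is one way of cashing out the worry that dollars cannot simply stand in for utilons.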
Friendliness by mathematical proof about exact trustworthiness of future computing principles is misguided.
[ Please read the OP before voting. Special voting rules apply.]
MWI is wrong, and relational QM is right.
Physicalism is wrong, because of the mind body problem, and other considerations, and dual aspect neutral monism is right.
STEM types are too quick to reject ethical Objectivism. Moreover moral subjectivism is horribly wrong. Don't know what the right answer is, but it could be some kind of Kantianism or Contractarianism.
Arguing to win is good, or to be precise, it largely coincides with truth-seeking.
There is no kind of smart that makes you uniformly good at everything.
Even though philosophy has no established body of facts, it is possible to be bad at philosophy and make mistakes in it. Scientists who try to solve longstanding philosophical problems in their lunch breaks end up making fools of themselves. Philosophy is not broken science.
A physicalistically respectable form of free will is defensible.
Bayes is oversold. Quantifying what you haven't first understood is pointless. Being a good rationalist at the day-to-day level has a lot more to do with noticing your own biases, and with emotional maturity, than with mental arithmetic.
MIRI hasn't made a strong case for AI dangers.
The standard theism/atheism debate is stale, broken, and pointless: people who can't understand metaphysics arguing with people who believe it but can't articulate it.
All epistemological positions boil down to fundamental, unprovable intuitions. Empiricism doesn't escape, because it is based on the intuition that if you can see something, it is really there. STEM types have an overly optimistic view of their epistemology, because they are accelerated out of worrying about fundamental issues.
Rationality is more than one thing.
Too many statements in a single post.
I'm upvoting top-level comments which I think are in the spirit of this post but I personally disagree with (in the case of comments with several sentences, if I disagree with their conjunction), downvoting ones I don't think are in the spirit of this post (e.g. spam, trolling, views which are clearly not contrarian either on LW nor in the mainstream), and leaving alone ones which are in the spirit of this post but I already agree with. Is that right?
What about comments I'm undecided about? I'm upvoting them if I consider them less likely than my model of the average LWer does and leaving them alone otherwise. Is that right?
I interpret the intention as "upvote serious ones you disagree with, downvote trolls, ignore those you agree with". In other words, you are not judging what you think LW finds contrarian, you are reporting whether you agree with the views posters perceive as contrarian, not penalizing people for misjudging what is contrarian.
Hopefully this thread is a useful tool for figuring out which views are the most out of the LW mainstream, but still are taken seriously by the community. 10+ upvotes would probably be in the ballpark.
[Please read the OP before voting. Special voting rules apply.]
Improving the typical human's emotional state — e.g. increasing compassion and reducing anxiety — is at least as significant to mitigating existential risks as improving the typical human's rationality.
The same is true for unusually intelligent and capable humans.
For that matter, unusually intelligent and capable humans who hate or fear most of humanity, or simply don't care about others, are unusually likely to break the world.
(Of course, there are cases where failures of rationality and failures of compassion coincide — the fundamental attribution error, for instance. It seems to me that attacking these problems from both System 1 and System 2 will be more effective than either approach alone.)
Once you actually take human nature into account (especially the things that cause us to feel happiness, pride, regret, and empathy), most seemingly-irrational human behavior actually turns out to be quite rational.
Conscious thought processes are often deficient in comparison to subconscious ones, both in terms of speed and in terms of amount of information they can integrate together to make decisions.
From 1 and 2 it follows that most attempts at trying to consciously improve 'rational' behavior will end up falling short or backfiring.
[Please read the OP before voting. Special voting rules apply.]
Utilitarianism relies on so many levels of abstraction as to be practically useless in most situations.
This looks like two posts I saw quite a while ago where contrarian posts were also intended to be up-voted. I can't seem to find those posts (searching for contrarian doesn't match anything and searching for 'vote' is obviously useless). Nonetheless those posts urged to mark each contrarian comment to clearly indicate the opposite voting semantics to avoid unsuspecting readers being misled by the votes. Maybe someone can provide the links?
Now that there's the karma toll, using downvotes to mean anything other than ‘I don't think this comment or post belongs here’ is a bad idea. Also, now we have poll syntax.
I'd want to vote comments in this thread according to whether they're interesting or boring, regardless of whether I agree with them.
I really wish there was a way to suspend the toll for irrationality game posts.
I only remember this one:
http://lesswrong.com/r/discussion/lw/jvg/irrationality_game_iii/
Our society is ruled by a Narrative which has no basis in reality and is essentially religious in character. Every component of the Narrative is at best unjustified by actual evidence, and at worst absurd on the face of it. Moreover, most leading public intellectuals never seriously question the Narrative because to do so is to be expelled from their positions of prestige. The only people who can really poke holes in the Narrative are people like Peter Thiel and Nassim Taleb, whose positions of wealth and prestige are independently guaranteed.
The lesson is that in the modern world, if you want to be a philosopher, you should first become a billionaire. Then and only then will you have the independence necessary to pursue truth.
What exactly does that Narrative say?
Why would he answer you without first being a billionaire?
Anti-contrarianism.
Finding better ways of structuring knowledge is more important than faster knowledge transfer through devices such as high-throughput brain-computer interfaces.
It's a travesty that, outside of computer programming languages, few new languages have been invented in the last two decades.
[Please read the OP before voting. Special voting rules apply.]
The truth of a statement depends on the context in which the statement is made.
The full meaning of a statement depends on the context in which it is made.
I think this is uncontroversial if taken as referring to the following two things:
and controversial but not startlingly so if taken as referring to the following:
Are you intending to state something more than those?
Just to clarify, you mean that there is a context in which "0 = 1" is a true statement, which is not tantamount to redefining "0", "=", or "1"? That is, in some alternate universe, "0 = 1" is consistent with the axioms of Peano arithmetic?
The United States prison system is a tragedy on par with, or exceeding, the horror of the Soviet gulags. In my opinion the only legitimate reason for incarcerating people is to prevent crime. The USA currently has 7 times the OECD average number of prisoners, yet crime rates similar to the OECD average. 6/7 of the US penal system population is a little over 2 million people. If we are unnecessarily incarcerating anywhere close to 2 million people right now, then the USA is a morally hellish country.
Note: less than half of the inmates in the USA are there on drug-related charges. It is very close to 50% federally, but less at the state level. Immediately pardoning everyone incarcerated primarily on drug charges still gets us to 3.5 times the OECD average.
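The figures in this comment can be sanity-checked with rough arithmetic (the total prisoner count below is inferred from the comment's own numbers, not an official statistic):

```python
# Rough sanity check of the numbers above. The total is inferred from
# the claim that 6/7 of the US prison population is "a little over
# 2 million"; it is an assumption, not an official figure.
us_prisoners = 2_350_000   # assumed total US prison population
oecd_multiple = 7          # US rate as a multiple of the OECD average

# Prisoners in excess of what the OECD-average rate would predict:
excess = us_prisoners * (oecd_multiple - 1) / oecd_multiple
assert excess > 2_000_000  # "a little over 2 million" is consistent

# If close to half of inmates are held on drug charges, pardoning
# them all roughly halves the multiple: 7x becomes 3.5x.
assert oecd_multiple * 0.5 == 3.5
```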
Is your claim that they're in prison for crimes they didn't commit, or that we should let more crimes go unpunished?
I'm not the OP, but I'll throw a quote into this thread:
So which crimes would you take off the books and what percent of prisoners would that remove?
We can start with the drug war, things like civil forfeiture, and go on from there. You might be interested in this book.
The problems with the US criminal justice system go much deeper than just the abundance of laws, of course.
Civil forfeiture doesn't fill prisons.
The problem with having too many felonies is not that prisons get filled with people being punished for silly things; it's that the people who do get punished for silly things tend to correlate with the people actively opposing the current administration.
This seems close to the (liberal) mainstream. Why do you think it is contrarian on LW?
I do not think most people consider this a problem on a par with the Soviet Gulag. Though possibly I am wrong.
The problem with the Soviet Gulag wasn't so much its size, but rather the whole system it was part of and things which got you sent to it.