What are your contrarian views?
As per a recent comment, this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.
Comments (806)
The United States prison system is a tragedy on par with, or exceeding, the horror of the Soviet gulags. In my opinion the only legitimate reason for incarcerating people is to prevent crime. The USA currently has 7 times the OECD average number of prisoners and crime rates similar to the OECD average. 6/7 of the US penal system population is a little over 2 million people. If we are unnecessarily incarcerating anywhere close to 2 million people right now, then the USA is a morally hellish country.
Note: less than half of the inmates in the USA are there for drug-related charges. It is very close to 50% federally, but less at the state level. Immediately pardoning everyone on drug charges still only gets us to about 3.5 times the OECD average.
Is your claim that they're in prison for crimes they didn't commit, or that we should let more crimes go unpunished?
False dichotomy: it's about sentence length, e.g. three strikes.
So if we reduced sentences what effect do you think that would have on crime rates? Remember three strikes was passed in response to crime rates being too high.
Drastically increasing sentences didn't drastically reduce crime, so...
Comparable countries have lower crime rates and lower prison populations, so they must be doing something right.
You don't have to keep moving the big lever up and down: you can get Smart on Crime.
Well, the crime did fall. Whether it was due to increased sentences or something else is still being debated.
They also have fewer people from populations with high predisposition to violence (and yes, I mean blacks).
The last was disappointingly predictable.
This seems close to the (liberal) mainstream. Why do you think it is contrarian on LW?
I do not think most people consider this a problem on a par with the Soviet Gulag. Though possibly I am wrong.
Bitcoin and a few different altcoins can all coexist in the future and each have significant value, each fulfilling different functions based on their technical details.
Coherent Extrapolated Volition is a bad way of approaching friendliness.
Once you actually take human nature into account (especially the things that cause us to feel happiness, pride, regret, empathy), most seemingly-irrational human behavior actually turns out to be quite rational.
Conscious thought processes are often deficient in comparison to subconscious ones, both in terms of speed and in terms of amount of information they can integrate together to make decisions.
From 1 and 2 it follows that most attempts at trying to consciously improve 'rational' behavior will end up falling short or backfiring.
I agree, which probably makes it contrarian.
History isn't over in any Fukuyamian sense; in fact the turmoil of the twenty-first century will dwarf the twentieth. A US-centered empire will likely take shape by century's end.
I will elaborate if requested.
[ Please read the OP before voting. Special voting rules apply.]
MWI is wrong, and relational QM is right.
Physicalism is wrong, because of the mind body problem, and other considerations, and dual aspect neutral monism is right.
STEM types are too quick to reject ethical Objectivism. Moreover moral subjectivism is horribly wrong. Don't know what the right answer is, but it could be some kind of Kantianism or Contractarianism.
Arguing to win is good, or to be precise, it largely coincides with truth seeking,
There is no kind of smart that makes you uniformly good at everything.
Even though philosophy has no established body of facts, it is possible to be bad at philosophy and make mistakes in it. Scientists who try to solve longstanding philosophical problems in their lunch breaks end up making fools of themselves. Philosophy is not broken science.
A physicalistically respectable form of free will is defensible.
Bayes is oversold. Quantifying what you haven't first understood is pointless. Being a good rationalist at the day-to-day level has more to do with noticing your own biases and with emotional maturity than with mental arithmetic.
MIRI hasn't made a strong case for AI dangers.
The standard theism/atheism debate is stale, broken, and pointless: people who can't understand metaphysics arguing with people who believe it but can't articulate it.
All epistemological positions boil down to fundamental, unprovable intuitions. Empiricism doesn't escape, because it is based on the intuition that if you can see something, it is really there. STEM types have an overly optimistic view of their own epistemology, because they are acculturated out of worrying about fundamental issues.
Rationality is more than one thing.
There are so many problems with this post I wish I could vote several times.
One example: how can you claim both "A physicalistically respectable form of free will is defensible" and "Physicalism is wrong?"
Easily. The wrongness of physicalism doesn't imply the wrongness of everything that is merely compatible with it.
Too many statements in a single post.
[Please read the OP before voting. Special voting rules apply.]
Homeownership is not a good idea for most people.
Please elaborate.
The largest avoidable source of pain and boredom in the life of a typical western citizen is their commute. The sane response to this problem is to live as close to your place of employment as is at all practical. Valuing the time spent commuting at the same rate as your hourly earnings, the monthly penalty for living any significant distance at all from your job can get quite absurd for professionals just in financial terms, and the human cost is greater, because commuting sucks up the time you actually can dispose of as you wish.
Home ownership increases the costs of moving residence dramatically compared to renting, and is thus not a good idea unless you have a job which you anticipate keeping for a far greater period of time than is typical in modern society. I.e., do you have tenure or the effective equivalent? Then buying over renting makes sense. If you don't, all buying does is make it hurt a lot more to move when you get a new job.
Provably-secure computing is undervalued as a mechanism for guaranteeing Friendliness from an AI.
[Please read the OP before voting. Special voting rules apply.]
Utilitarianism relies on so many levels of abstraction as to be practically useless in most situations.
I've always had problems with MWI, but it's just a gut feeling. I don't have the necessary specialized knowledge to be able to make a decent argument for or against it. I do concede it one advantage: it's a Copernican explanation, and so far Copernican explanations have a perfect record of having been right every time. Other than that, I find it irritating, most probably because it's the laziest plot device in science-fiction.
You can't solve AI friendliness in a vacuum. To build a friendly AI, you have to simultaneously work on the AI and the code of ethics it should use, because they are interdependent. Until you know how the AI models reality most effectively you can't know if your code of ethics uses atoms that make sense to the AI. You can try to always prioritize the ethics aspects and not make the AI any smarter until you have to do so, but you can't first make sure that you have an infallible code of ethics and only start building the AI afterwards.
[Please read the OP before voting. Special voting rules apply.]
The truth of a statement depends on the context in which the statement is made.
I think this is uncontroversial if taken as referring to the following two things:
and controversial but not startlingly so if taken as referring to the following:
Are you intending to state something more than those?
There are some people who believe that there is something called objective reality and that you can check whether a statement is true in objective reality.
I say that a statement might be true in context A but false in context B.
I don't think you answered my question. (Perhaps because you think it's meaningless or embodies false presuppositions or something.)
Aside from the facts that (1) the same utterance can mean different things in different contexts, (2) indexical terms can refer differently in different contexts, and (3) different values and preferences may be implicit in different contexts, do you think there are further instances in which the same statement may have different truth values in different contexts?
(I think the boundary between #1 and "real" differences in truth value is rather fuzzy, which I concede might make my question unanswerable.)
Some concrete examples may be useful. The following seem like examples where one can avoid 1,2,3. Are they ones where you think the truth value might be context-dependent, and if so could you briefly explain what sort of context differences would change the truth value?
The fact that you claim to get 7 digits of accuracy by multiplying two 4-digit numbers is very questionable. If I went by my physics textbook, 1234 times 4321 = 5332000 would be the preferred answer, and 1234 times 4321 = 5332114 would be wrong, as the number falsely gained 3 additional digits of accuracy.
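The significant-figures point can be sketched in a few lines (the `round_sig` helper name is my own, purely for illustration):

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to the given number of significant digits."""
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

exact = 1234 * 4321         # 5332114 exactly, but the inputs carry only 4 digits
print(round_sig(exact, 4))  # 5332000
```

The exact product is well-defined; the objection is only about how many of its digits are meaningful given the precision of the inputs.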
A more exotic issue is whether "times" is left- or right-associative. The Python PEP on matrix multiplication (PEP 465) is quite interesting; it goes through edge cases such as whether matrix multiplication should be right- or left-associative.
Red is actually a quite nice example. Does it mean #FF0000? If so, the one that my monitor displays? The one that my printer prints? Or is red not a property of an object but a property of the light, meaning light with a certain wavelength? In that case, if I light the room a certain way, the colors of objects change. If it's a property of the object, what about when the object emits red light but doesn't reflect it? Alternatively, red could be something that triggers the color receptors of humans in a specific way. In that case small DNA changes in the person who perceives red slightly alter what red means. But "human red" is even more complex, because the brain does complex postprocessing after the color receptors have given a certain output.
If red means #FF0000 then is #EE0000 also red or is it obviously not red because it's not #FF0000? What do you do when someone with design experience and who therefore has many names for colors comes along and says that freshly spilled human blood is crimson rather than red? If we look up the color crimson you will find that Indiana University has IU crimson and the University of Kansas has KU crimson. Different values for crimson make it hard to decide whether or not the blood is actually colored crimson.
Depending on how you define red mixing it with green and blue might give you white or it might give you black.
I used to naively think that I could calculate the difference between two colors by summing the per-channel differences of their hex values. There is even a W3C recommendation defining the distance of colors for website design that way. It turns out you actually need a more complex formula, and I'm still not sure whether the one the folks on ux.stackexchange gave me is correct for human color perception. Of course you need to have a concept of distance if you want to say that red is #FF0000 ± X.
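As a sketch, the naive per-channel distance described here is trivially easy to compute, which is exactly why it is tempting; a perceptually meaningful comparison would instead convert to a space like CIE Lab and use a formula such as CIEDE2000. The helper functions below are my own illustrative names:

```python
def hex_to_rgb(h):
    """Parse '#RRGGBB' into an (r, g, b) tuple of ints."""
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def naive_color_distance(a, b):
    """Sum of per-channel differences: simple, but not perceptually uniform."""
    return sum(abs(x - y) for x, y in zip(hex_to_rgb(a), hex_to_rgb(b)))

print(naive_color_distance('#FF0000', '#EE0000'))  # 17
```

Whether a distance of 17 means "obviously the same red" or "noticeably different" is precisely the context-dependence at issue.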
I also recently had a disagreement on LW about what colors mean, when I used red to mean whatever my monitor shows me for red/#FF0000, since my monitor might not be correctly calibrated.
You might naively think that the day after September 2 is always September 3. That turns out not to be true: there is also a case where a September 14 follows a September 2. Some people think that a minute always has 60 seconds, but officially it can sometimes have 61. It gets worse: you don't know how many leap seconds will be introduced in the next ten years, since each one is announced only 6 months in advance. That means it's practically impossible to build a clock today that will tell the time accurately down to the second in ten years. If you look closely at statements, things usually get really messy.
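The September 1752 case shows up in software too: Python's `datetime` module uses the proleptic Gregorian calendar, so its answer disagrees with the historical British calendar, which skipped straight from September 2 to September 14 that year:

```python
from datetime import date, timedelta

# Python extends the Gregorian calendar backwards in time (proleptic), so it
# reports September 3 as the day after September 2, 1752 -- even though in the
# British Empire the next calendar day was actually September 14.
print(date(1752, 9, 2) + timedelta(days=1))  # 1752-09-03
```

So even "what is the next day?" has a truth value that depends on which calendar convention the context supplies.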
The US Air Force shot down a US helicopter in Iraq partly because they didn't consider helicopters to be aircraft. Most of the time you can get away with making vague statements for practical purposes, but sometimes a change in context changes the truth value of a statement, and then you are screwed.
Multiplication: so this looks like you're again referring to meanings being context-dependent (in this case the meaning of "= 5332114"). So far as I can see, associativity has nothing whatever to do with the point at issue here and I don't understand why you bring it up; what am I missing?
Redness: yeah, again in some contexts "red" might be taken to mean some very specific colour; and yes, colour is a really complicated business, though most of that complexity seems to me to have as little to do with the point at issue as associativity has to do with the question of what 1234x4321 is.
So: It appears to me that what you mean by saying that statements' truth values are context-dependent is that (1) their meanings are context-dependent and (2) people are often less than perfectly precise and their statements apply to cases they hadn't considered. All of which is true, but none of which seems terribly controversial. So, sorry, no upvote for contrarianism from me on this occasion :-).
[Please read the OP before voting. Special voting rules apply.]
Politically, the traditional left is broadly correct.
I downvoted you because I mostly agree - depending on how broadly you mean broadly. I suspect this is a not uncommon position here, and I would not even be surprised if it were a plurality position.
That's fine. In some recent threads I've taken what I felt was a mainstream if leftist position, written (IMO) reasonable, positive arguments - and been downvoted for it, to the extent that I'm entertaining the hypothesis that LW is full of libertarians who are strongly opposed to such views. Confirming that out one way or the other is useful information.
I've had the same feeling. I suspect there are loud reactionary and libertarian minorities and a large number of quiet liberal people.
I also have the general impression that in the past few months there has been an uptick of uncharitable tinman-attacks on progressivism by libertarians in the LW comment threads. Curiously, there seems to be less overt hostility between reactionaries and progressives, even though they're much further apart than libertarians and progressives (although this might be because the more hostile Nrx were more likely to exit after the creation of Moreright).
Our society is ruled by a Narrative which has no basis in reality and is essentially religious in character. Every component of the Narrative is at best unjustified by actual evidence, and at worst absurd on the face of it. Moreover, most leading public intellectuals never seriously question the Narrative because to do so is to be expelled from their positions of prestige. The only people who can really poke holes in the Narrative are people like Peter Thiel and Nassim Taleb, whose positions of wealth and prestige are independently guaranteed.
The lesson is that in the modern world, if you want to be a philosopher, you should first become a billionaire. Then and only then will you have the independence necessary to pursue truth.
What possible evidence can you have for the existence of a great truth which is by definition not available to you?
It's available to me, I just can't talk about it with other people.
Anti-contrarianism.
[Contrarian thread, special voting rules apply]
Engaging in political processes (and learning how to do so) is a useful thing, and is consistently underrated by the LW consensus.
Just a reminder, the local meme "politics is the mind killer" is an injunction not against discussing politics, but against using political examples in a non-political argument.
Agreed. But there is also a generally negative attitude towards politics
Dollars and utilons are not meaningfully comparable.
Edited to restate: Dollars (or any physical, countable object) cannot stand in for utilons.
Can you explain what is wrong with the following comparison?
The value of a dollar in utilons is equal to the increase in expected utilons brought by being given another dollar.
The problem is the law of diminishing marginal utility. Translating from dollars to utilons is not straightforward at all; how much utility that dollar gives you depends on factors like how many dollars you already have, how much you owe, what services you can sell, and how much you know about what to do with money. For that same reason, utilons do not add up linearly by giving you a second, third, etc., dollar.
If you had another dollar, then the value of the next dollar in utilons would decrease. Unless you're both an egoist and terrible with money, it will only be a slight decrease. After all, a dollar isn't much compared to all of the money you will make over your life.
I do not think that the fact that utilons do not add up linearly means that the conversion is not useful. For one thing, it allows you to express the law of diminishing marginal utility.
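A toy example of the diminishing-marginal-utility point, using a logarithmic utility function (the log form is purely an illustrative assumption about how utilons might relate to dollars, not a claim about real preferences):

```python
from math import log

def marginal_utilons(wealth):
    """Utilons gained from one extra dollar, assuming utilons = log(dollars)."""
    return log(wealth + 1) - log(wealth)

# The same dollar is worth far more to someone with $100 than with $100,000:
print(marginal_utilons(100))      # ~0.00995
print(marginal_utilons(100_000))  # ~0.00001
```

The conversion from dollars to utilons still exists at every wealth level; it just isn't a single fixed exchange rate, which is all the law of diminishing marginal utility claims.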
[Please read the OP before voting. Special voting rules apply.]
There is nothing morally wrong about eating meat, and vegetarianism/veganism aren't morally superior to meat-eating.
For most of the vegetarians I know, the issue isn't inherently eating meat. It's the way the animals are treated before they are killed.
Maybe you know a weird subset of vegetarians, but I don't think most would be fine with eating a dead animal that has been very well treated throughout its life.
Developing a rationalist identity is harmful. Promoting an "-ism" or group affiliation with the label "rational" is harmful.
Meta
I think LW is already too biased towards contrarian ideas - we don't need to encourage them more with threads like this.
Treated as a "contrarian opinion" and upvoted.
Meta: It is easy to take a position that is held by a significant number of people and exaggerate it to the point where nobody holds the exaggerated version. Does that count as a contrarian opinion (since nobody believes the exaggerated version that was stated) or as a non-contrarian opinion (since people believe the non-exaggerated version)?
(Edit: This is not intended to be a controversial opinion. It's meta.)
My understanding is that the idea is to post opinions you actually hold that count as contrarian.
I think raising the sanity waterline is the most important thing we can do, and we do too little of it because our discussions tend to happen amongst ourselves, i.e. with people who are far from that waterline.
Any attempt to educate people, including the attempt to educate them about rationality, should focus on teens, or where possible on children, in order to create maximum impact. HPMOR does that to some degree, but Less Wrong usually presupposes cognitive skills that the very people who'd benefit most from rationality do not possess. It is very much in-group discussion. If "refining the art of human rationality" is our goal, we should be doing a lot more outreach and a lot more production of very accessible rationality materials. Simplified versions of the sequences, with more pictures and more happiness. CC licensed leaflets and posters. Classroom materials. Videos (compare the SciShow video on Bayes' Theorem), because that's how many curious young minds get their extracurricular knowledge these days.
In fact, if we crowdfunded somebody with education materials production experience to do that (or better yet, crowdfund two or three and let them compete for the next round), I'd contribute significantly.
Is this supposed to be a contrarian view on LW? If it is, I am going to cry.
Unless we reach a lot of young people, we risk that in 30-40 years the "rationalist movement" will be mostly a group of old people spending most of their time complaining about how things were better when they were young. And the change will come so gradually we may not even notice it.
[Please read the OP before voting. Special voting rules apply.]
Somewhere between 1950 and 1970 too many people started studying physics, and now the community of physicists has entered a self-sustaining state where writing about other people's work is valued much, much more than forming ideas. Many modern theories (string theory, AdS/CFT correspondence, renormalisation of QFT) are hard to explain because they do not consist of an idea backed by a mathematical framework but solely of this mathematical framework.
Changing minds is usually impossible. People will only be shifted on things they didn't feel confident about in the first place. Changes in confidence are only weakly influenced by system 2 reasoning.
English has a pronoun that can be used for either gender and, as an accident of history not some hidden agenda, said pronoun in English is "he/him/&c."
Edited: VAuroch is the best kind of correct on "neuter" pronouns. Changed, though that might make a view less controversial than I thought (all but 2 readers agree, really?) even less so :)
I consider this an incoherent claim. "A neuter pronoun", inherently, is one that can be applied to individuals regardless of gender (actual or grammatical). That's what people want when they wish English had a neuter pronoun. 'He/him/his' is not such a pronoun. "They/them/their" is.
Singular "they" existed but then waned out of use; it has seen some comebacks. If gender information were not critical, it would have been "he" that waned. It might not be a hidden agenda so much as an unexamined or emergent one.
[Please read the OP before voting. Special voting rules apply.]
The humanities are not only a useful method of knowing about the world; properly interfaced, they ought to be able to significantly speed up science.
(I have a large interval for how controversial this is, so pardon me if you think it's not.)
Although the social sciences have undeniably helped a lot with our understanding of ourselves, their refusal to follow the scientific method is disgraceful.
When I said humanities I didn't mean social sciences; in fact, I thought social sciences explicitly followed the scientific method. Maybe the word points to something different in your head, or you slipped up. Either way, when I say humanities, I actually mean fields like philosophy and literature and sociology which go around talking about things by taking the human mind as a primitive.
The whole point of the humanities is that it's a way of doing things that isn't the scientific method. The disgraceful thing is the refusal to interface properly with scientists and scientific things - but there's no shortage of scientists who refuse to interface with humanities either, when you come down to it. My head's canonical example is Indian geneticists who try to go around finding genetic caste differences; Romila Thapar once gave an entertaining rant about how anything they found they'd be reading noise as signal because the history of caste was nothing like these people imagined.
And, on the other hand, we have many Rortys and Bostroms and Thapars in the humanities who do interface.
Funny humanities people were saying the same thing about genetic racial differences until said difference started showing up.
Do you mean humanities in the abstract or the people currently occupying humanities departments?
[Please read the OP before voting. Special voting rules apply.]
Improving the typical human's emotional state (e.g. increasing compassion and reducing anxiety) is at least as significant to mitigating existential risks as improving the typical human's rationality.
The same is true for unusually intelligent and capable humans.
For that matter, unusually intelligent and capable humans who hate or fear most of humanity, or simply don't care about others, are unusually likely to break the world.
(Of course, there are cases where failures of rationality and failures of compassion coincide; the fundamental attribution error, for instance. It seems to me that attacking these problems from both System 1 and System 2 will be more effective than either approach alone.)
I sense this opinion is not that marginal here, but it does go against the established orthodoxy: I'm pro-specks.
Roko's Basilisk legitimately demonstrates a problem with LW. "Rationality" that leads people to believe such absurd ideas is messed up, and 1) the presence of a significant number of people psychologically affected by the basilisk and 2) the fact that Eliezer accepts that basilisk-like ideas can be dangerous are signs that there is something wrong with the rationality practiced here.
I agree with this so much that, in order to not affect the mechanics of this thread, I'm going to upvote some other post of yours.
wait. now I'm not sure how to vote on THIS comment, which is brilliant.
My contrarian idea: Roko's basilisk is no big deal, but intolerance of making, admitting, or accepting mistakes is cultish as hell.
There are some I hold:
These are 10 different propositions. Fortunately I disagree with most of them so can upvote the whole bag with a clear conscience, but it would be better for this if you separated them out.
What do you mean by this? Do you just mean that it doesn't make any sense for something infinite to actually exist, or do you mean that set theory, which claims the existence of an infinite set as an axiom, is inconsistent?
Why? It seems to me that it would be obvious if the standard theory that Venus gets most of its heat from the sun was wrong, since we can easily see how much it absorbs and emits and look at the difference. Besides which, you'd need expertise to have a reasonable chance of coming up with the correct explanation on your own. Do you have relevant expertise?
Well, I have learned that explaining one of my views from this thread just cost me karma points.
I'll pass on those two.
Social problems are nearly impossible to solve. The methods we have developed in the hard sciences and engineering are insufficient to solve them.
Yvain seems to agree.
The problem with Yvain's argument is that it appears to be an example of the PHB (pointy-haired boss) fallacy: "anything I don't understand is easy to do." Or rather the "a little knowledge" problem: "anything I sort of understand is easy to do."
During the Enlightenment, when people first started talking about reorganizing society on a large scale, it seemed like a panacea. Now that we have several centuries of extremely messy experience with it, we know that it's harder than it first appeared and there are many complications. Now that developments in biology seem to make it possible to change our biology, it again looks like a panacea (at least to the people who haven't learned the lessons of the previous failure). And just as before, I predict people will discover that it's a lot more complicated, probably just as messily.
Would you disagree with the claim that several significant social problems have in fact been solved over the history of human civilization, at least in parts of the world? Or are you saying that those were the low-hanging fruit and the social problems that remain are nearly impossible to solve?
What would you say about the progress that has been made towards satisfying the Millennium Development Goals?
[Please read the OP before voting. Special voting rules apply.]
Reductionism as a cognitive strategy has proven useful in a number of scientific and technical disciplines. However, reductionism as a metaphysical thesis (as presented in this post) is wrong. Verging on incoherent, even. I'm specifically talking about the claim that in reality "there is only the most basic level".
[Please read the OP before voting. Special voting rules apply.]
The SF Bay Area is a lousy place to live.
Max L.
Now, now. The rule of the game is to upvote if you disagree and don't vote otherwise. I lived there for four years, so I think I'm qualified to have an opinion.
Max L.
I downvoted because I agree.
That's really more a personal taste than a view. The SF Bay Area is not inherently a good or bad place to live. Since you're the only person qualified to judge whether you like living there, your opinion on the matter can hardly be considered contrarian. Not unless the majority of the people on LessWrong think you're wrong about not liking living there.
Friendliness by mathematical proof about exact trustworthiness of future computing principles is misguided.
[Please read the OP before voting. Special voting rules apply.]
Utilitarianism is a moral abomination.
I am very interested in this.
Exactly what is repugnant about utilitarianism? (Moi, I find that it leads to favoring torture over 3^^^3 specks, which is beyond facepalming; I'd like to hear your view.)
I guess the moral assumptions based on which you condemn utilitarianism are the same ones you would propose instead. What moral theory do you espouse?
Having political beliefs is silly. Movements like neoreaction or libertarianism or whatever will succeed or fail mostly independently of whether their claims are true. Lies aren't threatened by the truth per se, they're threatened by more virulent lies and more virulent truths. Various political beliefs, while fascinating and perhaps true, are unimportant and worthless.
Arguing for or against various political beliefs functions mostly (1) to signal intelligence or allegiance or whatever, and (2) as mental masturbation, like playing Scrabble. "I want to improve politics" is just a thin veil that system 2 throws over system 1's urges to achieve (1) and (2).
If you actually think that improving politics is a productive thing to do, your best bet is probably something like "ensure more salt gets iodized so people will be smarter", or "build an FAI to govern us". But those options don't sound nearly as fun as writing political screeds.
(While "politics is the mind-killer" is LW canon, "believing political things is stupid" seems less widely-held.)
While I mostly agree, trying to devise political systems that would encourage a smarter populace (ex. SSC's Graduation Speech with the guaranteed universal income and abolishing public schools) seems like a potentially worthwhile enterprise.
I agree that forming political beliefs is not a productive use of my time in the same way that earning a salary to donate to SCI to cure people of parasites is. I disagree that this makes it silly. The reasons you gave may not be the most noble of reasons, but they are still perfectly valid.
[Please read the OP before voting. Special voting rules apply.]
Artificial intelligences are overrated as a threat, and institutional intelligences are underrated.
[Please read the OP before voting. Special voting rules apply.]
Moral realism is true.
[Please read the OP before voting. Special voting rules apply.]
You can expect to have about as much success effectively and systematically teaching rationality as you could in effectively and systematically teaching wisdom. Attempts for a systematic rationality curriculum will end up as cargo cultism and hollow ingroup signaling at worst and heuristics and biases research literature scholarship at best. Once you know someone's SAT score, knowing whether they participated in rationality training will give very little additional predictive power on whether they will win at life.
Upvoted because I disagree with the implicit assumption that the best way of teaching rationality-as-winning would look like heuristics and biases scholarship, rather than teaching charisma, networking, action, signaling strategies, and how to stop thinking.
What's the difference?
This seems pretty similar to the irrationality game. That's not necessarily a bad thing, but personally I would try the following formula next time (perhaps this should be a regular thread?):
Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.
Ask people to avoid upvoting views they already agree with. This is to prevent the thread from becoming an echo chamber of edgy "contrarian" views that are in fact pretty widespread already.
Ask people to vote up only those comments that cause them to update or change their mind on some topic. Increased belief accuracy is what we want; let's reward that.
Ask people to downvote spam and trolling only. Through this restriction on the use of downvotes, we lessen the anticipated social punishment for sharing an unpopular view that turns out to be incorrect (which is important counterfactually).
Encourage people to make contrarian factual statements rather than contrarian value statements. If we believe different things about the world, we have a better chance of having a productive discussion than if we value different things in the world.
Not sure if these rules should apply to top-level comments only or to every comment in the thread. Another interesting question: should playing devil's advocate be allowed, i.e. presenting novel arguments for unpopular positions you don't actually agree with, and under what circumstances (are disclaimers required, etc.)?
You could think of my proposed rules as being about halfway between the irrationality game and a normal LW open thread. Perhaps by doing binary search, we can figure out the optimal degree of facilitated contrarianism, and even make every Nth open thread a "contrarian open thread" that operates under those rules.
Another interesting way to do contrarian threads might be to pick particular views that seem popular on Less Wrong and try to think of the best arguments we can for why they might be incorrect. Kind of like a collective hypothetical apostasy. The advantage of this is that we generate potentially valuable contrarian positions no one is holding yet.
This has the problem that beliefs with a large inferential distance won't get stated.
The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.
[Please read the OP before voting. Special voting rules apply.]
An AI which followed humanity's CEV would make most people on this site dramatically less happy.
[Please read the OP before voting. Special voting rules apply.]
Fossil fuels will remain the dominant source of energy until we build something much smarter than ourselves. Efforts spent on alternative energy sources are enormously inefficient and mostly pointless.
Related claim: the average STEM-type person has no gut-level grasp of the quantity of energy consumed by the economy and this leads to popular utopian claims about alternative energy.
Is this a claim about the choices we will make or about what is possible? If (1), I can buy it as an argument that states will not be rational enough to choose better options; if (2), I think it's false.
[Please read the OP before voting. Special voting rules apply.]
American intellectual discourse, including within the LW community, is informed to a significant extent by folk beliefs existing in the culture at large. One of these folk beliefs is an emphasis on individualism -- both methodological and prescriptive. This is harmful: methodological individualism ignores the existence of shared cultures and coordination mechanisms that can be meaningfully abstracted across groups of individuals, and prescriptive individualism deprives those who take it seriously of community, identity, and ritual, all of which are basic human needs.
[Contrarian thread special voting rules]
I bite the bullet on the repugnant conclusion.
AI boxing will work.
EDIT: Used to be "AI boxing can work." My intent was to contradict the common LW positions that AI boxing is either (1) a logical impossibility, or (2) more difficult or more likely to fail than FAI.
[META]
Previous incarnations of this idea: Closet survey #1, The Irrationality Game (More, II, III)
[Please read the OP before voting. Special voting rules apply.]
For many smart people, academia is one of the highest-value careers they could pursue.
Clarify "many"?
[opening post special voting rules yadda yadda]
Biological hominids descended from modern humans will be the keystone species of biomes loosely descended from farms, pastures, and cities, optimized for symbiosis and matter/energy flow between organisms, covering large fractions of the Earth's land, for tens of millions of years. In special cases there may be sub-biomes in which non-biological energy is converted into biomass, and it is possible that human-keystone ocean-based biomes might appear as well. Living things will continue to be the driving force of non-geological activity on Earth, with hominid-driven symbiosis (of which agriculture is an inefficient first draft) producing interesting new patterns, materials, and ecosystems.
Are you imagining these human descendants will be technology-using?
Yes, as hominids have been for more than a million years. An expanded toolkit, though, even compared to today (though it's possible that not all of our current tools will have the futures many of us expect, in the long run). Good manipulation of electromagnetism alone is having very interesting effects that we have only really begun to touch on, and I expect biotechnology and related things to have interesting roles to play. All of this will have to occur within the context of ecological laws, which are pretty immutable, and living systems are very good at evolving, replicating, and surviving in many contexts on this planet.
Meta-comment: I'm not sure that structure or voting scheme is particularly useful. The hope would be to allow conversation about contrarian viewpoints which are actually worth investigating. I'm not sure how you separate the wheat from the chaff, but that should be the goal...
Upvote interestingness, downvote incoherence, ignore agreement and disagreement?
Open borders is a terrible idea and could possibly lead to the collapse of civilization as we know it.
EDIT: I should clarify:
Whether you want open borders and whether you want the immigration status quo are different questions. I happen to be against both, but it is perfectly consistent for somebody to be against open borders but be in favor of the current level of immigration. The claim is specifically about completely unrestricted migration as advocated by folks like Bryan Caplan. Please direct your upvotes/downvotes to the former claim, rather than the latter.
Why do you believe this? Countries with the most liberal immigration policies today don't seem to be on the verge of collapse.
You should visit Bradford someday.
I'm sure Bradford isn't the greatest place to live, but (1) it's better than many US inner cities, (2) the UK seems quite far from collapse, and generally (3) "such-and-such a country allows quite a lot of immigration, and there is one city there that has a lot of immigrants and isn't a very nice place" seems a very very very weak argument against liberal immigration policies.
I'm being flippant of course. I didn't intend it as a serious argument.
Quick response:
1) You cannot compare the UK's cities to the US' cities because the US has a 14% black population and the UK does not. "Inner city" is a codeword for the kind of black dysfunction that thankfully the UK does not possess.
2) The UK is not close to collapse because we don't have fully Open Borders yet. For all its faults, the EU's migration framework isn't quite letting in millions of third-worlders yet.
3) Of course.
If you don't mind, I don't want to get into a lengthy debate on the subject.
I am quite happy not to have a lengthy debate with you on this topic.
The difference is only apparent; both societies have treated their nonwhites like trash. The British Empire merely avoided its "dysfunction" problem at home by outsourcing it to India.
A non-representative event happened and was blown out of proportion by media.
What event are you talking about? If you mean the Pakistani rape gang, that was Rotherham, not Bradford.
Ebola?
Ebola is more an argument for colonialism than against open borders but let's not be picky.
Ebola is an example of a locally-originated virulent existential threat, biological, social, or otherwise, that open borders fail to contain. Controlled borders, despite all the issues, can at least act as an immune system of sorts.
Yes I agree, I was just being facetious :s
Collapsing civilisation as we know it is presumably not a bad thing if you think that our current civilisation is fundamentally unjust or suboptimally allocates resources based on arbitrary geographic boundaries.
I'll take both of those over the Camp of the Saints.
[Please read the OP before voting. Special voting rules apply.]
Current levels of immigration are also terrible, and will significantly speed up the collapse of the Western world.
Citation required.
[Please read the OP before voting. Special voting rules apply.]
There probably exists - or has existed at some time in the past - at least one entity best described as a deity.
Define deity?
[Please read the OP before voting. Special voting rules apply.]
Frequentist statistics are at least as appropriate as, if not more appropriate than, Bayesian statistics for approaching most problems.
There is no territory, it's maps all the way down.
Every computation requires something that instantiates it, i.e. an abstract or concrete machine to run on. In a very extreme case you might come up with a very abstract idea; even then, the instantiation is provided by the imaginer. Every bit of information requires a transfer of energy. Instantiation is a transitive relation: if there is a simulation of me, it necessarily instantiates my thoughts too.
Also, the parent comment implies a belief in panpsychism.
Can you unpack this? At the moment it seems nonsensical, in a "throwing together random words and hoping people read profound insights into it" way.
Sure. Have you actually seen "the territory"? Of course not. There are plenty of unexplained observations out there. We assume that these come from some underlying "reality" which generates them. And it's a fair assumption. It works well in many cases. But it is still an assumption, a model. To quote Brienne Strohl on noticing:
To most people the map/territory observation is such a "one and the same". I'm suggesting that it's only a hypothesis. It gives way when making a map changes the territory (hello, QM). It is also unnecessary, because the useful essence of the map/territory model is that "future is partially predictable", in a sense that it is possible to take our past experiences, meditate on it for a while, figure out what to expect in the future and see our expectations at least partially confirmed. There is no need to attach the notion of some objective reality causing this predictability, though admittedly it does feel good to pretend that we stand on a solid ground, and not on some nebulous figment of imagination.
If you extract this essence, that future experiences are predictable from the past ones, and that we can shape our future experiences based on the knowledge of the past, it is enough to do science (which is, unsurprisingly, designing, testing and refining models). There is no indication that this model building will one day be exhausted. In fact, there is plenty of evidence to the contrary. It has happened many times throughout human history that we thought that our knowledge was nearly complete, there was nothing more to discover, except for one or two small things here and there. And then those small things became gateways to more surprising observations.
Yet we persist in thinking that there are ultimate laws of the universe, and that some day we might discover them all. I posit that there are no such laws, and we will continue digging deeper and deeper, without ever reaching the bottom... because there is no bottom.
Thanks for explaining, upvoted. But I still don't see how this could possibly make sense.
But our models have become more accurate over time. We've become, if you will, "less wrong". If there's no territory, what have we been converging to?
...Yes? I see it all the time.
I seem to recall someone (EY?) defining "reality" as "that which generates our observations". Which seems like a fairly natural definition to me. If it's just maps generating our observations, I'd call the maps part of the territory. (Like a map with a picture of the map itself on the territory. Except, in your world, I guess, there's no territory to chart so the map is a map of itself.) This feels like arguing about definitions.
I see how this might sorta make sense if we postulate that the Simulator Gods are trying really hard to fuck with us. Though still, in that case, I think the simulating world can be called a territory.
Indeed they have. We can predict the outcome of future experiments better and better.
We've become, if you will, "less wrong".
Yep.
If there's no territory, what have we been converging to?
Why do you think we have been converging to something? Every new model generates more questions than it answers. Sure, we know now why emitted light is quantized, but we have no idea how to deal, for example, with the predicted infinite vacuum energy.
No, you really don't. What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.
It is not a definition, it's a hypothesis. At least in the way Eliezer uses it. I make no assumptions about the source of observations, if any.
First, I made no claims that maps generate anything. Maps are what we use to make sense of observations. Second, if you define the territory the usual way, as "reality", then of course maps are part of the territory; everything is.
Not quite. You construct progressively more accurate models to explain past and predict future inputs. In the process, you gain access to new and more elaborate inputs. This does not have to end.
I realize that is how you feel. The difference is that the assumption of a territory implies that we have a chance to learn everything there is to learn some day, to construct the absolutely accurate map of the territory (possibly at the price of duplicating the territory and calling it a map). I am not convinced that it is a good assumption. Quite the opposite: our experience shows that it is a bad one; it has been falsified time and again. And bad models should be discarded, no matter how comforting they may be.
What is the point of science, otherwise? Better prediction of observations? But you can't explain what an observation is.
If the territory theory is able to explain the purpose of science, and the no-territory theory is not, the territory theory is better.
..according to a map which has "inputs from the territory" marked on it.
Well, you need to. If the territory theory can explain the very existence of observations, and the no-territory theory cannot, the territory theory is better.
Inputs from where?
No it doesn't. "The territory exists, but is not perfectly mappable" is a coherent assumption, particularly in view of the definition of the territory as the source of observations.
[Please read the OP before voting. Special voting rules apply.]
Feminism is a good thing. Privilege is real. Scott Alexander is extremely uncharitable towards feminism over at SSC.
Like a few others, I agree with the first two but emphatically disagree with the last. And if you were right about it, I'd expect Ozy to have taken Scott to task about it, and him to have admitted to being somewhat wrong and updated on it.
EDIT: This has, in fact, happened.
See this tumblr post for an example of Ozy expressing dissatisfaction with Scott's lack of charity in his analysis of SJ (specifically in the "Words, Words, Words" post). My impression is that this is a fairly regular occurrence.
You might be right about him not having updated. If anything it seems that his updates on the earlier superweapons discussion have been reverted. I'm not sure I've seen anything comparably charitable from him on the subject since. I don't follow his thoughts on feminism particularly closely, so I could easily be wrong (and would be glad to find I'm wrong here).
Imo this quote from her response is a pretty weak argument:
"The concept of female privilege is, AFAICT, looking at the disadvantages gender-non-conforming men face, noticing that women with similar traits don't face those disadvantages, and concluding that this is because women are advantaged in society."
In order for this to be a sensible counterpoint you would need to either say "gender conforming male privilege" or you would need to show that there are few men who mind conforming to gender roles. I don't really see why anyone believes most men are fine with living out standard gender norms and I certainly don't see how anyone has evidence for this.
If a high percentage of men are gender non-conforming and such men are at a disadvantage in society, then the concept of male privilege is seriously weakened. And using it is dangerous, as it might harm those men to hear that they are "privileged" when this is not the case (at least in terms of gender; maybe they are rich, etc.).
I agree with claim 1 for some definitions of feminism and not for others. I agree with claim 2. I think that Scott would agree with claim 1 (for some definitions) and with claim 2 as well, so I disagree with claim 3.
Can you defend these statements?
I can, but I don't want to fall into that inferential canyon.
I think that if you actually can defend them, it might be worth it to go through the canyon. Inferential canyons are a lot easier to cross when your targets are aware of their existence and are willing and able to discuss responsibly.
("worth it" is of course relative to other ways you discuss with strangers on the internet)
Yes, Yes, No. Still upvoting, because "Scott Alexander" and "uncharitable" in the same sentence does not compute.
I consider him a modern G.K. Chesterton. He's eloquent, intelligent, and wrong.
Do you mind telling me how you think he's being uncharitable? I agree mostly with your first two statements. (If you don't want to put it on this public forum because hot debated topic etc I'd appreciate it if you could PM; I won't take you down the 'let's argue feminism' rabbit-hole.)
(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)
Fortunately, LW is not an appropriate forum for argument on this subject, but for an example of an uncharitable post, see Social Justice and Words, Words, Words.
For a very quick example, see this Tumblr post. Mr. Alexander finds an example of a neoreactionary leader trying to be mean to a transgender woman inside the NRx sphere, and then shows the vast majority response of (non-vile) neoreactionaries to at least be less exclusionary than that, even though they have ideological issues with the diagnosis or treatment of gender dysphoria. Then he describes a feminist tumblr which develops increasingly misgendering and rude ways to describe disagreeing transgender men.
I don't know that this is actually /wrong/. All the actual facts are true, and if anything understate their relevant aspects -- if anything, I expect Ozy's understated the level of anti-transmale bigotry floating around the 'enlightened' side of Tumblr. I don't find NRx very persuasive, but there are certainly worse things that could be done than using it as a blunt "you must behave at least this well to ride" test. I don't know that feminism really needs external heroes: it's certainly a large enough group that it should be able to present internal speakers with strong and well-grounded beliefs. And I can certainly empathize with holding feminists to a higher standard than neoreactionaries hold themselves.
The problem is that it's not very charitable. Scott's the person that's /come up/ with the term "Lizardman's Constant" to describe how a certain percentage of any population will give terrible answers to really obvious questions. He's a strong advocate of steelmanning opposing viewpoints, and he's written an article about the dangers of only looking at the .
But he's looking at a viewpoint shown primarily in the <5% margin feminist tumblr, and comparing them to a circle of the more polite neoreactionaries (damning with faint praise as that might be, still significant), and, uh, I'm not sure that we should be surprised if the worst of the best said meaner things than the best of the worst.
I'm not sure he /needs/ to be charitable, again -- feminism should have its own internal speakers, I think mainstream modern feminism could use better critics than whoever's on Fox News next, so on -- but it's an understandable criticism.
((Upvoting the thread starter, but more because one and two are mu statements; either closed questions or not meaningful. Weakly agree on third.))
Being 5% of the group doesn't mean they are 5% of the influence. The loudest 5% may get to set the agenda of the remaining 95% if the remaining ones are willing to go along with things they don't particularly care about, but don't oppose enough to make these things deal-breakers either.
See also: http://www.smbc-comics.com/?id=2939
It also helps if the 5% have arguments for their positions.
How would you define "privilege"?
This is a good definition. In particular, "Anti-oppressionists use "privilege" to describe a set of advantages (or lack of disadvantages) enjoyed by a majority group, who are usually unaware of the privilege they possess. ... A privileged person is not necessarily prejudiced (sexist, racist, etc) as an individual, but may be part of a broader pattern of *-ism even though unaware of it."
No, this is not a motte.
That's in the bailey, because of "enjoyed by a majority group."
Does it have to be a majority group? For example, does this compared with this count as an example of "black privilege"? Would you describe the fact that some people are smarter (or stronger) than others as "intelligence privilege" (or "strength privilege")?
Why the "majority group" qualifier? Privilege has been historically associated with minorities, like aristocracy.
Why focus only specific majority groups and thereby ignore things like men in domestic violence issues getting a lot less help from society than women?
Nearly everyone has some advantages and disadvantages. It's often not helpful to conflate that huge bag of advantages and disadvantages into a single variable.
Easier difficulty setting for your life in some context through no fault or merit of your own.
So would you describe someone tall as having "height privilege" because they're better at basketball?
[Please read the OP before voting. Special voting rules apply.]
The dangers of UFAI are minimal.
[Please read the OP before voting. Special voting rules apply.]
Superintelligence is an incoherent concept. Intelligence explosion isn't possible.
[Please read the OP before voting. Special voting rules apply.]
As a first approximation, people get what they deserve in life. Then add the random effects of luck.
Max L.
What ethical theory are you using for your definition of "deserve"?
Why do Africans deserve so much less than Americans? Why did people in the past deserve so much less than current people? Why do people with poor parents deserve less than people with rich parents?
[Please read the OP before voting. Special voting rules apply.]
Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.
What would you have to see to convince you otherwise?
I think it would take an a priori philosophical argument, rather than empirical evidence.
[Please read the OP before voting. Special voting rules apply.]
The replication initiative (the push to replicate the majority of scientific studies) is reasonably likely to do more harm than good. Most of the points raised by Jason Mitchell in The Emptiness of Failed Replications are correct.
I read this trying to keep as open a mind as possible, and I think there is SOME value to SOME of what he said (i.e. no two experiments are totally the same, and replicators are often motivated to prove the first study wrong)... But one thing that really set me off is that he genuinely considers a study that doesn't prove its hypothesis a failure, not even acknowledging that IN PRINCIPLE, this study has proven the hypothesis wrong, which is valuable knowledge all the same.
Which is so jarring with what I consider the very basis of science that I find difficult to take Mitchell seriously.
Dualism is a coherent theory of mind and the only tenable one in light of our current scientific knowledge.
[Please read the OP before voting. Special voting rules apply.]
As long as you get the gist (think in probability instead of certainty, update incrementally when new evidence comes along), there's no additional benefit to learning Bayes' Theorem.
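For what it's worth, the "gist" can be written down in a few lines; this is a toy sketch, and the numbers (a fair prior, evidence twice as likely under the hypothesis as under its negation) are made up purely for illustration:

```python
# Incremental Bayesian updating: start with a prior P(H) and fold in
# evidence one observation at a time, renormalizing at each step.

def update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H | evidence) after one observation, via Bayes' theorem."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# Hypothetical numbers: P(H) = 0.5, and each observation is twice as
# likely if H is true (0.8) as if it is false (0.4).
p = 0.5
for _ in range(3):
    p = update(p, likelihood_h=0.8, likelihood_not_h=0.4)

print(round(p, 3))  # after three observations the posterior is 8/9 ≈ 0.889
```

The loop is the "update incrementally" part; the theorem itself is just the renormalization arithmetic inside `update`, which is arguably the point of the comment above.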
[Read the OP before voting for special voting rules.]
The many worlds interpretation of quantum mechanics is categorically confused nonsense. Its origins lie in a map/territory confusion, and in the mind projection fallacy. Configuration space is a map, not territory; it is an abstraction used for describing the way that things are laid out in physical space. The density matrix (or in the special case of pure states, the state vector, or the wave function) is a subjective calculational tool used for finding probabilities. It's something that exists in the mind. Any 'interpretation' of quantum mechanics which claims that any of these things exists in reality (e.g. MWI) therefore commits the mind projection fallacy.
While I am not pro-wireheading and I expect this to be only a semi-contrarian position here...
Happiness is actually far more important than people give it credit for, as a component of a reflectively coherent human utility function. About two thirds of all statements made of the form, "$HIGHERORDERVALUE is more important than being happy!" are reflectively incoherent and/or pure status-signaling. The basic problem that needs addressing is of distinction between simplistic pleasures and a genuinely happy life full of variety, complexity, and subtlety, but the signaling games keep this otherwise obvious distinction from entering the conversation simply because happiness of all kinds is signaled to be low-status.
[Please read the OP before voting. Special voting rules apply.]
The necessary components of AGI are quite simple, and have already been worked out in most cases. All that is required is a small amount of integrative work to build the first UFAI.
What do you mean by that? Technically, all that is required is the proper arrangement of transistors.
I mean that the component pieces such as planning algorithms, logic engines, pattern extractors, evolutionary search, etc. have already been worked out, and that there exist implementable designs for combining these pieces together into an AGI. There aren't any significant known unknowns left to be resolved.
Then where's the AI?
All the pieces for bitcoin were known and available in 1999. Why did it take 10 years to emerge?
[Contrarian thread special voting rules]
I would not want to be cryonically frozen and resurrected as my sense of who I am is tied into social factors that would be lost
Would you be willing to freeze if your family did? Your friends and family? Your whole country? Or even if everyone in the world was preserved, would you expect the structure of society post-resurrection be different enough that you would refuse preservation?
I'm not sure about the friends and family examples; it would depend on what I thought that future society would be like. If cryonics was the norm, I probably wouldn't opt out of it, because I would have a reasonable expectation that, if resurrection was successful, there would be other people in the same situation, so there would be infrastructure to support us.
The social factors I'm thinking of include the skills, qualifications and experience that I have developed in my life, which would likely be irrelevant in a world that can resurrect me. At best I would be a historical curiosity with nothing to contribute.
[Please read the OP before voting. Special voting rules apply.]
It would be of significant advantage to the world if most people started living on houseboats.
Is there even enough coast for that?
If people didn't live in cities, they'd have to commute more. There would be a large increase in transportation costs.
Where I live there is an abundance of canals. "Most people" is perhaps an exaggeration, but the main points in defence of increased houseboating would be:
(1) a house is a large, expensive, immobile and illiquid asset. A houseboat is rather less expensive, which frees up capital for other purposes.
(2) the internet makes it less necessary for most people to live in cities.
(3) there would be less costs associated with moving between different areas.
Your mileage may vary. Getting internet made me yearn to move to a larger city where I could meet more interesting people and do more interesting stuff, which in the end I did.
Sounds like a Dutch city.
But, it seems, no less desired. See e.g. LW meetups.
If you don't want high moving costs, you can simply rent a flat.
I am pretty sure that out of two equivalent houses the one which floats would be noticeably more expensive, and more expensive to maintain, too. Houseboats are typically less expensive than houses because they are smaller and less convenient.
Aren't RVs even cheaper?
Indeed. I would in principle be willing to apply a similar argument to RVs, but (since living in an RV holds no aesthetic appeal for me, whereas houseboating does) I am rather less aware of what the logistics would be like.
And shacks made out of plywood and corrugated iron are cheaper still.
I find it difficult to believe that houseboats are inherently less expensive. It seems more likely that there's some reason house boats cannot be made as large and expensive as regular houses, so the average houseboat is much cheaper than the average house, even if it's more expensive than a house of the same quality.
The internet gets much more difficult if you don't live in cities. While it mitigates the costs of people not living near each other, it does not remove them. There are still lots of people putting large amounts of time into physically commuting.
Why not use mobile homes? They can't be stacked in three dimensions like apartments, but at least you can put them in two-dimensional grids.