Dualism is a coherent theory of mind and the only tenable one in light of our current scientific knowledge.
[Please read the OP before voting. Special voting rules apply.]
Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.
The "human value is not complex" point is frankly aging very well with the rise of LLMs.
You just pointed out that what an LLM learned for even a very simple game with extensive clean data turned out to be "a bag of heuristics": https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1?commentId=ykmKgL8GofebKfkCv
Well, it would certainly be nice if that were true, but all the interpretability research thus far points in the opposite direction from the one you seem to be taking it in. The only cases to date where neural nets turn out to learn a crisp, clear algorithm that extrapolates correctly out many orders of magnitude, verified by interpretability or formal methods, are not deep nets. They are tiny, tiny nets, either constructed by hand or trained by grokking (which appears not to describe any GPT-4 model at all, and it's not looking good for their successors either). The bigger, deeper nets certainly get much more powerful and more intelligent, but they appear to be doing so by, well, slapping on ever more bags of heuristics at scale. Which is all well and good if you simply want raw intelligence and capability, but not good if anything morally important hinges on them reasoning correctly for the right reasons, rather than on heuristics which can break when extrapolated far enough or be manipulated by adversarial processes.
[Please read the OP before voting. Special voting rules apply.]
The replication initiative (the push to replicate the majority of scientific studies) is reasonably likely to do more harm than good. Most of the points raised by Jason Mitchell in The Emptiness of Failed Replications are correct.
Imagine a physicist arguing that replication has no place in physics, because it can damage the careers of physicists whose experiments failed to replicate! Yet that's precisely the argument that the article makes about social psychology.
Open borders is a terrible idea and could possibly lead to the collapse of civilization as we know it.
EDIT: I should clarify:
Whether you want open borders and whether you want the immigration status quo are different questions. I happen to be against both, but it is perfectly consistent for somebody to be against open borders but be in favor of the current level of immigration. The claim is specifically about completely unrestricted migration as advocated by folks like Bryan Caplan. Please direct your upvotes/downvotes to the former claim, rather than the latter.
[Please read the OP before voting. Special voting rules apply.]
Current levels of immigration are also terrible, and will significantly speed up the collapse of the Western world.
White people have treated all nonwhites like trash at some point or another
I think that most peoples have treated some other tribe as trash at some point or another. The particular case which prompted this response was the English and the Irish, but the list of examples is very long.
[Please read the OP before voting. Special voting rules apply.]
As a first approximation, people get what they deserve in life. Then add the random effects of luck.
Max L.
Why do Africans deserve so much less than Americans? Why did people in the past deserve so much less than current people? Why do people with poor parents deserve less than people with rich parents?
[Please read the OP before voting. Special voting rules apply.]
Feminism is a good thing. Privilege is real. Scott Alexander is extremely uncharitable towards feminism over at SSC.
Yes, Yes, No. Still upvoting, because "Scott Alexander" and "uncharitable" in the same sentence does not compute.
Do you mind telling me how you think he's being uncharitable? I mostly agree with your first two statements. (If you don't want to put it on this public forum because it's a hotly debated topic, etc., I'd appreciate a PM; I won't take you down the 'let's argue feminism' rabbit hole.)
(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)
Easier difficulty setting for your life in some contexts, through no fault or merit of your own.
I'd argue that height privilege (up to a point, typically around 6'6") is a real thing, having nothing to do with being good at sports. There is a noted experiment, which my google-fu is currently failing to turn up, in which participants were shown a video of an interview between a man and a woman. In one group, the man was standing on a footstool behind his podium, so that he appeared markedly taller than the woman. In the other group, the man was standing in a depression behind his podium, so that he appeared shorter. The content of the interview was identical.
Participants rated the man in the "taller" condition as more intelligent and more mature than the same man in the "shorter" condition. That's height privilege.
Why the "majority group" qualifier? Privilege has been historically associated with minorities, like aristocracy.
[Please read the OP before voting. Special voting rules apply.]
Superintelligence is an incoherent concept. Intelligence explosion isn't possible.
How smart does a mind have to be to qualify as a "superintelligence"? It's pretty clear that intelligence can go a lot higher than current levels.
What do you predict would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer? What do you predict would happen if we selectively bred humans for intelligence for a couple million years? "Impractical" would be understandable, but I don't see how you can believe superintelligence is "incoherent".
As for "Intelligence explosion isn't possible", that's a lot more reasonable, e.g. see the entire AI foom debate.
[Please read the OP before voting. Special voting rules apply.]
Buying a lottery ticket every now and then is not irrational. Unless you have thoroughly optimized the conversion of every dollar you own into utility-yielding investments and expenses, buying exposure to a large positive tail for a few dollars in lottery tickets can still be rational.
Phrased another way, when you buy a lottery ticket you aren't buying an investment, you're buying a possibility that is not available otherwise.
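A minimal sketch of that argument in expected-utility terms. Every number below is an illustrative assumption, not a real lottery's odds or anyone's actual utility function:

```python
# Toy expected-utility comparison for the "positive tail" argument above.
# All parameters are made-up assumptions for illustration.

p_win = 1e-8            # assumed jackpot probability
ticket_cost = 2.0       # dollars
marginal_utility = 1.0  # assumed utils per dollar at current wealth
jackpot_utility = 1e9   # assumed utils of the jackpot outcome -- a life change
                        # you cannot buy incrementally by saving the ticket price

# First-order expected-utility change from buying one ticket:
delta_u = p_win * jackpot_utility - ticket_cost * marginal_utility
print(f"Expected utility change: {delta_u:+.1f}")  # +8.0 with these numbers

# Buying comes out ahead iff jackpot_utility > ticket_cost * marginal_utility / p_win,
# i.e. iff the jackpot outcome is worth vastly more in utility than its dollar
# value times marginal utility -- the "possibility not available otherwise" claim.
```

Whether the purchase is actually rational then turns entirely on whether your utility function really has such a jump, which the toy numbers simply assume.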
[Please read the OP before voting. Special voting rules apply.]
The dangers of UFAI are minimal.
[Please read the OP before voting. Special voting rules apply.]
For many smart people, academia is one of the highest-value careers they could pursue.
[Please read the OP before voting. Special voting rules apply.]
Utilitarianism is a moral abomination.
Exactly what is repugnant about utilitarianism?
It's inhuman, totalitarian slavery.
Islam and Christianity are big on slavery, but it's mainly a finite list of do's and don'ts from a Celestial Psychopath. Obey those, and you can go to a movie. Take a nap. The subjugation is grotesque, but it has an end, at least in this life.
Not so with utilitarianism. The world is a big machine that produces utility, and your job is to be a cog in that machine. Your utility is 1 seven billionth of the equation - which rounds to zero. It is your duty in life to chug and chug and chug like a good little cog without any preferential treatment from you, for you or anyone else you actually care about, all through your days without let.
And that's only if you don't better serve the Great Utilonizer ground into a human paste to fuel the machine.
A cog, or fuel. Toil without relent, or harvest my organs? Which is less of a horror?
Of course, some others don't get much better consideration. They, too, are potential inputs to the great utility machine. Chew up this guy here, spit out 3 utilons. A net increase in utilons! Fire up the woodchipper!
But at least one can argue that there is a net increase of util...
I disagree, but my reasons are a little intricate. I apologize, therefore, for the length of what follows.
There are at least three sorts of questions you might want to use a moral system to answer. (1) "Which possible world is better?", (2) "Which possible action is better?", (3) "Which kind of person is better?". Many moral systems take one of these as fundamental (#1 for consequentialist systems, #2 for deontological systems, #3 for virtue ethics) but in practice you are going to be interested in answers to all of them, and the actual choices you need to make are between actions, not between possible worlds or characters.
Suppose you have a system for answering question 1, and on a given occasion you need to decide what to do. One way to do this is by choosing the action that produces the best possible world (making whatever assumptions about the future you need to), but it isn't the only way. There is no inconsistency in saying "Doing X will lead to a better world, but I care about my own happiness as well as about optimizing the world so I'm going to do Y instead"; that just means that you care about other things besides morality. Which pr...
Lots to comment on here. That last paragraph certainly merits some comment.
Yes, most people are almost entirely inconsistent about the morality they profess to believe. At least in the "civilized world". I get the impression of more widespread fervent and sincere beliefs in the less civilized world.
Do Christians in the US really believe all their rather wacky professions of faith? Or even the most tame, basic professions of faith? Very very few, I think. There are Christians who really believe, and I tend to like them, despite the wackiness. Honest, consistent, earnest people appeal to me.
For the great mass, I increasingly think they just make talking noises appropriate to their tribe. It's not that they lie, it's more that correspondence to reality is so far down the list of motivations, or even evaluations, that it's not relevant to the noises that come from their mouths.
It's the great mass of people who seem to instinctively say whatever is socially advantageous in their tribe that give me the heebie-jeebies. They are completely alien - which, given the relative numbers, means I am totally alien. A stranger in a strange land.
...Isn't it better to classify people in a
AI boxing will work.
EDIT: Used to be "AI boxing can work." My intent was to contradict the common LW positions that AI boxing is either (1) a logical impossibility, or (2) more difficult or more likely to fail than FAI.
[Please read the OP before voting. Special voting rules apply.]
It would be of significant advantage to the world if most people started living on houseboats.
[Please read the OP before voting. Special voting rules apply.]
There probably exists - or has existed at some time in the past - at least one entity best described as a deity.
Having political beliefs is silly. Movements like neoreaction or libertarianism or whatever will succeed or fail mostly independently of whether their claims are true. Lies aren't threatened by the truth per se; they're threatened by more virulent lies and more virulent truths. Various political beliefs, while fascinating and perhaps true, are unimportant and worthless.
Arguing for or against various political beliefs functions mostly (1) to signal intelligence or allegiance or whatever, and (2) as mental masturbation, like playing Scrabble. "I want to improve politics" is just a thin veil that system 2 throws over system 1's urges to achieve (1) and (2).
If you actually think that improving politics is a productive thing to do, your best bet is probably something like "ensure more salt gets iodized so people will be smarter", or "build an FAI to govern us". But those options don't sound nearly as fun as writing political screeds.
(While "politics is the mind-killer" is LW canon, "believing political things is stupid" seems less widely-held.)
[Please read the OP before voting. Special voting rules apply.]
Fossil fuels will remain the dominant source of energy until we build something much smarter than ourselves. Efforts spent on alternative energy sources are enormously inefficient and mostly pointless.
Related claim: the average STEM-type person has no gut-level grasp of the quantity of energy consumed by the economy and this leads to popular utopian claims about alternative energy.
Roko's Basilisk legitimately demonstrates a problem with LW. "Rationality" that leads people to believe such absurd ideas is messed up, and 1) the presence of a significant number of people psychologically affected by the basilisk and 2) the fact that Eliezer accepts that basilisk-like ideas can be dangerous are signs that there is something wrong with the rationality practiced here.
[opening post special voting rules yadda yadda]
Biological hominids descended from modern humans will be the keystone species of biomes loosely descended from farms, pastures, and cities, optimized for symbiosis and matter/energy flow between organisms, covering large fractions of the Earth's land, for tens of millions of years. In special cases there may be sub-biomes in which non-biological energy is converted into biomass, and it is possible that human-keystone ocean-based biomes might appear as well. Living things will continue to be the driving force of non-geological activity on Earth, with hominid-driven symbiosis (of which agriculture is an inefficient first draft) producing interesting new patterns, materials, and ecosystems.
[Please read the OP before voting. Special voting rules apply.]
Frequentist statistics are at least as appropriate as, if not more appropriate than, Bayesian statistics for approaching most problems.
[Please read the OP before voting. Special voting rules apply.]
Reductionism as a cognitive strategy has proven useful in a number of scientific and technical disciplines. However, reductionism as a metaphysical thesis (as presented in this post) is wrong. Verging on incoherent, even. I'm specifically talking about the claim that in reality "there is only the most basic level".
Meta-comment: I'm not sure that structure or voting scheme is particularly useful. The hope would be to allow conversation about contrarian viewpoints which are actually worth investigating. I'm not sure how you separate the wheat from the chaff, but that should be the goal...
Yes. Contrarian position: This thread would be better if we upvoted contrarian positions that are interesting or caused updates, not those that we disagree with.
[Please read the OP before voting. Special voting rules apply.]
An AI which followed humanity's CEV would make most people on this site dramatically less happy.
I don't think you could get enough of humanity to agree on what should be considered "deviant" to make that value cohere.
[Please read the OP before voting. Special voting rules apply.]
The notion of freedom is incoherent. People would be better off abandoning the pursuit of it.
[Please read the OP before voting. Special voting rules apply.]
Causal connections should not be part of our most fundamental model of the Universe. Everything that is useful about causal narratives is a consequence of the Second Law of Thermodynamics, which is irrelevant when we're talking about microscopic interactions. Extrapolating our macroscopic fascination with causation into the microscopic realm has actually impeded the exploration of promising possibilities in fundamental physics.
That sentence has the same air of paradox about it as "Many solipsists believe ...". (Perhaps deliberately?)
A word of advice: Perhaps anyone posting a comment here with the intention of voicing a contrarian opinion and getting upvotes for disagreement should indicate the fact explicitly in their comment. Otherwise I predict that the upvote/downvote signal will be severely corrupted by people voting "normally". (Especially if these comments produce discussion -- if A posts something you strongly disagree with and B posts a very good and clearly-explained reason for disagreeing, what are you supposed to do? I suggest the right thing here is to upvote both A and B, but it's liable to be easy to get confused...)
[EDITED to add: 1. For the avoidance of doubt, of course the above is not intended to be a controversial opinion and if you vote on it you should do so according to the normal conventions, not the special ones governing this discussion. 2. It is possible to edit your own comments; if you read the above and think it's sensible, but have already posted a contrarian opinion here, you can fix it.]
Social problems are nearly impossible to solve. The methods we have developed in the hard sciences and engineering are insufficient to solve them.
English has a pronoun that can be used for either gender and, as an accident of history not some hidden agenda, said pronoun in English is "he/him/&c."
Edited: VAuroch is the best kind of correct on "neuter" pronouns. Changed, though that might make a view less controversial than I thought (all but 2 readers agree, really?) even less so :)
[Please read the OP before voting. Special voting rules apply.]
Artificial intelligences are overrated as a threat, and institutional intelligences are underrated.
The universe we perceive is probably a simulation of a more complex Universe. In a break with the simulation hypothesis, however, the simulation was not originated by humans. Instead, our existence is simply an emergent property of the physics (and stochasticity) of the simulation.
There are some I hold:
These are 10 different propositions. Fortunately I disagree with most of them so can upvote the whole bag with a clear conscience, but it would be better for this if you separated them out.
[Please read the OP before voting. Special voting rules apply.]
The SF Bay Area is a lousy place to live.
Max L.
Meta
I think LW is already too biased towards contrarian ideas - we don't need to encourage them more with threads like this.
I think this thread is for opinions that are contrarian relative to LW, and not to the mainstream.
e.g. my opinion on open borders is something that a great majority of people share but is contrarian here, shown by the fact that as of the time of writing it is currently tied for highest-voted in the thread.
Developing a rationalist identity is harmful. Promoting an "-ism" or group affiliation with the label "rational" is harmful.
[Please read the OP before voting. Special voting rules apply.]
American intellectual discourse, including within the LW community, is informed to a significant extent by folk beliefs existing in the culture at large. One of these folk beliefs is an emphasis on individualism -- both methodological and prescriptive. This is harmful: methodological individualism ignores the existence of shared cultures and coordination mechanisms that can be meaningfully abstracted across groups of individuals, and prescriptive individualism deprives those who take it seriously of community, identity, and ritual, all of which are basic human needs.
[META]
Previous incarnations of this idea: Closet survey #1, The Irrationality Game (More, II, III)
[Contrarian thread special voting rules]
I would not want to be cryonically frozen and resurrected, as my sense of who I am is tied into social factors that would be lost.
[Please read the OP before voting. Special voting rules apply.]
Politically, the traditional left is broadly correct.
Traditional as in not the radical left or any post-neocon positions. Socialism. Approximately the position of the leftmost of the two biggest political parties in a typical western-european country.
[Please read the OP before voting. Special voting rules apply.]
The humanities are not only a useful method of knowing about the world but, properly interfaced, ought to be able to significantly speed up science.
(I have a large interval for how controversial this is, so pardon me if you think it's not.)
[Please read the OP before voting. Special voting rules apply.]
There is nothing morally wrong about eating meat, and vegetarianism/veganism aren't morally superior to meat-eating.
This seems pretty similar to the irrationality game. That's not necessarily a bad thing, but personally I would try the following formula next time (perhaps this should be a regular thread?):
Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.
Ask people to avoid upvoting views they already agree with. This is to prevent the thread from becoming an echo chamber of edgy "contrarian" views that are in fact pretty widespread already.
Ask people to vote up only those comments that cause them to update or change their mind on some topic. Increased belief accuracy is what we want; let's reward that.
Ask people to downvote spam and trolling only. Through this restriction on the use of downvotes, we lessen the anticipated social punishment for sharing an unpopular view that turns out to be incorrect (which is important counterfactually).
Encourage people to make contrarian factual statements rather than cont
[meta]
Is there some way to encourage coherence in people's stated views? For some of the posts in this thread I can't tell whether I agree or disagree because I can't understand what the view is. I feel an urge to downvote such posts, although this could easily be a bad idea, since extreme contrarian views will probably seem less coherent. On the other hand, if I can't even understand what is being claimed in the first place then it's hard for me to get much benefit out of it.
[Please read the OP before voting. Special voting rules apply.]
The necessary components of AGI are quite simple, and have already been worked out in most cases. All that is required is a small amount of integrative work to build the first UFAI.
[Please read the OP before voting. Special voting rules apply.]
You can expect to have about as much success effectively and systematically teaching rationality as you could in effectively and systematically teaching wisdom. Attempts for a systematic rationality curriculum will end up as cargo cultism and hollow ingroup signaling at worst and heuristics and biases research literature scholarship at best. Once you know someone's SAT score, knowing whether they participated in rationality training will give very little additional predictive power on whether they will win at life.
[Contrarian thread, special voting rules apply]
Engaging in political processes (and learning how to do so) is a useful thing, and is consistently underrated by the LW consensus.
Dollars and utilons are not meaningfully comparable.
Edited to restate: Dollars (or any physical, countable object) cannot stand in for utilons.
My current understanding of U.S. laws on cryonics is that you have to be legally pronounced brain-dead before you can be frozen. I think that defeats the entire purpose of cryonics; I can't trust attempts to reverse-engineer my brain if I'm already brain-dead; that is, if my brain cells are already damaged beyond resuscitation. I don't live in the U.S. anyway, but sometimes I consider moving there just to be close to cryonics facilities. However, as long as I can't freeze my intact brain, I can't trust the procedure.
[ Please read the OP before voting. Special voting rules apply.]
MWI is wrong, and relational QM is right.
Physicalism is wrong, because of the mind body problem, and other considerations, and dual aspect neutral monism is right.
STEM types are too quick to reject ethical Objectivism. Moreover, moral subjectivism is horribly wrong. I don't know what the right answer is, but it could be some kind of Kantianism or Contractarianism.
Arguing to win is good, or, to be precise, it largely coincides with truth-seeking.
There is no kind of smart that makes you uniformly g...
[Please read the OP before voting. Special voting rules apply.]
Somewhere between 1950 and 1970 too many people started studying physics, and now the community of physicists has entered a self-sustaining state where writing about other people's work is valued much, much more than forming ideas. Many modern theories (string theory, AdS/CFT correspondence, renormalisation of QFT) are hard to explain because they do not consist of an idea backed by a mathematical framework but solely of this mathematical framework.
Pursuing Friendliness via mathematical proofs about the exact trustworthiness of future computing principles is misguided.
I sense this opinion is not that marginal here, but it does go against the established orthodoxy: I'm pro-specks.
[Please read the OP before voting. Special voting rules apply.]
The study and analysis of human movement is very underfunded. There is a lot of research into obtaining static information, such as DNA or X-rays, but very little into obtaining dynamic information about how humans move.
I think raising the sanity waterline is the most important thing we can do, and we do too little of it because our discussions tend to happen amongst ourselves, i.e. with people who are far from that waterline.
Any attempt to educate people, including the attempt to educate them about rationality, should focus on teens, or where possible on children, in order to create maximum impact. HPMOR does that to some degree, but Less Wrong usually presupposes cognitive skills that the very people who'd benefit most from rationality do not possess. It is very much in...
Is this supposed to be a contrarian view on LW? If it is, I am going to cry.
Unless we reach a lot of young people, we risk that in 30-40 years the "rationalist movement" will be mostly a group of old people spending most of their time complaining about how things were better when they were young. And the change will come so gradually we may not even notice it.
Changing minds is usually impossible. People will only be shifted on things they didn't feel confident about in the first place. Changes in confidence are only weakly influenced by system 2 reasoning.
[Please read the OP before voting. Special voting rules apply.]
Utilitarianism relies on so many levels of abstraction as to be practically useless in most situations.
Our society is ruled by a Narrative which has no basis in reality and is essentially religious in character. Every component of the Narrative is at best unjustified by actual evidence, and at worst absurd on the face of it. Moreover, most leading public intellectuals never seriously question the Narrative because to do so is to be expelled from their positions of prestige. The only people who can really poke holes in the Narrative are people like Peter Thiel and Nassim Taleb, whose positions of wealth and prestige are independently guaranteed.
The lesson is...
Finding better ways for structuring knowledge is more important than faster knowledge transfer through devices such as high throughput Brain Computer Interfaces.
It's a travesty that, outside of computer programming languages, so few new languages have been invented in the last two decades.
I'm upvoting top-level comments which I think are in the spirit of this post but I personally disagree with (in the case of comments with several sentences, if I disagree with their conjunction), downvoting ones I don't think are in the spirit of this post (e.g. spam, trolling, views which are clearly not contrarian either on LW nor in the mainstream), and leaving alone ones which are in the spirit of this post but I already agree with. Is that right?
What about comments I'm undecided about? I'm upvoting them if I consider them less likely than my model of the average LWer does and leaving them alone otherwise. Is that right?
[Please read the OP before voting. Special voting rules apply.]
Improving the typical human's emotional state — e.g. increasing compassion and reducing anxiety — is at least as significant to mitigating existential risks as improving the typical human's rationality.
The same is true for unusually intelligent and capable humans.
For that matter, unusually intelligent and capable humans who hate or fear most of humanity, or simply don't care about others, are unusually likely to break the world.
(Of course, there are cases where failures of rationality and failu...
1. Once you actually take human nature into account (especially the things that cause us to feel happiness, pride, regret, empathy), most seemingly-irrational human behavior actually turns out to be quite rational.
2. Conscious thought processes are often deficient in comparison to subconscious ones, both in terms of speed and in terms of the amount of information they can integrate to make decisions.
From 1 and 2 it follows that most attempts to consciously improve 'rational' behavior will end up falling short or backfiring.
History isn't over in any Fukuyamian sense; in fact the turmoil of the twenty-first century will dwarf the twentieth. A US-centered empire will likely take shape by century's end.
I will elaborate if requested.
I've always had problems with MWI, but it's just a gut feeling. I don't have the necessary specialized knowledge to be able to make a decent argument for or against it. I do concede it one advantage: it's a Copernican explanation, and so far Copernican explanations have a perfect record of having been right every time. Other than that, I find it irritating, most probably because it's the laziest plot device in science-fiction.
You can't solve AI friendliness in a vacuum. To build a friendly AI, you have to simultaneously work on the AI and the code of ethics it should use, because they are interdependent. Until you know how the AI models reality most effectively you can't know if your code of ethics uses atoms that make sense to the AI. You can try to always prioritize the ethics aspects and not make the AI any smarter until you have to do so, but you can't first make sure that you have an infallible code of ethics and only start building the AI afterwards.
[Please read the OP before voting. Special voting rules apply.]
Toxicology research is underfunded. Investing more money into finding tools to measure toxicity makes more sense than spending money on trying to understand the functioning of various genes.
[Read the OP before voting for special voting rules.]
The many worlds interpretation of quantum mechanics is categorically confused nonsense. Its origins lie in a map/territory confusion, and in the mind projection fallacy. Configuration space is a map, not territory—it is an abstraction used for describing the way that things are laid out in physical space. The density matrix (or in the special case of pure states, the state vector, or the wave function) is a subjective calculational tool used for finding probabilities. It's something that exists in the mind. Any 'interpretation' of quantum mechanics which claims that any of these things exists in reality (e.g. MWI) therefore commits the mind projection fallacy.
[Please read the OP before voting. Special voting rules apply.]
Homeownership is not a good idea for most people.
[Please read the OP before voting. Special voting rules apply.]
Sitting down and thinking really hard is a bad way of deciding what to do. A much better way is to find several trusted advisors with relevant experience and ask their advice.
This looks like two posts I saw quite a while ago where contrarian posts were also intended to be upvoted. I can't seem to find those posts (searching for 'contrarian' doesn't match anything, and searching for 'vote' is obviously useless). Nonetheless, those posts urged posters to mark each contrarian comment clearly, to indicate the inverted voting semantics and avoid unsuspecting readers being misled by the votes. Maybe someone can provide the links?
[Please read the OP before voting. Special voting rules apply.]
As long as you get the gist (think in probability instead of certainty, update incrementally when new evidence comes along), there's no additional benefit to learning Bayes' Theorem.
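For concreteness, here is a minimal sketch of that "gist" — one incremental update via Bayes' theorem. The hypothesis and test probabilities are made-up assumptions for illustration:

```python
# One Bayesian update: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
# All probabilities below are illustrative assumptions.

prior = 0.01           # initial credence in a hypothesis
p_e_given_h = 0.9      # assumed P(evidence | hypothesis)
p_e_given_not_h = 0.1  # assumed P(evidence | not hypothesis)

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(f"prior {prior:.3f} -> posterior {posterior:.3f}")  # 0.010 -> 0.083

# Repeating this as independent evidence arrives is exactly the "update
# incrementally" habit; the theorem just makes the step sizes precise.
```

The claim above amounts to saying that internalizing the qualitative habit captures nearly all the value of the quantitative formula.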
Meta: It is easy to take a position that is held by a significant number of people and exaggerate it to the point where nobody holds the exaggerated version. Does that count as a contrarian opinion (since nobody believes the exaggerated version that was stated) or as a non-contrarian opinion (since people believe the non-exaggerated version)?
(Edit: This is not intended to be a controversial opinion. It's meta.)
[Please read the OP before voting. Special voting rules apply.]
The truth of a statement depends on the context in which the statement is made.
While I am not pro-wireheading and I expect this to be only a semi-contrarian position here...
Happiness is actually far more important than people give it credit for, as a component of a reflectively coherent human utility function. About two thirds of all statements made of the form, "$HIGHER_ORDER_VALUE is more important than being happy!" are reflectively incoherent and/or pure status-signaling. The basic problem that needs addressing is of distinction between simplistic pleasures and a genuinely happy life full of variety, complexity, and s...
The United States prison system is a tragedy on par with, or exceeding, the horror of the Soviet gulags. In my opinion the only legitimate reason for incarcerating people is to prevent crime. The USA currently has 7 times the OECD average number of prisoners and crime rates similar to the OECD average. 6/7 of the US penal system population is a little over 2 million people. If we are unnecessarily incarcerating anywhere close to 2 million people right now, then the USA is a morally hellish country.
Note: less than half of the inmates in the USA are there for drug-related charges. It is very close to 50% federally, but less at the state level. Immediately pardoning all drug criminals only gets us to 3.5 times the OECD average.
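A quick sanity check of that arithmetic, using only the figures claimed in the comment itself (the underlying statistics are the commenter's claims, not independently sourced here):

```python
# Checking the comment's arithmetic with its own figures.

oecd_multiple = 7.0          # claimed US incarceration vs. OECD average
excess_fraction = 6.0 / 7.0  # fraction that would be "unnecessary" at parity
excess_prisoners = 2.0e6     # "a little over 2 million", per the comment

total_prisoners = excess_prisoners / excess_fraction
print(f"Implied total US prison population: {total_prisoners:,.0f}")  # ~2,333,333

# If roughly half of inmates are in on drug-related charges, pardoning all
# of them still leaves 7 * 0.5 = 3.5 times the OECD average, as stated.
print(f"Multiple of OECD average after drug pardons: {oecd_multiple * 0.5}")
```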
Provably-secure computing is undervalued as a mechanism for guaranteeing Friendliness from an AI.
Bitcoin and a few different altcoins can all coexist in the future and each have significant value, each fulfilling different functions based on their technical details.
As per a recent comment, this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.