One exercise you can try is imagining a world where your currently popular belief is as unpopular as eugenics is now. Almost no one thinks your belief is plausible; most people are dumbfounded or angered by your sincere assertions, and ascribe bad motives to you. Some get mad just because you make an argument that might indirectly support that view. Take 5 minutes to think about what it would be like to experience such a world. If you find yourself less attached to the belief, you might be unduly influenced by its current popularity.
(If you're inclined to contrarianism, imagine the opposite.)
The noncentral fallacy is about inappropriately treating a noncentral member of a category as if it were a central member. But your argument is that taxation isn't a member of the category "theft" at all. "Taxation is theft, but that's okay, because it's not the common, bad kind of theft" would be more in line with Scott's responses.
I think the person-affecting view shouldn't be dismissed so quickly. For example, when we talk about poverty-alleviation or health interventions in EA, we talk about how that's good because it makes actual people better off. Similarly, when something is bad, we point to people for whom it's bad, e.g. those who suffer as a consequence of an action. Saving a life isn't consequentially equivalent to creating one, because the counterfactuals are different: in the former, a life would've been nonconsensually terminated, which is bad for ...
The maintenance of already existing cultural traits that are off-putting to outsiders may be more effective than intentionally designing filters, because the former are already part of the community, so by keeping them we're not diluting the culture, and the process of designing filters is likely to cause contestation within the community about which of its traits are essential and which are peripheral.
It's hard to explicitly describe what the current barriers to entry are, but they include familiarity with LW ideas (and agreement with a lot of them), enjo...
I think it is both the case that:
1) a really valuable thing the community provides is a place to talk about ideas at a deep level. This is pretty rare, and it's valuable both to the sort of people who explicitly crave that, and (I believe), valuable to the world for generating ideas that are really important, and I do think this is something that is at risk of being destroyed if we lowered barriers to entry and scaled up without thinking too hard about it.
but, 2) it's also the case that
2a) there are a lot of smart people who I know would contribute valuab...
I'm a peripheral member of the Berkeley rationalist community, and some of this sounds highly concerning to me. Specifically, in practice, trying to aim at prosociality tends to produce oppressive environments, and I think we need more of people making nonconforming choices that are good for them and taking care of their own needs. I'm also generally opposed to reducing barriers to entry because I want to maintain our culture and not become more absorbed into the mainstream (which I think has happened too much already).
Moral responsibility is related to but not the same thing as moral obligation, and it's completely possible for a utilitarian to say one is morally forbidden to be a bystander and let a murder happen while admitting that doing so doesn't make you responsible for it. This is because responsibility is about causation and obligation is about what one ought to do. Murderers cause murders and are therefore responsible for them, while bystanders are innocent. The utilitarian should say not that the bystander is as morally responsible as the murderer (because they aren't), but that moral responsibility isn't what ultimately matters.
I don't agree with any of these options, but I proposed the question back in 2014, so I hope I can shed some light. The difference between non-cognitivism and error theory is that the error theory supposes that people attempt to describe some feature of the world when they make moral statements, and that feature doesn't exist, while non-cognitivism holds that moral statements only express emotional attitudes ("Yay for X!") or commands ("Don't X!"), which can neither be true nor false. The difference between error theory and subjectivism...
I'm a guy in a polyamorous relationship with one girlfriend, who is in several relationships simultaneously. It's not a problem - the only occasional issue is that of limited time, and that's not unique to polyamory, it would be necessary to make those tradeoffs for friendships as well. On the plus side, compersion is a great feeling, and another benefit that I get in particular is that my girlfriend dating other people expands my social circle and introduces me to cool people, whom I would have greater difficulty meeting otherwise, because I'm normally not very social with people I don't know.
Thank you for doing this survey.
I would be interested to see the correlations between political identification and moral views, and between moral views and meta-ethics.
(Also, looking at my responses to the survey, I think I unintentionally marked "Please do not use my data for formal research".)
Utilitarianism is a normative ethical theory. Normative ethical theories tell you what to do (or, in the case of virtue ethics, tell you what kind of person to be). In the specific case of utilitarianism, it holds that the right thing to do (i.e. what you ought to do) is maximize world utility. In the current world, there are many people who could sacrifice a lot to generate even more world utility. Utilitarianism holds that they should do so, therefore it is demanding.
If I tell my friend that I am visiting him on egoistic grounds, it suggests that being around him and/or promoting his well-being gives me pleasure or something like that, which doesn't sound off - it sounds correct. I should hope that my friends enjoy spending time around me and take pleasure in my well-being.
Regarding inconsistent preferences, yes, that is what I'm referring to.
Ordinal utility doesn't by itself necessitate wireheading - for example, it wouldn't if you were incapable of experiencing pleasure - but if you can experience pleasure, then you should wirehead, because pleasure has the quale of desirability (pleasure feels desirable).
What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?
That upon ideal rational deliberation and when having all the relevant information, a person will choose to pursue pleasure as a terminal value.
I might be perfectly happy with the expenditure per utility shift.
That's exactly the problem - you'd be happy with the expenditure per shift, but every time a full cycle was made, you'd be worse off. If you start out with A and $10, pay me a dollar to switch to B, another dollar to switch to C, and a third dollar to switch to A, you'd end up with A and $7, worse off than you started, despite being satisfied with each transaction. That's the cost of inconsistency.
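The money-pump arithmetic above can be sketched as a tiny simulation (the function name and setup are made up for illustration; the point is just that cyclic preferences let each trade look acceptable while the cycle as a whole loses money):

```python
# Illustrative money pump for cyclic preferences A < B < C < A.
# The agent pays $1 for each "upgrade" it locally prefers.

def run_money_pump(start_money, price_per_trade, trades):
    """Apply a sequence of trades starting from item A,
    paying `price_per_trade` for each swap."""
    money = start_money
    holding = "A"
    for new_item in trades:
        money -= price_per_trade  # each individual swap looks fine to the agent
        holding = new_item
    return holding, money

# One full cycle A -> B -> C -> A costs three dollars:
item, money = run_money_pump(10, 1, ["B", "C", "A"])
print(item, money)  # A 7 -- holding the same item, $3 poorer
```

Each transaction is individually agreeable, yet after the cycle the agent holds exactly what it started with, minus $3.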
(Note: Being continuously downvoted is making me reluctant to continue this discussion.)
One reason to be internally consistent is that it prevents you from being Dutch booked. Another reason is that it enables you to coherently be able to get the most of what you want, without your preferences contradicting each other.
Why should the way things are be the way things are?
As far as preferences and motivation are concerned, any account of how things should be must appeal to preferences as they are, or at least as they would be if they were internally consistent.
It's not a matter of what you should desire, it's a matter of what you'd desire if you were internally consistent. Theoretically, you could have values that weren't pleasure, such as if you couldn't experience pleasure.
Also, the naturalistic fallacy isn't a fallacy, because "is" and "ought" are bound together.
I suggested the metaethics question, and I'm sorry for any inadequacies in my descriptions. I used emotivism as the example for non-cognitivism because it's the form of it with which I'm most familiar, and because it would've been difficult to come up with a general example that would encompass all forms of non-cognitivism.
It was similarly difficult to come up with a general example for constructivism - my example is along the lines of Hobbesian constructivism, with which other constructivists may disagree.
Left-wing market anarchism is anarcho-capitalism that is left-wing in its orientation. They typically support the same policies as other anarcho-capitalists, but in non-policy areas, they have notable differences. They're opposed to hierarchical labor relations (though they don't want to make them illegal), with which they associate the term "capitalism", and which is why they like to call themselves free-market anti-capitalists. They have a favorable view of labor unions, strikes, and worker cooperatives. They tend to believe that the current po...
I'm pro-infanticide, but there's also a consistent position of "the line between not having and having a right to not be killed is crossed while in the womb". Another plausible position is evictionism - "Regardless of whether you have the right to kill a fetus, you aren't obligated to support it and are free to expel it if you wish".
Not sure if this counts, but though my views can roughly be described as "libertarian", I have a mix of moderate and radical positions that I rarely see found together. On the moderate side, I favor a carbon tax, think intellectual property protection is justified in principle, want a government-managed fiat currency (and don't want to abolish or audit the Fed), and probably other positions that I'm missing here. On the radical side, I want to abolish the welfare state, open the borders, and greatly reduce the military budget and only use the military for defensive wars.
Regarding scope sensitivity and the oily bird test, one man's modus ponens is another's modus tollens. Maybe if you're willing to save one bird, you should be willing to donate to save many more birds. But maybe the reverse is true - you're not willing to save thousands and thousands of birds, so you shouldn't save one bird, either. You can shut up and multiply, but you can also shut up and divide.
Do nihilists think they have no goals (aka terminal values) or do nihilists think they don't have goals about fulfilling others' goals or is it something else?
I am not a nihilist, and I don't know if I'd be able to pass an Ideological Turing Test as one, but to give my best answer to this, the nihilist would say that there are no moral oughts. How they connect this to terminal goals varies depending on the nihilist.
...Ok so would that be right to say this?: Utilitarianism is giving equal weight to everyone's utility function (including yours) in your "...
That is an inaccurate definition of nihilism because it doesn't match what nihilists actually believe. Not only do they reject intrinsic morality, they reject all forms of morality altogether. Someone who believes in any kind of moral normativity (e.g. a utilitarian) cannot be a nihilist.
Utilitarianism is used as "the normative ethical theory that one ought to maximize the utility of the world". This is in contrast to something like egoism ("the normative ethical theory that one ought to maximize one's own utility") and other forms of consequentialism.
For Super Extra Bonus Questions: (feel free to modify the answer choices)
With which of these metaethical positions do you most identify?
For relationship status, a polyamorous person can be married and in a relationship at the same time, which is a problem. Similarly, someone can be living with their partner/spouse and additional roommates. Also, "Liberal" in the Political section should probably be renamed to "Progressive", to avoid collisions with how "liberal" is used in Europe and in political philosophy.
Besides the scope of a person's boundaries, there's also variance in how bad a boundary violation feels. Those of us who experience boundary violations as particularly negative might prefer others not to try to find benign violations, even if the violator is well-intentioned and sincerely promises to never do that specific thing again. For these people, would-be violators' fear of punishment is a feature. The same goes for people unlikely to experience a benign violation because their gap between social and personal boundaries is small.