One exercise you can try is imagining a world where your currently popular belief is as unpopular as eugenics is now. Almost no one thinks your belief is plausible; most people are dumbfounded or angered by your sincere assertions, and ascribe bad motives to you. Some get mad just because you make an argument that might indirectly support that view. Take 5 minutes to think about what it would be like to experience such a world. If you find yourself less attached to the belief, you might be unduly influenced by its current popularity.
(If you're inclined to contrarianism, imagine the opposite.)
The noncentral fallacy is about inappropriately treating a noncentral member of a category as if it were a central member. But your argument is that taxation isn't a member of the category "theft" at all. "Taxation is theft, but that's okay, because it's not the common, bad kind of theft" would be more in line with Scott's responses.
I think the person-affecting view shouldn't be dismissed so quickly. For example, when we talk about poverty-alleviation or health interventions in EA, we talk about how that's good because it makes actual people better off. Similarly, when something is bad, we point to people for whom it's bad, e.g. those who suffer as a consequence of an action. Saving a life isn't consequentially equivalent to creating one, because the counterfactuals are different: in the former, a life would've been nonconsensually terminated, which is bad for ...
Upon further consideration, it seems to me that while enforcement can make it worse, much of the prosociality cluster (e.g. guess culture) is oppressive in itself.
Maintaining already existing cultural traits that are off-putting to outsiders may be more effective than intentionally designing filters, because the former are already part of the community, so by keeping them we're not diluting the culture, and the process of designing filters is likely to cause contestation within the community about which of its traits are essential and which are peripheral.
It's hard to explicitly describe what the current barriers to entry are, but they include familiarity with LW ideas (and agreement with a lot of them), enjo...
I think it is both the case that:
1) a really valuable thing the community provides is a place to talk about ideas at a deep level. This is pretty rare, and it's valuable both to the sort of people who explicitly crave that, and (I believe) valuable to the world for generating ideas that are really important, and I do think this is something that is at risk of being destroyed if we lowered barriers to entry and scaled up without thinking too hard about it.
but, 2) it's also the case that
2a) there are a lot of smart people who I know would contribute valuab...
I'm a peripheral member of the Berkeley rationalist community, and some of this sounds highly concerning to me. Specifically, in practice, trying to aim at prosociality tends to produce oppressive environments, and I think we need more of people making nonconforming choices that are good for them and taking care of their own needs. I'm also generally opposed to reducing barriers to entry because I want to maintain our culture and not become more absorbed into the mainstream (which I think has happened too much already).
I think you mean ethics and not morals.
Those terms are synonymous under standard usage.
Moral responsibility is related to but not the same thing as moral obligation, and it's entirely possible for a utilitarian to say that one is morally forbidden to stand by and let a murder happen while admitting that doing so doesn't make one responsible for it. This is because responsibility is about causation, while obligation is about what one ought to do. Murderers cause murders and are therefore responsible for them, while bystanders are innocent. The utilitarian should say not that the bystander is as morally responsible as the murderer (because they aren't), but that moral responsibility isn't what ultimately matters.
I don't agree with any of these options, but I proposed the question back in 2014, so I hope I can shed some light. The difference between non-cognitivism and error theory is that the error theory supposes that people attempt to describe some feature of the world when they make moral statements, and that feature doesn't exist, while non-cognitivism holds that moral statements only express emotional attitudes ("Yay for X!") or commands ("Don't X!"), which can neither be true nor false. The difference between error theory and subjectivism...
The least answered question on the last survey was - “what is your favourite lw post, provide a link”.
IIRC, that question was added to the survey later.
I have taken the survey.
I'm a guy in a polyamorous relationship with one girlfriend, who is in several relationships simultaneously. It's not a problem - the only occasional issue is that of limited time, and that's not unique to polyamory; the same tradeoffs would be necessary for friendships as well. On the plus side, compersion is a great feeling, and another benefit I get in particular is that my girlfriend dating other people expands my social circle and introduces me to cool people whom I would have greater difficulty meeting otherwise, because I'm normally not very social with people I don't know.
I'm not a progressive, but I don't see 1 and 2 as mutually exclusive. 1 is just a different way of stating 2 - leftists classify people on an oppressor-oppressed axis, where the oppressed are people perceived to be in bad situations.
I think he meant that Kling, being a libertarian, failed the Ideological Turing Test when describing the frameworks behind the progressive and conservative viewpoints.
Clearly, we haven't been doing enough to increase other risks. We can't let pandemics stay in the lead.
As Arnold Kling suggests, progressives think of issues on an oppressor-oppressed axis. Women, poor people, and immigrants are all seen as oppressed, which is why feminism, raising the minimum wage, and support for more immigration are positions that are often found together.
Support for a higher minimum wage, increased immigration, and feminism are all typically left-wing positions, so it's not surprising that they're found together.
Thank you for doing this survey.
I would be interested to see the correlations between political identification and moral views, and between moral views and meta-ethics.
(Also, looking at my responses to the survey, I think I unintentionally marked "Please do not use my data for formal research".)
Utilitarianism is a normative ethical theory. Normative ethical theories tell you what to do (or, in the case of virtue ethics, tell you what kind of person to be). In the specific case of utilitarianism, it holds that the right thing to do (i.e. what you ought to do) is maximize world utility. In the current world, there are many people who could sacrifice a lot to generate even more world utility. Utilitarianism holds that they should do so, therefore it is demanding.
If I tell my friend that I am visiting him on egoistic grounds, it suggests that being around him and/or promoting his well-being gives me pleasure or something like that, which doesn't sound off - it sounds correct. I should hope that my friends enjoy spending time around me and take pleasure in my well-being.
I mean that pleasure, by its nature, feels utility-satisfying. I don't know what you mean by "path" in "utility-maximizing path".
Regarding inconsistent preferences, yes, that is what I'm referring to.
Ordinal utility doesn't by itself necessitate wireheading - for example, it doesn't if you are incapable of experiencing pleasure - but if you can experience pleasure, then you should wirehead, because pleasure has the quale of desirability (pleasure feels desirable).
But presumably you don't get utility from switching as such; you get utility from having A, B, or C. So if you complete a cycle for free (without me charging you), you have exactly the same utility as when you started, and if I charge you, then when you're back to A, you have lower utility.
What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?
That upon ideal rational deliberation and when having all the relevant information, a person will choose to pursue pleasure as a terminal value.
I might be perfectly happy with the expenditure per utility shift.
That's exactly the problem - you'd be happy with the expenditure per shift, but every time a full cycle was made, you'd be worse off. If you start out with A and $10, pay me a dollar to switch to B, another dollar to switch to C, and a third dollar to switch back to A, you end up with A and $7, worse off than you started, despite being satisfied with each transaction. That's the cost of inconsistency.
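To make the money-pump arithmetic concrete, here's a minimal sketch (the option names, starting cash, and $1 fee are just the ones from the example above; the cycle count is arbitrary):

```python
# Money-pump sketch: an agent with cyclic preferences (A -> B -> C -> A)
# pays a small fee for each "upgrade" and ends every full cycle back
# where it started, poorer by three fees.

# prefers[x] is the option the agent will pay to switch to from x
prefers = {"A": "B", "B": "C", "C": "A"}

def run_cycles(item, money, fee, cycles):
    for _ in range(3 * cycles):  # three trades per full cycle
        item = prefers[item]     # the agent accepts each individual trade...
        money -= fee             # ...and pays the fee every time
    return item, money

print(run_cycles("A", 10, 1, 1))  # -> ('A', 7): same option, $3 poorer
```

Each individual trade looks like a win to the agent, but the loop as a whole only transfers money to the bookie.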
Dutch booking has nothing to do with preferences; it refers entirely to doxastic probabilities.
You can be Dutch booked with preferences too. If you prefer A to B, B to C, and C to A, I can make money off of you by offering a circular trade to you.
(Note: Being continuously downvoted is making me reluctant to continue this discussion.)
One reason to be internally consistent is that it prevents you from being Dutch booked. Another reason is that it enables you to coherently get the most of what you want, without your preferences contradicting each other.
Why should the way things are be the way things are?
As far as preferences and motivation are concerned, any claim about how things should be must appeal to them as they are, or at least as they would be if they were internally consistent.
It's not a matter of what you should desire, it's a matter of what you'd desire if you were internally consistent. Theoretically, you could have values that weren't pleasure, such as if you couldn't experience pleasure.
Also, the naturalistic fallacy isn't a fallacy, because "is" and "ought" are bound together.
Terminal values are what are sought for their own sake, as opposed to instrumental values, which are sought because they ultimately produce terminal values.
Fundamentally, because pleasure feels good and preferable, and it doesn't need anything additional (such as conditioning through social norms) to make it desirable.
Supporting neo-reaction because SJWs are bad is a severe case of false dichotomy.
My position is in line with that - people are wrong about what their terminal values are, and they should realize that their actual terminal value is pleasure.
Why? We do this all the time, when we advise people to do something different from what they're currently doing.
I hardly ever post (somewhere between one post per month and one post per year), but I read my feed almost daily.
"None" is presumably included in "Other", though next year it should probably be a separate option.
I suggested the metaethics question, and I'm sorry for any inadequacies in my descriptions. I used emotivism as the example for non-cognitivism because it's the form of it with which I'm most familiar, and because it would've been difficult to come up with a general example that would encompass all forms of non-cognitivism.
It was similarly difficult to come up with a general example for constructivism - my example is along the lines of Hobbesian constructivism, with which other constructivists may disagree.
Whether this feeling is irrational depends on what causes it. It makes sense to worry about a community you like becoming popular, since it means that an increasing number of people would join it, potentially reducing its quality.
Left-wing market anarchism is anarcho-capitalism that is left-wing in its orientation. Its adherents typically support the same policies as other anarcho-capitalists, but in non-policy areas they have notable differences. They're opposed to hierarchical labor relations (though they don't want to make them illegal), which they associate with the term "capitalism", which is why they like to call themselves free-market anti-capitalists. They have a favorable view of labor unions, strikes, and worker cooperatives. They tend to believe that the current po...
Maybe it's straightforward to discover when the fetus can feel pain, but it's not straightforward that being able to feel pain should be the cutoff point.
I'm pro-infanticide, but there's also a consistent position of "the line between not having and having a right to not be killed is crossed while in the womb". Another plausible position is evictionism - "Regardless of whether you have the right to kill a fetus, you aren't obligated to support it and are free to expel it if you wish".
Not sure if this counts, but though my views can roughly be described as "libertarian", I have a mix of moderate and radical positions that I rarely see found together. On the moderate side, I favor a carbon tax, think intellectual property protection is justified in principle, want a government-managed fiat currency (and don't want to abolish or audit the Fed), and probably other positions that I'm missing here. On the radical side, I want to abolish the welfare state, open the borders, and greatly reduce the military budget and only use the military for defensive wars.
I usually see "left-libertarianism" used to refer to left-wing market anarchism, not to something between progressivism and libertarianism.
Finished the survey. Didn't answer the SSC question even though I read it regularly because I plan to take the edited version when it's posted there, and I also didn't answer the digit ratio question.
Regarding scope sensitivity and the oily bird test, one man's modus ponens is another's modus tollens. Maybe if you're willing to save one bird, you should be willing to donate to save many more birds. But maybe the reverse is true - you're not willing to save thousands and thousands of birds, so you shouldn't save one bird, either. You can shut up and multiply, but you can also shut up and divide.
Do nihilists think they have no goals (aka terminal values), or that they don't have goals about fulfilling others' goals, or is it something else?
I am not a nihilist, and I don't know if I'd be able to pass an Ideological Turing Test as one, but to give my best answer to this, the nihilist would say that there are no moral oughts. How they connect this to terminal goals varies depending on the nihilist.
...Ok, so would it be right to say this: Utilitarianism is giving equal weight to everyone's utility function (including yours) in your "...
That is an inaccurate definition of nihilism because it doesn't match what nihilists actually believe. Not only do they reject intrinsic morality, they reject all forms of morality altogether. Someone who believes in any kind of moral normativity (e.g. a utilitarian) cannot be a nihilist.
Utilitarianism is used as "the normative ethical theory that one ought to maximize the utility of the world". This is in contrast to something like egoism ("the normative ethical theory that one ought to maximize one's own utility") and other forms of consequentialism.
If you want less fine-grained answers, there's the consequentialism/deontology/virtue ethics question in the earlier part of the survey.
For Super Extra Bonus Questions: (feel free to modify the answer choices)
With which of these metaethical positions do you most identify?
For relationship status, a polyamorous person can be married and in a relationship at the same time, which is a problem. Similarly, someone can be living with their partner/spouse and additional roommates. Also, "Liberal" in the Political section should probably be renamed to "Progressive", to avoid collisions with how "liberal" is used in Europe and in political philosophy.
Besides the scope of a person's boundaries, there's also variance in how bad a boundary violation feels. Those of us who experience boundary violations as particularly negative might prefer others not to try to find benign violations, even if the violator is well-intentioned and sincerely promises to never do that specific thing again. For these people, would-be violators' fear of punishment is a feature. The same goes for people unlikely to experience a benign violation because their gap between social and personal boundaries is small.