All of blacktrance's Comments + Replies

Besides the scope of a person's boundaries, there's also variance in how bad a boundary violation feels. Those of us who experience boundary violations as particularly negative might prefer others not to try to find benign violations, even if the violator is well-intentioned and sincerely promises to never do that specific thing again. For these people, would-be violators' fear of punishment is a feature. The same goes for people unlikely to experience a benign violation because their gap between social and personal boundaries is small.

One exercise you can try is imagining a world where a belief of yours that is currently popular is as unpopular as eugenics is now. Almost no one thinks your belief is plausible; most people are dumbfounded or angered by your sincere assertions, and ascribe bad motives to you. Some get mad just because you make an argument that might indirectly support that view. Take 5 minutes to think about what it would be like to experience such a world. If you find yourself less attached to the belief, you might be unduly influenced by its current popularity.
(If you're inclined to contrarianism, imagine the opposite.)

The noncentral fallacy is about inappropriately treating a noncentral member of a category as if it were a central member. But your argument is that taxation isn't a member of the category "theft" at all. "Taxation is theft, but that's okay, because it's not the common, bad kind of theft" would be more in line with Scott's responses.

4Raemon
I think one thing going on here is that normally, the word "theft" comes tacked on with metadata about "and theft is bad, boo" (the way most people use words). The claim "taxation is theft" is specifically meant to associate it with "and that's bad, boo".

I think the person-affecting view shouldn't be dismissed so quickly. For example, when we talk about poverty-alleviation or health interventions in EA, we talk about how that's good because it makes actual people better off. Similarly, when something is bad, we point to people for whom it's bad, e.g. those who suffer as a consequence of an action. Saving a life isn't consequentially equivalent to creating one, because the counterfactuals are different: in the former, a life would've been nonconsensually terminated, which is bad for ... (read more)

1Richard_Ngo
Interesting points. I agree that the arguments against non-person-affecting views are rather compelling, but still find arguments against person-affecting views even more persuasive. Person-affecting views can easily endorse extinction if it's going to occur when almost everyone currently alive has died anyway - for example, if there is a meteorite 150 years away from destroying the earth and we could easily avert it but would need to raise taxes by 1% to do so, I think most person-affecting views would say to let it hit (assuming it's a secret meteorite, etc). There's also a second way in which they endorse extinction. Almost nobody can stomach the claim that it's morally neutral to create people who you know will be tortured for their whole lives; therefore, person-affecting views often end up endorsing an asymmetry where it's bad to create people with net-negative lives but neutral to create people with net-positive lives. But unless you predict an incredibly utopian future, that's an argument for human extinction right now - since there will otherwise be enough net-negative people in the future to outweigh the interests of everyone currently alive. I agree that it's weird to think of saving a life as equivalent to creating one, but can we actually defend saving a life as being more important in general? Most basic case: either you can save a 20 year old who will live another 60 years, or else have a child who will live 60 years total. You say that the former is better because it avoids nonconsensual termination. But it doesn't! The 20 year old still dies eventually... Of course, in the latter case you have two nonconsensual deaths not one, but there's an easy fix for that: just raise the child so it won't be scared of death! I know that sounds stupid but it's sort of what I was getting at when I claimed that some arguments about death are circular: they only apply to people who already think that death is bad. In fact it seems like most people are pretty comf

Upon further consideration, it seems to me that while enforcement can make it worse, much of the prosociality cluster (e.g. guess culture) is oppressive in itself.

6ChristianKl
It might be my German background, but not everybody who's social operates on guess culture.
5the gears to ascension
I don't think you can escape guess culture. You can only pretend you don't have it, and then pay the price.
3Raemon
I wouldn't conflate guess culture and prosociality - I think those are pretty different axes.

The maintenance of already existing cultural traits that are off-putting to outsiders may be more effective than intentionally designing filters, because the former are already part of the community, so by keeping them we're not diluting the culture, and the process of designing filters is likely to cause contestation within the community about which of its traits are essential and which are peripheral.

It's hard to explicitly describe what the current barriers to entry are, but they include familiarity with LW ideas (and agreement with a lot of them), enjo... (read more)

I think it is both the case that:

1) a really valuable thing the community provides is a place to talk about ideas at a deep level. This is pretty rare, and it's valuable both to the sort of people who explicitly crave that, and (I believe) valuable to the world for generating ideas that are really important, and I do think this is something that is at risk of being destroyed if we lowered barriers to entry and scaled up without thinking too hard about it.

but, 2) it's also the case that

2a) there are a lot of smart people who I know would contribute valuab... (read more)

I'm a peripheral member of the Berkeley rationalist community, and some of this sounds highly concerning to me. Specifically, in practice, trying to aim at prosociality tends to produce oppressive environments, and I think we need more of people making nonconforming choices that are good for them and taking care of their own needs. I'm also generally opposed to reducing barriers to entry because I want to maintain our culture and not become more absorbed into the mainstream (which I think has happened too much already).

3bogus
I don't think this is really true. I think that lots of authoritarian-minded people nowadays try to use pro-sociality and vague ideas of "social progress", "a safe and welcoming community for everyone", "a well-tended garden" and the like as cover for what are really oppressive environments (often focused on enforcement of petty etiquette and narrow, cultish shibboleths, to the detriment of actual, ethically significant issues). But supposing that "trying to aim at prosociality tends to produce oppressive environments" amounts to arguing that open and non-oppressive environments are inherently "antisocial". I see no reason to assume this is the case.
9tristanm
Can you explain more about what you consider to be the current barriers to entry and how they work? The things that seem to turn off people to the rationality community are more like cultural issues, than things deliberately put in place to filter out unwanted members. I'm not necessarily against barriers in general as long as they are well engineered, but this post seems to be more about reducing the problems with the things that have emerged somewhat organically within the original rationality groups.
4Raemon
I agree that enforced prosociality can be oppressive, and plan to discuss it in an upcoming post. (I'll respond to each comment separately for easier threading).

I think you mean ethics and not morals.

Those terms are synonymous under standard usage.

-2Lumifer
Depends on your standards. Under my standard usage they are not synonyms. Morals specify the value systems (what you believe) and ethics specify practical decisions in real life (what you do).

Moral responsibility is related to but not the same thing as moral obligation, and it's completely possible for a utilitarian to say one is morally forbidden to be a bystander and let a murder happen while admitting that doing so doesn't make you responsible for it. This is because responsibility is about causation and obligation is about what one ought to do. Murderers cause murders and are therefore responsible for them, while bystanders are innocent. The utilitarian should say not that the bystander is as morally responsible as the murderer (because they aren't), but that moral responsibility isn't what ultimately matters.

I don't agree with any of these options, but I proposed the question back in 2014, so I hope I can shed some light. The difference between non-cognitivism and error theory is that the error theory supposes that people attempt to describe some feature of the world when they make moral statements, and that feature doesn't exist, while non-cognitivism holds that moral statements only express emotional attitudes ("Yay for X!") or commands ("Don't X!"), which can neither be true nor false. The difference between error theory and subjectivism... (read more)

The least answered question on the last survey was - “what is your favourite lw post, provide a link”.

IIRC, that question was added to the survey later.

2buybuydandavis
I don't remember even seeing that.

I have taken the survey.

I'm a guy in a polyamorous relationship with one girlfriend, who is in several relationships simultaneously. It's not a problem - the only occasional issue is that of limited time, and that's not unique to polyamory, it would be necessary to make those tradeoffs for friendships as well. On the plus side, compersion is a great feeling, and another benefit that I get in particular is that my girlfriend dating other people expands my social circle and introduces me to cool people, whom I would have greater difficulty meeting otherwise, because I'm normally not very social with people I don't know.

I'm not a progressive, but I don't see 1 and 2 as mutually exclusive. 1 is just a different way of stating 2 - leftists classify people on an oppressor-oppressed axis, where the oppressed are people perceived to be in bad situations.

3gjm
I think "oppressed" is more specific than "in a bad situation", and "oppressor" is much more specific than "in a comfortable situation". Saying that lefties classify people on an oppressor/oppressed axis suggests that they're addicted to what's sometimes called "politics of envy" -- it's not enough to help the poor, the rich must be made to suffer because they are evil oppressors, etc. I'm sure there are people who think (and feel) that way, but I think it's a straw man if presented as an analysis of how lefties generally see the world. I think most lefties would agree with me that when people are in bad situations it doesn't have to be because anyone's oppressing them. They might just have been unlucky, or they might in some sense have done it to themselves (one place where Left and Right commonly disagree: on the left, this is not usually taken to mean that they shouldn't be helped). And I think most lefties would agree with me that someone very comfortably off is not necessarily oppressing anyone. (There are some who would say that our society systematically favours some groups and screws others over, to the benefit of the former at the expense of the latter, and that that means that being a rich straight white educated able-bodied man does make you in some sense an oppressor. I, and I think many others, largely agree with the first bit of that but think the conclusion that the rich (etc.) are oppressors is misguided: to benefit from an oppressive system is not necessarily to be an oppressor. And I, and I think many others, think "oppressed" is too strong and too specific a word to describe the ways in which things are bad for most statistically-disadvantaged groups.)

I think he meant that Kling, being a libertarian, failed the Turing Test when describing the framework behind the progressive and conservative viewpoints.

2Luke_A_Somers
I get that. But he wasn't even TAKING the Turing test. He described it fairly accurately, if in terms that people on the inside wouldn't have used. So?
2Prismattic
Correct.

Clearly, we haven't been doing enough to increase other risks. We can't let pandemic stay in the lead.

1Ander
Get to work on making more AIs everyone!

As Arnold Kling suggests, progressives think of issues on an oppressor-oppressed axis. Women, poor people, and immigrants are all seen as oppressed, which is why feminism, raising the minimum wage, and support for more immigration are positions that are often found together.

7Prismattic
In my experience, libertarians tend to think highly of Arnold Kling's taxonomy, and liberals and conservatives do not. I regard it as a Turing test fail.

Support for a higher minimum wage, increased immigration, and feminism are all typically left-wing positions, so it's not surprising that they're found together.

5Gunnar_Zarncke
Thanks. Maybe it's obvious for you but it does surprise me. Maybe it's more clear over there in the U.S. I'm nonetheless surprised by the magnitude if they are only connected via such an unspecific bucket as 'left-wing'.

Thank you for doing this survey.

I would be interested to see the correlations between political identification and moral views, and between moral views and meta-ethics.

(Also, looking at my responses to the survey, I think I unintentionally marked "Please do not use my data for formal research".)

Utilitarianism is a normative ethical theory. Normative ethical theories tell you what to do (or, in the case of virtue ethics, tell you what kind of person to be). In the specific case of utilitarianism, it holds that the right thing to do (i.e. what you ought to do) is maximize world utility. In the current world, there are many people who could sacrifice a lot to generate even more world utility. Utilitarianism holds that they should do so, therefore it is demanding.

If I tell my friend that I am visiting him on egoistic grounds, it suggests that being around him and/or promoting his well-being gives me pleasure or something like that, which doesn't sound off - it sounds correct. I should hope that my friends enjoy spending time around me and take pleasure in my well-being.

I mean that pleasure, by its nature, feels utility-satisfying. I don't know what you mean by "path" in "utility-maximizing path".

Regarding inconsistent preferences, yes, that is what I'm referring to.

Ordinal utility doesn't by itself necessitate wireheading, such as if you are incapable of experiencing pleasure, but if you can experience it, then you should wirehead, because pleasure has the quale of desirability (pleasure feels desirable).

0Transfuturist
And you think that "desirability" in that statement refers to the utility-maximizing path?

But presumably you don't get utility from switching as such, you get utility from having A, B, or C, so if you complete a cycle for free (without me charging you), you have exactly the same utility as when you started, and if I charge you, then when you're back to A, you have lower utility.

0Transfuturist
If I have utility in the state of the world, as opposed to the transitions between A, B, and C, I don't see how it's possible for me to have cyclic preferences, unless you're claiming that my utility doesn't have ordinality for some reason. If that's the sort of inconsistency in preferences you're referring to, then yes, it's bad, but I don't see how ordinal utility necessitates wireheading.

What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?

That upon ideal rational deliberation and when having all the relevant information, a person will choose to pursue pleasure as a terminal value.

I might be perfectly happy with the expenditure per utility shift.

That's exactly the problem - you'd be happy with the expenditure per shift, but every time a full cycle was made, you'd be worse off. If you start out with A and $10, pay me a dollar to switch to B, another dollar to switch to C, and a third dollar to switch to A, you'd end up with A and $7, worse off than you started, despite being satisfied with each transaction. That's the cost of inconsistency.
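The money pump described above can be sketched as a toy simulation (the function name and parameters are illustrative assumptions, not anything from the thread):

```python
# Hypothetical money pump: an agent with cyclic preferences (prefers B to A,
# C to B, and A to C) will pay a small fee for each individually satisfying swap.

def run_money_pump(start_item="A", start_cash=10, cycles=1, fee=1):
    """Trade around the preference cycle A -> B -> C -> A, charging a fee per swap."""
    cycle = ["A", "B", "C"]
    item, cash = start_item, start_cash
    for _ in range(cycles * len(cycle)):
        # Swap to the next item in the cycle, which the agent strictly prefers.
        item = cycle[(cycle.index(item) + 1) % len(cycle)]
        cash -= fee  # the agent happily pays for each transaction
    return item, cash

item, cash = run_money_pump()
# After one full cycle the agent holds the same item A but has paid $3.
```

Each individual trade looks like a gain to the agent, yet the net effect of a full cycle is strictly negative: same holdings, less money.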

0Transfuturist
And 3 utilons. I see no cost there.

Dutch booking has nothing to do with preferences; it refers entirely to doxastic probabilities.

You can be Dutch booked with preferences too. If you prefer A to B, B to C, and C to A, I can make money off of you by offering a circular trade to you.

1Transfuturist
And if I'm unaware that such a strategy is taking place. Even if I was aware, I am a dynamic system evolving in time, and I might be perfectly happy with the expenditure per utility shift. Unless I was opposed to that sort of arrangement, I find nothing wrong with that. It is my prerogative to spend resources to satisfy my preferences.

(Note: Being continuously downvoted is making me reluctant to continue this discussion.)

One reason to be internally consistent is that it prevents you from being Dutch booked. Another reason is that it enables you to coherently be able to get the most of what you want, without your preferences contradicting each other.

Why should the way things are be the way things are?

As far as preferences and motivation are concerned, however things should be must appeal to them as they are, or at least as they would be if they were internally consistent.

0Transfuturist
Retracted: Dutch booking has nothing to do with preferences; it refers entirely to doxastic probabilities. I very much disagree. I think you're couching this deontological moral stance as something more than the subjective position that it is. I find your morals abhorrent, and your normative statements regarding others' preferences to be alarming and dangerous.

It's not a matter of what you should desire, it's a matter of what you'd desire if you were internally consistent. Theoretically, you could have values that weren't pleasure, such as if you couldn't experience pleasure.

Also, the naturalistic fallacy isn't a fallacy, because "is" and "ought" are bound together.

0Transfuturist
Why is the internal consistency of my preferences desirable, particularly if it would lead me to prefer something I am rather emphatically against? Why should the way things are be the way things are?

Terminal values are what are sought for their own sake, as opposed to instrumental values, which are sought because they ultimately produce terminal values.

0DefectiveAlgorithm
I know what terminal values are and I apologize if the intent behind my question was unclear. To clarify, my request was specifically for a definition in the context of human beings - that is, entities with cognitive architectures with no explicitly defined utility functions and with multiple interacting subsystems which may value different things (ie. emotional vs deliberative systems). I'm well aware of the huge impact my emotional subsystem has on my decision making. However, I don't consider it 'me' - rather, I consider it an external black box which interacts very closely with that which I do identify as me (mostly my deliberative system). I can acknowledge the strong influence it has on my motivations whilst explicitly holding a desire that this not be so, a desire which would in certain contexts lead me to knowingly make decisions that would irreversibly sacrifice a significant portion of my expected future pleasure. To follow up on my initial question, it had been intended to lay the groundwork for this followup: What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?

Fundamentally, because pleasure feels good and preferable, and it doesn't need anything additional (such as conditioning through social norms) to make it desirable.

-1Transfuturist
Why should I desire what you describe? What's wrong with values more complex than a single transistor? Also, naturalistic fallacy.

Supporting neo-reaction because SJWs are bad is a severe case of false dichotomy.

My position is in line with that - people are wrong about what their terminal values are, and they should realize that their actual terminal value is pleasure.

-2DefectiveAlgorithm
Can you define 'terminal values', in the context of human beings?
1Transfuturist
Why is my terminal value pleasure? Why should I want it to be?

Why? We do this all the time, when we advise people to do something different from what they're currently doing.

3Transfuturist
No, we don't. That's making recommendations as to how they can attain their preferences. That you don't seem to understand this distinction is concerning. Instrumental and terminal values are different.

I hardly ever post (somewhere between one post per month and one post per year), but I read my feed almost daily.

"None" is presumably included in "Other", though next year it should probably be a separate option.

I suggested the metaethics question, and I'm sorry for any inadequacies in my descriptions. I used emotivism as the example for non-cognitivism because it's the form of it with which I'm most familiar, and because it would've been difficult to come up with a general example that would encompass all forms of non-cognitivism.

It was similarly difficult to come up with a general example for constructivism - my example is along the lines of Hobbesian constructivism, with which other constructivists may disagree.

1Leonhart
I was mostly irked that "the position from the Sequences" wasn't an option (although I quite understand why you'd want to avoid parochial signalling), as neither your definition of subjectivist nor substantive realist seemed to capture it adequately. I eventually opted for the latter.
3Creutzer
The formulation of the question didn't quite make it clear that emotivism was just intended as an example for one possible non-cognitivist position. That's what I objected to. As an example, it's fine of course - it is, after all, the most well-known such position.

Whether this feeling is irrational depends on what causes it. It makes sense to worry about a community you like becoming popular, since it means that an increasing number of people would join it, potentially reducing its quality.

3Omid
I don't think that's what caused my angst, I think I was worried about becoming less special because more people were reading my favorite blog.

Left-wing market anarchism is anarcho-capitalism that is left-wing in its orientation. They typically support the same policies as other anarcho-capitalists, but in non-policy areas, they have notable differences. They're opposed to hierarchical labor relations (though they don't want to make them illegal), with which they associate the term "capitalism", and which is why they like to call themselves free-market anti-capitalists. They have a favorable view of labor unions, strikes, and worker cooperatives. They tend to believe that the current po... (read more)

0[anonymous]
Replies to the comment you are now reading accurately describe my ideas so the original post has been replaced by this disclaimer to spare your time :)
0[anonymous]
The term "left-libertarianism" is kind of ambiguous. But I don't think it usually means anarcho-capitalism.

Maybe it's straightforward to discover when the fetus can feel pain, but it's not straightforward that being able to feel pain should be the cutoff point.

I'm pro-infanticide, but there's also a consistent position of "the line between not having and having a right to not be killed is crossed while in the womb". Another plausible position is evictionism - "Regardless of whether you have the right to kill a fetus, you aren't obligated to support it and are free to expel it if you wish".

Not sure if this counts, but though my views can roughly be described as "libertarian", I have a mix of moderate and radical positions that I rarely see found together. On the moderate side, I favor a carbon tax, think intellectual property protection is justified in principle, want a government-managed fiat currency (and don't want to abolish or audit the Fed), and probably other positions that I'm missing here. On the radical side, I want to abolish the welfare state, open the borders, and greatly reduce the military budget and only use the military for defensive wars.

I usually see "left-libertarianism" used to refer to left-wing market anarchism, not to something between progressivism and libertarianism.

6Jayson_Virissimo
During my time as a libertarian, it was common to refer to people like Nancy as "liberaltarians". You are correct that "left-libertarian" usually signifies a certain kind of anarchist, not a progressive/libertarian hybrid.
2Luke_A_Somers
I can't make sense of 'left-wing market anarchism' except as something that is in fact kind of in between progressivism and libertarianism, even if it's not really what I'm thinking of.

Finished the survey. Didn't answer the SSC question even though I read it regularly because I plan to take the edited version when it's posted there, and I also didn't answer the digit ratio question.

Regarding scope sensitivity and the oily bird test, one man's modus ponens is another's modus tollens. Maybe if you're willing to save one bird, you should be willing to donate to save many more birds. But maybe the reverse is true - you're not willing to save thousands and thousands of birds, so you shouldn't save one bird, either. You can shut up and multiply, but you can also shut up and divide.

Do nihilists think they have no goals (aka terminal values) or do nihilists think they don't have goals about fulfilling others' goals or is it something else?

I am not a nihilist, and I don't know if I'd be able to pass an Ideological Turing Test as one, but to give my best answer to this, the nihilist would say that there are no moral oughts. How they connect this to terminal goals varies depending on the nihilist.

Ok so would that be right to say this?: Utilitarianism is giving equal weight to everyone's utility function (including yours) in your

... (read more)
2Mati_Roy
Ok thanks for your answers!

That is an inaccurate definition of nihilism because it doesn't match what nihilists actually believe. Not only do they reject intrinsic morality, they reject all forms of morality altogether. Someone who believes in any kind of moral normativity (e.g. a utilitarian) cannot be a nihilist.

Utilitarianism is used as "the normative ethical theory that one ought to maximize the utility of the world". This is in contrast to something like egoism ("the normative ethical theory that one ought to maximize one's own utility") and other forms of consequentialism.

0Mati_Roy
Thank you for your answer. Do nihilists think they have no goals (aka terminal values) or do nihilists think they don't have goals about fulfilling others' goals or is it something else? Ok so would that be right to say this?: Utilitarianism is giving equal weight to everyone's utility function (including yours) in your "meta" utility function. Egoism means you don't consider others' utility functions in your utility function. And then there is everything in-between (meaning giving more weight to your utility function than to others' utility functions in your "meta" utility function).

If you want less fine-grained answers, there's the consequentialism/deontology/virtue ethics question in the earlier part of the survey.

For Super Extra Bonus Questions: (feel free to modify the answer choices)

With which of these metaethical positions do you most identify?

  • Non-cognitivism: Moral statements don't express propositions and can neither be true nor false. "Murder is wrong" means something like "Boo murder!".
  • Error theory: Moral statements have a truth-value, but attempt to describe features of the world that don't exist. "Murder is wrong" and "Murder is right" are both false statements because moral rightness and wrongness aren't features
... (read more)
2A1987dM
Or “Let's please not murder”, which doesn't express a proposition either.
-1Mati_Roy
"Moral nihilism is the meta-ethical view that nothing is intrinsically moral or immoral." (http://en.wikipedia.org/wiki/Moral_nihilism) Utility functions (aka morality) are (is) in the mind, not in Nature. That would probably be the answer of most LWers. Otherwise, you'll have to tell me what you mean by morality. Is utilitarianism used as "maximizing happiness" or "maximizing utility". If it's "maximizing utility", well isn't that everyone's position? What differs is simply what counts as "utility".
6A1987dM
Could you have less fine-grained answers, so that I don't have to spend a week on the SEP just to know what the answers mean?

For relationship status, a polyamorous person can be married and in a relationship at the same time, which is a problem. Similarly, someone can be living with their partner/spouse and additional roommates. Also, "Liberal" in the Political section should probably be renamed to "Progressive", to avoid collisions with how "liberal" is used in Europe and in political philosophy.
