
Thoughts on moral intuitions

Post author: Kaj_Sotala 30 June 2012 06:01AM (39 points)

The style of the following article is slightly different from most LW articles, because I originally posted it on my blog. Some folks on #lesswrong liked it, so I thought it might be liked here as well.


Our moral reasoning is ultimately grounded in our moral intuitions: instinctive "black box" judgements of what is right and wrong. For example, most people would think that needlessly hurting somebody else is wrong, just because. The claim doesn't need further elaboration, and in fact the reasons for it can't be explained, though people can and do construct elaborate rationalizations for why everyone should accept the claim. This makes things interesting when people with different moral intuitions try to debate morality with each other.

---

Why do modern-day liberals (for example) generally consider it okay to say "I think everyone should be happy" without offering an explanation, but not okay to say "I think I should be free to keep slaves", regardless of the explanation offered? In an earlier age, the second statement might have been considered acceptable, while the first one would have required an explanation.

In general, people accept their favorite intuitions as given and require others to justify any intuitions which contradict them. If people have strongly left-wing intuitions, they tend to consider right-wing intuitions arbitrary and unacceptable, while considering left-wing intuitions so obvious as to not need any explanation. And vice versa.

Of course, you will notice that in some cultures specific moral intuitions tend to dominate, while other intuitions dominate in other cultures. People tend to pick up the moral intuitions of their environment: some claims go so strongly against the prevailing moral intuitions of my social environment that if I were to even hypothetically raise the possibility of them being correct, I would be loudly condemned and feel bad for even thinking that way. (Related: Paul Graham's What You Can't Say.) "Culture" here is to be understood as being considerably more fine-grained than just "the culture in Finland" or "the culture in India" - there are countless subcultures even within a single country.

---

Social psychologists distinguish between two kinds of moral rules: ones which people consider absolute, and ones which people consider to be social conventions. For example, if a group of people all bullied and picked on one of them, this would usually be considered wrong, even if everyone in the group (including the bullied person) thought it was okay. But if there's a rule that you should wear a specific kind of clothing while at work, then it's considered okay not to wear those clothes if you get special permission from your boss, or if you switch to another job without that rule.

The funny thing is that many people don't realize that the distinction of which is which is itself a moral intuition, one which varies from person to person and from culture to culture. Jonathan Haidt writes in The Righteous Mind: Why Good People Are Divided by Politics and Religion of his finding that while the upper classes in both Brazil and USA were likely to find violations of harmless taboos to be violations of social convention, lower classes in both countries were more likely to find them violations of absolute moral codes. At the time, moral psychology had mistakenly assumed that children's morality naturally develops by "moving on" to a conception of right and wrong grounded only in concrete harms, and that children discover morality by themselves instead of learning it from others.

So moral psychologists had mistakenly been thinking of some moral intuitions as absolute instead of relative. But we can hardly blame them, for it's common to fail to notice that the distinction between "social convention" and "moral fact" is variable. Sometimes this is probably done on purpose, for rhetorical reasons - a speech is much more convincing if you can appeal to ultimate moral truths rather than to social conventions. But just as often, people simply don't seem to notice the distinction.

(Note to international readers: I have been corrupted by the American blogosphere and literature, and will therefore be using "liberal" and "conservative" mostly to denote their American meanings. I apologize profusely to my European readers for this terrible misuse of language and for not using the correct terminology like God intended it to be used.)

For example, social conservatives sometimes complain that liberals are pushing their morality on them, by requiring things such as not condemning homosexuality. To liberals, this is obviously absurd - nobody is saying that the conservatives should be gay, people are just saying that people shouldn’t be denied equal rights simply because of their sexual orientation. From the liberal point of view, it is the conservatives who are pushing their beliefs on others, not vice versa.

But let's contrast "oppressing gays" to "banning polluting factories". Few liberals would be willing to accept the claim that if somebody wants to build a factory that causes a lot of harm to the environment, he should be allowed to do so, and that banning him from doing it would be to push liberal ideals on the factory-owner. They might, however, protest that preventing them from banning the factory would be pushing (e.g.) pro-capitalist ideals on them. So, in other words:

Conservatives want to prevent people from being gay. They think that this just means upholding morality. They think that if somebody wants to prevent them from doing so, that somebody is pushing their own ideals on them.

Liberals want to prevent people from polluting their environment. They think that this just means upholding morality. They think that if somebody wants to prevent them from doing so, that somebody is pushing their own ideals on them.

Now my liberal readers (do I even have any socially conservative readers?) will no doubt be rushing to point out the differences between these two examples. Most obviously, the fact that pollution hurts people other than just the factory owner - such as people at their nearby summer cottages who like seeing nature in a pristine and pure state - so it's justified to do something about it. But conservatives might also argue that openly gay behavior encourages being openly gay, and that this hurts those in nearby suburbs who like seeing people act properly, so it's justified to do something about it.

It's easy to say that "anything that doesn't harm others should be allowed", but it's much harder to rigorously define harm, and liberals and conservatives differ in when they think it's okay to cause somebody else harm. And even this is probably conceding too much to the liberal point of view, as it accepts a position where the morality of an act is judged primarily in the form of the harms it causes. Some conservatives would be likely to argue that homosexuality just is wrong, the way that killing somebody just is wrong.

My point isn't that we should accept the conservative argument. Of course we should reject it - my liberal moral intuitions say so. But we can't in all honesty claim an objective moral high ground. If we are to be honest with ourselves, we will accept that yes, we are pushing our moral beliefs on them - just as they are pushing their moral beliefs on us. And we will hope that our moral beliefs win.

Here's another example of "failing to notice the subjectivity of what counts as social convention". Many people are annoyed by aggressive vegetarians who think anyone who eats meat is a bad person, or by religious people who actively try to convert others. People often say that it's fine to be vegetarian or religious if that's what you like, but you shouldn't push your ideology on others and require them to act the same.

Compare this to saying that it's fine to refuse to send Jews to concentration camps, or to refuse to let people die in horrible ways when they could have been saved, but that you shouldn't push your ideology on others and require them to act the same. I expect that would sound absurd to most of us. But if you accept a certain vegetarian point of view, then killing animals for food is exactly equivalent to the Holocaust. And if you accept a certain religious view saying that unconverted people will go to Hell for an eternity, then not trying to convert them is even worse than letting people die in horrible ways. To say that these groups shouldn't push their morality on others is to already push your own ideology - one which says that decisions about what to eat and what to believe are just social conventions, while decisions about whether to kill humans and save lives are moral facts - on them.

So what use is there in debating morality, if we have such divergent moral intuitions? In some cases, people have such widely differing intuitions that there is no point. In other cases, their intuitions are similar enough that they can find common ground, and in that case discussion can be useful. Intuitions can clearly be affected by words, and sometimes people do shift their intuitions as a result of having debated them. But this usually requires appealing to, or at least starting out from, some moral intuition that they already accept. There are inferential distances involved in moral claims, just as there are inferential distances involved in factual claims.

So what about the cases when the distance is too large, when the gap simply cannot be bridged? Well, in those cases we will simply have to fight to keep pushing our own moral intuitions on as many people as possible, and hope that they will end up having more influence than the unacceptable intuitions. Many liberals probably don't want to admit to themselves that this is what we should do in order to beat the conservatives - it goes so badly against liberal rhetoric. It would be much nicer to pretend that we are simply letting everyone live the way they want to, and that we are fighting to defend everyone's right to do so.

But it would be more honest to admit that we actually want to let everyone live the way they want to, as long as they don't do things we consider "really wrong", such as discriminating against gays. And that in this regard we're no different from the conservatives, who would likewise let everyone live the way they wanted to, as long as they don't do things the conservatives consider "really wrong".

Of course, whether or not you'll want to be that honest depends on what your moral intuitions have to say about honesty.

Comments (193)

Comment author: DanArmak 30 June 2012 02:54:17PM 4 points

There's a theory of ethics I seem to follow, but don't know the name of. Can someone refer me to existing descriptions?

The basic idea is to restrict the scope where the theory is valid. Many other theories fail (only in my personal view, obviously) by trying to solve universal problems: does my theory choose the best possible universe? How do I want everyone to behave? If everyone followed my theory, would that be a good or a stable world? Solving under these constraints can lead people to some pretty repugnant conclusions, as well as people rejecting otherwise good theories because they aren't universally valid.

By examining the rules I actually seem to follow, I am led to a more narrow theory. It doesn't tell me how to choose a whole universe from the realm of possibility - so it's not suitable for a superhuman AI to follow. But that makes it easier to decide what I personally should do.

Instead of having to decide whether democracy or autocracy is in some grand sense better, I can just estimate the marginal results of my own vote in the coming elections. Instead of figuring out how to maximize everyone's happiness, and fall into the traps of utilitarianism and its alternatives, I take advantage of the fact I am only one person - and maximize the happiness of myself and others near me, which is much easier.

Similarly, I don't have to worry about what would happen if everyone was as selfish as I was, because I can't affect other people's selfishness significantly enough for that to be a serious problem. Instead, I just need to consider the optimal degree of my own selfishness, given how other people in fact behave.

This doesn't mean I can't or don't take into account other people's welfare. I do, because I care about others. But I can accept that this is just a fact about the universe, produced by evolution and culture and other historical reasons, and that if I didn't feel a concern for others then I wouldn't act to benefit them. I don't need to invent a grand theory of how cooperating agents win, or how my morality is somehow objectively inferior and I should want to take a pill to modify my moral intuitions.

A brief statement of my approach might be: I'm not going to change my rules of ethics to win in dilemmas I don't expect to actually encounter, if these changes would make me perform less well in everyday situations. I don't want to be vulnerable to ethical-rules Pascal's mugging, so to speak.

Comment author: shokwave 30 June 2012 05:18:57PM 3 points

There seems to be a parallel here, with the concepts of rationality and bounded rationality. Rational decision-making needs to solve problems like Newcomb's Dilemma, Pascal's Mugging, acausal outside-the-lightcone one-shot cooperation, and the trillionth digit of pi being odd with probability .5 when lacking logical omniscience. In contrast, bounded rationality recognises that these things are outside the scope, and concerns itself with being correct within its bounds.
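
As a toy illustration of the pi point (a sketch of my own, assuming Python with the mpmath library; the sample size is arbitrary): the digits of pi that we can compute look uniformly distributed, which is why 0.5 is the natural credence for the parity of a digit we can't compute.

```python
# Illustrative sketch, not part of the original comment: pi's known digits
# look uniformly distributed, so a bounded reasoner who cannot compute
# digit n treats its parity as a fair coin and assigns ~0.5 to "odd".
from mpmath import mp

mp.dps = 10000                          # work with 10,000 digits of pi
pi_digits = [int(c) for c in mp.nstr(mp.pi, 10000) if c.isdigit()]
frac_odd = sum(d % 2 for d in pi_digits) / len(pi_digits)
print(f"fraction of odd digits: {frac_odd:.4f}")   # prints roughly 0.5
```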

So perhaps you could adopt the name 'bounded morality'?

Comment author: drethelin 30 June 2012 04:30:07PM 1 point

http://blog.muflax.com/morality/non-local-metaethics/ I really like Muflax's post on this topic. For practical purposes, morality needs to be calculable.

Comment author: DanArmak 30 June 2012 04:40:57PM * 1 point

Thanks! Muflax comes to this conclusion in that post:

your moral theories better be local, or you’re screwed.

I agree that local theories are better than nonlocal ones - although "local" is in some degree relative; local theories with a large "locality" may be acceptable. This isn't specific to moral theories, it applies to all decision algorithms.

This doesn't directly address my position that theories that only tell you what to do in some cases, but do cover the cases likely to occur to you personally, are valid and useful.

Comment author: Douglas_Knight 28 June 2012 10:07:37PM * 4 points

Jonathan Haidt writes in The Righteous Mind: Why Good People Are Divided by Politics and Religion of his finding that while the upper classes in both Brazil and USA were likely to find things like "not wearing a uniform to school" to be violations of social convention, lower classes in both countries were likely to find them violations of absolute moral codes.

Does he? The data in the source disagree (tables on 619-620). I haven't read all the text of the source, but it gives the uniform as the prototypical example of a custom and seems to say that it did work out that way. 40% of low SES adults in Recife (but not Porto Alegre) did claim it universal, but that's less than on any of the interesting examples. (Children everywhere showed less class-sensitivity than adults.)


Just to be clear, the description of the results of the experiment is correct; it just mixes up the control example with the experimental example.

Comment author: Kaj_Sotala 29 June 2012 10:07:42AM 0 points

Thanks, I edited the sentence to be clearer on that: "...that while the upper classes in both Brazil and USA were likely to find violations of harmless taboos to be violations of social convention, lower classes in both countries were more likely to find them violations of absolute moral codes."

Comment author: mwengler 02 July 2012 09:33:47AM * 1 point

That's a fun result.

Years ago, I had a "spiritual person" telling me about how god could help me if I prayed to him. Wishing to make a point by metaphor, I told him "it seems to me that god is just Santa Claus for grown-ups." "Yes," he responded, "Santa Claus gives kids what they want, god gives you what you need."

If only clever repartee established truth, then Stephen Colbert would be the last president we would ever need.

If the smarter you get, the more things you think are social convention and the fewer you think are absolute morality, then what is our self-improving AI going to eventually think about the CEV we coded in back when he was but an egg?

Comment author: wedrifid 02 July 2012 09:54:06AM * 6 points

If the smarter you get, the more things you think are social convention and the fewer you think are absolute morality, then what is our self-improving AI going to eventually think about the CEV we coded in back when he was but an egg?

It isn't going to think the CEV is an absolute morality - it'll just keep doing what it is programmed to do because that is what it does. If the programming is correct it'll keep implementing CEV. If it was incorrect then we'll probably all die.

The relevance to 'absolute morality' here is that if the programmers happened to believe there was an absolute morality and tried to program the AI to follow that then they would fail, potentially catastrophically.

Comment author: Incorrect 28 June 2012 06:47:36PM 9 points

Why do modern-day liberals (for example) generally consider it okay to say "I think everyone should be happy" without offering an explanation, but not okay to say "I think I should be free to keep slaves", regardless of the explanation offered?

"I think everyone should be happy" is an expression of a terminal value. Slavery is not a typically positive terminal value, so if you terminally value slavery you would have to say something like "I like the idea of slavery itself"; if you just say "I like slavery" people will think you have some justification in terms of other terminal values (e.g. slavery -> economics -> happiness).

So, to say you like slavery implies you have some justification for it as an instrumental value. Such justifications are generally considered to be incorrect for typical terminal values, and so the "liberals" could legitimately consider you to be factually incorrect.

Comment author: fubarobfusco 28 June 2012 11:23:03PM 10 points

So, to say you like slavery implies you have some justification for it as an instrumental value.

Well, let's ask some folks who actually did like slavery, and fought for it.

From the Texas Declaration of Secession, adopted February 2, 1861:

[T]he servitude of the African race, as existing in these States, is mutually beneficial to both bond and free, and is abundantly authorized and justified by the experience of mankind, and the revealed will of the Almighty Creator, as recognized by all Christian nations [...]

So at least some people who strongly believed that slavery was moral, claimed to hold this belief on the basis of (what they believed to be) both consequential and divine-command morality.

Comment author: taw 30 June 2012 11:36:08AM 4 points

It's not at all obvious if they really believed it. People say stuff they don't believe all the time.

Comment author: AlexMennen 29 June 2012 07:01:38AM 1 point

That seems like a valid distinction, but what makes you think that it is actually the distinction that motivates the difference in reactions?

Comment author: [deleted] 01 July 2012 09:10:30AM 3 points

I have been corrupted by the American blogosphere and literature, and will therefore be using "liberal" and "conservative" mostly to denote their American meanings.

You could use “left-wing” and “right-wing”, whose meanings (across the First World at least) are more consistent.

Comment author: Douglas_Knight 28 June 2012 09:45:56PM 3 points

Change your link to a better version of Haidt's paper. Your current link doesn't have searchable text.

Comment author: Kaj_Sotala 28 June 2012 10:04:43PM 1 point

Thanks, changed.

Comment author: JenniferRM 28 June 2012 04:32:03PM * 10 points

For example, most people would think that needlessly hurting somebody else is wrong, just because. The claim doesn't need further elaboration, and in fact the reasons for it can't be explained, though people can and do construct elaborate rationalizations for why everyone should accept the claim.

I think this is a folk theory about how "moral intuitions" work, and I don't think that it is true, in the sense that it is a naive answer to a naive question that should have been dissolved rather than answered. For example, most people think everything "just because", and further elaboration is just confabulation unless you do something unusual.

Thinking that morality is a specialized domain (a separate magisterium?) leads to the idea of "debating morality" as though the actual real communication events that acquire that label are like other debates except about the specialized domain: engaged in for similar purposes, with similar actual end points, resolved according to similar rhetorical patterns, and so on. Compare and contrast variations on the terms: "ethical debates", "political debates", "scientific debates", "morality conversations", "morality dialogues", "political dialogues", etc. Imagine the halo of all such terms, and the wider halo of all communication events that match anything in the halo of terms, and then imagine running a clustering algorithm on those communication events to see if they are even distinct things, and if so what the real differences are.
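
One hypothetical way to make the clustering suggestion concrete (a sketch assuming numpy and scikit-learn; every feature and number below is invented for illustration, not real data):

```python
# Hypothetical sketch of clustering "communication events": represent each
# event as a feature vector and see whether the folk labels ("moral debate",
# "scientific debate", etc.) actually correspond to distinct clusters.
import numpy as np
from sklearn.cluster import KMeans

# Imagined per-event features:
# [appeals to intuition, cites evidence, aims to persuade, ends in agreement]
events = np.array([
    [0.9, 0.1, 0.8, 0.1],  # labeled a "morality debate"
    [0.8, 0.2, 0.9, 0.2],  # labeled an "ethical debate"
    [0.7, 0.3, 0.9, 0.1],  # labeled a "political debate"
    [0.2, 0.9, 0.4, 0.7],  # labeled a "scientific debate"
    [0.3, 0.8, 0.3, 0.8],  # labeled a "scientific dialogue"
])

# If "moral debates" are a genuinely distinct kind of event, they should
# separate into their own cluster; if not, the clusters cut across labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(events)
print(labels)
```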

I don't want to say "Boo!" here too much. I'm friendly to the essay. And given your starting assumptions it does pretty much lead to the open-minded interpretation of moral debates you derived. I tend to like people who go a little bit meta on those communication events more than people who just participate in them by blind reflex, but I think that going meta on those communication events a lot (with tape recorders and statistics and hypothesis testing and a research budget and so on) would reveal a lot of really useful theory. You linked to Haidt... some of this research is being done. I suspect more would be worthwhile :-)

Edited to add: And I bet the researchers' "moral debating" performance and moral conclusions would themselves be very interesting objects of study. Imagine being a fly on the wall while Haidt, Drescher, and Lakoff tried to genuinely Aumann-update on political issues of the day.

Comment author: Kaj_Sotala 28 June 2012 05:35:46PM * 5 points

I think this is a folk theory about how "moral intuitions" work, and I don't think that it is true, in the sense that it is a naive answer to a naive question that should have been dissolved rather than answered

I'm not entirely sure what you mean, or perhaps you use "dissolving" in a different sense from how I understand it. I thought that dissolving a question meant taking a previously mysterious and unanswerable question and providing such an explanation that there's no longer any question to be asked. But if there is a mysterious and unanswerable question here, I'm not sure of what it is.

Comment author: JenniferRM 28 June 2012 09:26:10PM * 0 points

Potential questions this essay could have been written to answer, that might deserve to be dissolved rather than answered directly:

  • How does moral reasoning work (and what are the implications)?

  • How do moral debates find ground in moral feelings (and what are the implications)?

  • Where does the motivational force attributed to pro-social intrinsic values come from (and what are the implications)?

Comment author: [deleted] 28 June 2012 11:33:27PM * 3 points

I'm currently reading a book called Braintrust: What Neuroscience Tells Us about Morality that frames the problem exactly like that. It's by Patricia Churchland. The view that she defends is that moral decisions are based on constraint satisfaction, just like a lot of other decision processes.
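
A minimal sketch of what decision-by-constraint-satisfaction can look like (the constraints, weights, and options below are invented illustrations, not Churchland's actual model):

```python
# Toy sketch: score each option against weighted, possibly conflicting
# constraints, and pick the option that best satisfies them overall.

constraints = {               # hypothetical weights on competing concerns
    "avoid harm": 3.0,
    "keep promises": 2.0,
    "self-interest": 1.0,
}

options = {                   # how well each action satisfies each constraint
    "tell the truth":   {"avoid harm": -0.3, "keep promises": 1.0, "self-interest": -0.2},
    "tell a white lie": {"avoid harm": 0.7, "keep promises": -1.0, "self-interest": 0.4},
}

def satisfaction(option: str) -> float:
    # Weighted sum across all constraints; higher means a better overall fit.
    return sum(constraints[c] * s for c, s in options[option].items())

best = max(options, key=satisfaction)
print(best, satisfaction(best))   # "tell the truth" wins here, score 0.9
```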

Comment author: torekp 22 July 2012 09:14:58PM 1 point

For what it's worth, I'd bet that your third question will be answered more or less directly, without dissolution. See Wix's reply for a step in that direction.

Comment author: JenniferRM 23 July 2012 04:53:18AM * 1 point

You're probably right. In some sense I just re-stated the same question a few times, dissolving more at each step :-)

Comment author: Kaj_Sotala 04 July 2012 07:56:44AM * 1 point

Still not sure what you mean: questions one and two seem interesting but outside the scope of my essay, and I'm not sure I understand the third one. You said in your original comment that

I think this is a folk theory about how "moral intuitions" work, and I don't think that it is true, in the sense that it is a naive answer to a naive question that should have been dissolved rather than answered.

...but I don't think I really answered any of those three questions in my post.

Comment author: AnotherIdiot 30 June 2012 11:21:43PM 1 point

To be fair, this post does point out a reason why debating morality is different from debating most other subjects (using different words from mine): people have very different priors on morality, and unlike in, say, physics, these priors can't be rebutted by observing the universe. Reaching an agreement in morality is therefore often much harder than in other subjects, if an agreement even can be reached.

Comment author: torekp 01 July 2012 04:22:41PM 2 points

Our moral reasoning is ultimately grounded in our moral intuitions

I don't accept the premise. Moral intuitions play a part, but the ultimate constraints come more from the nature of rational discourse and the psychology of the discoursing species. For extended arguments along these lines (well mostly the emphasized part) see Jürgen Habermas and Thomas Scanlon.

Comment author: private_messaging 01 July 2012 02:42:43PM * 3 points

I have an impression that most of the explicit thinking about "morality" gets sabotaged by conditioning. The type of thought that allows you to eat the last piece of cake is associated with eating cake; the type of thought that leads to a sense of guilt is associated with guilt.

Subsequently, a great deal of self-proclaimed systems of morality are produced in such a manner that they are much too ill-defined to be used to determine the correct actions, and are only usable for rationalization (utilitarianisms, I am looking at you).

Meanwhile, there is an objective scale: how effective the rules are for peer-to-peer cooperation (intellectual and otherwise); and for the most part the moralities we find entirely reprehensible are also the least productive. There is no relativism in the jungle. No survival relativism, no moral relativism. And morality as practiced gets produced by selection on this criterion.

If you want to know if you should transplant organs out of 1 healthy person who was doing a routine check-up, into 10 people who will otherwise die, against the healthy person's will - well, the sorts of societies that just cut up the healthy person and transplant end up with hardly anyone ever going to check-ups. The answer is clear if you actually want to know what you should do (when doing something for the sake of everyone). Unfortunately, when people think of morality, what results is a product of a lifelong history of conditioning that includes multiple small misdemeanors with associated rewards, and the guilt that resulted from thinking too clearly, and the pleasure that resulted from thinking sloppy and grand. People don't think along the lines of what is the best action; people think along the lines of what type of thought was most self-serving, and the one where ends justify means is usually the most self-serving (when coupled with rationalization).

Comment author: Swimmer963 01 July 2012 08:13:56PM 0 points

Unfortunately, when people think of morality, what results is a product of a lifelong history of conditioning that includes multiple small misdemeanors with associated rewards, and the guilt that resulted from thinking too clearly, and the pleasure that resulted from thinking sloppy and grand. People don't think along the lines of what is the best action; people think along the lines of what type of thought was most self-serving, and the one where ends justify means is usually the most self-serving (when coupled with rationalization).

Can you clarify this/give some concrete examples?

Comment author: private_messaging 02 July 2012 10:03:28AM * 4 points

Morals are significantly restrictive and influence personal pleasure (to the point that thinking about your own action produces guilt, a pain-like feeling, and the morals stand in the way of getting what you want).

Subsequently the thought is subject to reward/punishment conditioning.

If you rationalize why you should have more cake than the other person, you get cake, which is a reward; if you think too clearly about your ill-doings, you are hurt by a feeling of guilt; and if you engage in a particular form of thought whereby you do not ensure the correctness of the reasoning and do not note the ways your argument may fail (implicit assumptions etc.), you can easily rationalize away the things you did wrong.

Basically, you are being conditioned to feel good about a bad approach to reasoning - where you make huge jumps, where you don't note the assumptions you make, where you just make invalid assumptions, where you don't search for faults, etc. - and to feel bad about a good approach to reasoning. Your very thought process is being trained to be sloppy and broken, with only a very superficial resemblance to logic - only enough resemblance that the guilt circuit won't be triggered.

There is some minor conditioning from the situations where you received some external punishment or reward, but those are too uncommon and too inconsistent, and the reward/punishment is too delayed, and at the very best those would condition mere avoidance of being caught.

Comment author: Swimmer963 03 July 2012 10:41:38AM 1 point

Basically, you are being conditioned to feel good about a bad approach to reasoning - where you make huge jumps, where you don't note the assumptions you make, where you just make invalid assumptions, where you don't search for faults, etc. - and to feel bad about a good approach to reasoning.

My initial response to this was "that seems completely untrue," so I decided to hunt for examples. I think you're right, because I was able to come up with an example of myself doing this, namely downloading music and movies for free from the Internet. I do consider this kind-of-vaguely-like-stealing, but the "kind-of-vaguely" part is a good indication that my thinking is deliberately fuzzy in this area.

When I think about it, I don't know why–I don't consume enough entertainment material that paying for it would be a significant pull on my finances, and I'm hardly financially strapped. I think it's because the usual strong positive reinforcement I would get for knowing I was "doing the right thing" despite wanting Thing X really badly is outweighed by the knowledge that several of my friends would make fun of me for paying for stuff on iTunes. Which...if I think about it...is also a pretty selfish reason!

You may just have convinced me that I should start paying for my music and movies, as a way of training my moral thinking to be less "sloppy"!

Comment author: private_messaging 03 July 2012 12:24:04PM * -1 points

You may just have convinced me that I should start paying for my music and movies, as a way of training my moral thinking to be less "sloppy"!

Heh. But why did I do that? Selfish motives also (I make software for a living).

I came up with another example. Consider the sunk cost issue. Suppose that you spent years working on a project that is heading nowhere, the effort was wasted, and there's a logical way to see that it is wasted effort. Any time your thought wavers in the direction of understanding that the effort was wasted, you get a stab of negative emotions - particular hormones are released into the bloodstream, particular pathways activate - and that is negative reinforcement for everything you've been doing, including the use of the mental framework that led you to that thought. I think LW calls something similar an 'ugh field', except the issue is that reinforcement is not so specific in its action as to make you avoid one specific thought without also making you avoid the very method of thinking that got you there.

I think it may help in general (to combat the induced sloppiness) to do some kind of work where you are reliably negatively reinforced for being wrong or sloppy. Studying mathematics and doing the exercises correctly can be useful. (Studying without exercises doesn't even work.) Software development, also. This will build a skill of what to do in order not to be sloppy, but it won't necessarily transfer onto moral reasoning; for the skill to transfer, something else may be needed.

Comment author: Swimmer963 03 July 2012 02:20:57PM 0 points

Consider the sunk cost issue. Suppose that you spent years working on a project that is heading nowhere, the effort was wasted, and there's a logical way to see that it is wasted effort. Any time your thought wavers in the direction of understanding that the effort was wasted, you get a stab of negative emotions - particular hormones are released into the bloodstream, particular pathways activate - and that is negative reinforcement for everything you've been doing, including the use of the mental framework that led you to that thought.

Solution: have a community where you can gain respect and status by having successfully noticed and avoided sunk cost reasoning. LW isn't the best possible example of such a community, but a lot of the exercises done at, say, the summer minicamps in San Francisco were subsets of "get positive reinforcement for noticing Irrational Thought Pattern X in yourself, when normally various kinds of cognitive dissonance would make it tempting to sort of vaguely not notice it."

Comment author: private_messaging 03 July 2012 05:07:01PM * -2 points

LW is a terrible example - an attachment to a bunch of people (SI) who keep sinking their effort and other people's money, and rationalizing it. Regarding noticing an irrational pattern: so you notice it, get rid of it, then what? You aren't gaining some incredible powers of finding the correct answer (you'll just come up with something else that's wrong). It's something you always find in cults - thought reform, unlearn-what-you-learnt style. You don't find people sitting at desks doing math exercises all day, being ranked for being correct, being taught how to be correct - that would be a school/university course; it is boring, it's no silver bullet, it takes time.

Comment author: wedrifid 03 July 2012 05:14:17PM -1 points

LW is a terrible example - an attachment to a bunch of people (SI) who keep sinking their effort and other people's money, and rationalizing it. Regarding noticing an irrational pattern: so you notice it, get rid of it, then what? You aren't gaining some incredible powers of finding the correct answer.

Why are you here then? Please leave.

Comment author: private_messaging 03 July 2012 05:17:29PM -1 points

Boredom. You guys are highly unusual, have to give you that.

Comment author: shokwave 03 July 2012 05:32:46PM 1 point

Might I suggest using fungibility? There are more effective ways than LW to treat boredom and desire for unusual conversation, if you pursue them separately.

Comment author: Eugine_Nier 04 July 2012 05:24:22AM 0 points

Why are you here then? Please leave.

Are you intentionally trying to promote evaporative cooling?

Comment author: wedrifid 04 July 2012 07:19:00AM * 0 points

Are you intentionally trying to promote evaporative cooling?

Evaporative cooling regarding that attitude and this behavioral pattern? ABSOLUTELY!

Comment author: Eugine_Nier 04 July 2012 05:27:30AM 0 points

Solution: have a community where you can gain respect and status by having successfully noticed and avoided sunk cost reasoning.

This has its own failure mode.

Comment author: Swimmer963 04 July 2012 10:14:30AM 0 points

I had read that article before. It's not something that I would consider a problem for myself... I rarely if ever abandon a project in the middle, and when I do, it's a) always been a personal project or goal that affects no one else, and b) always been something that turned out to be either a bad idea in the first place (e.g. my goal at age 14 of weighing 110 pounds... would never happen unless I actually developed an eating disorder), or important to me for the wrong reasons (going to the Olympics for swimming). Etc.

Note that this isn't any kind of argument against your point... If anything, it's my own personal failure mode of assuming everyone's brain is like mine and that their main problems are like mine.

However, I think it does count for something that nyan_sandwich posted this article, noticing a flaw in his reasoning, on LW...and got upvotes and praise.

Comment author: TheOtherDave 28 June 2012 05:45:17PM * 2 points

FWIW, I don't actually want to let everyone live the way they want to.
Ideally, I would far prefer that everyone live the way that's best for everyone.

Of course, I don't know that there is any such way-to-live, and I certainly don't know what it is, or how to cause everyone to live that way.

I might end up endorsing letting everyone live the way they want to, if I were convinced that that was the best achievable approximation of everyone living the way that's best for everyone. (IRL I'm not convinced of that.) But it would be an approximation of what I want, not what I actually want.

So what use is there in debating morality, if we have such divergent moral intuitions?

It's worth drawing a distinction here between debating morality and discussing it.

Roughly, I would say that the goal of debate is to net-increase among listeners their support for the position I champion, and the goal of discussion is to net-increase among listeners their understanding of the positions being discussed. In both cases, I might or might not hold any particular position, and participants in the discussion/debate are also listeners.

So. The value to me of debating moral positions is to convince listeners to align themselves with the moral positions I choose to champion. The value of debating other positions in moral terms is to convince listeners to align themselves with the other positions I choose to champion. The value to me of discussing moral positions is to learn more and to help others learn more about the various moral positions that exist.

Of course, many people respond negatively when they infer that someone is trying to get them to change their positions, and so it's often valuable when debating a topic to pretend to be discussing it instead. And, of course, if I believe that to understand my position is necessarily to support it, then I won't be able to tell the difference between debating and discussing that position.

So all of those things are sometimes called "debating morality", sometimes accurately. And debating morality is sometimes called other things.

Comment author: wedrifid 28 June 2012 05:50:42PM 3 points

Of course, many people respond negatively when they infer that someone is trying to get them to change their positions, and so it's often valuable when debating a topic to pretend to be discussing it instead.

That can also backfire with charges of "disingenuous".

Comment author: TheOtherDave 28 June 2012 05:57:07PM 2 points

Well, yes. I mean, it is disingenuous.
If I'm going to successfully pretend to be doing something I'm not, it helps to not get caught out.

Comment author: David_Gerard 29 June 2012 08:03:05AM 1 point

Front page, I suggest.

Comment author: Konkvistador 29 June 2012 04:30:26PM 0 points

I agree.

Comment author: Kaj_Sotala 30 June 2012 06:02:15AM 0 points

Thanks, I moved it. Let's see how it does.

Comment author: [deleted] 01 July 2012 09:58:07PM 1 point

So what about the cases when the distance is too large, when the gap simply cannot be bridged? Well, in those cases we will simply have to fight to keep pushing our own moral intuitions on as many people as possible, and hope that they will end up having more influence than the unacceptable intuitions.

We're not stuck with our moral intuitions, unless we have "faith" that they're "true."

  1. Doesn't it seem odd, Kaj_Sotala—even irrational—that we should push our "moral intuitions" when that's all they are: intuitions—which don't describe any reality, which aren't intuitions about anything?

  2. We can change our "moral intuitions" rationally—although the mission isn't one of finding "truth". Our standards of personal integrity respond to our adaptive needs, and we can help change them in the interest of rational adaptation. They are not, even for us, "ultimate moral values."

Comment author: Gust 31 July 2012 01:34:11PM 0 points

and we can help change them in the interest of rational adaptation

And why should you do that?

Comment author: taw 30 June 2012 11:34:45AM 1 point

I probably have a very different sense of what's moral and what isn't from the author (who claims to be American liberal), but I agree with pretty much everything the author says about meta-morality.

Comment author: prase 30 June 2012 11:40:26AM 0 points

The author doesn't claim to be American and in fact is, as far as I know, Finnish.

Comment author: shokwave 30 June 2012 05:19:53PM 6 points

Potentially "American liberal" is American-flavour liberalism, and not an American who is also a liberal.

Comment author: prase 01 July 2012 05:52:05PM 2 points

Damn ambiguous natural languages, you may be right.

Comment author: Kaj_Sotala 01 July 2012 09:10:06PM 0 points

Though it should probably be noted that I used "liberal" mostly as a convenient shorthand to characterize my views regarding gay rights, rather than as a characterization of my political views in general. I expect there to be a number of issues on which my views map badly to the views of the typical American liberal, though I don't actually know American politics well enough to know exactly what views those are.

Comment author: pleeppleep 03 July 2012 02:48:42AM 1 point

I would compare ethics to swimming in a giant tub of ice cream (all the same flavor) with the rest of humanity. Everyone has a favorite flavor which their intuitions pick for them, but the world can't fit everyone's tastes. Some flavors are acceptable deviations, but others are painfully unbearable. It only makes sense to try and fill the tub with your personal preference.

Comment author: djcb 28 June 2012 10:24:27PM * 1 point

Moral intuitions (i.e., 'kneejerk reactions') are what fuels many people's opinions. Can we do better on LW? Meta-ethical systems (consequentialism, deontology) are often used as post-hoc rationalizations for said moral intuitions, but can we do better?

For these kinds of problems I especially like Kant's approach - can we come up with a rule that underlies our opinion on something, and would we be willing to follow that rule even if it goes against our immediate intuitions in some other case? And the more specific a rule gets (i.e., 'this only applies to green people'), the clearer the sign that we're doing some special pleading.

Comment author: mwengler 30 June 2012 02:21:40AM 4 points

How is coming up with a rule based on our moral intuitions and then following that rule even when it means violating our intuitions any better than just following intuitions in the first place? How is it better to replace following intuitions with following an imperfect simplification derived from an intuition?

I have been thinking these past months that I could somehow be immune from or outside of the necessity of having my intuitions dictate my values. Someone pointed out to me that it was essentially an intuition of mine that separating from this source of morality would be a good idea, and since then I have been trying to figure out how to live with being just an evolutionarily determined set of arbitrary (to anyone outside the system) values.

Comment author: RichardKennaway 01 July 2012 08:54:30AM 3 points

How is coming up with a rule based on our moral intuitions and then following that rule even when it means violating our intuitions any better than just following intuitions in the first place? How is it better to replace following intuitions with following an imperfect simplification derived from an intuition?

You can't get away from your intuitions.

We contemplate our moral intuitions and intuitively abstract rules from them, and have the intuition that such rules should be followed. Yet the rules may turn out to violate other intuitions. The problem is not rules against intuitions, but intuitions against intuitions.

Comment author: Eugine_Nier 01 July 2012 06:51:16AM 1 point

How is coming up with a rule based on our moral intuitions and then following that rule even when it means violating our intuitions any better than just following intuitions in the first place?

Well, in mathematics and science we made a lot of progress when we stopped doing the latter and started doing the former.

Comment author: mwengler 02 July 2012 09:24:41AM * 1 point

Yes. In science and math we had reality against which to measure our progress.

What do you measure your progress against in coming up with a moral system? If it is the extent to which your moral system matches your moral intuitions, you will never do better than just following your intuitions.

If you are measuring your progress against something else, do say what it is. I know I have been searching for decades for some way to make morality objective.

If there is nothing against which to measure your progress, then following your intuitions is immeasurably better or worse than making up a system based on SOME of your intuitions.

Comment author: Eugine_Nier 03 July 2012 05:10:02AM * 0 points

Yes. In science and math we had reality against which to measure our progress.

Except how do you measure something against reality in a way that doesn't (at least implicitly) rely on your intuitions?

What do you measure your progress against in coming up with a moral system? If it is the extent to which your moral system matches your moral intuitions,

Well, this is more-or-less what we do in mathematics.

Comment author: mwengler 03 July 2012 07:34:37AM 3 points

I can routinely travel thousands of miles in a few hours at extremely finite cost. Our modern society gives US citizens on average the benefit of 25 humans' worth of energy usage (that is, the amount of energy per day used by the average American would require 25 slaves to generate, if human slaves were used to generate energy).

Even in math, I can build my understanding into circuits which, by working, verify my mathematical reasoning, and more importantly, verify that the reasoning stands independently of my own feelings or intuitions about it. I routinely calculate things and then build software to implement them that 1) either works as I expected from my mathematical calculations, or 2) doesn't, in which case, so far, I have always been able to find that I made a mistake in my calculations, or in my interpretation of how my implementation was related to my calculations.

I'll admit some theoretical intuitive component to understanding the connection between science/math and real benefits that come from it.

But it isn't just that I am privileging math/science in a way I refuse to privilege moral reasoning. It is that I don't even know what benefits of systematized morality you are claiming. What do I expect as my payoff for systematizing morality, that I may perhaps have to make some intuitive leaps to notice? What does systematized morality offer us that merely relying on moral intuition in a non-systematic way doesn't do just as well?

This is a real question, not some rhetorical question to say "see, I am right." What do you get out of throwing your faith behind moral realism and systematizing it?

Comment author: TheOtherDave 30 June 2012 03:59:42AM 1 point

Well, deriving and following a rule can allow for consistent behavior across sets of situations where my intuitions are inconsistent. If I value consistency, I might endorse that.

Comment author: mwengler 02 July 2012 09:30:38AM 0 points

If you value consistency, AND your moral system is derived from your moral intuitions and nothing else, AND your moral intuitions are inconsistent...

If it walks like a science and it talks like a science but it is astrology, is it worth doing the calculations?

Comment author: Eugine_Nier 03 July 2012 05:13:44AM 2 points

If it walks like a science and it talks like a science but it is astrology, is it worth doing the calculations?

When you consider that "doing the calculations" is how astronomy was ultimately derived from and separated from astrology, quite possibly.

Comment author: mwengler 03 July 2012 07:25:34AM 1 point

Good point. So we have now had astronomy for more than 2000 years, thanks Astrology!

What have we gotten from doing Ethics? What has moral realism delivered? I suppose you might say a population easier to rule, and that would be something indeed, but before I put words in your mouth, you tell me what you get for having tried to systematize morality for 4000 years?

Comment author: TheOtherDave 02 July 2012 01:44:17PM 1 point

Yes. Though not more than once.
Incidentally, I don't accept that adding the "and nothing else" clause preserves the meaning of my original comment. Which is fine; you're under no obligation to preserve that meaning, I just wanted to make that explicit.

Comment author: mwengler 02 July 2012 06:46:40PM 0 points

I don't accept that adding the "and nothing else" clause preserves the meaning of my original comment.

Since we are talking about how to form a system of morality, where it might come from, and what might be good or bad about doing so, if there is some source of morality that you are presuming that has not yet entered the discussion, by all means, please, let 'er rip. I would prefer knowing what it is to merely knowing that you may or may not have one in your pocket that you haven't stated.

Comment author: TheOtherDave 02 July 2012 07:08:00PM 1 point

I have not claimed a hidden source of morality, nor do I possess one, so you can rest easy on that score.

But deriving a rule, or a consistent set of rules, or a system of morality based on my moral intuitions and my knowledge of the world is different from deriving it based on my moral intuitions and nothing else, even if my knowledge of the world is not itself a source of morality.

Comment author: mwengler 03 July 2012 07:40:49AM 0 points

It's better if you tell me what you think so that I don't have to guess. I don't see how a moral intuition could ever even appear absent some knowledge of the world; these are feelings which arise in response to situations we find ourselves in and (at least we think) comprehend.

If your systematized morality is "better" than your non-systematized moral intuitions, please tell me, at least through examples, 1) How it is different and 2) How you know (or at least why you think) it is better.

Comment author: TheOtherDave 03 July 2012 03:06:19PM 1 point

I'm not asserting that moral intuitions can arise without any knowledge of the world.

But not all of my knowledge of the world plays a significant role in the formation of my moral intuitions, for various reasons, any more than all of my knowledge of the world plays a significant role in the formation of my physical and social intuitions.

And (as I've said repeatedly) taking all of that knowledge into account along with my moral intuitions when deriving moral rules can lead to a different set of rules than deriving those moral rules based on my intuitions and nothing else (as you initially framed the question).

1) How it is different and 2) How you know (or at least why you think) it is better.

As I said in the first place, the potential value of a systematized moral framework is that it can allow for consistent behavior across sets of situations where my intuitions are inconsistent, and some people value consistency.

If that's not clear enough to preclude the need for guesswork, I apologize for the lack of clarity. If you have specific questions or challenges I'll try to address them. If I'm just not making any sense at all, I'd prefer to drop this exchange here.

Comment author: djcb 30 June 2012 07:53:10AM 0 points

That is a good question.

I think determining some underlying rule can help me make (subjectively of course) better judgements, which have a much better chance of being consistent (as TheOtherDave mentions).

It's much too easy for the emotional machinery in our brains to be hijacked by images of baby-seals, terrorists, etc., and I feel my judgements are better if I can use some underlying rules rather than my intuitions.

Comment author: mwengler 02 July 2012 09:28:56AM 0 points

If all you have to base your moral system on is your intuitions, then the best you can hope for in a "consistent" systematization is to do no worse than flipping a coin when you have conflicting intuitions.

I suppose what I am really reacting to is that it strikes me that carefully systematizing morality makes as much sense as carefully systematizing astrology. The details and the calculations and the cogitation serve to give the illusion of there being something there while in actuality... all you have is Rationality Theater.

Comment author: djcb 02 July 2012 09:34:57PM * 1 point

True, at some point intuitions come into play (unless you are some kind of Spock), to determine your personal moral bedrock. But at least for me, these intuitions are not all born equal, and not all intuitions are part of this bedrock.

A typical example would be: 'Cute animals are more important', which may conflict with some deeper rule in some situation. Instead of just following my intuition with that first rule, I think my moral judgements are better when I take a step back and try to use the deeper rule.

Comment author: Eugine_Nier 03 July 2012 05:20:46AM 0 points

If all you have to base your moral system on is your intuitions, then the best you can hope for in a "consistent" systematization is to do no worse than flipping a coin when you have conflicting intuitions.

Well, the same problem exists in science but that hasn't stopped us from making progress.

Comment author: mwengler 03 July 2012 07:22:51AM * 3 points

You are on to something: science in some sense is taken on faith, and morality in a similar sense is taken on faith.

But the faiths are different. The faith of science is a testable faith. Either you build stuff that works or you don't. If your musings about thermodynamics lead to a steam engine and later to an air conditioner, and your musings about electrons in a semiconductor lead to a transistor and later to a smartphone, well, that is what your high priests of science can bring you.

What is the test of a faith in moral realism? I don't wish to answer with a strawman that I will knock down; I really want to know: how do you evaluate whether your moral system is doing a good job? Do you measure fewer inconsistencies in intuition? Do you get elected to the senate? Do people vote up your karma?

Science leads to jet aircraft and HD TVs and hip replacements. 2 out of 3 Abrahamic religions lead to enjoyable promises of an eternity of bliss.

What is the promise of a moral system? What is the thing it claims to give me that I don't have just following my intuitions in a non-systematic way? I know what the high-priests of science are claiming for their mojo, and it sure seems to me they deliver. (And they don't require me to believe in their mumbo jumbo "induction" stuff in order to use their jet aircraft and smartphones). What are the moral realists offering? And even more important, what are they delivering?

Comment author: Eugine_Nier 04 July 2012 06:27:19AM 2 points

The best answer I can give you is that a moral realist today is currently in the same situation as a physical realist was before the development of the scientific method. There were lots of competing not-quite coherent theories of what it means for something to be real, but if you asked 100 people they would all agree on whether something was a rock or a glass of milk barring weirdness. Similarly, today there are lots of competing not-quite coherent theories of what it means for something to be moral, but if you asked 100 people they would all agree that killing an innocent person is wrong barring weirdness.

(The above is paraphrased from another comment that I can't locate right now.)

I realize that the above may not be the most satisfying answer, especially if the history of philosophy isn't available to you.

Comment author: mwengler 04 July 2012 04:20:00PM 1 point [-]

So perhaps we still await the development of "the moral method."

It does strike me, and I mean I have not thought of this really until right now, that law and government are the engineering branches of "the moral method" of "moral realism" as "the scientific method" corresponds to "physical realism." Economics and Sociology may be the Physics and Chemistry of "moral realism." The progress that law and government have enabled is an economic productivity, contributed to by billions of people (or at least hundreds of millions), which dwarfs that of our predecessors in the same way that our technology does.

There are at least a few interesting things about this idea. One need not "believe" in science to use the fruits of it, whereas plausibly a belief in science is necessary to contribute to developing its progress. One can be an anarchist or a communist or an ignoramus or a nihilist and benefit from the modern economy and unprecedented levels of personal security in society. Presumably any "realism" would have implications that did not depend on the state of belief in the thing which is real.

What my off-the-cuff thesis lacks is any necessity for the truth-or-falsehood of moral statements. "You ought to obey the law" or "killing in a way which is against the law is wrong" are NOT required to be meaningful statements with an objective truth value. Or are they? In some sense, the truth value of scientific statements requires the assumptions of logic and induction. One could say that it is not necessary to have a truth value associated with "all electrons repel each other" in order for me to build a smartphone which will only work if its untested electrons act the same in the future as the very, very few electrons I have actually tested in the past. So perhaps "de facto", as it were, the practitioners and advancers of law and government have a belief in "the moral method" just as non-philosopher scientists and engineers seem to have a "de facto" belief in induction.

This identification of law and government with the stuff of moral realism even has the feature that it can be wrong, or wrong-ish, just like science and engineering. ALL engineering design is done using approximations of physics. That is, we KNOW the principles behind our designs are "wrong" in that they are inexact approximations of what is really happening. We then use trial and error to develop an art of design which "usually" works, which usually keeps the thing we are designing away from where the inaccuracies of our design assumptions matter. Heck, we even have the idea that there can be better and worse law and government just as there is better and worse science.

To stretch the analogy past all reason, can I say something interesting about the moral discussions that to me seem typical and which make me want to be a nihilist? These are the discussions of "my morality comes from moral intuitions, but one of my intuitions is that my morality should be consistent, so I build these elaborate personal structures instead of just doing what feels right." Their analogy in science might be someone who assiduously records all sorts of personal data to advance his health without a clue that his better option would be to plug into the progress made in medical research. Someone who attempts to build his own smartphone through introspection instead of getting the professional product.

I don't know. Now I'll have to read about the philosophy of law and government to discover that everything I've just said has been said before, its flaws categorized into labeled branches of belief. But for now I'm pretty happy with the concept and feel as though I've just invented something, even though I've probably just dredged it up from things I've heard and read over the last half-century and, at least consciously, forgotten.

Comment author: Eugine_Nier 05 July 2012 07:08:40AM -1 points [-]

It does strike me, and I mean I have not thought of this really until right now, that law and government are the engineering branches of "the moral method" of "moral realism" as "the scientific method" corresponds to "physical realism." Economics and Sociology may be the Physics and Chemistry of "moral realism."

Given the current state of economics and sociology, I'd replace chemistry with alchemy in that metaphor. Also, foundational systems like utilitarianism and deontology are the equivalent of astronomy/astrology before they got separated.

To stretch the analogy past all reason, can I say something interesting about the moral discussions that to me seem typical and which make me want to be a nihilist? These are the discussions of "my morality comes from moral intuitions, but one of my intuitions is that my morality should be consistent, so I build these elaborate personal structures instead of just doing what feels right." Their analogy in science might be someone who assiduously records all sorts of personal data to advance his health without a clue that his better option would be to plug into the progress made in medical research. Someone who attempts to build his own smartphone through introspection instead of getting the professional product.

A better analogy might be someone who believes that he can develop a physical theory simply by introspection, without looking at the world. (It was a popular philosophical position before the scientific method was developed; after all, that's how mathematics works, and it had been successful.)

Comment author: David_Gerard 29 June 2012 08:04:53AM 2 points [-]

Doing this is, of course, a major project in philosophy. Many attempts have serious problems.

Comment author: djcb 29 June 2012 06:46:20PM 0 points [-]

I can see that... one of the obvious problems is that we can find some case where the meta-ethical system goes against our moral intuitions. This sometimes leads to attempts to make the meta-ethics incorporate this case (and then some more), but I feel it quickly becomes rather obvious that we cannot come up with any consistent system that also satisfies our intuitions. I'm a bit pessimistic that philosophers will resolve this problem soon...

On a happier note, I have found Kant's reasoning very useful for my own personal opinion-making, by constantly reminding me that if I think X about, say, genetically modified food, nuclear energy, etc., I really need to frame my opinion in terms of a rule that doesn't include the particular case, and then I try to think what this same rule would mean for other opinions I hold.

Comment author: Eugine_Nier 01 July 2012 06:48:57AM 0 points [-]

I can see that... one of the obvious problems is that we can find some case where the meta-ethical system goes against our moral intuitions. This sometimes leads to attempts to make the meta-ethics incorporate this case (and then some more), but I feel it quickly becomes rather obvious that we cannot come up with any consistent system that also satisfies our intuitions. I'm a bit pessimistic that philosophers will resolve this problem soon...

Reasoning about, e.g., mathematics or physics has the same problem, and yet in those fields we can still build the system on our intuitions while accepting that they're sometimes wrong.

Comment author: stcredzero 28 June 2012 11:30:16PM -2 points [-]

Perhaps we should view our moral intuitions as yet another evolved mechanism: imperfect and arbitrary, though they worked well enough for hunter-gatherers.

When we lived as hunter-gatherers, an individual could find a group with compatible moral intuitions or walk away from a group with incompatible ones. The possibility that an unpleasant individual's moral intuitions would affect you from one valley over was minimal.

One should note, though, that studies of murder rates amongst hunter-gatherer groups found that they were on the high side compared to industrialized societies.

Comment author: mwengler 30 June 2012 02:17:28AM 2 points [-]

When we lived as hunter-gatherers, an individual could find a group with compatible moral intuitions or walk away from a group with incompatible ones.

I suspect that this was much less true among hunter-gatherers than it is now. From what I have read of groups in the Amazon and New Guinea, if you were to walk away from your group and try to walk into another, you would most likely be killed, or possibly captured and enslaved.

Comment author: [deleted] 30 June 2012 10:08:30AM 6 points [-]

From what I have read of groups in the Amazon and New Guinea, if you were to walk away from your group and try to walk into another, you would most likely be killed, or possibly captured and enslaved.

What groups? Low-tech tribal societies in the Amazon and New Guinea aren't necessarily hunter-gatherers. Both regions have agricultural societies going back a long way.

Comment author: stcredzero 30 June 2012 03:06:04AM 3 points [-]

From what I have read of groups in the Amazon and New Guinea, if you were to walk away from your group and try to walk into another, you would most likely be killed, or possibly captured and enslaved.

Perhaps this varies because of local environmental/economic conditions. From my undergraduate studies, I seem to remember that !Kung Bushmen would sometimes walk away from conflicts into another group.

Comment author: [deleted] 30 June 2012 10:09:10AM 1 point [-]

Yes. That's true of many other mobile forager societies as well.

Comment author: taw 30 June 2012 11:37:32AM 3 points [-]

Dear everyone, please stop talking about "hunter-gatherers". We have precisely zero samples of any real Paleolithic societies unaffected by extensive contact with Neolithic cultures.

Comment author: Nisan 02 July 2012 03:00:58AM 3 points [-]

Can you elaborate on this? I mean, can you give me a reason that using the phrase "hunter-gatherer" is a mistake? I understand your second sentence but I don't understand why that's a reason.

Comment author: taw 02 July 2012 10:22:50AM 2 points [-]

People claim, with absolute certainty, all kinds of stuff about how humans supposedly lived in a "natural state", and we know just about nothing about it, other than some extremely dubious extrapolations.

A fairly safe extrapolation is that humans were always able to live in very diverse environments, so even if we somehow find one unpolluted sample (by time travel, most likely...), it will give us zero knowledge of "typical" Paleolithic humans.

The label has also been applied to countless modern and fairly recent historical societies which are definitely not living in any kind of Paleolithic-like conditions: agricultural societies in Papua New Guinea; the banana-farming Yanomami (who are everybody's favourite "hunter-gatherers" when talking about violence in the "Paleolithic"); the Inuit, who had domesticated dogs and lived in conditions as climatically removed from those of Paleolithic humans as possible; and so on.

For these reasons, pretty much 100% of statements anybody makes about "hunter-gatherers" are wrong.

One should note, though, that studies of murder rates amongst hunter-gatherer groups found that they were on the high side compared to industrialized societies.

That's a great example of all these fallacies put together. Murder rates of some people who were actually not hunter-gatherers in the first place (my bet is they refer to the Yanomami), after a fairly significant amount of contact with civilization (so not even in their "natural" state, whatever that might be), in one short time period when the research was conducted (as we know, 1939-1945 murder rates are perfectly extrapolable to the whole of European history), were found to be fairly high. This is then generalized to what all humans must have been like in prehistory.

With such a clusterfuck of fallacies happening every time anybody says anything about "hunter-gatherers", let's just stop.

Comment author: wedrifid 02 July 2012 10:47:06AM 1 point [-]

For these reasons, pretty much 100% of statements anybody makes about "hunter-gatherers" are wrong.

Assuming your premises, how the heck would you know?

Comment author: TimS 02 July 2012 02:16:55PM 3 points [-]

I think that the paragraph before the one you quoted counts as "presenting evidence."

That just leaves hyperbole - which I'm sure you've never used yourself.

Comment author: wedrifid 02 July 2012 02:30:05PM 0 points [-]

That just leaves hyperbole - which I'm sure you've never used yourself.

I try to avoid self defeating ironic hyperbole.

Comment author: TimS 02 July 2012 02:38:14PM 1 point [-]

I don't approve of taw's tone - as you note, it is more off-putting than persuasive. But "ancestral environment" is an applause light in this community. I don't see what your comment adds beyond reinforcing the applause light.

Comment author: wedrifid 02 July 2012 03:04:14PM 0 points [-]

"Ancestral Environment"? I thought he was talking about the phrase "Hunter Gatherer". The former phrase isn't even in the comment!

Comment author: RichardKennaway 03 July 2012 03:15:53PM 3 points [-]

The meaning is, however, found in the original context. stcredzero:

When we lived as hunter-gatherers

That's a reference to ancestral environment.

One should note, though, that studies of murder rates amongst hunter-gatherer groups

That's a reference to present-day hunter-gatherers, with the implication that what we see among modern groups so described is what happened among humans generally in the Paleolithic, when hunting and gathering were the only ways that people had yet invented for getting their food. This is the fallacy that taw is talking about when he says:

We have precisely zero samples of any real Paleolithic societies unaffected by extensive contact with Neolithic cultures.

To which stcredzero replied by quoting:

Bushman society is...

And so on.

Comment author: stcredzero 03 July 2012 02:02:40AM *  -2 points [-]

http://www.crinfo.org/articlesummary/10594/

Bushman society is fairly egalitarian, with power being evenly and widely dispersed. This makes coercive bilateral power-plays (such as war) less likely to be effective, and so less appealing. A common unilateral power play is to simply walk away from a dispute which resists resolution. Travel among groups and extended visits to distant relatives are common. As Ury explains, Bushmen have a good unilateral BATNA (Best Alternative to a Negotiated Agreement). It is difficult to wage war on someone who can simply walk away. Trilateral power plays draw on the power of the community to force a settlement. The emphasis on consensual conflict resolution and egalitarian ethos means that Bushmen communities will not force a solution on disputing parties. However the community will employ social pressure, by for instance ostracizing an offender, to encourage dispute resolution.

Please explain to me how Bushmen picked up the above from industrialized society. It strikes me as highly unlikely that this pattern of behavior didn't predate the industrial era.

Did you consider precisely what you were objecting to, or was this a knee-jerk reaction to a general category?

Comment author: taw 03 July 2012 02:47:51PM 2 points [-]

Bushmen lived in contact with pastoralist and then agricultural societies nearby for millennia. The idea that they represent some kind of pre-contact human nature is baseless.

"Industrialized" or not isn't relevant.