Forager Anthropology
(This is the second post in a short sequence discussing evidence and arguments presented by Christopher Ryan and Cacilda Jethá's Sex at Dawn, inspired by the spirit of Kaj_Sotala's recent discussion of What Intelligence Tests Miss. It covers Part II: Lust in Paradise and Part III: The Way We Weren't.)
Forager anthropology is a discipline that is easy to abuse. It relies on unreliable first-hand observations of easily misunderstood cultures that are frequently influenced by the presence of modern observers. These cultures are often exterminated or assimilated within decades of their discovery, making it difficult to confirm controversial claims and discoveries. But modern-day foraging societies are the most direct source of evidence we have about our pre-agricultural ancestors; in many ways, they are agriculture's control group, living in conditions substantially similar to the ones under which our species evolved. The standard narrative of human sexual evolution ignores or manipulates the findings of forager anthropology to support its claims, and this is no doubt responsible for much of its confused support.
Steven Pinker is one of the most prominent and well-respected advocates of the standard narrative, both on Less Wrong and elsewhere. Eliezer has referenced him as an authority on evolutionary psychology. One commenter on the first post in this series claimed that Pinker is "the only mainstream academic I'm aware of who visibly demonstrates the full suite of traditional rationalist virtues in essentially all of his writing." Another cited Pinker's claim that 20-60% of hunter-gatherer males were victims of lethal human violence ("murdered") as justification for a Malthusian view of human nature.
That 20-60% number comes from a claim about war casualties in a 2007 TED talk Pinker gave on "the myth of violence", for which he drew upon several important findings in forager anthropology. (The talk is based on an argument presented in the third chapter of The Blank Slate; there is a text version of the talk available, but it omits the material on forager anthropology that Ryan and Jethá critique.)
At 2:45 in the video Pinker displays a slide which reads
Until 10,000 years ago, humans lived as hunter-gatherers, without permanent settlements or government.
He also points out that modern hunter-gatherers are our best evidence for drawing conclusions about those prehistoric hunter-gatherers; in both these statements he is in accordance with nearly universal historical, anthropological, and archaeological opinion. Pinker's next slide is a chart from The Blank Slate, originally based on the research of Lawrence Keeley. Sort of. It is labeled as "the percentage of male deaths due to warfare," with bars for eight hunter-gatherer societies that range from approximately 15-60%. The problem is that of these eight cultures, zero are migratory hunter-gatherers.
TED Talks for Less Wrong
Dan Ariely talks about pain and cheating. In a nutshell: people report less pain when (i) the strongest pain comes first; (ii) they experience milder pain over a longer interval rather than more intense pain over a shorter one; (iii) they can take breaks. The data falsifies the common intuition that people prefer their pain short and intense. On cheating: in general, people tend to cheat more when (i) the stakes are tokens or goods rather than actual cash; (ii) they observe in-group members cheating successfully; they tend to cheat less when (i) actual cash is at stake; (ii) they observe out-group members cheating successfully; (iii) they are primed with moral concepts such as the Ten Commandments.
Post yours in comments. I've put a couple with the theme "how brains work" down there.
Essay-Question Poll: Voting
There has been a considerable amount of discussion scattered around Less Wrong about voting: which software features related to voting should be added or removed, what purpose voting should serve, and so on. It seems as though it would be useful to have conveniently consolidated information on how people actually vote, so we know which habits we might want to encourage or discourage are actually in use, and how prevalent they are.
1. About what percentage of comments do you vote on at all? What percentage of top-level posts?
2. Do you use the boo vote or the anti-kibitzer extensions? Why or why not?
3. What karma threshold do you use to filter what you see, if any?
4. When you vote on a post, or read it and decide not to vote on it, what features of the post are you occurrently conscious of that influence your decision either way? (Submitter, current post score, length, style, topic, spelling, whatever.) What about comments?
Wanting to Want
In response to a request, I am going to do some basic unpacking of second-order desire, or "metawanting". Basically, a second-order desire or metawant is a desire about a first-order desire.
Example 1: Suppose I am very sleepy, but I want to be alert. My desire to be alert is first-order. Suppose also that there is a can of Mountain Dew handy. I know that Mountain Dew contains caffeine and that caffeine will make me alert. However, I also know that I hate Mountain Dew1. I do not want the Mountain Dew, because I know it is gross. But it would be very convenient for me if I liked Mountain Dew: then I could drink it, and I could get the useful effects of the caffeine, and satisfy my desire for alertness. So I have the following instrumental belief: wanting to drink that can of Mountain Dew would let me be alert. Generally, barring other considerations, I want things that would get me other things I want - I want a job because I want money, I want money because I can use it to buy chocolate, I want chocolate because I can use it to produce pleasant taste sensations, and I just plain want pleasant taste sensations. So, because alertness is something I want, and wanting Mountain Dew would let me get it, I want to want the Mountain Dew.
This example demonstrates a case of a second-order desire about a first-order desire that would be instrumentally useful. But it's also possible to have second-order desires about first-order desires that one simply does or doesn't care to have.
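The way orders of desire stack has a clean recursive structure, which can be sketched as a toy data type. This is purely an illustration; the `Desire` class and `order` function are my own invention, not anything from the post:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Desire:
    subject: str
    target: Union[str, "Desire"]  # a state of the world, or another desire

def order(d: Desire) -> int:
    """A desire about a world-state is first-order; a desire about a
    desire is one order higher than the desire it is about."""
    return 1 if isinstance(d.target, str) else 1 + order(d.target)

# First-order: wanting alertness.
alertness = Desire("me", "be alert")

# Second-order (a metawant): wanting to want the Mountain Dew,
# because that first-order desire would be instrumentally useful.
want_dew = Desire("me", "drink Mountain Dew")
metawant = Desire("me", want_dew)
```

Note that this toy type only shows how orders stack; it doesn't capture negative metawants like Mimi's below, where the second-order desire is for the *absence* of the first-order one.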
Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict. Obviously, as a heroin addict, she spends a lot of her time wanting heroin. But this desire is upsetting to her. She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.
One thing that is often said is that what first-order desires you "endorse" on the second level are the ones that are your most true self. This seems like an appealing notion in Mimi's case; I would not want to say that at her heart she just wants heroin and that's an intrinsic, important part of her. But it's not always the case that the second-order desire is the one we most want to identify with the person who has it:
Example 3: Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual. He has been brought up to believe that homosexuality is gross and wrong. As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.
In this case, I think we're tempted to say that poor Larry is a gay guy who's had an alien second-order desire attached to him via his upbringing, not a natural homophobe whose first-order desires are insidiously eroding his real personality.
A less depressing example to round out the set:
Example 4: Suppose Olivia the Overcoming Bias Reader, whose very prescient mother predicted she would visit this site, is convinced on by Eliezer's arguments about one-boxing in Newcomb's Problem. However, she's pretty sure that if Omega really turned up, boxes in hand, she would want to take both of them. She thinks this reflects an irrationality of hers. She wants to want to one-box.
1Carbonated beverages make my mouth hurt. I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time.
The First Koan: Drinking the Hot Iron Ball
In the traditions of Zen in which koans are common teaching tools, it is common to use a particular story as a novice's first koan. It's the story of Joshu's Dog.
A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or not?"
Joshu answered: "Mu." [Mu is the negative symbol in Chinese, meaning "No-thing" or "Nay".]
What does this koan mean? How can we find out for ourselves?
It is important to remember certain things: Firstly, koans are not meant to be puzzles, riddles, or intellectual games. They are examples, illustrations of the state of mind that the student is expected to internalize. Secondly, they often appear paradoxical.
Paradox is a pointer telling you to look beyond it. If paradoxes bother you, that betrays your deep desire for absolutes. The relativist treats a paradox merely as interesting, perhaps amusing or even -- dreadful thought -- educational.
Thirdly, the purpose of Zen teaching isn't to acquire new conceptual baggage, but to eliminate it; not to generate Enlightenment, but to remove the false beliefs that prevent us from recognizing what we already possess. Shedding error is the point, not learning something new.
Take a look at Mumon's commentary for this koan:
To realize Zen one has to pass through the barrier of the patriarchs. Enlightenment always comes after the road of thinking is blocked. If you do not pass the barrier of the patriarchs or if your thinking road is not blocked, whatever you think, whatever you do, is like a tangling ghost. You may ask: What is a barrier of a patriarch? This one word, Mu, is it.
This is the barrier of Zen. If you pass through it you will see Joshu face to face. Then you can work hand in hand with the whole line of patriarchs. Is this not a pleasant thing to do?
If you want to pass this barrier, you must work through every bone in your body, through every pore in your skin, filled with this question: What is Mu? and carry it day and night. Do not believe it is the common negative symbol meaning nothing. It is not nothingness, the opposite of existence. If you really want to pass this barrier, you should feel like drinking a hot iron ball that you can neither swallow nor spit out.
Then your previous lesser knowledge disappears. As a fruit ripening in season, your subjectivity and objectivity naturally become one. It is like a dumb man who has had a dream. He knows about it but cannot tell it.
When he enters this condition his ego-shell is crushed and he can shake the heaven and move the earth. He is like a great warrior with a sharp sword. If a Buddha stands in his way, he will cut him down; if a patriarch offers him any obstacle, he will kill him; and he will be free in this way of birth and death. He can enter any world as if it were his own playground. I will tell you how to do this with this koan:
Just concentrate your whole energy into this Mu, and do not allow any discontinuation. When you enter this Mu and there is no discontinuation, your attainment will be as a candle burning and illuminating the whole universe.
I'll give you a hint: Joshu's reply isn't really an answer to the monk's question, it's a response induced by it. Joshu answers the question the monk didn't ask but should have - the question whose answer the monk is taking for granted in what he asks.
This morning I passed by a gym with a glass-walled front, and I saw within the building many people working at machines, moving weights back and forth. What was being accomplished? Superficially, nothing at all. Their actions would appear to be wasted; nothing was done with them. The real purpose, of course, was to exercise the body, to condition the muscles and strengthen the bones.
The point of the koan isn't to find the 'right answer'; the point of the koan is to struggle with it, and by struggling, develop one's own understanding. Contradiction and apparent contradiction are powerful tools for this purpose. Trying to understand, we usually perceive a contradiction and let the process terminate. But if we keep struggling with the problem, even though we cannot expect to achieve anything, we build within ourselves ever more complex models, ways of seeing. Eventually that complexity will be useful in dealing with other problems, ones with solutions we didn't see before.
One warning: the fact that a problem is used as a source of contradiction does not mean that it doesn't actually have an answer. Don't mistake the use for the reality.
Has a dog Buddha-nature?
This is the most serious question of all.
If you say yes or no,
You lose your own Buddha-nature.
"Self-pretending" is not as useful as we think
A few weeks ago I made a draft of a post that was originally intended to be about the same issue addressed in MBlume’s post regarding beneficial false beliefs. Coincidentally, my draft included the exact same hypothetical about entering a club believing you’re the most attractive person in the room in order to increase your chances of attracting women. There seems to be general agreement with MBlume’s “it’s ok to pretend because it’s not self-deception and produces similar results” conclusion. I was surprised to see so much agreement, considering that when I wrote my original draft I reached a completely different conclusion.
I do agree, however, that pretending may have some benefits, but those benefits are much more limited than MBlume makes them out to be. He brings up a time where pretending helped him better fit into his character in a play. Unfortunately, his anecdote is not an appropriate example of overcoming vestigial evolutionary impulses by pretending. His mind wasn’t evolutionarily programmed to “be afraid” when pretending to be someone else, it was programmed to “be afraid” when hitting on attractive women. When I am alone in my room I can act like a real alpha male all day long, but put me in front of attractive women (or people in general) and I will retreat back to my stifled self.
The only way false beliefs can overcome your obsolete evolutionary impulses is to truly believe in those false beliefs. And we all know why that would be a bad idea. Furthermore, pretending can be dangerous just like reading fiction can be dangerous. So the small benefit that pretending might give may not even be worth the cost (at times).
But there is something we can learn from these (sometimes beneficial) false beliefs.
Obviously, there is no direct causal chain from self-fulfilling beliefs to real-world success. Beliefs, per se, are not the key variables in causing success; rather, these beliefs give rise to whatever the key variables are. We should identify those key variables and find a systematic way of producing them.
With the club example, we should instead figure out what behavior changes may result from believing that every girl is attracted to you. Then, figure out which of those behaviors attract women and find a way to perfect those behaviors. This is the approach the seduction community adopts for learning how to attract women—and it works.
Same goes with public speaking. If you have a fear of public speaking, you can’t expect to pretend your fear away. There are ways of reducing unnecessary emotions; the ways that work, however, don’t depend on pretending.
Rational Groups Kick Ass
Reply to: Extreme Rationality: It's Not That Great
Belaboring of: Rational Me Or We?
Related to: A Sense That More Is Possible
The success of Yvain's post threw me off completely. My experience has been opposite to what he describes: x-rationality, which I've been working on since the mid-to-late nineties, has been centrally important to successes I've had in business and family life. Yet the LessWrong community, which I greatly respect, broadly endorsed Yvain's argument that:
There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it.
So that left me pondering what's different in my experience. I've been working on these things longer than most, and am more skilled than many, but that seemed unlikely to be the key.
The difference, I now think, is that I've been lucky enough to spend huge amounts of time in deeply rationalist organizations and groups--the companies I've worked at, my marriage, my circle of friends.
And rational groups kick ass.
An individual can unpack free will or figure out that the Copenhagen interpretation is nonsense. But I agree with Yvain that in a lonely rationalist's individual life, the extra oomph of x-rationality may well be drowned in the noise of all the other factors of success and failure.
But groups! Groups magnify the importance of rational thinking tremendously:
- Whereas a rational individual is still limited by her individual intelligence, creativity, and charisma, a rational group can promote the single best idea, leader, or method out of hundreds or thousands or millions.
- Groups have powerful feedback loops; small dysfunctions can grow into disaster by repeated reflection, and small positives can cascade into massive success.
- In a particularly powerful feedback process, groups can select for and promote exceptional members.
- Groups can establish rules/norms/patterns that 1) directly improve members and 2) counteract members' weaknesses.
- Groups often operate in spaces where small differences are crucial. Companies with slightly better risk management are currently preparing to dominate the financial space. Countries with slightly more rational systems have generated the 0.5% of extra annual growth that leads, over centuries, to dramatically improved ways of life. Even in family life, a bit more rationality can easily be the difference between gradual divergence and gradual convergence.
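The compounding claim in the last bullet is easy to check with a couple of lines. The 2% baseline growth rate here is an illustrative assumption of mine, not a figure from the post; only the 0.5% edge comes from the text:

```python
# Two economies differing only by a 0.5% edge in annual growth rate.
base, edge, years = 0.02, 0.005, 200
# Relative wealth after compounding both rates over two centuries.
ratio = ((1 + base + edge) / (1 + base)) ** years
print(f"After {years} years the faster economy is {ratio:.2f}x richer, relatively")
```

Over two centuries the half-percent edge alone multiplies relative wealth roughly 2.7-fold, which is the sense in which small differences become crucial at group scale.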
And we're not even talking about the extra power of x-rationality. Imagine a couple that truly understood Aumann, a company that grokked the Planning Fallacy, a polity that consistently tried Pulling the Rope Sideways.
When it comes to groups--sized from two to a billion--Yvain couldn't be more wrong.
Actions and Words: Akrasia and the Fruit of Self-Knowledge
Knowing other people requires intelligence,
but knowing yourself requires wisdom.
Those who overcome others have force,
but those who overcome themselves have power.
- Tao Te Ching, Chapter 33: Without Force, Without Perishing
Much has been written here about the issue of akrasia. People often report that they really, sincerely want to do something, that they recognize that certain courses of action are desirable/undesirable and that they should choose them -- but when the time comes to decide, they do otherwise. Their choices don't match what they said their choices would be.
While I'm sure many people are less than honest in reporting their intentions to others, and possibly even more aren't being honest with themselves, there are still plenty of people who are presumably sincere and honest. So how can they make their actions match their understanding of what they want? How can their choices reflect their own best judgment?
Isn't that really the wrong question?
On Seeking a Shortening of the Way
"The most instructive experiences are those of everyday life." - Friedrich Nietzsche
What is it that the readers of lesswrong are looking for? One claim that's been repeated frequently is that we're looking for rationality tricks, shortcuts and clever methods for being rational. Problem is: there aren't any.
People generally want novelty and gimmicks. They're exciting and interesting! Useful advice tends to be dull, tedious, and familiar. We've heard it all before, and it sounded like a lot of hard work and self-discipline. If we want to lose weight, we don't do the sensible and quite difficult thing and eat a balanced diet while increasing our levels of exercise. We try fad diets and eat nothing but grapefruits for a week, or we gorge ourselves on meats and abhor carbohydrates so that our metabolisms malfunction. We lose weight that way, so clearly it's just as good as exercising and eating properly, right?
We cite Zen stories but don't take the time and effort to research their contexts, while at the same time sniggering at the actual beliefs inherent in that system. We wax rhapsodic about psychedelics and dismiss the value of everyday experiences as trivial - and handwave away praise of the mundane as utilization of "applause lights".
We talk about the importance of being rational, but don't determine what's necessary to do to become so.
Some of the greatest thinkers of the past had profound insights after paying attention to parts of everyday life that most people don't give a second thought. Archimedes realized how to determine the volume of a complex solid while lounging in a bath. Galileo recognized that pendulums could be used to reliably measure time while letting his mind drift in a cathedral.
Sure, we're not geniuses, so why try to pay attention to ordinary things? Shouldn't we concern ourselves with the novel and extraordinary instead?
Maybe we're not geniuses because we don't bother paying attention to ordinary things.
Dead Aid
Followup to So You Say You're an Altruist:
Today Dambisa Moyo's book "Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa" was released.
From the book's website:
In the past fifty years, more than $1 trillion in development-related aid has been transferred from rich countries to Africa. Has this assistance improved the lives of Africans? No. In fact, across the continent, the recipients of this aid are not better off as a result of it, but worse—much worse.
In Dead Aid, Dambisa Moyo describes the state of postwar development policy in Africa today and unflinchingly confronts one of the greatest myths of our time: that billions of dollars in aid sent from wealthy countries to developing African nations has helped to reduce poverty and increase growth.
In fact, poverty levels continue to escalate and growth rates have steadily declined—and millions continue to suffer. Provocatively drawing a sharp contrast between African countries that have rejected the aid route and prospered and others that have become aid-dependent and seen poverty increase, Moyo illuminates the way in which overreliance on aid has trapped developing nations in a vicious circle of aid dependency, corruption, market distortion, and further poverty, leaving them with nothing but the “need” for more aid.
From the Global Investor Bookshop:
Dead Aid analyses the history of economic development over the last fifty years and shows how Aid crowds out financial and social capital and directly causes corruption; the countries that have caught up did so despite rather than because of Aid. There is, however, an alternative. Extreme poverty is not inevitable. Dambisa Moyo also shows how, with improved access to capital and markets and with the right policies, even the poorest nations could be allowed to prosper. If we really do want to help, we have to do more than just appease our consciences, hoping for the best, expecting the worst. We need first to understand the problem.