Cached Selves
by Anna Salamon and Steve Rayhawk (joint authorship)
Related to: Beware identity
A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."
Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure can hijack your self-concept for the medium- to long-term future.
To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing. So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards. Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.
For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself. If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends. If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.
All familiar phenomena, right? You probably already discount other peoples’ views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas. But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena. And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence. (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)
Precommitting to paying Omega.
Related to: Counterfactual Mugging, The Least Convenient Possible World
"What would you do in situation X?" and "What would you like to pre-commit to doing, should you ever encounter situation X?" should, to a rational agent, be one and the same question.
Applied to Vladimir Nesov's counterfactual mugging, the reasoning is then:
Precommitting to paying $100 to Omega has expected utility of $4950 × p(Omega appears). Not precommitting has strictly less utility; therefore I should precommit to paying. Therefore I should, in fact, pay $100 in the event (Omega appears, coin is tails).
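As a quick check of the arithmetic (using the fair coin and the standard counterfactual-mugging payoffs of $10,000 on heads and a $100 payment on tails; the value of `p_omega` is purely illustrative):

```python
# Expected value of precommitting to pay Omega, conditional on Omega appearing.
# Heads: receive $10,000 (Omega pays because you would have paid on tails).
# Tails: pay $100. The coin is fair.
p_heads = 0.5
payoff_heads = 10_000
payoff_tails = -100

ev_given_omega = p_heads * payoff_heads + (1 - p_heads) * payoff_tails
print(ev_given_omega)  # 4950.0

# The unconditional expected utility scales with p(Omega appears).
p_omega = 0.01  # hypothetical value, for illustration only
print(ev_given_omega * p_omega)  # 49.5
```

Since not precommitting yields $0 in both branches, any positive p(Omega appears) makes precommitting the higher-utility choice.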
To combat the argument that it is more likely that one is insane than that Omega has appeared, Eliezer said:
So imagine yourself in the most inconvenient possible world where Omega is a known feature of the environment and has long been seen to follow through on promises of this type; it does not particularly occur to you or anyone that believing this fact makes you insane.
My first reaction was that it is simply not rational to give $100 away when nothing can possibly happen in consequence. I still believe that, with a small modification: I believe, with moderately high probability, that it will not be instrumentally rational for my future self to do so. Read on for the explanation.
Rationalist Poetry Fans, Unite!
Related to: Little Johnny Bayesian, Savanna Poets
There are certain stereotypes about what rationalists can talk about versus what's really beyond the pale. So far, Less Wrong has pretty consistently exploded those stereotypes. In the past three weeks, we've discussed everything from Atlantis to chaos magick to "9-11 Truth". But I don't think anything surprised me quite as much as learning that there are a couple of rationalists here with a genuine interest in poetry.
Poetry has not been very friendly to the rational worldview over the past few centuries. What with all the 19th century's talk of unweaving rainbows and the 20th century's talk of quadrupeds swooning into billiard balls, it's tempting to think it reflects some natural order of things, some eternal conflict between Art and Science.
But for most of human history, science and art were considered natural allies. Lucretius' De Rerum Natura, an argument for atheism and atomic theory famous for being the ancient Roman equivalent of The God Delusion, was written in poetry. All through the Middle Ages, artists worked to a philosophy of trying to depict and celebrate natural truth. And the eighteenth century saw a golden age of what was sometimes called "rationalist poetry", a versified celebration of Enlightenment principles.
When William Wordsworth launched his poetic jihad against rationalism, he called his declaration of war The Tables Turned. On a mundane level, the title referred to an argument he was having with his friend, but on a grander scale he was consciously inverting the previous order of Reason as the virtue of poetry. Thus:
Enough of Science and of Art;
Close up these barren leaves;
Come forth, and bring with you a heart
That watches and receives.
Over the next few years, he and fellow jihadis John Keats and Percy Bysshe Shelley were wildly successful in completely changing the poetic ideal. I can't begrudge them their little movement; their poetry ranks among the greatest art ever produced by humankind. But it bears repeating that there was a strong rationalist tradition in poetry before, during, and after the Romantic Era. In its honor, I thought I would share some of my favorite rationalist poems. I make no claims that this is exhaustive, representative, or anything else besides my personal choices.
Little Johnny Bayesian
Followup to: Rationalist Storybooks: A Challenge
This was originally a comment in response to a challenge to create a nursery rhyme conveying rationality concepts, but, at the suggestion of Eliezer, I've made it into its own post.
Little Johnny thought he was very bright,
But the schoolkids did not -- they would laugh when he came in sight.
He could count, sing, and guess the weather.
Then one day, Big Bill said "Real bright boys will grow a feather."
"Ach!" he cried, "Could it be true?"
"Then I'm not bright, which makes me blue."
So he went home, and searched all over.
And then found growth on his head, clear as a clover.
Dead Aid
Followup to: So You Say You're an Altruist
Today Dambisa Moyo's book "Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa" was released.
From the book's website:
In the past fifty years, more than $1 trillion in development-related aid has been transferred from rich countries to Africa. Has this assistance improved the lives of Africans? No. In fact, across the continent, the recipients of this aid are not better off as a result of it, but worse—much worse.
In Dead Aid, Dambisa Moyo describes the state of postwar development policy in Africa today and unflinchingly confronts one of the greatest myths of our time: that billions of dollars in aid sent from wealthy countries to developing African nations has helped to reduce poverty and increase growth.
In fact, poverty levels continue to escalate and growth rates have steadily declined—and millions continue to suffer. Provocatively drawing a sharp contrast between African countries that have rejected the aid route and prospered and others that have become aid-dependent and seen poverty increase, Moyo illuminates the way in which overreliance on aid has trapped developing nations in a vicious circle of aid dependency, corruption, market distortion, and further poverty, leaving them with nothing but the “need” for more aid.
From the Global Investor Bookshop:
Dead Aid analyses the history of economic development over the last fifty years and shows how Aid crowds out financial and social capital and directly causes corruption; the countries that have caught up did so despite rather than because of Aid. There is, however, an alternative. Extreme poverty is not inevitable. Dambisa Moyo also shows how, with improved access to capital and markets and with the right policies, even the poorest nations could be allowed to prosper. If we really do want to help, we have to do more than just appease our consciences, hoping for the best, expecting the worst. We need first to understand the problem.
Science vs. art
In the comments on Soulless Morality, a few people mentioned contributing to humanity's knowledge as an ultimate value. I used to place a high value on this myself.
Now, though, I doubt whether making scientific advances would give me satisfaction on my deathbed. All you can do in science is discover something before someone else discovers it. (It's a lot like the race to the North Pole, which struck me as stupid when I was a child; yet I never transferred that judgement to scientific races.) The short-term effects of your discovering something sooner might be good, and might not. The long-term effects are likely to be to bring about apocalypse a little sooner.
Art is different. There's not much downside to art. There are some exceptions - romance novels perpetuate destructive views of love; 20th-century developments in orchestral music killed orchestral music; and Ender's Game has warped the psyches of many intelligent people. But artists seldom worry that their art might destroy the world. And if you write a great song, you've really contributed, because no one else would have written that song.
EDIT: What is above is instrumental talk. I find that, as I get older, science fails to satisfy me as much. I don't assign it the high intrinsic value I used to. But it's hard for me to tell whether this is really an intrinsic valuation, or the result of diminishing faith in its instrumental value.
I think that people who value rationality tend to place an unusually high value on knowledge. Rationality requires knowledge; but that gives knowledge only instrumental value. It doesn't (can't, by definition) justify giving knowledge intrinsic value.
What do the rest of you think? Is there a strong correlation between rationalism, giving knowledge high intrinsic value, and giving art low intrinsic value? If so, why? And which would you rather be - a great scientist, or a great artist of some type? (Pretend that great scientists and great artists are equally well-paid and sexually attractive.)
(I originally wrote this as over-valuing knowledge and under-valuing art, but Roko pointed out that that's incoherent.)
Under a theory that intrinsic and instrumental values are separate things, there's no reason why giving science a high instrumental value should correlate with giving it a high intrinsic value, or vice-versa. Yet the people here seem to be doing one of those things.
My theory is that we can't keep intrinsic and instrumental values separate from each other. We attach positive valences to both, and then operate on the positive valences. Or, we can't distinguish our intrinsic values from our instrumental values by introspection. (You may have noticed that I started using examples that refer to both intrinsic and instrumental values. I don't think I can separate them, except retrospectively; and with about as much accuracy as a courtroom witness asked to testify about an event that took place 20 years ago.)
It's tempting to mention friends and family in here too, as another competing fundamental value. But that would demand solving the relationship between personal values that you yourself take, and the valuations you would want a society or a singleton AI to make. That's too much to take on here. I want to talk just about intrinsic value given to science vs. art.
Oh, and saying science is an art is a dodge. You then have to say whether you value the knowledge, or the artistic endeavor. Also, ignore the possibility that your scientific work can make a safe Singularity. That would be science as instrumental value. I'm asking about science vs. art as intrinsic values.
EDIT: An obvious explanation: I was assuming that people here want to be rational as an instrumental value, and that we should find the distribution of intrinsic values to be the same as in the general populace. But of course some people are drawn here because rationality is an intrinsic value to them, and this heavily biases the distribution of intrinsic values found here.
The Skeptic's Trilemma
Followup to: Talking Snakes: A Cautionary Tale
Related to: Explain, Worship, Ignore
Skepticism is like sex and pizza: when it's good, it's very very good, and when it's bad, it's still pretty good.
It really is hard to dislike skeptics. Whether or not their rational justifications are perfect, they are doing society a service by raising the social cost of holding false beliefs. But there is a failure mode for skepticism. It's the same as the failure mode for so many other things: it becomes a blue vs. green style tribe, demands support of all 'friendly' arguments, enters an affective death spiral, and collapses into a cult.
What does it look like when skepticism becomes a cult? Skeptics become more interested in supporting their "team" and insulting the "enemy" than in finding the truth or convincing others. They begin to think "If assigning .001% probability to Atlantis and not accepting its existence without extraordinarily compelling evidence is good, then assigning 0% probability to Atlantis and refusing to even consider any evidence for its existence must be great!" They begin to deny any evidence that seems pro-Atlantis, and cast aspersions on the character of anyone who produces it. They become anti-Atlantis fanatics.
Wait a second. There is no lost continent of Atlantis. How do I know what a skeptic would do when confronted with evidence for it? For that matter, why do I care?
Dialectical Bootstrapping
"Dialectical Bootstrapping" is a simple procedure that may improve your estimates. This is how it works:
- Estimate the number in whatever manner you usually would estimate. Write that down.
- Assume your first estimate is off the mark.
- Think about a few reasons why that could be. Which assumptions and considerations could have been wrong?
- What do these new considerations imply? Was the first estimate rather too high or too low?
- Based on this new perspective, make a second, alternative estimate.
Herzog and Hertwig find that the average of the two estimates (in a historical-date estimating task) is more accurate than the first estimate (Edit: or than the average of two estimates made without the "assume you're wrong" manipulation). To put the finding in an OB/LW-centric manner, this procedure (sometimes, partially) avoids Cached Thoughts.
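The aggregation step above is just simple averaging of the two guesses. A minimal sketch (the example numbers are hypothetical; the procedure itself only prescribes combining your first guess with the "assume-you-were-wrong" second guess):

```python
def dialectical_estimate(first_estimate, second_estimate):
    """Combine a first guess with a second, adversarially generated guess
    by simple averaging, as in the dialectical bootstrapping procedure."""
    return (first_estimate + second_estimate) / 2

# Hypothetical historical-date task: in what year did event X occur?
first = 1850   # initial gut estimate
second = 1820  # revised estimate after arguing against the first
print(dialectical_estimate(first, second))  # 1835.0
```

The gain comes from the second estimate drawing on considerations the first one ignored, so the two errors partially cancel when averaged.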
Software tools for community truth-seeking
In reply to: Community Epistemic Practice
There are software tools that may be helpful for community truth-seeking. For example, truthmapping.com is described very well here. There is also debategraph.org, and I'm sure there are others.