One argument in favor of limited life spans
It's the only absolutely reliable way of getting rid of bad leaders.
This might not be a good enough reason to oppose longevity tech, but I don't think it's easily disposed of.
Transparency and Accountability
[Added 02/24/14: After writing this post, I discovered that I had miscommunicated owing to not spelling out my thinking in sufficient detail, and also realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them). See Reflections on a Personal Public Relations Failure: A Lesson in Communication. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]
Follow-up to: Existential Risk and Public Relations, Other Existential Risks, The Importance of Self-Doubt
Over the last few days I've made a string of posts levying strong criticisms against SIAI. This activity is not one that comes naturally to me. In The Trouble With Physics Lee Smolin writes
...it took me a long time to decide to write this book. I personally dislike conflict and confrontation [...] I kept hoping someone in the center of string-theory research would write an objective and detailed critique of exactly what has and has not been achieved by the theory. That hasn't happened.
My feelings about and criticisms of SIAI are very much analogous to Smolin's feelings about and criticisms of string theory. Criticism hurts feelings and I feel squeamish about hurting feelings. I've found the process of presenting my criticisms of SIAI emotionally taxing and exhausting. I fear that if I persist for too long I'll move into the region of negative returns. For this reason I've decided to cut my planned sequence of posts short and explain what my goal has been in posting in the way that I have.
Edit: Removed irrelevant references to VillageReach and StopTB, modifying post accordingly.
Against Cryonics & For Cost-Effective Charity
Related To: You Only Live Twice, Normal Cryonics, Abnormal Cryonics, The Threat Of Cryonics, Doing your good deed for the day, Missed opportunities for doing well by doing good
Summary: Many Less Wrong posters are interested in advocating for cryonics. While signing up for cryonics is an understandable personal choice for some people, from a utilitarian point of view the money spent on cryonics would be much better spent by donating to a cost-effective charity. People who sign up for cryonics out of a generalized concern for others would do better not to sign up for cryonics and instead to donate any money that they would have spent on cryonics to a cost-effective charity. People who are motivated by a generalized concern for others to advocate the practice of signing up for cryonics would do better to advocate that others donate to cost-effective charities.
Added 08/12: The comments to this post have prompted me to add the following disclaimers:
(1) Wedrifid understood me to be placing moral pressure on people to sacrifice themselves for the greater good. As I've said elsewhere, "I don't think that Americans should sacrifice their well-being for the sake of others. Even from a utilitarian point of view, I think that there are good reasons for thinking that it would be a bad idea to do this." My motivation for posting on this topic is the one described by rhollerith_dot_com in his comment.
(2) In line with the above comment, when I say "selfish" I don't mean it with the negative moral connotations that the word carries; I mean it as a descriptive term. There are some things that we do for ourselves and there are some things that we do for others - this is as things should be. I'd welcome any suggestions for a substitute for the word "selfish" that has the same denotation but which is free of negative connotations.
(3) Wei_Dai thought that my post assumed a utilitarian ethical framework. I can see how my post may have come across that way. However, while writing the post I was not assuming that the reader subscribes to utilitarianism. When I say "we should" in my post, I mean "to the extent that we subscribe to utilitarianism, we should." While writing the post I thought that this would be clear from context, but it turns out I was mistaken on this point.
As an aside, I do think that there are good arguments for a (sophisticated sort of) utilitarian ethical framework. I will make a post about this after reading Eliezer's posts on utilitarianism.
(4) Orthonormal thinks that I'm treating cryonics differently from other expenditures. This is not the case: from my (utilitarian) point of view, expenditures should be judged exclusively by their social impact. The reason why I wrote a post about cryonics is that I had the impression that there are members of the Less Wrong community who view cryonics expenditures and advocacy as "good" in a broader sense than I believe is warranted. But (from a utilitarian point of view) cryonics is one of thousands of things to which people ascribe undue moral significance. I certainly don't think that advocacy of and expenditures on "cryonics" are worse from a utilitarian point of view than advocacy of and expenditures on something like "recycling plastic bottles".
I've also made the following modifications to my post:
(A) In response to a valid objection raised by Vladimir_Nesov, I've added a paragraph clarifying that Robin Hanson's suggestion that cryonics might be an effective charity is based on the idea that signing up will drive costs down, and an explanation of why I think my points still hold.
(B) I've added a third example of advocacy of cryonics within the Less Wrong community to make it more clear that I'm not arguing against a straw man.
Without further ado, below is the main body of the revised post.
Study: Encouraging Obedience Considered Harmful
A while back I did a couple of posts on the care and feeding of young rationalists. Though it is not new, I recently found a truly excellent post on this topic on Dale McGowan's blog, The Meming of Life. The post details a survey carried out on ordinary citizens of Hitler's Germany, searching for correlations between style of upbringing and adult moral decisions.
Everyday Germans of the Nazi period are the focus of a fascinating study discussed in the PBB seminars and in the Ethics chapter of Raising Freethinkers. For their book The Altruistic Personality, researchers Samuel and Pearl Oliner conducted over 700 interviews with survivors of Nazi-occupied Europe. Included were both “rescuers” (those who actively rescued victims of persecution) and “non-rescuers” (those who were either passive in the face of the persecution or actively involved in it). The study revealed interesting differences in the upbringing of the two groups — specifically the language and practices that parents used to teach their values.
Non-rescuers were 21 times more likely than rescuers to have been raised in families that emphasized obedience—being given rules that were to be followed without question—while rescuers were over three times more likely than non-rescuers to identify “reasoning” as an element of their moral education. “Explained,” the authors said, is the single most common word used by rescuers in describing their parents’ ways of talking about rules and ethical ideas.
The Spotlight
Sequence index: Living Luminously
Previously in sequence: Lights, Camera, Action
Next in sequence: Highlights and Shadows
Inspecting thoughts is easier and more accurate if they aren't in your head. Look at them from the outside, in another form, as if they belonged to someone else.
You may find your understanding of this post significantly improved if you read the fourth story from Seven Shiny Stories.
One problem with introspection is that the conclusions you draw about your thoughts are themselves thoughts. Thoughts, of course, can change or disappear before you can extract information about yourself from them. If a flash of unreasonable anger crosses my mind, this might stick around long enough to make me lash out, but then vanish before I discover how unreasonable it was. If thoughts weren't slippery like this, luminosity wouldn't be much of a project. So of course, if you're serious about luminosity, you need a way to pin down your thoughts into a concrete format that will hold still.
You have to pry your thoughts out of your brain.
Writing is the obvious way to do this - for me, anyway. You don't have to publicize what you extract, so it doesn't have to be aesthetic or skillful, just serviceable for your own reference. The key is to get it down in a form that you can look at without having to continue to introspect. Whether this means sketching or scribing or singing, dump your brain out into the environment and have a peek. It's easy to fool yourself into thinking that a given idea makes sense; it's harder to fool someone else. Writing down an idea automatically engages the mechanisms we use to communicate to others, helping you hold your self-analysis to a higher standard.
Lights, Camera, Action!
Sequence index: Living Luminously
Previously in sequence: The ABC's of Luminosity
Next in sequence: The Spotlight
You should pay attention to key mental events, on a regular and frequent basis, because important thoughts can happen very briefly or very occasionally and you need to catch them.
You may find your understanding of this post significantly improved if you read the third story from Seven Shiny Stories.
Luminosity is hard and you are complicated. You can't meditate on yourself for ten minutes over a smoothie and then announce your self-transparency. You have to keep working at it over a long period of time, not least because some effects don't work over the short term. If your affect varies with the seasons, or with major life events, then you'll need to keep up the first phase of work through a full year or a major life event, and it turns out those don't happen every alternate Thursday. Additionally, you can't cobble together the best quality models from snippets of introspection that are each five seconds long; extended strings of cognition are important, too, and can take quite a long time to unravel fully.
Sadly, looking at what you are thinking inevitably changes it. With enough introspection, this wouldn't influence your accuracy about your overall self - there's no reason in principle why you couldn't spend all your waking hours noting your own thoughts and forming meta-thoughts in real time - but practically speaking that's not going to happen. Therefore, some of your data will have to come from memory. To minimize the error introduced by retrieving things from storage, it's best to arrange to reflect on very recent thoughts. It may be worth your while to set up an external reminder system to periodically prompt you to look inward, both in the moment and retrospectively over the last brief segment of time. This can be a specifically purposed system (e.g. a timer set to go off every half hour or so), or you can tie it to convenient promptings from the world as-is, like being asked "What's up?" or "Penny for your thoughts".
The ABC's of Luminosity
Sequence index: Living Luminously
Previously in sequence: Let There Be Light
Next in sequence: Lights, Camera, Action!
Affect, behavior, and circumstance interact with each other. These interactions constitute informative patterns that you should identify and use in your luminosity project.
You may find your understanding of this post significantly improved if you read the second story from Seven Shiny Stories.
The single most effective thing you can do when seeking luminosity is to learn to correlate your ABC's, collecting data about how three interrelated items interact and appear together or separately.
A stands for "affect". Affect is how you feel and what's on your mind. It can be far more complicated than "enh, I'm fine" or "today I'm sad". You have room for plenty of simultaneous emotions, and different ones can be directed at different things - being on a generally even keel about two different things isn't the same as being nervous about one and cheerful about the other, and neither state is the same as being entirely focused on one subject that thrills you to pieces. If you're nervous about your performance evaluation but tickled pink that you just bought a shiny new consumer good and looking forward to visiting your cousin next week yet irritated that you just stubbed your toe, all while being amused by the funny song on the radio, that's this. For the sake of the alphabet, I'm lumping in less emotionally laden cognition here, too - what thoughts occur to you, what chains of reasoning you follow, what parts of the environment catch your attention.
B stands for "behavior". Behavior here means what you actually do. Include as a dramatically lower-weighted category those things that you fully intended to do, and actually moved to do, but were then prevented by outside circumstances from doing, or changed your mind about due to new, unanticipated information. This is critical. Fleeting designs and intentions cross our minds continually, and if you don't firmly and definitively place your evidential weight on the things that ultimately result in action, you will get subconsciously cherry-picked subsets of those incomplete plan-wisps. This is particularly problematic because weaker intentions will be dissuaded by minor environmental complications at a much higher rate. Don't worry overmuch about "real" plans that this filtering process discards. You're trying to know yourself in toto, not yourself at your best time-slices when you valiantly meant to do good thing X and were buffeted by circumstance: if those dismissed real plans represent typical dispositions you have, then they'll have their share of the cohort of actual behavior. Trust the law of averages.
C stands for "circumstance". This is what's going on around you (what time is it? what's going on in your life now and recently and in the near future - major events, minor upheavals, plans for later, what people say to you? where are you: is it warm, cold, bright, dim, windy, calm, quiet, noisy, aromatic, odorless, featureless, busy, colorful, drab, natural, artificial, pretty, ugly, spacious, cozy, damp, dry, deserted, crowded, formal, informal, familiar, new, cluttered, or tidy?). It also covers what you're doing and things inside you that are generally conceptualized as merely physical (are you exhausted, jetlagged, drugged, thirsty, hungry, sore, ill, drunk, energetic, itchy, limber, wired, shivering? are you draped over a recliner, hiding in a cellar, hang gliding or dancing or hiking or drumming or hoeing or diving?) Circumstances are a bit easier to observe than affect and behavior. If you have trouble telling where you are and what you're up to, your first priority shouldn't be luminosity. And while we often have some trouble distinguishing between various physical ailments, there are strong pressures on our species to be able to tell when we're hungry or in pain. Don't neglect circumstance when performing correlative exercises just because it doesn't seem as "the contents of your skull"-y. Seasonal affective disorder should be evidence enough that our environments can profoundly influence our feelings. And wouldn't it be weird, after all, if you felt and acted just the same while ballroom dancing, and while setting the timer on your microwave oven to reheat soup, and while crouching on the floor after having been taken hostage at the bank?
Man-with-a-hammer syndrome
What gummed up Skinner’s reputation is that he developed a case of what I always call man-with-a-hammer syndrome: to the man with a hammer, every problem tends to look pretty much like a nail.
The Psychology of Human Misjudgment is a brilliant talk given by Charlie Munger that I still return to and read every year to gain a fresh perspective. There's a lot of wisdom to be distilled from that piece, but the one thing I want to talk about today is the man-with-a-hammer syndrome.
Man-with-a-hammer syndrome is pretty simple: you think of an idea and then, pretty soon, it becomes THE idea. You start seeing how THE idea can apply to anything and everything; it becomes the universal explanation for how the universe works. Suddenly, everything you've ever thought of before must be reinterpreted through the lens of THE idea, and you're on an intellectual high. Utilitarianism is a good example of this. Once you independently discover utilitarianism, you start to believe that an entire moral framework can be constructed around a system of pleasures and pains and, what's more, that this moral system is both objective and platonic.
Counterfactual Mugging v. Subjective Probability
This has been in my drafts folder for ages, but in light of Eliezer's post yesterday, I thought I'd see if I could get some comment on it:
A couple weeks ago, Vladimir Nesov stirred up the biggest hornet's nest I've ever seen on LW by introducing us to the Counterfactual Mugging scenario.
If you didn't read it the first time, please do -- I don't plan to attempt to summarize. Further, if you don't think you would give Omega the $100 in that situation, I'm afraid this article will mean next to nothing to you.
So, those still reading, you would give Omega the $100. You would do so because if someone told you about the problem now, you could do the expected utility calculation 0.5*U(-$100)+0.5*U(+$10000)>0. Ah, but where did the 0.5s come from in your calculation? Well, Omega told you he flipped a fair coin. Until he did, there existed a 0.5 probability of either outcome. Thus, for you, hearing about the problem, there is a 0.5 probability of your encountering the problem as stated, and a 0.5 probability of your encountering the corresponding situation, in which Omega either hands you $10000 or doesn't, based on his prediction. This is all very fine and rational.
So, new problem. Let's leave money out of it, and assume Omega hands you 1000 utilons in one case, and asks for them in the other -- exactly equal utility. What if there is an urn, and it contains either a red or a blue marble, and Omega looks, maybe gives you the utility if the marble is red, and asks for it if the marble is blue? What if you have devoted considerable time to determining whether the marble is red or blue, and your subjective probability has fluctuated over the course of your life? What if, unbeknownst to you, a rationalist community has been tracking evidence of the marble's color (including your own probability estimates), and running a prediction market, and Omega now shows you a plot of the prices over the past few years?
In short, what information do you use to calculate the probability you plug into the EU calculation?
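The expected-utility comparison above, and the way it becomes sensitive to your choice of probability in the symmetric-utilon variant, can be sketched in a few lines. This is an illustration of the arithmetic only, not anything from the original post; the utility values and the assumption that utility is linear in dollars are mine.

```python
def expected_utility(p_win, u_win, u_lose):
    """Expected utility of precommitting to pay, given subjective
    probability p_win of the favorable outcome."""
    return p_win * u_win + (1 - p_win) * u_lose

# Original money version: Omega's fair coin fixes p at 0.5.
# (Assuming, for illustration, utility linear in dollars.)
eu_money = expected_utility(0.5, u_win=10000, u_lose=-100)
assert eu_money > 0  # paying the $100 is positive in expectation

# Symmetric utilon version: +1000 vs -1000. At p = 0.5 you are
# exactly indifferent, so the sign of the answer is driven entirely
# by which probability estimate you plug in.
for p in (0.3, 0.5, 0.7):
    print(p, expected_utility(p, 1000, -1000))
```

With symmetric stakes the calculation no longer has a coin flip to anchor it: an estimate of 0.3 says refuse, 0.7 says accept, and 0.5 leaves you indifferent, which is exactly why the question of *whose* probability to use (your current one, a past one, the market's) has teeth.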
Generalizing From One Example
Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective
"Everyone generalizes from one example. At least, I do."
-- Vlad Taltos (Issola, Steven Brust)
My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example:
There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?
Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.
The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery to three percent of people completely unable to form mental images.
Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.