What is the group selection debate?
Related to: Group selection update, The tragedy of group selectionism
tl;dr: In competitive selection processes, selection is a two-place relation: there's something being selected (a cause), and something it's being selected for (an effect). The phrase "group-level gene selection" helps dissolve questions and confusion surrounding the less descriptive phrase "group selection".
(Essential note for new readers on reduction: Reality does not seem to keep track of different "levels of organization" and apply different laws at each level; rather, it seems that the patterns we observe at higher levels are statistical consequences of the laws and initial conditions at the lower levels. This is the "reductionist thesis.")
When I first encountered people debating "whether group selection is real", I couldn't see what there was to possibly debate about. I've since realized the debate is mostly a confusion arising from a cognitive misuse of a two-place "selection" relation.
Causes being selected versus effects they're being selected for.
A gene is an example of a Replicating Cause. (So is a meme; let's postpone that discussion.) A gene has many effects, one of which is that what we call "copies" of it tend to crop up in reality, through various mechanisms that involve cellular and organismal reproduction.
For example, suppose a particular human gene X causes cells containing it to immediately reproduce without bound, i.e. the gene is "cancerous". One effect is that there will soon be many more cells with that gene, hence more copies of the gene. Another effect is that the human organism containing it is liable to die without passing it on, hence fewer copies of the gene (once the dead organism starts to decay). If that's what happens, the gene itself can be considered unfit: all things considered, its various effects eventually lead it to stop existing.
(An individual in the next generation can still "get cancer", though, if a mutation produces a new cancerous gene, Y. This is what happens in reality.)
Thus, cancers are examples of where higher-complexity mechanisms trump lower-complexity mechanisms: organism-level gene selection versus cellular-level gene selection. Note that the Replicating Cause being selected is always the gene, but it is being selected for its net effects occurring on various levels.
So what's left to debate about?
Self-empathy as a source of "willpower"
tl;dr: Dynamic consistency is a better term for "willpower" because its meaning is robust to changes in how we think consistent behavior actually manages to happen. One can boost consistency by fostering interactions between mutually inconsistent sub-agents to help them better empathize with each other.
Despite the common use of the term, I don't think of my "willpower" as an expendable resource, and mostly it just doesn't feel like one. Let's imagine Bob, who is somewhat overweight, likes to eat cake, and wants to lose weight to be more generically attractive and healthy. Bob often plans not to eat cake, but changes his mind, and then regrets it, and then decides he should indulge himself sometimes, and then decides that's just an excuse-meme, etc. Economists and veteran LessWrong readers know this oscillation between value systems is called dynamic inconsistency (q.v. Wikipedia). We can think of Bob as oscillating between being two different idealized agents living in the same body: a WorthIt agent, and a NotWorthIt agent.
The feeling of NotWorthIt-Bob's (in)ability to control WorthIt-Bob is likely to be called "(lack of) willpower", at least by NotWorthIt-Bob, and maybe even by WorthIt-Bob. But I find the framing and language of "willpower" fairly unhelpful. Instead, I think NotWorthIt-Bob and WorthIt-Bob just aren't communicating well enough. They try to ignore each other's relevance, but if they could both be present at the same time and actually talk about it, like two people in a healthy relationship, maybe they'd figure something out. I'm talking about self-empathy here, which is opposite to self-sympathy: relating to emotions of yours that you are not immediately feeling. Haven't you noticed you're better at convincing people to change their minds when you actually empathize with their position during the conversation? The same applies to convincing yourself.
Don't ask "Do I have willpower?", but "Am I a dynamically consistent team?"
Morality and relativistic vertigo
tl;dr: Relativism bottoms out in realism by objectifying relations between subjective notions. This should be communicated using concrete examples that show its practical importance. It implies in particular that morality should think about science, and science should think about morality.
Sam Harris attacks moral uber-relativism when he asserts that "science can answer moral questions". Countering the objection that morality is too imprecise to be treated by science, he makes an excellent comparison: "healthy" is not a precisely defined concept, but no one is crazy enough to claim that medicine cannot answer questions of health.
What needs adding to his presentation (which is worth seeing, though I don't entirely agree with it) is what I consider the strongest concise argument in favor of science's moral relevance: that morality is relative simply means that the task of science is to examine absolute relations between morals. For example, suppose you uphold the following two moral claims:
1. "Teachers should be allowed to physically punish their students."
2. "Children should be raised not to commit violence against others."
First of all, note that questions of causality are significantly more accessible to science than was thought possible before 2000. Now suppose a cleverly designed, non-invasive causal analysis found that physically punishing children, frequently or infrequently, causes them to be more likely to commit criminal violence as adults. Would you find this discovery irrelevant to your adherence to these morals? Absolutely not. You would reflect and realize that you needed to prioritize them in some way. Most would prioritize the second one, but in any case, science will have made a valid impact.
So although either of the two morals is purely subjective on its own, how these morals interrelate is a question of objective fact. Though perhaps obvious, this idea has some seriously persuasive consequences and is not to be taken lightly. Why?
First of all, you might change your morals in response to them not relating to each other in the way you expected. Ideas parse differently when they relate differently. "Teachers should be allowed to physically punish their students" might never feel the same to you after you find out it causes adult violence. Even if it originally felt like a terminal (fundamental) value, your prioritization of (2) might make (1) slowly fade out of your mind over time. In hindsight, you might just see it as an old, misinformed instrumental value that was never in fact terminal.
Second, as we increase the number of morals under consideration, the number of relations for science to consider grows rapidly, as (n² - n)/2: we have many more moral relations than morals themselves. Suddenly the old disjointed list of untouchable maxims called "morals" fades into the background, and we see a throbbing circulatory system of moral relations, objective questions and answers without which no person can competently reflect on her own morality. A highly prevalent moral like "human suffering is undesirable" looks like a major organ: important on its own to a lot of people, with lots of connections in and out for science to examine.
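To see how quickly the relations outnumber the morals themselves, here is a minimal sketch (the counts are straightforward combinatorics, nothing more):

```python
# Number of pairwise relations among n morals: "n choose 2" = (n**2 - n) / 2
def pairwise_relations(n: int) -> int:
    return (n * n - n) // 2

for n in [2, 5, 10, 20]:
    print(n, pairwise_relations(n))
# 5 morals already yield 10 pairwise relations; 20 morals yield 190.
```

The quadratic growth is the whole point: the web of relations quickly dwarfs the list of morals it connects.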
Treating relativistic vertigo
To my best recollection, I have never heard the phrase "it's all relative" used to an effect that didn't involve stopping people from thinking. When the topic of conversation — morality, belief, success, rationality, or what have you — is suddenly revealed or claimed to depend on a context, people find it disorienting, often to the point of feeling the entire discourse has been and will continue to be "meaningless" or "arbitrary". Once this happens, it can be very difficult to persuade them to keep thinking, let alone thinking productively…
Break your habits: be more empirical
tl;dr: The neurotypical attitude that "You think too much" might be better parsed as "You don't experiment enough." Once you have an established procedure for living optimally in «setting», be a good scientist and keep trying to falsify your theory when it's not too costly to do so.
(Note: in aspects of life where you're impulsive, don't introspect enough, or have poor self discipline, this post is probably advice in the wrong direction.)
Alice is highly analytically minded. She always walks the same most-efficient route to work, only dances tango and salsa, and refuses to deviate even on rare occasions from her carefully planned schedule. She has judged carefully from experience that the expected value of dating is too low to be worth her time, and will only watch a movie if at least 3 of her 5 closest friends recommend it. She travels only when it relates to her job, to ensure the trip has a purpose and to minimize unnecessary transportation costs. Oh, and she also thinks a lot. About everything.
Bob often tells Alice that she "thinks too much", advice that rarely if ever resonates. But consider that Bob may be sensing a legitimate imbalance: Alice may be doing too much analysis with not enough data. He can tell she thinks way more than he does, and blames that for the imbalance, suggesting that Alice should "turn off her brain". But Alice can't agree. Why would she ever waste a resource as constantly applicable and available as her mind? That seems like a terrible idea. So here's a better one: Alice, if you're reading this, don't turn your mind off... turn it outward.
When (analysis:data) looks too big, just try turning up the data. There's no need to get stupider or anything. When it's not overly costly, you should deviate from your usual theories of optimal behavior for the sake of expected information gain. Even in theory, empiricism is necessary... For a Bayesian optimizing agent in an uncertain world, information has positive expected utility, and experiments have positive expected information. Ergo, do them sometimes! And what sort of experiment do I mean?
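The claim that experiments have positive expected information (and hence positive expected utility for a Bayesian agent) can be made concrete with a toy calculation; the payoffs and signal reliability below are made up purely for illustration:

```python
# Two equally likely states of the world, and two actions:
# action "a" pays 1 in state A, action "b" pays 1 in state B.
p_A = 0.5

# Without experimenting, the best you can do is pick an action blindly.
eu_no_info = max(p_A, 1 - p_A)  # expected payoff 0.5

# Now run an "experiment": a signal that reports the true state
# correctly with probability 0.8. By symmetry, the posterior on the
# reported state is 0.8, so acting on the signal pays off with
# probability 0.8 regardless of which way it points.
accuracy = 0.8
eu_with_info = accuracy

value_of_information = eu_with_info - eu_no_info
print(eu_no_info, eu_with_info, round(value_of_information, 3))  # 0.5 0.8 0.3
```

Even a noisy experiment strictly raises the expected payoff here, which is the Bayesian justification for occasionally deviating from your "optimal" routine to gather data.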
Don't judge a skill by its specialists
tl;dr: The marginal benefit of learning a skill shouldn't be judged heavily by the performance of people who have had it for a long time. People are unfortunately susceptible to these poor judgments via the representativeness heuristic.
Beware of the following kludgy argument, and warn others about it; I hear it often and have to dispel or refine it:
"Naively, learning «skill type» should help my performance in «domain». But people with «skill type» aren't significantly better at «domain», so learning it is unlikely to help me."
When an obvious mediating factor explains the specialists' performance, skills otherwise judged "inapplicable" might instead present low-hanging fruit for improvement. But people too often toss them away, using biased heuristics to continue being lazy and mentally stagnant. Here are some parallel examples to give the general idea (these are just illustrative, and might be wrong):
Weak argument: "Gamers are awkward, so learning games won't help my social skills."
Mediating factor: Lack of practice with face-to-face interaction.
Ideal: Socialite acquires moves-ahead thinking and learns about signalling to help get a great charity off the ground.
Weak argument: "Physicists aren't good at sports, so physics won't help me improve my game."
Mediating factor: Lack of exercise.
Ideal: Athlete or coach learns basic physics and tweaks training to gain a leading edge.
Weak argument: "Mathematicians aren't romantically successful, so math won't help me with dating."
Mediating factor: Aversion to unstructured environments.
Ideal: Serial dater learns basic probability to combat cognitive biases in selecting partners.
Weak argument: "Psychologists are often depressed, so learning psychology won't help me fix my problems."
Mediating factor: Time spent with unhappy people.
Ideal: College student learns basic neuropsychology and restructures study/social routine to better accommodate unconscious brain functions.
Composting fruitless debates
Why do long, uninspiring, and seemingly-childish debates sometimes emerge even in a community like LessWrong? And what can we do about them? The key is to recognize the potentially harsh environmental effect of an audience, and use a dying debate to fertilize a more sheltered private conversation.
Let me start by saying that LessWrong generally makes excellent use of public debate, and by naming two things I don't believe are solely responsible for fruitless debates here: rationalization biases and self-preservation1. When your super-important debate grows into a thorny mess, the usual aversion to saying various forms of "just drop it" is about signaling that:
1. you're not skilled enough to continue arguing, so you'd look bad,
2. the other person isn't worth your time, in which case they'd be publicly insulted and compelled to continue with at least one self-defense comment, extending the conflict, or
3. the other person is right, which would risk spreading what appear to be falsehoods.
"Stop the wrongness", the last concern, is in my opinion the most persistent here simply because it is the least misguided. It's practically the name of the site. Many LessWrong users seem to share a sincere, often altruistic desire to share truth, abolish falsehood, and overcome conflict. Public debate is a selection mechanism generally used very effectively here to grow and harvest good arguments. But we can still benefit from defusing the weed-like quibbling that sometimes shows up in the harsh environment of debate, and for that you need a response that avoids the problematic signals above. So try this:
"I'm worried that debating this more here won't be useful to others, but I want to keep working on it with you, so I'm responding via private message. Let's post on it again once we either agree or better organize our disagreement. Hopefully at least one of us will learn and refine a new argument from this conversation."
Take a moment to see how this carefully avoids (1)-(3). Then you can try changing the tone of the private message to be more collaborative than competitive; the change in medium will help mark the transition. This way you'll each be less afraid of having been wrong and more concerned with learning to be right, so rationalization bias will also be diminished. Moreover, much social drama can dissipate without the pressure of the audience environment (I imagine this might contribute to couples fighting more after they have children, though this is just anecdotal speculation). Despite being perhaps obvious, these effects are not to be underestimated!
But hang on... if you're convinced someone is very wrong, is it okay to leave such a debate hanging midstream in public? Why doesn't "stop the wrongness" trump our social concerns and compel us to flog away at our respective puddles of horsemeat?
Physicalism: consciousness as the last sense
Follow-up to There just has to be something more, you know? and The two insights of materialism.
I have suggested that one cause of the common reluctance to consider physicalism (in particular, the thesis that our minds can in principle be characterized entirely by physical states) is an asymmetry in how people perceive characterization. This can be alleviated by analogy to how our external senses can supervene on each other, and how abstract manipulations of those senses using recording, playback, and editing technologies have made such characterizations useful and intuitive.
We have numerous external senses, and at least one internal sense that people call "thinking" or "consciousness". In part because you and I can point our external senses at the same objects, collaborative science has done a great job characterizing them in terms of each other. The first thing is to realize the symmetry and non-triviality of this situation.
First, at a personal level: say you've never sensed a musical instrument in any way, and for the first time, in the dark, you hear a cello playing. Then later, you see the actual cello. You probably wouldn't immediately recognize these perceptions as being of the same physical object. But watching and listening to the cello playing at the same time would certainly help, and physically intervening yourself, seeing that you can change the pitch of the note by placing your fingers on the strings, would seal the deal: you'd start thinking of that sound, that sight, and that tactile sense as all coming from one object, "cello".
Before moving on, note how in these circumstances we don't conclude that "only sight is real" and that sound is merely a derivative of it, but simply that the two senses are related and can characterize each other, at least roughly speaking: when you see a cello, you know what sort of sounds to expect, and conversely.
Next, consider the more precise correspondence that collaborative science has provided, which follows a similar trend: in the theory characterizing sound as longitudinal compression waves, first came recording, then playback, and finally editing. In fact, the first intelligible recording of a human voice, made in 1860, was played back for the first time in 2008, using computers. So, suppose it's 1810, well before the invention of the phonautograph, and you've just heard the first movement of Beethoven's 5th. Then later, I unsuggestively show you a high-res version of this picture, with zooming capabilities:
MathOverflow as an example for LessWrong
"How can LessWrong maintain high post quality while obtaining new posters? How can we encourage everyone to read everything, but not everyone to post everything? How can we be less intimidating to newcomers?"
A lot of meta conversation goes on here, and the longer it goes on without a great example to learn from, the more aimless and less informed our discussion will be. Consider speculating whether blue mould from bread could treat suppurating eye infections before you knew it also treated suppurating flesh wounds... it would seem pretty random, and the discussion would be fairly aimless.
But LessWrong.com is the first successful community of its kind! There is no example to learn from, right?
I disagree with the latter: http://mathoverflow.net
[What I've already said in comments: MathOverflow is a Q&A forum for research-level mathematicians, aimed at each other, created by a math grad student and a post-doc in September 2009. As hoped, it expanded very quickly, involving many famous mathematicians around the world. You can even see that Fields Medalist (the math equivalent of a Nobel Laureate) Terence Tao is a regular contributor (bottom right).]
MathOverflow awards karma for good questions and good answers, it's moderated, it's open to new users, and it maintains a high standard so professionals stay interested and involved. Sound familiar? Well, what about these features...
The top of every page links to:
- Frequently Asked Questions
- How to write a good MathOverflow question
- meta.mathoverflow.net, a separate forum for questions/suggestions about site policy.
Have a look at those links. If your first reaction is "Sure, precise guidelines worked for a professional mathematics Q&A site...", consider this: they didn't start out as a professional mathematics Q&A site. They started out wanting to be one. They had to defend against wave after wave of undergraduate calculus students posting for homework help. They had to defy the natural propensity of the community to become an open discussion forum for mathematicians. I watched as these problems arose, were dealt with, and subsided. For example:
Too busy to think about life
Many adults maintain their intelligence through a dedication to study or hard work. I suspect this is related to sub-optimal levels of careful introspection among intellectuals.
If someone asks you what you want for yourself in life, do you have the answer ready at hand? How about what you want for others? Human values are complex, which means your talents and technical knowledge should help you think about them. Just as in your work, complexity shouldn't be a curiosity-stopper. It means "think", not "give up now."
But there are so many terrible excuses stopping you...
Too busy studying? Life is the exam you are always taking. Are you studying for that? Did you even write yourself a course outline?
Too busy helping? Decision-making is the skill you are always using, or always lacking, as much when you help others as yourself. Isn't something you use constantly worth improving on purpose?
Too busy thinking to learn about your brain? That's like being too busy flying an airplane to learn where the engines are. Yes, you've got passengers in real life, too: the people whose lives you affect.
Emotions too irrational to think about them? Irrational emotions are precisely the things you don't want thinking for you, and therefore something you want to think about. By analogy, children are often irrational, and no one sane concludes that we therefore shouldn't think about their welfare, or that they shouldn't exist.
So set aside a date. Sometime soon. Write yourself some notes. Find that introspective friend of yours, and start solving for happiness. Don't have one? For the first time in history, you've got LessWrong.com!
Reasons to make the effort:
Happiness is a pairing between your situation and your disposition. Truly optimizing your life requires adjusting both variables: what happens, and how it affects you.
You are constantly changing your disposition. The question is whether you'll do it with a purpose. Your experiences change you, and you affect those, as well as how you think about them, which also changes you. It's going to happen. It's happening now. Do you even know how it works? Put your intelligence to work and figure it out!
The road to harm is paved with ignorance. Using your capability to understand yourself and what you're doing is a matter of responsibility to others, too. It makes you better able to be a better friend.
You're almost certainly suffering from Ugh Fields: unconscious don't-think-about-it reflexes that form via Pavlovian conditioning. The issues most in need of your attention are often ones you just happen not to think about for reasons undetectable to you.
How not to waste the effort:
Don't wait till you're sad. Only thinking when you're sad gives you a skewed perspective. Don't infer that you can think better when you're sad just because that's the only time you try to be thoughtful. Sadness often makes it harder to think: you're farther from happiness, which can make happiness more difficult to empathize with and understand. Nonetheless, we often have to think when sad, because something bad may have happened that needs addressing.
Introspect carefully, not constantly. Don't interrupt your work every 20 minutes to wonder whether it's your true purpose in life. Respect that question as something that requires concentration, note-taking, and solid blocks of scheduled time. In those times, check over your analysis by trying to confound it, so lingering doubts can be justifiably quieted by remembering how thorough you were.
Re-evaluate on an appropriate time-scale. Try devoting a few days before each semester or work period to look at your life as a whole. At these times you'll have accumulated experience data from the last period, ripe and ready for analysis. You'll have more ideas per hour that way, and feel better about it. Before starting something new is also the most natural and opportune time to affirm or change long term goals. Then, barring large unexpected opportunities, stick to what you decide until the next period when you've gathered enough experience to warrant new reflection.
(The absent-minded driver problem is a mathematical example of how planning outperforms constant re-evaluation. When not engaged in deep and careful introspection, we're all absent-minded drivers to a degree.)
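In the standard Piccione-Rubinstein version of that problem, the driver passes two indistinguishable intersections and earns 0 for exiting at the first, 4 for exiting at the second, and 1 for never exiting. Committing in advance to a randomized plan beats both deterministic options; a quick sketch:

```python
# Expected payoff of committing to "continue with probability p"
# at each intersection (the driver can't tell them apart):
#   exit at first:  (1-p)    -> payoff 0
#   exit at second: p*(1-p)  -> payoff 4
#   never exit:     p*p      -> payoff 1
def expected_payoff(p: float) -> float:
    return 4 * p * (1 - p) + p * p

# Grid-search the best committed plan.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(round(best_p, 3), round(expected_payoff(best_p), 3))  # 0.667 1.333
```

The optimal plan (continue with probability 2/3, for an expected payoff of 4/3) does strictly better than always exiting (payoff 0) or always continuing (payoff 1), which is the sense in which up-front planning beats deciding afresh at every moment.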
Lost about where to start? I think Alicorn's story is an inspiring one. Learn to understand and defeat procrastination/akrasia. Overcome your cached selves so you can grow freely (definitely read their possible strategies at the end). Foster an everyday awareness that you are a brain, and in fact more like two half-brains.
These suggestions are among the top-rated LessWrong posts, so they'll be of interest to lots of intellectually-minded, rationalist-curious individuals. But you have your own task ahead of you, that only you can fulfill.
So don't give up. Don't procrastinate it. If you haven't done it already, schedule a day and time right now when you can realistically assess
- how you want your life to affect you and other people, and
- what you must change to better achieve this.
Eliezer has said I want you to live. Let me say:
I want you to be better at your life.
VNM expected utility theory: uses, abuses, and interpretation
When interpreted conservatively, the von Neumann-Morgenstern rationality axioms and utility theorem are an indispensable tool for the normative study of rationality, deserving of many thought experiments and attentive decision theory. It's one more reason I'm glad to be born after the 1940s. Yet there is apprehension about its validity, aside from merely confusing it with Bentham utilitarianism (as highlighted by Matt Simpson). I want to describe not only what VNM utility is really meant for, but a contextual reinterpretation of its meaning, so that it may hopefully be used more frequently, confidently, and appropriately.
- Preliminary discussion and precautions
- Sharing decision utility is sharing power, not welfare
- Contextual Strength (CS) of preferences, and VNM-preference as "strong" preference
- Hausner (lexicographic) decision utility
- The independence axiom isn't bad either
- Application to earlier LessWrong discussions of utility
1. Preliminary discussion and precautions
The idea of John von Neumann and Oskar Morgenstern is that, if you behave a certain way, then it turns out you're maximizing the expected value of a particular function. Very cool! And their description of "a certain way" is very compelling: a list of four reasonable-seeming axioms. If you haven't already, check out the Von Neumann-Morgenstern utility theorem, a mathematical result which makes their claim rigorous, and true.
VNM utility is a decision utility, in that it aims to characterize the decision-making of a rational agent. One great feature is that it implicitly accounts for risk aversion: not risking $100 for a 10% chance to win $1000 and 90% chance to win $0 just means that for you, utility($100) > 10%·utility($1000) + 90%·utility($0).
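Any concave utility function exhibits this kind of risk aversion. Here's a quick sketch using U(x) = √x; the square root is just an illustrative choice, not anything the theorem demands:

```python
import math

# A concave (risk-averse) decision utility, chosen purely for illustration.
def U(dollars: float) -> float:
    return math.sqrt(dollars)

sure_thing = U(100)                      # utility of keeping $100
gamble = 0.10 * U(1000) + 0.90 * U(0)    # expected utility of the bet

print(round(sure_thing, 3), round(gamble, 3))  # 10.0 3.162
# The sure $100 wins, even though both options have the same
# expected *dollar* value (0.10 * $1000 = $100).
```

No separate "risk aversion parameter" is needed: the curvature of U already encodes it.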
But as the Wikipedia article explains nicely, VNM utility is:
1. not designed to predict the behavior of "irrational" individuals (like real people in a real economy);
2. not designed to characterize well-being, but to characterize decisions;
3. not designed to measure the value of items, but the value of outcomes;
4. only defined up to a positive scalar multiple and additive constant (acting with utility function U(X) is the same as acting with a·U(X)+b, if a>0);
5. not designed to be added up or compared between a number of individuals;
6. not something that can be "sacrificed" in favor of others in a meaningful way.
[ETA] Additionally, in the VNM theorem the probabilities are understood to be known to the agent as they are presented, and to come from a source of randomness whose outcomes are not significant to the agent. Without these assumptions, its proof doesn't work.
Because of (4), one often considers marginal utilities of the form U(X)-U(Y), to cancel the ambiguity in the additive constant b. This is totally legitimate, and faithful to the mathematical conception of VNM utility.
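The affine ambiguity is easy to verify numerically: an agent ranking lotteries by expected utility ranks them identically under any transform a·U + b with a > 0. The outcome utilities below are made-up numbers for illustration:

```python
# Three outcomes with made-up utilities, and two lotteries over them.
U = {"x": 0.0, "y": 1.0, "z": 5.0}

def transform(U, a, b):
    # A positive affine transform a*U + b, with a > 0.
    return {outcome: a * u + b for outcome, u in U.items()}

def expected(U, lottery):
    # lottery: dict mapping outcome -> probability.
    return sum(p * U[o] for o, p in lottery.items())

lottery1 = {"x": 0.5, "z": 0.5}   # 50/50 between x and z
lottery2 = {"y": 1.0}             # y for sure

V = transform(U, a=3.0, b=-7.0)

# Both utility functions rank lottery1 above lottery2.
print(expected(U, lottery1) > expected(U, lottery2))   # True
print(expected(V, lottery1) > expected(V, lottery2))   # True
```

This is also why raw utility numbers from different agents can't be meaningfully added: each agent's scale and zero point are arbitrary.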
Because of (5), people often "normalize" VNM utility to eliminate ambiguity in both constants, so that utilities are unique numbers that can be added across multiple agents. One way is to declare that every person in some situation values $1 at 1 utilon (a fictional unit of measure of utility), and $0 at 0. I think a more meaningful and applicable normalization is to fix mean and variance with respect to certain outcomes (next section).
Because of (6), characterizing the altruism of a VNM-rational agent by how he sacrifices his own VNM utility is the wrong approach. Indeed, such a sacrifice is a contradiction. Kahneman suggests1, and I agree, that something else should be added or subtracted to determine the total, comparative, or average well-being of individuals. I'd call it "welfare", to avoid confusing it with VNM utility. Kahneman calls it E-utility, for "experienced utility", a connotation I'll avoid. Intuitively, this is certainly something you could sacrifice for others, or have more of compared to others. True, a given person's VNM utility is likely highly correlated with her personal "welfare", but I wouldn't consider it an accurate approximation.
So if not collective welfare, then what could cross-agent comparisons or sums of VNM utilities indicate? Well, they're meant to characterize decisions, so one meaningful application is to collective decision-making: