Ultimatums in the Territory
When you think of "ultimatums", what comes to mind?
Manipulativeness, maybe? Ultimatums are typically considered a negotiation tactic, and not a very pleasant one.
But there's a different thing that can happen, where an ultimatum is made, but where articulating it isn't a speech act but rather an observation. As in, the ultimatum wasn't created by the act of stating it, but rather, it already existed in some sense.
Some concrete examples: negotiating relationships
I had a tense relationship conversation a few years ago. We'd planned to spend the day together in the park, and I was clearly angsty, so my partner asked me what was going on. I didn't have a good handle on it, but I tried to explain what was uncomfortable for me about the relationship, and how I was confused about what I wanted. After maybe 10 minutes of this, she said, "Look, we've had this conversation before. I don't want to have it again. If we're going to do this relationship, I need you to promise we won't have this conversation again."
I thought about it. I spent a few moments simulating the next months of our relationship. I realized that I totally expected this to come up again, and again. Earlier on, when we'd had the conversation the first time, I hadn't been sure. But it was now pretty clear that I'd have to suppress important parts of myself if I was to keep from having this conversation.
"...yeah, I can't promise that," I said.
"I guess that's it then."
"I guess so."
I think a more self-aware version of me could have recognized, without her prompting, that my discomfort represented an irreconcilable part of the relationship, and that I basically already wanted to break up.
The rest of the day was a bit weird, but it was at least nice that we had resolved this. We'd realized that it was a fact about the world that there wasn't a serious relationship that we could have that we both wanted.
I sensed that when she posed the ultimatum, she wasn't doing it to manipulate me. She was just stating what kind of relationship she was interested in. It's like if you go to a restaurant and try to order a pad thai, and the waiter responds, "We don't have rice noodles or peanut sauce. You either eat somewhere else, or you eat something other than a pad thai."
An even simpler example would be that at the start of one of my relationships, my partner wanted to be monogamous and I wanted to be polyamorous (i.e. I wanted us both to be able to see other people and have other partners). This felt a bit tug-of-war-like, but eventually I realized that actually I would prefer to be single than be in a monogamous relationship.
I expressed this.
It was an ultimatum! "Either you date me polyamorously or not at all." But it wasn't me "just trying to get my way".
I guess the thing about ultimatums in the territory is that there's no bluff to call.
It happened in this case that my partner turned out to be really well-suited for polyamory, and so this worked out really well. We'd decided that if she got uncomfortable with anything, we'd talk about it, and see what made sense. For the most part, there weren't issues, and when there were, the openness of our relationship ended up just being a place where other discomforts were felt, not a generator of disconnection.
Normal ultimatums vs ultimatums in the territory
I use "in the territory" to indicate that this ultimatum isn't just a thing that's said but a thing that is true independently of anything being said. It's a bit of a poetic reference to the map-territory distinction.
No bluffing: preferences are clear
The key distinguishing piece with UITTs is, as I mentioned above, that there's no bluff to call: the ultimatum-maker isn't secretly really really hoping that the other person will choose one option or the other. These are the two best options as far as they can tell. They might have a preference: in the second story above, I preferred a polyamorous relationship to no relationship. But I preferred both of those to a monogamous relationship, and the ultimatum in the territory was me realizing and stating that.
This can actually be expressed formally, using what's called a preference vector, an approach due to Keith Hipel at the University of Waterloo. If the tables in this next bit don't make sense, don't worry about it: all the important conclusions are expressed in the text.
First, we'll note that since each of us has two options, a table can be constructed which shows four possible states (numbered 0-3 in the boxes).
This representation is sometimes referred to as matrix form or normal form, and has the advantage of making it really clear who controls which state transitions (movements between boxes). Here, my decision controls which column we're in, and my partner's decision controls which row we're in.
Next, we can consider: of these four possible states, which are most and least preferred by each person? Here are my preferences, ordered from most to least preferred, left to right. The 1s in the boxes mean that the statement on the left is true.
The order of the states represents my preferences (as I understand them) regardless of what my potential partner's preferences are. I only control movement in the top row (do I insist on polyamory or not). It's possible that they prefer no relationship to a poly relationship, in which case we'll end up in state 2. But I still prefer this state over state 1 (mono relationship) and state 0 (in which I don't ask for polyamory and my partner decides not to date me anyway). So whatever my partner's preferences are, I've definitely made a good choice for me, by insisting on polyamory.
This wouldn't be true if I were bluffing (if I preferred state 1 to state 2 but insisted on polyamory anyway). If I preferred 1 to 2, but I bluffed by insisting on polyamory, I would basically be betting on my partner preferring polyamory to no relationship, but this might backfire and get me a no relationship, when both of us (in this hypothetical) would have preferred a monogamous relationship to that. I think this phenomenon is one reason people dislike bluffy ultimatums.
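The "no bluff to call" argument can be sketched in code. This is a toy model of my own, not anything from the original analysis: the partner "types", the state utilities, and the implied state 3 (a poly relationship as the most preferred outcome) are assumptions read off the text above.

```python
# A sketch of the "no bluff to call" claim: whatever kind of partner
# I face, insisting on polyamory leads to a state I prefer, so the
# ultimatum isn't a gamble. A bluffer lacks this guarantee.

# My utility for each numbered state, higher = more preferred:
# 3 = poly relationship, 2 = no relationship (after insisting),
# 1 = mono relationship, 0 = no relationship (without insisting)
MY_UTILITY = {3: 3, 2: 2, 1: 1, 0: 0}

# Hypothetical partner "types": how each would respond to my demand.
PARTNER_TYPES = {
    "poly_ok":   {"insist": "date",    "dont_insist": "date"},
    "mono_only": {"insist": "decline", "dont_insist": "date"},
    "no_date":   {"insist": "decline", "dont_insist": "decline"},
}

def state(my_choice, partner_response):
    """Map a (demand, response) pair to the numbered state."""
    if my_choice == "insist":
        return 3 if partner_response == "date" else 2
    return 1 if partner_response == "date" else 0

def insisting_dominates(utility):
    """True if insisting is at least as good against every partner type."""
    return all(
        utility[state("insist", r["insist"])]
        >= utility[state("dont_insist", r["dont_insist"])]
        for r in PARTNER_TYPES.values()
    )

print(insisting_dominates(MY_UTILITY))  # True: no bluff to call

# A bluffer who secretly prefers mono (state 1) to walking away
# (state 2) does NOT have this guarantee: against a mono-only
# partner, the bluff backfires.
bluffer = {3: 3, 1: 2, 2: 1, 0: 0}
print(insisting_dominates(bluffer))  # False
```

The check makes the asymmetry concrete: with the true preferences, insisting is safe against every partner type; with the bluffer's preferences, a mono-only partner turns the ultimatum into exactly the losing bet described above.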
My partner's preferences turned out to be...
You'll note that they preferred a poly relationship to no relationship, so that's what we got! Although as I said, we didn't assume that everything would go smoothly. We agreed that if this became uncomfortable for my partner, then they would tell me and we'd figure out what to do. Another way to think about this is that after some amount of relating, my partner's preference vector might actually shift such that they preferred no relationship to our polyamorous one. In which case it would no longer make sense for us to be together.
UITTs release tension, rather than creating it
In writing this post, I skimmed a wikihow article about how to give an ultimatum, in which they say:
"Expect a negative reaction. Hardly anyone likes being given an ultimatum. Sometimes it may be just what the listener needs but that doesn't make it any easier to hear."
I don't know how accurate the above is in general. I think they're talking about ultimatums like "either you quit smoking or we break up". I expect that these properties of an ultimatum contribute to the negative reaction:
- it's stated angrily or otherwise demandingly
- it's more extreme than your actual preferences, because you're bluffing
- it refers to what they need to do, rather than to your own preferences
So this already sounds like UITTs would have less of a negative reaction.
But I think the biggest reason is that they represent a really clear articulation of what one party wants, which makes it much simpler for the other party to decide what they want to do. Ultimatums in the territory tend to also be more of a realization that you then share, versus a deliberate strategy. And this realization causes a noticeable release of tension in the realizer too.
Let's contrast:
"Either you quit smoking or we break up!"
versus
"I'm realizing that as much as I like our relationship, it's really not working for me to be dating a smoker, so I've decided I'm not going to. Of course, my preferred outcome is that you stop smoking, not that we break up, but I realize that might not make sense for you at this point."
Of course, what's said here doesn't necessarily correspond to the preference vectors shown above. Someone could say the demanding first thing when they actually do have a UITT preference-wise, and someone who's trying to be really NVCy or something might say the second thing even though they're actually bluffing and would prefer dating a smoker to breaking up. But I think that in general they'll correlate pretty well.
The "realizing" seems similar to what happened to me 2 years ago on my own, when I realized that the territory was issuing me an ultimatum: either you change your habits or you fail at your goals. This is how the world works: your current habits will get you X, and you're declaring you want Y. On one level, it was sad to realize this, because I wanted to both eat lots of chocolate and to have a sixpack. Now this ultimatum is really in the territory.
Another example could be realizing that not only is your job not really working for you, but that it's already not-working to the extent that you aren't even really able to be fully productive. So you don't even have the option of just working a bit longer, because things are only going to get worse at this point. Once you realize that, it can be something of a relief, because you know that even if it's hard, you're going to find something better than your current situation.
Loose ends
More thoughts on the break-up story
One exercise I have left to the reader is creating the preference vectors for the break-up in the first story. HINT: (rot13'd) Vg'f fvzvyne gb gur cersrerapr irpgbef V qvq fubj, jvgu gjb qrpvfvbaf: fur pbhyq vafvfg ba ab shgher fhpu natfgl pbairefngvbaf be abg, naq V pbhyq pbagvahr gur eryngvbafuvc be abg.
An interesting note is that to some extent in that case I wasn't even expressing a preference but merely a prediction that my future self would continue to have this angst if it showed up in the relationship. So this is even more in the territory, in some senses. In my model of the territory, of course, but yeah. You can also think of this sort of as an unconscious ultimatum issued by the part of me that already knew I wanted to break up. It said "it's preferable for me to express angst in this relationship than to have it be angst free. I'd rather have that angst and have it cause a breakup than not have the angst."
Revealing preferences
I think that ultimatums in the territory are also connected to what I've called Reveal Culture (closely related to Tell Culture, but framed differently). Reveal cultures have the assumption that in some fundamental sense we're on the same side, which makes negotiations a very different thing... more of a collaborative design process. So it's very compatible with the idea that you might just clearly articulate your preferences.
Note that there doesn't always exist a UITT to express. In the polyamory example above, if I'd preferred a mono relationship to no relationship, then I would have had no UITT (though I could have bluffed). In this case, it would be much harder for me to express my preferences, because if I leave them unclear then there can be kind of implicit bluffing. And even once articulated, there's still no obvious choice. I prefer this, you prefer that. We need to compromise or something. It does seem clear that, with these preferences, if we don't end up with some relationship at the end, we messed up... but deciding how to resolve it is outside the scope of this post.
Knowing your own preferences is hard
Another topic this post points at but doesn't explore is: how do you actually figure out what you want? I think this is a mix of skill and process. You can get better at the general skill by practising trying to figure it out (and expressing it / acting on it when you do, and seeing if that works out well). One process I can think of that would be helpful is Gendlin's Focusing. Nate Soares has written about how introspection is hard and to some extent you don't ever actually know what you want: You don't get to know what you're fighting for. But, he notes,
"There are facts about what we care about, but they aren't facts about the stars. They are facts about us."
And they're hard to figure out. But to the extent that we can do so and then act on what we learn, we can get more of what we want, in relationships, in our personal lives, in our careers, and in the world.
(This article crossposted from my personal blog.)
Schelling Point Strategy Training
There's a category of game-theoretic scenario called Battle of the Sexes, which is commonly used to demonstrate coordination problems. Two cinema-goers, traditionally a husband and wife, have agreed to go to the cinema, but haven't decided on what to see beforehand. Of the two films that are showing, she would rather see King Kong Lives, while he would rather see Big Momma's House 2. Each would rather see their non-preferred film with their spouse than see their preferred film on their own. The payoff matrix is as follows:
| | Husband: *King Kong Lives* | Husband: *Big Momma's House 2* |
|---|---|---|
| Wife: *King Kong Lives* | 2 / 1 | 0 / 0 |
| Wife: *Big Momma's House 2* | 0 / 0 | 1 / 2 |

(Each cell lists the wife's payoff / the husband's payoff.)
The two have not conferred beforehand, beyond sharing knowledge of their preferences. They are turning up to the cinema and picking an auditorium in the hope that their spouse is in there. Which should they pick? This is a classic coordination problem. The symmetry of their preferences means there is no stand-out option for them to converge on. There is no Schelling Point.[1]
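The deadlock can be checked mechanically. The following sketch (my own illustration, not part of the original scenario) enumerates the pure strategies and confirms that both "see the same film" outcomes are Nash equilibria, which is exactly why neither stands out:

```python
# Verify that the Battle of the Sexes payoff matrix has two
# pure-strategy Nash equilibria: with two equally good focal
# candidates, there is no unique point to converge on.

FILMS = ("King Kong Lives", "Big Momma's House 2")

# payoff[(wife_choice, husband_choice)] = (wife_payoff, husband_payoff)
payoff = {
    ("King Kong Lives", "King Kong Lives"): (2, 1),
    ("King Kong Lives", "Big Momma's House 2"): (0, 0),
    ("Big Momma's House 2", "King Kong Lives"): (0, 0),
    ("Big Momma's House 2", "Big Momma's House 2"): (1, 2),
}

def is_nash(wife, husband):
    """True if neither player gains by unilaterally switching films."""
    w, h = payoff[(wife, husband)]
    no_wife_gain = all(payoff[(alt, husband)][0] <= w for alt in FILMS)
    no_husband_gain = all(payoff[(wife, alt)][1] <= h for alt in FILMS)
    return no_wife_gain and no_husband_gain

equilibria = [(w, h) for w in FILMS for h in FILMS if is_nash(w, h)]
print(equilibria)  # both "same film" outcomes qualify
```

Both coordinated outcomes survive the check and the two mismatched ones don't, so nothing in the payoff structure itself breaks the tie.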
Except I'm going to argue that there is.
Shoehorning an example of a Schelling Point into the above scenario, we might imagine that one of the two films is being billed as "an ideal romantic treat to share with your spouse" (which one that would be, I'm not entirely sure). But even in the absence of a "natural" Schelling Point, there's no reason we can't make one. All we need is to identify procedures that would reliably elevate one of these options to our attention. Then it becomes a question of selecting which of these procedures is most likely to be selected by the other agent in the scenario.
I am now going to instigate a multidimensional instance of Battle of the Sexes with all the readers of this post. Below are sixteen randomly-ordered films. I am going to select one, and invite you to do the same. The object of the exercise is for all of us to pick the same one. I will identify my selection, and the logic behind it, in rot13 after the list.
Breakfast at Tiffany's
William Shakespeare's Romeo and Juliet
E.T. the Extra-Terrestrial
Children of the Corn
An American Werewolf in London
To Kill a Mockingbird
Harold and Maude
The Day the Earth Stood Still
Duck Soup
Highlander
Fantasia
Heathers
Forbidden Planet
Butch Cassidy and the Sundance Kid
Grosse Pointe Blank
Mrs. Doubtfire
Urer vf na vapbafrdhragvny fragrapr gb guebj bss crbcyr jub pna vagrecerg guvf plcure ba fvtug ol abj. Zl fryrpgvba jnf na nzrevpna jrerjbys va Ybaqba. Gur cebprqher V fryrpgrq jnf gur svefg svyz nycunorgvpnyyl. Guvf frrzf yvxr gur zbfg "boivbhf" cebprqher sbe eryvnoyl fryrpgvat n fvatyr vgrz sebz gur frg. Cbffvoyl n zber "boivbhf" bar jbhyq fvzcyl or gb fryrpg gur svefg bar ba gur yvfg (Oernxsnfg ng Gvssnal'f va guvf pnfr), ohg V jnf bcrengvat ba gur nffhzcgvba gung gur yvfg jnf abg arprffnevyl eryvnoyl-beqrerq (juvpu V gevrq gb pbairl ol qrfpevovat gur yvfg nf "enaqbzyl-beqrerq", ohg pbhyqa'g ernyyl rkcyvpvgyl fgngr jvgubhg cbffvoyl tvivat n ovt uvag nf gb gur cebprqher V pubfr. Guvf jbhyq unir fcbvyrq guvatf n yvggyr.
I have no idea if that worked. Whether or not it did, it seems to me that the general skill of identifying popular procedures for designating Schelling Points is possibly worth developing. It also seems to me that once a handful of common strategies for identifying Schelling Points are known to a group, some effort has to be put into constructing scenarios in which that group can't coordinate. This forms the outline of an adversarial game (provisionally named Schelling Point Strategy Training), whereby two teams take it in turns to construct and present a set of options which the other team has to coordinate on. I am idly toying with running a session of this at a future London Less Wrong meetup.
[1] There is actually an unrelated meta-strategy here, whereby on all disputes one designated partner acquiesces to the wishes of the other. This behaviour is also far from unheard of in romantic partnerships. While this doesn't seem very egalitarian, I am wondering if it actually becomes a reasonable trade-off for partnerships which face coordination problems on a regular basis.
Notes on the Psychology of Power
Luke/SI asked me to look into what the academic literature might have to say about people in positions of power. This is a summary of some of the recent psychology results.
The powerful or elite are fast-planning abstract thinkers who take action (1) in order to pursue single or minimal objectives; they favor strict rules for their stereotyped out-group underlings (2), but are rationalizing (3) and hypocritical when it serves their interests (4), especially when they feel secure in their power. They break social norms (5, 6) or ignore context (1), which turns out to be worsened by disclosure of conflicts of interest (7), and they lie fluently without mental or physiological stress (6).
What are powerful members good for? They can help in shifting among equilibria: solving coordination problems or inducing contributions towards public goods (8), and their abstracted Far perspective can be better than the concrete Near of the weak (9).
1. Galinsky et al 2003; Guinote 2007; Lammers et al 2008; Smith & Bargh 2008
2. Eyal & Liberman
3. Rustichini & Villeval 2012
4. Lammers et al 2010
5. Kleef et al 2011
6. Carney et al 2010
7. Cain et al 2005; Cain et al 2011
8. Eckel et al 2010
9. Slabu et al; Smith & Trope 2006; Smith et al 2008
Slowing Moore's Law: Why You Might Want To and How You Would Do It
In this essay I argue the following:
Brain emulation requires enormous computing power; enormous computing power requires further progression of Moore’s law; further Moore’s law relies on large-scale production of cheap processors in ever more-advanced chip fabs; cutting-edge chip fabs are both expensive and vulnerable to state actors (but not non-state actors such as terrorists). Therefore: the advent of brain emulation can be delayed by global regulation of chip fabs.
Full essay: http://www.gwern.net/Slowing%20Moore%27s%20Law
[Poll] Who looks better in your eyes?
This is a thread where I'm trying to figure out a few things about signalling on LessWrong, and I need some information, so please answer the poll immediately after reading about the two individuals. The two individuals:
A. Sees that an interpretation of reality shared by others is not correct, but tries to pretend otherwise for personal gain and/or safety.
B. Fails to see that an interpretation of reality shared by others is flawed. He is therefore perfectly honest in sharing that interpretation of reality with others. The reward regime for outward behaviour is the same as with A.
To add a trivial inconvenience that matches the inconvenience of answering the poll before reading on, my comments on what I think the two individuals signal, what the trade-off is, and what I speculate the results might be here versus the general population are behind this link.
The Goal of the Bayesian Conspiracy
Suppose that there were to exist such an entity as the Bayesian Conspiracy.
I speak not of the social group of that name, the banner under which rationalists meet at various conventions – though I do not intend to disparage that group! Indeed, it is my fervent hope that they may in due time grow into the entity which I am setting out to describe. No, I speak of something more like the “shadowy group of scientists” which Yudkowsky describes, tongue (one might assume) firmly in cheek. I speak of such an organization which has been described in Yudkowsky's various fictional works, the secret and sacred cabal of mathematicians and empiricists who seek unwaveringly for truth... but set in the modern-day world, perhaps merely the seed of such a school, an organization which can survive and thrive in the midst of, yet isolated from, our worldwide sociopolitical mess. I ask you, if such an organization existed, right now, what would – indeed, what should – be its primary mid-term (say, 50-100 yrs.) goal?
I submit that the primary mid-term goal of the Bayesian Conspiracy, at this stage of its existence, is and/or ought to be nothing less than world domination.
Before the rotten fruit begins to fly, let me make a brief clarification.
The term “world domination” is, unfortunately, rather socially charged, bringing to mind an image of the archetypal mad scientist with marching robot armies. That's not what I'm talking about. My usage of the phrase is intended to evoke something slightly less dramatic, and far less sinister. “World domination”, to me, actually describes rather a loosely packed set of possible world-states. One example would be the one I term “One World Government”, wherein the Conspiracy (either openly or in secret) is in charge of all nations via an explicit central meta-government. Another would be a simple infiltration of the world's extant political systems, followed by policy-making and cooperation which would ensure the general welfare of the world's entire population – control de facto, but without changing too much outwardly. The common thread is simply that the Conspiracy becomes the only major influence in world politics.
(Forgive my less-than-rigorous definition, but a thorough examination of the exact definition of the word “influence” is far, far outside the scope of this article.)
So there is my claim. Let me tell you why I believe this is the morally correct course of action.
Let us examine, for a moment, the numerous major good works which are currently being openly done by rationalists, or with those who may not self-identify as rationalists, but whose dogmas and goals accord with ours. We have the Singularity Institute, which is concerned with ensuring that our technological, transhumanistic advent happens smoothly and with a minimum of carnage. We have various institutions worldwide advocating and practicing cryonics, which offers a non-zero probability of recovery from death. We have various institutions also who are working on life extension technologies and procedures, which offer to one day remove the threat of death entirely from our world.
All good things, I say. I also say: too slow!
Imagine what more could be accomplished if the United States, for example, granted to the Life Extension Foundation or to Alcor the amount of money and social prominence currently reserved for military purposes. Imagine what would happen if every scientist around the world were perhaps able to contribute under a unified institution, working on this vitally important problem of overcoming death, with all the money and time the world's governments could offer at their disposal.
Imagine, also, how many lives are lost every day due to governmental negligence, and war, and poverty, and hunger. What does it profit the world, if we offer to freeze the heads of those who can afford it, while all around us there are people who can't even afford their bread and water?
I have what is, perhaps, to some who are particularly invested, an appalling and frightening proposition: for the moment, we should devote fewer of our resources to cryonics and life extension, and focus on saving the lives of those to whom these technologies are currently beyond even a fevered dream. This means holding the reins of the world, that we might fix the problems inherent in our society. Only when significant steps have been taken in the direction of saving life can we turn our focus toward extending life.
What should the Bayesian Conspiracy do, once it comes to power? It should stop war. It should depose murderous despots, and feed the hungry and wretched who suffered under them. Again: before we work on extending the lives of the healthy and affluent beyond what we've so far achieved, we should, for example, bring the average life expectancy in Africa above the 50-year mark, where it currently sits (according to a 2006 study in the BMJ). This is what will bring about the maximum level of happiness in the world; not cryonics for those who can afford it.
Does this mean that we should stop researching these anti-death technologies? No! Of course not! Consider: even if cryonics drops to, say, priority 3 or 4 under this system, once the Conspiracy comes to power, that will still be far more support than it's currently receiving from world governments. The work will end up progressing at a far faster rate than it currently does.
Some of you may have qualms about this plan of action. You may ask, what about individual choice? What about the peoples' right to choose who leads them? Well, for those of us who live in the United States, at least, this is already a bit of a naïve question: due to color politics, you already do not have much of a choice in who leads you. But that's a matter for another time. Even if you think that dictatorship – even benevolent, rationalist dictatorship – would be inherently morally worse than even the flawed democratic system we enjoy here – a notion that may not even necessarily be the case! – do not worry: there's no reason why world domination need entail dictatorships. In countries where there are democratic systems in place, we will work within the system, placing Conspirators into positions where they can convince the people, via legitimate means, to give them public office. Once we have attained a sufficient level of power over this democratic system, we will effect change, and thence the work will go forth until this victory of rationalist dogma covers all the earth. When there are dictators, they will be removed and replaced with democratic systems... under the initial control of Conspirators, of course, and ideally under their continued control as time passes – but legitimately obtained control.
It is demonstrable that one's level of strength as a rationalist correlates directly with the probability that one will make correct decisions. Therefore, the people who make decisions that affect large numbers of people ought to be those who have the highest level of rationality. In this way we can seek to avoid the many, many, many pitfalls of politics, including the inefficiency which Yudkowsky has again and again railed against. If all the politicians are on the same side, who's to argue?
In fact, even if two rationalists disagree on a particular point (which they shouldn't, but hey, even the best rationalists aren't perfect yet), they'll be able to operate more efficiently than two non-rationalists in the same position. Is the disagreement able to be settled by experiment? If it's important, throw funds at a lab to conduct such an experiment! After all, we're in charge of the money and the scientists. Is it not? Find a compromise that has the maximum expected utility for the constituents. We can do that with a high degree of accuracy; we have access to the pollsters and sociologists, and know about reliable versus unreliable polling methods!
What about non-rationalist aspiring politicians? Well, under an ideal Conspiracy takeover, there would be no such thing. Lessons on politics would include rationality as a basis; graduation from law school would entail induction into the Conspiracy, and access to the truths had therein.
I suppose the biggest question is, is all this realistic? Or is it just an idealist's dream? Well, there's a non-zero probability that the Conspiracy already exists, in which case, I hope that they will consider my proposal... or, even better, I hope that I've correctly deduced and adequately explained the master plan. If the Conspiracy does not currently exist, then if my position is correct, we have a moral obligation to work our hardest on this project.
“But I don't want to be a politician,” you exclaim! “I have no skill with people, and I'd much rather tinker with the Collatz Conjecture at my desk for a few years!” I'm inclined to say that that's just too bad; sacrifices must be made for the common good, and after all, it's often said that anyone who actually wants a political office is by the fact unfit for the position. But in all realism, I'm quite sure that there will be enough room in the Conspiracy for non-politicians. We're all scientists and mathematicians at heart, anyway.
So! Here is our order of business. We must draw up a charter for the Bayesian Conspiracy. We must invent a testing system able to keep a distinction between those who are and are not ready for the Truths the Conspiracy will hold. We must find our strongest Rationalists – via a testing procedure we have not yet come up with – and put them in charge, and subordinate ourselves to them (not blindly, of course! The strength of community, even rationalist community, is in debate!). We must establish schools and structured lesson plans for the purpose of training fresh students; we must also take advantage of those systems which are already in place, and utilize them for (or turn them to) our purposes. I expect to have the infrastructure set up in no more than five years.
At that point, our real work will begin.
[Link] Study on Group Intelligence
Full disclosure: This has already been discussed here, but I see utility in bringing it up again. Mostly because I only heard about it offline.
The Paper:
Some researchers were interested in whether, in the same way that there's a general intelligence g that seems to predict competence in a wide variety of tasks, there is a group intelligence c that could do the same. You can read their paper here.
Their abstract:
Psychologists have repeatedly shown that a single statistical factor—often called “general intelligence”—emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
Basically, groups with higher social sensitivity, equality in conversational turn-taking, and proportion of females are collectively more intelligent. On top of that, those effects trump things like average IQ or even maximum IQ.
I theorize that proportion of females mostly works as a proxy for social sensitivity and turn-taking, and the authors speculate the same.
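As an illustrative sketch (simulated data, not the authors' actual dataset or analysis), here is how a single latent factor like c shows up statistically: when one factor drives performance across tasks, the first principal component of the group-by-task score matrix explains most of the variance.

```python
# Simulate groups whose performance on every task is driven by one
# latent factor plus noise, then show that the first eigenvalue of
# the task correlation matrix dominates, i.e. a single "c factor"
# emerges from the correlations. All numbers here are made up for
# illustration.
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_tasks = 200, 6

c = rng.normal(size=n_groups)                   # latent "c" per group
loadings = rng.uniform(0.6, 0.9, size=n_tasks)  # each task's reliance on c
noise = rng.normal(scale=0.5, size=(n_groups, n_tasks))
scores = np.outer(c, loadings) + noise          # observed task performance

corr = np.corrcoef(scores, rowvar=False)        # task-by-task correlations
eigvals = np.linalg.eigvalsh(corr)[::-1]        # eigenvalues, descending
explained = eigvals / eigvals.sum()
print(f"first factor explains {explained[0]:.0%} of the variance")
```

With loadings this strong, the first component accounts for well over half the variance and towers over the rest, which is the signature the paper reports for c (and, analogously, for g in individuals).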
Some thoughts:
What does this mean for Less Wrong?
The most important part of the study, IMO, is that "social sensitivity" (measured by a test where you try to discern emotional states from someone's eyes) is a much stronger predictor of group intelligence. It probably helps people to gauge other people's comprehension, but based on the fact that people sharing talking time more equally also helps, I would speculate that another chunk of its usefulness comes from being able to tell if other people want to talk, or think that there's something relevant to be said.
One thing that I find interesting in the meatspace meetups is how, in new groups, conversation tends to be dominated by the people who talk the loudest and most insistently. Often, those people are also fairly interesting. However, I prefer the current, older DC group to the newer one, in which speaking time is shared much more equally, even though this means that I don't talk as much. Most other people seem to share similar sentiments, to the point that at one early meetup it was explicitly voted to be true that most people would rather talk more.
Solutions/Proposals:
Anything we should try doing about this? I will hold off on proposing solutions for now, but this section will get filled in sometime.
Making projects happen
Judging by the number of upvotes, Brandon Reinhart's analysis of SIAI's financial filings is valuable to quite a few people. Similar analyses of Alcor and the Cryonics Institute would also be quite valuable. There has been talk of more work on condensing LW content and placing it on the wiki. I'm sure lots of people would like to know about the literature on low-dose aspirin. People seem to want a front page more accessible to newcomers. Will these projects get accomplished? Some of them, but probably fewer than optimal. I think we can do better.
I would like to look for ways to channel group willingness to contribute to a project into focused individual willingness to work on a project.
Observations about the problem space
The following is based on discussions at the Seattle Less Wrong meetup.
Many people would get a moderate amount of benefit from such projects, but only a small number would end up putting in the hard work to make them happen.
The people most enthusiastic about a given project may not be the best people to work on it. Perhaps they have very time-consuming jobs, have a hard time being objective about the topic (e.g. someone who gets especially emotional about cryonics), already have too many other projects, or are intellectually but not emotionally motivated by the project, which can make it difficult to Get Things Done.
Trying to generalize too early is a risk here. Going out and building fancy tools or otherwise trying something elaborate is probably not a good idea at first. Better to try some concrete trials first and learn from those experiences.
Sources of motivation
There are three major potential sources of motivation: money (the unit of caring), social status (Karma, kind words, etc.), and things (pizza, books, cookies, pony pictures).
- Money
- Transfers of money (the unit of caring) are often much more efficient than transfers of other goods.
- Extrinsic rewards (especially money) can reduce intrinsic motivation.
- Large monetary rewards can also make the relationship between project contributors and project sponsors less social.
- Many Less Wrong people are highly paid
- Less likely to be motivated by small monetary rewards
- Have more money to contribute to projects.
- Not all Less Wrong people are highly paid.
- There are services for collecting donations (link).
- Social rewards
- Praise
- Karma
- Social status
- Things
- Pizza, books, cookies, pony pictures
- Social pressure
- requests
- progress monitoring
Different motivators may work better for different kinds of projects. For example, money might be a counterproductive motivator for social projects but a great motivator for setting up a website.
How have others tackled this?
This is a problem others face as well. How do other similar groups and communities ameliorate it?
- Intrinsic motivation
- Conferring social status on those who do valuable work
- Sprints: several people get together in a single place and work together on a project for a couple of days.
- Main draw seems to be Fun
- Frequently used by Python projects
- Competition/bounties (McKinsey survey of prize literature)
- Provides social and/or material rewards
- Sometimes used on LW (link 1, link 2, link 3).
- Seems to work well for some larger open source software projects (link 1, link 2, link 3), though some fail to get off the ground at all.
- Poorly arranged prizes can induce wasted effort
- Judging quality can be a serious issue especially when monetary rewards are involved
- potential for social conflict
- some people are better at dealing with social conflicts than others
- pre-designated arbiters more likely to be trusted than others
Miscellaneous observations
- Working groups or other close contact can increase people's motivation via peer pressure.
- Personally requesting someone work on a project can increase their motivation to do so.
- With certain kinds of motivation you often get people agreeing to work on a project and then getting slightly stuck and delaying it indefinitely. (Patri Friedman has given one reason why this might happen)
- Different incentives might work better/worse for different kinds of projects.
- Monitoring project progress could help motivation (it might also have other benefits, such as knowing when to rethink the project or to find another person to work on it).
- Splitting up a project into a number of small clear tasks that individuals can pick up and complete decreases the costs of working on projects. The very fact of announcing, specifying and taskifying a project can induce interest.
- Open projects (Wikipedia, open source projects) are often primarily worked on by a small group of highly dedicated contributors.
- Want to encourage quality
- sometimes something is better than nothing
- sometimes drafts and large output volume are useful for future work
- People most interested in the results of a project are not always the people best suited to do the project.
- High visibility projects
- Increase interest in working on projects
- Completed projects give social rewards to their completers
- Completed projects serve as templates for future related projects
- Quantifying aggregate interest (both in terms of number and intensity) is useful for deciding what projects are most important
- Aggregating what skills potential project contributors have is useful for determining what projects are possible
In the interest of Holding Off On Proposing Solutions, please take a moment to try to identify features of the problem space that I have not mentioned before reading the comments. Please mention any features you notice as well as any potential solutions or parts of solutions in the comments. I have some ideas, and I will propose them in the comments.
Schneier talks about The Dishonest Minority [Link]
Evolution. Morality. Strategy. Security/Cryptography. This hits so many topics of interest, I can't imagine it not being discussed here. Bruce Schneier blogs about his book-in-progress, The Dishonest Minority:
Humans evolved along this path. The basic mechanism can be modeled simply. It is in our collective group interest for everyone to cooperate. It is in any given individual's short-term self interest not to cooperate: to defect, in game theory terms. But if everyone defects, society falls apart. To ensure widespread cooperation and minimal defection, we collectively implement a variety of societal security systems.
I am somewhat reminded of Robin Hanson's Homo Hypocritus writings from the above, although it is not the same. Schneier says that the book is basically a first draft at this point, and might still change quite a bit. Some of the comments focus on whether "dishonest" is actually the best term to use for defecting from social norms.
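The cooperate/defect tension Schneier describes is the standard Prisoner's Dilemma structure from game theory. A minimal sketch (the payoff numbers here are illustrative, not from the book):

```python
# Illustrative Prisoner's Dilemma payoffs: (your payoff, opponent's payoff).
# The defining inequality T > R > P > S makes defection individually
# dominant while mutual defection is collectively worse than cooperation.
R, S, T, P = 3, 0, 5, 1  # reward, sucker's payoff, temptation, punishment
payoff = {
    ("C", "C"): (R, R),
    ("C", "D"): (S, T),
    ("D", "C"): (T, S),
    ("D", "D"): (P, P),
}

# Whatever the opponent does, you earn more by defecting...
assert payoff[("D", "C")][0] > payoff[("C", "C")][0]
assert payoff[("D", "D")][0] > payoff[("C", "D")][0]
# ...yet if everyone follows that logic, total welfare falls.
assert sum(payoff[("D", "D")]) < sum(payoff[("C", "C")])
```

Schneier's "societal security systems" can be read as mechanisms that change these payoffs (fines, reputation loss) so that defection stops being the dominant move.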
Terrorist leaders are not about Terror
From "Academics Doubt Impact of Osama bin Laden’s Death":
"...Fifty-three percent of the terrorist organizations that suffered such a violent leadership loss fell apart — which sounds impressive until you discover that 70 percent of groups who did not deal with an assassination no longer exist.
Further crunching of the numbers revealed that leadership decapitation becomes more counterproductive the older the group is. The difference in collapse rates (between groups that did and did not have a leader assassinated) is fairly small among organizations less than 20 years old but quite large for those more than 20 years in age, and even larger for those that have been around more than 30 years.
Assassination of a leader does seem to negatively impact smaller terrorist groups: The data shows organizations with fewer than 500 members are more likely to collapse if they suffer such a leadership loss. But organizations with more than 500 members are actually more likely to survive after an assassination, making this strategy “highly counterproductive for larger groups,” Jordan writes."
See also Lost Purposes, The Importance of Goodhart's Law, & Faster than Science.
Anchoring for coordination
While reading Schelling's The Strategy of Conflict, I realized a useful social purpose for anchoring.
First some background.
You and your husband/wife lose each other in the mall. The two of you have not before this agreed on a place to meet in case you lose each other. Still, there is a good chance both of you would decide to meet up at some salient/prominent place, say the main information desk. This is coordination without communication and with aligned interests.
You and your partner/rival are independently given the choice of "Heads" or "Tails". Neither knows the choice of the other. If both of you choose "Heads", you'll get $1 and your opponent $3. If both of you choose "Tails", you'll get $3 and your opponent $1. Otherwise neither of you gets anything. Most pairs would coordinate on "Heads" (again because of convention/salience), with the player who prefers "Tails" not insisting on it, because she has no way to coordinate (to arrange compensation, say) with the other player. This is coordination without communication and with some conflict of interest. But not total conflict, as both would rather coordinate than get no payment at all.
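The structure of this Heads/Tails game can be sketched in a few lines. Both matched outcomes are stable (neither player gains by switching alone), which is exactly why something outside the payoffs, like salience, is needed to pick between them. The payoff numbers follow the example above; the helper names are mine:

```python
# Payoffs (your payoff, opponent's payoff) in Schelling's coordination game.
payoffs = {
    ("Heads", "Heads"): (1, 3),
    ("Tails", "Tails"): (3, 1),
    ("Heads", "Tails"): (0, 0),
    ("Tails", "Heads"): (0, 0),
}

def flip(choice):
    return "Tails" if choice == "Heads" else "Heads"

def is_equilibrium(mine, theirs):
    """True if neither player profits from unilaterally switching."""
    my_pay, their_pay = payoffs[(mine, theirs)]
    if payoffs[(flip(mine), theirs)][0] > my_pay:
        return False  # I'd rather switch
    if payoffs[(mine, flip(theirs))][1] > their_pay:
        return False  # they'd rather switch
    return True

equilibria = [pair for pair in payoffs if is_equilibrium(*pair)]
# Both (Heads, Heads) and (Tails, Tails) survive; the payoffs alone
# cannot say which one the pair will land on.
```

With two equilibria and no communication, convention ("Heads" comes first) does the work the math cannot.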
Schelling then goes on to consider explicit bargaining situations where there is of course actual communication between the parties deciding on a mutually acceptable outcome. But he notes that even here, "focal points" seem to exert a huge influence. Furthermore, these focal points are often quite partial towards the interests of a certain party. Yet often, the 'losing' party still accepts this less than stellar bargaining outcome. For example, a nation conceding some territory because the only prominent landmark permitting a stable division was some river partial to the interests of the other nation.
He then proceeds to show how explicit bargaining is not so different from the Heads/Tails coordination game. In a bargaining situation, both sides would rather reach agreement than none at all. And there is a range of possible points of agreement, where the 'losing' party would rather concede than forfeit any agreement. But how to decide among these points of potential agreement? A stable point of agreement for an outcome of the bargaining would be one in which neither expects the other to make further concessions. But a party decides if they would concede based on their expectations of the other party's likelihood of conceding. And so it goes back and forth. Hence, Schelling argues, even in explicit bargaining, focal points play an important role in coordinating expectations, and in ensuring that an agreement is reached, to the mutual (if lopsided) benefit of both parties.
This is where anchoring comes in. The proposal is that, sure, by letting yourself be influenced by a subpar anchor, you are forgoing a much better bargaining outcome for yourself. But this is better than no agreement at all! If you accepted a price from a merchant who proposed a high price to set an anchor, it was only because this was a price you were willing to accept before you started bargaining. And if instead the lowest price the merchant would allow was too high, you would simply have rejected the transaction, and perhaps found a better offer from his competitors.
Anchoring is, of course, not limited to such explicit bargaining situations. But then, neither is the principle of "focal points"! Throughout life there are many situations where participants share some interests and diverge on others, and where bargaining is not entirely explicit. To coordinate on these at all, we require the ability to respond to anchors. Of course, this creates an incentive to manipulate anchors, and subsequently an incentive to be resistant to such manipulation. But resistance is not total non-susceptibility! If one did not respond to anchors at all, one would unnecessarily forgo many mutually beneficial bargaining outcomes, to one's own detriment.
[Link] Space Stasis: What the strange persistence of rockets can teach us about innovation
http://www.slate.com/id/2283469/pagenum/all/
It's a long article, but the most relevant stuff is at the end, about how we're pretty much locked into the existing rocket technologies:
That is not, however, the most important way that rockets generate lock-in. In order to understand this, it's necessary to know a few things about (1) the physical environment of rocket launches, (2) the economics of the industry, and (3) the way it is regulated, or, to be more precise, the way it interacts with government.
1. The designer of a rocket payload, such as a communications satellite, has much more to worry about than merely limiting the payload to a given size, shape, and weight. The payload must be designed to survive the launch and the transition through various atmospheric regimes into outer space. As we all know from watching astronauts on movies and TV, there will be acceleration forces, relatively modest at the beginning, but building to much higher values as fuel is burned and the rocket becomes lighter relative to its thrust. At some moments, during stage separation, the acceleration may even reverse direction for a few moments as one set of engines stops supplying thrust and atmospheric resistance slows the vehicle down. Rockets produce intense vibration over a wide range of frequencies; at the upper end of that range we would identify this as noise (noise loud enough to cause physical destruction of delicate objects), at the lower range, violent shaking. Explosive bolts send violent shocks through the vehicle's structure. During the passage through the ionosphere, the air itself becomes conductive and can short out electrical gear. Enclosed spaces must be vented so that pressure doesn't build up in them as the vehicle passes into vacuum. Once the satellite has reached orbit, sharp and intense variations in temperature as it passes in and out of the earth's shadow can cause problems if not anticipated in the engineering design. Some of these hazards are common to all things that go into space, but many are unique to rockets.
2. If satellites and launches were cheap, a more easygoing attitude toward their design and construction might prevail. But in general they are, pound for pound, among the most expensive objects ever made even before millions of dollars are spent launching them into orbit. Relatively mass-produced satellites, such as those in the Iridium and Orbcomm constellations, cost on the order of $10,000/lb. The communications birds in geostationary orbit—the ones used for satellite television, e.g.—are two to five times as expensive, and ambitious scientific/defense payloads are often $100,000 per pound. Comsats can only be packed so close together in orbit, which means that there is a limited number of available slots—this makes their owners want to pack as much capability as possible into each bird, helping jack up the cost. Once they are up in orbit, comsats generate huge amounts of cash for their owners, which means that any delays in launching them are terribly expensive. Rockets of the old school aren't perfect—they have their share of failures—but they have enough of a track record that it's possible to buy launch insurance. The importance of this fact cannot be overestimated. Every space entrepreneur who dreams of constructing a better mousetrap sooner or later crunches into the sickening realization that, even if the new invention achieved perfect technical success, it would fail as a business proposition simply because the customers wouldn't be able to purchase launch insurance.
3. Rockets—at least, the kinds that are destined for orbit, which is what we are talking about here—don't go straight up into the air. They mostly go horizontally, since their purpose is to generate horizontal velocities so high that centrifugal force counteracts gravity. The initial launch is vertical because the thing needs to get off the pad and out of the dense lower atmosphere, but shortly afterwards it bends its trajectory sharply downrange and begins to accelerate nearly horizontally. Consequently, all rockets destined for orbit will pass over large swathes of the earth's surface during the 10 minutes or so that their engines are burning. This produces regulatory and legal complications that go deep into the realm of the absurd. Existing rockets, and the launch pads around which they have been designed, have been grandfathered in. Space entrepreneurs must either find a way to negotiate the legal minefield from scratch or else pay high fees to use the existing facilities. While some of these regulatory complications can be reduced by going outside of the developed world, this introduces a whole new set of complications since space technology is regulated as armaments, and this imposes strict limits on the ways in which American rocket scientists can collaborate with foreigners. Moreover, the rocket industry's status as a colossal government-funded program with seemingly eternal lifespan has led to a situation in which its myriad contractors and suppliers are distributed over the largest possible number of congressional districts. Anyone who has witnessed Congress in action can well imagine the consequences of giving it control over a difficult scientific and technological program.
Dr. Jordin Kare, a physicist and space launch expert to whom I am indebted for some of the details mentioned above, visualizes the result as a triangular feedback loop joining big expensive launch systems; complex, expensive, long-life satellites; and few launch opportunities. To this could be added any number of cultural factors (the engineers populating the aerospace industry are heavily invested in the current way of doing things); the insurance and regulatory factors mentioned above; market inelasticity (cutting launch cost in half wouldn't make much of a difference); and even accounting practices (how do you amortize the nonrecoverable expenses of an innovative program over a sufficiently large number of future launches?).
To employ a commonly used metaphor, our current proficiency in rocket-building is the result of a hill-climbing approach; we started at one place on the technological landscape—which must be considered a random pick, given that it was chosen for dubious reasons by a maniac—and climbed the hill from there, looking for small steps that could be taken to increase the size and efficiency of the device. Sixty years and a couple of trillion dollars later, we have reached a place that is infinitesimally close to the top of that hill. Rockets are as close to perfect as they're ever going to get. For a few more billion dollars we might be able to achieve a microscopic improvement in efficiency or reliability, but to make any game-changing improvements is not merely expensive; it's a physical impossibility.
There is no shortage of proposals for radically innovative space launch schemes that, if they worked, would get us across the valley to other hilltops considerably higher than the one we are standing on now—high enough to bring the cost and risk of space launch down to the point where fundamentally new things could begin happening in outer space. But we are not making any serious effort as a society to cross those valleys. It is not clear why.
A temptingly simple explanation is that we are decadent and tired. But none of the bright young up-and-coming economies seem to be interested in anything besides aping what the United States and the USSR did years ago. We may, in other words, need to look beyond strictly U.S.-centric explanations for such failures of imagination and initiative. It might simply be that there is something in the nature of modern global capitalism that is holding us back. Which might be a good thing, if it's an alternative to the crazy schemes of vicious dictators. Admittedly, there are many who feel a deep antipathy for expenditure of money and brainpower on space travel when, as they never tire of reminding us, there are so many problems to be solved on earth. So if space launch were the only area in which this phenomenon was observable, it would be of concern only to space enthusiasts. But the endless BP oil spill of 2010 highlighted any number of ways in which the phenomena of path dependency and lock-in have trapped our energy industry on a hilltop from which we can gaze longingly across not-so-deep valleys to much higher and sunnier peaks in the not-so-great distance. Those are places we need to go if we are not to end up as the Ottoman Empire of the 21st century, and yet in spite of all of the lip service that is paid to innovation in such areas, it frequently seems as though we are trapped in a collective stasis. As described above, regulation is only one culprit; at least equal blame may be placed on engineering and management culture, insurance, Congress, and even accounting practices. But those who do concern themselves with the formal regulation of "technology" might wish to worry less about possible negative effects of innovation and more about the damage being done to our environment and our prosperity by the mid-20th-century technologies that no sane and responsible person would propose today, but in which we remain trapped by mysterious and ineffable forces.
Games People Play
Game theory is great if you know what game you're playing. All this talk of Diplomacy reminds me of this memory of Adam Cadre:
I remember that in my ninth grade history class, the teacher had us play a game that was supposed to demonstrate how shifting alliances work. He divided the class into seven groups — dubbed Britain, France, Germany, Belgium, Italy, Austria and Russia — and, every few minutes, declared a "battle" between two of the countries. Then there was a negotiation period, during which we all were supposed to walk around the room making deals. Whichever warring country collected the most allies would win the battle and a certain number of points to divvy up with its allies. The idea, I think, was that countries in a battle would try to win over the wavering countries by promising them extra points to jump aboard.
That's not how it worked in practice. Three or four guys — the same ones who had gotten themselves elected to ASB, the student government — decided among themselves during the first negotiation period what the outcome would be, and told people whom to vote for. And the others just shrugged and did as they were told. The ASB guys had decided that Germany would win, followed by France, Britain, Belgium, Austria, Italy and Russia. The first battle was France vs. Russia. Germany and Britain both signed up on the French side. Austria and Italy, realizing that if they just went along with the ASB plan they'd come in 5th and 6th, joined up with Russia. That left it up to Belgium. I was on team Belgium. I voted to give our vote to the Russian side, because that way at least we weren't doomed to come in 4th. And no one else on my team went along. They meekly gave their points to the French side. (As I recall, Josh Lorton was particularly adamant about this. I guess he thought it would make the ASB guys like him.) After that, there was no contest. Britain vs. Austria? 6-1, Britain. Germany vs. Belgium? 6-1, Germany. (And we could have beaten them if we'd just formed a bloc with the other three losers!) The teacher noticed that Germany and France were always on the same side and declared Germany vs. France. Outcome: 6-1, Germany.
The ASB guys were able to just impose their will on a class of 40 students. No carrots, no sticks, just "here's what will happen" and everyone else nodding. I have no idea how that works. I do recall that because they were in student government, for fourth period they had to take a class called Leadership. From what I could tell they just spent the class playing volleyball out in the quad. But I guess they were learning something!
What happened? Why did Italy and Russia fall into line and abandon Austria in the second battle?
This utterly failed to demonstrate the "shifting alliances" that Adam thought the teacher wanted to show. Does this happen every year?
Yes, the students were coerced into "playing" this game, but elsewhere he describes the same thing happening in games that people choose to play. Moreover, he tells the first story to illustrate his perception of politics.