
Details of Taskforces; or, Cooperate Now

15 paulfchristiano 05 April 2011 05:16PM

Recently I've spent a lot of time thinking about what exactly I should be doing with my life. I'm lucky enough to be in an environment where I can occasionally have productive conversations about the question with smart peers, but I suspect I would think much faster if I spent more of my time with a community grappling with the same issues. Moreover, I expect I could be more productive if I spent time with others trying to get similar things done, not to mention the benefits of explicit collaboration.

I would like to organize a nonstandard sort of meetup: regular gatherings with people who are dealing with the question "How do I do the most good in the world?" focused explicitly on answering the question and acting on the answer. If I could find a group with which I am socially compatible, I might spend a large part of my time working with them. I am going to use the term "taskforce" because I don't know of a better one. It is vaguely related to but quite different from the potential taskforces Eliezer discusses.

Starting such a taskforce requires making many decisions.

Size:

I believe that even two people who think through issues together and hold each other accountable are significantly more effective than two people working independently. At the other limit, eventually the addition of individuals doesn't increase the effectiveness of the group and increases coordination costs. Based on a purely intuitive feeling for group dynamics, I would feel most comfortable with a group of 5-6 until I knew of a better scheme for productively organizing large groups of rationalists (at which point I would want to grow as large as that scheme could support). I suspect in practice there will be huge constraints based on interest and commitment; I don't think this is a terminal problem, because there are probably significant gains even for 2-4 people, and I don't think it's a permanent one, because I am optimistic about our ability as a community to grow rapidly.

Frequency:

Where I am right now in life, I believe that thinking about this question and gathering relevant evidence is the most important thing for me to be doing. I would be comfortable spending several hours several times a week working with a group I got along with. In practice, scheduling issues and limited interest will constrain this, so I would like to invest as much time as schedules and interests allow. I think the best plan is to allow and expect self-modification: make the choice of time-commitment an explicit decision controlled by the group. Meeting once a week seems like a fair default which can be supported by most schedules.

Concreteness:

There are three levels of concreteness I can imagine for the initial goals of a taskforce:

  • The taskforce is created with a particular project or a small collection of possible projects in mind. Although the possibility of abandoning a project is available (like all other changes), having a strong concrete focus may help a great deal with maintaining initial enthusiasm, attracting people, and fostering a sense of having a real effect on the world rather than empty theorizing. The risk is that, while I suspect many of us have many good ideas, deciding what projects are best is really an important part of why I care about interacting with other people. Just starting something may be the quickest way to get a sense of what is most important, but it may also slow progress down significantly.
  • The taskforce is created with the goal of converging to a practical project quickly. The discussion is of the form "How should we be doing the most good right now: what project are we equipped to undertake given our current resources?" While not quite as focused as the first possibility, it does at least keep the conversation grounded.
  • The taskforce is created with the most open-ended possible goal. Helping its members decide how to spend their time in the coming years is just as important as coming up with a project to work on next week. A particular project is adopted only if the value of that project exceeds the value of further deliberation, or if working on a project is a good way to gather evidence or develop important skills.

I am inclined towards the most abstract level if it is possible to get enough support, since it is always capable of descending to either of the others. I think the most important question is how much confidence you have in a group of rationalists to understand the effectiveness of their own collective behavior and modify appropriately. I have a great deal, especially when the same group meets repeatedly and individuals have time to think carefully in between meetings.

Metaness:

A group may spend a long time discussing efficient structures for organizing, communicating, gathering information, making decisions, etc. Alternatively, a group may avoid these issues in favor of actually doing things--even if by doing things we only mean discussing the issues the group was created to discuss. Most groups I have been a part of have very much tried to do things instead of refining their own processes.

My best plan is to begin by working on non-meta issues. However, the ability of groups of rationalists to efficiently deliberate is an important one to develop, so it is worth paying a lot of attention to anything that reduces effectiveness. In particular, I would support very long digressions to deal with very minor problems as long as they are actually problems. Our experiences can be shared, any question answered definitively remains answered definitively, and any evidence gathered is there for anyone else who wants to see it. A procedural digression should end when it is no longer the best use of time--not because of a desire to keep getting things done for the sake of getting things done. Improving our rationality as individuals should be treated similarly; I am no longer interested in setting out to improve my rationality for the sake of becoming more rational, but I am interested in looking very carefully for failures of rationality that actually impact my effectiveness.

I can see how this approach might be dangerous; but it has the great advantage of being able to rescue itself from failure, by correctly noticing that entertaining procedural digressions is counter-productive. In some sense this is universally true: a system which does not limit self-examination can at least in principle recover from arbitrary failures. Moreover, it offers the prospect of refining the rationality of the group, which in turn improves the group's ability to select and implement efficient structures, which closes a feedback loop whose limit may be an unusually effective group.

Homogeneity:

A homogeneous taskforce is composed of members who face similar questions in their own lives, and who are more likely to agree about which issues require discussion and which projects they could work profitably on. An inhomogeneous taskforce is composed of members with a greater variety of perspectives, who are more likely to have complementary information and to avoid failures. In general, I believe that working for the common good involves enough questions of general importance (i.e., of importance to people in very different positions) that the benefits of inhomogeneity seem greater than the costs.

In practice, this issue is probably forced for now. Whoever is interested enough to participate will participate (and should be encouraged to participate), until there is enough interest that groups can form selectively.

Atmosphere:

In principle the atmosphere of a community is difficult to control. But the content of discussion and structure of expectations prior to the first meeting have a significant effect on the atmosphere. Intuitively, I expect there is a significant risk of a group falling apart immediately for a variety of reasons: social incompatibility, apparent uselessness, inability to maintain initial enthusiasm based on unrealistic expectations, etc. Forcing even a tiny community into existence is hard (though I suspect not impossible).

I think the most important part of the atmosphere of a community is its support for criticism, and willingness to submit beliefs to criticism. There is a sense (articulated by Orson Scott Card somewhere at some point) that you maintain status by never showing your full hand; by never admitting "That's it. That's all I have. Now you can help me decide whether I am right or wrong." This attitude is very dangerous when coupled with normal status-seeking, because it's not clear to me that it is possible to recover from it. I don't believe that having rational members is enough to avoid this failure.

I don't have any other observations, except that factors controlling atmosphere should be noted when trying to understand the effectiveness of particular efforts to start communities of any sort, even though such factors are difficult to measure or describe.

Finding People:

The internet is a good place to find people, but there is only a weak sense of personal responsibility throughout much of it, and committing to dealing with people you don't know well is hard/unwise. The real world is a much harder place to find people, but conversations in person quickly establish a sense of personal responsibility and can be used to easily estimate social compatibility. Most people are strangers, and the set of people who could possibly be convinced to work with a taskforce is extremely sparse. On the other hand, your chances of convincing an acquaintance to engage in an involved project with you seem to be way higher.

My hope is that LW is large enough, and unusual enough, that it may be possible to start something just by exchanging cheap talk here. At least, I think this is possible and therefore worth acting on, since alternative states of the world will require more time to get something like this rolling. Another approach is to use the internet to orchestrate low-key meetings, and then bootstrap up from modest personal engagement to something more involved. Another is to try and use the internet to develop a community which can better support/encourage the desired behavior. Of course there are approaches that don't go through the internet, but those approaches will be much more difficult and I would like to explore easy possibilities first.

Recovery from Failure:

I can basically guarantee that if anything comes of my desire, it will include at least one failure. The real cost of failure is extremely small. My fear, based on experience, is that every time an effort at social organization fails it significantly decreases enthusiasm for similar efforts in the future. My only response to this fear is: don't be too optimistic, and don't be too pessimistic. Don't stake too much of your hope on the next try, but don't assume the next try will fail just because the last one did. In short: be rational.


Conclusion:

There are more logistical issues, many reasons a taskforce might fail, and many reasons it might not be worth the effort. But I believe I can do much more good in the future than I have done in the past, and that part of that will involve more effectively exploiting the fact that I am not alone as a rationalist. Even if the only conclusion of a taskforce is to disband itself, I would like to give it a shot.

As groups succeed or fail, different answers to these questions can be tested. My initial impulse in favor of starting abstractly and self-modifying towards concreteness can be replaced by emulating the success of other groups. Of course, this is an optimistic vision: for now, I am focused on getting one group to work once.

I welcome thoughts on other high-level issues, criticism of my beliefs, or (optimistically) discussions/prioritization of particular logistical issues. But right now I would mostly like to gauge interest. What arguments could convince you that such a taskforce would be useful / what uncertainties would have to be resolved? What arguments could convince you to participate? Under what conditions would you be likely to participate? Where do you live?

I am in Cambridge, am willing to travel anywhere in the Boston area, need no additional arguments to convince me that such a taskforce would be useful, and would participate in any group I thought had a reasonable chance of moderate success.

A Player of Games

15 Larks 23 September 2010 10:52PM

Earlier today I had an idea for a meta-game a group of people could play. It’d be ideal if you lived in an intentional community, were at a university with a games society, or lived somewhere with regular Less Wrong Meetups.

Each time, you would find a new game. Each of you would then study the rules for half an hour and strategise, and then you’d play it, once. Afterwards, compare thoughts on strategies and meta-strategies. If you haven’t played Imperialism, try that. If you’ve never tried out Martin Gardner’s games, try them. If you’ve never played Phutball, give it a go.

It should help teach us to understand new situations quickly, look for workable exploits, accurately model other people, and compute Nash equilibria. Obviously, be careful not to end up just spending your life playing games; the aim isn't to become good at playing games, it's to become good at learning to play games - hopefully including the great game of life.
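For the "compute Nash equilibria" part, a minimal sketch of what that means in practice: enumerating the pure-strategy Nash equilibria of a two-player game given as payoff matrices. The example game (a Prisoner's Dilemma) is just an illustrative choice, not one from the post.

```python
# A minimal sketch: find all pure-strategy Nash equilibria of a
# two-player game, i.e. cells where neither player can gain by
# unilaterally deviating.

def pure_nash_equilibria(payoff_a, payoff_b):
    """payoff_a/payoff_b are the row and column players' payoff
    matrices (lists of lists of equal shape)."""
    equilibria = []
    rows, cols = len(payoff_a), len(payoff_a[0])
    for r in range(rows):
        for c in range(cols):
            # Row player: is row r a best response to column c?
            row_best = all(payoff_a[r][c] >= payoff_a[r2][c] for r2 in range(rows))
            # Column player: is column c a best response to row r?
            col_best = all(payoff_b[r][c] >= payoff_b[r][c2] for c2 in range(cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs
print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection
```

Brute-force enumeration like this only covers pure strategies; mixed equilibria (guaranteed to exist by Nash's theorem) need more machinery, but for sizing up an unfamiliar board game in half an hour, the pure-strategy check is usually the relevant one.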

However, it’s important that no-one in the group know the rules beforehand, which makes finding the new games a little harder. On the plus side, it doesn’t matter that the games are well-balanced: if the world is mad, we should be looking for exploits in real life.

It would be really helpful if people who know of good games suggested them: a name, possibly some formal specifications (number of players, average length of a game), and some way of accessing the rules. If you only have the rules in a text file, please rot13 them, and likewise for any discussion of strategy.

Med Patient Social Networks Are Better Scientific Institutions

37 Liron 19 February 2010 08:11AM

When you're suffering from a life-changing illness, where do you find information about its likely progression? How do you decide among treatment options?

You don't want to rely on studies in medical journals because their conclusion-drawing methodologies are haphazard. You'll be better off getting your prognosis and treatment decisions from a social networking site: PatientsLikeMe.com.

PatientsLikeMe.com lets patients with similar illnesses compare symptoms, treatments and outcomes. As Jamie Heywood at TEDMED 2009 explains, this represents an enormous leap forward in the scope and methodology of clinical trials. I highly recommend his excellent talk, and I will paraphrase part of it below.


How Much Should We Care What the Founding Fathers Thought About Anything?

-3 David_J_Balan 11 February 2010 12:38AM

A while back I saw an interesting discussion between U.S. Supreme Court Justices Stephen Breyer and Antonin Scalia. Scalia is well known for arguing that the way to deal with Constitutional questions is to use the plain meaning of the words in the Constitutional text as they would have been understood at the time and place they were written.* Any other approach, he argues, would amount to nothing more than an unelected judge taking his or her personal political and moral views and making them into the highest law of the land. In his view if a judge is not taking the answer out of the text, then that judge must be putting the answer into the text, and no judge should be allowed to do that.** One illustrative example that comes up in the exchange is the question of whether and when it's OK to cite foreign law in cases involving whether a particular punishment is "Cruel and Unusual" and hence unconstitutional. In Scalia's view, the right way to approach the question would be to try as best one could to figure out what was meant by the words "cruel" and "unusual" in 18th century England, and what contemporary foreign courts have to say cannot possibly inform that question. He also opposes (though somewhat less vigorously) the idea that decisions ought to take into account changes over time in what is considered cruel and unusual in America: he thinks that if people have updated their opinions about such matters, they are free to get their political representatives to pass new laws or to amend the Constitution***, but short of that it is simply not the judge's job to take that sort of thing into account.


Morality and International Humanitarian Law

2 David_J_Balan 30 November 2009 03:27AM

International humanitarian law proscribes certain actions in war, particularly actions that harm non-combatants. On a strict reading of these laws (see what Richard Goldstone said in his debate with Dore Gold at Brandeis University here and see what Matthew Yglesias had to say here), these actions are prohibited regardless of the justice of the war itself: there are certain things that you are just not allowed to do, no matter what. The natural response of any warring party accused of violating humanitarian law and confronted with this argument (aside from simply denying having done the things they are accused of doing) is to insist that their actions in the war cannot be judged outside the context that led to them going to war in the first place. They are the aggrieved party, they are in the right, and they did what they needed to do to defend themselves. Any law or law enforcer who fails to understand this critical distinction between the good guys and the bad guys is at best hopelessly naive and at worst actively evil.

What to make of this response? On the one hand, the position taken by Goldstone and Yglesias can't strictly be morally right. No one really believes that moral obligations in a war are completely independent of whatever caused the war in the first place. For example, it can't but be the case that the set of morally acceptable actions if you are defending yourself against annihilation is different from the set of morally acceptable actions if you (justifiably) take offensive action in response to some relatively minor provocation.


Bayesians vs. Barbarians

51 Eliezer_Yudkowsky 14 April 2009 11:45PM

Previously in series: Collective Apathy and the Internet
Followup to: Helpless Individuals

Previously:

Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.

Now there's a certain viewpoint on "rationality" or "rationalism" which would say something like this:

"Obviously, the rationalists will lose.  The Barbarians believe in an afterlife where they'll be rewarded for courage; so they'll throw themselves into battle without hesitation or remorse.  Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition.  They'll believe in each other's goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group.  Meanwhile, the rationalists will realize that there's no conceivable reward to be had from dying in battle; they'll wish that others would fight, but not want to fight themselves.  Even if they can find soldiers, their civilians won't be as cooperative:  So long as any one sausage almost certainly doesn't lead to the collapse of the war effort, they'll want to keep that sausage for themselves, and so not contribute as much as they could.  No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won't be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun.  In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would."


Collective Apathy and the Internet

29 Eliezer_Yudkowsky 14 April 2009 12:02AM

Previously in series: Beware of Other-Optimizing
Followup to: Bystander Apathy

Yesterday I covered the bystander effect, aka bystander apathy: given a fixed problem situation, a group of bystanders is actually less likely to act than a single bystander.  The standard explanation for this result is in terms of pluralistic ignorance (if it's not clear whether the situation is an emergency, each person tries to look calm while darting their eyes at the other bystanders, and sees other people looking calm) and diffusion of responsibility (everyone hopes that someone else will be first to act; being part of a crowd diminishes the individual pressure to the point where no one acts).

Which may be a symptom of our hunter-gatherer coordination mechanisms being defeated by modern conditions.  You didn't usually form task-forces with strangers back in the ancestral environment; it was mostly people you knew.  And in fact, when all the subjects know each other, the bystander effect diminishes.

So I know this is an amazing and revolutionary observation, and I hope that I don't kill any readers outright from shock by saying this: but people seem to have a hard time reacting constructively to problems encountered over the Internet.

Perhaps because our innate coordination instincts are not tuned for:

  • Being part of a group of strangers.  (When all subjects know each other, the bystander effect diminishes.)
  • Being part of a group of unknown size, of strangers of unknown identity.
  • Not being in physical contact (or visual contact); not being able to exchange meaningful glances.
  • Not communicating in real time.
  • Not being much beholden to each other for other forms of help; not being codependent on the group you're in.
  • Being shielded from reputational damage, or the fear of reputational damage, by your own apparent anonymity; no one is visibly looking at you, before whom your reputation might suffer from inaction.
  • Being part of a large collective of other inactives; no one will single out you to blame.
  • Not hearing a voiced plea for help.

Toxic Truth

12 MichaelHoward 11 April 2009 11:25AM

For those who haven't heard about this yet, I thought this would be a good way to show the potentially insidious effect of biased, one-sided analysis and presentation of evidence under ulterior motives, and the importance of seeking out counter-arguments before accepting a point, even when the evidence being presented to support that point is true.

"[DHMO] has been a part of nature longer than we have; what gives us the right to eliminate it?" - Pro-DHMO web site.

DHMO (hydroxilic acid), commonly found in excised tumors and lesions of terminal lung and throat cancer patients, is a compound known to occur in second hand tobacco smoke. Prolonged exposure in solid form causes severe tissue damage, and a proven link has been established between inhalation of DHMO (even in small quantities) and several deaths, including many young children whose parents were heavy smokers.

It's also used as a solvent during the synthesis of cocaine, in certain forms of particularly cruel and unnecessary animal research, and has been traced to the distribution process of several cases of pesticides causing genetic damage and birth defects. But there are huge political and financial incentives to continue using the compound.

There have been efforts across the world to ban DHMO - an Australian MP has announced a campaign to ban it internationally - but little progress. Several online petitions to the British prime minister on this subject have been rejected. The executive director of the public body that operates Louisville Waterfront Park was actually criticised for posting warning signs on a public fountain that was found to contain DHMO. Jacqui Dean, a New Zealand National Party MP, was similarly told "I seriously doubt that the Expert Advisory Committee on Drugs would want to spend any time evaluating that substance".

If you haven't guessed why, re-read my first sentence then click here.

HT the Coalition to Ban Dihydrogen Monoxide.

[Edit to clarify point:] I'm not saying truth is in any way bad. Truth rocks. I'm reminding you truth is *not sufficient*. When it's given treacherously or used recklessly, truth is as toxic as hydroxilic acid.

Follow-up to: Comment in The Forbidden Post.

Money: The Unit of Caring

95 Eliezer_Yudkowsky 31 March 2009 12:35PM

Previously in series: Helpless Individuals

Steve Omohundro has suggested a folk theorem to the effect that, within the interior of any approximately rational, self-modifying agent, the marginal benefit of investing additional resources in anything ought to be about equal.  Or, to put it a bit more exactly, shifting a unit of resource between any two tasks should produce no increase in expected utility, relative to the agent's utility function and its probabilistic expectations about its own algorithms.

This resource balance principle implies that—over a very wide range of approximately rational systems, including even the interior of a self-modifying mind—there will exist some common currency of expected utilons, by which everything worth doing can be measured.
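As a sketch (notation mine, not from the post): writing \(U\) for the agent's expected utility and \(r_i\) for the resources allocated to task \(i\), the balance principle says that at an interior optimum

```latex
\frac{\partial U}{\partial r_i} \;=\; \frac{\partial U}{\partial r_j}
\qquad \text{for all tasks } i, j,
```

so shifting a marginal unit of resource from any task to any other leaves expected utility unchanged - which is exactly what makes a single common currency of expected utilons possible.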

In our society, this common currency of expected utilons is called "money".  It is the measure of how much society cares about something.

This is a brutal yet obvious point, which many are motivated to deny.

With this audience, I hope, I can simply state it and move on.  It's not as if you thought "society" was intelligent, benevolent, and sane up until this point, right?

I say this to make a certain point held in common across many good causes.  Any charitable institution you've ever had a kind word for, certainly wishes you would appreciate this point, whether or not they've ever said anything out loud.  For I have listened to others in the nonprofit world, and I know that I am not speaking only for myself here...


Helpless Individuals

42 Eliezer_Yudkowsky 30 March 2009 11:10AM

Previously in series: Rationality: Common Interest of Many Causes

When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all.

Well—there are governments with specialized militaries and police, which can extract taxes.  That's a non-ancestral idiom which dates back to the invention of sedentary agriculture and extractible surpluses; humanity is still struggling to deal with it.

There are corporations in which the flow of money is controlled by centralized management, a non-ancestral idiom dating back to the invention of large-scale trade and professional specialization.

And in a world with large populations and close contact, memes evolve far more virulent than the average case of the ancestral environment; memes that wield threats of damnation, promises of heaven, and professional priest classes to transmit them.

But by and large, the answer to the question "How do large institutions survive?" is "They don't!"  The vast majority of large modern-day institutions—some of them extremely vital to the functioning of our complex civilization—simply fail to exist in the first place.

I first realized this as a result of grasping how Science gets funded: namely, not by individual donations.

