The Last Days of the Singularity Challenge
From Michael Anissimov on the Singularity Institute blog:
Thanks to generous contributions by our donors, we are only $11,840 away from fulfilling our $100,000 goal for the 2010 Singularity Research Challenge. For every dollar you contribute to SIAI, another dollar is contributed by our matching donors, who have pledged to match all contributions made before February 28th up to $100,000. That means that this Sunday is your final chance to donate for maximum impact.
Funds from the challenge campaign will be used to support all SIAI activities: our core staff, the Singularity Summit, the Visiting Fellows program, and more. Donors can earmark their funds for specific grant proposals, many of which are targeted towards academic paper-writing, or simply contribute to our general fund.
The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing
We can reasonably debate torture vs. dust specks when it is one person being tortured versus 3^^^3 people being subjected to motes of dust.
However, there should be little debate when we are comparing the torture of one person to the minimal suffering of mere millions of people. I propose a way to generate approximately one billion dollars for charity over five years: The Craigslist Revolution.
In 2006, Craigslist's CEO Jim Buckmaster said that if enough users told them to "raise revenue and plow it into charity" that they would consider doing it. I have more recently emailed Craig Newmark and he indicated that they remain receptive to the idea if that's what the users want.
A simple text advertising banner at the top of the Craigslist home or listing pages would generate enormous amounts of revenue. They could put a large "X" next to the ad, allowing you to permanently close it. There seems to be little objection to this idea. The optional banner is harmless, and a billion dollars could be enough to dramatically improve the lives of millions, or to make a serious impact on the causes we take seriously around here. As a moral calculus, the decision seems a no-brainer. It's possible that some or many dollars would support bad charities, but the marginal impact of supporting some truly good charities makes the whole thing worthwhile.
I don't have access to Craigslist's detailed traffic data, but I think one billion USD over five years is a reasonable estimate for a single optional banner ad. With 20 billion pageviews a month, a Google AdWords banner would bring in about 200 million dollars a year; over five years, that is well over a billion dollars. With in-house employees selling the advertising rather than Google, that number could very well be multiplied. Even a very conservative lower bound for the additional revenue that could be trivially generated over five years would be 100 million dollars.
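The back-of-the-envelope arithmetic above can be checked in a few lines. The pageview figure is the post's own estimate, and the effective revenue per thousand impressions (here $0.85) is an assumption I'm supplying to make the numbers come out near the post's $200M/year figure, not a known Craigslist or AdWords rate:

```python
# Sanity check of the banner-revenue estimate. All inputs are
# rough assumptions, not actual Craigslist traffic or ad-rate data.
pageviews_per_month = 20e9   # the post's pageview estimate
rpm_usd = 0.85               # assumed revenue per 1000 impressions

monthly_revenue = pageviews_per_month / 1000 * rpm_usd
annual_revenue = monthly_revenue * 12
five_year_revenue = annual_revenue * 5

print(f"annual:    ${annual_revenue / 1e6:,.0f}M")
print(f"five-year: ${five_year_revenue / 1e9:,.2f}B")
```

At these assumed rates the banner yields roughly $204M a year and just over a billion dollars across five years, consistent with the estimate in the text.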
It's okay to be (at least a little) irrational
Caused by: Purchase Fuzzies and Utilons Separately
As most readers will know by now, if you're donating to a charity, it doesn't make sense to spread your donations across several charities (assuming you're primarily trying to maximize the amount of good done). You'll want to pick the charity where your money does the most good, and then donate as much as possible to that one. Most readers will also be aware that this isn't intuitive to most people - many will instinctively try to spread their money across several different causes.
I'm spending part of my income on charity, too. Admittedly, this isn't much - 30 USD each month - but then neither is my income as a student. Previously I had been spreading that sum across three different charities, each of them getting an equal amount. In at least two different venues, people had (not always knowingly) tried to talk me out of it, and I did feel that their arguments were pretty strong. Still, I didn't change my ways, even though mental pressure was building up, trying to push me in that direction. There were even some other charities I was considering donating to as well, even though I knew I probably shouldn't.
Then I read Eliezer's Purchase Fuzzies and Utilons Separately. Here was a post saying, in essence, that it's okay to spend some of your money in what amounted to an irrational way. Yes, go ahead and spread your money, and go ahead and use some of it just to purchase warm fuzzies. You're just human, after all. Just try to make sure you still donate more to a utilon maximizer than to purchasing the fuzzies.
Here I was, with a post that allowed me to stop rationalizing reasons why spreading money was good, and instead spread it because I was honestly selfish and just buying a good feeling. Now I didn't need to worry about being irrational in having diversified donations. So, since it was okay, I logged in to PayPal, cancelled the two monthly donations I had going to the other organizations, and tripled the amount of money that I was giving to the Institute Which Shall Not Be Named.
Not exactly the outcome one might have expected.
Purchase Fuzzies and Utilons Separately
Previously in series: Money: The Unit of Caring
Yesterday:
There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone...
If the lawyer needs to work an hour at the soup kitchen to keep himself motivated and remind himself why he's doing what he's doing, that's fine. But he should also be donating some of the hours he worked at the office, because that is the power of professional specialization and it is how grownups really get things done. One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen.
I hold open doors for little old ladies. I can't actually remember the last time this happened literally (though I'm sure it has, sometime in the last year or so). But within the last month, say, I was out on a walk and discovered a station wagon parked in a driveway with its trunk completely open, giving full access to the car's interior. I looked in to see if there were packages being taken out, but this was not so. I looked around to see if anyone was doing anything with the car. And finally I went up to the house and knocked, then rang the bell. And yes, the trunk had been accidentally left open.
Under other circumstances, this would be a simple act of altruism, which might signify true concern for another's welfare, or fear of guilt for inaction, or a desire to signal trustworthiness to oneself or others, or finding altruism pleasurable. I think that these are all perfectly legitimate motives, by the way; I might give bonus points for the first, but I wouldn't deduct any penalty points for the others. Just so long as people get helped.
But in my own case, since I already work in the nonprofit sector, the further question arises as to whether I could have better employed the same sixty seconds in a more specialized way, to bring greater benefit to others. That is: can I really defend this as the best use of my time, given the other things I claim to believe?
Money: The Unit of Caring
Previously in series: Helpless Individuals
Steve Omohundro has suggested a folk theorem to the effect that, within the interior of any approximately rational, self-modifying agent, the marginal benefit of investing additional resources in anything ought to be about equal. Or, to put it a bit more exactly, shifting a unit of resource between any two tasks should produce no increase in expected utility, relative to the agent's utility function and its probabilistic expectations about its own algorithms.
This resource balance principle implies that—over a very wide range of approximately rational systems, including even the interior of a self-modifying mind—there will exist some common currency of expected utilons, by which everything worth doing can be measured.
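In symbols (my own sketch of the principle, not Omohundro's notation): if the agent allocates resources r_1, ..., r_n to its n tasks under a total budget R, the claim is that at the agent's optimum

```latex
\frac{\partial\,\mathbb{E}[U]}{\partial r_i}
= \frac{\partial\,\mathbb{E}[U]}{\partial r_j}
\quad \text{for all tasks } i, j,
\qquad \text{subject to} \quad \sum_{k=1}^{n} r_k = R,
```

so that moving a marginal unit of resource from any task to any other leaves expected utility unchanged, which is the "no increase in expected utility" condition just stated.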
In our society, this common currency of expected utilons is called "money". It is the measure of how much society cares about something.
This is a brutal yet obvious point, which many are motivated to deny.
With this audience, I hope, I can simply state it and move on. It's not as if you thought "society" was intelligent, benevolent, and sane up until this point, right?
I say this to make a certain point held in common across many good causes. Any charitable institution you've ever had a kind word for, certainly wishes you would appreciate this point, whether or not they've ever said anything out loud. For I have listened to others in the nonprofit world, and I know that I am not speaking only for myself here...
Helpless Individuals
Previously in series: Rationality: Common Interest of Many Causes
When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all.
Well—there are governments with specialized militaries and police, which can extract taxes. That's a non-ancestral idiom which dates back to the invention of sedentary agriculture and extractible surpluses; humanity is still struggling to deal with it.
There are corporations in which the flow of money is controlled by centralized management, a non-ancestral idiom dating back to the invention of large-scale trade and professional specialization.
And in a world with large populations and close contact, memes evolve far more virulent than the average case of the ancestral environment; memes that wield threats of damnation, promises of heaven, and professional priest classes to transmit them.
But by and large, the answer to the question "How do large institutions survive?" is "They don't!" The vast majority of large modern-day institutions—some of them extremely vital to the functioning of our complex civilization—simply fail to exist in the first place.
I first realized this as a result of grasping how Science gets funded: namely, not by individual donations.
Altruist Coordination -- Central Station
Related to: Can Humanism Match Religion's Output?
I thought it would be helpful for us to have a central space to pool information about various organizations to which we might give our money and/or time. Honestly, a wiki would be ideal, but it seems this should do nicely.
Comment to this post with the name of an organization, and a direct link to where we can donate to them. Provide a summary of the group's goals, and their plans for reaching them. If you can link to outside confirmation of the group's efficiency and effectiveness, please do so.
Respond to these comments adding information about the named group, whether to criticize or praise it.
Hopefully, with the voting system, we should be able to collect the most relevant information we have available reasonably quickly.
If you choose to contribute to a group, respond to that group's comment with a dollar amount, so that we can all see how much we have raised for each organization.
Feel free to replace "dollar amount" with "dollar amount/month" in the above, if you wish to make such a commitment. Please do not do this unless you are (>95%) confident that said commitment will last at least a year.
If possible, mention this page, or this site, while donating.
Your Price for Joining
Previously in series: Why Our Kind Can't Cooperate
In the Ultimatum Game, the first player chooses how to split $10 between themselves and the second player, and the second player decides whether to accept the split or reject it—in the latter case, both parties get nothing. So far as conventional causal decision theory goes (two-box on Newcomb's Problem, defect in Prisoner's Dilemma), the second player should prefer any non-zero amount to nothing. But if the first player expects this behavior—accept any non-zero offer—then they have no motive to offer more than a penny. As I assume you all know by now, I am no fan of conventional causal decision theory. Those of us who remain interested in cooperating on the Prisoner's Dilemma, either because it's iterated, or because we have a term in our utility function for fairness, or because we use an unconventional decision theory, may also not accept an offer of one penny.
And in fact, most Ultimatum "deciders" offer an even split; and most Ultimatum "accepters" reject any offer less than 20%. A 100 USD game played in Indonesia (average per capita income at the time: 670 USD) showed offers of 30 USD being turned down, even though this equates to two weeks' wages. We can probably also assume that the players in Indonesia were not thinking about the academic debate over Newcomblike problems—this is just the way people feel about Ultimatum Games, even ones played for real money.
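The payoff structure described above can be sketched in a few lines of Python. This is purely illustrative; the `accept_threshold` default of 20% is taken from the empirical rejection figures just mentioned, and the function name is mine:

```python
def ultimatum(offer, total, accept_threshold=0.2):
    """One round of the Ultimatum Game.

    The proposer offers `offer` out of `total`; the responder
    accepts only if the offer is at least `accept_threshold` of
    the pot. Rejection leaves both players with nothing.
    Returns (proposer_payoff, responder_payoff).
    """
    if offer >= accept_threshold * total:
        return total - offer, offer
    return 0.0, 0.0

# A pure CDT responder would take any non-zero amount (threshold ~0):
print(ultimatum(0.01, 10, accept_threshold=0.0))  # accepts a penny
# The empirically typical responder rejects offers under ~20%:
print(ultimatum(1.0, 10))  # rejected; both players get nothing
# The even split that most proposers actually offer:
print(ultimatum(5.0, 10))
```

Note that the Indonesian players rejected even 30% offers, so their effective threshold sits above this function's 20% default.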
There's an analogue of the Ultimatum Game in group coordination. (Has it been studied? I'd hope so...) Let's say there's a common project—in fact, let's say that it's an altruistic common project, aimed at helping mugging victims in Canada, or something. If you join this group project, you'll get more done than you could on your own, relative to your utility function. So, obviously, you should join.
But wait! The anti-mugging project keeps their funds invested in a money market fund! That's ridiculous; it won't earn even as much interest as US Treasuries, let alone a dividend-paying index fund.
Clearly, this project is run by morons, and you shouldn't join until they change their malinvesting ways.
Now you might realize—if you stopped to think about it—that all things considered, you would still do better by working with the common anti-mugging project, than striking out on your own to fight crime. But then—you might perhaps also realize—if you too easily assent to joining the group, why, what motive would they have to change their malinvesting ways?
Well... Okay, look. Possibly because we're out of the ancestral environment where everyone knows everyone else... and possibly because the nonconformist crowd tries to repudiate normal group-cohering forces like conformity and leader-worship...
...It seems to me that people in the atheist/libertarian/technophile/sf-fan/etcetera cluster often set their joining prices way way way too high. Like a 50-way split Ultimatum game, where every one of 50 players demands at least 20% of the money.
Why Our Kind Can't Cooperate
Previously in series: Rationality Verification
From when I was still forced to attend, I remember our synagogue's annual fundraising appeal. It was a simple enough format, if I recall correctly. The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraiser was, and then the synagogue's members called out their pledges from their seats.
Straightforward, yes?
Let me tell you about a different annual fundraising appeal. One that I ran, in fact; during the early years of a nonprofit organization that may not be named. One difference was that the appeal was conducted over the Internet. And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd. (To point in the rough direction of an empirical cluster in personspace. If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.)
I crafted the fundraising appeal with care. By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years. The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal. I sent it out to several mailing lists that covered most of our potential support base.
And almost immediately, people started posting to the mailing lists about why they weren't going to donate. Some of them raised basic questions about the nonprofit's philosophy and mission. Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them. (They didn't volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)
Now you might say, "Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?"
Hold on to that thought.
Because people were donating. We started getting donations right away, via Paypal. We even got congratulatory notes saying how the appeal had finally gotten them to start moving. A donation of $111.11 was accompanied by a message saying, "I decided to give **** a little bit more. One more hundred, one more ten, one more single, one more dime, and one more penny. All may not be for one, but this one is trying to be for all."
But none of those donors posted their agreement to the mailing list. Not one.
Soulless morality
Follow-up to: So you say you're an altruist
The responses to So you say you're an altruist indicate that people have split their values into two categories:
1. values they use to decide what they want
2. values that are admissible for moral reasoning
(where 2 is probably a subset of 1 for atheists, and probably nearly disjoint from 1 for Presbyterians).
You're reading Less Wrong. You're a rationalist. You've put a lot of effort into education, and learning the truth about the world. You value knowledge and rationality and truth a lot.
Someone says you should send all your money to Africa, because this will result in more human lives.
What happened to the value you placed on knowledge and rationality?
There is little chance that any of the people you save in Africa will get a good post-graduate education and then follow that up by rejecting religion, embracing rationality, and writing Less Wrong posts.
Here you are, spending a part of your precious life reading Less Wrong. If you spend 10% of your life on the Web, you are saying that that activity is worth at least 1/10th of a life, and that lives with no access to the Web are worth less than lives with access. If you value rationality, then lives lived rationally are more valuable than lives lived irrationally. If you think something has a value, you have to give it the same value in every equation. Not doing so is immoral. You can't use different value scales for everyday and moral reasoning.
Society tells you to work to make yourself more valuable. Then it tells you that when you reason morally, you must assume that all lives are equally valuable. You can't have it both ways. If all lives have equal value, we shouldn't criticize someone who decides to become a drug addict on welfare. Value is value, regardless of which equation it's in at the moment.