Tallinn-Evans $125,000 Singularity Challenge

27 Post author: Kaj_Sotala 26 December 2010 11:21AM

Michael Anissimov posted the following on the SIAI blog:

Thanks to the generosity of two major donors: Jaan Tallinn, a founder of Skype and Ambient Sound Investments, and Edwin Evans, CEO of the mobile applications startup Quinly, every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Interested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. The Singularity Institute exists to do so through its research, the Singularity Summit, and public education.

We support both direct engagement with the issues and the improvements in methodology and rationality needed to make better progress. Through our Visiting Fellows program, researchers from undergraduates to Ph.D.s pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three-month stints. Our Resident Faculty, now four researchers, up from three last year, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. Singularity Institute researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.

We are pleased to receive donation matching support this year from Edwin Evans of the United States, a long-time Singularity Institute donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a talk on the Singularity and his life at an entrepreneurial group in Finland. Here's what Jaan has to say about us:

“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of SIAI’s work cannot be overestimated. Not finding any organisation to take up this challenge as seriously as SIAI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”
– Jaan Tallinn, Singularity Institute donor

Make a lasting impact on the long-term future of humanity today — make a donation to the Singularity Institute and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at institute@intelligence.org or read our new organizational overview.

-----

Kaj's commentary: if you haven't done so recently, do check out the SIAI publications page. There are several new papers and presentations, out of which I thought that Carl Shulman's Whole Brain Emulations and the Evolution of Superorganisms made for particularly fascinating (and scary) reading. SIAI's finally starting to get its paper-writing machinery into gear, so let's give them money to make that possible. There's also a static page about this challenge; if you're on Facebook, please take the time to "like" it there.

(Full disclosure: I was an SIAI Visiting Fellow in April-July 2010.)

Comments (369)

Comment author: VNKKET 26 February 2011 04:32:09AM 6 points [-]

I donated $250 on the last day of the challenge.

Comment author: curiousepic 21 January 2011 04:35:45PM 2 points [-]

According to the page, they (we) made it to the full $125,000/250,000! Does anyone know what percentage this is of all money the SIAI has raised?

Comment author: Rain 28 January 2011 01:26:59PM 1 point [-]

Their annual budget is typically in the range of $500,000, so this would be around half.

Comment author: XiXiDu 28 January 2011 01:39:51PM 4 points [-]

I wonder at what point donations to the SIAI would hit diminishing returns, such that contributing to another underfunded cause would be more valuable. Suppose, for example, that Bill Gates were going to donate US$1.5 billion; would my $100 donation still be best placed with the SIAI?

Comment author: Rain 29 January 2011 10:10:20PM 2 points [-]

Marginal contributions are certainly important to consider; it's one of the reasons I mentioned in my original post explaining why I support them.

Even asteroid discovery, long considered underfunded, is receiving hundreds of millions.

Comment author: anon895 20 January 2011 05:32:50PM *  13 points [-]

In a possibly bad decision, I put a $1000 check in the mailbox with the intent of going out and transferring the money to my checking account later today. That puts them at $123,700 using Silas' count.

Comment author: anon895 20 January 2011 10:49:45PM 3 points [-]

...yep, didn't make it. I'll have to get to the bank early tomorrow and hope the mail is slow.

Comment author: anon895 21 January 2011 10:06:10PM 2 points [-]

Ended up making the transfer over the phone.

Comment author: sfb 19 January 2011 11:06:48PM 3 points [-]

every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Anyone willing to comment on that as a rationalist incentive? Presumably I'm supposed to think "I want more utility to SIAI so I should donate at a time when my donation is matched so SIAI gets twice the cash" and not "they have money which they can spare and are willing to donate to SIAI but will not donate it if their demands are not met within their timeframe, that sounds a lot like coercion/blackmail"?

Would it work the other way around? If we individuals grouped together and said "We collectively have $125,000 to donate to SIAI but will only do so if SIAI convinces a company / rich investor to match it dollar for dollar before %somedate%"?

Comment author: timtyler 20 January 2011 12:06:41AM 1 point [-]

The sponsor gets publicity for their charitable donation, while the charity stimulates donations by making donors feel as though they are getting better value for money.

If the sponsor proposes the deal, they can sometimes make the charity work harder at their fund-raising effort for the duration - which probably helps their cause.

If the charity proposes the deal, the sponsor can always pay the rest of their gift later.

Comment author: AnnaSalamon 19 January 2011 11:39:50PM *  12 points [-]

It's a symmetrical situation. Suppose that A prefers having $1 in his personal luxury budget to having $1 in SIAI, but prefers having $2 in SIAI to having a mere $1 in his personal luxury budget. Suppose that B has the same preferences (regarding his own personal luxury budget, vs SIAI).

Then A and B would each prefer not-donating to donating, but they would each prefer donating-if-their-donation-gets-a-match to not-donating. And so a matching campaign lets them both achieve their preferences.

This is a pretty common situation -- for example, lots of people are unwilling to give large amounts now to save lives in the third world, but would totally be willing to give $1k if this would cause all other first worlders to do so, and would thereby prevent all the cheaply preventable deaths. Matching grants are a smaller version of the same.
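AnnaSalamon's preference ordering can be sketched as a toy payoff model (the specific utility numbers below are illustrative assumptions, not anything from the comment):

```python
# Toy model of the matching argument: each donor values $1 of personal
# luxury more than $1 arriving at the charity, but values $2 at the
# charity more than $1 of luxury. Assumed rates: 1.5 utils per dollar
# kept, 1 util per dollar the charity receives.
LUXURY_VALUE = 1.5   # utils per dollar kept
CHARITY_VALUE = 1.0  # utils per dollar the charity receives

def payoff(donate: bool, matched: bool) -> float:
    """One donor's utility from the fate of a single marginal dollar."""
    dollars_to_charity = (2 if matched else 1) if donate else 0
    dollars_kept = 0 if donate else 1
    return dollars_kept * LUXURY_VALUE + dollars_to_charity * CHARITY_VALUE

# Donating unmatched is worse than keeping the dollar...
assert payoff(donate=True, matched=False) < payoff(donate=False, matched=False)
# ...but donating under a match beats keeping it, so the matching
# campaign lets both A and B achieve their preferences.
assert payoff(donate=True, matched=True) > payoff(donate=False, matched=False)
```

Any pair of rates with LUXURY_VALUE between 1x and 2x CHARITY_VALUE reproduces the same ordering.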

Comment author: steven0461 20 January 2011 12:16:07AM 3 points [-]

It seems like it would be valuable to set up ways for people to make these deals more systematically than through matching grants.

Comment author: wedrifid 19 January 2011 11:48:12PM 2 points [-]

This is a pretty common situation

Indeed. It seems to be essentially 'solving a cooperation problem'.

Comment author: SilasBarta 19 January 2011 10:27:26PM 13 points [-]

I donated 1000 USD. (This puts them at ~$122,700 ... so close!)

Comment author: curiousepic 19 January 2011 04:06:34AM *  10 points [-]

I have not donated a significant amount before, but will donate $500 IF someone else will (double) match it.

Why did the SIAI remove the Grant Proposals page? http://singinst.org/grants/challenge#grantproposals

EDIT: Donated $500, in response to wmorgan's $1000

Comment author: wmorgan 19 January 2011 06:17:18AM 18 points [-]

Your comment spurred me into donating an additional $1,000.

Comment author: curiousepic 19 January 2011 01:46:31PM *  8 points [-]

Excellent! Donated $500. Whether yours is a counter-bluff or not ;)

This is by far the most I've donated to a charity. I spent yesterday assessing my financial situation, something I've only done in passing because of my fairly comfortable position. It has always felt smart to me to ignore the existence of my excess cash, but I have a fair amount of it and the recent increase of discussion about charity has made me reassess where best to locate it. I will be donating to SENS in the near future, probably more than I have to SIAI. I'm aware of the argument for giving everything to a single charity, but it seems even Eli is conflicted about giving advice about SIAI vs. SENS, given this discussion.

I recently read that investing in the stock market (casually, not as a trader or anything) in the hopes that your wealth will grow such that you can donate even more at a later time is erroneous because the charity could be doing the same thing, with more of it. Is this true, and does anyone know if the SIAI, or SENS does this? It seems to me that both of these organizations have immediate use for pretty much all money they receive and do not invest at all. How much would my money have to make in an investment account to be able to contribute more (adjusting for inflation) in the future?
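One rough way to frame the invest-versus-donate-now question (a sketch with hypothetical rates; the `charity_discount` parameter is my own framing of "the charity could be doing the same thing," not a figure SIAI or SENS has published):

```python
# Donating later beats donating now only if your investment return
# exceeds the rate at which the charity discounts future money
# (e.g. because money now funds work that attracts more money later).
def future_value(amount: float, annual_return: float, years: int) -> float:
    return amount * (1 + annual_return) ** years

def value_to_charity_now(amount: float, charity_discount: float, years: int) -> float:
    """Present value, to the charity, of a donation made `years` from now."""
    return amount / (1 + charity_discount) ** years

donation = 1000.0
# Earning 5%/yr while the charity effectively discounts future money
# at 10%/yr makes waiting 10 years a net loss:
later = value_to_charity_now(future_value(donation, 0.05, 10), 0.10, 10)
assert later < donation
# Only if your return beat the charity's discount rate would waiting win:
later_high = value_to_charity_now(future_value(donation, 0.15, 10), 0.10, 10)
assert later_high > donation
```

So the break-even return is exactly the charity's discount rate, which is why the question hinges on how urgently the organization can use money today.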

Comment author: endoself 20 January 2011 06:20:54PM *  5 points [-]

The logic of donating now is that if a charity would use your money now, it is because less money now is more useful than more money later. Not all charities may be smart enough to realize whether they should invest, but I feel confident that if investing money rather than spending it right away were the best approach for their goals, the people at the SIAI would be smart enough to do so.

Comment author: Dorikka 19 January 2011 05:01:18AM 1 point [-]

I think that a rational agent would donate the $500 eventually either way, because the utility value of a $500 contribution would be greater than that of a $0 contribution even if the matching $500 was not forthcoming. Thus, the precommitment to withhold the donation if it is not matched seems to be a bluff (for even if the agent reported that he had not donated the money, he could do so privately without fear of exposure). Therefore, it seems to me that the matching arrangement is a device designed to convince irrational agents, because the matcher's contribution does not affect the amount of the original donor's contribution.

Am I missing something?

Comment author: endoself 19 January 2011 05:23:44AM 2 points [-]

He may actually refrain from donating, by the reasoning that such offers would work iff someone deems them reasonable and that person is more likely to deem it reasonable if he does, by TDT/UDT. I could see myself doing such a thing.

Comment author: Dorikka 19 January 2011 05:39:13AM 0 points [-]

But whether he does or doesn't donate does not affect how such offers are responded to in the future, since he is free to lie without fear of exposure. Given such, it seems that he should always maximize utility by donating.

Comment author: endoself 19 January 2011 06:15:58AM 0 points [-]

Future offers do not matter. His precommitment not to donate if others do not donate acausally affects how this offer is responded to.

Comment author: Dorikka 19 January 2011 05:30:10PM 0 points [-]

I'm not sure I understand what you mean. Would you mind explaining?

Comment author: endoself 19 January 2011 06:17:53PM *  0 points [-]

Are you familiar with UDT? There's a lot about it written on this site. It's complex and non-intuitive, but fascinating and a real conceptual advance. You can start by reading about http://wiki.lesswrong.com/wiki/Counterfactual_mugging . In general, decision theory is weird, much weirder than you'd expect.

Comment author: Dorikka 21 January 2011 01:45:37AM 0 points [-]

I've read some of the posts on Newcomblike problems, but am not very familiar with UDT. I'll take a look -- thanks for the link.

Comment author: hairyfigment 19 January 2011 02:24:51AM 15 points [-]

Just donated $500.

(At one time I had an excuse for waiting. But plainly I won't get confirmation on a price for cryonics-themed life insurance by the deadline, and should likely have donated sooner.)

Comment author: Furcas 17 January 2011 12:32:28AM *  20 points [-]

Donated $500 CAD just now.

By the way, SIAI is still more than 31,000 US dollars away from its target.

Comment author: wmorgan 14 January 2011 06:00:39PM 19 points [-]

$1,000

Comment author: Nick_Tarleton 06 January 2011 08:18:24AM 20 points [-]

I just donated $512.

Comment author: Psy-Kosh 05 January 2011 10:57:05PM 16 points [-]

Just donated $200.

Comment author: AngryParsley 05 January 2011 01:33:09AM *  21 points [-]
Comment author: Normal_Anomaly 05 January 2011 01:25:09AM 13 points [-]

Donated $50.

Comment author: Kyre 03 January 2011 03:09:15AM 19 points [-]

$1000 - looking forward to a good year for SIAI in 2011.

Comment author: XiXiDu 02 January 2011 03:34:56PM 9 points [-]

I just sent 15 USD to each the SIAI, VillageReach and The Khan Academy.

I am aware of and understand this, but felt more comfortable diversifying right now. I also know it is not much; I'll have to somehow force myself to buy fewer shiny gadgets and donate more. Generally I need to be less inclined to hoard money and more inclined to give it.

Comment author: AlexMennen 02 January 2011 05:45:33AM 19 points [-]

Donated $120

Comment author: JGWeissman 02 January 2011 02:08:18AM 44 points [-]

I just wrote a check for $13,200.

Comment author: Costanza 02 January 2011 05:11:41AM 7 points [-]

As I write, this comment has earned only 5 karma points (one of them mine). According to Larks' exchange rate of $32 to one karma point, this donation has more than four hundred upvotes to go.

Wait ... I assume you're planning to actually mail the check too?

Comment author: JGWeissman 02 January 2011 05:27:26AM 15 points [-]

Wait ... I assume you're planning to actually mail the check too?

Yes, I mailed the check, too, just after writing the comment. (And I wrote and mailed it to SIAI. No tricks, it really is a donation.)

I would be surprised if karma scaled linearly with dollars over that range.

Comment author: taryneast 01 January 2011 11:42:31PM 15 points [-]

$50 - it's definitely a different cause to the usual :)

Comment author: patrissimo 01 January 2011 10:28:43PM 20 points [-]

Wow, SIAI has succeeded in monetizing Less Wrong by selling karma points. This is either a totally awesome blunder into success or sheer Slytherin genius.

Comment author: orthonormal 31 December 2010 08:56:04PM *  25 points [-]

I just donated $1,370. The reason why it's not a round number is interesting, and I'll write a Discussion post about it in a minute. EDIT: Here it is.

Also, I find it interesting that (before my donation) the status bar for the challenge was at $8,500, and the donations mentioned here totaled (by my estimation) about $6,700 of that...

Comment author: Larks 29 December 2010 11:52:50PM 27 points [-]

On the one hand, I absolutely abhor SIAI. On the other hand, I'd love to turn my money into karma...

/joke

$100

Comment author: Larks 01 January 2011 01:35:15AM 6 points [-]

At the moment, my comment has 15 karma, while Leonhart's, which was posted before, and for more money, has 14. As £1 = $1.5,

$32 = 1 karma,

and thus my donation is only worth around 3 karma.

So it seems my joke must have been worth 12 karma, or $386. I never realised my comparative advantage was in humour...

Comment author: [deleted] 01 January 2011 02:15:37AM *  5 points [-]

I imagine karma and donation amounts, if they correlate at all, correlate on a log scale. We'd therefore expect your comment to get 14/log(300 x 1.5) x log(100) karma from the donation amount alone, which comes to about 10.5 karma. Therefore 4.5 of your karma came from your joke.

Unfortunately, we can't convert your joke karma into dollars in any consistent way. But if you hadn't donated any money, and made an equally good joke, you would have gotten about as much karma as someone donating $7, assuming our model holds up in that range.

Edit: also a factor is that I'm sure many people on LessWrong don't actually know the conversion factor between $ and £.
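The model in the parent comment checks out numerically (a sketch using the thread's assumed £1 = $1.5 rate):

```python
import math

# Log-scale model: karma ~ k * log10(dollars), calibrated on Leonhart's
# £300 (= $450) donation earning 14 karma.
k = 14 / math.log10(300 * 1.5)           # karma per log-dollar

karma_from_100 = k * math.log10(100)     # predicted karma for Larks's $100
assert abs(karma_from_100 - 10.5) < 0.1  # "comes to about 10.5 karma"

joke_karma = 15 - karma_from_100         # observed 15 karma minus predicted
assert abs(joke_karma - 4.5) < 0.1       # "4.5 of your karma came from your joke"

# Dollar value of the joke: the donation that would earn the same karma.
joke_dollars = 10 ** (joke_karma / k)
assert 6 < joke_dollars < 8              # "as much karma as someone donating $7"
```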

Comment author: Leonhart 29 December 2010 04:53:59PM 27 points [-]

£300.

Comment author: ciphergoth 28 December 2010 02:34:07PM 24 points [-]

I seem to remember reading a comment saying that if I make a small donation now, it makes it more likely I'll make a larger donation later, so I just donated £10.

Comment author: timtyler 31 December 2010 03:55:24PM 2 points [-]

Does that still work, once you know about the sunk cost fallacy?

Comment author: Perplexed 31 December 2010 05:39:55PM 1 point [-]

Perhaps it works due to warm-and-fuzzy slippery slopes, rather than sunk costs.

Comment author: ciphergoth 31 December 2010 05:00:56PM 1 point [-]

Don't know - I guess we'll find out!

Comment author: Vaniver 28 December 2010 02:48:46PM 10 points [-]

Ben Franklin effect, as well as consistency bias. Good on you for turning a bug into a feature.

Comment author: Dr_Manhattan 27 December 2010 10:57:35PM 6 points [-]
Comment author: Yvain 27 December 2010 10:56:57PM 18 points [-]

New Year's resolution is not to donate to things until I check if there's a matching donation drive starting the next week :( Anyway, donated a little extra because of all the great social pressure from everyone's amazing donations here. Will donate more when I have an income.

Comment author: NancyLebovitz 02 January 2011 04:35:11PM 1 point [-]

I wonder if there's empirical research on how much in advance to announce matching donation drives so as to maximize revenue.

Any observations of how established charities handle this?

Comment author: Kevin 30 December 2010 10:55:05AM 2 points [-]

I don't think it actually matters, unless the matching drive isn't fulfilled. Even then, I would be really surprised if Jaan and Edwin take their money back. So in some sense it is better to have donated before the drive, as it allows someone else to have their donation matched who might not have donated without the promise of matching.

Comment author: Benquo 27 December 2010 11:12:01PM *  5 points [-]

At first I felt a little better that someone else made the same mistake, but on reflection I should feel worse.

Comment author: John_Maxwell_IV 09 March 2011 08:17:05AM 0 points [-]

On reflection I shouldn't feel bad about much of anything.

Comment author: Dorikka 19 January 2011 04:44:22AM 2 points [-]

I would avoid the phrase "I should feel worse" in most scenarios due to pain and gain motivation.

Comment author: blogospheroid 27 December 2010 05:49:56PM 33 points [-]

I put in $500, really pinches in Indian rupees (Rs. 23,000+). Hoping for the best to happen next year with a successful book release and promising research to be done.

Comment author: Plasmon 27 December 2010 03:39:17PM 17 points [-]

I have donated a small amount of money.

The Singularity is now a little bit closer and safer because of your efforts. Thank you. We will send a receipt for your donations and our newsletter at the end of the year. From everyone at the Singularity Institute – our deepest thanks.

I do hope they mean they will send a receipt and newsletter by e-mail, and not by physical mail.

Comment author: David_Gerard 27 December 2010 04:38:50PM *  -2 points [-]

I have donated a small amount of money.

I understood that this was considered pointless hereabouts: that the way to effective charitable donation is to pick the most effective charity and donate your entire charity budget to it. Thus, the only appropriate donations to SIAI would be nothing or everything.

Or have I missed something in the chain of logic?

(This is, of course, from the viewpoint of the donor rather than that of the charity.)

Edit: Could the downvoter please explain? I am not at all personally convinced by that Slate story, but it really is quite popular hereabouts.

Comment author: MichaelVassar 29 December 2010 05:12:06PM 1 point [-]

The Slate article is correct, but it's desirable to be polite as well as accurate if you actually want to communicate something. Also, if someone wants to donate to feel good, that feeling good is an actively good thing that they are purchasing, and it's undesirable to try to damage it.

Comment author: SilasBarta 19 January 2011 10:13:11PM *  1 point [-]

Wait, if I do an echeck through Paypal today, would it count toward the challenge? Paypal says it takes a few days to process :-/

EDIT: n/m, I guess I can just do it via credit card, though SIAI gets less that way.

Comment author: AnnaSalamon 19 January 2011 10:44:51PM 2 points [-]

Donations count toward the challenge if they're dated before the end, even if they aren't received until a few days later.

Comment author: SilasBarta 19 January 2011 11:07:02PM 0 points [-]

Thanks. How long until a donation is reflected in the picture? Is it possible the 125k goal is already met?

Comment author: MichaelAnissimov 20 January 2011 12:02:06AM 1 point [-]

I update it daily.

Comment author: SilasBarta 20 January 2011 08:36:35PM 0 points [-]

Victory! The $125k challenge has been met, according to the current site's picture! (mouse over the image)

Though of course it still encourages you to donate to help meet ... that same $125k goal.

Comment author: MichaelAnissimov 20 January 2011 11:18:30PM 5 points [-]

Thank you everyone, I really appreciate all your contributions. We've had a wonderful past year and the fulfillment of this matching challenge really capped it off.

http://singinst.org/blog/2011/01/20/tallinn-evans-challenge-grant-success/

Comment author: SilasBarta 19 January 2011 08:53:47PM 2 points [-]

What's the status on this? The picture on the page suggests the $125,000 matching maximum was met, but nothing says for sure.

What time on Thursday is the deadline?

Comment author: curiousepic 19 January 2011 09:31:51PM 1 point [-]

Mousing over the image gives the total $121,616.

Comment author: SilasBarta 19 January 2011 10:07:21PM 2 points [-]

Sweet, I can still be the one to push it over! [1]

[1] so long as you disregard the fungibility of money and therefore my contribution's indistinguishability from that of all the others.

Comment author: shokwave 29 December 2010 06:09:56PM 0 points [-]

The guy from GiveWell linked to this, which seems relevant to your point.

Comment author: shokwave 28 December 2010 04:19:05PM *  0 points [-]

the way to effective charitable donation is to pick the most effective charity and donate your entire charity budget to it. Thus, the only appropriate donations to SIAI would be nothing or everything.

Your conclusion doesn't follow from the premise. A small amount of money could reasonably be Plasmon's entire charity budget; when you say "nothing or everything" you do not qualify it with "of your charity budget".

edit: Oy, if I'd scrolled down!

Comment author: Kaj_Sotala 28 December 2010 03:45:58PM 15 points [-]

I feel rather uncomfortable at seeing someone mention that he donated, and getting a response which indirectly suggests that he's being irrational and should have donated more.

Comment author: shokwave 28 December 2010 05:20:29PM 3 points [-]

It is indirect, but I believe David is trying to highlight the possibility of problems with the Slate article. Once we have something to protect (a donor), we will be more motivated to explore its possible failings instead of taking it as gospel.

Comment author: David_Gerard 28 December 2010 04:51:50PM 0 points [-]

I don't think that, as I have noted. I'm not at all keen on the essay in question. But it is popular hereabouts.

Comment author: Kaj_Sotala 28 December 2010 08:54:53PM 0 points [-]

Okay, good. But it still kinda comes off that way, at least to me.

Comment author: [deleted] 27 December 2010 06:31:09PM 2 points [-]

Unless, of course, you believe that the decisions of other people donating to charity are correlated with your own. In this case, a decision to donate 100% of your money to SIAI would mean that all those people implementing a decision process sufficiently similar to your own would donate 100% of their money to SIAI. A decision to donate 50% of your money to SIAI and 50% to Charity Option B would imply a similar split for all those people as well.

If there are enough people like this, then the total amount of money involved may be large enough that the linear approximation does not hold. In that case, it seems natural to me to assume that, if both charity options are worthwhile, significantly increasing the successfulness of both charities is more important than increasing SIAI's successfulness even more significantly. Thus, you would donate 50%/50%.

Overall, the argument you link to seems to me to parallel (though inexactly) the argument that voting is pointless considering how unlikely your vote is to swing the outcome.

Comment author: Caspian 02 January 2011 02:34:43PM 2 points [-]

Also, your errors in choosing a charity won't necessarily be random. For example, suppose you trust your reasoning to pick the best three charities, but suspect that if you had to pick just one, you'd end up influenced by deceptive marketing, bad arguments, or biases you'd rather not act on. If the same applies to other people, you may be better off not choosing between them, and better off if other people don't try to choose between them either.

Comment author: paulfchristiano 27 December 2010 09:38:27PM 1 point [-]

This only applies if people donate simultaneously, which I doubt is the case in practice.

Comment author: [deleted] 27 December 2010 09:43:39PM 0 points [-]

I don't understand. Could you please clarify?

Comment author: paulfchristiano 27 December 2010 09:56:43PM 5 points [-]

In this case, a decision to donate 100% of your money to SIAI would mean that all those people implementing a decision process sufficiently similar to your own would donate 100% of their money to SIAI. A decision to donate 50% of your money to SIAI and 50% to Charity Option B would imply a similar split for all those people as well.

This argument assumes that the people using a similar decision process are faced with the same evidence. In particular, if they made their decision significantly later then they would know about your donation (not directly, but if SIAI now had significantly more funds they could know about it).

If all decision makers were perfectly rational and omniscient, but didn't have to make their decisions at the same time, then you wouldn't expect to see the 50/50 splitting. You would expect everyone to donate to the charity for which the current marginal usefulness is greatest. In the situation you envision, the marginal usefulness would decrease over time, until eventually donors would notice that it was no longer the best option, and then start diverting their funding. Perhaps once this sort of equilibrium is reached splitting your money is advisable, but we are extremely unlikely to be anywhere near such an equilibrium (with respect to my personal values) unless there is an explicit mechanism pushing us towards it. This would probably require postulating a lot of brilliant rational donors with identical values.

Comment author: David_Gerard 27 December 2010 09:18:49PM 1 point [-]

I'm not keen on it myself, but I've seen it linked here (and pushed elsewhere by LessWrong regulars) quite a lot.

Comment author: topynate 27 December 2010 05:58:41PM 7 points [-]

The idea is that the optimal method of donation is to donate as much as possible to one charity. Splitting your donations between charities is less effective, but still benefits each. They actually have a whole page about how valuable small donations are, so I doubt they'd hold a grudge against you for making one.

Comment author: David_Gerard 27 December 2010 07:58:06PM -1 points [-]

Yes, I'm sure the charity has such a page. I am intimately familiar with how splitting donations into fine slivers is very much in the interests of all charities except the very largest; I was speaking of putative benefit to the donors.

Comment author: Eliezer_Yudkowsky 01 January 2011 10:57:14PM 9 points [-]

I am intimately familiar with how splitting donations into fine slivers is very much in the interests of all charities except the very largest;

Not the largest, the neediest.

As charities become larger, the marginal value of the next donation goes down; they become less needy. In an efficient market for philanthropy you could donate to random charities and it would work as well as buying random stocks. We do NOT have an efficient market in philanthropy.

Comment author: David_Gerard 02 January 2011 11:35:22AM *  1 point [-]

No, I definitely meant size, not need (or effectiveness or quality of goals or anything else). A larger charity can mount more effective campaigns than a smaller one. This is from the Iron Law of Institutions perspective, in which charities are blobs for sucking in money from a more or less undifferentiated pool of donations. An oversimplification, but not too much of one, I fear - there's a reason charity is a sector in employment terms.

Comment author: Eliezer_Yudkowsky 02 January 2011 07:43:22PM 5 points [-]

It is necessary at all times to distinguish whether we are talking about humans or rational agents, I think.

<humans> If you expect that larger organizations mount more effective marketing campaigns and do not attend to their own diminishing marginal utility and that most people don't attend to the diminishing marginal utility either, you should look for maximum philanthropic return among smaller organizations doing very important, almost entirely neglected things that they have trouble marketing, but not necessarily split your donation up among those smaller organizations, except insofar as, being a human, you can donate more total money if you split up your donations to get more glow. </humans>

<rational agents> Marketing campaign? What's a marketing campaign? </rational agents>

Comment author: shokwave 03 January 2011 05:06:04AM *  1 point [-]

Voted up because swapping those tags around is funny.

Comment author: wedrifid 03 January 2011 04:42:26AM 1 point [-]

<rational agents> Marketing campaign? What's a marketing campaign? </rational agents>

Rational agents are not necessarily omniscient agents. There are cases where providing information to the market is a practical course of action.

Comment author: shokwave 03 January 2011 05:12:18AM 0 points [-]

Can't rational agents then mostly discount your information due to publication bias? In any case where providing information is not to your benefit, you would not provide it.

Comment author: wedrifid 03 January 2011 05:59:59AM 1 point [-]

Discount but not discard. Others have their own agenda and if it were directly opposed to mine such that all our interactions were zero sum then I would ignore their communication. But in most cases there is some overlap in goals or at least compatibility. In such cases communication can be useful. Particularly when the information is verifiable. There will be publication bias but that is a bias not a completely invalidated signal.

Comment author: Nick_Tarleton 03 January 2011 05:43:47AM 0 points [-]

To amplify Eliezer's response: What Evidence Filtered Evidence? and comments thereon.

Comment author: Eliezer_Yudkowsky 03 January 2011 05:35:22AM 1 point [-]

In which case the nonprovision of that info is also information.

But it wouldn't at all resemble marketing as we know it, either way.

Comment author: TheOtherDave 03 January 2011 02:58:47AM 0 points [-]

<rational agents> Marketing campaign? What's a marketing campaign? </rational agents>

A mechanism for making evidence that supports certain conclusions more readily available to agents whose increased confidence in those conclusions benefits me.

Comment author: Nick_Tarleton 02 January 2011 12:13:28AM *  0 points [-]

How does everyone splitting donations go against the interests of the neediest charities, if we don't have an efficient market in philanthropy and the lumped donations would have gone to the most popular (hypothetically = largest) charities rather than the neediest?

Or did you interpret "splitting donations" as referring to something other than everyone doing so?

Comment author: topynate 27 December 2010 10:09:02PM *  8 points [-]

Actions which increase utility but do not maximise it aren't "pointless". If you have two charities to choose from, £100 to spend, and you get a constant 2 utilons/£ for charity A and 1 utilon/£ for charity B, you still get a utilon for each pound you donate to B, even though to get 200 utilons you should donate the full £100 to A. It's just the wrong word to apply to the action, even assuming that someone who says he's donated a small amount is also saying that he's donated a small proportion of his charitable budget (which it turns out wasn't true in this case).
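Topynate's arithmetic can be sketched in a few lines. This is a minimal illustration (the charity names and rates are the hypothetical ones from the comment, assuming constant marginal utilons per pound):

```python
# Constant marginal rates from the comment's example: 2 utilons/£ for A, 1 for B.
UTILONS_PER_POUND = {"A": 2.0, "B": 1.0}
BUDGET = 100.0

def total_utilons(allocation):
    """Sum utilons over a {charity: pounds} allocation."""
    return sum(UTILONS_PER_POUND[c] * pounds for c, pounds in allocation.items())

all_a = total_utilons({"A": BUDGET})            # 200.0: the maximising allocation
split = total_utilons({"A": 50.0, "B": 50.0})   # 150.0: suboptimal, but not pointless
all_b = total_utilons({"B": BUDGET})            # 100.0: still strictly positive
```

With constant marginal rates the maximum is always a corner solution (everything to A), but every allocation still produces positive utility, which is the comment's point about "pointless" being the wrong word.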

Comment author: Plasmon 27 December 2010 05:08:06PM *  6 points [-]

My donations are as effective as possible: I have never before donated anything to any organisation (except indirectly, via tax money).

I am too cautious to risk "black-swan events". I am probably overly cautious.

It could well be argued that donating more would be more cautious, depending on the probability of both black-swan events and UFAI, and the effectiveness of SIAI, but I'm sure there are plenty of threads about that already.

Comment author: Benquo 27 December 2010 04:13:33AM 21 points [-]

Darn it; I just made my annual donation a few days ago, but hopefully my employer's matching donation will come in during the challenge period. I will make sure to make my 2011 donation during the matching period (i.e. well before January 20th), in an amount no less than $1000.

Comment author: Benquo 04 January 2011 06:46:13PM 3 points [-]

Followed up today with my 2011 donation.

Comment author: wedrifid 27 December 2010 04:53:56AM *  3 points [-]

I will make sure to make my 2011 donation during the matching period

Whoops. The market just learned.

Comment author: Rain 28 December 2010 01:01:07AM *  3 points [-]

You can't time the market. The accepted strategy in a state of uncertainty is continuous, automatic investment. That's why I have a monthly donation set up, in addition to giving extra during matching periods.

Comment author: Benquo 28 December 2010 04:53:05PM 3 points [-]

The matching donor presumably wants the match to be used. So unless the match is often exhausted and I'd be displacing someone else's donation that would only be given if there were a match, it's in no one's interest (who supports the cause) to try to outsmart or prevent a virtuous cycle of donations. And there are generally just 2 states, a 1 for 1 match and a 0 for 1 match, so in the worst case, you can always save up your annual donations, and give them on December 31st if no match is forthcoming.

That said, if I weren't using credit to give, I'd use your system.

Comment author: wedrifid 28 December 2010 08:39:08AM 0 points [-]

You can't time the market. The accepted strategy in a state of uncertainty is continuous, automatic investment. That's why I have a monthly donation set up, in addition to giving extra during matching periods.

You are referring to a general principle that has slightly negative relevance in this instance.

Comment author: Nick_Roy 27 December 2010 03:24:57AM 32 points [-]

$100 from a poor college student. I can't not afford it.

Comment author: [deleted] 27 December 2010 02:05:34AM -1 points [-]

Try to be objective and consider whether a donation to the Singularity Institute is the most efficient charitable "investment". Here's a simple argument that it's most unlikely. What's the probability that posters would stumble on the very most efficient investment? Finding it requires research. Rationalists don't accede this way to the representativeness heuristic, which leads the donor to choose the recipient readily accessible to consciousness.

Relying on heuristics where their deployment is irrational, however, isn't the main reason the Singularity Institute is an attractive recipient for posters to Less Wrong. The first clue is the celebration of persons who have made donations and the eagerness of the celebrated to disclose their contribution.

Donations are almost entirely signaling. The donations disclosed in comments here signal your values, or more precisely, what you want others to believe are your values. The Singularity Institute is hailed here; donations signal devotion to a common cause. Yes, even donating based on efficiency criteria is signaling, much as other donations are. It signals that the donor is devoted to rationality.

The inconsistent over-valuation of the Singularity Institute might be part of the explanation for why rationality sometimes seems not to pay off: the "rational" analyze everyone's behavior but their own. When dealing with their own foibles, rationalists abdicate rationality when evaluating their own altruism.

Comment author: AlexU 27 December 2010 11:31:28PM *  3 points [-]

Why has this comment been downvoted so much? It's well-written and makes some good points. I find it really disheartening every time I come on here to find that a community of "rationalists" is so quick to muffle anyone who disagrees with LW collective opinion.

Comment author: DSimon 28 December 2010 07:32:59PM *  8 points [-]

I can't speak for anyone else, but I downvoted it because of the deadly combination of:

  • A. Unfriendly snarkiness, i.e. scare-quoting "rationalists" and making very general statements about the flaws of LW without any suggestions for improvements, and without a tone of constructive criticism.

  • B. Incorrect content, i.e. not referencing this article which is almost certainly the primary reason there are so many comments saying "I donated", and the misuse of probability in the first paragraph.

If it were just A, then I could appreciate the comment for making a good point and do my best to ignore the antagonism. If it were just B, then the comment is cool because it creates an opportunity to correct a mistake in a way that benefits both the original commenter and others, and adds to the friendly atmosphere of the site.

The combination, though, results in comments that don't add anything at all, which is why I downvoted srdiamond's comment.

Comment author: wedrifid 28 December 2010 05:04:58PM 9 points [-]

Downvoted parent and grandparent. The grandparent because:

  • It doesn't deserve the above defence.
  • States obvious and trivial things as though they are deep insightful criticisms while applying them superficially
  • Sneaks through extra elements of an agenda via presumption.

I had left it alone until I saw it given unwarranted praise and a meta karma challenge.

I find it really disheartening every time I come on here to find that a community of "rationalists" is so quick to muffle anyone who disagrees with LW collective opinion.

See the replies to all similar complaints.

Comment author: XiXiDu 28 December 2010 06:06:56PM *  7 points [-]

Initially I wanted to downvote you but decided to upvote you for providing reasons for why you downvoted the above comments.

The reason I believe the comments shouldn't have been downvoted is that in this case something other than signaling disapproval of poor style and argumentation is more important. This post and thread are especially off-putting to skeptical outsiders. Downvoting critical comments will just reinforce this perception. Therefore, if you are fond of LW and the SIAI, you should account for public relations and kindly answer any critical or generally skeptical comments rather than simply downvoting them.

Comment author: ata 28 December 2010 07:57:32PM 6 points [-]

Downvoting critical comments will just reinforce this perception. Therefore, if you are fond of LW and the SIAI, you should account for public relations and kindly answer any critical or generally skeptical comments rather than simply downvoting them.

What is there to say in response to a comment like the one that started this thread? It was purely an outside-view argument that doesn't make any specific claims against the efficacy of SIAI or against any of the reasons that people believe it is an important cause. It wasn't an argument, it was a dismissal.

Comment author: XiXiDu 29 December 2010 09:07:05AM 3 points [-]

It wasn't an argument, it was a dismissal.

I've noticed a tendency on LW to portray comments as attacks. They may seem that way to trained rationalists and otherwise highly educated folks. But not every negative comment is actually intended to be just a rhetorical device or simple dismissal. It won't help if you just downvote people or call them logically rude. Some people are honestly interested but fail to express themselves adequately. Usually newcomers won't know about the abnormally high standards on LW. You have to tell them about it. You also have to take into account those who are linked to this post, or come across it by other means, who don't know anything about LW. How does this thread appear to them, what are they likely to conclude, especially if no critical comment is answered kindly but all are simply downvoted or snidely rejected?

Comment author: Vaniver 28 December 2010 08:14:28PM *  4 points [-]

Your post right here seems like a good example. You could say something along the lines of "This is a dismissal, not an argument; merely naming a bias isn't enough to convince me. If you provide some specific examples, I'd be happy to listen and respond as best as I can." You can even tack on an "But until then, I'm downvoting this because it seems like it's coming from hostility rather than a desire to find the truth together."

Heck, you could even copy that and have it saved somewhere as a form response to comments like that.

Comment author: DSimon 28 December 2010 07:40:03PM 3 points [-]

Agreed that responding to criticism is important, but I think it's especially beneficial to respond only to non-nasty criticism. Responding nicely to people who are behaving like jerks can create an atmosphere where jerkiness is encouraged.

Comment author: Vaniver 28 December 2010 07:44:27PM 2 points [-]

This is the internet, though; skins are assumed to be tough. There is some benefit to saying "It looks like you wanted to say 'X'. Please try to be less nasty next time. Here's why I don't agree with X" instead of just "wow, you're nasty."

Comment author: wedrifid 28 December 2010 07:48:27PM 0 points [-]

There is some benefit to saying "It looks like you wanted to say 'X'. Please try to be less nasty next time. Here's why I don't agree with X"

I have noted that giving that sort of response seems to lead to negative consequences more often than not.

Comment author: Vaniver 28 December 2010 07:56:28PM 4 points [-]

Our experiences disagree, then; I can think of many plausible explanations that leave both of us justified, so I will leave it at this.

Comment author: shokwave 28 December 2010 04:46:45PM *  17 points [-]

It's been downvoted - I guess - because it sits on the wrong side of a very interesting dynamic: what I call the "outside view dismissal" or "outside view attack". It goes like this:

A: From the outside, far too many groups discover that their supported cause is the best donation avenue. Therefore, be skeptical of any group advocating their preferred cause as the best donation avenue.

B: Ah, but this group tries to the best of their objective abilities to determine the best donation avenue, and their cause has independently come out as the best donation avenue. You might say we prefer it because it's the best, not the other way around.

A: From the outside, far too many groups claim to prefer it because it's the best and not the other way around. Therefore, be skeptical of any group claiming they prefer a cause because it is the best.

B: Ah, but this group has spent a huge amount of time and effort training themselves to be good at determining what is best, and an equal amount of time training themselves to notice common failure modes like reversing causal flows because it looks better.

A: From the outside, far too many groups claim such training for it to be true. Therefore, be skeptical of any group making that claim.

B: Ah, but this group is well aware of that possibility; we specifically started from the outside view and used evidence to update properly to the level of these claims.

A: From the outside, far too many groups claim to have started skeptical and been convinced by evidence for it to be true. Therefore, be skeptical of any group making that claim.

B: No, we really, truly, did start out skeptical, and we really, truly, did get convinced by the evidence.

A: From the outside, far too many people claim they really did weigh the evidence for it to be true. Therefore, be skeptical of any person claiming to have really weighed the evidence.

B: Fine, you know what? Here's the evidence, look at it yourself. You already know you're starting from the position of maximum skepticism.

A: From the outside, there are far too many 'convince even a skeptic' collections of evidence for them all to be true. Therefore, I am suspicious that this collection might be indoctrination, not evidence.

And so on.

The problem is that the outside view is used not just to set a good prior, but also to discount any and all evidence presented to support a higher inside view. This is not so much an epistemically unreachable position as an epistemically stuck one - a flawed position (you can't get anywhere from there) - but try explaining that idea to A. Dollars to donuts you'll get:

A: From the outside, far too many people accuse me of having a flawed or epistemically stuck position. Therefore, be skeptical of anyone making such an accusation.

And I am sure many people on LessWrong have had this discussion (probably in the form of 'oh yeah? lots of people think they're right and they're wrong' -> 'lots of people claim to work harder at being right too and they're wrong' -> 'lots of people resort to statistics and objective measurements that have probably been fudged to support their position' -> 'lots of people claim they haven't fudged when they have' and so on), and I am sure that the downvoted comment pattern-matches the beginning of such a discussion.

Comment author: XiXiDu 28 December 2010 05:38:15PM 2 points [-]

Fine, you know what? Here's the evidence, look at it yourself. You already know you're starting from the position of maximum skepticism.

Where is the evidence?

Comment author: shokwave 29 December 2010 05:09:07AM *  2 points [-]

All of the evidence that an AI is possible¹, then the best method of setting your prior for the behavior of an AI².

¹. Our brains are proof of concept. That it is possible for a lump of flesh to be intelligent means AI is possible - even under pessimistic circumstances, even if it means simulating a brain with atomic precision and enough power to run the simulation faster than 1 second per second. Your pessimism would have to reach "the human brain is irreducible" in order to disagree with this proof, by which point you'd have neurobiologists pointing out you're wrong.

². Which would be a uniform distribution over all possible points in relevant-thing-space, in this case mindspace.

Comment author: JoshuaZ 29 December 2010 05:44:06AM 2 points [-]

This doesn't address the most controversial aspect, which is that AI would go foom. If extreme fooming doesn't occur this isn't nearly as big an issue. That is an issue where many people have discussed it and not all have come away convinced. Robin Hanson had a long debate with Eliezer over this and Robin was not convinced. Personally, I consider fooming to be unlikely but plausible. But how likely one thinks it is matters a lot.

Comment author: Rain 29 December 2010 06:04:42PM 4 points [-]

This doesn't address the most controversial aspect, which is that [nuclear weapons] would [ignite the atmosphere]. If extreme [atmospheric ignition] doesn't occur this isn't nearly as big an issue.

Even without foom, AI is a major existential risk, in my opinion.

Comment author: shokwave 29 December 2010 07:47:17AM 0 points [-]

Foom is included in that proof of concept. Human intelligence has made faster and faster computation; a human intelligence sped up could reasonably expect to increase the speed and amount of computation available to it, resulting in faster speeds, and so on.

Comment author: JoshuaZ 29 December 2010 01:53:27PM 4 points [-]

You are repeating what amounts to a single cached thought. The claim in question is that there's enough evidence to convince a skeptic. Giving a short line of logic for that isn't at all the same. Moreover, the claim that such evidence exists is empirically very hard to justify given the Yudkowsky-Hanson debate. Hanson is very smart. Eliezer did his best to present a case for AI going foom. He didn't convince Hanson.

Comment author: shokwave 29 December 2010 02:59:16PM 1 point [-]

You are repeating what amounts to a single cached thought.

I'm not allowed to cache thoughts that are right?

You seem to be taking "Hanson disagreed with Eliezer" as proof that all evidence Eliezer presented doesn't amount to FOOM.

I'd note here that I started out learning from this site very skeptical, treating "I now believe in the Singularity" as a failure mode of my rationality, but something tells me you'd be suspicious of that too.

Comment author: JoshuaZ 29 December 2010 07:10:59PM *  3 points [-]

I'm not allowed to cache thoughts that are right?

You are. But when people ask for evidence it is generally more helpful to actually point to the evidence rather than simply repeating a secondary cached thought that is part of the interpretation of the evidence.

You seem to be taking "Hanson disagreed with Eliezer" as proof that all evidence Eliezer presented doesn't amount to FOOM.

No. I must have been unclear. I'm pointing to the fact that there are people who are clearly quite smart and haven't become convinced by the claim after looking at it in detail. Which means that when someone like XiXiDu asks where the evidence is a one paragraph summary with zero links is probably not going to be sufficient.

I'd note here that I started out learning from this site very skeptical, treating "I now believe in the Singularity" as a failure mode of my rationality, but something tells me you'd be suspicious of that too.

I'm not suspicious of it. My own estimate for fooming has gone up since I've spent time here (mainly due to certain arguments made by cousin_it), but I don't see why you'd expect me to be suspicious of it. Your personal opinion or my personal opinion just isn't that relevant when someone has asked "where's the evidence?" Maybe our personal opinions with all the logic and evidence drawn out in detail might matter. But that's a very different sort of thing.

Comment author: TheOtherDave 29 December 2010 05:21:12AM 3 points [-]

Just to clarify: are you asserting that this comment, and the associated post about the size of mindspace, represent the "convince even a skeptic" collection of evidence you were alluding to in its grandparent (which XiXiDu quotes)?

Or was there a conversational disconnect somewhere along the line?

Comment author: shokwave 29 December 2010 05:33:15AM *  1 point [-]

I didn't provide all of the evidence that an AI is possible, just one strong piece. All the evidence, plus a good prior for how likely the AI is to turn us into more useful matter, should be enough to convince even a skeptic. However, the brain-as-proof-of-concept idea is really strong: try and formulate an argument against that position.

Unless they're a skeptic like A above, or they're an "UFAI-denier" (in the style of climate change deniers) posing as a skeptic, or they privilege what they want to believe over what they ought to believe. There are probably half a dozen more failure modes I haven't spotted.

Comment author: TheOtherDave 29 December 2010 06:32:27AM 5 points [-]

Sounds like a conversational disconnect to me, then: at least, going back through the sequence of comments, it seems the sequence began with an expression of skepticism of the claim that "a donation to the Singularity Institute is the most efficient charitable investment," and ended with a presentation of an argument that UFAI is both possible and more likely than FAI.

Thanks for clarifying.

Just to pre-emptively avoid being misunderstood myself, since I have stepped into what may well be a minefield of overinterpretation, let me state some of my own related beliefs:

I consider human-level, human-produced AGI possible (confidence level ~1) within the next century (C ~.85-.99, depending on just what "human-level" means and assuming we continue to work on the problem), likely not within the next 30 years (C<.15-.5, depending as above).

I consider self-improving AGI and associated FOOM, given human-level AGI, a great big question mark: I'd say >99% of HLAGIs we develop will be architected in such a way that significant self-improvement is unlikely (much as our own architectures make it unlikely for us), but the important question is whether the actual number of exceptions is 0 or 1, and I have no confidence in my intuitions about that (see my comments elsewhere about expected results based on small probabilities of large magnitudes).

I consider UFAI given self-improving AGI practically a certainty: >99% of SIAGIs will be UFAIs, and again the important question is whether the number of exceptions is 0 or 1, and whether the exception comes first. (The same thing is true about non-SI AGIs, but I care about that less.)

Whether SIAI can influence that last question at all, and if so by how much and in what direction, I haven't a clue about; if I wanted to develop an opinion about that I'd have to look into what SIAI actually does day-to-day.

If any of that is symptomatic of fallacy, I'd appreciate having it pointed out, though of course nobody is under any obligation to do so.

Comment author: shokwave 29 December 2010 07:55:49AM 2 points [-]

There's an argument chain I didn't make clear; "If UFAI is both more possible and more likely than FAI, then influencing this in favour of FAI is a critical goal" and "SIAI is the most effective charity working towards this goal".

The only part I would inquire about is

I'd say >99% of HLAGIs we develop will be architected in such a way that significant self-improvement is unlikely (much as our own architectures make it unlikely for us),

Humans don't have the ability to self-modify (at least, our neuroscience is too underdeveloped to count for that yet) but AGIs will probably be made from explicit programming code, and will probably have some level of command over programming code (it seems like one of the ways in which it would be expected to interact with the world, creating code that achieves its goals). So its architecture is more conducive to self-modification (and hence self-improvement) than ours is.

Of course, a more developed point is that humans are very likely to build a fixed AGI if they can. If you're making that point, and not that AGIs simply won't self-improve, then I see no issues.

Comment author: TheOtherDave 29 December 2010 02:05:15PM 3 points [-]

Re: argument chain... I agree that those claims are salient.

Observations that differentially support those claims are also salient, of course, which is what I understood XiXiDu to be asking for, which is why I asked you initially to clarify what you thought you were providing.

Re: self-improvement... I agree that AGIs will be better-suited to modify code than humans are to modify neurons, both in terms of physical access and in terms of a functional understanding of what that code does.

I also think that if humans did have the equivalent ability to mess with their own neurons, >99% of us would either wirehead or accidentally self-lobotomize rather than successfully self-optimize.

I don't think the reason for that is primarily in how difficult human brains are to optimize, because humans are also pretty dreadful at optimizing systems other than human brains. I think the problem is primarily in how bad human brains are at optimizing. (While still being way better at it than their competition.)

That is, the reasons have to do with our patterns of cognition and behavior, which are as much a part of our architecture as is the fact that our fingers can't rewire our neural circuits.

Of course, maybe human-level AGIs would be way way better at this than humans would. But if so, it wouldn't be just because they can write their own cognitive substrate, it would also be because their patterns of cognition and behavior were better suited for self-optimization.

I'm curious as to your estimate of what % of HLAGIs will successfully self-improve?

Comment author: Kaj_Sotala 28 December 2010 03:51:22PM 0 points [-]

I agree that it's been downvoted too much. (At -6 as of this comment, up from -7 due to my own upvote.)