Tallinn-Evans $125,000 Singularity Challenge

27 points · Post author: Kaj_Sotala · 26 December 2010 11:21AM

Michael Anissimov posted the following on the SIAI blog:

Thanks to the generosity of two major donors (Jaan Tallinn, a founder of Skype and Ambient Sound Investments, and Edwin Evans, CEO of the mobile applications startup Quinly), every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Interested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. The Singularity Institute exists to do so through its research, the Singularity Summit, and public education.

We support both direct engagement with the issues and the improvements in methodology and rationality needed to make better progress. Through our Visiting Fellows program, researchers from undergraduates to PhDs pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three-month stints. Our Resident Faculty, up from three researchers last year to four, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. Singularity Institute researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.

We are pleased to receive donation matching support this year from Edwin Evans of the United States, a long-time Singularity Institute donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a talk on the Singularity and his life at an entrepreneurial group in Finland. Here’s what Jaan has to say about us:

“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of SIAI’s work cannot be overestimated. Not finding any organisation to take up this challenge as seriously as SIAI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”
– Jaan Tallinn, Singularity Institute donor

Make a lasting impact on the long-term future of humanity today — make a donation to the Singularity Institute and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at institute@intelligence.org or read our new organizational overview.

-----

Kaj's commentary: if you haven't done so recently, do check out the SIAI publications page. There are several new papers and presentations; I thought Carl Shulman's Whole Brain Emulations and the Evolution of Superorganisms made for particularly fascinating (and scary) reading. SIAI's finally starting to get its paper-writing machinery into gear, so let's give them money to make that possible. There's also a static page about this challenge; if you're on Facebook, please take the time to "like" it there.

(Full disclosure: I was an SIAI Visiting Fellow in April-July 2010.)

Comments (369)

Comment author: JGWeissman 02 January 2011 02:08:18AM 44 points [-]

I just wrote a check for $13,200.

Comment author: Costanza 02 January 2011 05:11:41AM 7 points [-]

As I write, this comment has earned only 5 karma points (one of them mine). According to Larks' exchange rate of $32 to one karma point, this donation has more than four hundred upvotes to go.

Wait ... I assume you're planning to actually mail the check too?

Comment author: JGWeissman 02 January 2011 05:27:26AM 15 points [-]

Wait ... I assume you're planning to actually mail the check too?

Yes, I mailed the check, too, just after writing the comment. (And I wrote and mailed it to SIAI. No tricks, it really is a donation.)

I would be surprised if karma scaled linearly with dollars over that range.

Comment author: Kaj_Sotala 26 December 2010 11:25:32AM *  41 points [-]

And to encourage others to donate, let it be known that I just made a 500 euro (about 655 USD) donation.

Comment author: [deleted] 26 December 2010 09:09:34PM 37 points [-]

I sent 500 USD.

Comment author: WrongBot 26 December 2010 09:08:56PM 37 points [-]

$500. I can wait a little longer to get a new laptop.

Comment author: Kutta 26 December 2010 07:11:26PM *  37 points [-]

I sent 640 dollars.

Comment author: ciphergoth 28 December 2010 02:29:47PM 9 points [-]

640 dollars ought to be enough for anyone :-)

Comment author: blogospheroid 27 December 2010 05:49:56PM 33 points [-]

I put in $500; it really pinches in Indian rupees (Rs. 23,000+). Hoping for the best next year, with a successful book release and promising research to be done.

Comment author: ata 26 December 2010 04:55:35PM 33 points [-]

I donated $100 yesterday. I hope to donate more by the end of the matching period, but for now that's around my limit (I don't have much money).

Comment author: Nick_Roy 27 December 2010 03:24:57AM 32 points [-]

$100 from a poor college student. I can't not afford it.

Comment author: Leonhart 29 December 2010 04:53:59PM 27 points [-]

£300.

Comment author: orthonormal 31 December 2010 08:56:04PM *  25 points [-]

I just donated $1,370. The reason why it's not a round number is interesting, and I'll write a Discussion post about it in a minute. EDIT: Here it is.

Also, I find it interesting that (before my donation) the status bar for the challenge was at $8,500, and the donations mentioned here totaled (by my estimation) about $6,700 of that...

Comment author: ciphergoth 28 December 2010 02:34:07PM 24 points [-]

I seem to remember reading a comment saying that if I make a small donation now, it makes it more likely I'll make a larger donation later, so I just donated £10.

Comment author: Vaniver 28 December 2010 02:48:46PM 10 points [-]

Ben Franklin effect, as well as consistency bias. Good on you for turning a bug into a feature.

Comment author: timtyler 31 December 2010 03:55:24PM 2 points [-]

Does that still work, once you know about the sunk cost fallacy?

Comment author: Perplexed 31 December 2010 05:39:55PM 1 point [-]

Perhaps it works due to warm-and-fuzzy slippery slopes, rather than sunk costs.

Comment author: ciphergoth 31 December 2010 05:00:56PM 1 point [-]

Don't know - I guess we'll find out!

Comment author: AngryParsley 05 January 2011 01:33:09AM *  21 points [-]
Comment author: Benquo 27 December 2010 04:13:33AM 21 points [-]

Darn it; I just made my annual donation a few days ago, but hopefully my employer's matching donation will come in during the challenge period. I will make sure to make my 2011 donation during the matching period (i.e. well before January 20th), in an amount no less than $1000.

Comment author: Benquo 04 January 2011 06:46:13PM 3 points [-]

Followed up today with my 2011 donation.

Comment author: wedrifid 27 December 2010 04:53:56AM *  3 points [-]

I will make sure to make my 2011 donation during the matching period

Whoops. The market just learned.

Comment author: Rain 28 December 2010 01:01:07AM *  3 points [-]

You can't time the market. The accepted strategy in a state of uncertainty is continuous, automatic investment. That's why I have a monthly donation set up, in addition to giving extra during matching periods.

Comment author: Benquo 28 December 2010 04:53:05PM 3 points [-]

The matching donor presumably wants the match to be used. So unless the match is often exhausted and I'd be displacing someone else's donation that would only be given if there were a match, it's in no one's interest (who supports the cause) to try to outsmart or prevent a virtuous cycle of donations. And there are generally just 2 states, a 1 for 1 match and a 0 for 1 match, so in the worst case, you can always save up your annual donations, and give them on December 31st if no match is forthcoming.

That said, if I weren't using credit to give, I'd use your system.

Comment author: tammycamp 26 December 2010 10:40:47PM 21 points [-]

Donation made. Here's to optimal philanthropy!

Happy Holidays,

Tammy

Comment author: Furcas 17 January 2011 12:32:28AM *  20 points [-]

Donated $500 CAD just now.

By the way, SIAI is still more than 31,000 US dollars away from its target.

Comment author: Nick_Tarleton 06 January 2011 08:18:24AM 20 points [-]

I just donated $512.

Comment author: wmorgan 14 January 2011 06:00:39PM 19 points [-]

$1,000

Comment author: Kyre 03 January 2011 03:09:15AM 19 points [-]

$1000 - looking forward to a good year for SIAI in 2011.

Comment author: AlexMennen 02 January 2011 05:45:33AM 19 points [-]

Donated $120

Comment author: Yvain 27 December 2010 10:56:57PM 18 points [-]

New Year's resolution is not to donate to things until I check if there's a matching donation drive starting the next week :( Anyway, donated a little extra because of all the great social pressure from everyone's amazing donations here. Will donate more when I have an income.

Comment author: Benquo 27 December 2010 11:12:01PM *  5 points [-]

At first I felt a little better that someone else made the same mistake, but on reflection I should feel worse.

Comment author: Dorikka 19 January 2011 04:44:22AM 2 points [-]

I would avoid the phrase "I should feel worse" in most scenarios due to pain and gain motivation.

Comment author: Kevin 30 December 2010 10:55:05AM 2 points [-]

I don't think it actually matters, unless the matching drive isn't fulfilled. Even then, I would be really surprised if Jaan and Edwin take their money back. So in some sense it is better to have donated before the drive, as it allows someone else to have their donation matched who might not have donated without the promise of matching.

Comment author: NancyLebovitz 02 January 2011 04:35:11PM 1 point [-]

I wonder if there's empirical research on how much in advance to announce matching donation drives so as to maximize revenue.

Any observations of how established charities handle this?

Comment author: Psy-Kosh 05 January 2011 10:57:05PM 16 points [-]

Just donated $200.

Comment author: hairyfigment 19 January 2011 02:24:51AM 15 points [-]

Just donated $500.

(At one time I had an excuse for waiting. But plainly I won't get confirmation on a price for cryonics-themed life insurance by the deadline, and should likely have donated sooner).

Comment author: taryneast 01 January 2011 11:42:31PM 15 points [-]

$50 - it's definitely a different cause to the usual :)

Comment author: anon895 20 January 2011 05:32:50PM *  13 points [-]

In a possibly bad decision, I put a $1000 check in the mailbox with the intent of going out and transferring the money to my checking account later today. That puts them at $123,700 using Silas' count.

Comment author: anon895 20 January 2011 10:49:45PM 3 points [-]

...yep, didn't make it. I'll have to get to the bank early tomorrow and hope the mail is slow.

Comment author: anon895 21 January 2011 10:06:10PM 2 points [-]

Ended up making the transfer over the phone.

Comment author: SilasBarta 19 January 2011 10:27:26PM 13 points [-]

I donated 1000 USD. (This puts them at ~$122,700 ... so close!)

Comment author: Normal_Anomaly 05 January 2011 01:25:09AM 13 points [-]

Donated $50.

Comment author: Rain 26 December 2010 03:18:20PM 58 points [-]

I just put in 2700 USD, the current balance of my bank account, and I'll find some way to put in more by the end of the challenge.

Comment author: Rain 17 January 2011 03:00:38AM 12 points [-]

I just put in another 850 USD.

Comment author: anonym 27 December 2010 06:51:57AM 17 points [-]

Not that I don't think your donation is admirable, but I'm curious: how are you able to donate your entire bank account without running the risk of being unable to respond to a black-swan event appropriately, compromising your future well-being and ability to donate to SIAI?

Do you think it's rational in general for people to donate all their savings to the SIAI?

Comment author: Rain 27 December 2010 01:57:04PM *  35 points [-]

I have a high limit credit card which I pay off every month, no other form of debt, no expenses until my next paycheck, a very secure, well-paying job with good health insurance, significant savings in the form of stocks and bonds, and several family members and friends who would be willing to help me in the event of some catastrophe.

I prepare and structure my life such that I can take action without fear. I attribute most of this to reading the book Your Money Or Your Life while I was in college. My only regret is that I can afford to give more, but fail to have the cash on hand due to lifestyle expenditures and saving for my own personal future.

Comment author: anonym 27 December 2010 11:43:20PM 11 points [-]

Thanks for the reply. Bravo on structuring your life the way you have!

Comment author: wedrifid 27 December 2010 07:21:21AM 4 points [-]

Not that I don't think your donation is admirable, but I'm curious: how are you able to donate your entire bank account without running the risk of being unable to respond to a black-swan event appropriately, compromising your future well-being and ability to donate to SIAI?

Have a reliable source of income and an overdraft available.

Comment author: anonym 27 December 2010 11:38:14PM 2 points [-]

I don't think those two alone are sufficient for it to be rational.

I work for a mid-sized (in the thousands of employees), very successful, privately held company with a long, stable history, and I feel very secure in my job. I would say I have a reliable source of income, but even so, I wouldn't estimate the probability of finding myself suddenly and unexpectedly out of work in the next year at less than 1%, and if somebody has school loans, a mortgage, etc., then in that situation, it seems more rational to have at least enough cash to stay afloat for a few months or so (or have stocks, etc., that could be sold if necessary) while finding a new job.

Comment author: wedrifid 28 December 2010 08:44:19AM 4 points [-]

I don't think those two alone are sufficient for it to be rational.

They are sufficient to make the "entire bank account" factor irrelevant and the important consideration the $2,700 as an absolute figure. "Zero" is no longer an absolute cutoff and instead a point at which costs potentially increase.

Comment author: anonym 28 December 2010 07:37:58PM *  1 point [-]

Okay, let's think this through with a particular case.

Assume only your two factors: John has a reliable source of income and overdraft protection on an account. Since you assert that those two factors are sufficient, we can suppose John doesn't have any line of credit, doesn't own anything valuable that could be converted to cash, doesn't know anybody who could give him a loan or a job, etc.

John donates all his savings, and loses his job the next day. He has overdraft protection on his empty bank account, which will save him from some fees when he starts bouncing checks, but the overdraft protection will expire pretty quickly once checks start bouncing.

Things will spiral out of control quickly unless John is able to get another source of income sufficient to cover his recurring expenses, or there is some compensating factor other than the two you mentioned (which shows they are not sufficient). Or do you think he's doing okay a month later when the overdraft protection is no longer in effect, he has tons of bills due, needs to pay his rent, still hasn't found a job, has run out of food, etc.? And if he hasn't found work within a few months more -- which is quite possible -- he'll be evicted from his home and his credit will be ruined from not having paid any of his bills for several months.

ETA: the point isn't that all of that will happen or is even likely to happen, but that a bank account represents some amount of time that the person can stay afloat while they're looking for work. It greatly increases the likelihood that they will find a new source of income before they hit the catastrophe point of being evicted and having their credit ruined.

Comment author: Alicorn 28 December 2010 08:13:44PM 2 points [-]

It looks to me like you're ignoring the "reliable" bit in "reliable source of income".

Comment author: wnoise 28 December 2010 09:02:59PM 1 point [-]

There's no such thing as "reliable" at that level.

Comment author: wedrifid 26 December 2010 03:36:34PM 5 points [-]

I just put in 2700 USD, the current balance of my bank account, and I'll find some way to put in more by the end of the challenge.

Wow. I'm impressed. This kind of gesture brings back memories of a parable that still prompts a surge of positive affect in me, that of the widow donating everything she had (Mark 12:40-44). It also flagrantly violates the related hyperbolic exhortation "do not let the left hand know what the right hand is doing". Since that is a message that I now dismiss as socially, psychologically and politically naive, your public declaration seems beneficial. There are half a dozen factors of influence that you just invoked, and some of them I can feel operating on myself even now.

Comment author: gwern 26 December 2010 04:33:02PM 3 points [-]

/munches popcorn

Comment author: tammycamp 26 December 2010 10:42:24PM 3 points [-]

Bravo! That's hardcore.

Way to pay it forward!

Tammy

Comment author: gjm 26 December 2010 07:28:17PM 12 points [-]

a book on rationality, the first draft of which was just completed

If Eliezer's reading this: Congratulations!

Comment author: curiousepic 19 January 2011 04:06:34AM *  10 points [-]

I have not donated a significant amount before, but will donate $500 IF someone else will (double) match it.

Why did the SIAI remove the Grant Proposals page? http://singinst.org/grants/challenge#grantproposals

EDIT: Donated $500, in response to wmorgan's $1000

Comment author: wmorgan 19 January 2011 06:17:18AM 18 points [-]

Your comment spurred me into donating an additional $1,000.

Comment author: curiousepic 19 January 2011 01:46:31PM *  8 points [-]

Excellent! Donated $500. Whether yours is a counter-bluff or not ;)

This is by far the most I've donated to a charity. I spent yesterday assessing my financial situation, something I've only done in passing because of my fairly comfortable position. It has always felt smart to me to ignore the existence of my excess cash, but I have a fair amount of it and the recent increase of discussion about charity has made me reassess where best to locate it. I will be donating to SENS in the near future, probably more than I have to SIAI. I'm aware of the argument for giving everything to a single charity, but it seems even Eli is conflicted about giving advice about SIAI vs. SENS, given this discussion.

I recently read that investing in the stock market (casually, not as a trader or anything) in the hopes that your wealth will grow such that you can donate even more at a later time is erroneous because the charity could be doing the same thing, with more of it. Is this true, and does anyone know if the SIAI, or SENS does this? It seems to me that both of these organizations have immediate use for pretty much all money they receive and do not invest at all. How much would my money have to make in an investment account to be able to contribute more (adjusting for inflation) in the future?

Comment author: endoself 20 January 2011 06:20:54PM *  5 points [-]

The logic of donating now is that if a charity would use your money now, it is because less money now is more useful than more money later. Not all charities may be smart enough to realize whether they should invest, but I feel confident that if investing money rather than spending it right away were the best approach for their goals, the people at the SIAI would be smart enough to do so.

Comment author: Dorikka 19 January 2011 05:01:18AM 1 point [-]

I think that a rational agent would donate the $500 eventually either way, because even if the matching $500 were not forthcoming, the utility value of a $500 contribution would be greater than that of a $0 contribution. Thus, the precommitment to withhold the donation if it is not matched seems to be a bluff (for even if the agent reported that he had not donated the money, he could do so privately without fear of exposure). Therefore, it seems to me that the matching arrangement is a device designed to convince irrational agents, because the matcher's contribution does not affect the amount of the original donor's contribution.

Am I missing something?

Comment author: endoself 19 January 2011 05:23:44AM 2 points [-]

He may actually refrain from donating, by the reasoning that such offers would work iff someone deems them reasonable and that person is more likely to deem it reasonable if he does, by TDT/UDT. I could see myself doing such a thing.

Comment author: Larks 29 December 2010 11:52:50PM 27 points [-]

On the one hand, I absolutely abhor SIAI. On the other hand, I'd love to turn my money into karma...

/joke

$100

Comment author: Larks 01 January 2011 01:35:15AM 6 points [-]

At the moment, my comment has 15 karma, while Leonhart's, which was posted before, and for more money, has 14. As £1 = $1.5,

$32 = 1 karma,

and thus my donation is only worth around 3 karma.

So it seems my joke must have been worth 12 karma, or $386. I never realised my comparative advantage was in humour...

Comment author: [deleted] 01 January 2011 02:15:37AM *  5 points [-]

I imagine karma and donation amounts, if they correlate at all, correlate on a log scale. We'd therefore expect your comment to get 14/log(300 x 1.5) x log(100) karma from the donation amount alone, which comes to about 10.5 karma. Therefore 4.5 of your karma came from your joke.

Unfortunately, we can't convert your joke karma into dollars in any consistent way. But if you hadn't donated any money, and made an equally good joke, you would have gotten about as much karma as someone donating $7, assuming our model holds up in that range.

Edit: also a factor is that I'm sure many people on LessWrong don't actually know the conversion factor between $ and £.

Comment author: patrissimo 01 January 2011 10:28:43PM 20 points [-]

Wow, SIAI has succeeded in monetizing Less Wrong by selling karma points. This is either a totally awesome blunder into success or sheer Slytherin genius.

Comment author: VNKKET 26 February 2011 04:32:09AM 6 points [-]

I donated $250 on the last day of the challenge.

Comment author: Dr_Manhattan 27 December 2010 10:57:35PM 6 points [-]
Comment author: Plasmon 27 December 2010 03:39:17PM 17 points [-]

I have donated a small amount of money.

The Singularity is now a little bit closer and safer because of your efforts. Thank you. We will send a receipt for your donations and our newsletter at the end of the year. From everyone at the Singularity Institute – our deepest thanks.

I do hope they mean they will send a receipt and newsletter by e-mail, and not by physical mail.

Comment author: David_Gerard 27 December 2010 04:38:50PM *  -2 points [-]

I have donated a small amount of money.

I understood that this was considered pointless hereabouts: that the way to effective charitable donation is to pick the most effective charity and donate your entire charity budget to it. Thus, the only appropriate donations to SIAI would be nothing or everything.

Or have I missed something in the chain of logic?

(This is, of course, from the viewpoint of the donor rather than that of the charity.)

Edit: Could the downvoter please explain? I am not at all personally convinced by that Slate story, but it really is quite popular hereabouts.

Comment author: topynate 27 December 2010 05:58:41PM 7 points [-]

The idea is that the optimal method of donation is to donate as much as possible to one charity. Splitting your donations between charities is less effective, but still benefits each. They actually have a whole page about how valuable small donations are, so I doubt they'd hold a grudge against you for making one.

Comment author: David_Gerard 27 December 2010 07:58:06PM -1 points [-]

Yes, I'm sure the charity has such a page. I am intimately familiar with how splitting donations into fine slivers is very much in the interests of all charities except the very largest; I was speaking of putative benefit to the donors.

Comment author: Eliezer_Yudkowsky 01 January 2011 10:57:14PM 9 points [-]

I am intimately familiar with how splitting donations into fine slivers is very much in the interests of all charities except the very largest;

Not the largest, the neediest.

As charities become larger, the marginal value of the next donation goes down; they become less needy. In an efficient market for philanthropy you could donate to random charities and it would work as well as buying random stocks. We do NOT have an efficient market in philanthropy.

Comment author: David_Gerard 02 January 2011 11:35:22AM *  1 point [-]

No, I definitely meant size, not need (or effectiveness or quality of goals or anything else). A larger charity can mount more effective campaigns than a smaller one. This is from the Iron Law of Institutions perspective, in which charities are blobs for sucking in money from a more or less undifferentiated pool of donations. An oversimplification, but not too much of one, I fear - there's a reason charity is a sector in employment terms.

Comment author: Eliezer_Yudkowsky 02 January 2011 07:43:22PM 5 points [-]

It is necessary at all times to distinguish whether we are talking about humans or rational agents, I think.

<humans> If you expect that larger organizations mount more effective marketing campaigns and do not attend to their own diminishing marginal utility and that most people don't attend to the diminishing marginal utility either, you should look for maximum philanthropic return among smaller organizations doing very important, almost entirely neglected things that they have trouble marketing, but not necessarily split your donation up among those smaller organizations, except insofar as, being a human, you can donate more total money if you split up your donations to get more glow. </humans>

<rational agents> Marketing campaign? What's a marketing campaign? </rational agents>

Comment author: shokwave 03 January 2011 05:06:04AM *  1 point [-]

Voted up because swapping those tags around is funny.

Comment author: wedrifid 03 January 2011 04:42:26AM 1 point [-]

<rational agents> Marketing campaign? What's a marketing campaign? </rational agents>

Rational agents are not necessarily omniscient agents. There are cases where providing information to the market is a practical course of action.

Comment author: topynate 27 December 2010 10:09:02PM *  8 points [-]

Actions which increase utility but do not maximise it aren't "pointless". If you have two charities to choose from, £100 to spend, and you get a constant 2 utilons/£ for charity A and 1 utilon/£ for charity B, you still get a utilon for each pound you donate to B, even if to get 200 utilons you should donate £100 to A. It's just the wrong word to apply to the action, even assuming that someone who says he's donated a small amount is also saying that he's donated a small proportion of his charitable budget (which it turns out wasn't true in this case).

Comment author: Plasmon 27 December 2010 05:08:06PM *  6 points [-]

My donations are as effective as possible; I have never before donated anything to any organisation (except indirectly, via tax money).

I am too cautious to risk "black-swan events". I am probably overly cautious.

It could well be argued that donating more would be more cautious, depending on the probability of both black-swan events and UFAI, and the effectiveness of SIAI, but I'm sure there are plenty of threads about that already.

Comment author: Kaj_Sotala 28 December 2010 03:45:58PM 15 points [-]

I feel rather uncomfortable at seeing someone mention that he donated, and getting a response which indirectly suggests that he's being irrational and should have donated more.

Comment author: shokwave 28 December 2010 05:20:29PM 3 points [-]

It is indirect, but I believe David is trying to highlight the possibility of problems with the Slate article. Once we have something to protect (a donor) we will be more motivated to explore its possible failings instead of taking it as gospel.

Comment author: David_Gerard 28 December 2010 04:51:50PM 0 points [-]

I don't think that, as I have noted. I'm not at all keen on the essay in question. But it is popular hereabouts.

Comment author: [deleted] 27 December 2010 06:31:09PM 2 points [-]

Unless, of course, you believe that the decisions of other people donating to charity are correlated with your own. In this case, a decision to donate 100% of your money to SIAI would mean that all those people implementing a decision process sufficiently similar to your own would donate 100% of their money to SIAI. A decision to donate 50% of your money to SIAI and 50% to Charity Option B would imply a similar split for all those people as well.

If there are enough people like this, then the total amount of money involved may be large enough that the linear approximation does not hold. In that case, it seems natural to me to assume that, if both charity options are worthwhile, significantly increasing the successfulness of both charities is more important than increasing SIAI's successfulness even more significantly. Thus, you would donate 50%/50%.

Overall, the argument you link to seems to me to parallel (though inexactly) the argument that voting is pointless considering how unlikely your vote is to swing the outcome.

Comment author: Caspian 02 January 2011 02:34:43PM 2 points [-]

Also, your errors in choosing a charity won't necessarily be random. For example, if you trust your reasoning to pick the best three charities, but suspect that if you had to pick just one you'd end up influenced by deceptive marketing, bad arguments, or biases you'd rather not act on, and that the same applies to other people, then you may be better off not choosing between them, and better off if other people don't try to choose between them.

Comment author: David_Gerard 27 December 2010 09:18:49PM 1 point [-]

I'm not keen on it myself, but I've seen it linked here (and pushed elsewhere by LessWrong regulars) quite a lot.

Comment author: paulfchristiano 27 December 2010 09:38:27PM 1 point [-]

This only applies if people donate simultaneously, which I doubt is the case in practice.

Comment author: MichaelVassar 29 December 2010 05:12:06PM 1 point [-]

The Slate article is correct, but it's desirable to be polite as well as accurate if you actually want to communicate something. Also, if someone wants to donate to feel good, that feeling good is an actively good thing that they are purchasing, and it's undesirable to try to damage it.

Comment author: SilasBarta 19 January 2011 08:53:47PM 2 points [-]

What's the status on this? The picture on the page suggests the $125,000 matching maximum was met, but nothing says for sure.

What time on Thursday is the deadline?

Comment author: curiousepic 19 January 2011 09:31:51PM 1 point [-]

Mousing over the image gives the total $121,616.

Comment author: SilasBarta 19 January 2011 10:07:21PM 2 points [-]

Sweet, I can still be the one to push it over! [1]

[1] so long as you disregard the fungibility of money and therefore my contribution's indistinguishability from that of all the others.

Comment author: SilasBarta 19 January 2011 10:13:11PM *  1 point [-]

Wait, if I do an echeck through Paypal today, would it count toward the challenge? Paypal says it takes a few days to process :-/

EDIT: n/m, I guess I can just do it via credit card, though SIAI gets less that way.

Comment author: AnnaSalamon 19 January 2011 10:44:51PM 2 points [-]

Donations count toward the challenge if they're dated before the end, even if they aren't received until a few days later.

Comment author: XiXiDu 02 January 2011 03:34:56PM 9 points [-]

I just sent 15 USD each to the SIAI, VillageReach and The Khan Academy.

I am aware of and understand this, but felt more comfortable diversifying right now. I also know it is not much; I'll have to somehow force myself to buy fewer shiny gadgets and donate more. Generally, I have to be less inclined to hoard money and more inclined to give.

Comment author: sfb 19 January 2011 11:06:48PM 3 points [-]

every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Anyone willing to comment on that as a rationalist incentive? Presumably I'm supposed to think "I want more utility to SIAI so I should donate at a time when my donation is matched so SIAI gets twice the cash" and not "they have money which they can spare and are willing to donate to SIAI but will not donate it if their demands are not met within their timeframe, that sounds a lot like coercion/blackmail"?

Would it work the other way around? If we individuals grouped together and said "We collectively have $125,000 to donate to SIAI but will only do so if SIAI convinces a company / rich investor to match it dollar for dollar before %somedate%"?

Comment author: AnnaSalamon 19 January 2011 11:39:50PM *  12 points [-]

It's a symmetrical situation. Suppose that A prefers having $1 in his personal luxury budget to having $1 in SIAI, but prefers having $2 in SIAI to having a mere $1 in his personal luxury budget. Suppose that B has the same preferences (regarding his own personal luxury budget, vs SIAI).

Then A and B would each prefer not-donating to donating, but they would each prefer donating-if-their-donation-gets-a-match to not-donating. And so a matching campaign lets them both achieve their preferences.

This is a pretty common situation -- for example, lots of people are unwilling to give large amounts now to save lives in the third world, but would totally be willing to give $1k if this would cause all other first worlders to do so, and would thereby prevent all the cheaply preventable deaths. Matching grants are a smaller version of the same.

Comment author: steven0461 20 January 2011 12:16:07AM 3 points [-]

It seems like it would be valuable to set up ways for people to make these deals more systematically than through matching grants.

Comment author: wedrifid 19 January 2011 11:48:12PM 2 points [-]

This is a pretty common situation

Indeed. It seems to be essentially 'solving a cooperation problem'.

Comment author: timtyler 20 January 2011 12:06:41AM 1 point [-]

The sponsor gets publicity for their charitable donation - while the charity stimulates donations - by making donors feel as though they are getting better value for money.

If the sponsor proposes the deal, they can sometimes make the charity work harder at their fund-raising effort for the duration - which probably helps their cause.

If the charity proposes the deal, the sponsor can always pay the rest of their gift later.

Comment author: curiousepic 21 January 2011 04:35:45PM 2 points [-]

According to the page, they (we) made it to the full $125,000/250,000! Does anyone know what percentage this is of all money the SIAI has raised?

Comment author: Rain 28 January 2011 01:26:59PM 1 point [-]

Their annual budget is typically in the range of $500,000, so this would be around half.

Comment author: XiXiDu 28 January 2011 01:39:51PM 4 points [-]

I wonder at what point donations to the SIAI would hit diminishing returns and contributing to another underfunded cause would be more valuable? Suppose for example Bill Gates was going to donate 1.5 billion US$, would my $100 donation still be best placed with the SIAI?

Comment author: Rain 29 January 2011 10:10:20PM 2 points [-]

Marginal contributions are certainly important to consider, and it's one of the reasons I mentioned in my original post about why I support them.

Even asteroid discovery, long considered underfunded, is receiving hundreds of millions.

Comment author: shokwave 26 December 2010 01:19:07PM 4 points [-]

up until January 20, 2010

2011?

Comment author: Kaj_Sotala 26 December 2010 01:41:09PM *  5 points [-]

Good catch. I e-mailed the SIAI folks about that typo, which seems to be both on the holiday challenge page and the blog posts. It'll probably get fixed in a jiffy.

EDIT: It's now been fixed on the challenge page and the blog post.

Comment author: [deleted] 27 December 2010 02:05:34AM -1 points [-]

Try to be objective and consider whether a donation to the Singularity Institute is the most efficient charitable "investment". Here's a simple argument that it's most unlikely. What's the probability that posters would stumble on the very most efficient investment? Finding it requires research. Rationalists don't accede this way to the representativeness heuristic, which leads the donor to choose the recipient readily accessible to consciousness.

Relying on heuristics where their deployment is irrational, however, isn't the main reason the Singularity Institute is an attractive recipient for posters to Less Wrong. The first clue is the celebration of persons who have made donations and the eagerness of the celebrated to disclose their contribution.

Donations are almost entirely signaling. The donations disclosed in comments here signal your values, or more precisely, what you want others to believe are your values. The Singularity Institute is hailed here; donations signal devotion to a common cause. Yes, even donating based on efficiency criteria is signaling, much as other donations are. It signals that the donor is devoted to rationality.

The inconsistent over-valuation of the Singularity Institute might be part of the explanation for why rationality sometimes seems not to pay off: the "rational" analyze everyone's behavior but their own. When dealing with their own foibles, rationalists abdicate rationality when evaluating their own altruism.

Comment author: paulfchristiano 27 December 2010 03:00:22AM *  13 points [-]

Your argument applies to any donation of any sort, in fact to any action of any sort. What is the probability that the thing I am currently doing is the best possible thing to do? Why, it's basically zero. Should I therefore not do it?

Referring to the SIAI as a cause "some posters stumbled on" is fairly inaccurate. It is a cause that a number of posters are dedicating their lives to, because in their analysis it is among the most efficient uses of their energy. In order to find a more efficient cause, I not only have to do some research, I have to do more research than the rational people who created SIAI (this isn't entirely true, but it is much closer to the truth than your argument). The accessibility of SIAI in this setting may be strong evidence in its favor (this isn't a coincidence; one reason to come to a place where rational people talk is that it tends to make good ideas more accessible than bad ones).

I am not donating myself. But for me there is some significant epistemic probability that the SIAI is in fact fighting for the most efficient possible cause, and that they are the best-equipped people currently fighting for it. If you have some information or an argument that suggests that this belief is inconsistent, you should share it rather than just imply that it is obvious (you have argued correctly that there probably exist better things to do with my resources, but I already knew that and it doesn't help me decide what to actually do with my resources.)

By treating people who do things I approve of well, I can encourage them to do things I approve of, and conversely. By donating and proclaiming it loudly I am strongly suggesting that I personally approve of donating. Signaling isn't necessarily irrational. If I am encouraging people to behave as rationally in support of my own goals, in what possible sense am I failing to be rational?

Comment author: Aharon 27 December 2010 10:47:13AM 4 points [-]

I'm curious: If you have the resources to donate (which you seem to imply by the statement that you have resources for which you can make a decision), and think it would be good to donate to the SIAI, then why don't you donate?

(I don't donate because I am not convinced unfriendly AI is such a big deal. I am aware that this may be lack of calibration on my part, but from the material I have read on other sites, UFAI just doesn't seem to be that big a risk. There were some discussions on the topic on stardestroyer.net; while the board isn't as dedicated to rationality as this board is, the counterarguments seemed well-founded, although I don't remember the specifics right now. If anybody is interested, I will try to dig them up.)

Comment author: paulfchristiano 27 December 2010 07:34:28PM 9 points [-]

I don't know if it is a good idea to donate to SIAI. From my perspective, there is a significant chance that it is a good idea, but also a significant chance that it isn't. I think everyone here recognizes the possibility that money going to the SIAI will accomplish nothing good. I either have a higher estimate for that possibility, or a different response to uncertainty. I strongly suspect that I will be better informed in the future, so my response is to continue earning interest on my money and only start donating to anything when I have a better idea of what is going on (or if I die, in which case the issue is forced).

The main source of uncertainty is whether the SIAI's approach is useful for developing FAI. Based on its output so far, my initial estimate is "probably not" (except insofar as they successfully raise awareness of the issues). This is balanced by my respect for the rationality and intelligence of the people involved in the SIAI, which is why I plan to wait until I get enough (logical) evidence to either correct "probably not" or to correct my current estimates about the fallibility of the people working with the SIAI.

Comment author: [deleted] 28 December 2010 07:46:27PM -2 points [-]

This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes. Don't tell me there isn't irrational prejudice here!

The argument that any donation is subject to similar objections is silly because it's obvious that a human-welfare maximizer would plug for the donation the donor believes best, despite the unlikelihood of finding the absolute best. It should also be obvious that my argument is that it's unlikely that the Singularity Institute comes anywhere near the best donation, and one reason it's unlikely is related to the unlikelihood of picking the best, even if you have to forgo the literal very best!

Numerous posters wouldn't pick this particular charity, even if it happened to be among the best, unless they were motivated by signaling aspirations rather than the rational choice of the best recipient. As Yvain said in the previous entry: "Deciding which charity is the best is hard." Rationalists should detect the irrationality of making an exception when one option is the Singularity Institute.

(As to whether signaling is rational: that's completely irrelevant to the discussion, as we're talking about the best donation from a human-welfare standpoint. To argue that the contribution makes sense because signaling might be as rational as donating, even if plausible, is merely to change the subject rather than respond to the argument.)

Another argument for the Singularity Institute donation I can't dismiss so easily. I read the counter-argument as saying that the Singularity Institute is clearly the best donation conceivable. To that I don't have an answer, not any more than I have a counter-argument for many outright delusions. I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain's preceding entry, where each $500 saves a human life.

Before downvoting this, ask yourself whether you're saying my point is unintelligent or shouldn't be raised for other reasons. (Ask yourself if my point should be made, was made by anyone else, and isn't better than at least 50% of the postings here. Ask yourself whether it's rational to upvote the critic and his silly argument, and whether the many donors arrived at their views about the Singularity Institute's importance based on the representativeness heuristic, the aura effect which surrounds Eliezer, ignoring the probability of delivering any benefit, and a multitude of other errors in reasoning.)

Comment author: Vaniver 28 December 2010 08:07:08PM *  6 points [-]

This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes.

Envy is unbecoming; I recommend against displaying it. You'd be better off starting with your 3rd sentence and cutting the word "silly."

I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain's preceding entry, where each $500 saves a human life.

They have worked out this math, and it's available in most of their promotional stuff that I've seen. Their argument is essentially "instead of operating on the level of individuals, we will either save all of humanity, present and future, or not." And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it's a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).

The question as to whether SIAI is the best way to nudge the entire future of humanity is a separate question from whether or not SIAI is a better bet than preventing malaria deaths. I don't know if SIAI folks have made quantitative comparisons to other x-risk reduction plans, but I strongly suspect that if they have, a key feature of the comparison is that if we stop the Earth from getting hit by an asteroid, we just prevent bad stuff. If we get Friendly AI, we get unimaginably good stuff (and if we prevent Unfriendly AI without getting Friendly AI, we also prevent bad stuff).

Comment author: Aharon 28 December 2010 07:44:21PM 2 points [-]

I'm sorry, I haven't found the thread yet. I lurked there for a long time and just now registered to use their search function and find it again. The main objection I clearly remember finding convincing was that nanotech can't be used in the way many proponents of the Singularity propose, due to physical constraints, and thus an AI would be forced to rely on existing industry etc.

I'll continue the search, though. The point was far more elaborated than one sentence. I face a similar problem as with climate science here: I thoroughly informed myself on the subject, came to the conclusion that climate change deniers are wrong, and then, little by little, forgot the details of the evidence that led to this conclusion. My memory could be better :-/

Comment author: Kaj_Sotala 28 December 2010 08:59:15PM 5 points [-]

The main objection I clearly remember finding convincing was that nanotech can't be used in the way many proponents of the Singularity propose, due to physical constraints, and thus an AI would be forced to rely on existing industry etc.

Of course, the Singularity argument in no way relies on nanotech.

Comment author: XiXiDu 29 December 2010 04:53:35PM *  2 points [-]

Of course, the Singularity argument in no way relies on nanotech.

Without advanced real-world nanotechnology it will be considerably more difficult for an AI to FOOM and therefore pose an existential risk. It will have to make use of existing infrastructure, e.g. buy stocks of chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won't be able to create new computational substrate without the whole economy of the world supporting it. It won't be able to create an army of robot drones overnight without it either.

To do so it would have to make use of considerable amounts of social engineering without its creators noticing it. But more importantly, it will have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn't just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources, so the AGI could not profit from its ability to self-improve when it comes to acquiring the resources needed to self-improve in the first place.

So the absence of advanced nanotechnology constitutes an immense blow to any risk estimates that assume already available nanotech. Further, if one assumes that nanotech is a prerequisite for AI going FOOM, then another question arises: it should be easier to create advanced replicators to destroy the world than to create an AGI that then creates advanced replicators that it then fails to hold, and which then destroy the world. Therefore one might ask which is the bigger risk here.

Comment author: Kaj_Sotala 29 December 2010 05:30:25PM 3 points [-]

To be honest, I think this is a far scarier AI-go-FOOM scenario than nanotech is.

Comment author: Kevin 27 December 2010 11:21:17AM 1 point [-]

I'm interested!

Comment author: Rain 27 December 2010 02:24:52AM *  13 points [-]

The rational reasons to signal are outlined in the post Why Our Kind Can't Cooperate, and there are more good articles with the Charity tag.

My personal reasons for supporting SIAI are outlined entirely in this comment.

Please inform me if anyone knows of a better charity.

Comment author: XiXiDu 28 December 2010 03:58:11PM *  5 points [-]

Please inform me if anyone knows of a better charity.

As long as you presume that the SIAI saves a potential galactic civilization from extinction (i.e. from never being created), and assign a high enough probability to that outcome, nobody is going to be able to inform you of a charity with a higher payoff. At least as long as no other organization is going to make similar claims (implicitly or explicitly).

If you don't mind I would like you to state some numerical probability estimates:

  1. The risk of human extinction by AI (irrespective of countermeasures).
  2. Probability of the SIAI succeeding in implementing an AI (see 3.) that takes care of any risks thereafter.
  3. Estimated trustworthiness of the SIAI (signaling common good (friendly AI/CEV) while following selfish objectives (unfriendly AI)).

I'd also like you to tackle some problems I see regarding the SIAI in its current form:

Transparency

How do you know that they are trying to deliver what they are selling? If you believe the premise of AI going FOOM and that the SIAI is trying to implement a binding policy based on which the first AGI is going to FOOM, then you believe that the SIAI is an organisation involved in shaping the future of the universe. If the stakes are this high there does exist a lot of incentive for deception. Can you conclude, because someone writes a lot of ethically correct articles and papers, that that output is reflective of their true goals?

Agenda and Progress

The current agenda seems to be very broad and vague. Can the SIAI make effective progress given such an agenda compared to specialized charities and workshops focusing on more narrow sub-goals?

  • How do you estimate their progress?
  • What are they working on right now?
  • Are there other organisations working on some of the sub-goals that make better progress?

As multifoliaterose implied here, at the moment the task of recognizing humans as distinguished beings already seems to be too broad a problem to tackle directly. Might it be more effective, at this point, to concentrate on supporting other causes leading towards the general goal of AI-associated existential risk mitigation?

Third Party Review

Without being an expert and without any peer review, how sure can you be about the given premises (AI going FOOM etc.) and the effectiveness of their current agenda?

Also, what conclusion should one draw from the fact that at least 2 people who have been working for the SIAI, or have been in close contact with it, disagree with some of the stronger claims? Robin Hanson seems not to be convinced that donating to the SIAI is an effective way to mitigate risks from AI. Ben Goertzel does not believe in the scary idea. And Katja Grace thinks AI is no big threat.

More

My own estimations

  • AI going FOOM: 0.1%
  • AI going FOOM being an x-risk: 5%
  • AI going FOOM being an x-risk is prevented by the SIAI: 0.01%
  • That the SIAI is trustworthy of pursuing to create the best possible world for all human beings: 60%

Therefore, the probability that a donation to the SIAI pays off: 0.0000003%
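
(For reference: if the four estimates above are treated as independent and simply multiplied, which appears to be the calculation intended, the arithmetic reproduces the stated figure.)

0.001 × 0.05 × 0.0001 × 0.6 = 0.000000003 = 0.0000003%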

Comment author: Rain 28 December 2010 08:01:49PM *  9 points [-]

I consider the above form of futurism to be the "narrow view". It considers too few possibilities over too short a timespan.

  • AI is not the only extinction risk we face.
  • AI is useful for a LOT more than just preventing extinction.
  • FOOM isn't necessary for AI to cause extinction.
  • AI seems inevitable, assuming humans survive other risks.
  • Human extinction by AI doesn't require the AI to swallow its light cone (Katja).
  • My interpretation of Ben's article is that he's saying SIAI is correct in everything except the probability that they can change the outcome.
  • You didn't mention third parties who support SIAI, like Nick Bostrom, who I consider to be the preeminent analyst on these topics.

I'm not academic enough to provide the defense you're looking for. Instead, I'll do what I did at the end of the above linked thread, and say you should read more source material. And no, I don't know what the best material is. And yes, this is SIAI's problem. They really do suck at marketing. I think it'd be pretty funny if they failed because they didn't have a catchy slogan...

I will give one probability estimate, since I already linked to it: SIAI fails in their mission AND all homo sapiens are extinct by the year 2100: 90 percent. I'm donating in the hopes of reducing that estimate as much as possible.

Comment author: XiXiDu 29 December 2010 10:32:12AM *  2 points [-]

I'll do what I did at the end of the above linked thread, and say you should read more source material.

One of my main problems regarding risks from AI is that I do not see anything right now that would hint at the possibility of FOOM. I am aware that you can extrapolate from the chimpanzee-human bridge. But does the possibility of superchimpanzee intelligence really imply superhuman intelligence? Even if that were the case (and I consider that sparse evidence for neglecting other risks), I do not see that it implies FOOM (e.g. vast amounts of recursive self-improvement). You might further argue that even human-level intelligence (EMs or AI) might pose a significant risk when sped up or by means of brute force. In any case, I do believe that the associated problems of creating any such intelligence are vastly greater than the problem of limiting an intelligence, its scope of action. I believe that it is reasonable to assume that there will be a gradual development with many small-impact mistakes that will lead to a thorough comprehension of intelligence and its risks before any superhuman intelligence could pose an existential risk.

Comment author: Rain 29 December 2010 05:32:14PM *  3 points [-]

One of my main problems regarding risks from AI is that I do not see anything right now that would hint at the possibility of FOOM.

I see foom as a completely separate argument from FAI or AGI or extinction risks. Certainly it would make things more chaotic and difficult to handle, increasing risk and uncertainty, but it's completely unnecessary for chaos, risk, and destruction to occur - humans are quite capable of that on their own.

Once an AGI is "out there" and starts getting copied (assuming no foom), I want to make sure they're all pointed in the right direction, regardless of capabilities, just as I want that for nuclear and other weapons. I think there's a possibility we'll be arguing over the politics of enemy states getting an AGI. That doesn't seem to be a promising future. FAI is arms control, and a whole lot more.

Comment author: XiXiDu 29 December 2010 06:27:37PM *  1 point [-]

Once an AGI is "out there" and starts getting copied...

I do not see that. The first AGI will likely be orders of magnitude slower (not less intelligent) than a standard human and will be running on some specialized computational substrate (a supercomputer). If you remove FOOM from the equation, then I see many other existential risks as being just as dangerous as AI-associated risks.

Comment author: Rain 29 December 2010 06:34:56PM 3 points [-]

Again, a point-in-time view. Maybe you're just not playing it out in your head like I am? Because when you say, "the first AGI will likely be orders of magnitude slower", I think to myself, uh, who cares? What about the one built three years later that's 3x faster and runs on a microcomputer? Does the first one being slow somehow make that other one less dangerous? Or that no one else will build one? Or that AGI theory will stagnate after the first artificial mind goes online? (?!?!)

Why does it have to happen 'in one day' for it to be dangerous? It could take a hundred years, and still be orders of magnitude more dangerous than any other known existential risk.

Comment author: XiXiDu 29 December 2010 07:02:55PM 1 point [-]

Does the first one being slow somehow make that other one less dangerous?

Yes, because I believe that development will be gradual enough to tackle any risks on the way to a superhuman AGI, if superhuman capability is possible at all. There are certain limitations. Shortly after the invention of modern rocketry, people landed on the Moon. But development eventually halted or slowed down; we haven't reached other star systems yet. With that metaphor I want to highlight that I am not aware of good arguments or other kinds of evidence indicating that an AGI would likely turn into a runaway risk at any point of its development. It is possible, but given its low probability I am not sure we can reasonably neglect other existential risks in its favor. I believe that once we know how to create artificial intelligence capable of learning on a human level, our comprehension of its associated risks, and our ability to limit its scope, will have increased dramatically as well.

Comment author: Rain 29 December 2010 07:43:54PM *  2 points [-]

You're using a different definition of AI than me. I'm thinking of 'a mind running on a computer' and you're apparently thinking of 'a human-like mind running on a computer', where 'human-like' includes a lot of baggage about 'what it means to be a mind' or 'what it takes to have a mind'.

I think any AI built from scratch will be a complete alien, and we won't know just how alien until it starts doing stuff for reasons we're incapable of understanding. And history has proven that the more sophisticated and complex the program, the more bugs, and the more it goes wrong in weird, subtle ways. Most such programs don't have will, intent, or the ability to converse with you, making them substantially less likely to run away.

And again, you're positing that people will understand, accept, and put limits in place, where there are substantial incentives to let it run as free and as fast as possible.

Comment author: Rain 28 December 2010 09:14:07PM *  4 points [-]

To restate my original question, is there anyone out there doing better than your estimated 0.0000003%? Even though the number is small, it could still be the highest.

Comment author: XiXiDu 29 December 2010 10:15:46AM 2 points [-]

To restate my original question, is there anyone out there doing better than your estimated 0.0000003%?

None whose goal is to save humanity from an existential risk, although asteroid surveillance might come close; I'm not sure. It is not my intention to claim that donating to the SIAI is worthless; I believe the world does indeed need an organisation that tackles the big picture. In other words, I am not saying that you shouldn't be donating to the SIAI; I am happy someone does (if only because of LW). But the fervor in this thread seemed to me completely unjustified. One should seriously consider whether there are other groups worthy of promotion, or whether there should be other groups doing the same work as the SIAI or dealing with one of its sub-goals.

My main problem is how far I should go to neglect other problems in favor of some high-impact low-probability event. If your number of possible beings of human descent is high enough, and you assign each being enough utility, you can outweigh any low probability. You could probably calculate that you shouldn't help someone who is drowning, because 1) you'd risk your own life and all the money you could otherwise earn and donate to the SIAI, and 2) in that time you could tell 5 people about existential risks from AI. I am exaggerating to highlight my problem. I'm just not educated enough yet; I have to learn more math, especially probability. Right now I feel that it is unreasonable to donate all (or most) of my money to the SIAI.
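
(To make the exaggeration concrete, here is a minimal sketch of the calculation being worried about. All of the numbers are invented assumptions, chosen only to show how a large enough stake can make an arbitrarily small probability dominate.)

```python
# Invented figures illustrating "shut up and multiply" taken to its extreme.
delta_p = 1e-15            # assumed increase in the chance of a good outcome per donation
future_beings = 1e30       # assumed number of possible beings of human descent
utility_per_being = 1.0    # assumed utility assigned to each of those beings

expected_gain = delta_p * future_beings * utility_per_being   # = 1e15
drowning_person = 1.0      # one life saved with certainty, on the same utility scale

print(expected_gain, drowning_person, expected_gain > drowning_person)
# 1e+15 1.0 True  -> the tiny probability "wins" whenever the stake is large enough
```

The arithmetic itself is trivial; whether such a conclusion should actually be acted on is exactly the question being raised here.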

It really saddens me to see how often LW perceives any critique of the SIAI as ill-intentioned, as if people wanted to destroy the world. There are some morons out there, but most people really would like to save the world if possible. They just don't see the SIAI as a reasonable way to do so.

Comment author: Rain 29 December 2010 04:03:51PM 3 points [-]

the fervor in this thread seemed to me completely unjustified. [...] My main problem is how far I should go to neglect other problems in favor of some high-impact low-probability event.

I agree with SIAI's goals. I don't see it as "fervor". I see it as: I can do something to make this world a better place (according to my own understanding, in a better way than any other possible), therefore I will do so.

I compartmentalize. Humans are self-contradictory in many ways. I can send my entire bank account to some charity in the hopes of increasing the odds of friendly AI, and I can buy a hundred dollar bottle of bourbon for my own personal enjoyment. Sometimes on the same day. I'm not ultra-rational or pure utilitarian. I'm a regular person with various drives and desires. I save frogs from my stairwell rather than driving straight to work and earning more money. I do what I can.

Comment author: Rain 29 December 2010 02:29:17PM 2 points [-]

One should seriously consider whether there are other groups worthy of promotion, or whether there should be other groups doing the same work as the SIAI or dealing with one of its sub-goals.

I have seriously considered it. I have looked for such groups, here and elsewhere, and no one has ever presented a contender. That's why I made my question as simple and straightforward as possible: name something more important. No one's named anything so far, and I still read for many hours each week on this and other such topics, so hopefully if one arises, I'll know and be able to evaluate it.

I donate based on relative merit. As I said at the end of my original supporting post: so far, no one else seems to come close to SIAI. I'm comfortable with giving away a large portion of my income because I don't have much use for it myself. I post it here because it encourages others to give of themselves. I think it's the right thing to do.

I know it's hard to see why. I wish they had better marketing materials. I was really hoping the last challenge, with projects like a landing page, a FAQ, etc., would make a difference. So far, I don't see much in the way of results, which is upsetting.

I still think it's the right place to put my money.

Comment author: TheOtherDave 28 December 2010 07:27:33PM 4 points [-]

If you're going to do this sort of explicit decomposition at all, it's probably also worth thinking explicitly about the expected value of a donation. That is: how much does your .0001 estimate of SIAI's chance of preventing a humanity-destroying AI go up or down based on an N$ change in its annual revenue?

Comment author: XiXiDu 28 December 2010 07:57:53PM *  6 points [-]

Thanks, you are right. I'd actually do a lot more, but I feel I am not yet ready to tackle this topic mathematically; I only started getting into math in 2009. I asked several times for an analysis with input variables I could use to come up with my own estimates of the expected value of a donation to the SIAI. I asked people who are convinced of the SIAI's value to provide the decision procedure by which they were convinced, and to lay it open to public inspection so that others could reassess the procedure and calculations and compute their own conclusions. In response they asked me to do so myself. I do not take it amiss; they do not have to convince me, and I am not able to do it myself yet. But while learning math, I try to encourage other people to think about it.

Comment author: XiXiDu 28 December 2010 08:14:05PM *  3 points [-]

That is: how much does your .0001 estimate of SIAI's chance of preventing a humanity-destroying AI go up or down based on an N$ change in its annual revenue?

I feel that this deserves a direct answer. I think it is not just about money. The question would be: what would they do with it, and would they actually hire experts? I will assume the best-case scenario here.

If the SIAI were able to obtain a billion dollars, I'd estimate its chance of preventing a FOOMing uFAI at 10%.
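
(Putting TheOtherDave's question together with this answer gives a rough sense of the implied value of a marginal dollar. The sketch below uses only the two hypothetical figures already in the thread - the 0.0001 baseline and the 10%-given-a-billion-dollars estimate - plus a linearity assumption that is mine alone and almost certainly false.)

```python
# Implied probability change per dollar, under a naive linear-response assumption.
p_baseline = 0.0001      # chance of success at current funding (figure from the parent comment)
p_with_1e9 = 0.10        # chance of success with an extra billion dollars (figure above)
extra_funding = 1e9      # dollars

marginal_p_per_dollar = (p_with_1e9 - p_baseline) / extra_funding
print(marginal_p_per_dollar)          # ~1e-10 of probability per dollar, if the response were linear
print(1000 * marginal_p_per_dollar)   # ~1e-07 for a $1,000 donation, under the same assumption
```

Of course, the real relationship between funding and probability of success is nothing like linear, which is presumably why the discussion keeps circling back to what the money would actually be spent on.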

Comment author: Emile 29 December 2010 03:47:18PM 1 point [-]

This part is the one that seems the most different from my own probabilities:

AI going FOOM being an x-risk: 5%

So, do you think the default case is a friendly AI? Or at least innocuous AI? Or that friendly AI is easy enough so that whoever first makes a fooming AI will get the friendliness part right with no influence from the SIAI?

Comment author: AlexU 27 December 2010 11:31:28PM *  3 points [-]

Why has this comment been downvoted so much? It's well-written and makes some good points. I find it really disheartening every time I come on here to find that a community of "rationalists" is so quick to muffle anyone who disagrees with LW collective opinion.

Comment author: DSimon 28 December 2010 07:32:59PM *  8 points [-]

I can't speak for anyone else, but I downvoted it because of the deadly combination of:

  • A. Unfriendly snarkiness, i.e. scare-quoting "rationalists" and making very general statements about the flaws of LW without any suggestions for improvements, and without a tone of constructive criticism.

  • B. Incorrect content, i.e. not referencing this article (which is almost certainly the primary reason there are so many comments saying "I donated") and misusing probability in the first paragraph.

If it were just A, then I could appreciate the comment for making a good point and do my best to ignore the antagonism. If it were just B, then the comment is cool because it creates an opportunity to correct a mistake in a way that benefits both the original commenter and others, and adds to the friendly atmosphere of the site.

The combination, though, results in comments that don't add anything at all, which is why I downvoted srdiamond's comment.

Comment author: shokwave 28 December 2010 04:46:45PM *  17 points [-]

It's been downvoted - I guess - because it sits on the wrong side of a very interesting dynamic: what I call the "outside view dismissal" or "outside view attack". It goes like this:

A: From the outside, far too many groups discover that their supported cause is the best donation avenue. Therefore, be skeptical of any group advocating their preferred cause as the best donation avenue.

B: Ah, but this group tries to the best of their objective abilities to determine the best donation avenue, and their cause has independently come out as the best donation avenue. You might say we prefer it because it's the best, not the other way around.

A: From the outside, far too many groups claim to prefer it because it's the best and not the other way around. Therefore, be skeptical of any group claiming they prefer a cause because it is the best.

B: Ah, but this group has spent a huge amount of time and effort training themselves to be good at determining what is best, and an equal amount of time training themselves to notice common failure modes like reversing causal flows because it looks better.

A: From the outside, far too many groups claim such training for it to be true. Therefore, be skeptical of any group making that claim.

B: Ah, but this group is well aware of that possibility; we specifically started from the outside view and used evidence to update properly to the level of these claims.

A: From the outside, far too many groups claim to have started skeptical and been convinced by evidence for it to be true. Therefore, be skeptical of any group making that claim.

B: No, we really, truly, did start out skeptical, and we really, truly, did get convinced by the evidence.

A: From the outside, far too many people claim they really did weigh the evidence for it to be true. Therefore, be skeptical of any person claiming to have really weighed the evidence.

B: Fine, you know what? Here's the evidence, look at it yourself. You already know you're starting from the position of maximum skepticism.

A: From the outside, there are far too many 'convince even a skeptic' collections of evidence for them all to be true. Therefore, I am suspicious that this collection might be indoctrination, not evidence.

And so on.

The problem is that the outside view is used not just to set a good prior, but also to discount any and all evidence presented to support a higher inside view. This is the opposite of an epistemically unreachable position: it is an epistemically stuck position, a flawed one (you can't get anywhere from there) - but try explaining that idea to A. Dollars to donuts you'll get:

A: From the outside, far too many people accuse me of having a flawed or epistemically stuck position. Therefore, be skeptical of anyone making such an accusation.

And I am sure many people on LessWrong have had this discussion (probably in the form of 'oh yeah? lots of people think they're right and they're wrong' -> 'lots of people claim to work harder at being right too and they're wrong' -> 'lots of people resort to statistics and objective measurements that have probably been fudged to support their position' -> 'lots of people claim they haven't fudged when they have' and so on), and I am sure that the downvoted comment pattern-matches the beginning of such a discussion.

Comment author: XiXiDu 28 December 2010 05:38:15PM 2 points [-]

Fine, you know what? Here's the evidence, look at it yourself. You already know you're starting from the position of maximum skepticism.

Where is the evidence?

Comment author: shokwave 29 December 2010 05:09:07AM *  2 points [-]

All of the evidence that an AI is possible¹, then the best method of setting your prior for the behavior of an AI².

¹. Our brains are proof of concept. That it is possible for a lump of flesh to be intelligent means AI is possible - even under pessimistic circumstances, even if it means simulating a brain with atomic precision and enough power to run the simulation faster than 1 second per second. Your pessimism would have to reach "the human brain is irreducible" in order to disagree with this proof, by which point you'd have neurobiologists pointing out you're wrong.

². Which would be a uniform distribution over all possible points in relevant-thing-space - in this case, mindspace.

Comment author: TheOtherDave 29 December 2010 05:21:12AM 3 points [-]

Just to clarify: are you asserting that this comment, and the associated post about the size of mindspace, represent the "convince even a skeptic" collection of evidence you were alluding to in its grandparent (which XiXiDu quotes)?

Or was there a conversational disconnect somewhere along the line?

Comment author: shokwave 29 December 2010 05:33:15AM *  1 point [-]

I didn't provide all of the evidence that an AI is possible, just one strong piece. All the evidence, plus a good prior for how likely the AI is to turn us into more useful matter, should be enough to convince even a skeptic. However, the brain-as-proof-of-concept idea is really strong: try and formulate an argument against that position.

Unless they're a skeptic like A above, or a "UFAI-denier" (in the style of climate change deniers) posing as a skeptic, or they privilege what they want to believe over what they ought to believe. There are probably half a dozen more failure modes I haven't spotted.

Comment author: TheOtherDave 29 December 2010 06:32:27AM 5 points [-]

Sounds like a conversational disconnect to me, then: at least, going back through the sequence of comments, it seems the sequence began with an expression of skepticism of the claim that "a donation to the Singularity Institute is the most efficient charitable investment," and ended with a presentation of an argument that UFAI is both possible and more likely than FAI.

Thanks for clarifying.

Just to pre-emptively avoid being misunderstood myself, since I have stepped into what may well be a minefield of overinterpretation, let me state some of my own related beliefs: I consider human-level, human-produced AGI possible (confidence level ~1) within the next century (C ~.85-.99, depending on just what "human-level" means and assuming we continue to work on the problem), likely not within the next 30 years (C<.15-.5, depending as above). I consider self-improving AGI and associated FOOM, given human-level AGI, a great big question mark: I'd say >99% of HLAGIs we develop will be architected in such a way that significant self-improvement is unlikely (much as our own architectures make it unlikely for us), but the important question is whether the actual number of exceptions is 0 or 1, and I have no confidence in my intuitions about that (see my comments elsewhere about expected results based on small probabilities of large magnitudes). I consider UFAI given self-improving AGI practically a certainty: >99% of SIAGIs will be UFAIs, and again the important question is whether the number of exceptions is 0 or 1, and whether the exception comes first. (The same thing is true about non-SI AGIs, but I care about that less.) Whether SIAI can influence that last question at all, and if so by how much and in what direction, I haven't a clue about; if I wanted to develop an opinion about that I'd have to look into what SIAI actually does day-to-day.

If any of that is symptomatic of fallacy, I'd appreciate having it pointed out, though of course nobody is under any obligation to do so.

Comment author: shokwave 29 December 2010 07:55:49AM 2 points [-]

There's an argument chain I didn't make clear: "If UFAI is both possible and more likely than FAI, then influencing this in favour of FAI is a critical goal", and "SIAI is the most effective charity working towards this goal".

The only part I would inquire about is

I'd say >99% of HLAGIs we develop will be architected in such a way that significant self-improvement is unlikely (much as our own architectures make it unlikely for us),

Humans don't have the ability to self-modify (at least, our neuroscience is too underdeveloped to count for that yet), but AGIs will probably be made from explicit programming code, and will probably have some level of command over programming code (it seems like one of the ways in which they would be expected to interact with the world: creating code that achieves their goals). So their architecture is more conducive to self-modification (and hence self-improvement) than ours is.

Of course, a more developed point is that humans are very likely to build a fixed AGI if they can. If you're making that point, and not that AGIs simply won't self-improve, then I see no issues.

Comment author: TheOtherDave 29 December 2010 02:05:15PM 3 points [-]

Re: argument chain... I agree that those claims are salient.

Observations that differentially support those claims are also salient, of course, which is what I understood XiXiDu to be asking for, which is why I asked you initially to clarify what you thought you were providing.

Re: self-improvement... I agree that AGIs will be better-suited to modify code than humans are to modify neurons, both in terms of physical access and in terms of a functional understanding of what that code does.

I also think that if humans did have the equivalent ability to mess with their own neurons, >99% of us would either wirehead or accidentally self-lobotomize rather than successfully self-optimize.

I don't think the reason for that is primarily in how difficult human brains are to optimize, because humans are also pretty dreadful at optimizing systems other than human brains. I think the problem is primarily in how bad human brains are at optimizing. (While still being way better at it than their competition.)

That is, the reasons have to do with our patterns of cognition and behavior, which are as much a part of our architecture as is the fact that our fingers can't rewire our neural circuits.

Of course, maybe human-level AGIs would be way way better at this than humans would. But if so, it wouldn't be just because they can write their own cognitive substrate, it would also be because their patterns of cognition and behavior were better suited for self-optimization.

I'm curious as to your estimate of what % of HLAGIs will successfully self-improve?

Comment author: JoshuaZ 29 December 2010 05:44:06AM 2 points [-]

This doesn't address the most controversial aspect, which is that AI would go foom. If extreme fooming doesn't occur this isn't nearly as big an issue. That is an issue many people have discussed, and not all have come away convinced: Robin Hanson had a long debate with Eliezer over this, and Robin was not convinced. Personally, I consider fooming to be unlikely but plausible. But how likely one thinks it is matters a lot.

Comment author: Rain 29 December 2010 06:04:42PM 4 points [-]

This doesn't address the most controversial aspect, which is that [nuclear weapons] would [ignite the atmosphere]. If extreme [atmospheric ignition] doesn't occur this isn't nearly as big an issue.

Even without foom, AI is a major existential risk, in my opinion.

Comment author: wedrifid 28 December 2010 05:04:58PM 9 points [-]

Downvoted parent and grandparent. The grandparent because:

  • It doesn't deserve the above defence.
  • It states obvious and trivial things as though they were deep, insightful criticisms, while applying them superficially.
  • It sneaks extra elements of an agenda through via presumption.

I had left it alone until I saw it given unwarranted praise and a meta karma challenge.

I find it really disheartening every time I come on here to find that a community of "rationalists" is so quick to muffle anyone who disagrees with LW collective opinion.

See the replies to all similar complaints.

Comment author: XiXiDu 28 December 2010 06:06:56PM *  7 points [-]

Initially I wanted to downvote you, but decided to upvote you for giving reasons why you downvoted the above comments.

The reason I believe the comments shouldn't have been downvoted is that, in this case, something matters more than signaling disapproval of poor style and argumentation. This post and thread are especially off-putting to skeptical outsiders. Downvoting critical comments will just reinforce this perception. Therefore, if you are fond of LW and the SIAI, you should account for public relations and kindly answer any critical or generally skeptical comments rather than simply downvoting them.

Comment author: ata 28 December 2010 07:57:32PM 6 points [-]

Downvoting critical comments will just reinforce this perception. Therefore, if you are fond of LW and the SIAI, you should account for public relations and kindly answer any critical or generally skeptical comments rather than simply downvoting them.

What is there to say in response to a comment like the one that started this thread? It was purely an outside-view argument that doesn't make any specific claims against the efficacy of SIAI, or against any of the reasons people believe it is an important cause. It wasn't an argument; it was a dismissal.

Comment author: Vaniver 28 December 2010 08:14:28PM *  4 points [-]

Your post right here seems like a good example. You could say something along the lines of "This is a dismissal, not an argument; merely naming a bias isn't enough to convince me. If you provide some specific examples, I'd be happy to listen and respond as best I can." You could even tack on a "But until then, I'm downvoting this because it seems to be coming from hostility rather than a desire to find the truth together."

Heck, you could even copy that and have it saved somewhere as a form response to comments like that.

Comment author: XiXiDu 29 December 2010 09:07:05AM 3 points [-]

It wasn't an argument; it was a dismissal.

I've noticed a tendency on LW to portray comments as attacks. They may seem that way to trained rationalists and otherwise highly educated folks, but not every negative comment is actually intended as a mere rhetorical device or simple dismissal. It won't help if you just downvote people or accuse them of logical rudeness. Some people are honestly interested but fail to express themselves adequately. Newcomers usually won't know about the abnormally high standards on LW; you have to tell them. You also have to take into account those who follow a link to this post, or come across it by other means, and don't know anything about LW. How does this thread appear to them, and what are they likely to conclude, especially if no critical comment is answered kindly but is simply downvoted or snidely rejected?

Comment author: DSimon 28 December 2010 07:40:03PM 3 points [-]

Agreed that responding to criticism is important, but I think it's especially beneficial to respond only to non-nasty criticism. Responding nicely to people who are behaving like jerks can create an atmosphere where jerkiness is encouraged.

Comment author: Vaniver 28 December 2010 07:44:27PM 2 points [-]

This is the internet, though; skins are assumed to be tough. There is some benefit to saying "It looks like you wanted to say 'X'. Please try to be less nasty next time. Here's why I don't agree with X" instead of just "wow, you're nasty."

Comment author: Kaj_Sotala 28 December 2010 03:51:22PM 0 points [-]

I agree that it's been downvoted too much. (At -6 as of this comment, up from -7 due to my own upvote.)