juliawise comments on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Be an informed donor. I advise reading the GiveWell interview with SIAI from last spring.
To me, it's not a good sign that SIAI said they had no immediate plans for what they would do with new funding.
Jasen Murray's answers to Holden's questions were problematic and did not represent the Singularity Institute's positions well. That is an old interview, and since then we've done many things to explain what we plan to do with new funding. For example, we published a strategic plan and I gave this video interview. Moreover, the donation page linked in the OP has the most up-to-date information on what we plan to do with new funding: see Future Plans You Can Help Support.
FWIW, the "Future Plans" list seems to me to somewhat understate the value of a donation. I realize it's fairly accurate in that it reflects SI's activities, yet it seems like it could be presented better.
For example, the first item is "hold the Summit". But I happen to know that the Summit generally breaks even or makes a little money, so my marginal dollar will not make or break the Summit. Similarly, a website redesign, while probably important, isn't exciting enough to be listed as the second item. The third item, publishing the open problems document, is a good one, though you should make it seem more exciting.
I think the donation drive page should thoroughly make the case that SI is the best use of someone's charity dollars -- that it's got a great team, great leadership, and is executing a plan with the highest probability of working at every step. That page should probably exist on its own, assuming the reader hasn't read any of the rest of the site, with arguments for why working explicitly on rationality is worthwhile; why transparency matters; why outreach to other researchers matters; what the researchers are currently spending time on and why those are the correct things for them to be working on; and so on. It can be long: long-form copy is known to work, and this seems like a correct application for it.
In fact, since you probably have other things to do, I'll do a little bit of copywriting myself to try to discover if this is really a good idea. I'll post some stuff here tomorrow after I've worked on it a bit.
I shall not complain. :)
OK, here's my crack: http://techhouse.org/~lincoln/singinst-copy.txt
Totally unedited. Please give feedback. If it's good, I can spend a couple more hours on it. If you're not going to use it, please don't tell me it's good, because I have lots of other work to do.
It's good enough that if we use it, we will do the editing. Thanks!
The connection between AI and rationality could be made stronger.
Indeed, that's been my impression for a little while. I'm unconvinced that AI is the #1 existential risk. The set of problems descending from the fact that known life resides in a single biosphere — ranging from radical climate change, to asteroid collisions, to engineered pathogens — seems to be right up there. I want all AI researchers to be familiar with FAI concerns; but there are more people in the world whose decisions have any effect at all on climate change risks — and maybe even on pathogen research risks! — than on AI risks.
But anyone who wants humanity to solve these problems should want better rationality and better (trans?)humanist ethics.
Thanks for pointing out the newer info. The different expansion plans seem sensible.
I'll chime in to agree with both lukeprog in pointing out that the interview is very outdated and with Holden in correcting Louie's account of the circumstances surrounding it.
Being an informed donor requires more than an outdated, non-representative interview. This examination has far more high-quality information and, according to its creator, will be updated soon (although he is apparently behind the schedule he set for himself).
He also talked to Jaan Tallinn. His best points in my opinion:
...
...
...
...
...
...
...
...
(Most of these considerations don't apply to developments in pure mathematics, which is my best guess at a fruitful mode of attacking the FAI goals problem. The implementation-as-AGI aspect is a separate problem, likely of a different character, but I expect we need to obtain a basic theoretical understanding of FAI goals first to know what kinds of AGI progress are useful. Jumping to development of language translation software is way off-track.)
Thanks a lot for posting this link. The first point was especially good.
The "I feel" opening is telling. It does seem like the only way people can maintain this confusion beyond 10 seconds of thought is by keeping it in the realm of intuition. In fact, among the first improvements that could be made to the human predictive algorithm is to remove our tendency to let feelings and preferences get all muddled up with our abstract thought.
Given his influence, isn't he worth the time it takes to try to explain to him how he is wrong?
The only way to approach general intelligence may be by emulating the human algorithms. The opinion that we are capable of inventing an artificial and simple algorithm exhibiting general intelligence is not a mainstream opinion among AI and machine learning researchers. And even if one assumes that all those scientists are not nearly as smart and rational as SI folks, they seem to have a substantial head start when it comes to real-world experience with the field of AI and its difficulties.
I actually share the perception that we have no reason to suspect that we could reach a level above ours without massive and time-costly experimentation (removing our biases merely sounds easy when formulated in English).
I think that you might be attributing too much to an expression uttered in an informal conversation.
What do you mean by "feelings" and "preferences"? The use of intuition seems to be universal, even within the field of mathematics. I don't see how computationally bounded agents could get around "feelings" when making predictions about subjects that are only vaguely understood and defined. Framing the problem in technical terms like "predictive algorithms" doesn't change anything about the fact that making predictions about poorly understood subjects is error-prone.
Yes. He just doesn't seem to be someone whose opinion on artificial intelligence should be considered particularly important. He's just a layman making the typical layman guesses and mistakes. I'm far more interested in what he has to say on warps in spacetime!
I agree with Grognor -- that interview is beyond unhelpful. Even calling it an interview of SIAI is incredibly misleading. (I would say a complete lie.) Holden interviewed the only visitor at SI last summer who wouldn't have known anything about the organization's funding needs. Jasen was running a student summer program -- not SIAI. I would liken it to Holden interviewing a random boy scout somewhere and then publishing a report complaining that he couldn't understand the organizational funding needs of the Boy Scouts of America.
Also, keep in mind that GiveWell is certainly a good service (and I support them), but their process is limited and unable to evaluate the value of research. In fact, if an opportunity to donate as good as Singularity Institute existed, GiveWell's methodology would blind them to the possibility of discovering it.
Carl Shulman pointed out how absurd this was: If GiveWell had existed 100 years ago, they would have argued against funding the eradication of smallpox. Their process forces them to reject the possibility that an intervention could be that effective.
I'm curious about the new GiveWell Labs initiative though. Singularity Institute does meet all of that program's criteria for inclusion... perhaps that's why they started this program... so that they aren't forced to overlook so many extraordinary donation opportunities forever.
To clarify what I said in those comments:
Holden had a few posts that 1) made the standard point that one should use both prior and evidence to generate one's posterior estimate of a quantity like charity effectiveness, 2) used example prior distributions that assigned vanishingly low probability to outcomes far from the median, albeit disclaiming that those distributions were essential.
I naturally agree with 1), but took issue with 2). A normal distribution for charity effectiveness is devastatingly falsified by the historical data, and even a log-normal distribution has wacky implications, like ruling out long-term human survival a priori. So I think a reasonable prior distribution will have a fatter tail. I think it's problematic to use false examples, lest they get lodged in memory without metadata, especially when they might receive some halo effect from 1).
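To make the tail-thickness point concrete, here is a minimal sketch (with hypothetical parameters I chose for illustration; the thread does not specify any) comparing how much probability a normal, a log-normal, and a power-law (fat-tailed) prior each assign to a charity being k times as effective as the median. The thin-tailed priors drive that probability toward zero astronomically fast, while the fat tail does not:

```python
import math

def normal_tail(k, sigma_rel=0.5):
    # P(effectiveness > k * median) under a normal prior centered at the
    # median, with sd = sigma_rel * median (hypothetical parameter choice)
    return 0.5 * math.erfc((k - 1) / (sigma_rel * math.sqrt(2)))

def lognormal_tail(k, sigma=1.0):
    # P(X > k * median) when ln(X) is normal with sd sigma:
    # reduces to P(Z > ln(k) / sigma) for standard normal Z
    return 0.5 * math.erfc(math.log(k) / (sigma * math.sqrt(2)))

def pareto_tail(k, alpha=1.5):
    # P(X > k * scale) under a power-law tail with index alpha (k >= 1)
    return k ** (-alpha)

for k in (10, 1000, 100000):
    print(f"k={k}: normal={normal_tail(k):.3g}, "
          f"lognormal={lognormal_tail(k):.3g}, pareto={pareto_tail(k):.3g}")
```

Under these assumed parameters, at k = 1000 the normal prior's tail probability underflows to zero and the log-normal's is on the order of 10^-12, while the power-law still leaves roughly 3 in 100,000 — which is why the choice of prior family, not just the evidence, can do the decisive anti-x-risk work.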
I said that this methodology and the example priors would have more or less ruled out big historical successes, not that GiveWell would not have endorsed smallpox eradication. Indeed, with smallpox I was trying to point out something that Holden would consider a problematic implication of a thin-tailed prior. With respect to existential risks, I likewise said that I thought Holden assigned a higher prior to x-risk interventions than could be reconciled with a log-normal prior, since he could be convinced by sufficient evidence (like living to see humanity colonize the galaxy, and witnessing other civilizations that perished). These were criticisms that those priors were too narrow even for Holden, not that GiveWell would use those specific wacky priors.
Separately, I do think Holden's actual intuitions are too conservative, e.g. in assigning overly low probability to eventual large-scale space colonization and large populations, and giving too much weight to a feeling of absurdity. So I would like readers to distinguish between the use of priors in general and Holden's specific intuitions that big payoffs from x-risk reduction (and AI risk specifically) face a massive prior absurdity penalty, with the key anti-x-risk work being done by the latter (which they may not share).
Holden seems to have spoken with Jasen "and others", so at least two people. I don't think it's fair to say that speaking with 1/3 of the people in an organization is as unrepresentative as speaking with 1/3,000,000 of the Boy Scouts. And since Holden sent SIAI his notes and got their feedback before publishing, they had a second chance to correct any misstatements made by the guy they gave him to interview.
So calling this interview "a complete lie" seems very unfair.
I agree that GiveWell's process is limited, and I'm interested in the GiveWell Labs project.