Research associates are not salaried staff, but we encourage their Friendly AI-related research outputs by, for example, covering their travel costs for conferences at which they present academic work relevant to our mission.
Are the research associates given enough support that they can work full time on their research?
No. SI would need more funding to support that kind of full-time work. The Research Associates program allows SI to support some additional research with almost no increase in our funding levels.
I believe the org is in the process of shortening its name from Singularity Institute for Artificial Intelligence (SIAI) to simply Singularity Institute (SI).
I'm irrationally slightly annoyed at having "wasted" the time it took to train my brain to read "SI" as "superintelligence" rather than "Singularity Institute", back when it was used that way.
Isn't the 'Artificial Intelligence' part the most important?
ETA: Seeing as it's trying to reduce existential risks from AI first and foremost.
(Disclaimer: I don't speak for SingInst, nor am I presently affiliated with them.)
But recall that the old name was "Singularity Institute for Artificial Intelligence," chosen before the inherent dangers of AI were understood. The unambiguous "for" is no longer appropriate, and "Singularity Institute about Artificial Intelligence" might seem awkward.
I seem to remember someone saying back in 2008 that the organization should rebrand as the "Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration," but obviously that was only a joke.
Also, "SI" is pretty commonly used as an abbreviation for Sports Illustrated and for the International System of Units.
I really hope SingInst weakens its policy of secrecy and ensures that the research output of the new associates is made public. I don't care about publishing it in "proper journals", but please, please, please put everything online.
Because you think doing so would reduce expected existential disaster, or because you want to read the material?
Because I want to read the material, I want to build upon it, and I want to see other people build upon it. For example, Wei Dai is not on the list of new associates. Do you agree that hiding the material from him will likely slow down progress?
It's true that publishing the material can hasten the arrival of unfriendly AI, but it can also give the world a chance where it had none. If the problem of Friendliness is hard enough that SingInst folks can't generate all the required insights by themselves before unfriendly AI arrives, then secrecy has negative expected utility. Looking at the apparent difficulty of the problem and the apparent productivity of SingInst over the 10 years of its existence, that seems to me to be the case. Eliezer believes the solution is just a handful of insights away, but I don't see why.
Huh, Dewey's "Learning What to Value" paper didn't cite Eliezer's CFAI. I'm glad he's doing a good job of sharing his paper at the AGI conference - was it published anywhere notable?
And they have an SIAI credit card now. Does it have fees?
Edit: Answered my own question.
Three new research associates. Link to the announcement.