One of the reasons I am skeptical of contributing money to the SIAI is that I simply don't know what they would do with more money; the SIAI currently seems to be viable. Another reason is that I believe that an empirical approach is required: we need to learn more about the nature of intelligence before we can even attempt to solve something like friendly AI.

I bring this up because I just came across an old post (2007) on the SIAI blog:

We aim to resolve this crucial question by simultaneously proceeding on two fronts:

1. Experimentation with practical, contemporary AI systems that modify and improve their own source code.
2. Extension and refinement of mathematical tools to enable rigorous formal analysis of advanced self-improving AI’s.

[...]

For the practical aspect of the SIAI Research Program, we intend to take the MOSES probabilistic evolutionary learning system, which exists in the public domain and was developed by Dr. Moshe Looks in his PhD work at Washington University in 2006, and deploy it self-referentially, in a manner that allows MOSES to improve its own learning methodology.

[...]

Applying MOSES self-referentially will give us a fascinating concrete example of self-modifying AI software – far short of human-level general intelligence initially, but nevertheless with many lessons to teach us about the more ambitious self-modifying AI’s that may be possible.

[...]

We are seeking additional funding so as to enable, initially, the hiring of two doctoral or post-doctoral Research Fellows to focus on the above two areas (practical and theoretical exploration of self-modifying AI).

[...]

Part of our goal is to make progress on these issues ourselves, in-house within SIAI; and part of our goal is to, by demonstrating this progress, interest the wider AI R&D community in these foundational issues. Either way: the goal is to move toward a deeper understanding of these incredibly important issues.

[...]

SIAI must boot-strap into existence a scientific field and research community for the study of safe, recursively self-improving systems; this field and community doesn’t exist yet.

Some questions:

  • Has any progress been made on the points mentioned in the announcement above?
  • Is the SIAI still willing to pursue experimental AI research, or does it now focus solely on theoretical work?
  • What would the SIAI do given various amounts of money? 

I also have some questions regarding the hiring of experts. Is there a way to find out what exactly the current team is working on in terms of friendly AI research? Peter de Blanc seems to be the only person there who has done any actual work related to artificial intelligence.

I am aware that preparatory groundwork has to be done and capital has to be raised. But why is there no timeline? Why is there no progress report? What is missing for the SIAI to actually start working on friendly AI? The Singularity Institute is 10 years old; what is planned for the decade ahead?


AFAIK, that was the stuff Goertzel was doing as Director of Research. Now that he isn't around anymore, those things were dropped.

Pretty much the whole "practical experimentation" angle is, again AFAIK, considered too unsafe by the people currently running things at SIAI. At least that's what I was told during my Visiting Fellow time.

considered too unsafe

I expect that improving on the state of the art in practical AI is also almost totally useless for figuring out a way towards FAI, so "unsafe" is almost beside the point (except that making things worse while not making them better is not a good plan).

How do you expect to prove anything about an FAI without even knowing what an AGI would look like? I don't think current AI researchers even have that great of an idea of what AGI will eventually look like...

Now, improving on the state of the art might not be helpful, but being in a position where you could improve on it would be; and the best way to make sure you are in such a position is to have actually done it at least once.

How do you expect to prove anything about an FAI without even knowing what an AGI would look like? I don't think current AI researchers even have that great of an idea of what AGI will eventually look like...

It will be (and look) the way we make it. And we should make it right, which requires first figuring out what that is.

An AGI is an extremely complex entity. You don't get to decide arbitrarily how to make it. If nothing else, there are fundamental computational limits on Bayesian inference that are not even well understood yet. So if you were planning to make your FAI a Bayesian, then you should probably at least be somewhat familiar with these issues, and of course working towards their resolution will help you better understand your constraints. I personally strongly suspect there are also fundamental computational limits on utility maximization, so if you were planning on making your FAI a utility maximizer, then again this is probably a good thing to study. Maybe you don't consider this AGI research, but the main approach to AGI that I consider feasible would benefit at least somewhat from such understanding.
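
To make the point about computational limits concrete, here is a minimal sketch (an illustration only, not anything from SIAI's or the commenters' work; the function names and the toy distribution are made up for the example). Computing a posterior by brute-force enumeration over n binary variables means summing 2^n joint states, so exact Bayesian inference stops being feasible long before a model gets interesting.

```python
# Minimal illustration (hypothetical toy example): exact posterior computation
# by enumerating the full joint distribution over n binary variables costs
# 2**n terms, which grows out of reach very quickly.

import itertools


def posterior_by_enumeration(n, joint_prob, query_var, evidence):
    """Return P(query_var = 1 | evidence) by summing over all 2**n assignments.

    joint_prob: function mapping an n-tuple of bits to its joint probability.
    evidence:   dict {variable index: observed bit}.
    """
    numerator = 0.0
    denominator = 0.0
    for bits in itertools.product((0, 1), repeat=n):  # 2**n iterations
        if any(bits[i] != v for i, v in evidence.items()):
            continue
        p = joint_prob(bits)
        denominator += p
        if bits[query_var] == 1:
            numerator += p
    return numerator / denominator


def toy_joint(bits):
    """Independent coins, each with P(1) = 0.3 (chosen only so the demo runs)."""
    p = 1.0
    for b in bits:
        p *= 0.3 if b == 1 else 0.7
    return p


if __name__ == "__main__":
    n = 16  # 65,536 joint states already; at n = 60, enumeration is hopeless
    print(posterior_by_enumeration(n, toy_joint, query_var=0, evidence={1: 1}))
    # Prints ~0.3 (the coins are independent), after visiting every joint state.
```

Exact inference in general probabilistic models is known to be computationally hard (NP-hard in general), which is why practical systems lean on approximations such as sampling or variational methods; understanding which of those constraints actually bind seems like a prerequisite for any Bayesian FAI design.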

In my opinion, it is hopeless to get to provably friendly AI before someone else gets to AGI. The best one can hope for is that (i) brain uploads come first, or (ii) someone arrives at a fairly transparent AGI design coupled with a good understanding of meta-ethics. This means that, as far as I can see, if you want to reduce x-risk from UFAI then you should be doing one of the following:

  • working towards brain uploads to make sure they come first
  • working on the statistical approach to AI to make sure it gets to AGI before the connectionist approach (and developing software tools to help us better understand the statistical algorithms we write)
  • working on something like lukeprog's program of metaethics (this is probably the best of the three)

Do you know where the "we have to work towards AGI before we can make progress on FAI" meme came from? (I'm not sure if that's a caricature of the position or what.)

It's an exaggeration in that form, but a milder version seems pretty obvious to me. If you want to design a safe airplane, you need to know something about how to make a working airplane in the first place.

While there are certainly parts of FAI theory that you can make progress on even without knowing anything about AGI, there's probably a limit to how far you can get that way. For your speculations to be useful, you'll sooner or later need to know something about the design constraints. And they're not only constraints: they'll give you entirely new ideas and directions you wouldn't have considered otherwise.

It sounds nonsensical to claim that you could design safe airplanes without knowing anything about airplanes, that you could be a computer security expert without knowing anything about how software works, or that you could design a safe building without knowing anything about architecture. Why would it make any more sense to claim that you could design FAI without knowing AGI?

In this analogy, the relevant concern maps for me to the notion of "safety" of airplanes. And we know what "safety" for airplanes is: it means people don't die. It's hard to make a proper analogy, since for all the usual technologies the moral questions are easy and you are left with only technical questions. But with FAI, we also need to address moral questions on an entirely new level.

I agree that solving FAI also involves solving non-technical, moral questions, and that considerable headway can probably be made on these without knowledge about AGI. I was only saying that there's a limit on how far you can get that way.

How far or near that limit is, I don't know. But I would think that there'd be something useful to be found from pure AGI research earlier than one might naively expect. E.g. the Sequences draw on plenty of math/compsci-related material, and I expect that likewise some applications and techniques from AGI will also be necessary for FAI.

Count me as another person who would switch some of my charitable contribution from VillageReach to SIAI if I had more information on this subject.

As it happens, the most exciting developments in this space in years (to my knowledge) are happening right now, but it will take a while for things to happen and be announced. And that is all I can say for now. Stay tuned. :)

I will back up lukeprog here: things should get exciting soon if all goes well, and I really hope it does.

I will also point out that SingInst not reporting (or not knowing) what SingInst does doesn't mean that SingInst doesn't do things, though it does mean that they're somewhat bad at cataloging progress. (See the quarterly reports, et cetera; many SingInst critics don't even read those for some reason.)

Michael Anissimov is the media director and doesn't hang out with the Research Fellows often; Eliezer doesn't know much about anything anyone else is doing (by his choice, I think); and Michael Vassar is busy all the time with important things. Anna Salamon does a lot of visible work with Carl Shulman. Steve Rayhawk doesn't do much visible work (though SIAI should be announcing one of his recent accomplishments soon, I think), but he does a fair bit of invisible work that I'm pretty sure most of SIAI doesn't even know about (though he'd probably be quick to dismiss my promotion of his efforts as misguided or exaggerated). Jasen Murray is the organizational wizard running the rationality training; Louie Helm just got back into town and is working on the exciting stuff lukeprog mentioned; and lukeprog himself is being insanely productive, as usual. (Luke, are you officially employed/contracted by SIAI?) People loosely associated with SingInst are also doing interesting things, often with the help of SingInst researchers.

I'll give $500 to SIAI now if you can put a date on this 'exciting things' prediction. It doesn't have to be exact, but accurate to within a few weeks. $500 more on delivery of said excitement within said time window.

Also, I too would like to know where these quarterlies are stored.


The SIAI Newsletter is sent as an email to people who sign up here, I believe. Note that I am not certain about this, since I've signed up to lots of stuff over the years, and there's no obvious link in the newsletters. They also post some, though not all, of the newsletters to the SIAI blog.

Here are some of them:

Thanks for the explanation. I'm looking forward to seeing the results whenever SIAI can/wants to reveal them. I can't find the quarterly reports, though; can you post a link?


Barring an official response from SIAI, I've compiled what I know here.

Has any progress been made on the points mentioned in the announcement above?

See http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html (section titled "My History with SIAI") and http://en.wikipedia.org/wiki/OpenCog (which says that OpenCog incorporates the MOSES system mentioned in the 2007 post).


"What is missing for the SIAI to actually start working on friendly AI?"

I think that question is answered by Yudkowsky in his interview with Baez:

"I probably need to take at least a year to study up on math, and then—though it may be an idealistic dream—I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)"

Yudkowsky's widely known position is that it is unsafe to do otherwise. I imagine that is why they are not funding researchers to extend MOSES (or to do any other AGI work, for that matter), but that's just speculation on my part.

To learn more about the work people are doing to build AGI, check out the AGI conference series at http://agi-conf.org/, organized by Ben Goertzel, advisor to SIAI (formerly Director of Research). Videos of most of the talks and tutorials are available for free, along with PDFs of the conference papers.

"What is missing for the SIAI to actually start working on friendly AI?"

The biggest problem in designing FAI is that nobody knows how to build AI. If you don't know how to build an AI, it's hard to figure out how to make it friendly. It's like thinking about how to make a computer play chess well before anybody knows how to make a computer.

In the meantime, there's lots of pre-FAI work to be done. There are many unsolved problems in metaethics, decision theory, anthropics, cosmology, and other subjects that seem to be highly relevant to later FAI development. I'm currently working (with others) toward defining those problems so that they can be engaged by the wider academic community.

Even if we presume to know how to build an AI, figuring out the Friendly part still seems to be a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless F-wise, even though they may lead to a general AI.

What we actually need is knowledge of how to build a very specific type of AI, and unfortunately, it appears that the A(G)I (sub)field, with its "anything that works" attitude, isn't going to provide it.

Correct!

If you don't know how to build an AI, it's hard to figure out how to make it friendly.

(You don't make an AI friendly. You make a Friendly AI. Making an AI friendly is like making a text file good reading.)

Yes, I know. 'Making an AI friendly' is just a manner of speaking, like talking about humans having utility functions.

I assumed you knew, which is why it was a parenthetical, mainly clarifying for the benefit of others. It was a disagreement with the method of presentation.

Okay.


Another reason is that I believe that an empirical approach is required (with embedded link to www.opencog.org)

Do you therefore donate to OpenCog? They have the roadmap/timeline you seek (http://opencog.org/roadmap/).

Great questions.


No delete button is an awesome feature. WTF?

[This comment is no longer endorsed by its author]

I want to be able to downvote this!

Edit: In this particular case, I shouldn't, since it wasn't intentional. (Incidentally, this is the way to amend or retract comments: add a note. There is a separate use case of removing stuff posted by mistake.)

jmed writes that it was originally a comment posted by accident that couldn't be deleted, so my comment doesn't apply in this case. But giving users the ability to block downvoting while keeping their text fixed in the discussion seems like a bad thing. Even "revoked" comments need to be community-moderated. I understand that this is intended to set up incentives against deleting comments, but perhaps capping how negative a comment's score can go would do the trick instead.

Oh, so that's why I see all these comments with text struck out! I don't understand why the feature works this way and I don't like it. Did someone explain the rationale somewhere?