One of the reasons I am skeptical of contributing money to the SIAI is that I simply don't know what they would do with more money; the SIAI currently seems to be viable. Another reason is that I believe an empirical approach is required: we need to learn more about the nature of intelligence before we can even attempt to solve something like friendly AI.
I bring this up because I just came across an old post (2007) on the SIAI blog:
We aim to resolve this crucial question by simultaneously proceeding on two fronts:
1. Experimentation with practical, contemporary AI systems that modify and improve their own source code.
2. Extension and refinement of mathematical tools to enable rigorous formal analysis of advanced self-improving AIs. [...]
For the practical aspect of the SIAI Research Program, we intend to take the MOSES probabilistic evolutionary learning system, which exists in the public domain and was developed by Dr. Moshe Looks in his PhD work at Washington University in 2006, and deploy it self-referentially, in a manner that allows MOSES to improve its own learning methodology.
[...]
Applying MOSES self-referentially will give us a fascinating concrete example of self-modifying AI software – far short of human-level general intelligence initially, but nevertheless with many lessons to teach us about the more ambitious self-modifying AIs that may be possible.
[...]
We are seeking additional funding so as to enable, initially, the hiring of two doctoral or post-doctoral Research Fellows to focus on the above two areas (practical and theoretical exploration of self-modifying AI).
[...]
Part of our goal is to make progress on these issues ourselves, in-house within SIAI; and part of our goal is to, by demonstrating this progress, interest the wider AI R&D community in these foundational issues. Either way: the goal is to move toward a deeper understanding of these incredibly important issues.
[...]
SIAI must boot-strap into existence a scientific field and research community for the study of safe, recursively self-improving systems; this field and community doesn’t exist yet.
Some questions:
- Has any progress been made on the points mentioned in the announcement above?
- Is the SIAI still willing to pursue experimental AI research, or does it now focus solely on theoretical work?
- What would the SIAI do given various amounts of money?
I also have some questions regarding the hiring of experts. Is there a way to find out what exactly the current staff is working on in terms of friendly AI research? Peter de Blanc seems to be the only person who has done actual work related to artificial intelligence.
I am aware that preparatory groundwork has to be done and capital has to be raised. But why is there no timeline? Why is there no progress report? What is missing for the SIAI to actually start working on friendly AI? The Singularity Institute is 10 years old; what is planned for the decade ahead?
As it happens, the most exciting developments in this space in years (to my knowledge) are happening right now, but it will take a while for things to happen and be announced. And that is all I can say for now. Stay tuned. :)
I will back up lukeprog here: things should get exciting soon if all goes well, and I really hope it does.
I will also point out that SingInst not reporting (or knowing) what SingInst does doesn't mean that SingInst doesn't do things -- though it does mean that they're somewhat bad at cataloging progress. (Though see the quarterly reports, et cetera; many SingInst critics don't even read those for some reason.) Michael Anissimov is the media director and doesn't hang out with the Research Fellows often, and Eliezer doesn't know much about anything anyone else is doing [...]