"existing novel results, provided EY and others have some"
Indeed there are. TDT, for example, has not yet received an academic writeup. There are lots of ideas scattered through LW which could be published in journals. And the great thing about academic writing is that you are allowed to use other people's ideas, as long as you cite them. You are considered to be doing them a favor when you do that.
In general, this means sprinkling another person's ideas into one's own analysis; if what's intended is a direct journal rewrite of, e.g., the TDT paper, then the original non-academic author should be credited as a co-author.
I understand the point that it might not be worth the time of EY or other SI Fellows to publish ideas in journals. But if some lesser lights want to contribute, they can do so in this way.
One can always post a paper on the arxiv.org preprint server without going through a peer-review process first; presumably one of the CoRR subsections would be appropriate. This is always worth the time spent.
"I've come to agree that navigating the Singularity wisely is the most important thing humanity can do. I'm a researcher and I want to help. What do I work on?"
The Singularity Institute gets this question regularly, and we haven't published a clear answer to it anywhere. This is because it's an extremely difficult and complicated question. A large expenditure of limited resources is required to make a serious attempt at answering it. Nevertheless, it's an important question, so we'd like to work toward an answer.
A few preliminaries:
Next, a division of labor into "problem categories." There are many ways to categorize the open problems, and some of them are probably more useful than the one I've chosen below.
The list of open problems below is very preliminary. I'm sure there are many problems I've forgotten, and many problems I'm unaware of. Probably all of the problems are stated relatively poorly: this is only a "first step" document. Certainly, all listed problems are described at an extremely "high" level, very far (so far) from mathematical precision, and each can be broken down into several subproblems, often dozens.
Safe AI Architectures
Safe AI Goals
Strategy
My thanks to Eliezer Yudkowsky, Carl Shulman, and Nick Bostrom for notes from which I've drawn.