SingInst has produced some internal writings that aren't available to the public, but not many. Mostly there are scribbled notes, half-finished models, and thoughts still in people's heads. We're working to push them out into written form, but that takes time, money, and people, and we're short on all three.
The other problem is that before we can talk about strategy, we first have to explain, in clear and well-organized language, lots of material that is basic to a veteran like you but not to most interested parties, and much of that explaining hasn't been done yet (these SI papers definitely help, though: 1, 2, 3, 4, 5, 6). To solve this problem we are (1) adding and improving lots of articles on the LW wiki, as you suggested a while back (you'll see a report on what we did later), and (2) working on the AI risk wiki (we're creating the map of articles right now). Once those resources are available, it will be easier to speak clearly in public about strategic issues.
We hit a temporary delay in pushing out strategy work at SI because two of our most knowledgeable researchers and strategists became unavailable for different reasons: Anna took over launching CFAR, and Carl took an extended (unpaid) leave of absence to take care of some non-SI matters. I also haven't been able to continue my own AI risk strategy series, partly due to other priorities and partly because I reached the point where continuing the sequence would be a lot of work without clear, well-organized write-ups of lots of standard material to draw on. (So it will be easier for me to continue once the LW wiki has been improved and the AI risk wiki exists, both of which we have people working on right now.)
Moreover, there are several papers in the works — mostly by Kaj (who is now a staff researcher), with some help from me — but you won't see them for a while. You did see this and this, however. Those are the product of months of part-time work by several remote researchers, plus analysis by Kaj and Stuart. Remote researchers are currently doing large literature reviews for other paper projects (at the pace of part-time work), but we haven't yet reached the stage of analyzing those large data sets so that we can write papers about what we found.
Also, a lot of work currently scattered around in papers and posts by SI and FHI people is being collected and tightly organized in Nick Bostrom's forthcoming monograph on superintelligence, which will be a few hundred pages entirely about singularity strategy. Having much of "the basics" organized in that way will also make it easier to produce additional strategy work.
Luke, with the people currently at SI's and FHI's disposal, how long do you think it would take (assuming they're not busy with other projects) to produce a document that lays out a cogent argument for some specific Singularity strategy? An argument that takes into account all of the important considerations that have already been raised (for example, my comment that Holden quoted)? I will concede that strategy work is not bogged down if you think it can be done in a reasonable time frame (two years, perhaps?). But if SI and FHI are merely producing writings tha...