Dr_Manhattan comments on Some Thoughts on Singularity Strategies - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I was informed by Justin Shovelain that recently he independently circulated a document arguing for "IA first", and that most of the two dozen people he showed it to agreed with it, or nearly so.
I'm a bit surprised there haven't been more people arguing (or at least stating their intuition) that "AI first" is the better strategy.
But I did find that Eliezer had written an argument explaining why he chose the "AI first" strategy in Artificial Intelligence as a Positive and Negative Factor in Global Risk (pages 31-35). Here's the conclusion from that section:
Is this a super-secret document, or can we ask Justin to share?
Sorry, I should have said that it's a draft document. I didn't see any particularly sensitive information in it, so presumably Justin will release it when it's ready. The argument is basically along the same lines as my OP.