Squark comments on Computation complexity of AGI design - Less Wrong Discussion

6 Post author: Squark 02 February 2015 08:05PM

Comment author: Squark 05 February 2015 07:23:37AM 0 points [-]

Thanks for commenting!

To my great shame, I have not, even though I knew of its existence from Yudkowsky's "Intelligence Explosion Microeconomics" paper. I read it yesterday and realized my treatment of anthropics is unforgivably naive: I so should have known better! However, I'm not thrilled by Shulman and Bostrom's approach either: they use SSA/SIA instead of UDT. It seems that a correct analysis (using UDT and taking my complexity-theoretic considerations into account) supports my conclusion that human-based superintelligence, rather than de novo superintelligence, is the primary reference scenario. Hopefully, I will write about this soon.