Clarity comments on Proposal for "Open Problems in Friendly AI" - Less Wrong

26 Post author: lukeprog 01 June 2012 02:06AM


Comment author: Clarity 29 July 2015 10:03:44AM 0 points [-]

Much of the technical research agenda should be kept secret for the same reasons you might want to keep secret the DNA for a synthesized supervirus. But some of the Friendly AI technical research agenda is safe to explain so that a broad research community can contribute to it.

I'm uncomfortable with this.

Since 2012, has MIRI updated its stance on self-censoring the AI research agenda, and can this be demonstrated with reference to formerly censored material or otherwise?

If not, are there alternative Friendly-AI-focused organisations that accept donations and censor differently, or don't censor at all?

Thanks for your disclosures, lukeprog; I appreciate the general candor and accountability. It was also nice to read that you were an SI intern in 2011 - quickly you rose to the top! :)