
lukeprog comments on Video Q&A with Singularity Institute Executive Director - Less Wrong Discussion

42 points · Post author: lukeprog, 10 December 2011 11:27AM


Comments (122)


Comment author: lukeprog, 10 December 2011 07:00:56PM, 3 points

"Even if they were to make some actual progress, most of it would probably be regarded as too dangerous to be released."

I'm not sure that "most of it" is too dangerous to be released. There is quite a lot of research that can be done in the open. If there weren't, we wouldn't be trying to write a document like Open Problems in Friendly AI for the public.

Comment author: SilasBarta, 13 December 2011 04:19:38PM, 3 points

You've managed to come up with excuses for not posting something as rudimentary as statistics that would substantiate your claims of success for rationality bootcamps.

"That would take too much time!" -> So a volunteer can do it for you. -> "But it's private so we can't release it." -> So anonymize it. -> "That takes too much work too." -> Um? -> "Hey, our alums dress nicely now, that should be enough proof."

Frankly, that doesn't bode well.

Comment author: dlthomas, 13 December 2011 04:51:57PM, 3 points

It seems that signaling rigor in hidden domains through a policy of rigor in open domains would be appropriate, and possibly sufficient. It may be expensive, but hopefully the open domains addressed would still be of some benefit in themselves.