Vaniver comments on Q&A with experts on risks from AI #1 - Less Wrong Discussion

29 Post author: XiXiDu 08 January 2012 11:46AM

Comment author: XiXiDu 09 January 2012 10:20:26AM 5 points [-]

After taking a look at the research pages, I'm not very afraid of these people... I'm afraid of Abram Demski...

It would help me a lot if you could email or PM me the names of the people whom you are afraid of, so that I can contact them. Thank you.

email: xixidu@gmail.com or da@kruel.co

Comment author: cousin_it 09 January 2012 10:43:33AM 8 points [-]

You could also try contacting Justin Corwin who won 24 out of 26 AI-box experiments and now develops AGI at a2i2.

Comment author: loup-vaillant 10 January 2012 04:57:29PM 3 points [-]

24 out of 26?! Since Eliezer won his first two, I was already reasonably certain that AI boxing is effectively impossible (at least once you give the AI permission to talk to some humans), so I won't meaningfully update here. But this piece of evidence was quite unexpected.