eli_sennesh comments on Why I haven't signed up for cryonics - Less Wrong

29 Post author: Swimmer963 12 January 2014 05:16AM


Comment author: [deleted] 15 January 2014 05:55:16PM 0 points

Well, that sounds like a new area of AI safety engineering to explore, no? How to check your work before doing something potentially dangerous?

Comment author: Eugine_Nier 16 January 2014 06:10:18AM 0 points

I believe that is MIRI's stated purpose.

Comment author: [deleted] 16 January 2014 08:06:44AM 2 points

Quite so, which is why I support MIRI despite their marketing being, in my opinion, far too heavy on fearmongering.

I do understand why they market that way, though: Eliezer believes he came dangerously close to actually building an AI, back in the SIAI days, before realizing it would destroy the human race. Fair enough for him to fear what all the other People Like Eliezer might do, but without being able to see his AI designs from that period, the rest of us have no way to judge whether they would have destroyed the human race or simply gone kaput like so many other supposed AGI designs. Private experience, however, does not serve as persuasive marketing material.