Lumifer comments on Why I haven't signed up for cryonics - Less Wrong

Post author: Swimmer963 12 January 2014 05:16AM


Comment author: Lumifer 15 January 2014 03:39:18PM 1 point

> I'm also wondering why aspiring FAI designers didn't bother to test-run their utility function before actually "running" it in a real optimization process.

Because if you don't construct an FAI directly but only a seed out of which an FAI will build itself, it's not obvious that you'll have the ability to do test runs at all.

Comment author: [deleted] 15 January 2014 05:55:16PM 0 points

Well, that sounds like a new area of AI safety engineering to explore, no? How to check your work before doing something potentially dangerous?

Comment author: Eugine_Nier 16 January 2014 06:10:18AM 0 points

I believe that is MIRI's stated purpose.

Comment author: [deleted] 16 January 2014 08:06:44AM 2 points

Quite so, which is why I support MIRI, even though their marketing is, in my opinion, far too laden with fearmongering.

I do understand why they market that way: back in the SIAI days, Eliezer believes he came dangerously close to actually building an AI before realizing it would have destroyed the human race. Fair enough for him to be afraid of what all the other People Like Eliezer might do, but without being able to see his AI designs from that period, there's really no way for the rest of us to judge whether they would have destroyed the human race or just gone kaput like so many other supposed AGI designs. Private experience, however, does not make for persuasive marketing material.