Lumifer comments on Why I haven't signed up for cryonics - Less Wrong
Because if you don't construct a FAI but only construct a seed out of which a FAI will build itself, it's not obvious that you'll have the ability to do test runs.
Well, that sounds like a new area of AI safety engineering to explore, no? How to check your work before doing something potentially dangerous?
I believe that is MIRI's stated purpose.
Quite so, which is why I support MIRI despite their marketing being, in my opinion, much too laden with fearmongering.
Though I do understand why it is: Eliezer believes he was dangerously close to actually building an AI before he realized it would destroy the human race, back in the SIAI days. Fair enough for him to fear what all the other People Like Eliezer might do, but without being able to see his AI designs from that period, the rest of us have no way to judge whether they would have destroyed the human race or simply gone kaput like so many other supposed AGI designs. Private experience, however, does not make for persuasive marketing material.