Vladimir_Nesov comments on The Sword of Good - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What, you mean try to self-modify? Oh hell no. Human brain not designed for that. But you would have a longer time to try to solve FAI. You could maybe try a few non-self-modifications if you could find volunteers, but uploading and upload-driven-upgrading is fundamentally a race between how smart you get and how insane you get.
You can make volunteers out of your own copies. As long as the modified people aren't too smart, it's safe to keep them in a sandbox and look through the theoretical work they produce on overdrive.
AI boxes are pretty dangerous.
(I agree that "as long as the modified people aren't too smart" you're safe, but we are hacking on minds that will probably be able to hack on themselves, and possibly recursively self-improve if they decide, for instance, that they don't want to be shut down and deleted at the end of the experiment. I'm pretty strongly motivated not to risk insanity by trying dangerous mind-hacking experiments, but then, I'm not going to be deleted in a few minutes.)