Psy-Kosh comments on The Sword of Good - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What, you mean try to self-modify? Oh hell no. Human brain not designed for that. But you would have a longer time to try to solve FAI. You could maybe try a few non-self-modifications if you could find volunteers, but uploading and upload-driven-upgrading is fundamentally a race between how smart you get and how insane you get.
*blinks* I understand your "oh hell no" reaction to self modification and "use the speedup to buy extra time to solve FAI" suggestion.
However, I don't quite understand why you think "attempted upgrading of another" is all that much better. If you get that one wrong in a "result is super smart but insane" way (or, more precisely, very sane but with the goal architecture all screwed up), doesn't one end up with the same potential paths to disaster? At that point, if nothing else, what would stop the target from then going down the self-modification path?
Non-self-modification is by no means safe, but it's slightly less insanely dangerous than self-modification.
Ooooh, okay then. That makes sense.
Hrm... given your suggested scenario, though, why the need to start by looking for other volunteers? I.e., if the initial person is willing to be modified under the relevant constraints, why not just, well, spawn off another instance of themselves, one the modifier and one the modifiee?
EDIT: whoops, just noticed that Vladimir suggested the same thing too.
If insane happens before super-smart, you can stop upgrading the other person.
Well, fair enough, there is that.