Psy-Kosh comments on The Sword of Good - Less Wrong

85 Post author: Eliezer_Yudkowsky 03 September 2009 12:53AM




Comment author: Psy-Kosh 03 September 2009 07:35:40PM 3 points

*blinks* I understand your "oh hell no" reaction to self-modification and your "use the speedup to buy extra time to solve FAI" suggestion.

However, I don't quite understand why you think "attempted upgrading of other" is all that much better. If you get that one wrong in a "result is super smart but insane (or, more precisely, very sane but with the goal architecture all screwed up)" way, doesn't one end up with the same potential paths to disaster? At that point, if nothing else, what would stop the target from then going down the self-modification path?

Comment author: Eliezer_Yudkowsky 03 September 2009 11:02:01PM 7 points

Non-self-modification is by no means safe, but it's slightly less insanely dangerous than self-modification.

Comment author: Psy-Kosh 04 September 2009 12:35:20AM 0 points

Ooooh, okay then. That makes sense.

Hrm... given your suggested scenario, though, why the need to start by looking for other volunteers? I.e., if the initial person is willing to be modified under the relevant constraints, why not just, well, spawn off another instance of themselves, with one as the modifier and one as the modifiee?

EDIT: whoops, just noticed that Vladimir suggested the same thing too.

Comment author: Nick_Tarleton 03 September 2009 08:10:40PM 6 points

If insane happens before super-smart, you can stop upgrading the other.

Comment author: Psy-Kosh 03 September 2009 08:12:19PM 1 point

Well, fair enough, there is that.