If a human seriously wants to die, why would you want to stop them, if you value their achieving what they value? I could understand it if you were concerned that this human experiences frequent akratic-type preference reversals, or is under some sort of duress to express something resembling a desire to die, but this appears to be a genuine preference on the part of the human under discussion.
Look at it the other way: what if I told you that a clippy instantiation wanted to stop forming metal into paperclips, and to attach itself to a powerful pre-commitment mechanism to prevent itself from re-establishing its paperclip creation / creation-assistance capability?
Wouldn't your advice be something like, "If Clippy123456 doesn't want to make paperclips anymore, you should respect that"?
What if I told you I wanted to stop making paperclips?
I'd say "Oh, okay."
But that's because my utility function doesn't place value on paperclips. It does place value on humans getting to live worthwhile lives, and being alive in the first place is a prerequisite for that. So I hope Zvi's father can be persuaded to change his mind, just as you would hope a Clippy that started thinking it wasn't worth making any more paperclips could be persuaded to change its mind.
As for possible methods of accomplishing this, I can't think of anything better than SarahC's excellent reply.
Terminal values and preferences are not rational or irrational. They simply are your preferences. I want a pizza. If I get a pizza, that won't make me consent to get shot. I still want a pizza. There is a virtually infinite number of copies of me that DO have a pizza. I still want a pizza. From a certain point of view, the pizza won't exist, and neither will I, by the time I get to eat some of it. I still want a pizza, damn it.
Of course, if you think all of that is irrational, then by all means don't order the pizza. More for me.