The error is in considering the simulation chip a reason to not care about death.
In the comic, the woman is arguing that the chip is a reason to not care about death as a way to disarm a threat. Modifying your preferences for game theoretic reasons can be a very sensible thing to do.
It can be. It wasn't this time. It resulted in her death.
Link
I'm increasingly impressed by the power of Zach Weiner's comics to demonstrate in a few images why hard problems are hard. It would be a vast task, but perhaps it would be useful to create an index of such problem-demonstrating comics on the Wiki, giving us something to point newbies at that would be less intimidating than formal Sequence posts. I get the impression that a common hurdle is simply getting people to accept that the problems of AI (and simulation, ethics, what have you) are actually difficult.