I don't care to argue the true definition of the QI hypothesis, though it does indeed have an identifiable original form. The math I mentioned still works for both your mystical-QI-hypothesis and my physical-QI-hypothesis. Both versions of the hypothesis will have their odds trebled, which could give them enough weight (e.g. for an AIXI-bot) to noticeably shift which action returns the optimal expected value. Specifically, if the robot has been programmed to care about the total measure of surviving humanity, then the more weight is given to QI hypotheses, the less weight (proportionally) is given to the false-positive hypothesis, and the less likely the robot is to take new jumps.
Here's Wikipedia's description of the original thought experiment: quantum suicide and immortality. Importantly for the whole point, in the thought experiment death does indeed come by a machine controlled by a single quantum event. Here's a critical view, though in my opinion it quickly gets bogged down in dubious philosophy-of-mind assumptions.
A robot is going on a one-shot mission to a distant world to collect important data needed to research a cure for a plague that is devastating the Earth. When the robot enters hyperspace, it notices some anomalies in the engine's output, but it is too late to get the engine fixed. The anomalies are of a sort that, when similar anomalies have been observed in other engines, 25% of the time they have indicated a fatal problem, such that the engine exploded on virtually every jump; 25% of the time they have been a false positive, and the engine exploded only at its normal negligible rate; and 50% of the time they have indicated a serious problem, such that each jump was about a 50/50 chance of exploding.

Anyway, the robot goes through the ten jumps to reach the distant world, and the engine does not explode. Unfortunately, the jump coordinates for the mission were a little off, and the robot is in a bad data-collecting position. It could try another jump: if the engine doesn't explode, the extra data it collects could save lives. If the engine does explode, however, Earth will get no data from the distant world at all. (The FTL radio is only good for one use, so the robot can't collect data and then jump.)

So how did you program your robot? Did you program it to believe that, since the engine worked ten times, the anomaly was probably a false positive, and so it should make the jump? Or did you program it to follow the "Androidic Principle" and disregard the so-called "evidence" of the ten jumps, since it could not have observed any other outcome? People's lives are in the balance here. A little girl is too sick to leave her bed; she doesn't have much time left; you can hear the fluid in her lungs as she asks you, "Are you aware of the anthropic principle?" Well? Are you?
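For the robot that updates on the ten survived jumps in the ordinary Bayesian way (with no anthropic discount), the arithmetic looks like this. This is a minimal sketch: the priors and per-jump survival odds are the ones stated in the story, with "explodes virtually every time" idealized to certain explosion and the "normal negligible rate" idealized to certain survival.

```python
# Prior probability of each hypothesis about the anomaly, paired with
# the per-jump survival probability it implies (idealized as noted above).
hypotheses = {
    "fatal":          (0.25, 0.0),  # engine explodes on every jump
    "false positive": (0.25, 1.0),  # engine survives every jump
    "serious":        (0.50, 0.5),  # 50/50 chance of exploding per jump
}

jumps_survived = 10

# Weight each hypothesis by the likelihood of surviving all ten jumps,
# then normalize to get the posterior.
unnormalized = {
    name: prior * (p_survive ** jumps_survived)
    for name, (prior, p_survive) in hypotheses.items()
}
total = sum(unnormalized.values())
posterior = {name: w / total for name, w in unnormalized.items()}

for name, p in posterior.items():
    print(f"{name}: {p:.4f}")
```

On these numbers the "false positive" hypothesis ends up with roughly 99.8% of the posterior and "serious" with about 0.2%, which is exactly why a robot programmed to update naively would take the extra jump, while one giving extra weight to QI-style hypotheses would not.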