A robot is going on a one-shot mission to a distant world to collect important data needed to research a cure for a plague that is devastating the Earth. When the robot enters hyperspace, it notices some anomalies in the engine's output, but it is too late to get the engine fixed. The anomalies are of a sort that, when similar anomalies have been observed in other engines, 25% of the time they have indicated a fatal problem, such that the engine exploded on virtually every jump it attempted. 25% of the time they have been a false positive, and the engine exploded only at its normal, negligible rate. 50% of the time they have indicated a serious problem, such that each jump was roughly a 50/50 chance of exploding.

Anyway, the robot goes through the ten jumps to reach the distant world, and the engine does not explode. Unfortunately, the jump coordinates for the mission were a little off, and the robot is in a bad data-collecting position. It could try another jump - if the engine doesn't explode, the extra data it collects could save lives. If the engine does explode, however, Earth will get no data from the distant world at all. (The FTL radio is only good for one use, so the robot can't collect data and then jump.)

So how did you program your robot? Did you program it to believe that since the engine worked 10 times, the anomaly was probably a false positive, and so it should make the jump? Or did you program it to follow the "Androidic Principle" and disregard the so-called "evidence" of the ten jumps, since it could not have observed any other outcome?

People's lives are in the balance here. A little girl is too sick to leave her bed, she doesn't have much time left, you can hear the fluid in her lungs as she asks you, "Are you aware of the anthropic principle?" Well? Are you?
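For comparison, here is what a straightforward Bayesian update (with no anthropic correction) says, as a minimal Python sketch. The priors and per-jump odds are taken from the story above; the exact survival rates I use for the "fatal" and "false positive" cases (0.0 and 1.0) are stand-ins for "explodes virtually every time" and "normal negligible rate".

```python
# Minimal sketch: ordinary Bayesian update on surviving ten jumps,
# ignoring any anthropic "I couldn't have observed otherwise" correction.
# Survival rates 0.0 and 1.0 are assumed stand-ins for the story's
# "virtually every time" and "normal negligible rate".

priors = {"fatal": 0.25, "false_positive": 0.25, "serious": 0.50}
p_survive_one_jump = {"fatal": 0.0, "false_positive": 1.0, "serious": 0.5}

jumps_survived = 10

# Likelihood of surviving all ten jumps under each hypothesis.
likelihoods = {h: p_survive_one_jump[h] ** jumps_survived for h in priors}

# Posterior over hypotheses, given survival.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in priors}

# Probability that an eleventh jump also succeeds.
p_next_jump_ok = sum(posterior[h] * p_survive_one_jump[h] for h in priors)

print(posterior)       # ~{'fatal': 0.0, 'false_positive': 0.998, 'serious': 0.002}
print(p_next_jump_ok)  # ~0.999
```

On that (non-anthropic) accounting the anomaly is almost certainly a false positive and the eleventh jump is about 99.9% safe; the whole question is whether the robot is allowed to count the ten survived jumps as evidence at all.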