Comments

I think the traveler's problem may pose two questions instead of one: first, whether it is the right thing to do just once, and second, whether it is good enough as a universal rule. We can conclude that it's the same question, because using it once means we should use it every time the situation is the same. But using it as a universal rule has an additional side effect: a world where you know you can be killed (deprived of all your possessions, etc.) at any moment to help some number of strangers is not such a nice place to live in, though it's possible the sacrifice is sometimes still worth it.

Someone might say that in the least convenient world the general rule is "you only kill strangers when it's absolutely impossible that anyone (even the patients) would ever know, and they have no one, and so on". In that world it's similar to the "living happily in a lie" problem. If a world where people don't know about the murdered travellers (the lie) is worse than a world where they do know about the murders, then this world is even worse than the previous one.

It just seems that evolution has failed to build a Friendly (to evolution) AI.

Am I right to say that in classical probability theory a probability is the number of favorable outcomes over the number of trials? If that's correct, then the answer depends on what we count as a trial. If it's one of Sleeping Beauty's awakenings, then it's two tails and one heads over three awakenings. If the whole procedure (or just the coin flip) is a trial, then it's one heads and one tails over two procedures.
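
To make the two counting conventions concrete, here is a minimal simulation sketch (the function and variable names are mine, not from the original discussion): it runs the Sleeping Beauty procedure many times and reports the frequency of heads counted per awakening and counted per experiment.

```python
import random

def simulate(n_experiments=100_000, seed=0):
    """Run the Sleeping Beauty procedure and count heads in two ways."""
    rng = random.Random(seed)
    heads_awakenings = 0   # awakenings on which the coin was heads
    total_awakenings = 0   # all awakenings
    heads_experiments = 0  # experiments in which the coin was heads

    for _ in range(n_experiments):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2  # heads -> 1 awakening, tails -> 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
            heads_experiments += 1

    # Counting per awakening gives roughly 1/3
    print("heads per awakening:", heads_awakenings / total_awakenings)
    # Counting per experiment gives roughly 1/2
    print("heads per experiment:", heads_experiments / n_experiments)

simulate()
```

The simulation just restates the point above: the two answers come from dividing the same count of heads by two different denominators (awakenings versus experiments).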