I'll first explain how I see expected value, because I'm not sure my definition matches the widely accepted one.
If I have a 50% chance to win $10, I picture two alternative universes, identical except that in one of them I win $10 and in the other I win nothing. I then treat the 50% chance as a 100% chance of being in both of them, divided by two. If winning $10 would save me from an hour of work, then divided by two that's 30 minutes of work. In virtually all cases involving small sums of money, you can simply multiply the probability by the money (here, 0.5 × $10 = $5). The exceptions are cases like this: I'm dying of an illness, I can't afford treatment, I have all the money I need except for the last $10, and there is no other way to obtain it. So if there's a 30% chance to save 10 people's lives, that's the same as saving 3 lives.
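To make the arithmetic concrete, here's a tiny sketch in Python (the `expected_value` helper is just something I'm making up for illustration, not a standard function):

```python
def expected_value(outcomes):
    """outcomes is a list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# 50% chance to win $10, 50% chance to win nothing -> worth $5 on average
print(expected_value([(0.5, 10), (0.5, 0)]))  # 5.0

# 30% chance to save 10 lives, 70% chance to save none -> 3 lives "in expectation"
print(expected_value([(0.3, 10), (0.7, 0)]))  # 3.0
```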
If you have no idea what I'm talking about, then at least you can see my problem firsthand: I find it hard to explain this idea to people, and with some of them it's impossible.
I'm not even sure the idea is correct. I once posted it on a math forum asking for evidence, but I didn't find any. So, can someone confirm whether it's true, ideally with some justification?
And my main question: how can I explain this in a way that people will understand as easily as possible?
(It's possible I haven't made my meaning clear; I'll check this thread later, and if that turns out to be the case, I'll edit the post to add more examples and try to clarify and simplify.)
Your understanding of mathematical expectation seems accurate, though the wording could be simplified a bit. I don't think that you need the "many worlds" style exposition to explain it.
One common way of thinking of expected values is as a long-run average: if I keep playing a game with an expected loss of $10 per play, then in the long run it becomes more and more probable that my average loss will be close to $10 per game.
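Here's a rough sketch of what I mean, using a made-up game whose expected loss is $10 per play (the specific game is just an illustration, not anything from your question):

```python
import random

def play_once():
    # Hypothetical game: 50% chance to lose $30, 50% chance to win $10.
    # Expected value = 0.5 * (-30) + 0.5 * 10 = -10 dollars per play.
    return -30 if random.random() < 0.5 else 10

# As the number of plays grows, the running average tends to settle near -10.
for n in (100, 10_000, 1_000_000):
    results = [play_once() for _ in range(n)]
    print(n, sum(results) / n)
```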
You could write a whole book about what's wrong with this "long-run average" idea, but E. T. Jaynes already did: Probability Theory: The Logic of Science. The most obvious problem is that it means you can't talk about the expected value of a one-off event. I.e., if Dick is pondering the expected value of (time until he completes his doctorate) given his specific abilities and circumstances... well, he's not allowed to if he's a frequentist who treats probabilities and expected values as long-run averages; there is no ensemble here to take the average over.