"Expressing probabilities as percents gets a bit weird, because subtraction doesn't really work like it should."

I don't understand. The difference in expected value is exactly what straight subtraction gives you: moving from a 2% chance to a 3% chance of winning a fixed prize raises the expected payoff by one percentage point of the prize.
Are you implying that the difference between 2% and 3% should instead be computed in odds form, as 1 in (50 − 33) = 1 in 17, or about 5.9%? By that rule, the difference between 100% (1 in 1) and 99.99999% (about 1 in 1.0000001) would come out to 1 in 0.0000001, a "probability" of roughly a billion percent, when the two are really almost exactly the same.
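To make the two rules concrete, here is a quick sketch in Python (the function names and example numbers are mine, purely for illustration):

```python
def straight_difference(p, q):
    """Plain subtraction of probabilities. This is also the change in
    expected value per unit prize: (q - p) * prize."""
    return q - p

def denominator_difference(p, q):
    """The rule questioned above: write each probability as '1 in N'
    (N = 1/p), subtract the Ns, and read the result back as a
    probability. Included only to show how badly it misbehaves."""
    return 1 / (1 / p - 1 / q)   # "1 in (N_p - N_q)", with p < q

for p, q in [(0.02, 0.03), (0.9999999, 1.0)]:
    print(f"{p:.7%} vs {q:.7%}")
    print(f"  straight subtraction: {straight_difference(p, q):.7%}")
    print(f"  denominator rule:     {denominator_difference(p, q):.0%}")
```

The first pair gives 1 percentage point versus roughly 6%, and the second gives 0.00001 percentage points versus nearly a billion percent, which is the reductio above.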
A couple of years ago, Aaron Swartz blogged about what he called the "percentage fallacy": the tendency to judge a discount by its percentage of the price rather than by the absolute amount saved.
He recently followed up with a speculation that this may explain some irrational behaviour normally attributed to hyperbolic discounting: if people judge a delay as a percentage of the total wait rather than in absolute terms, one extra day looks enormous next to "right now" but negligible next to a month.
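Both stories predict the same choices in the textbook example (preferring $100 now over $110 tomorrow, but $110 in 31 days over $100 in 30 days), which is presumably part of why they are hard to tell apart. Here is a rough sketch using the standard one-parameter hyperbolic discount V = A/(1 + kD) and a deliberately crude formalization of the percentage idea; the value k = 0.2/day and the decision rule for the percentage story are my own illustrative assumptions, not Swartz's:

```python
def hyperbolic_value(amount, delay_days, k=0.2):
    """Standard one-parameter hyperbolic discounting: V = A / (1 + k*D).
    k = 0.2 per day is an arbitrary illustrative value."""
    return amount / (1 + k * delay_days)

def waits_hyperbolic(a_soon, d_soon, a_late, d_late, k=0.2):
    """Take the later option iff its discounted value is higher."""
    return hyperbolic_value(a_late, d_late, k) > hyperbolic_value(a_soon, d_soon, k)

def waits_percentage(a_soon, d_soon, a_late, d_late):
    """Crude stand-in for the percentage story: wait only if the
    percentage gain in money beats the percentage growth in waiting
    time. Going from 'now' to any delay counts as an infinite
    percentage increase, so the sooner option always wins there."""
    money_gain = (a_late - a_soon) / a_soon
    delay_growth = float("inf") if d_soon == 0 else (d_late - d_soon) / d_soon
    return money_gain > delay_growth

cases = [
    ("$100 now        vs $110 tomorrow  ", 100, 0, 110, 1),
    ("$100 in 30 days vs $110 in 31 days", 100, 30, 110, 31),
]
for label, a1, d1, a2, d2 in cases:
    print(label,
          "| hyperbolic waits:", waits_hyperbolic(a1, d1, a2, d2),
          "| percentage waits:", waits_percentage(a1, d1, a2, d2))
```

Both rules choose the sooner option in the first case and the later option in the second, so the classic reversal alone cannot distinguish the two explanations.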
Is this a real phenomenon? Is there any research on it? And is there existing evidence that specifically supports the usual hyperbolic-discounting explanation over this one?