Thank you for this formal (and fun!) method to guide (and illustrate/document) decision-making. I think writing it out would help me spot leaps or assumptions and come to better decisions. For instance, I often (in my head) zero out possibilities with extremely low probabilities out of hand, unless their costs are of a similarly extreme order of magnitude.
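That heuristic could be sketched in a few lines of Python. The threshold value and the function name here are my own placeholder assumptions, not anything from the post:

```python
def worth_considering(probability, cost, baseline_cost, threshold=1e-6):
    """Keep a possibility in the model only if its probability is
    non-negligible, or its expected cost (probability * cost) is
    still comparable to the costs already on the table."""
    if probability >= threshold:
        return True
    # Low-probability events survive the cut only when their cost is
    # so large that probability * cost rivals the baseline cost.
    return probability * cost >= baseline_cost

# A one-in-ten-million risk with an enormous cost still gets modeled:
print(worth_considering(1e-7, 1e9, 10))  # True
# ...while the same risk with a modest cost gets zeroed out:
print(worth_considering(1e-7, 100, 10))  # False
```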
...What if the model is giving you an answer you don't like? Well, it means your "system 1" and "system 2" are in conflict! The very first question I asked was which way of making decisions you
In defining A and B as equally valuable, I have to equate the two. That said, it's hard to imagine anything that would be as valuable to a healthy, non-starving child right now as the meal is to the starving child. So if the valuable thing you gave in scenario B were at all marketable, the inefficiency of using it to help B instead of the x starving children it could feed (where x > 1) would make the real pseudo-equation:
utility(a) = utility(b)
cost(b) = x * cost(a)
if x > 1, do a; if x < 1, do b
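With the equal-utility assumption baked in, the rule reduces to a pure cost comparison. A minimal sketch, with x defined as the cost ratio cost(b)/cost(a) (the variable names are mine, not from the original):

```python
def choose(cost_a, cost_b):
    """Given utility(a) == utility(b), pick the cheaper option.
    x is the cost ratio cost(b) / cost(a)."""
    x = cost_b / cost_a
    if x > 1:
        return "a"  # b costs more, so the meal (a) is the better buy
    elif x < 1:
        return "b"
    return "either"  # equal cost and equal utility: indifferent

# If the gift in scenario B could have fed 100 starving children,
# x = 100 and feeding the one starving child wins:
print(choose(cost_a=5, cost_b=500))  # "a"
```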
Here's a dumb question... In the version of this paradox where some agent can perfectly predict the future, why is it meaningful or useful to talk about "decisions" one might make?