Related to: diminishing returns, utility.

I, for example, really don't care that much about trillions of dollars being won in a lottery or offered by an alien AI iff I make 'the right choice'. I mostly deal with things on pretty linear scales, barring sudden gifts from my relatives and Important Life Decisions. So the below was written with trivialities in mind. Why? Because I think we should train our utility-assigning skills just like we train our prior-probability-estimating ones.

However, I am far from certain we should do it exactly this way; maybe it would lead to a shiny new bias. Then again, I vaguely think I already have it, and formalizing it shouldn't make me worse off. I have tried to apply the category 'risk-averse' to myself, but in the end it didn't change my prevailing heuristic: 'everything is reasonable, if you have a sufficient reason'. For example, a pregnant woman should not run if she cares about carrying her child, but even then she should run if the house is on fire. Maybe my estimates of 'sufficient' differ from other people's, but they have served me so far; and setting the particular goal of ridding myself of particular biases seems less instrumentally rational than simply checking how accurate my predictions/impressions/any other kind of actionable thoughts are.

So I drew up this list of utility components and will try it out at my leisure, tweaking it ad hoc and paying for my mistakes with time, money, and health.

Utility of a given item/action for a given owner/actor = produced value (PV) + reduced cost (RC) + saved future opportunities (SFO) + fun (F).

PV points: -2 if the action/item 'takes from tomorrow'*, -1 if 'harmful' only within the day, 0 if it gives zero on net, 1 if useful within the day, 2 if it 'gives to tomorrow'.

*'tomorrow' means the foreseeable future :)

RC points: -3 if it takes from the overall amount of money I have, less the *really* last-resort stash; -2 if it takes more than one day's budget; -1 if it takes from one day's budget; 0 if zero on net; 1 if it saves within the day (like 'saved on a ticket, might buy candy'); 2 if it saves for 'tomorrow' on net.

SFO points: -2 if 'really sucks', -1 if no, 0 if dunno, 1 if yes

F points: -1 if no, 0 if okay, 1 if yes, 2 if hell yes.
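To make the bookkeeping concrete, here is a minimal Python sketch of the scoring scheme above; the function name, the allowed-range table, and the validation are my own framing, not part of the scheme itself.

```python
# A minimal sketch of U = PV + RC + SFO + F using the point scales above.

ALLOWED = {
    "pv":  range(-2, 3),   # produced value: -2 .. 2
    "rc":  range(-3, 3),   # reduced cost: -3 .. 2
    "sfo": range(-2, 2),   # saved future opportunities: -2 .. 1
    "fun": range(-1, 3),   # fun: -1 .. 2
}

def utility(pv: int, rc: int, sfo: int, fun: int) -> int:
    """U(item/action) = PV + RC + SFO + F."""
    for name, score in (("pv", pv), ("rc", rc), ("sfo", sfo), ("fun", fun)):
        if score not in ALLOWED[name]:
            raise ValueError(f"{name}={score} is off its point scale")
    return pv + rc + sfo + fun
```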

U(bout of flu) = -2 - 3 + 0 - 1 = -6. Even if I have the flu, I might do research or call a friend or do something useful if it's not very bad, and then it will be only -5. On the other hand, I might get pneumonia, which really sucks, and then it will be -8. Knowing this, when I feel myself going under, I can 1) make sure I don't get pneumonia, and 2) go through the low-effort stuff I keep labelling 'slow-day stuff'.
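For what it's worth, the flu arithmetic checks out against the sketch above (the scenario labels are mine):

```python
print(utility(pv=-2, rc=-3, sfo=0, fun=-1))   # -6: baseline bout of flu
print(utility(pv=-1, rc=-3, sfo=0, fun=-1))   # -5: some 'slow-day' work gets done
print(utility(pv=-2, rc=-3, sfo=-2, fun=-1))  # -8: pneumonia 'really sucks' (SFO -2)
```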

U(room of a house) = use + status - maintenance = U(weighted activities of, well, life) + U(weighted signalling activities, like polishing family china) - U(weighted repair activities).

U(route) = f(weather, price, time, destination, health, 'carrying' potential, changeability on short notice, explainability to somebody else) = U(clothes) + U(activities during commute) + U(shopping/exchanging things/..) + U(emergencies) + U(rescue missions).
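Unlike the flu case, the room and route cases are weighted sums of sub-utilities rather than fixed point scales. One way to write that down, with entirely made-up weights and activities, might be:

```python
def weighted_utility(components):
    """Sum of weight * sub-utility over named (weight, utility) components."""
    return sum(weight * u for weight, u in components.values())

# Hypothetical room scores: use and status count for, maintenance against.
room = weighted_utility({
    "everyday use":    (0.5, 2.0),   # weighted activities of, well, life
    "polishing china": (0.3, 1.0),   # signalling
    "repairs":         (0.2, -2.0),  # maintenance enters with a negative sign
})
print(room)  # 0.9
```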

What do you think? 

1 comment:

I think utilitarianism is just the gateway economic philosophy for people who aren't yet comfortable measuring the value of human lives in dollars. You could eliminate the unnecessary abstraction of "utility" and measure dollar cost/value instead (which has a fairly straightforward translation into other values you may care about, vis-à-vis the market giving you translation prices), using opportunity costs and revealed preference to consider the issues at hand.