I've been doing thought experiments involving a utilitometer: a device capable of measuring the utility of the universe, including sums-over-time and counterfactuals (what-if extrapolations), for any given utility function, even one specified as vaguely as "what I value." Things this model ignores: nonutilitarianism, complexity, contradictions, the unknowability of true utility functions, the inability to simulate and measure counterfactual universes, etc.
Unfortunately, I believe I've run into a pathological mindset from thinking about this utilitometer. Given the abilities of the device, you'd want to input your utility function, take a sum-over-time from the beginning to the end of the universe, and start checking counterfactuals ("I buy a new car", "I donate all my money to nonprofits", "I move to California", etc.) to see whether the total goes up or down.
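To make that interface concrete, here's a toy Python sketch of what I have in mind. Everything here is hypothetical: `simulate_universe` stands in for the impossible part (perfectly extrapolating a counterfactual universe from an intervention), and the seeded random "universes" are placeholders, not a proposal for how such a device could actually work.

```python
import random

def what_i_value(state):
    # Toy stand-in for "my true utility function": here, just the state itself.
    return state

def simulate_universe(intervention):
    # Stand-in for the impossible part: a perfect extrapolation of the
    # counterfactual universe in which the intervention happens. Seeding on
    # the intervention gives each counterfactual a fixed toy history.
    rng = random.Random(str(intervention))
    return [rng.gauss(0, 1) for _ in range(1000)]  # 1000 "moments"

def utilitometer(utility_fn, intervention=None):
    # Sum-over-time of the utility function across the whole universe.
    return sum(utility_fn(s) for s in simulate_universe(intervention))

baseline = utilitometer(what_i_value)
for action in ["buy a new car", "donate to nonprofits", "move to California"]:
    delta = utilitometer(what_i_value, intervention=action) - baseline
    print(f"{action}: {delta:+.2f}")
```

Note that the entire difficulty of this post lives in the two stubs: `what_i_value` (what to value) and `simulate_universe` (when and how to measure).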
It seems quite obvious that the sum at the end of the universe is the measure that makes the most sense, and I can't see any reason for measuring at the end of an action, as typical discussions of utility do. Here's an example: "The expected utility of moving to California is negative due to the high cost of living and the fact that I would not have a job." But a sum over all time might show the move was positive utility, because I meet someone, or do something, or learn something that improves the rest of my life; without the utilitometer, I would have missed all of those add-on effects. The device lets me fill in all of the unknown details and unintended consequences.
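A made-up numerical version of the California example, just to show the sign flip (the numbers are arbitrary, chosen only for illustration): the move looks bad when measured at the end of the action, but positive when summed over the horizon that includes its downstream effects.

```python
# Invented yearly utility deltas from the move: three bad years (high cost
# of living, no job), then a chance meeting that pays off for decades.
yearly_delta = [-5, -4, -3] + [2] * 40

after_action = sum(yearly_delta[:3])   # measured at the end of the action
full_horizon = sum(yearly_delta)       # the utilitometer's sum-over-time

print(after_action)  # -12: the move looks like a clear mistake
print(full_horizon)  # 68: the move was one of the better options
```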
Where this thinking becomes a problem is when I realize I have no such device but desperately want one, so that I can incorporate the unknown and the unintended and know what path I should take to maximize my life, rather than having the short, narrow view of the future I have now. In essence, it places higher utility on 'being good at calculating expected utility' than on almost any other action I could take. If I could just build a true utilitometer that measures everything, the expected utility would be enormous! ("push button to improve universe"). And even incremental steps along the way could have amazing payoffs.
Even though a utilitometer as described is impossible, thinking about it has altered my values: I now place steps toward creating one above other, seemingly more realistic options (buying a new car, moving to California, etc.). I previously asked the question, "How much time and effort should we put into improving our models and predictions, given that we will have to model and predict the answer to this question?" and acknowledged it was circular and unanswerable. The pathology comes from entering the circle and starting a feedback loop; anything less than perfect prediction means wasting the entire future.
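The circularity is easiest to see written as code. In this sketch (all names hypothetical), deciding how much effort to spend on a prediction requires a prediction of its own, of exactly the same form, so the analysis never bottoms out in a concrete action:

```python
def predict(question, modeling_effort):
    # Stub; never actually reached.
    return 0.0

def optimal_modeling_effort(question):
    # To answer the question, first decide how much effort to spend
    # improving the model and the prediction -- which is itself a
    # question of the same form, with no base case.
    effort = optimal_modeling_effort("how much effort for: " + question)
    return predict(question, effort)

try:
    optimal_modeling_effort("should I move to California?")
except RecursionError:
    print("the analysis never terminates in a concrete action")
```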
From the number of intuitively obvious answers to this post, I'm beginning to think that others just don't care about the sorts of problems I'm interested in. (Likely alternative: I suck at explaining them.) I see "when to measure (predicting the future utility of actions)" as one of the fundamental flaws of current theory, but everyone else seems to just say "you measure when you calculate that you should," as if they have some fully functioning ability to step out of the analysis/prediction phase and take concrete action. I don't understand that.
This flows into the other main problem I have: "what to value (crafting the proper utility function)". Several times I've been told that we do not create the function but rather discover it, in which case I reformulate the problem as "setting the proper instrumental goals (achieving ambiguous or fluctuating terminal values)".
When you're not even sure[1] what it is you want, and you're not sure[1] that doing a particular thing will lead to [very long term] positive results in the direction you want, why take any action other than research? Judgment under uncertainty is extraordinarily difficult for me.
[1] Please note that this use of "not sure" is meant along the lines of wild utility fluctuation in both positive and negative directions due to unintended consequences, unknown results, and random events outside of your control. There are many ways in which short-term benefits are outdone by long-term detriments, which are then negated by even longer-term benefits, in nearly impossible-to-predict patterns. I see almost every action as useless static noise, given X years of consequences.
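As a toy model of that footnote (the assumptions are mine: an action's consequences modeled as an unbiased random walk of yearly utility deltas), the cumulative value of an action keeps changing sign, so whether it "was worth it" depends almost entirely on where you stop summing:

```python
import random

rng = random.Random(0)
total, prev, sign_flips = 0.0, 0.0, 0
for year in range(1, 101):
    total += rng.gauss(0, 1)  # this year's net unintended consequences
    if prev != 0.0 and (total > 0) != (prev > 0):
        sign_flips += 1       # "worth it" flipped to "not worth it", or back
    prev = total

print(f"after 100 years: {total:+.2f} total utility, "
      f"{sign_flips} sign changes along the way")
```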
If almost every action is static noise apart from its predict...