I know this idea might sound a little weird at first, so please hear me out.
A couple of weeks ago I was pondering decision problems where a human decision maker has to choose between two acts that lead to two "incomparable" outcomes. I thought: if outcome A is not preferred to outcome B, and outcome B is not preferred to outcome A, then surely the decision maker is indifferent between the two outcomes, right? But if that's the case, the decision maker should be able to just flip a coin to decide. Not only that: adding even a tiny amount of extra value to one of the outcomes should then always make that outcome preferred. So why can't a human decision maker simply make up their mind about their preferences between "incomparable" outcomes before they're forced to choose between them? Also, if a decision maker really is indifferent between the two outcomes, they should be able to know that ahead of time and have a plan for deciding, such as flipping a coin. And if they really are indifferent, they shouldn't regret or doubt their decision before an outcome even occurs, no matter which act they choose. Right?
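The distinction at issue here can be made concrete with a toy model. This sketch (my own construction; the outcomes and numbers are purely illustrative) models preferences as a deliberately incomplete, Pareto-style relation over two dimensions of value, and applies the "tiny amount of extra value" test from above: under genuine indifference, sweetening one option should break the tie, but under incomparability it need not.

```python
# Toy model (illustrative, not from the post): preferences as a strict
# partial order over outcomes with two dimensions of value.

def prefers(a, b):
    """Strict preference: a is preferred to b iff a is at least as good on
    every dimension and strictly better somewhere (Pareto dominance)."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def indifferent(a, b):
    # Under a *complete* ordering, "neither is preferred" would have to
    # mean indifference. Here it may just mean incomparability.
    return not prefers(a, b) and not prefers(b, a)

# Hypothetical outcomes: (money, free_time)
career = (9, 2)
leisure = (2, 9)

assert indifferent(career, leisure)   # neither is preferred to the other

# The small-improvement test: sweeten one option slightly.
sweetened = (career[0] + 0.1, career[1])
print(prefers(sweetened, leisure))    # False -- the options stay incomparable
```

If `career` and `leisure` were truly tied in value, the sweetened version should now win; the fact that it doesn't is exactly what separates incomparability from indifference in this toy setup.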
I thought that maybe the human decision maker has multiple utility functions, and that when you try to combine them into a single function, some parts of the originals don't translate well. Like some sort of discontinuity that corresponds to "incomparable" outcomes, or something. Granted, it's been a while since I've taken calculus, so I'm not really sure how that would look on a graph.
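One way to see what "doesn't translate well" might mean: if you collapse two sub-utilities into one number with a weighted sum, any fixed weighting forces a verdict, but the verdict flips with the (arbitrary) weights. A minimal sketch, with sub-utilities and numbers of my own invention:

```python
# Sketch (my construction): two "sub-utilities" collapsed into one number.
u_money = {"career": 9, "leisure": 2}
u_time  = {"career": 2, "leisure": 9}

def combined(option, w):
    # w is the arbitrary weight placed on money vs. free time
    return w * u_money[option] + (1 - w) * u_time[option]

for w in (0.4, 0.5, 0.6):
    winner = max(("career", "leisure"), key=lambda o: combined(o, w))
    print(w, winner)
# w = 0.4 -> leisure; w = 0.5 -> exact tie (max just picks the first);
# w = 0.6 -> career
```

The incomparability in the two-function picture doesn't survive the collapse: the single combined function always ranks the options, but which ranking you get depends on a weight the original preferences never specified.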
I read Yudkowsky's "Thou Art Godshatter" a couple of months ago, and at one point it says "one pure utility function splintered into a thousand shards of desire". That makes it sound like the "shards of desire" are actually a bunch of different utility functions.
I'd like to know what others think of this idea. Strengths? Weaknesses? Implications?
Indeed, sometimes whether two options are incomparable depends on how much computational power your brain is willing to spend calculating and comparing the differences. Things that are incomparable might become comparable if you think about them more. However, when one is faced with the need to decide between two options, one has to use heuristics. For example, in his book "Predictably Irrational", Dan Ariely writes:
So it seems that one possible heuristic is to match your options against yet more alternatives; the option that wins more (and loses fewer) matchups is declared the winner. As you can see, the result this particular heuristic produces depends on which alternatives the initial options are compared against. Therefore this heuristic is probably not good enough to reveal which option is "truly better" unless, perhaps, the choice of alternatives is somehow "balanced" (in some sense that I am not sure how to define exactly).
It seems to me that in many cases, by employing more (and better) heuristics, one can, perhaps after quite a lot of time spent deliberating, get closer to finding out which option is "truly better". However, the edge case is also interesting. As you can see, the decision is not made instantly; it might take a lot of time. What if your preferences are less stable over a given period of time than your computational power allows you to resolve during that period? Can two options be said to be equal if your own brain does not have enough computational power to consistently distinguish between them, seemingly even in principle, even though a more powerful brain could make that distinction (given the same level of instability of preferences)? What about creatures that have very little computational power? Furthermore, aren't preferences themselves usually defined in terms of decision making? At the moment I am a bit confused about this.
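The computational-budget point can be illustrated with a toy simulation (my own construction; the utilities, noise level, and sample counts are all invented): if each utility "reading" is noisy and deliberation amounts to averaging a limited number of readings, two options whose true utilities differ by less than the noise are practically indistinguishable to a small budget, while a large budget separates them reliably.

```python
# Toy illustration (my construction) of the edge case above: noisy utility
# readings plus a limited deliberation budget.
import random

random.seed(0)

def noisy_utility(true_u, noise=1.0):
    return true_u + random.gauss(0, noise)

def choose(u_a, u_b, samples):
    # Decide by comparing sample means -- a crude stand-in for deliberation.
    mean_a = sum(noisy_utility(u_a) for _ in range(samples)) / samples
    mean_b = sum(noisy_utility(u_b) for _ in range(samples)) / samples
    return "A" if mean_a > mean_b else "B"

# True utilities differ by 0.05 -- far below the noise level of 1.0.
trials = 200
rates = {}
for samples in (3, 2000):
    wins_a = sum(choose(5.05, 5.00, samples) == "A" for _ in range(trials))
    rates[samples] = wins_a / trials
    print(samples, rates[samples])
# With 3 samples per option the choice is close to a coin flip; with 2000
# samples option A wins the large majority of trials.
```

Nothing about the options changes between the two budgets; only the deliberator's computational power does, which is the sense in which "equal" here may be a fact about the brain rather than about the options.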