The preferences of any actual human seem to form a directed graph, but that graph is incomplete and can contain cycles.
I suspect you are not talking about neurons in the brain, but I have no idea what you do mean...
Any way to transform it into a complete acyclic graph (any pair of situations comparable, no preference loops) must differ from the original graph somewhere. Different algorithms will destroy different facets of actual human preference, but there's certainly no algorithm that can preserve all of it; that much we can consider already proven beyond reasonable doubt. It's not obvious to me that there's a single, well-defined, canonical way to perform this surgery.
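The two conditions named above (every pair of situations comparable, no preference loops) can be checked mechanically on a toy preference graph. The sketch below is my own illustration with made-up situations and preferences, not anything from the discussion:

```python
from itertools import combinations

# Toy model (illustrative only): situations are labels, and
# (a, b) in prefers means a is strictly preferred to b.
situations = ["A", "B", "C", "D"]
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # a 3-cycle; D is incomparable

def is_complete(situations, prefers):
    """Every pair of distinct situations is comparable in some direction."""
    return all((a, b) in prefers or (b, a) in prefers
               for a, b in combinations(situations, 2))

def has_cycle(situations, prefers):
    """Depth-first search for a directed cycle in the preference graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in situations}

    def visit(u):
        color[u] = GRAY
        for a, b in prefers:
            if a == u:
                if color[b] == GRAY or (color[b] == WHITE and visit(b)):
                    return True
        color[u] = BLACK
        return False

    return any(visit(s) for s in situations if color[s] == WHITE)

print(is_complete(situations, prefers))  # False: D is incomparable to everything
print(has_cycle(situations, prefers))    # True: A > B > C > A
```

Any "surgery" that makes this graph complete and acyclic must add or delete edges, which is exactly the sense in which it must differ from the original somewhere.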
By the Church-Turing thesis, you can construct an artifact behaviorally indistinguishable from a human, even one based on expected utility maximization (even though that's an inadequate thing to do). Whatever you can expect of a real human, including answering hypothetical questions, you can expect from this construction.
Here I talk about encoding actual human preferences over all possible futures, not designing an algorithm that will yield one good future. For example, an algorithm that gives one good future may never actually have to worry about torture vs dust specks. So it's not clear that we should worry about it either.
Algorithms are strategies: they are designed to act depending on observations. When you design an algorithm, you design behaviors for all possible futures. Other than offering this remark, I don't know what to do with your comment...
Nodes in the graph are hypothetical situations, and arrows are preferences.
That was the epigraph Eliezer used on a perfectly nice post reminding us to shut up and multiply when valuing human lives, rather than relying on the (roughly) logarithmic amount of warm fuzzies we'd receive. Implicit in the expected utility calculation is the idea that the value of human lives scales linearly: indeed, Eliezer explicitly says, "I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable."
However, in a comment on Wei Dai's brilliant recent post comparing boredom and altruism, Vladimir Nesov points out that "you can value lives sublinearly" and still make an expected utility calculation rather than relying on warm-fuzzy intuition. This got me thinking about just what the functional form of U(N), where N is the number of living persons, might be.
Attacking from the high end (the "marginal" calculation), it seems to me that the utility of human lives is actually superlinear to a modest degree[1]; that is, U(N+1)-U(N) > U(N)-U(N-1). As an example, consider a parent and young child. If you allow one of them to die, not only do you end that life, but you make the other one significantly worse off. But this generalizes: the marginal person (on average) produces positive net value to society (through being an employee, friend, spouse, etc.) in addition to accruing their own utilons, and economies of scale dictate that adding another person allows a little more specialization and hence a little more efficiency. That is, the larger the pool of potential co-workers/friends/spouses, the pickier everyone can be, and the better the matches they're likely to end up with. Steven Landsburg (in Fair Play) uses a version of this argument to conclude that children have positive externalities and therefore people on average have fewer children than would be optimal.
In societies with readily available birth control, that is. And naturally, in societies which are insufficiently technological for each marginal person to be able to make a contribution, however indirect, to (e.g.) the food output, it's quite easy for the utility of lives to be sublinear. This is the classical Malthusian problem, and it is still very much with us in the poorer areas of the world. (In fact, I was recently informed by a humor website that the Black Death had some very positive effects on medieval Europe.)
Now let's consider the other end of the problem (the "inductive" calculation). As an example let's assume that humanity has been mostly supplanted by AIs or some alien species. I would certainly prefer to have at least one human still alive: such a person could represent humanity (and by extension, me), carry on our culture, values and perspective on the universe, and generally push for our agenda. Adding a second human seems far less important—but still quite important, since social interactions (with other humans) are such a vital component of humanity. So adding a third person would be less important still, and so on. A sublinear utility function.
So are the marginal calculation and the inductive calculation inconsistent? I don't think so: it's perfectly possible for a utility function's first derivative to be complicated and non-monotonic. The two calculations simply pick out different terms of the function, which dominate in different regimes. Moreover, the linear approximation is probably good enough for most ordinary circumstances; let's just remember that it is an approximation.
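To make the reconciliation concrete, here is a toy utility function of my own construction (the coefficients are arbitrary, not a claim about real values): a log term dominates at small N, giving the sublinear "inductive" regime where the first few survivors matter most, while a small quadratic term dominates at large N, giving the superlinear "marginal" regime of economies of scale.

```python
import math

# Illustrative coefficients only -- chosen so the two regimes are visible.
A, B, C = 10.0, 1.0, 0.001

def U(n):
    """Toy utility of n living persons: log + linear + small quadratic."""
    return A * math.log(1 + n) + B * n + C * n * n

def marginal(n):
    """Value of adding one more person to a population of n: U(n+1) - U(n)."""
    return U(n + 1) - U(n)

# Small N (inductive regime): each early person adds less than the last.
print(marginal(1) > marginal(2))        # True
# Large N (marginal regime): the marginal person adds slightly more.
print(marginal(1000) < marginal(1001))  # True
```

One function, one expected utility calculation, yet the marginal value of a life first falls and then rises; which term you notice depends on which regime you probe.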
[1] Note that in these arguments I'm averaging over "ability to create utility" (not to mention "capacity to experience utility").