by [anonymous]
31st May 2010


This is a post about moral philosophy, approached with a mathematical metaphor.

Here's an interesting problem in mathematics.  Let's say you have a graph, made up of vertices and edges, with weights assigned to the edges.  Think of the vertices as US cities and the edges as roads between them; the weight on each road is the length of the road. Now, knowing only this information, can you draw a map of the US on a sheet of paper? In mathematical terms, is there an isometric embedding of this graph in two-dimensional Euclidean space?
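
To make the setup concrete, here is a minimal sketch in Python (with made-up cities and distances, not real data): the input is just a list of edge lengths, and a candidate "map" counts as a solution when it reproduces every listed length.

```python
import math

# Hypothetical input (made-up numbers): road lengths between some pairs of cities.
distances = {
    ("NYC", "Philadelphia"): 95,
    ("NYC", "Boston"): 215,
    ("Philadelphia", "Baltimore"): 100,
    ("Baltimore", "Washington"): 40,
}

def is_isometric_embedding(coords, distances, tol=5.0):
    """Check whether a 2D placement reproduces every given edge length (within tol)."""
    for (a, b), d in distances.items():
        (xa, ya), (xb, yb) = coords[a], coords[b]
        if abs(math.hypot(xa - xb, ya - yb) - d) > tol:
            return False
    return True

# One candidate "map"; the real question is whether such a placement exists at all,
# and how to find it when only the distance list is given.
candidate = {
    "NYC": (0, 0),
    "Philadelphia": (-80, -51),
    "Boston": (190, 100),
    "Baltimore": (-170, -90),
    "Washington": (-200, -117),
}
print(is_isometric_embedding(candidate, distances))   # True for this made-up data
```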

When you think about this for a minute, it's clear that this is a problem about reconciling the local and the global. Start with New York and all its neighboring cities. You have a sort of star shape. You can certainly draw this on the plane; in fact, you have many degrees of freedom; you can arbitrarily pick one way to draw it. Now start adding more cities and more roads, and eventually the degrees of freedom diminish. If you made the wrong choices earlier on, you might paint yourself into a corner and have no way to keep all the distances consistent when you add a new city. This is known as a "synchronization problem." Getting it to work locally is easy; getting all the local pieces reconciled with each other is hard.

This is a lovely problem and some acquaintances of mine have written a paper about it.  (http://www.math.princeton.edu/~mcucurin/Sensors_ASAP_TOSN_final.pdf)  I'll pick out some insights that seem relevant to what follows. First, some obvious approaches don't work very well. It might be thought we want to optimize over all possible embeddings, picking the one that has the lowest error in approximating distances between cities. You come up with a "penalty function" that's some sort of sum of errors, and use standard optimization techniques to minimize it. The trouble is, these approaches tend to work spottily -- in particular, they sometimes get stuck in local rather than global optima (so that the error can be quite high after all).
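
Here is a rough sketch of that penalty-function idea on a toy instance (synthetic points and my own choice of optimizer, not the specific methods the paper compares): sum the squared errors over the given edges and hand the sum to a generic optimizer. Run from different random starting layouts, it can settle into different final configurations, which is the local-optimum problem in miniature.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy instance: 6 points with a known layout; keep only some pairwise distances as "roads".
true_pts = rng.uniform(0, 10, size=(6, 2))
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
lengths = {e: np.linalg.norm(true_pts[e[0]] - true_pts[e[1]]) for e in edges}

def stress(flat):
    """Penalty function: sum of squared errors over the given edge lengths."""
    pts = flat.reshape(-1, 2)
    return sum((np.linalg.norm(pts[i] - pts[j]) - d) ** 2
               for (i, j), d in lengths.items())

# The same optimizer, started from different random layouts, can end up in
# different local optima with different residual error.
for seed in range(3):
    start = np.random.default_rng(seed).uniform(0, 10, size=true_pts.size)
    result = minimize(stress, start, method="BFGS")
    print(f"start {seed}: residual stress = {result.fun:.4f}")
```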

The approach in the paper I linked is different. We break the graph into overlapping smaller subgraphs, so small that they can only be embedded in one way (that's called rigidity) and then "stitch" them together consistently.  The "stitching" is done with a very handy trick involving eigenvectors of sparse matrices.  But the point I want to emphasize here is that you have to look at the small scale, and let all the little patches embed themselves as they like, before trying to reconcile them globally.
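
The paper's stitching uses an eigenvector-based synchronization over sparse matrices; the sketch below is a much cruder stand-in, meant only to illustrate the idea of reconciling patches. Two patches, each embedded in its own arbitrary frame, are merged by a least-squares (Procrustes) fit over their shared vertices. The data and patch layout are invented for the example.

```python
import numpy as np

def align(patch_src, patch_dst, shared):
    """Rigidly map patch_src onto patch_dst's frame using their shared vertices
    (orthogonal Procrustes: best-fit rotation + translation in the least-squares sense)."""
    A = np.array([patch_src[k] for k in shared])
    B = np.array([patch_dst[k] for k in shared])
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = U @ Vt
    return {k: (v - ca) @ R + cb for k, v in patch_src.items()}

# "True" positions, used here only to manufacture two mutually consistent patches.
true_pos = {"a": (0.0, 0.0), "b": (2.0, 0.0), "c": (1.0, 1.5), "d": (3.0, 1.0), "e": (4.0, 2.5)}

# Patch 1 happens to sit in the global frame; patch 2 was embedded in its own
# arbitrary frame (rotated and shifted), as each rigid subgraph would be.
theta = 1.1
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
patch1 = {k: np.array(true_pos[k]) for k in ("a", "b", "c", "d")}
patch2 = {k: np.array(true_pos[k]) @ rot + np.array([5.0, -3.0]) for k in ("b", "c", "d", "e")}

stitched = align(patch2, patch1, shared=["b", "c", "d"])
for k, v in stitched.items():
    print(k, np.round(v, 3))   # b, c, d land back on patch1; e is placed consistently
```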

Now, rather daringly, I want to apply this idea to ethics.  (This is an expansion of a post people seemed to like: http://lesswrong.com/lw/1xa/human_values_differ_as_much_as_values_can_differ/1y )

The thing is, human values differ enormously. The diversity of values is an empirical fact. The Japanese did not have a word for "thank you" until the Portuguese gave them one; this is a simple example, but it absolutely shocked me, because I thought "thank you" was a universal concept. It's not. (Edited for lack of fact-checking; see the comments.) And we do not all agree on what virtues are, or what the best way to raise children is, or what the best form of government is. There may be no principle that all humans agree on -- dissenters who believe that genocide is a good thing may be pretty awful people, but they undoubtedly exist. Creating the best possible world for humans is a synchronization problem, then -- we have to figure out a way to balance values that inevitably clash. Here, nodes are individuals, each individual is tied to their neighbors, and a choice of embedding is a particular action. The worse the embedding near an individual fits the "true" underlying manifold, the greater the "penalty function" and the more miserable that individual is, because the action goes against what they value.

If we can extend the metaphor further, this is a problem for utilitarianism.  Maximizing something globally -- say, happiness -- can be a dead end.  It can hit a local maximum -- the maximum for those people who value happiness -- but do nothing for the people whose highest value is loyalty to their family, or truth-seeking, or practicing religion, or freedom, or martial valor.  We can't really optimize, because a lot of people's values are other-regarding: we want Aunt Susie to stop smoking, because of the principle of the thing.  Or more seriously, we want people in foreign countries to stop performing clitoridectomies, because of the principle of the thing.  And Aunt Susie or the foreigners may feel differently.  When you have a set of values that extends to the whole world, conflict is inevitable.

The analogue to breaking down the graph is to keep values local. You have a small star-shaped graph of people you know personally and actions you're personally capable of taking. Within that star, you define your own values: what you're ready to cheer for, work for, or die for. You're free to choose those values for yourself -- you don't have to drop them because they're perhaps not optimal for the world's well-being. But beyond that radius, opinions are dangerous: both because you're more ignorant about distant issues, and because you run into this problem of globally reconciling conflicting values. Reconciliation is only possible if everyone is minding their own business, if things are really broken down into rigid components. It's something akin to what Thomas Nagel said against utilitarianism:

"Absolutism is associated with a view of oneself as a small being interacting with others in a large world.  The justifications it requires are primarily interpersonal. Utilitarianism is associated with a view of oneself as a benevolent bureaucrat distributing such benefits as one can control to countless other beings, with whom one can have various relations or none.  The justifications it requires are primarily administrative." (Mortal Questions, p. 68.)

Anyhow, trying to embed our values on this dark continent of a manifold seems to require breaking things down into little local pieces. I think of that as "cultivating our own gardens," to quote Candide. I don't want to be so confident as to have universal ideologies, but I think I may be quite confident and decisive in the little area that is mine: my personal relationships; my areas of expertise, such as they are; my own home and what I do in it; everything that I know I love and is worth my time and money; and bad things that I will not permit to happen in front of me, so long as I can help it.  Local values, not global ones. 

Could any AI be "friendly" enough to keep things local?
 

Comments (49)

The Japanese did not have a word for "thank you" until the Portuguese gave them one.

This is apocryphal, as can be seen on the Wikipedia page for Portuguese loanwords into Japanese.

I should add, though, that there are some surprising exceptions to universality out there, like the lack of certain (prima facie important) colour terms or numbers.

However, as someone who used to study languages passionately, I came to reject the stronger versions of the Sapir-Whorf hypothesis (language determines thought). One gradually comes to see that, though differences can be startling, the really odd omissions are often just got around by circumlocutions. For example, Russian lacks the highly specific past tenses English has (was, have been, had been, would have been, would be), but if there is any actual confusion, they just get around it with a few seconds' explanation. Or in the Japanese example, even supposing it were true, I would expect some other formalized way of showing gratitude; hence "thanks" the concept would live on even if there was no word used in similar contexts to English "thanks."

Indeed; for another example, classical Latin did not use words for "yes" and "no". A question such as "Do you see it?" would have been answered with "I see it"/"I don't see it".

Alicorn's note about Chinese probably explains the basis for the eventual "Do not want!" meme, which came from a reverse translation of a crude Chinese translation of Darth Vader saying "Noooooooooooooo!" Link

The Chinese translator probably looked up what "No" means. Translation dictionaries, in turn, recognize that "No" doesn't have a direct translation, so they list several options, given the context. In the case that "no" is a refusal of something, the translation in Chinese should take the form "[I] do not want [that]". (If they have to list only one option, they pick the most likely meaning, and that may have been it.)

Then, clumsily using this option, the Chinese translator picked something that translates back as "do not want".

Chinese does something similar. "Do you see that?" would be answered affirmatively by saying the word for "See", or negatively by saying "Don't see". In some contexts, the words for "correct" and "incorrect" can be used a bit like "yes" and "no".


The common part is that in both Latin and Chinese the subject can be/is implicitly included in the verb. Using "I" explicitly, at least in Chinese, would emphasize something along the lines of "but you may not" (due to whatever). (This is at least what I've been told; standard disclaimer on insufficient knowledge applies.)

Or in the Japanese example, even supposing it were true, I would expect some other formalized way of showing gratitude; hence "thanks" the concept would live on even if there was no word used in similar contexts to English "thanks."

Very true. "Thank you" doesn't really have a meaning the same way other words do, since it's more of an interjection. When finding the translation for "thank you" in other languages, you just look at what the recipient of a favor says to express appreciation in that language, and call that their "thank you".

Otherwise, you could argue that "Spanish doesn't have a word for thank you -- but hey, on an unrelated note, native Spanish speakers have this odd custom of saying "gratitude" (gracias) whenever they want to thank someone..."

Advising people to have values that are convenient for the purposes of creating a social utility function ought not to move them. E.g. if you already disvalue the genital mutilation of people outside your social web, it ought not weigh morally that such a value is less convenient for the arbiters of overall social utility.

The math certainly sounds interesting, though!

EDIT:

"But if we eat babies the utility function will be a perfect square! C'm on, it'll make the math SO much easier"

The approach in the paper I linked is different. We break the graph into overlapping smaller subgraphs, so small that they can only be embedded in one way (that's called rigidity) and then "stitch" them together consistently. The "stitching" is done with a very handy trick involving eigenvectors of sparse matrices. But the point I want to emphasize here is that you have to look at the small scale, and let all the little patches embed themselves as they like, before trying to reconcile them globally.

Forget the context for a moment - this note is a very general, very useful observation!

Could any AI be "friendly" enough to keep things local?

Any goal, any criterion for action, any ethical principle or decision procedure can be part of how an AI makes its choices. Whether it attaches utility to GDP, national security, or paperclips, it will act accordingly. If it is designed to regard localism as an axiomatic virtue, or if its traits otherwise incline it to agree with Voltaire, then it will act accordingly. The question for FAI designers is not, could it be like that; the question is, should it be like that.

There are optimization problems where a bottom-up approach works well, but sometimes top-down methods are necessary, and in most cases the right method isn't so easily labeled.

If mathematical optimization is a proper analogy (or even framework) for solving social/ethics etc. problems, then the logical conclusion would be: The approach must depend heavily on the nature of the problem at hand.

Locality has its very important place, but I can't see how one could address planet-wide "tragedy of the commons"-type issues by purely local methods.

This post, and the prior comment it refers to, have something to say; but they're attacking a straw-man version of utilitarianism.

Utilitarianism doesn't have to mean that you take everybody's statements about what they prefer about everything in the world and add them together linearly. Any combining function is possible. Utilitarianism just means that you have a function, and you want to optimize it.

Maximizing something globally -- say, happiness -- can be a dead end. It can hit a local maximum -- the maximum for those people who value happiness -- but do nothing for the people whose highest value is loyalty to their family, or truth-seeking, or practicing religion, or freedom, or martial valor.

Then you make a new utility function that takes those values into account.
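
To make "any combining function is possible" concrete, here is a toy sketch with invented people and numbers: the same two candidate worlds get ranked differently depending on whether the social utility function sums everyone's own evaluations or takes the worst-off evaluation (a maximin rule).

```python
# Toy illustration (invented people, values, and numbers): each person evaluates a
# world by their own lights, and the social utility function combines those evaluations.
people = {
    "happiness-valuer": lambda w: w["happiness"],
    "family-loyalist":  lambda w: w["family_time"],
    "truth-seeker":     lambda w: w["knowledge"],
}

def sum_utility(world):
    """One combining function: add up everyone's own evaluation."""
    return sum(evaluate(world) for evaluate in people.values())

def maximin_utility(world):
    """Another: judge a world by its worst-off member's evaluation."""
    return min(evaluate(world) for evaluate in people.values())

world_a = {"happiness": 10, "family_time": 6, "knowledge": 1}
world_b = {"happiness": 5, "family_time": 5, "knowledge": 5}

for name, combine in [("sum", sum_utility), ("maximin", maximin_utility)]:
    best = max([world_a, world_b], key=combine)
    print(name, "prefers", "world_a" if best is world_a else "world_b")
```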

You might not be able to find a very good optimum using utilitarianism. But, by definition, not being a utilitarian means not looking for an optimum, which means you won't find any optimum at all unless by chance.

If you still want to argue against utilitarianism, you need to come up with some other plausible way of optimizing. For instance, you could make a free-market or evolutionary argument, that it's better to provide a free market with free agents (or an evolutionary ecosystem) than to construct a utility function, because the agents can optimize collective utility better than a bureaucracy can, without ever needing to know what the overall utility function is.

I agree with the beginning of your comment. I would add that the authors may believe they are attacking utilitarianism, when in fact they are commenting on the proper methods for implementing utilitarianism.

I disagree that attacking utilitarianism involves arguing for a different optimization theory. If a utilitarian believed that the free market was more efficient at producing utility, then the utilitarian would support it: it doesn't matter by what means that free market, say, achieved that greater utility.

Rather, attacking utilitarianism involves arguing that we should optimize for something else: for instance something like the categorical imperative. A famous example of this is Kant's argument that one should never lie (since it could never be willed to be a universal law, according to him), and the utilitarian philosopher loves to retort that lying is essential if one is hiding a Jewish family from the Nazis. But Kant would be unmoved (if you believe his writings), all that would matter are these universal principles.

If you're optimizing, you're a form of utilitarian. Even if all you're optimizing is "minimize the number of times Kant's principles X, Y, and Z are violated".

This makes the utilitarian/non-utilitarian distinction useless, which I think it is. Everybody is either a utilitarian of some sort, a nihilist, or a conservative, mystic, or gambler saying "Do it the way we've always done it / Leave it up to God / Roll the dice". It's important to recognize this, so that we can get on with talking about "utility functions" without someone protesting that utilitarianism is fundamentally flawed.

The distinction I was drawing could be phrased as between explicit utilitarianism (trying to compute the utility function) and implicit utilitarianism (constructing mechanisms that you expect will maximize a utility function that is implicit in the action of a system but not easily extracted from it and formalized).

There is a meaningful distinction between believing that utility should be agent neutral and believing that it should be agent relative. I tend to assume people are advocating an agent neutral utility function when they call themselves utilitarian since as you point out it is rather a useless distinction otherwise. What terminology do you use to reflect this distinction if not utilitarian/non-utilitarian?

It's the agent neutral utilitarians that I think are dangerous and wrong. The other kind (if you want to still call them utilitarians) are just saying the best way to maximize utility is to maximize utility, which I have a hard time arguing with.

There is a meaningful distinction between believing that utility should be agent neutral and believing that it should be agent relative.

Yes; but I've never thought of utilitarianism as being on one side or the other of that choice. Very often, when we talk about a utility function, we're talking about an agent's personal, agent-centric utility function.

As an ethical system it seems to me that utilitarianism strongly implies agent neutral utility. See the wikipedia entry for example. I get the impression that this is what most people who call themselves utilitarians mean.

I think what you're calling utilitarianism is typically called consequentialism. Utilitarianism usually connotes something like what Mill or Bentham had in mind -- determine each individual's utility function, then construct a global utility function that is the sum/average of the individuals. I say connotes because no matter how you define the term, this seems to be what people think when they hear it, so they bring up the tired old cached objections to Mill's utilitarianism that just don't apply to what we're typically talking about here.

I would argue that deriving principles using the categorical imperative is a very difficult optimization problem and that there is a very meaningful sense in which one is a deontologist and not a utilitarian. If one is a deontologist then one needs to solve a series of constraint-satisfaction problems with hard constraints (i.e. they cannot be violated). In the Kantian approach: given a situation, one has to derive, via moral reasoning, the constraints under which one must act in that situation, and then one must act in accordance with those constraints.

This is very closely related to combinatorial optimization problems. I would argue that often there is a "moral dual" (in the sense of a dual program) where those constraints are no longer treated as absolute: you can assign different costs to each violation and then find a most moral strategy. I think very often we have something akin to strong duality, where the utilitarian dual is equivalent to the deontological problem, but it's an important distinction to remember that the deontologist has hard constraints and zero gradient on their objective function (by some interpretations).

The utilitarian performs a search over a continuous space for the greatest expected utility, while the deontologist (in an extreme case) has a discrete set of choices, from which the immoral ones are successively weeded out.

Both are optimization procedures, and can be shown to produce very similar output behavior but the approach and philosophy are very different. The predictions of the behavior of the deontologist and the utilitarian can become quite different under the sorts of situations that moral philosophers love to come up with.
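
A toy sketch of that contrast, with invented options and penalty weights rather than anything derived from the "moral dual" construction above: the deontologist treats rules as a pure feasibility filter over a discrete set of options, while a softened variant prices each violation and ranks everything.

```python
# Discrete decision set (everything here is invented for illustration): each option
# records which rules it violates and a rough benefit score.
options = {
    "tell the truth, lose the sale": {"violations": [], "benefit": 2},
    "small lie, big donation":       {"violations": ["lying"], "benefit": 8},
    "break promise, help stranger":  {"violations": ["promise-breaking"], "benefit": 5},
}

def deontologist_permissible(options):
    """Pure feasibility: weed out anything that violates a rule; no ranking among the rest."""
    return [name for name, o in options.items() if not o["violations"]]

def soft_utilitarian_choice(options, penalties):
    """Softened version: each violation costs something finite; pick the best net score."""
    def net(o):
        return o["benefit"] - sum(penalties[v] for v in o["violations"])
    return max(options, key=lambda name: net(options[name]))

print("deontologist's permissible set:", deontologist_permissible(options))
print("soft utilitarian picks:",
      soft_utilitarian_choice(options, {"lying": 4, "promise-breaking": 10}))
```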

If one is a deontologist then one needs to solve a series of constraint-satisfaction problems with hard constraints (i.e. they cannot be violated).

If all you require is to not violate any constraints, and you have no preference between worlds where equal numbers of constraints are violated, and you can regularly achieve worlds in which no constraints are violated, then perhaps constraint-satisfaction is qualitatively different.

In the real world, linear programming typically involves a combination of hard constraints and penalized constraints. If I say the hard-constraint solver isn't utilitarian, then what term would I use to describe the mixed-case problem?
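
For the mixed case, here is a minimal linear-programming sketch (invented numbers, using scipy's generic LP solver): one constraint is hard, the other is penalized through a slack variable, so a single solve handles both kinds at once.

```python
from scipy.optimize import linprog

# Toy mixed problem (numbers invented for illustration): maximize 3*x1 + 2*x2 subject to
# a hard budget x1 + x2 <= 1 that can never be violated, plus a "soft" rule x1 <= 0.4
# whose violation is merely penalized (5 per unit of slack s).
# Variables are [x1, x2, s]; linprog minimizes, so the benefits are negated.
c = [-3.0, -2.0, 5.0]
A_ub = [
    [1.0, 1.0, 0.0],    # hard:  x1 + x2  <= 1
    [1.0, 0.0, -1.0],   # soft:  x1 - s   <= 0.4   (s measures how far the rule is broken)
]
b_ub = [1.0, 0.4]
bounds = [(0, 1), (0, 1), (0, None)]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x1, x2, s = result.x
print(f"x1={x1:.2f}, x2={x2:.2f}, rule broken by s={s:.2f}, net objective={-result.fun:.2f}")
# With a heavy enough penalty the soft rule behaves like a hard constraint; lower it
# and the optimizer starts trading rule violations for extra benefit.
```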

The critical thing to me is that both are formalizing the problem and trying to find the best solution they can. The objections commonly made to utilitarianism would apply equally to moral absolutism phrased as a hard constraint problem.

There's the additional, complicating problem that non-utilitarian approaches may simply not be intelligible. A moral absolutist needs a language in which to specify the morals; the language is so context-dependent that the morals can't be absolute. Non-utilitarian approaches break down when the agents are not restricted to a single species; they break down more when "agent" means something like "set".

To be clear I see the deontologist optimization problem as being a pure "feasibility" problem: one has hard constraints and zero gradient (or approximately zero gradient) on the moral objective function given all decisions that one can make.

Of the many, many critiques of utilitarianism, some argue that it's not sensible to actually talk about a "gradient" or marginal improvement in moral objective functions. These range from arguments based on computational constraints (there's no way that you could ever reasonably compute a moral objective function, because the consequences of any activity are much too complicated) to other critiques that argue the utilitarian notion of "utility" is ill-defined and incoherent (hence the moral objective function has no meaning). These sorts of arguments argue against the possibility of soft constraints and moral objective functions with gradients.

The deontological optimization problem, on the other hand, is not susceptible to such critiques because the objective function is constant, and the satisfaction of constraints is a binary event.

I would also argue that the most hard-core utilitarian, in practice, acts pretty similarly to a deontologist. The reason is that we only consider a tiny subspace of all possible decisions, our estimate of the moral gradient will be highly inaccurate over most possible decision axes (I buy the computational-constraint critique), and it's not clear that we have enough information about human experience to compute those gradients. So, practically speaking, we only consider a small number of different ways to live our lives (hence we optimize over a limited range of axes), and the directions we optimize over are, for the most part, not random. Think about how most activists and most individuals who perform any sort of advocacy focus on a single issue.

Also consider the fact that most people don't murder or commit certain forms of horrendous crime. These single-issue, law-abiding types may not think of themselves as deontologists, but a deontologist would behave very similarly to them, since neither attempts to estimate moral gradients over decisions and both treat many moral rules as binary events.

The utilitarian and the deontologist are distinguished in practice in that the utilitarian computes a noisy estimate of the moral gradient along a few axes of their potential decision-space, while everywhere else both think in terms of hard constraints and no gradient on the moral objective. The pure utilitarian is at best a theoretical concept that has no potential basis in reality.

These range from arguments based on computational constraints (there's no way that you could ever reasonably compute a moral objective function, because the consequences of any activity are much too complicated)

This attacks a straw-man utilitarianism, in which you need to compute precise results and get the one correct answer. Functions can be approximated; this objection isn't even a problem.

to other critiques that argue the utilitarian notion of "utility" is ill-defined and incoherent (hence the moral objective function has no meaning).

A utility function is more well-defined than any other approach to ethics. How do a deontologist's rules fare any better? A utility function /provides/ meaning. A set of rules is just an incomplete utility function, where someone has picked out a set of values, but hasn't bothered to prioritize them.

This attacks a straw-man utilitarianism, in which you need to compute precise results and get the one correct answer. Functions can be approximated; this objection isn't even a problem.

Not every function can be approximated efficiently, though. I see the scope of morality as addressing human activity where human activity is a function space itself. In this case the "moral gradient" that the consequentialist is computing is based on a functional defined over a function space. There are plenty of function spaces and functionals which are very hard to efficiently approximate (the Bayes predictors for speech recognition and machine vision fall into this category) and often naive approaches will fail miserably.

I think the critique of utility functions is not that they don't provide meaning, but that they don't necessarily capture the meaning which we would like. The incoherence argument is that there is no utility function which can represent the thing we want to represent. I don't buy this argument mostly because I've never seen a clear presentation of what it is that we would preferably represent, but many people do (and a lot of these people study decision-making and behavior whereas I study speech signals). I think it is fair to point out that there is only a very limited biological theory of "utility" and generally we estimate "utility" phenomenologically by studying what decisions people make (we build a model of utility and try to refine it so that it fits the data). There is a potential that no utility model is actually going to be a good predictor (i.e. that there is some systematic bias). So, I put a lot of weight on the opinions of decision experts in this regard: some think utility is coherent and some don't.

The deontologist's rules seem to do pretty well as many of them are currently sitting in law books right now. They form the basis for much of the morality that parents teach their children. Most utilitarians follow most of them all the time, anyway.

My personal view is to do what I think most people do: accept many hard constraints on one's behavior and attempt to optimize over estimates of projections of a moral gradient along a few dimensions of decision-space. I.e. I try to think about how my research may be able to benefit people, I also try to help out my family and friends, I try to support things good for animals and the environment. These are areas where I feel more certain that I have some sense where some sort of moral objective function points.

What is the justification for the incoherence argument? Is there a reason, or is it just "I won't believe in your utility function until I see it"?

A moral absolutist needs a language in which to specify the morals; the language is so context-dependent that the morals can't be absolute.

Wait, that applies equally to utilitarianism, doesn't it?

I would like you to elaborate on the incoherence of deontology so I can test out how my optimization perspective on morality can handle the objections.

Can you explain the difference between deontology and moral absolutism first? Because I see it as deontology = moral absolutism, and claims that they are not the same are based on blending deontology + consequentialism and calling the blend deontology.

That is a strange comment. Consequentialists, by definition, believe that doing the action that leads to the best consequences is a moral absolute. Why would deontologists be any more moral absolutists?


The trouble is, not everyone wants to mind their own business, especially because there are incentives for cultures to engage in economic interaction (which inevitably leads to cultural exchange and thus a conflict of values). Though it would theoretically be beneficial if everyone cultivated their own garden, it seems nearly impossible in practice. Instead of keeping value systems in isolation, which is an extremely difficult task even for an AI, wouldn't it be better to allow cultures to interact until they homogenize?

I think that this post has something to say about political philosophy. The problem as I see it is that we want to understand how our local decision-making affects the global picture and what constraints we should put on our local decisions. This is extremely important because, arguably, people make a lot of local decisions that make us globally worse off, such as pollution ("externalities" in econo-speak). I don't buy the author's belief that we should ignore these global constraints: they are clearly important; indeed, it's the fear of the potential global outcomes of careless local decision-making that arguably led to the creation of this website.

However, just like computers, we have a lot of trouble integrating the global constraints into our decision-making (which is necessarily a local operation), and we probably have a great deal of bias in our estimates of what is the morally best set of choices for us to make. Just like the algorithm, we would like to find some way to lessen the computational burden on us in order to achieve these moral ends.

There is an approach in economics to understanding social norms, advocated by Herbert Gintis [PDF], that is able to analyze these sorts of scenarios. The essential idea is this: agents can engage in multiple correlated equilibria (these are a generalized version of Nash equilibria) made possible by various social norms. These correlated equilibria are, in a sense, patched together by a social norm from the "rational" (self-interested, locally expected-utility-maximizing) agents' decisions. Human rights could definitely be understood in this light (I think: I haven't actually worked out the model).
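
Gintis's own models are richer than this, but the mechanics of a correlated equilibrium can be shown with the standard textbook example (Aumann's game of Chicken, with the usual payoff numbers, not anything taken from the linked paper): a public signal recommends a joint action, and we can check numerically that each player does best by obeying its recommendation.

```python
# Standard textbook example (the game of Chicken), not Gintis's model: a public signal
# recommends a joint action, and obeying it is a best response for both players.
C, D = "chicken", "dare"
payoff = {  # (row player's payoff, column player's payoff)
    (D, D): (0, 0),
    (D, C): (7, 2),
    (C, D): (2, 7),
    (C, C): (6, 6),
}
# The "social norm": recommend one of three joint actions, each with probability 1/3.
norm = {(C, C): 1 / 3, (C, D): 1 / 3, (D, C): 1 / 3}

def obeying_is_best_response(player):
    """Correlated-equilibrium check for one player: conditional on any recommendation,
    following it is at least as good in expectation as any deviation."""
    for rec in (C, D):
        cond = {prof: p for prof, p in norm.items() if prof[player] == rec}
        total = sum(cond.values())
        if total == 0:
            continue
        def expected(action):
            value = 0.0
            for prof, p in cond.items():
                joint = list(prof)
                joint[player] = action
                value += (p / total) * payoff[tuple(joint)][player]
            return value
        if any(expected(dev) > expected(rec) + 1e-12 for dev in (C, D)):
            return False
    return True

print("correlated equilibrium:", obeying_is_best_response(0) and obeying_is_best_response(1))
avg = [sum(p * payoff[prof][i] for prof, p in norm.items()) for i in (0, 1)]
print("expected payoffs under the norm:", avg)  # 5.0 each vs. about 4.67 in the mixed Nash
```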

Similar reasoning may also be used to understand certain types of laws and government policies. It is via these institutions (norms, human organizations, etc.) that we may efficiently impose global constraints on people's local decision-making. The karma system on Less Wrong, for instance, probably changes the way that people decide to comment.

There is probably a computer science / economics crossover paper here that would describe how institutions can lower the computational burden on individuals in their decision-making: so that when individuals make decisions in these simpler domains, we can be sure that we will still be globally better off.

One word of caution is that this is precisely the rationale behind "command economies", and these didn't work out so well during the 20th century. So choosing the "patching together" institution well is absolutely essential.

I'm lost from the beginning because I see a conflict between the pure mathematical problem and your application of it to the US. At a fine enough level, the road system in the US loses planarity -- it has overpasses, direct connects in highway interchanges, etc. So if you were to encode the map such that all paths were included, you definitely wouldn't be able to make it a planar graph.

On the other hand, under the simplifying assumption that there are no overpasses, you can just put a node at every road intersection, and then every edge is a connection between intersections. In that case, the synchronization problem is immediately solved (per the synchronization rules resulting from the curvature of the earth).

So I'm unable to use this metaphor to gain insight on ethics. Can anyone help?

I also am confused by the metaphor; however, it's worth noting that the problem is not to embed a graph in the plane without crossings, but rather to embed a weighted graph in the plane (possibly with crossings) such that the distances given are equal to the Euclidean distances from the embedding. And adding nodes would be changing the problem, no?


This is correct.

Silas, the problem isn't a perfect match to the actual US -- it assumes straight-line highways, for example.

The graph indeed doesn't have to be planar. We just want to embed it in the plane while preserving distances. And adding nodes does change the problem.

But if all highways are straight, and the graph can have crossovers, doesn't the existing road map already preserve distances, meaning your solution can just copy the map?

I guess I'm not understanding the problem constraints.


You don't have a road map, to start with. You're ONLY given a list of cities and the distances between some of them. From that list, draw a map. That is not a trivial task.

Okay, that makes more sense then. Maybe you should have referred to an imaginary land (instead of the US), so as not to imply people already know what it looks like from above.

Here's an equivalent problem that may make more sense: you have a group of soldiers on a battlefield without access to GPS equipment, and they need to figure out where they are in relation to each other... they each have radios, and can measure propagation latency between each other, telling them linear distance separating each of them, but telling them nothing about directionality, and from that data they need to construct a map of their locations.

The problem is to derive the map, based on the limited set of data you're given. Copying a map would be cheating.
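
For the complete-information version of the problem (every pairwise distance known, as in the soldiers example above), there is a classical closed-form recipe, usually called classical multidimensional scaling; the sketch below applies it to synthetic points. This is not the method from the linked paper, which tackles the harder case where only some local distances are measured.

```python
import numpy as np

rng = np.random.default_rng(1)
true_pts = rng.uniform(0, 100, size=(8, 2))          # unknown "soldier" positions
D = np.linalg.norm(true_pts[:, None] - true_pts[None, :], axis=-1)   # all pairwise distances

# Classical MDS: double-center the squared distances, then take the top-2 eigenvectors.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
top = np.argsort(vals)[::-1][:2]
recovered = vecs[:, top] * np.sqrt(vals[top])

# The layout is recovered only up to rotation/reflection/translation, so compare
# pairwise distances rather than raw coordinates.
D_rec = np.linalg.norm(recovered[:, None] - recovered[None, :], axis=-1)
print("max distance error:", np.abs(D - D_rec).max())
```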

I think you're trying to interpret this as a practical problem of cartography, when in reality it's more of a computer-sciencey graph theory problem.

And adding nodes would be changing the problem, no?

It depends on whether the US cities bit was just an illustrative example, or a typical constraint on the problem.

Does the problem take for granted, e.g., that roads can be winding so that the weight necessarily does not equal the Euclidean distance (Riemannian distance, really, on a curved paper, but whatever), and you have to make a planar map that locates the nodes so that the weight is (proportional to) the Euclidean distance?

I don't see how this is relevant to the statement that adding nodes would be changing the problem. You're given a specific graph of distances, the challenge is to realize it in the plane. You can't just add nodes and decide to realize a different graph in the plane instead; where would the distances even come from, anyway, if you haven't yet computed an embedding?

SarahC cleared it up, so I understand what you do and don't know in the problem, and why I assumed certain things were given that aren't.

Though I agree with Roko's comment that this doesn't seem to provide insight on resolving ethical differences.

This is an interesting idea, to be sure. It sounds like you might be sympathetic to virtue ethics (as indeed am I, though I become a consequentialist when the stakes get high).

Also, have you read Bernard Williams' critique of utilitarianism? Because

You're free to choose those values for yourself -- you don't have to drop them because they're perhaps not optimal for the world's well-being.

would definitely be one of his arguments.


I haven't read it, but I'll have to look into it.

Local does not necessarily mean that you're knowledgeable and free of values conflicts, and distant does not necessarily mean that you're ignorant or that values conflict. Within a household, a person might have values disagreements with their spouse about religion, work/family balance, or how to raise their kids, or with their kids about their sexuality or their career. Across the world, many efforts at acting morally for the benefit of foreigners are aimed at shared values like health & survival, as with charities that provide food and medical assistance (there may be difficulties in implementation, but not because the recipients of the aid don't similarly value their health & survival).


After reading the first two or three paragraphs, I pondered for a moment. The first approach I came up with was to start with a triangle and begin adding everything that has at least three links to existing things. The second I came up with was to start with a bunch of triangles on independent "maps" and begin stitching those maps together.

And hey, the second one turns out to be the one they used, judging by the fourth paragraph.

Given that I'm not consciously being deceitful, what should I conclude about myself?

Given that I'm not consciously being deceitful, what should I conclude about myself?

Nothing. It's absolutely natural to do it that way. If there are any triangles in the graph.


Ah, right. Apparently I was a bit quick to assume that I had the right answer.

"Absolutism is associated with a view of oneself as a small being interacting with others in a large world. The justifications it requires are primarily interpersonal."

This makes no sense to me. If "absolutism" means moral absolutism, then I think the quote is not even wrong; it has no obvious point of contact with absolutism.

I think there is definitely potential to the idea, but I don't think you pushed the analogy quite far enough. I can see an analogy between what is presented here and human rights, and a connection to Kantian moral philosophy.

Essentially, we can think of human rights as what many people believe to be essential bare-minimum conditions on human treatment; i.e., in the class of all "good and just" worlds, everybody's human rights will be respected. Here human rights correspond to the "local rigidity" condition of the subgraph. Human rights, too, are generally only meaningful for the people one immediately interacts with in one's social network.

This does simplify the question of just government and moral action in the world (political philosophers are ever eager for such arguments). I don't think, however, that the local conditions for human existence are as easy to specify as in the case of a sensor network graph.

In some sense there is a tradition largely inspired by Kant that attempts to do the moral equivalent of what you are talking about: use global regularity conditions (on morals) to describe local conditions (on morals: say the ability to will a moral decision to a universal law). Kant generally just assumed that these local conditions would achieve the necessary global requirements for morality (perhaps this is what he meant by a Kingdom of Ends). For Kant the local conditions on your decision-making were necessary and sufficient conditions for the global moral decision-making.

In your discussion (and in the approach of the paper), however, the local conditions placed (on morals or on each patch) are not sufficient to achieve the global conditions (for morality, or on the embedding). So it's a weakening of the approach advanced by Kant. The idea seems to be that once some aspects (but not all) of the local conditions have been worked out, one can then piece together the local decision rules into something cohesive.

Edit: I rambled, so I put my other idea into another comment.