The human problem: This argues that the qualia and values we have now are only the beginning of those that could evolve in the universe, and that ensuring that we maximize human values - or any existing value set - from now on, will stop this process in its tracks, and prevent anything better from ever evolving. This is the most-important objection of all.
Better by which set of, ahem, values? And anyway, if evolution of values is a value, then maximising overall value will by construction take that into account.
Presumably, values will evolve differently depending on future contingencies. For example, a future with a world government that imposes universal birth control to limit population growth would probably evolve different values compared to a future that has no such global Singleton. Do you agree, and if so do you think the values evolved in different possible futures are all equivalent as far as you are concerned? If not, what criteria are you using to judge between them?
ETA: Can you explain John Holland's theorems, or at least link to the book you're talking about (Wikipedia says he wrote three)? If you think allowing values to evolve is the right thing to do, I'm surprised you haven't put more effort into making a case for it, as opposed to just criticizing SI's plan.
"A point I may not have made in these posts, but made in comments, is that the majority of humans today think that women should not have full rights, homosexuals should be killed or at least severely persecuted, and nerds should be given wedgies. These are not incompletely-extrapolated values that will change with more information; they are values. Opponents of gay marriage make it clear that they do not object to gay marriage based on a long-range utilitarian calculation; they directly value not allowing gays to marry. Many human values horrify most people on this list, so they shouldn't be trying to preserve them."
This has always been my principal objection to CEV. I strongly suspect that were it implemented, it would want the death of a lot of my friends, and quite possibly me, too.
CEV is supposed to preserve those things that people value, and would continue to value were they more intelligent and better informed. I value the lives of my friends. Many other people value the death of people like my friends. There is no reason to think that this is because they are less intelligent or less well-informed than me, as opposed to actually having different preferences. TimS claimed that in a situation like that, CEV would do nothing, rather than impose the extrapolated will of the majority.
My claim is that there is nothing -- not one single thing -- which would be a value held by every person in the world, even were they more intelligent and better informed. An intelligent, informed psychopath has utterly different values from mine, and will continue to have utterly different values upon reflection. The CEV therefore either has to impose the majority preferences upon the minority, or do nothing at all.
Upvoting back to zero because I think this is an important question to address.
If I prefer that people not be tortured, and that's more important to me than anything else, then I ought not prefer a system that puts all the torturers in their own part of the world where I don't have to interact with them over a system that prevents them from torturing.
More generally, this strategy only works if there is nothing whose existence I prefer or antiprefer, merely things that I prefer or antiprefer to be aware of.
The child molester cluster (where they grow children simply to molest them, then kill them) doesn't bother you, even if you never interact with it?
Because I'm fairly certain I wouldn't like what CEV(child molester) would output and wouldn't want an AI to implement it.
Some quotes from the CEV document:
Coherence is not a simple question of a majority vote. Coherence will reflect the balance, concentration, and strength of individual volitions. A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity. The variables are quantitative, not qualitative.
(...)
It should be easier to counter coherence than to create coherence.
(...)
In qualitative terms, our unimaginably alien, powerful, and humane future selves should have a strong ability to say "Wait! Stop! You're going to predictably regret that!", but we should require much higher standards of predictability and coherence before we trust the extrapolation that says "Do this specific positive thing, even if you can't comprehend why."
Though it's not clear to me how the document would deal with Wei Dai's point in the sibling comment. In the absence of coherence on the question of whether to protect, persecute, or ignore unpopular minority groups, does CEV default to protecting them or ignoring them? You might say that as written, it would obviously not protect them, because there was no coherence in favor of doing so; but what if protection of minority groups is a side effect of other measures CEV was taking anyway?
(For what it's worth, I suspect that extrapolation would in fact create enough coherence for this particular scenario not to be a problem.)
Note that there's nothing physically impossible about altering the probability of being born gay, straight, bi, male, female, asexual, etc.
I don't know how to reply to this without violating the site's proscription on discussions of politics, which I prefer not to do.
Not dealing with your point, but that sort of analysis is why I find Heinlein so distasteful - the awful philosophy. For example in #1, 5 seconds of thought suffices to think of counterexamples like temporary derangements (drug use, treatable disease, particularly stressful circumstances, blows to the head), and more effort likely would turn up powerful empirical evidence like possibly an observation that most murderers do not murder again even after release (and obviously not execution).
Who would voluntarily accept a more than 50% chance of being treated like a patronized child (and a second-class citizen) for life?
Someone believing that this sort of paternalism is essential to gender and unable or unwilling to accept a society without it. Someone convinced that this was part of God's plan or otherwise metaphysically necessary. Someone not very fond of making independent decisions. I don't think any of these categories are strikingly rare.
That's about as specific as I'd like to get; anything more so would incur an unacceptable risk of political entanglements. In general, though, I think it's important to distinguish fears and hatreds arising against groups which happen to be on the wrong side of some social line (and therefore identity) from the processes that led to that line being drawn in the first place: it's possible, and IMO quite likely, for people to coherently support most traditional values concerning social dichotomies without coherently endorsing malice across them. This might not end up being stable, human psychology being what it is, but it doesn't seem internally inconsistent.
The way people's values intersect with the various consequences of...
- Non-positional, mutually-satisfiable values (physical luxury, for instance)
- Positional, zero-sum social values, such as wanting to be the alpha male or the homecoming queen
All mutually-satisfiable values have more in common with each other than they do with any non-mutually-satisfiable values, because mutually-satisfiable values are compatible with social harmony and non-problematic utility maximization, while non-mutually-satisfiable values require eternal conflict.
David Friedman pointed out that this isn't correct; it's actually quite easy to make positional values mutually satisfiable:
...It seems obvious that, if one's concern is status rather than real income, we are in a zero sum game. ... Like many things that seem obvious, this one is false. It is true that my status is relative to yours. It does not, oddly enough, follow that if my status is higher than yours, yours must be lower than mine, or that if my status increases someone else's must decrease. Status is not, in fact, a zero sum game.
This point was originally made clear to me when I was an undergraduate at Harvard and realized that Harvard had, in at least one interesting way, the perfect social system
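As a toy numerical version of Friedman's point (my own illustration, with made-up people and scores, not his): if everyone is ranked on one shared scale, only one person can be at the top; but if each person is judged by the dimension their own subculture prizes, everyone can rank first by the standard that matters to them.

```python
# Toy illustration of Friedman's point (hypothetical names and numbers, not his):
# status measured on a single shared scale is roughly zero-sum, but status measured
# by the dimension each person's own subculture values need not be.

people = {
    "athlete":  {"sports": 9, "grades": 4, "music": 3},
    "scholar":  {"sports": 3, "grades": 9, "music": 4},
    "musician": {"sports": 4, "grades": 3, "music": 9},
}

def rank(person, dimension):
    """1 = highest score among everyone on that dimension."""
    scores = sorted((p[dimension] for p in people.values()), reverse=True)
    return scores.index(people[person][dimension]) + 1

everyone = list(people)

# One shared scale: a high rank for one person forces lower ranks for the others.
print([rank(p, "grades") for p in everyone])          # [2, 1, 3]

# Each person judged by the dimension their own group cares about: all can be first.
own_dimension = {"athlete": "sports", "scholar": "grades", "musician": "music"}
print([rank(p, own_dimension[p]) for p in everyone])  # [1, 1, 1]
```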
It seems that what you have argued here is not much related to Holden's objection 1 - his objection is that we cannot reasonably expect a safe and secure implementation of a "Friendly" utility function (even if we had one), because humans have consistently been unable to construct bug-free, correctly-working (computer) systems on the first try, proofs have been wrong, etc. You, on the other hand, are arguing against the Friendliness concept on object-level / meta-level ethical grounds.
Opponents of gay marriage make it clear that they do not object to gay marriage based on a long-range utilitarian calculation; they directly value not allowing gays to marry.
Well, most of them do so in part because their deity tells them that that's a value. If the extrapolated CEV takes into account that they are just wrong about there being such a deity, it should respond accordingly. (I'm working under the assumption, which should not be controversial, that the AGI isn't going to find out that in fact there is such a deity hanging around.)
What are the historical attitudes towards homosexuality among East Asians and South Asians?
Man, that's variable. Especially in South Asia, where "Hinduism" is more like a nice box for outsiders to describe a huge body of different practices and theoretical approaches, some of them quite divergent. Chastity in general was and is a core value in many cases; where that's not the case, or where the particular sect deals pragmatically with the human sex drive despite teaching chastity as a quicker path to moksha, there might be anything from embrace of erotic imagery and sexual diversity to fairly strict rules about that sort of conduct. Some sects unabashedly embrace sexuality as a good thing, including same-sex sexuality. Islam has historically been pretty doctrinally down on it, but even that has its nuances -- sodomy was often considered a grave sin and still is in many places, while non-penetrative same-sex contact might well be seen as simply a minor thing, not strictly appropriate but hardly anything to get worked up about.
"East Asia" has a very large number of religions as well, and the influence of Confucianism and Buddhism hasn't been uniform in this reg...
The introduction is a catalog of ambiguities about sex, gender, and sexual orientation:
...My partner was diagnosed male at birth because he was born with, and indeed still has, a fully functioning penis ... My partner's DNA has a pattern that is simultaneously male, female and neither. This particular genetic pattern, XXY, is the signature of Klinefelter syndrome ...
We've known full well since Kinsey that a large minority...37 percent...of men have had at least one same-sex sexual experience in their lives.
No act of Congress or Parliament exists anywhere that defines exactly what heterosexuality is or regulates exactly how it is to be enacted.
Historians have tracked major shifts in other aspects of what was considered common or "normal" in sex and relationships: was marriage ideally an emotional relationship, or an economic and pragmatic one? Was romantic love desirable, and did it even really exist? Should young people choose their own spouses, or should marriage partners be selected by family and friends?
As unnumbered sailors, prisoners, and boarding-school boys have demonstrated, whether one behaves heterosexually or homosexually sometimes seems like little more tha
Neither is it correct to say that people haven't noticed that it's very common for people to have sex with people who are physically adjacent to them. But that's not to say that people often think "I'm the sort of person who has sex with people physically adjacent to me."
There's a difference between eating meat from time to time, being aware that I eat meat from time to time, and explicitly thinking of myself as a "meat eater," or as an "omnivore," or as a "carnivore". There's a difference between being really smart, being aware of how well I do at various cognitive tasks, and thinking of myself as "a really smart person".
More generally, there's a difference between having the property X, being aware of evidence of X and acting accordingly, and having formed a mental structure in my mind that represents me as having X.
There's also a difference between all of those and being part of a culture that has "people who have X" as a social construct.
I don't know what TheOtherDave means, but I have heard it said before that the notion of treating sexual preference as identity is relatively recent. In the past -- or so the claim goes -- people did of course recognize that some people preferred to have intercourse with members of the opposite sex, whereas others did not. But this was seen as merely a preference, similar to disliking broccoli or liking the color red or whatever. A person wouldn't identify as "a heterosexual" or "a homosexual", any more than one would identify as "an anti-broccolist" or a "red-ist" or whatever.
Oh, so her thesis is that in the west, orientation-as-identity dates back to 1860-ish. I can imagine that being defensible. That's way different from what you originally wrote, though.
You see, the first thing that came to mind was Aristophanes' speech in the Symposium, which explicitly recognizes orientation-as-identity and predates the Catholic Church by a couple centuries.
It made me a lot more comfortable dealing with people who might be seen as "regressive", "bland", "conservative" or just who seem otherwise not very in-synch with my own social attitudes and values. Getting to understand that culture and culturally-transmitted worldviews do constitute umbrella groups, but that people vary within them to similar degrees across such umbrellas, made it easier to just deal with people and adapt my own social responses to the situation, and where I feel like the person has incorrect, problematic or misguided ideas, it made it easier to choose my responses and present them effectively.
It made me more socially-conscious and a bit more socially-successful. I have some considerable obstacles there, but just having cultural details available was huge in informing my understanding of certain interactions. When I taught ESL, many of my students were Somali and Muslim. I'm also trans, and gender is a very big thing in many Islam-influenced societies (particularly ones where men and women for the most part don't socialize). I learned a bit about fashion sense and making smart choices just by noticing how the men reacted to what I w...
Fascinating, but... my Be Specific detector is going off and asking, not just for the abstract generalizations you concluded, but the specific examples that made you conclude them. Filling in at least one case of "I thought I should dress like X, but then Y happened, now I dress like Z", even - my detector is going off because all the paragraphs are describing the abstract conclusions.
With regard to examples about clothing, one handy one would be:
I'd been generally aware that while the Muslim women's reactions to me seemed to be more or less constant for a while, it had stood out to me that the men's reactions were considerably more volatile. At the time I gauged this in terms of body language: the apparent tension of the facial muscles, the set of the shoulders, the extension of the arms, what the hands are doing, gestural or expressive mirroring... I don't have formal training in this stuff, and being fairly autistic I don't seem to have the same reactions to it that neurotypical people do, but on some perceptual level it just clicks that this person is relaxed or curious or uncomfortable or very uncomfortable.
Anyway, so I hadn't really put thought into how I should dress before, in that context. I just wore the clothes I was comfy with the first day I started teaching, and didn't notice any issues that stood out to me. I kept doing that until summer arrived. My usual fashion sense is fairly covering and drapey (I like cardigans, skirts and "big billowy hippie pants"). At the time I also had a penchant for wearing a head scarf (not a full wrap like ...
I suspect humans are a lot better at remembering abstract generalizations about what occurs than specific instances. (And probably with good reason; abstract generalizations probably take up less space.)
As a child, arguing with siblings, I had lots of arguments of the form "You're accusing me of X? But you always do it yourself!" / "Oh yeah? Name one example!" / "I can't think of any, but you still always do it!" But even if I was on the side asking for examples, I kind of knew in the back of my head that I was being dishonest, because I remembered the abstract generalization myself as well.
Of course being specific is still a good idea. It may be that the habit of being specific only helps you going forward, as you begin to get in the habit of storing specific instances.
It convinced me that the sort of attitudes I see expressed on LW towards "tradition" and traditional culture [...] are so hopelessly confused about the thing they're trying to address that they essentially don't have anything meaningful to say about it
(I think this could make an interesting and valuable top-level post.)
the majority of humans today think that women should not have full rights, homosexuals should be killed or at least severely persecuted, and nerds should be given wedgies. These are not incompletely-extrapolated values that will change with more information; they are values. Opponents of gay marriage make it clear that they do not object to gay marriage based on a long-range utilitarian calculation; they directly value not allowing gays to marry.
Without endorsing the remainder of your argument, I agree that these observations must be adequately explained, and rejection of the conclusions well justified - or the concept of provably Friendly AI must be considered impossible.
Thanks for tying these together.
I would love to hear someone who believes it is viable, in principle, to perform a bottom-up extrapolation of human values into a coherent whole that can be implemented by a system vastly different from a human, in a way I ought to endorse, make a case for that viability that addresses these concerns specifically; while I don't fully agree with everything said here, it captures much of my own skepticism about that viability more coherently than I've been able to express it myself.
(This is a revealing post, in that it takes the problem of values and treats it in a mathematically precise way, and received many downvotes without any substantive objections to either the math or the analogy asserting that the math is appropriate. I have found in other posts as well that making a mathematical argument based on an abstraction results in more downvotes than does merely arguing from a loose analogy.)
(emphasis added.)
Except Peter de Blanc's comments.
I wanted to write about my opinion that human values can't be divided into final values and instrumental values, the way discussion of FAI presumes they can. This is an idea that comes from mathematics, symbolic logic, and classical AI. A symbolic approach would probably make proving safety easier. But human brains don't work that way. You can and do change your values over time, because you don't really have terminal values.
You may have wanted to - but AFAICS, you didn't - apart from this paragraph. It seems to me that it fails to make its case. The split applies to any goal-directed agent, irrespective of implementation details.
The human problem: This argues that the qualia and values we have now are only the beginning of those that could evolve in the universe, and that ensuring that we maximize human values - or any existing value set - from now on, will stop this process in its tracks, and prevent anything better from ever evolving. This is the most-important objection of all.
If you can convince people that something is better than present human values, then CEV will implement these new values. I mean, if you just took CEV(PhilGoetz), and you have the desire to see the u...
This seems a nice place to link to Marcello's objection to CEV, which says you might be able to convince people of pretty much anything, depending on the order of arguments.
The link to your group selection update seems broken. Looks like it's got an extra lesswrong.com/ in it.
Do you think an AI reasoning about ethics would be capable of coming to your conclusions? And what "superintelligence policy" do you think it would recommend?
I'm not sure if this is appropriate, but like the original author I am unsure whether a CEV is a thing that can be expressed in formal logic even if the brain were fully mapped into a virtual environment. A lot of how we craft our values is based on complex environmental factors that are not easily modeled. Please read Schnall's "Disgust as Embodied Moral Judgment" or J. Greene's "An fMRI Investigation of Emotional Engagement in Moral Judgment". Our values are fluid and non-hierarchical. Developing values that have a strict hierarchy, as the OP says, can lead to systems which cannot change.
If the evolutionary process results in convergence, divergence, or extinction, and most often results in extinction, what reason(s) do I have to think that this 23rd emerging complex homo will not go the way of extinction also? Are we throwing all our hope toward superintelligence as our salvation?
I have a few more objections I didn't cover in my last comment because I hadn't thoroughly thought them out yet.
Those of you who are operating under the assumption that we are maximizing a utility function with evolved terminal goals should, I think, admit that these terminal goals all involve either ourselves or our genes.
No, these terminal goals can also involve other people and the state of the world, even if they are evolved. There are several reasons human consciousnesses might have evolved goals that do not involve themselves or their genes. The m...
The much stronger issue he raised is that it may well be that outside imagination and fiction, there is no monolithic 'intelligence' thing, and the 'benevolent ruler of the earth' software is then more dangerous than e.g. software that uses search and hill climbing to design better microchips, or design cures for diseases, or the like, without being 'intelligent' in the science fictional sense, and while lacking any form of real world volition. The "benevolent ruler of the earth" software would then, also, fail to provide any superior technical s...
Humans have a values hierarchy. Trouble is, most do not even know what it is (or what their values are). IOW, for me honesty is one of the most important values to have. Also, sanctity of (and protection of) life is very high on the list. I would lie in a second to save my son's life. Some choices like that are no-brainers; however, few people know all the values that they live by, let alone the hierarchy. Often humans only discover what these values are as they find themselves in various situations.
Just wondering... has anyone compiled a list of these values, morals, e...
EDIT: To edit and simplify my thoughts: getting a General Intelligence Algorithm Instance to do anything requires masterful manipulation of parameters, with full knowledge of generally how it is going to behave as a result - a level of understanding of the psychology of all intelligent (and sub-intelligent) behavior. It is not feasible that someone would accidentally program something that would become an evil mastermind. GIA instances could easily be made to behave in a passive manner even when given affordances and output, kind of like a person tha...
This argues that the qualia and values we have now are only the beginning of those that could evolve in the universe, and that ensuring that we maximize human values - or any existing value set - from now on, will stop this process in its tracks, and prevent anything better from ever evolving.
This is unhelpfully circular. While it's not logically impossible for us to value values that we don't have, it's surely counterintuitive. What makes future values better?
It sounds like you're worried about humans optimizing the universe according to human values because they are the wrong values. At the same time you seem to be saying that this won't be accomplished by building FAI, because only humans can have human values. Is this correct?
Does it also worry you that humans might (mistakenly) optimize the universe with non-human values that also happen to be wrong? If so, do you have any suggestions about how we might get the universe to be optimized according to the right values?
[Deleting because I didn't notice Phil already answered in another comment.]
Wouldn't a value of freedom be the best bet for AI? If it created a communist society with itself at the head, where people can pool their land resources together to create sub-societies with their own rules, everyone would get to live by their own values. Of course, there would be some universal laws, but they would be minimal: don't harm others (AI included), and don't harm their property. If someone disobeys the rules of a sub-society, they would not be harmed, merely suspended or expelled by the sub-society.
In this way, humans would still be allowed t...
Nick_Beckstead asked me to link to posts I referred to in this comment. I should put up or shut up, so here's an attempt to give an organized overview of them.
Since I wrote these, LukeProg has begun tackling some related issues. He has accomplished the seemingly-impossible task of writing many long, substantive posts none of which I recall disagreeing with. And I have, irrationally, not read most of his posts. So he may have dealt with more of these same issues.
I think that I only raised Holden's "objection 2" in comments, which I couldn't easily dig up; and in a critique of a book chapter, which I emailed to LukeProg and did not post to LessWrong. So I'm only going to talk about "Objection 1: It seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous." I've arranged my previous posts and comments on this point into categories. (Much of what I've said on the topic has been in comments on LessWrong and Overcoming Bias, and in email lists including SL4, and isn't here.)
The concept of "human values" cannot be defined in the way that FAI presupposes
Human errors, human values: Suppose all humans shared an identical set of values, preferences, and biases. We cannot retain human values without retaining human errors, because there is no principled distinction between them.
A comment on this post: There are at least three distinct levels of human values: The values an evolutionary agent holds that maximize their reproductive fitness, the values a society holds that maximizes its fitness, and the values a rational optimizer holds who has chosen to maximize social utility. They often conflict. Which of them are the real human values?
Values vs. parameters: Eliezer has suggested using human values, but without time discounting (= changing the time-discounting parameter). CEV presupposes that we can abstract human values and apply them in a different situation that has different parameters. But the parameters are values. There is no distinction between parameters and values.
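A minimal sketch of that point, assuming nothing beyond standard exponential discounting (the reward streams and numbers are hypothetical, not from the post): changing the discount parameter flips which future the agent prefers, so the parameter is doing the work of a value.

```python
# Toy sketch (hypothetical numbers): the time-discounting "parameter" gamma
# determines which outcomes an agent prefers, i.e. it acts like a value.

def discounted_utility(rewards, gamma):
    """Sum of rewards weighted by gamma**t (standard exponential discounting)."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

stream_a = [10, 0, 0, 0]   # pays off immediately
stream_b = [0, 0, 0, 12]   # pays off more, but later

for gamma in (0.5, 1.0):
    ua, ub = discounted_utility(stream_a, gamma), discounted_utility(stream_b, gamma)
    print(f"gamma={gamma}: U(A)={ua:.2f}, U(B)={ub:.2f} -> prefers {'A' if ua > ub else 'B'}")

# gamma=0.5 prefers A; gamma=1.0 (no discounting) prefers B.
# The "parameter" changes the preference ordering, which is the sense in which
# there is no clean line between parameters and values.
```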
A comment on "Incremental progress and the valley": The "values" that our brains try to maximize in the short run are designed to maximize different values for our bodies in the long run. Which are human values: The motivations we feel, or the effects they have in the long term? LukeProg's post Do Humans Want Things? makes a related point.
Group selection update: The reason I harp on group selection, besides my outrage at the way it's been treated for the past 50 years, is that group selection implies that some human values evolved at the group level, not at the level of the individual. This means that increasing the rationality of individuals may enable people to act more effectively in their own interests, rather than in the group's interest, and thus diminish the degree to which humans embody human values. Identifying the values embodied in individual humans - supposing we could do so - would still not arrive at human values. Transferring human values to a post-human world, which might contain groups at many different levels of a hierarchy, would be problematic.
I wanted to write about my opinion that human values can't be divided into final values and instrumental values, the way discussion of FAI presumes they can. This is an idea that comes from mathematics, symbolic logic, and classical AI. A symbolic approach would probably make proving safety easier. But human brains don't work that way. You can and do change your values over time, because you don't really have terminal values.
Strictly speaking, it is impossible for an agent whose goals are all indexical goals describing states involving itself to have preferences about a situation in which it does not exist. Those of you who are operating under the assumption that we are maximizing a utility function with evolved terminal goals should, I think, admit that these terminal goals all involve either ourselves or our genes. If they involve ourselves, then utility functions based on these goals cannot even be computed once we die. If they involve our genes, then they are goals that our bodies are pursuing, which we call errors, not goals, when we, the conscious agents inside our bodies, evaluate them. In either case, there is no logical reason for us to wish to maximize some utility function based on these after our own deaths. Any action I wish to take regarding the distant future necessarily presupposes that the entire SIAI approach to goals is wrong.
My view, under which it does make sense for me to say I have preferences about the distant future, is that my mind has learned "values" that are not symbols, but analog numbers distributed among neurons. As described in "Only humans can have human values", these values do not exist in a hierarchy with some at the bottom and some on the top, but in a recurrent network which does not have a top or a bottom, because the different parts of the network developed simultaneously. These values therefore can't be categorized into instrumental or terminal. They can include very abstract values that don't need to refer specifically to me, because other values elsewhere in the network do refer to me, and this will ensure that actions I finally execute incorporating those values are also influenced by my other values that do talk about me.
Even if human values existed, it would be pointless to preserve them
Only humans can have human values:
Human values differ as much as values can differ: There are two fundamentally different categories of values:
- Non-positional, mutually-satisfiable values (physical luxury, for instance)
- Positional, zero-sum social values, such as wanting to be the alpha male or the homecoming queen
All mutually-satisfiable values have more in common with each other than they do with any non-mutually-satisfiable values, because mutually-satisfiable values are compatible with social harmony and non-problematic utility maximization, while non-mutually-satisfiable values require eternal conflict. If you find an alien life form from a distant galaxy with non-positional values, it would be easier to integrate those values into a human culture with only human non-positional values than to integrate already-existing positional human values into that culture.
It appears that some humans have mainly the one type, while other humans have mainly the other type. So talking about trying to preserve human values is pointless - the values held by different humans have already passed the most-important point of divergence.
Enforcing human values would be harmful
The human problem: This argues that the qualia and values we have now are only the beginning of those that could evolve in the universe, and that ensuring that we maximize human values - or any existing value set - from now on, will stop this process in its tracks, and prevent anything better from ever evolving. This is the most-important objection of all.
Re-reading this, I see that the critical paragraph is painfully obscure, as if written by Kant; but it summarizes the argument: "Once the initial symbol set has been chosen, the semantics must be set in stone for the judging function to be "safe" for preserving value; this means that any new symbols must be defined completely in terms of already-existing symbols. Because fine-grained sensory information has been lost, new developments in consciousness might not be detectable in the symbolic representation after the abstraction process. If they are detectable via statistical correlations between existing concepts, they will be difficult to reify parsimoniously as a composite of existing symbols. Not using a theory of phenomenology means that no effort is being made to look for such new developments, making their detection and reification even more unlikely. And an evaluation based on already-developed values and qualia means that even if they could be found, new ones would not improve the score. Competition for high scores on the existing function, plus lack of selection for components orthogonal to that function, will ensure that no such new developments last."
Averaging value systems is worse than choosing one: This describes a neural network that encodes preferences, and takes some input pattern and computes a new pattern that optimizes these preferences. Such a system is taken as an analogue of a value system and an ethical system for attaining those values. I then define a measure for the internal conflict produced by a set of values, and show that a system built by averaging together the parameters from many different systems will have higher internal conflict than any of the systems that were averaged together to produce it. The point is that the CEV plan of "averaging together" human values will result in a set of values that is worse (more self-contradictory) than any of the value systems it was derived from.
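The post has the actual model; as a rough reconstruction of the kind of effect being claimed (the frustration measure and the three example systems below are my own toy assumptions, not the post's), one can treat a value system as a Hopfield-style weight matrix over binary attitudes and measure internal conflict as the unsatisfied weight remaining at the best attainable attitude vector. Each individual system below is conflict-free, while their average is not:

```python
# Toy reconstruction (my assumptions, not the post's actual model): a "value
# system" is a Hopfield-style weight matrix W over n binary attitudes;
# W[i][j] > 0 wants attitudes i and j to agree, W[i][j] < 0 wants them to
# disagree. "Internal conflict" is the unsatisfied weight at the best
# achievable attitude vector.
from itertools import product
import numpy as np

def conflict(W):
    """Minimum total unsatisfied weight over all +/-1 attitude vectors."""
    n = W.shape[0]
    best = float("inf")
    for s in product((-1, 1), repeat=n):
        s = np.array(s)
        # A pair (i, j) is fully satisfied when W[i, j] * s[i] * s[j] equals |W[i, j]|.
        unsatisfied = sum(abs(W[i, j]) - W[i, j] * s[i] * s[j]
                          for i in range(n) for j in range(i + 1, n))
        best = min(best, unsatisfied)
    return best

def system_wanting(v):
    """A value system that simply wants every attitude to line up with vector v."""
    W = np.outer(v, v).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

# Three individually consistent (conflict-free) value systems.
systems = [system_wanting(v) for v in ([1, 1, 1], [1, 1, -1], [1, -1, 1])]
averaged = sum(systems) / len(systems)

print([conflict(W) for W in systems])  # each individual system: 0 conflict
print(conflict(averaged))              # the averaged system: conflict > 0 (2/3 here)
```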
A point I may not have made in these posts, but made in comments, is that the majority of humans today think that women should not have full rights, homosexuals should be killed or at least severely persecuted, and nerds should be given wedgies. These are not incompletely-extrapolated values that will change with more information; they are values. Opponents of gay marriage make it clear that they do not object to gay marriage based on a long-range utilitarian calculation; they directly value not allowing gays to marry. Many human values horrify most people on this list, so they shouldn't be trying to preserve them.