Peterdjones comments on Train Philosophers with Pearl and Kahneman, not Plato and Kant - Less Wrong
You think paperclipping is morality?
As I said, I suspect we are using different definitions of "morality"; could we proceed without using the term?
It would have helped if you had said why you think we have different definitions. I don't think I am asserting anything unusual (as far as the wider world is concerned) when I say morality is principally about regulating interactions between people, so that one person's actions take the interests of affected parties into account. Since, to me, that is a truism, it is hard for me to guess why anyone would demur. Other LWers have defined morality as decision theory, as something that just guides their actions, without necessarily taking others into account. I think that is clearly wrong, because it suggests that a highly effective serial killer is "good", since they are maximising their own value. But now I am struggling to guess something you could easily just tell me.
You stated that there was some way to determine the validity of our ethics - by which I meant the moral preferences humans hold, as distinct from whatever source may have given them to us, be it prisoner's dilemmas or the will of God - without recourse to those same ethical intuitions.
When challenged on this assertion, you stated that our preferences may be revealed as incoherent by logic; yet, as I pointed out, an agent's preferences may be perfectly coherent without being anything we would regard as "right".
So either there has been some misunderstanding, or ... show us this mysterious method of determining the Rightness of something without recourse to our ethical intuitions.
I stated:
(Emphasis added.) Your counterexample was paperclipping, which you say is coherent. My response was:
So you still need an example of coherent morality that is somehow radically different from ours, showing that coherent morality doesn't converge enough to be called objective (or at least EDIT: intersubjective).
May I refer you to the Chanur series, by C.J. Cherryh? Depicted in that series are several alien species along with alien modes of thought and alien moralities.
Consider for a moment the Kif. The Kif are a race of carnivores; they lack the internal wiring to appreciate emotions (as you and I understand the term "emotion") and eat their food live (the notion of eating dead carrion disgusts them, no matter whether it's been cooked in the meantime). Their terminal value is to maximise a quantity that they refer to as sfik, which has the following properties, among others: sfik can be gained by defeating other Kif and taking their possessions; a Kif in practice has as much sfik as he can persuade others that he has; and death means the total loss of one's sfik.
It's a radically different form of morality; murder is not considered a crime in Kifish society; but it's also coherent (though more complex than paperclipping). I'm not entirely sure what you mean by "its ability to actually be morality", though; that looks like a circular definition to me.
I think I am with Peterdjones on this: I don't see how this can be called morality -- it's certainly a set of values, sfik-seeking is certainly something that motivates Kifs, and which they prefer to do; but to call it "morality" in the sense that humans recognize the word is no better than calling it "lust" (though it contains no sex) or "love" (though it contains no element of caring for others).
That it's in their brains and motivates them isn't enough to call it "morality" meaningfully. For it to be called "morality" meaningfully it has to motivate them in roughly the same manner that human morality motivates humans. Baby-eaters and Superhappies were motivated by morality, even if they were vastly different moralities. I don't see anything in the Kifs that could be called morality.
So how does one distinguish a system of motivations from being a system of morality or not?
Well, I think one of the minimal elements required to identify something as morality is that one tends to prefer other people to be moral, at least in general, at least when their immorality doesn't benefit one directly.
Babyeaters wanted other people to eat babies, and superhappies wanted other people to... superhappy, but Kifs don't seem to have any reason to encourage sfik-seeking in others.
Kif find sfik-seeking entities to be easier to predict, and therefore easier to control. Thus they prefer sfik-seeking behaviour in others, for purely practical reasons.
That's an interesting distinction. So a paperclip maximizer would seem to be an entity with a moral system.
I am not convinced that Kif morality is coherent. Looking at it game-theoretically:
There is a set of actions (A) that run a non-infinitesimal risk of infinite loss, i.e. anything that carries a risk of losing one's life.
There is a set of actions (B) that have a non-infinitesimal chance of finite gain.
There is a set of inactions (C) that maintain an equilibrium.
Game-theoretically, a Kif should avoid A, and avoid B if it entails A to any degree. Because losses are infinite and gains finite, it makes no sense to endanger one's life for any putative gain. (Because of the infinity in A, the relative likelihoods of loss and gain don't matter.) Kif should either avoid each other, or adopt pacifism (both versions of C). (Kif in fact have much more motivation to be pacifistic than humans.) Kif pacifism is clearly not C.J. Cherryh's intention.
The key to the whole argument is the "infinite". Perhaps the "infinite" is an exaggeration, or perhaps C.J. Cherryh is one of those people who thinks infinity is a large finite number. The intended results would follow in that case. Infinity is game-changing.
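To make the dominance argument concrete, here is a minimal sketch in Python (the probabilities and payoffs are invented purely for illustration; only the infinite-loss structure matters): any action carrying a nonzero death risk has an expected sfik of negative infinity, so the relative likelihoods of loss and gain really do drop out, and C dominates both A and B.

```python
# Minimal sketch of the dominance argument above. All numbers are
# invented; only the structure (infinite loss vs. finite gain) matters.
import math

def expected_sfik(p_death, p_gain, gain):
    """Expected sfik of an action with death risk p_death and a
    chance p_gain of a finite gain of `gain` sfik."""
    loss = -math.inf  # death wipes out all sfik
    if p_death > 0:
        # any nonzero weight on -inf drags the whole expectation to -inf
        return p_death * loss + p_gain * gain
    return p_gain * gain

print(expected_sfik(p_death=0.01, p_gain=0.5, gain=100))    # A: -inf
print(expected_sfik(p_death=1e-9, p_gain=0.9, gain=10**6))  # B, if it entails any death risk: -inf
print(expected_sfik(p_death=0.0,  p_gain=0.0, gain=0))      # C: 0.0
```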
Except they don't have each other's source code, so even inaction or pacifist policies incur a non-infinitesimal risk of infinite loss. There is, in fact, no set of actions which does not incur such a risk, due to free-rider effects and multiplayer PD and all this tragedy of the commons going on in there.
I don't think the "infinite" poses quite as much of a problem as it first seems, though it does make things kind of complicated.
I don't see what source code has to do with it. A Kif can (perhaps only) minimise the risk of infinite loss by keeping interaction to an absolute minimum, preferably zero. That is not mere "inaction" in the sense of standing quietly in an elevator full of other Kif. That is "Another Kif... RUN!!!"
I don't see what free riding has to do with it either.
Running from other Kif also (but less obviously) increases the risk of -inf sfik: cooperating or living socially with other Kif vastly increases your chances of survival in the wild, I presume. You run into a risk equilibrium problem pretty quickly (one which can be reduced to raw maths if you have enough data about the world and precise sfik values for Kif life and thing-ownership, though the maths becomes very complicated (far more complex than I can solve) due to instrumental value effects and relational comparisons of -inf sfik risks), but it does seem analytically solvable.
So Kifs have a strong incentive to work together in some manner, to avoid the high risk of death when going it alone in the wilds, but also various recursive functions that compute sfik, with some instrumental -inf sfik parameters plugging into the +fin sfik actions. This is apparently the case for most Kifs working together, but you have no guarantee that all Kifs can reliably precommit to cooperation (which is where source code factors in: if you could read the source code, you could see which ones can reliably precommit, under what conditions, and how to enforce this). So there is also some incentive to free-ride on the Kif cooperation wave and gain a few sfiks by killing some other Kifs and taking some of their things, so long as your ability to do so and the marginal gains from it (as well as the weighted instrumental parameter of -inf sfik risk reduction) outweigh the increase in -inf sfik risk that your higher status incurs.
As an intuitive guess, Kifs are extremely cautious socially, but on average power structures and hierarchies of exponential rarity still form, since every Kif is aware that other Kifs have an incentive to free-ride, and that this means a risk of getting killed in the course of that free-riding.
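A hedged sketch of that equilibrium, under the finite-loss reading suggested earlier (the strategy names, risks, and payoffs below are my own invented stand-ins, not anything from the series): with a literal -inf every strategy ties at -inf, since none has zero death risk, but once the loss is merely very large the solitary/cooperate/free-ride trade-off reduces to ordinary expected value.

```python
# Sketch of the risk-equilibrium point above. All numbers are invented
# stand-ins. The structural claim: no strategy has zero death risk, so
# a literal -inf loss makes every strategy tie at -inf; a large finite
# loss turns the comparison into ordinary expected value.

DEATH_COST = -10_000  # finite stand-in for "losing everything"

strategies = {
    # name: (annual death risk, expected sfik gained per year)
    "solitary":  (0.20,  0),  # going it alone in the wilds is dangerous
    "cooperate": (0.05, 10),  # group living cuts the death risk
    "freeride":  (0.08, 25),  # defect on the group: more gain, more risk
}

for name, (p_death, gain) in strategies.items():
    ev = p_death * DEATH_COST + (1 - p_death) * gain
    print(f"{name:9s} expected sfik/year: {ev:8.1f}")

# With DEATH_COST = -math.inf, all three expectations would be -inf and
# the comparison collapses, which is exactly the problem noted above.
```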
"Infinite" was probably an exaggeration on my part. Individual Kif certainly do all that they can to avoid death, don't care much about their legacy, and are very surprised to find that other races have the strange concept of a 'martyr' - that sfik can count after death.
I have sometimes talked in terms of objective morality, because it seems truer than subjective morality. But morality is not really objective because it will vary with biology, and is of no use to sticks or stones. It is truer still to say that it is intersubjective. Humans are not going to adopt Kif morality because humans are not Kif.
My first thought on reading this is that a group of Kifish with human morality would eventually rule the world, if not actually wipe out the originals. (My second is "wait, this is just a formalization of the standard Evil Bully Race, isn't it?" And my third is "how do they tell how much sfik others have, for the purposes of killing them/taking their stuff?")
Why? I'm curious as to the reasoning behind this.
Yes. The Kif are the villains through most of the series. (Note: most, not all. There are some individual Kif who, while still remaining sfik-maximisers, make significant efforts to aid the heroes and avert a major war.)
Mainly reputation, along with a certain degree of body language. This is not perfect, however; a Kif more or less has as much sfik as he can persuade others that he has, until such time as he fails in some task (in which case his followers will immediately defect to the side of whoever defeated him. In fact, they will often defect as soon as it looks like he is starting to fail; your average Kifish footsoldier does not believe in glorious last stands).
It's also worth noting that a Kif who elects to become the follower of a given leader is expected not to kill or take the stuff of the other followers of the same leader, and the leader will enforce this by killing troublemakers.
Cooperation and common goals. Are they portrayed as highly successful in the series?
The Kif, like the rationalist, plays to win, no matter what that takes. They will keep promises if they feel that a reputation for keeping promises will help them to win later. They are perfectly capable of seeing, and taking advantage of, common goals; they can and will make alliances with each other, and with members of other species, as and where necessary. They have their own mental biases; but the highest-sfik Kif gain their status, in part, by being able to correct for those biases to some degree.
They would actually make reasonable rationalists, if it wasn't for their racial tendency to defect in Prisoners' Dilemmas and their complete lack of human morality. They'd certainly love the idea of rationality, because it leads to more sfik.
As far as success goes: they are not the biggest economic power in the series (that's the non-violent Stsho); they are not the biggest political power (that's probably the Mahendo'sat); they don't have the best technology (that's the enigmatic Knnn); they may be the biggest military power (or they may be second to the Knnn; the Mahendo'sat also have a comparable fleet), but not by much of a margin. (Especially since they're having a civil war at the time; two different Kif are contending for the position of species-wide leader.)
They're a direct threat to the protagonist, and to the protagonist's home planet (which is most certainly not a major military power); and their civil war threatens to start dragging in other species and getting really messy.
How do you identify morality without simply comparing it to your own intuitions?
I'm using a kind of functional role analysis: the role of morality is to regulate the behaviour of each individual to account for the preferences of others. That isn't an intuition in the sense of "men kissing - yeuch!"
The intuition of "men kissing - yeuch!" is superseded by other intuitions. There's a whole sequence on metaethics, you know. And you haven't answered my question.
Says who? In my theory? In EY's theory? Why should I care?
I know. I don't find it very persuasive or cogent.
Yes I have. I identify morality by performing a functional role analysis and seeing whether candidates fit the functional role.
You have first-order moral intuitions, yes, and you have intuitions about how to resolve contradictions between these intuitions. Yes? That's how everyone acquires knowledge of morality. Do you have some other method of acquiring such knowledge?
It's an intuition in the sense of "killing=bad".
What I said was not an intuition in the sense of killing=bad, because
a) it's not an intuition; it's a functional role analysis
b) it's metaethical (what kind of thing morality is), not first-order ethical (what is wrong)
Indeed. I had already noticed I was talking nonsense and retracted the comment by the time I received this. Sorry. I have now given an actual, non-stupid reply here.