I'm fine with a galaxy without humor, music, or art. I'd sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that "dilemma" was presented to me.
Hopefully: But would you replace those with anything else? I'd want persistence, but I'd want growth and, well, fun! :)
I think that your use of the word arbitrary differs from mine. My mind labels statements such as "we should preserve human laughter for ever and ever" with the "roko-arbitrary" label. Not that I don't enjoy laughter, but there are plenty of things that I presently enjoy that, if I had the choice, I would modify myself to enjoy less: making fun of other people, eating sweet foods, etc. It strikes me that the dividing line between "things I like but wish I didn't like" and "things I like and want...
Eliezer: "I am not playing games by redefining 'good' or 'arbitrary' [...]"
I imagine the counterargument would be that while you're not playing e-games by e-redefining the terms, you are playing games by redefining the terms.
Upon a preview it looks like Roko beat me to it.
It also worries me quite a lot that Eliezer's post is entirely symmetric under the action of replacing his chosen notions with the pebble-sorter's notions. This property qualifies as "moral relativism" in my book, though there is no point in arguing about the meanings of words.
My posts on universal instrumental values are not symmetric under replacing UIVs with some other set of goals that an agent might have. UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X. Maybe I find this satisfying because I have always been more at home with category theory than logic; I have defined a set of values by requiring them to satisfy a universal property.
But laughter, I suspect, may be rarer by far than mercy.
Curious why you suspect this. Is it particularly mammalian in some respect? I confess I could be naive, but it seems to me that any sufficiently intelligent being/agent would be just as likely as we humans are to have humor. I suppose that raises the question of how likely it is, and whether we are just incredibly lucky to have inherited such a trait. Still, it's such a core aspect of so much of our species -- even more than mercy, I think! -- that I'm curious why you think that.
Eliezer, you claim that there is no necessity that we accept as fair Dennis's claim that he should get the whole pie. I agree.
There is also no necessity that he accept our alternative claim as fair.
There is no abstract notion that is inherently fair. What there is, is that when people do reach agreement that something is fair, then they have a little bit more of a society. And when they can't agree about what's fair they have a little less of a society. There is nothing that says ahead of time that they must have that society. There is nothing that says ahe...
Though Eliezer does not say it explicitly today, the totality of his public pronouncements on laughter leads me to believe that he considers laughter an intrinsic good of very high order. I hope he does not expect me to accept the highness of the probability of the rareness of humor in the universe as evidence for humor's intrinsic goodness. After all, spines are probably very rare in the universe, too. At least spines with 32 (or however many humans have) vertebrae are.
Eliezer does not explicitly say today that happiness is an intrinsic good, but he d...
I find Eliezer's seemingly-completely-unsupported belief in the rightness of human benevolence, as opposed to sorting pebbles, pretty scary.
@J Thomas: "Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?"
Because they have a specific argument which leads them to believe that?
You know, there's no reason why one couldn't consider one language more efficient at communication than others, at least by human benchmarks, all else being equal (how well people know the language, etc.). Ditto for morality.
Thomas, you are running into the same problem Eliezer is: you can't have a convincing argument about w...
Eliezer can reply that moral conclusions are different, so the sermon does not apply. Well, I think it should apply, in certain cases, such as when you are contemplating the launch of the seed of a superintelligence, which is an occasion that IMO demands a complete reevaluation of one's terminal values and the terminal values of one's society.
I find Eliezer's seemingly-completely-unsupported belief in the rightness of human benevolence, as opposed to sorting pebbles, pretty scary.
...
Richard: "Eliezer answer might refer to the difference between the simplicity of Autobliss 1.0 and the complexity of a human."
I'm pretty sure he wouldn't say that. Rather, the claim (if I'm reading him correctly) is that the true referent of good really is a really complicated bundle of human values. In a material universe, you can't cash out "intrinsic goodness" in the intuitive way.
I'm really having trouble understanding how this isn't tantamount to moral relativism -- or indeed moral nihilism. The whole point of "morality" is that it's supposed to provide a way of arbitrating between beings, or groups, with different interests -- such as ourselves and Pebblesorters. Once you give up on that idea, you're reduced, as in this post, to the tribalist position of arguing that we humans should pursue our own interests, and the Pebblesorters be damned. When a conflict arises (as it inevitably will), the winner will then be whoever has the bigger guns, or builds AI first.
Mind you, I don't disagree that this is the situation in which we in fact find ourselves. But we should be honest about the implications. The concept of "morality" is entirely population-specific: when groups of individuals with common interests come into contact, "morality" is the label they give to their common interests. So for us humans, "morality" is art, music, science, compassion, etc.; in short, all the things that we humans (as opposed to Pebblesorters) like. This is what I understand Eliezer to be arguing. But if this is your position, you may as well ...
Roko,
"UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X." Roko,
You know that all of the so-called 'UIVs' that have been postulated only apply for some Y under some conditions (the presence of other powerful agents and game theoretic considerations or manipulation, self-referential utility functions, preferences over mathematical truths, and many other considerations make so-called UIVs useless or sources of terminal disvalue for an infinite number of cases), and an agent could have the ter...
There has never been, so far as I am able to determine, any force so unfriendly to humans as humans. Yet we read day after day about one very smart man's philosophizing about the essence of humanity, supposedly so that it can be included in the essence of fAI. Wouldn't it be incredible if tomorrow, or sometime in the near future, someone who has been working and has actually come up with some designs for fAI or AGI produces a real product, and it makes all the hubris of these responses irrelevant? What is the purpose of an intelligence that is able to take all t...
Clarification: in the first paragraph of the above comment, when I wrote "The whole point of 'morality' is..." what I meant was "The whole point of non-relativist 'morality' is...".
"I find Eliezer's seemingly-completely-unsupported belief in the rightness of human benevolence, as opposed to sorting pebbles, pretty scary."
Kip,
Given Eliezer's definition of rightness (which is different from current object-level views), if there is a sufficiently cogent and convincing argument for pebblesorting, then pebblesorting is both right and p-right. Do you think that there is a significant chance you would ever view pebblesorting as k-right with expanded intelligence and study?
It looks like fairness can be said to be f-morality, built from current morality so that it is known to be sufficiently stable under reflection (that is, (meta)*-f-moral), and as moral as possible. While we travel the road of moral progress, avoiding getting trapped in the simplistic ditches of fake moralities, we need a solid target for agreement, and this is what a particular fairness is. Morality unfolds in a moral way, while casting a shadow of unfolded fairness.
Yes, Z.M., human happiness is not what Eliezer plans to use the superintelligence to maximize. Good to make that clear. But it might be worthwhile to question the intrinsic goodness of human happiness, as a warm-up to questioning the coherent extrapolated volition (CEV) of the humans.
But most of all - why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good? What is even the appeal of this, morally or otherwise? At all?
I don't think you ought to try to optimise fitness. Your opinion about fitness might be quite wrong, even if you accept the goal of optimising fitness. Say you make sacrifices trying to optimise fitness and then it turns out you failed. Like, you try to optimise for intelligence just before a plague hits that kills 3/4 of the population. You should have optimised for...
"Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?"
Because they have a specific argument which leads them to believe that?
Sure, but have you ever seen such an argument that wasn't obviously fallacious? I have not seen one yet; the fallacy has been utterly obvious every time.
Thomas, you are running into the same problem Eliezer is: you can't have a convincing argument about what is fair, versus what is not fair, if you don't explicitly define "fair" in the f...
Re: why on Earth would any human being think that one ought to optimize inclusive genetic fitness
"Ought" is a word that only makes sense in the context of an existing optimisation strategy. As far as biologists can reasonably tell, the optimisation strategy of organisms involves maximising their inclusive genetic fitness. So the short answer to this is: because nature built them that way.
The bigger puzzle is not why organisms act to maximise their inclusive genetic fitness, but why they sometimes systematically fail to do so. What cognitive ma...
For evolution being wasteful, see: http://alife.co.uk/essays/evolution_is_good/
For evolution being stupid, see: http://alife.co.uk/essays/evolution_sees/
komponisto: "I'm really having trouble understanding how this isn't tantamount to moral relativism"
I think I see an element of confusion here in the definition of moral relativism. A moral relativist holds that "no universal standard exists by which to assess an ethical proposition's truth". However, the word universal in this context (moral philosophy) is only expected to apply to all possible humans, not all conceivable intelligent beings. (Of all the famous moral relativist philosophers, how many have addressed the morals of general ...
You can apply the standard of goodness to all intelligent beings, no problem. It's just that they won't apply it to themselves.
The content of "good" is an abstracted idealized dynamic, or as Steven put it, a rigid designator (albeit a self-modifying rigid designator). Thus what is good, or what is not good, is potentially as objective as whether a pile of pebbles is prime. It is just that not every possible optimization process, or every possible mind, does what is good. That's all.
I wish Arnie's character had made a long speech in the middle of the film explaining that Predator wasn't evil or even wrong, he was just working around a different optimization process.
@HA
I'm fine with a galaxy without humor, music, or art. I'd sacrifice all of that...to maximize my persistence odds as a subjective conscious entity....
So existing is a terminal value in and of itself for you, HA. Wouldn't you get bored? Or would you try to excise your boredom circuits, along with your humour, music and art circuits? How about your compassion circuits? Do yo...
In the real world, everything worth having comes from someone's effort -- even wild fruit has to be picked, sorted, and cleaned, and fish need to be caught, gutted, etc. I think this universal fact of required effort is probably part of the data we get the concept of fairness from in the first place, so if you reason in a space where pies pop into existence from nothing, whatever you conclude might not be applicable to the real world anyway.
Ben, you write "Do you strive for the condition of perfect, empty, value-less ghost in the machine, just for its own sake...?".
But my previous post clearly answered that question: "I'd sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that "dilemma" was presented to me."
It is pretty clever to suggest objective morality without specifying an actual moral code, as it is always the specifics that cause problems.
My issue would be how Eliezer appears to suggest that human morality and alien morality could be judged separately from the genetics of each. Would super intelligent alien bees have the same notions of fairness as we do, and could we simply transplant our morality onto them, and judge them accordingly, with no adjustments made for biological differences? I think it is very likely that such a species would consider ...
To be fair (cough), your argument that '5 people means the pie should be divided into 5 equal parts' assumes several things...
1) Each person, by virtue of merely being there, is entitled to pie.
2) Each person, by virtue of merely being there, is entitled to the same amount of pie as every other person.
While this division of the pie may be preferable for the health of the collective psyche, it is still a completely arbitrary (cough) way to divide the pie. There are several other meaningful, rational, logical ways to divide the pie. (I believe I suggested on...
Eliezer: "I really don't consider myself a moral relativist - not even in the slightest!"
Meta-ethical relativism (wikipedia)
Meta-ethical relativists, in general, believe that the descriptive properties of terms such as "good", "bad", "right", and "wrong" do not stand subject to universal truth conditions, but only to societal convention and personal preference. Given the same set of verifiable facts, some societies or individuals will have a fundamental disagreement about what one ought to do based on socie...
Lakshmi, Eliezer does have a point, though.
While there are many competing moral justifications for different ways to divide the pie, and while a moral relativist can say that no one of them is objectively correct, still many human beings will choose one. Not even a moral relativist is obligated to refrain from choosing moral standards. Indeed, someone who is intensely aware that he has chosen his standards may feel much more intensely that they are his than someone who believes they are a moral absolute that all honest and intelligent people are obligated ...
As far up as you go, there's no level that calls for unconditional surrender.
I see no reason to presume that a concept of fairness would never require that one involved entity cede their demands and give in.
Meta-ethical relativists, in general, believe that the descriptive properties of terms such as "good", "bad", "right", and "wrong" do not stand subject to universal truth conditions, but only to societal convention and personal preference. Given the same set of verifiable facts, some societies or individuals will have a fundamental disagreement about what one ought to do based on societal or individual norms, and one cannot adjudicate these using some independent standard of evaluation. The latter standard will alway...
@Eliezer: "what one ought to do" vs. "what one p-ought to do"
Suppose that the pebblesorter civilization and the human civilization meet, and (fairly predictably) engage in a violent and bitter war for control of the galaxy. Why can you not resolve this war by bringing the pebblesorters and the humans to a negotiating table and telling them "humans do what they ought to do and Pebblesorters do what they p-ought to do"?
You cannot play this trick because p-ought is grounded in what the pebblesorters actually do, which is in turn ...
You cannot play this trick because p-ought is grounded in what the pebblesorters actually do, which is in turn grounded in the state of the universe they aim for, which is the same universe that we live in. The humans and the pebblesorters seem to be disagreeing about something as they fight each other.
Does a human being disagree with natural selection? About what, exactly? How would we argue natural selection into agreement with us?
Standard game theory talks about interactions between agents with different goals. It does not presume that all agents mu...
Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox's stomach. They may argue passionately about the rabbit's fate - and even stoop to violence.
Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox's stomach. They may argue passionately about the rabbit's fate - and even stoop to violence.
Really? I would be interested in hearing their philosophical arguments then as for why the rabbit should be eating grass or the rabbit should be in the fox's stomach. I understand, of course, that the rabbit does eat grass and that the fox does hunt the rabbit, but I was not aware ...
Eliezer: No, "good" is defined as that which leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc.
I would be interested in hearing their philosophical arguments then as for why the rabbit should be eating grass or the rabbit should be in the fox's stomach. I understand, of course, that the rabbit does eat grass and that the fox does hunt the rabbit, but I was not aware that these were persuasive moral arguments.
They are to the parties in question:
The rabbit argues that if it is eaten by the fox, then it will die - and that should not happen.
The fox argues that if it doesn't eat rabbits, then it will die - and that should not happen.
Neither considers ...
@Eli: All attempts to justify an ethical theory take place against a background of what-constitutes-justification. You, for example, seem to think that calling something "universally instrumental" constitutes a justification for it being a terminal value, whereas for me this is a nonstarter. For every mind that thinks that terminal value Y follows from moral argument X, there will be an equal and opposite mind who thinks that terminal value not-Y follows from moral argument X. I do indeed have a word for theories that deny this: I call them "...
Eliezer, I think I kind-of understand by now why you don't call yourself a relativist. Would you say that it's the "psychological unity of mankind" that distinguishes you from relativists?
A relativist would stress that humans in different cultures all have different - though perhaps related - ideas about "good" and "right" and so on. I believe your position is that the bulk of human minds are similar enough that they would arrive at the same conclusions given enough time and access to enough facts; and therefore, that it's an ...
It would seem that you are not distinguishing between what a system does and what it should do.
In my book, there's not really any such thing as what a system should do.
Should only makes sense with respect to the morals of some agent.
If you don't specify an agent, should becomes an extremely vague and ambiguous term.
Should statements are not about what happens, but about the desirability of what might happen - according to the moral system of some agent.
Concerning the charge of relativism: it seems clear that Eliezer is a moral relativist in the way that the term is normally understood, but not as he understands it. There may be a legitimate dispute here, but as far as communication goes, we should not be having problems. In deference to common usage, I would reserve right for the moral realism of Roko et al. and use something like h-right for Eliezer's notion of humanity's abstracted idealized dynamic--but I don't think it really matters right now.
Roko writes: "My list is the current human notion of...
Eliezer, I think you come closer to sharing my understanding of morality than anyone else I've ever met. Places where I disagree with you:
First, as a purely communicative matter, I think you'd be clearer if you replaced all instances of "right" and "good" with "E-right" and "E-good."
Second, as I commented a couple threads back, I think you grossly overestimate the psychological unity of humankind. Thus I think that, say, E-right is not at all the same as J-right (although they're much more similar than either is to...
Odd how, despite the psychological unity of mankind and the ease of 'extrapolating' human volition, these discussions always seem to end in establishing specialized words to refer to the perceptions and beliefs of specific individuals.
Why "ought" vs. "p-ought" instead of "h-ought" vs. "p-ought"?
Sure, it might just be terminology. But change
"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right."
to
"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right."
and the difference between "because it is the human one" and "because it is h-right" sounds a lot less convincing.
To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.
Nominull, don't the primalists have a morality about heaps of stones?
They believe there are right ways and wrong ways to do it. They sometimes disagree about the details of which ways are right and they punish each other for doing it wrong.
How is that different from morality?
If you've ever taken a mathematics course in school, you yourself may have been introduced to a situation where it was believed that there were right and wrong ways to factor a number into primes. Unless you were an exceptionally good student, you may have disagreed with your teacher over the details of which way was right, and been punished for doing it wrong.
It strikes me as plainly apparent that math homework is not morality.
To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.
Thank you, Nominull. I'm glad someone gets it, anyway.
If you've ever taken a mathematics course in school, you yourself may have been introduced to a situation where it was believed that there were right and wrong ways to factor a number into primes. Unless you were an exceptionally good student, you may have disagreed with your teacher over the details of which way was right, and been punished for doing it wrong.
My experience with math classes was much different from yours. When we had a disagreement, the teacher said, "How would we tell who's right? Do you have a proof? Do you have a counter-example?"...
To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.
But by Eliezer's standards, it's impossible for anyone to be a relativist about anything.
Consider what Einstein means when he says time and space are relative. He doesn't mean you can just say whatever you want about them, he means that they're relative to a certain reference frame. An observer on Earth may think it's five years since a spaceship launched, and an observer on the spaceship may think it's only been one, and each of them is correct relative to their reference frame.
We could define "time" to mean "time as it passes on Earth, where the majority of humans live." Then an observer on Earth is objectively correct to believe that five years have passed since the launch. An observer on the spaceship who said "One year has passed" would be wrong; he'd really mean "One s-year has passed." Then we could say time and space weren't really relative at all, and people on the groun...
Thanks, Yvain. Comparing well-understood special relativity to things characterized as "subjective" helps to clarify the sense in which they are really "objective", but look different to different minds and are meaningless without any mind at all. You need a reference frame, and a phenomenon does look different in different reference frames, but there are strict and consistent rules for converting between reference frames.
I think it's more akin to saying that "easy" could just as well mean difficult in some alien language, and so words don't mean anything and language is a farce. That's the true linguistic relativist position.
Yvain, I don't see why I would care about this thing you would call "moral", or refer to it often enough to justify such a short name.
People keep using the term "moral relativism". I did a Google search of the site and got a variety of topics with the term dating from 2007 and 2008. Here's what it means to me.
Relative moral relativism means you affirm that to the best of your knowledge nobody has demonstrated any sort of absolute morality. That people differ in moralities, and if there's anything objective to say one is right and another is wrong that you haven't seen it. That very likely these different moralities are good for different purposes and different circumstances, an...
Submitted humbly for consideration: Ayn Rand is to libertarianism as Greg Egan is to transhumanism as Eliezer Yudkowsky is to moral relativism?
"But most of all - why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good?"
You are asking why anyone would choose life rather than what is good. Inclusive genetic fitness is just the long term form of life, as personal survival is the short-term form.
The answer is, of course, that one should not. By definition, one should always choose what is good. However, while there are times when it is right to give up one's life for a greater good, they are the exception. Most of the time, life is a subgoal of what is good, so there is no conflict.
A sidenote:
Eliezer: "It has something to do with natural selection never choosing a single act of mercy, of grace, even when it would cost its purpose nothing: not auto-anesthetizing a wounded and dying gazelle, when its pain no longer serves even the adaptive purpose that first created pain."
It always costs something; it is cheaper to build a gazelle that always feels pain than one that does so only until some conditions are met. This is related to the case of supposing a spaceship that has passed out of your lightcone still exists.
Natural select...
Yvian: "So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right."
- well said. Modulo Eliezer's lack of explicitness about his definition of "h-right", I fail to see how the human perspective could be anything other than h-right. This post is just an applause light for the values that we currently like, and I think that that is a bad sign.
If human values were so great, you wouldn't have to artificially make them look better by saying things like
"So which...
Z.M Davis: "Submitted humbly for consideration: Ayn Rand is to libertarianism as Greg Egan is to transhumanism as Eliezer Yudkowsky is to moral relativism?"
- not sure I get this... Rand abhorred libertarianism because she thought it was half-baked and amateurish, but actually she is a libertarian; Egan spoke out against transhumanism because, um, he thinks we're all crackpots, but actually he's a transhumanist; Yudkowsky speaks out against moral relativism, but actually he's the canonical example of a relativist. Ah, yes, ok.
Spot on, Z.M. Seconded.
Michael Anissimov, August 14, 2008 at 10:14 PM asked me to expound.
Sure. I don't want to write smug little quips without explaining myself. Perhaps I'm wrong.
It's difficult to engage Eliezer in debate/argument, even in a constructive as opposed to adversarial way, because he writes so much material, and uses so many unfamiliar terms. So, my disagreement may just be based on an inadequate appreciation of his full writings (e.g. I don't read every word he posts on overcomingbias; although I think doing so would probably be good for my mind, and I eagerly ...
Kip Werking: "P2. But, all we have to prove that giving to charity, etc., is right, is that everyone thinks it is"
You're stating that there exists no other way to prove that giving to charity is right. That's an omniscient claim.
Still, it's unlikely to be defeated in the space of a comment thread, simply because your sweeping generalization about the goodness of charity is far from being universally accepted. A very general claim like that, with no concrete scenario, no background information on where it is to be applied, makes relativism a fore...
I think that all morality is, is just a way to be sure that the persons close to me rank high in the prisoner's dilemma game, and to assure others that I rank high too, even higher than I really am.
Everything done by evolution, and by intellectual and religious thinking, has served this purpose.
The same boy who rationalized his way into believing there was a chocolate cake in the asteroid belt should know better than to rationalize himself into believing it is right to prefer joy over sorrow.
Obviously, he does know. So the next question is, why does he present material that he knows is wrong?
Professional mathematicians and scientists try not to do that because it makes them look bad. If you present a proof that's wrong then other mathematicians might embarrass you at parties. But maybe Eliezer is immune to that kind of embarrassment. Socrates pres...
I do apologize for coming late to the party; I've been reading, and really feel like I'm missing an inferential step that someone can point me towards.
I'll try to briefly summarize, knowing that I'll gloss over some details; hopefully, the details so glossed over will help anyone who wishes to help me find the missing step.
It seems to me that Eliezer's philosophy of morality (as presented in the metaethics sequence) is: morality is the computation which decides which action is right (or which of N actions is the most right) by determining which action maxi...
My first problem (which may well be a missed inferential step) is with the assumed universality, within humanity, of a system of goals.
From what I've seen, others have the same objection; I do as well, and I have not seen an adequate response.
how is it that humans have discovered "right" while the Pebble-people have discovered only "p-right"? Even if I grant the assertion that all humans are using the same fundamental morality, and Alice and Bob would necessarily agree if they had access to the same information, how is it that humans have discovered "right" and not "h-right"?
From what I understand, everyone except Eliezer is more likely to hold the view that he found "h-right", but he seems unwilling to call it that even when pressed on the matter. It's another point on which I agree with your confusion.
as I understand it, Eliezer's morality simply says "do whatever the computation tells you to do" without offering any help on what that computation actually looks like
We don't have quite the skill to articulate it just yet, but possibly AI and neuroscience will help. If not, we might be in trouble.
...As I said, I
OK... let me see if I'm following.
The idea, trying to rely on as few problematic words as possible, is:
There exists a class of computations M which sort proposed actions/states/strategies into an order, and which among humans underlie the inclination to label certain actions/states "good", "bad", "right", "wrong", "moral", etc.(4) For convenience I'll label the class of all labels like that "M-labels."
If two beings B1 and B2 don't implement (1) a common M-instance, M-labels may not be meanin
So I really must deny the charges of moral relativism: I don't think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that. We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don't. Just as the Pebblesorters are p-better than us, because they care about pebble heaps, and we don't. Human morality is p-arbitrary, but who cares? P-arbitrariness is arbitrary.
Is the Logically Omniscient Reasoner agreeing that human morality is not h-arbitrary, or that it is not lor-arbitrary?
How do we know that The LOR (ha! walked into that one) isn't a Pebblesorter?
What p-bothers me (sorry couldn't resist!) about this approach is that "rightness" nowhere explicitly refers to "others", i.e. other conscious beings / consciousness-moments. Isn't there an interesting difference between a heap of eight pebbles (very p-bad) and a human getting tortured (very bad)? Concerning the latter, we can point to that human's first-person-perspective directly evaluating its current conscious state and concluding that the state is bad, i.e. that the person wants to get the hell out of it. This is a source of disval...
You, the human, might say we really should pursue beauty and laughter and love (which is clearly very important), and that we p-should sort pebbles (but that doesn't really matter). And that our way of life is really better than the Pebblesorters, although their way of life has the utterly irrelevant property of being p-better.
But the Pebblesorters would say we h-should pursue beauty and laughter and love (boring!), and that we really should sort pebbles (which is the self-evident meaning of life). Further, they will say their way of life is really better ...
Followup to: Is Fairness Arbitrary?, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps
Yesterday, I presented the idea that when only five people are present, having just stumbled across a pie in the woods (a naturally growing pie, that just popped out of the ground) then it is fair to give Dennis only 1/5th of this pie, even if Dennis persistently claims that it is fair for him to get the whole thing. Furthermore, it is meta-fair to follow such a symmetrical division procedure, even if Dennis insists that he ought to dictate the division procedure.
Fair, meta-fair, or meta-meta-fair, there is no level of fairness where you're obliged to concede everything to Dennis, without reciprocation or compensation, just because he demands it.
Which goes to say that fairness has a meaning beyond "that which everyone can be convinced is 'fair'". The latter is an empty proposition, isomorphic to "Xyblz is that which everyone can be convinced is 'xyblz'". There must be some specific thing of which people are being convinced; and once you identify that thing, it has a meaning beyond agreements and convincing.
You're not introducing something arbitrary, something un-fair, in refusing to concede everything to Dennis. You are being fair, and meta-fair and meta-meta-fair. As far up as you go, there's no level that calls for unconditional surrender. The stars do not judge between you and Dennis—but it is baked into the very question that is asked, when you ask, "What is fair?" as opposed to "What is xyblz?"
Ah, but why should you be fair, rather than xyblz? Let us concede that Dennis cannot validly persuade us, on any level, that it is fair for him to dictate terms and give himself the whole pie; but perhaps he could argue whether we should be fair?
The hidden agenda of the whole discussion of fairness, of course, is that good-ness and right-ness and should-ness, ground out similarly to fairness.
Natural selection optimizes for inclusive genetic fitness. This is not a disagreement with humans about what is good. It is simply that natural selection does not do what is good: it optimizes for inclusive genetic fitness.
Well, since some optimization processes optimize for inclusive genetic fitness, instead of what is good, which should we do, ourselves?
I know my answer to this question. It has something to do with natural selection being a terribly wasteful and stupid and inefficient process. It has something to do with elephants starving to death in their old age when they wear out their last set of teeth. It has something to do with natural selection never choosing a single act of mercy, of grace, even when it would cost its purpose nothing: not auto-anesthetizing a wounded and dying gazelle, when its pain no longer serves even the adaptive purpose that first created pain. Evolution had to happen sometime in the history of the universe, because that's the only way that intelligence could first come into being, without brains to make brains; but now that era is over, and good riddance.
But most of all—why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good? What is even the appeal of this, morally or otherwise? At all? I know people who claim to think like this, and I wonder what wrong turn they made in their cognitive history, and I wonder how to get them to snap out of it.
When we take a step back from fairness, and ask if we should be fair, the answer may not always be yes. Maybe sometimes we should be merciful. But if you ask if it is meta-fair to be fair, the answer will generally be yes. Even if someone else wants you to be unfair in their favor, or claims to disagree about what is "fair", it will still generally be meta-fair to be fair, even if you can't make the Other agree. By the same token, if you ask if we meta-should do what we should, rather than something else, the answer is yes. Even if some other agent or optimization process does not do what is right, that doesn't change what is meta-right.
And this is not "arbitrary" in the sense of rolling dice, not "arbitrary" in the sense that justification is expected and then not found. The accusations that I level against evolution are not merely pulled from a hat; they are expressions of morality as I understand it. They are merely moral, and there is nothing mere about that.
In "Arbitrary" I finished by saying:
This was to help shake people loose of the idea that if any two possible minds can say or do different things, then it must all be arbitrary. Different minds may have different ideas of what's "arbitrary", so clearly this whole business of "arbitrariness" is arbitrary, and we should ignore it. After all, Sinned (the anti-Dennis) just always says "Morality isn't arbitrary!" no matter how you try to persuade her otherwise, so clearly you're just being arbitrary in saying that morality is arbitrary.
From the perspective of a human, saying that one should sort pebbles into prime-numbered heaps is arbitrary—it's the sort of act you'd expect to come with a justification attached, but there isn't any justification.
From the perspective of a Pebblesorter, saying that one p-should scatter a heap of 38 pebbles into two heaps of 19 pebbles is not p-arbitrary at all—it's the most p-important thing in the world, and fully p-justified by the intuitively obvious fact that a heap of 19 pebbles is p-correct and a heap of 38 pebbles is not.
So which perspective should we adopt? I answer that I see no reason at all why I should start sorting pebble-heaps. It strikes me as a completely pointless activity. Better to engage in art, or music, or science, or heck, better to connive political plots of terrifying dark elegance, than to sort pebbles into prime-numbered heaps. A galaxy transformed into pebbles and sorted into prime-numbered heaps would be just plain boring.
The Pebblesorters, of course, would only reason that music is p-pointless because it doesn't help you sort pebbles into heaps; the human activity of humor is not only p-pointless but just plain p-bizarre and p-incomprehensible; and most of all, the human vision of a galaxy in which agents are running around experiencing positive reinforcement but not sorting any pebbles, is a vision of an utterly p-arbitrary galaxy devoid of p-purpose. The Pebblesorters would gladly sacrifice their lives to create a P-Friendly AI that sorted the galaxy on their behalf; it would be the most p-profound statement they could make about the p-meaning of their lives.
So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right. I do not know perfectly what is right, but neither can I plead entire ignorance.
And the Pebblesorters, who simply are not built to do what is right, choose the Pebblesorting perspective: not merely because it is theirs, or because they think they can get away with being p-arbitrary, but because that is what is p-right.
And in fact, both we and the Pebblesorters can agree on all these points. We can agree that sorting pebbles into prime-numbered heaps is arbitrary and unjustified, but not p-arbitrary or p-unjustified; that it is the sort of thing an agent p-should do, but not the sort of thing an agent should do.
I fully expect that even if there is other life in the universe only a few trillions of lightyears away (I don't think it's local, or we would have seen it by now), that we humans are the only creatures for a long long way indeed who are built to do what is right. That may be a moral miracle, but it is not a causal miracle.
There may be some other evolved races, a sizable fraction perhaps, maybe even a majority, who do some right things. Our executing adaptation of compassion is not so far removed from the game theory that gave it birth; it might be a common adaptation. But laughter, I suspect, may be rarer by far than mercy. What would a galactic civilization be like, if it had sympathy, but never a moment of humor? A little more boring, perhaps, by our standards.
This humanity that we find ourselves in, is a great gift. It may not be a great p-gift, but who cares about p-gifts?
So I really must deny the charges of moral relativism: I don't think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that. We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don't. Just as the Pebblesorters are p-better than us, because they care about pebble heaps, and we don't. Human morality is p-arbitrary, but who cares? P-arbitrariness is arbitrary.
You've just got to avoid thinking that the words "better" and "p-better", or "moral" and "p-moral", are talking about the same thing—because then you might think that the Pebblesorters were coming to different conclusions than us about the same thing—and then you might be tempted to think that our own morals were arbitrary. Which, of course, they're not.
Yes, I really truly do believe that humanity is better than the Pebblesorters! I am not being sarcastic, I really do believe that. I am not playing games by redefining "good" or "arbitrary", I think I mean the same thing by those terms as everyone else. When you understand that I am genuinely sincere about that, you will understand my metaethics. I really don't consider myself a moral relativist—not even in the slightest!
Part of The Metaethics Sequence
Next post: "You Provably Can't Trust Yourself"
Previous post: "Is Fairness Arbitrary?"