Note that there's a similar problem in the free will debate:

Incompatibilist: "Well, if a godlike being can fix the entire life story of the universe, including your own life story, just by setting the rules of physics and the initial conditions, then you can't have free will."

Compatibilist: "But in order to do that, the godlike being would have to model the people in the universe so well that the models are people themselves. So there will still be un-modeled people living in a spontaneous way that wasn't designed by the godlike being. (And if you say that the godlike being models the models too, the same problem arises at the next iteration; you can't win that race, incompatibilist; it's turtles all the way down.)"

Incompatibilist: "I'm not sure that's true. Maybe you can have models of human behavior that don't themselves result in people. But even if that's true, people don't create themselves from scratch. Their entire life stories are fixed by their environment and heredity, so to speak. You may have eliminated the rhetorical device used to make my point, but the point itself remains true."

At which point, the two parties should decide what "free will" even means.

The "problem" seems based on several assumptions:

  1. that there is an objectively best state of the world, to which a Friendly AI should steer the universe
  2. that pulling the plug on a Virtual Universe containing persons is wrong
  3. that there is something special about "persons," and we should try to keep them in the universe and/or make more of them

I'm not sure any of these are true. Regarding 3, even if there is an X that is special, and that we should keep in the universe, I'm not sure "persons" is it. Maybe it is something simpler: "pleasure-feeling-stuff" or "happiness-feeling-stuff." Even if there is a best state of the universe, I'm not sure there are any persons in it at all. Or perhaps only one.

In other words, our ethical views (to the extent that godlike minds can sustain any) might find that "persons" are coincidental containers for ethically-relevant-stuff, and not the ethically-relevant-stuff itself.

The notion that we should try to maximize the number of people in the world, perhaps in order to maximize the amount of happiness in the world, has always struck me as taking the Darwinian carrot-on-a-stick one step too far.

Michael Anissimov (August 14, 2008 at 10:14 PM) asked me to expound.

Sure. I don't want to write smug little quips without explaining myself. Perhaps I'm wrong.

It's difficult to engage Eliezer in debate/argument, even in a constructive as opposed to adversarial way, because he writes so much material and uses so many unfamiliar terms. So my disagreement may just be based on an inadequate appreciation of his full writings (e.g., I don't read every word he posts on Overcoming Bias, although I think doing so would probably be good for my mind, and I eagerly look forward to reading any book he writes).

Let me just say that I'm a skeptic about moral realism (a moral "anti-realist"). I think there is no fact of the matter about what we should or should not do. In this tradition, I find the most agreement with Mackie (historically) and Joshua Greene at Harvard (today). I think Eliezer might benefit greatly from reading both of them. You can find Greene's Ph.D. thesis here:

http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-Dissertation.pdf

It's worth reading in its entirety.

Why am I a moral skeptic? Before I give good reasons, let me suggest some possibly bad ones: it's a "shocking" and unpopular position, and I certainly love to be a gadfly. So, if Eliezer and I do have a real disagreement here, it may be drawn along the same lines as our disagreement about free will: Eliezer seems to have strong compatibilist leanings, and I'm more inclined towards non-realism about free will. Thus, Eliezer may be inclined to resist shocking or uncomfortable truths, or I may be overly eager to find them. That's one possible (bad) reason for my moral skepticism.

I certainly believe that any philosophical investigation which leads people to generally safe and comfortable positions, in which common sense is vindicated, should give us pause. And people who see the philosopher's role as vindicating common sense, and making cherished beliefs safe for the world, are dishonoring the history of philosophy and doing a disservice to themselves and the world. To succeed in that project, fully at least, one must engage in the sort of rationalization Eliezer has condemned over and over.

Now let me give my good reasons:

P1. An essential aspect of what it means for something to be morally right is that it is not morally right merely because everyone agrees that it is. Thus, everyone agrees that if giving to charity, or sorting pebbles, is morally right, it is not right just because everyone says that it is. It is right in some deeper sense.

P2. But all the evidence we have that giving to charity, etc., is right is that everyone thinks it is (to the extent they do, which is not 100%).

You might say: well, giving to charity increases the sum amount of happiness in the world, or is more fair, or follows some Kantian rule. But, then again, we ask: why? And the only answer seems to be that everyone agrees that happiness should be maximized, or fairness maximized, or that rule followed. But, as we said when we started, the fact that everyone agreed wasn't a good enough reason.

So we're left with reasons which we already agree are not good enough. We can only get around this through fancy rationalization, and in particular by forgetting P1.
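To make the shape of that argument explicit, here is one way to schematize it (the notation is mine, not anything from the original posts; Right(a) and Agree(a) are just shorthand for "a is morally right" and "everyone agrees a is right"):

$$
\begin{array}{ll}
\text{P1:} & \text{Right}(a)\ \text{is not constituted by}\ \text{Agree}(a)\\
\text{P2:} & \text{Agree}(a)\ \text{is the only evidence ever offered for}\ \text{Right}(a)\\
\hline
\text{C:} & \text{No claim}\ \text{Right}(a)\ \text{is supported by the kind of reason P1 demands}
\end{array}
$$

Proposed deeper grounds (maximizing happiness, fairness, a Kantian rule) re-enter at P2, because each of them is in turn defended only by appeal to agreement.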

Eliezer offers his own reasons for believing something is right:

"The human one, of course; not because it is the human one, but because it is right. I do not know perfectly what is right, but neither can I plead entire ignorance."

What horribly circular logic is that? It's right because it's right?

The last few words present a link to another article. And there you find quotes like these:

"Why not accept that, ceteris paribus, joy is preferable to sorrow?"

"You might later find some ground within yourself or built upon yourself with which to criticize this - but why not accept it for now? Not just as a personal preference, mind you; but as something baked into the question you ask when you ask "What is truly right"?"

"Are you willing to relinquish your Socratean ignorance?"

This is special pleading. It is hand-waving. It is the sort of insubstantial poetic waxing that pastors use to captivate their audiences, and young men use to romance young women. It is a sweet nothing. It should make you feel like you're being dealt with by a used car salesman; that's how I feel when I read it.

The question isn't "why not prefer joy over sorrow?" That's a wild card that can justify anything (just flip it around: "why not prefer sorrow over joy?"). You might not find a decisive reason against preferring joy to sorrow, but that's just because you're not going to find a decisive reason to believe anything is right or wrong. Any given thing might make the world happier, or follow a popular rule, but what makes that "right"? Nothing. The problem above, involving P1 and P2, does not go away.

The content of morality is not baked into the definitions of words in our moral vocabulary, either (as Eliezer implies when he writes: "you will have problems with the meaning of your words, not just their plausibility"---another link). Definitions are made by agreement and, remember, P1 says that something can't be moral just because everyone agrees that it is moral. The language of morality just refers to what we should do. The words themselves, and their definitions, are silent about the content of that morality, about the things we should actually do.

So I seem to disagree with Eliezer quite substantially about morality, and in a similar way to how we disagree about free will.

Finally, I can answer the question: what scares me about Eliezer's view? Certainly not that he loves joy and abhors suffering so much. Believe me when I say, about his mission to make the universe one big orgasm: godspeed.

Rather, it's his apparent willingness to compromise his rationalist and critical-thinking principles in the process. The same boy who rationalized his way into believing there was a chocolate cake in the asteroid belt should know better than to rationalize himself into believing it is right to prefer joy over sorrow.

What he says sounds nice, and sexy, and appealing. No doubt many people would like for it to be true. As far as I can tell, it generally vindicates common sense. But at what cost?

Joy feels better than sorrow. We can promote joy instead of sorrow. We will feel much better for doing so. Nobody will be able to criticize us for doing the wrong thing. The world will be one big orgasm. Let's satisfy ourselves with that. Let's satisfy ourselves with the merely real.

I find Eliezer's seemingly-completely-unsupported belief in the rightness of human benevolence, as opposed to sorting pebbles, pretty scary.

"I can't abjure my own operating system."

We don't need to get into thorny issues involving free will and what you can or can't do.

Suffice it to say that something's being in our DNA is neither sufficient nor necessary for it to be moral. The tablet and our DNA are relevantly similar in this respect.

I should add: when discussing morality, I think it's important to give the anti-realist's position some consideration (which doesn't seem to happen in the post above). See Joshua Greene's The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It, and J.L. Mackie's Ethics: Inventing Right and Wrong.

As far as I can tell, Eliezer is concluding that he should trust part of his instincts about morality because, if he doesn't, then he won't know anything about it.

There are multiple arguments here that need to be considered:

  1. If one doesn't know anything about morality, then that would be bad; I wanna know something about morality, therefore it's at least somewhat knowable. This argument is obviously wrong, when stated plainly, but there are hints of it in Eliezer's post.

  2. If one doesn't know anything about morality, then that can't be morality, because morality is inherently knowable (or knowable by definition). But why is morality inherently knowable? I think one can properly challenge this idea. It seems prima facie plausible that morality, and/or its content, could be entirely unknown, at least for a brief period of time.

  3. If one doesn't know anything about morality, then morality is no different from a tablet saying "thou shalt murder." This might be Eliezer's primary concern. However, this is a concern about arbitrariness, not a concern about knowability. The two concerns seem to me to be orthogonal to each other (although I'd be interested to hear reasons why they are not). An easy way to see this is to recognize that the subtle intuitions Eliezer wants to sanction as "moral" are just as arbitrary as the "thou shalt murder" precept on the tablet. That is, there seems to be no principled reason for regarding one, and not the other, as non-arbitrary. In both cases, the moral content is discovered, not chosen; one just happens to be discovered in our DNA rather than on a tablet.

So, in view of all three arguments, it seems to me that morality, in the strong sense Eliezer is concerned with, might very well be unknowable, or at least not always even partly known in principle. (And we should probably concern ourselves with the strong sense, even if it is more difficult to work with, if our goal is to build an AI to rewrite the entire universe according to our moral code of choice, whatever that may turn out to be.) This was his original position, it seems, and it was motivated by concerns about "mere evolution" that I still find quite compelling.

Note that, if I understand Eliezer's view correctly, he currently plans on using a "collective volition" approach to Friendly AI, whereby the AI will want to do whatever very-very-very-very smart future versions of human beings want it to do (this is a crude paraphrase). I think this would resolve the concerns I raise above: such a smart AI would recognize the rightness or wrongness of any arguments against Eliezer's view, like those I raise above, as well as countless others, and respond appropriately.

This is one of my favorite quotes (and one of only two I post on my Facebook page, the other being "The way to love something is to realize that it might be lost," which is cited at the top of the scarcity chapter in Cialdini's Influence).

I'm not sure if I interpret it the same way as Schopenhauer (who was batsh** crazy as far as I can tell), but I take it to mean this:

Control bottoms out. In the race between A, "things influencing/determining how you decide/think/act" and B, "your control over these things that influence/determine how you decide/think/act", A will always win. The desire for infinite control, control that doesn't bottom out, that bootstraps itself out of nothingness (what some people have associated with free will), is doomed to frustration.

[In fact, Einstein cites exactly this quote in explaining why he didn't believe in free will: "In human freedom in the philosophical sense I am definitely a disbeliever. Everybody acts not only under external compulsion but also in accordance with inner necessity. Schopenhauer's saying, that "a man can do as he will, but not will as he will," has been an inspiration to me since my youth up, and a continual consolation and unfailing well-spring of patience in the face of the hardships of life, my own and others'. This feeling mercifully mitigates the sense of responsibility which so easily becomes paralyzing, and it prevents us from taking ourselves and other people too seriously; it conduces to a view of life in which humour, above all, has its due place."]

Schopenhauer draws the line between action and will: we choose how we act, given our will, but we don't choose how we will. Many would take issue with that. But it doesn't really matter where you draw the line; the point is that eventually the line will be drawn. Someone might say: "Oh, I choose how I will!" And then Schopenhauer might say (I like to think): "Oh really? And what is this choice based on? Did you choose that?"

To some people, the fact that we don't have this ultimate control (free will, if you like) is obvious. "Of course we don't have that kind of free will, it's obviously non-existent, because it's logically impossible." But not all necessary truths are obvious, and most people are happy to believe in logical impossibilities---just pick up a philosophy of religion book and read about the many paradoxes associated with a perfectly loving, just, omnipresent, and omnipotent (etc.) God.

Note also that Schopenhauer's insight has a consequence: because everything we do, our entire lives, can be traced back to things entirely outside of our control, it follows that a sufficiently powerful and intelligent being could design our entire lives before we are born. Our entire life story, down to the last detail, could have been predetermined and preprogrammed (assuming the universe is deterministic in the right way). Most people don't realize how interesting Schopenhauer's insight is, or at least the kernel of truth I think it captures, until it is phrased in those dramatic terms.
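To illustrate the point, here is a minimal sketch (in Python; the update rule and the numbers are arbitrary toy choices of mine, not anything from the discussion) of how, in a deterministic system, the rules plus the initial conditions fix the whole trajectory:

```python
def step(state: int) -> int:
    """One deterministic update rule (an arbitrary toy example)."""
    return (state * 1103515245 + 12345) % (2**31)

def trajectory(initial_state: int, steps: int) -> list[int]:
    """The entire history is a pure function of the initial state."""
    history = [initial_state]
    for _ in range(steps):
        history.append(step(history[-1]))
    return history

# Whoever fixes the rule and the initial condition has thereby fixed
# every later state, down to the last detail:
assert trajectory(42, 1000) == trajectory(42, 1000)
```

Nothing in the sketch chooses anything along the way; once `step` and `initial_state` are set, the whole "life story" is settled.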

I'm already convinced that nothing is right or wrong in the absolute sense most people (and religions) imply.

So what do I do? Whatever I want. Right now, I'm posting a comment to a blog. Why? Not because it's right. Right or wrong has nothing to do with it. I just want to.

"You might as well say that you can't possibly choose to run into the burning orphanage, because your decision was fully determined by the future fact that the child was saved."

I don't see how that even begins to follow from what I've said, which is just that the future was fixed2 before I was born. The fixed2 future might be that I choose to save the child, and that I do so. That is all consistent with my claim; I'm not denying that anyone chooses anything.

"If you are going to talk about causal structure, the present screens off the past."

If only that were true! Unfortunately, even non-specialists have no difficulty tracing causal chains well into the past. The present might screen off the past if the laws of physics were asymmetrical (if multiple pasts could map onto the same future)---but this is precisely what you deny in the same comment. The present doesn't screen off the past. A casual observation of a billiards game shows this: ball A moves and hits ball B, which moves and hits ball C, which moves and hits ball D, and so on. (Caledion makes the same point above.)
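For readers unfamiliar with the term: "screens off" is standardly read as a conditional-independence (Markov) claim. A minimal formal statement, with Past, Present, and Future as generic state variables (my rendering, not a quote from the thread):

$$P(\text{Future} \mid \text{Present}, \text{Past}) = P(\text{Future} \mid \text{Present})$$

Whether the physical present actually satisfies this condition is exactly what is in dispute here.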

I'm not sure how long you're willing to keep the dialogue going (as Honderich says, "this problem gets a hold on you" and doesn't let go), but I appreciate your responses. There's a link from the Garden of Forking Paths here now, too.
