To fully confront the ontological crisis that we face, we would have to upgrade our world model to be based on actual physics, and simultaneously translate our utility functions so that their domain is the set of possible states of the new model. We currently have little idea how to accomplish this, and instead what we do in practice is, as far as I can tell, keep our ontologies intact and utility functions unchanged, but just add some new heuristics that in certain limited circumstances call out to new physics formulas to better update/extrapolate our models. This is actually rather clever, because it lets us make use of updated understandings of physics without ever having to, for instance, decide exactly what patterns of particle movements constitute pain or pleasure, or what patterns constitute oneself.
This seems true and important.
should we be very sanguine that for humans everything must "add up to moral normality"?
No. People do go out of their minds on nihilism now and then.
And I've already seen two LWers who have discovered such compassion for the suffering of animals that they want to exterminate them all except for a few coddled pets; one of the two isn't sure that humans should exist at all.
Less dramatically, any number of Buddhists have persuaded themselves that they don't exist, although I'm not sure how many just believe that they believe that.
"If nothing really exists and it's all just emptiness and even the emptiness is empty of existence, then how could I have killed all those nonexistent people with my nonexistent hands?
"Tell it to the nonexistent prison walls, buddy."
I went through my nihilism crisis a little over two years ago. I was depressed and sad, and didn't see any point in existing. After about two weeks, I realized what was going on - that I had kicked that last pillar of "the universe has meaning" out from under my model of the world. It was odd that something which seemed so trivial could have so much of an impact on my well-being. Prior to that experience, I would not have expected it.
But once I realized that was the problem, once I realized that life had no point, things changed. The problem at that point simply became, "why do I exist, and why do I care?" The answer I came up with is that I exist because the universe happens to be set up this way. And I care (about any/everything) simply because my genetics, atoms, molecules, and processing architecture are set up in a way that happens to care.
This was good enough. In fact, it's turned out to be better than what I had before. I love life, I want to experience things, I want to contribute, I want to maximize the utility function that is partially of my own making and partially not.
Getting through to true nihilism can be difficult, and I can see many people not having the ability to do so. But in my case, it has served me well, as my model of the world is now more accurate.
Buddhism merely states that there's a psychological continuum in which there is nothing unchanging. The "self" that's precluded is just an unchanging one. (That said, in the Abhidharma there are unchanging elements from which this psychological continuum is constituted.) The Mahayana doctrine of emptiness (which isn't common to all Buddhism, just the schools that are now found in the Himalayas and East Asia) essentially states that everything is without inherent existence; things only exist as conditioned phenomena in relation to other things, nothing can exist in or of itself because this would preclude change. It's essentially a restatement of impermanence (everything is subject to change) with the addition of interdependence. So I'd imagine few Buddhists have convinced themselves they don't exist.
There's a pretty drastic difference between a rock and a human mind. Rocks don't appear to change just by looking at them. Thoughts do.
Timescale and proximity.
You can't really contrast anything with the workings of your mind, because it's all you ever see.
I don't see the difficulty. I contrast how I feel when I first wake up in the morning (half dead) and how I feel half an hour later (alive). I contrast myself before and after a glass of beer. When I drive a car, I notice if I am making errors of judgement. While I am sure I am not perfect at seeing my own flaws, to the extent that I do, it's a routine sort of thing, not a revelation.
I believe that this particular flaw is chiefly responsible for a good deal, if not all, of human suffering, as well as ambition. I don't know if that belief can be used towards anything as categorically practical as transfiguration, but it certainly is useful for solving ontological and existential crises and improving happiness and peace of mind.
So the Buddhists say, but I've done a fair amount of meditation and never noticed any connection between contemplating my interior life and the presence or absence of suffering. Neither has the experience thrown up so much as a speedbump, never mind a serious challenge to anything. Wow, maybe I'm naturally enlightened already! Except I wouldn't say my life manifested any evidence of that.
An ontology consisting of Gods, Self, Other People, and Dumb Matter just isn't very different from one consisting of Self, Other People, and Dumb Matter
Benja just posted a neat proof of why, if your preferences don't satisfy the axiom of continuity in von Neumann-Morgenstern utility theory, your rational behavior would be almost everywhere identical to the behavior of someone whose continuity-satisfying preferences simply ignore the "lower priority" aspects of yours. E.g. if you prefer "X torture plus N dust specks" over "Y torture plus M dust specks" whenever X < Y (regardless of N and M), and also whenever X == Y and N < M, then you might as well ignore the existence of dust specks, because in practical questions there's always going to be some epsilon of probability separating X from Y.
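To make the "almost everywhere identical" point concrete, here's a small sketch of my own (not Benja's proof, and with the simplifying assumption that the lexical agent compares expected torture first and only breaks exact ties with expected dust specks):

```python
# Toy lotteries: lists of (probability, torture, dust_specks) outcomes.

def expectations(lottery):
    """Expected torture and expected dust specks of a lottery."""
    e_torture = sum(p * t for p, t, s in lottery)
    e_specks = sum(p * s for p, t, s in lottery)
    return e_torture, e_specks

def lexical_prefers(a, b):
    """Lexical agent: torture dominates; specks only break exact ties."""
    ta, sa = expectations(a)
    tb, sb = expectations(b)
    if ta != tb:
        return ta < tb
    return sa < sb

def torture_only_prefers(a, b):
    """Agent that ignores dust specks altogether."""
    return expectations(a)[0] < expectations(b)[0]

# A 0.001 chance of one extra unit of torture in lottery `a` means the
# million extra dust specks in `b` never get a say: both agents pick `b`.
a = [(0.999, 10, 5), (0.001, 11, 0)]
b = [(1.0, 10, 1_000_000)]
print(lexical_prefers(b, a), torture_only_prefers(b, a))  # True True
```

Exact ties in expected torture are the only place the two agents come apart, and those are the measure-zero cases.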
But now what if instead of "torture" and "dust specks" we have a lexical preference ordering on "displeasure of God(s)" and "everything else bad", and then we remove the former from the picture? Suddenly the parts of probability space that you were previously ignoring (except indirectly insofar as you tried to reflect the preferences of God(s) regarding ...
To fully confront the ontological crisis that we face
What ontological crisis is that? The rest of the article is written in general terms, but this phrase suggests you have a specific one in mind, but without actually being specific. Is there some ontological crisis that you are facing, that moved this article?
Personally, learning that apples are made of atoms doesn't give me any difficulty in eating them.
...To fully confront the ontological crisis that we face, we would have to upgrade our world model to be based on actual physics, and simultaneously translate our utility functions so that their domain is the set of possible states of the new model. We currently have little idea how to accomplish this, and instead what we do in practice is, as far as I can tell, keep our ontologies intact and utility functions unchanged, but just add some new heuristics that in certain limited circumstances call out to new physics formulas to better update/extrapolate our models...
This seems to be one of the biggest problems for FAI... keeping a utility function constant in a self-modifying agent is hard enough, but keeping it the same over a different domain... well, that's real hard.
Actually, there might be three outcomes:
To fully confront the ontological crisis that we face, we would have to upgrade our world model to be based on actual physics, and simultaneously translate our utility functions so that their domain is the set of possible states of the new model.
Once upon a time, developmental psychology claimed that human babies learned object permanence as they aged. I don't know if that's still the dominant opinion, but it seems at least possible to me, a way that the world could be, if not the way it is. What would that mean, for a baby to go from not having a sens...
Do we have any examples of humans successfully navigating an ontological crisis?
How do you define "successfully"?
For example, all the disagreement over "free will" seems to be because many humans have a sense of morality which presumes some sense of an extraphysical free will. Confronted with evidence that we are physical systems, some people resort to claiming that we aren't actually physical systems, others modify their conception of free will to be compatible with us being physical systems, and some declare that since we are phys...
When your preferences operate over high-level things in your map, the problem is not that they don't talk about the real world. Because there is a specific way in which the map gets determined by the world, in a certain sense they already do talk about the real world, you just didn't originally know how to interpret them in this way. You can compose the process that takes the world and produces your map with the process that takes your map and the high-level things in it and produces a value judgement, obtaining a process that takes the world and produces a value judgement.
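A minimal sketch of that composition, with made-up stand-ins for both processes (`perceive` takes a world state to a high-level map, `map_utility` is defined only over maps, and composing them yields a judgement over world states):

```python
def perceive(world_state):
    """Toy world -> map process: bucket raw particle data into a 'person'
    object with a coarse 'pain' attribute."""
    energy = sum(world_state["particle_energies"])
    return {"person": {"pain": energy > 100.0}}

def map_utility(world_map):
    """Preference that only talks about high-level map objects."""
    return 0.0 if world_map["person"]["pain"] else 1.0

def world_utility(world_state):
    """The composed process: world -> map -> value judgement."""
    return map_utility(perceive(world_state))

print(world_utility({"particle_energies": [30.0, 40.0, 50.0]}))  # 0.0
```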
I recently realized that a couple of problems that I've been thinking over (the nature of selfishness and the nature of pain/pleasure/suffering/happiness) can be considered instances of ontological crises in humans (although I'm not so sure we necessarily have the cognitive algorithms to solve them).
Uploading also seems like it's going to spawn a whole lot of ontological crises in humans. Suppose you value equality and want to weigh everyone's views equally, either via an ordinary democratic process or something more exotic like CEV. So what do you do w...
Given that we do not actually have at hand a solution to ontological crises in general or to the specific crisis that we face, what's wrong with saying that the solution set may just be null?
My answer is that this seems like a severely underdetermined problem, not an overdetermined one. If we start from the requirement that the change-gathering robot still gathers change in the ancestral environment, we're at least starting with lots of indeterminacy, and I don't think there's some tipping point as we add desiderata. Unless we take the tragic step of requiring our translated morality to be the only possible translation :D
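A toy picture of that indeterminacy (all the particle-level details here are hypothetical): two candidate translations that agree on every ancestral state but come apart on exotic ones.

```python
# Two candidate particle-level utility functions for the change-gathering
# robot. They agree wherever the old robot ever operated, yet diverge on
# states the old ontology never distinguished.

def candidate_a(state):
    return state["copper_atoms"] / 1e22          # values copper as such

def candidate_b(state):
    # Values copper only when it is arranged into disc shapes.
    return state["copper_atoms"] / 1e22 if state["disc_shaped"] else 0.0

ancestral = {"copper_atoms": 3e22, "disc_shaped": True}   # ordinary coins
exotic = {"copper_atoms": 3e22, "disc_shaped": False}     # molten copper blob

print(candidate_a(ancestral), candidate_b(ancestral))  # 3.0 3.0 (agree)
print(candidate_a(exotic), candidate_b(exotic))        # 3.0 0.0 (diverge)
```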
Do we have any examples of humans successfully navigating an ontological crisis?
I bet a number of LW regulars have done so. I commented earlier (and rather too often) on how my ontology shifted from realism to instrumentalism after the standard definitions of the terms "reality" and "existence" proved unsatisfactory for me. Certainly being exposed to the stuff discussed on this forum ought to have had a profound impact at least on some others, as well.
Actually, dealing with a component of your ontology not being real seems like a far harder problem than the problem of such a component not being fundamental.
According to the Great Reductionist Thesis everything real can be reduced to a mix of physical reference and logical reference. In which case, if every component of your ontology is real, you can obtain a formulation of your utility function in terms of fundamental things.
The case where some components of your ontology can't be reduced because they're not real and where your utility function refer exp...
An ontological crisis would indicate a serious problem. I don't see one.
There may be weird boundary cases where our base ontologies have a problem, but I don't find applying morality a daily struggle.
Do you have concrete examples of serious problems with applying morality in the real world today?
Do you have concrete examples of serious problems with applying morality in the real world today?
When is a fetus or baby capable of feeling pain (that has moral disvalue)? What about (non-human) animals?
...By the way, I think nihilism often gets short changed around here. Given that we do not actually have at hand a solution to ontological crises in general or to the specific crisis that we face, what's wrong with saying that the solution set may just be null? Given that evolution doesn't constitute a particularly benevolent and farsighted designer, perhaps we may not be able to do much better than that poor spare-change collecting robot? If Eliezer is worried that actual AIs facing actual ontological crises could do worse than just crash, should we be very sanguine that for humans everything must "add up to moral normality"?
Imagine a robot that was designed to find and collect spare change around its owner's house. It had a world model where macroscopic everyday objects are ontologically primitive and ruled by high-school-like physics and (for humans and their pets) rudimentary psychology and animal behavior. Its goals were expressed as a utility function over this world model, which was sufficient for its designed purpose. All went well until one day, a prankster decided to "upgrade" the robot's world model to be based on modern particle physics. This unfortunately caused the robot's utility function to instantly throw a domain error exception (since its inputs are no longer the expected list of macroscopic objects and associated properties like shape and color), thus crashing the controlling AI.
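For concreteness, here is one way to picture that crash, as a toy sketch (the structures are hypothetical, not anything resembling a real robot's code):

```python
# The utility function expects a list of macroscopic objects with 'kind'
# and 'value' fields; handing it a particle-level state raises an error.

def change_collecting_utility(world_model):
    total = 0.0
    for obj in world_model:
        if obj["kind"] == "coin":      # KeyError on a particle-level state
            total += obj["value"]
    return total

old_model = [{"kind": "coin", "value": 0.25}, {"kind": "sofa", "value": 0.0}]
print(change_collecting_utility(old_model))  # 0.25

new_model = [{"particle": "electron", "position": (0.1, 2.3, -4.5)}]
try:
    change_collecting_utility(new_model)
except KeyError as err:
    print("domain error: the utility function can't parse this ontology", err)
```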
According to Peter de Blanc, who used the phrase "ontological crisis" to describe this kind of problem,
I recently realized that a couple of problems that I've been thinking over (the nature of selfishness and the nature of pain/pleasure/suffering/happiness) can be considered instances of ontological crises in humans (although I'm not so sure we necessarily have the cognitive algorithms to solve them). I started thinking in this direction after writing this comment:
What struck me is that even though the world doesn't divide cleanly into these 3 parts, our models of the world actually do. In the world models that we humans use on a day to day basis, and over which our utility functions seem to be defined (to the extent that we can be said to have utility functions at all), we do take the Self, Other People, and various Dumb Matter to be ontologically primitive entities. Our world models, like the coin collecting robot's, consist of these macroscopic objects ruled by a hodgepodge of heuristics and prediction algorithms, rather than microscopic particles governed by a coherent set of laws of physics.
For example, the amount of pain someone is experiencing doesn't seem to exist in the real world as an XML tag attached to some "person entity", but that's pretty much how our models of the world work, and perhaps more importantly, that's what our utility functions expect their inputs to look like (as opposed to, say, a list of particles and their positions and velocities). Similarly, a human can be selfish just by treating the object labeled "SELF" in its world model differently from other objects, whereas an AI with a world model consisting of microscopic particles would need to somehow inherit or learn a detailed description of itself in order to be selfish.
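A toy rendering of the selfishness point (again with hypothetical structures): in a human-style model, being selfish is just special-casing the entity tagged "SELF", a handle that a particle-level model simply doesn't offer.

```python
# World model as a list of high-level entities, each with a label and a
# pain attribute "attached" to it, XML-tag style.

def selfish_utility(world_model):
    total = 0.0
    for entity in world_model:
        # Selfishness in one line: weight the SELF-tagged entity more.
        weight = 10.0 if entity["label"] == "SELF" else 1.0
        total += weight * (-entity["pain"])
    return total

model = [
    {"label": "SELF", "pain": 0.2},
    {"label": "ALICE", "pain": 0.9},
]
print(selfish_utility(model))  # -2.9: own pain counts ten times as much
```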
To fully confront the ontological crisis that we face, we would have to upgrade our world model to be based on actual physics, and simultaneously translate our utility functions so that their domain is the set of possible states of the new model. We currently have little idea how to accomplish this, and instead what we do in practice is, as far as I can tell, keep our ontologies intact and utility functions unchanged, but just add some new heuristics that in certain limited circumstances call out to new physics formulas to better update/extrapolate our models. This is actually rather clever, because it lets us make use of updated understandings of physics without ever having to, for instance, decide exactly what patterns of particle movements constitute pain or pleasure, or what patterns constitute oneself. Nevertheless, this approach hardly seems capable of being extended to work in a future where many people may have nontraditional mind architectures, or have a zillion copies of themselves running on all kinds of strange substrates, or be merged into amorphous group minds with no clear boundaries between individuals.
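My rough gloss on that patching strategy, as a sketch with hypothetical details: the ontology and the utility function stay put, and a special-case heuristic calls out to a physics formula only to make better predictions about the same old objects.

```python
# The utility function still ranges over macroscopic objects; only the
# prediction step gets a physics-informed patch.

G = 9.81  # m/s^2

def predict_next_state(objects):
    """Folk-physics update, with one heuristic that defers to real physics."""
    updated = []
    for obj in objects:
        obj = dict(obj)
        if obj.get("falling"):
            # Patch: use actual kinematics instead of "things drift downward".
            obj["height"] -= 0.5 * G * obj["dt"] ** 2
        updated.append(obj)
    return updated

def utility(objects):
    """Unchanged: still asks high-level questions about high-level objects."""
    return sum(1.0 for obj in objects if obj.get("intact", True))

state = [{"falling": True, "height": 10.0, "dt": 1.0, "intact": True}]
print(utility(predict_next_state(state)))  # 1.0, and no domain error anywhere
```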
By the way, I think nihilism often gets short changed around here. Given that we do not actually have at hand a solution to ontological crises in general or to the specific crisis that we face, what's wrong with saying that the solution set may just be null? Given that evolution doesn't constitute a particularly benevolent and farsighted designer, perhaps we may not be able to do much better than that poor spare-change collecting robot? If Eliezer is worried that actual AIs facing actual ontological crises could do worse than just crash, should we be very sanguine that for humans everything must "add up to moral normality"?
To expand a bit more on this possibility, many people have an aversion against moral arbitrariness, so we need at a minimum a utility translation scheme that's principled enough to pass that filter. But our existing world models are a hodgepodge put together by evolution so there may not be any such sufficiently principled scheme, which (if other approaches to solving moral philosophy also don't pan out) would leave us with legitimate feelings of "existential angst" and nihilism. One could perhaps still argue that any current such feelings are premature, but maybe some people have stronger intuitions than others that these problems are unsolvable?
Do we have any examples of humans successfully navigating an ontological crisis? The LessWrong Wiki mentions loss of faith in God:
But I don't think loss of faith in God actually constitutes an ontological crisis, or if it does, certainly not a very severe one. An ontology consisting of Gods, Self, Other People, and Dumb Matter just isn't very different from one consisting of Self, Other People, and Dumb Matter (the latter could just be considered a special case of the former with quantity of Gods being 0), especially when you compare either ontology to one made of microscopic particles or even less familiar entities.
But to end on a more positive note, realizing that seemingly unrelated problems are actually instances of a more general problem gives some hope that by "going meta" we can find a solution to all of these problems at once. Maybe we can solve many ethical problems simultaneously by discovering some generic algorithm that can be used by an agent to transition from any ontology to another?
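One family of candidate schemes, sketched very loosely (the details are mine, not anything taken from de Blanc's paper): map each new-ontology state to the old-ontology state whose predicted observations match best, then pull the old utility function back through that map.

```python
def translate_utility(old_utility, old_states, observe_old, observe_new):
    """Return a utility function over new-ontology states by matching
    predicted observations against the old ontology."""
    def new_utility(new_state):
        target = observe_new(new_state)
        # Pick the old state whose predicted observation is closest.
        best_old = min(old_states, key=lambda s: abs(observe_old(s) - target))
        return old_utility(best_old)
    return new_utility

# Toy example: old states are coin counts; new states are particle counts.
old_states = [0, 1, 2, 3]                         # coins in the tray
old_utility = lambda coins: float(coins)          # more coins is better
observe_old = lambda coins: coins * 5.0           # predicted scale reading
observe_new = lambda particles: particles / 1e20  # same scale, new model

u_new = translate_utility(old_utility, old_states, observe_old, observe_new)
print(u_new(1.0e21))  # 2.0: inherits the utility of the old "2 coins" state
```

Whether any such scheme is principled enough to avoid the arbitrariness worry raised above is, of course, exactly the open question.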
(Note that I'm not saying this is the right way to understand one's real preferences/morality, but just drawing attention to it as a possible alternative to other more "object level" or "purely philosophical" approaches. See also this previous discussion, which I recalled after writing most of the above.)