(Or possibly the worst kind of zombie. But still, metaphorically.)

Since I was a kid, as far back as I can remember having thought about the issue at all, the basic arguments against existential angst have seemed obvious to me.

I used to express it something like: "If nothing really matters [ie, values aren’t objective, or however I put it back then], then it doesn't matter that nothing matters. If I choose to hold something as important, I can't be wrong."

However, a few months ago, it occurred to me to apply another principle of rationality to the issue, and that actually caused me to start having problems with existential angst!

I don't know if we have a snappy name for the principle, but this is my favorite expression of it:

"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments.  But if you’re interested in producing truth, you will fix your opponents’ arguments for them.  To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

[I first read it used as the epigraph to Yvain's "Least Convenient Possible World". Call it, what, "Fight your own zombies"?]

Sure, "The universe is a mere dance of particles, therefore your hopes and dreams are meaningless and you should just go off yourself to avoid the pain and struggle of trying to realize them" is a pretty stupid argument, easily dispatched.

But... what if it contains the seed for a ravenous, undead, stone-cold sense-making monster?

I just got the feeling that maybe it did, and I was having a lot of trouble pinning down what exactly it could be so that I could either refute it or prove that the line of thought didn't actually go anywhere in the first place.

Now, I had just suffered a disappointing setback in my life goals, which obviously supports the idea that the philosophical issues weren’t fundamental to my real problems. I knew this, but that didn’t stop the problem. The sense of dread that maybe there was something to this existential angst thing was playing havoc with all my old techniques for picking myself up, re-motivating myself, and getting back to work!

In the end, I never quite managed to pin it down to my full satisfaction. I more-or-less managed to express my worries to myself, refuted those half-formed reasons to fear, and that more-or-less let me move on.

Has anyone else ever had similar problems? And if so, how did you express your fears, and how did you refute them?

For myself, the best I could come up with was that I was worried that my own utility function was somehow inconsistent with itself and/or what was really possible. (And I don’t mean like propositional values, of course, but the real involuntary basics that are part of who you are as a human being.)

To use a non-emotionally-charged analogy, say you had a being that valued spending its life enjoying eating broccoli. Except it turns out that it didn’t really like broccoli. And whether or not its values prohibited modifying itself and/or broccoli, it was nowhere near having the technology to do so anyway. So it was going to be in internal emotional conflict for a long time.

So maybe it should trade off a slight short-term intensification of the internal conflict in order to drastically shorten the total period of conflict. By violating its value of self-preservation and committing painless suicide ASAP.

And while the being is not particularly enthusiastic about killing itself, it starts to worry that maybe its reluctance is really just a form of akrasia. It wonders whether, deep down, it really knows that suicide is realistically the best option, but anticipates feeling so awful if it committed to that path firmly enough to actually prepare for it, even though it would only have to suffer through the short period of preparation.

Broccoli being an analogy for... meaningful human relationships or something?

Now as to the counter-arguments I came up with-- well, what would you come up with? Make your own zombies out of my hasty sketch of one, and figure out how to strike it down.

Quite honestly, expressing your existential angst in terms of broccoli probably helps a bunch in itself!


say you had a being that valued spending its life enjoying eating broccoli. Except it turns out that it didn’t really like broccoli. [...] So it was going to be in internal emotional conflict for a long time.

If that being is anything like human, I would predict that there's a lot more to it than that, beneath the surface.

I strongly recommend talking to someone who knows more about you personally or someone who's good at talking people through their problems or just anyone in real-time. Less Wrong is great for some purposes, but it may not be what you most need right now.

Ha, I aint exactly about to off myself any time soon! :P

I said this was a problem I more-or-less fixed for myself.

The bits of it that could be handled off of lesswrong, I did.

I'm not looking for counseling here. I'm looking to see how other people try to solve the philosophical problem.

That's good to hear. I myself have never experienced this particular problem.

In my opinion, it's no use engaging in high-powered epistemic-rationality techniques like "optimize your opponent's argument for him" if the cost of arriving at the truth outweighs the benefits of having it. Clearly, if the process is emotionally damaging to you, and the expected payoff is comparatively low, it's not rational to engage the problem at all.

The way I see it, our brains go through a lot of trouble to make us believe we're important and our values matter. We are also the dominant species on the planet, so I'd hold out for a very good payoff before I'd start questioning that.

Another angle is: what difference in sensory experience do you anticipate if we are completely irrelevant on a grand scale? None? Then clearly that part of your map isn't for arriving at predictions about reality, and updating it will not make you more effective anyways.
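To make that trade-off concrete, here's a minimal sketch (the probabilities and payoffs are made-up numbers, purely for illustration): treat engaging the problem as a bet with an expected epistemic payoff and an expected emotional cost, and only take the bet when the payoff wins.

```python
# Toy illustration of "only engage the problem if it's worth it".
# All quantities here are invented for the sake of the example.

def worth_engaging(p_useful_insight, value_of_insight, emotional_cost):
    """Engage only if the expected payoff of the truth beats the expected emotional cost."""
    return p_useful_insight * value_of_insight > emotional_cost

# A small chance of a modest insight, at a high emotional price:
print(worth_engaging(p_useful_insight=0.05, value_of_insight=10, emotional_cost=3))  # False
```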

...I don't think "don't engage the problem at all" is really a viable option. Once you've taken the red pill, you can't really go back up the rabbit hole, right?

My original problem immediately made me think, "Okay, this conclusion is totally bumming me out, but I'm pretty sure it's coming from an incomplete application of logic". So I went with that and more-or-less solved it. I could do with having my solution more succinctly expressed, in a snappy, catchy sentence or two, but it seems to work. What I'm asking here is, has anybody else had to solve this problem, and how did they do it?

what difference in sensory experience do you anticipate if we are completely irrelevant on a grand scale? None?

...What? We already know that we're completely "irrelevant" on any scale, in the sense that there is no universal utility function hardwired into the laws of physics. Discriminating between oughts and is-es is pretty basic.

The question is not whether our human utility functions are universally "true". We already know they aren't, because they don't have an external truth value.

The question is, are our values internally consistent? How do you prove that they don't eat themselves from the inside out, or prove that such a problem doesn't even make sense?

...I don't think "don't engage the problem at all" is really a viable option. Once you've taken the red pill, you can't really go back up the rabbit hole, right?

You don't have to fall down it and smash out your brains at the bottom either.

The question is, are our values internally consistent? How do you prove that they don't eat themselves from the inside out, or prove that such a problem doesn't even make sense?

This is a basilisk that appears in many forms. For children, it's "How do you know there isn't a monster under the bed?" For horror readers, it's "How do you know that everyday life is anything more than a terrifyingly fragile veneer over unspeakable horrors that would instantly drive us mad if we so much as suspected their existence?" For theists, "How do you know God in His omnibenevolence passing human understanding doesn't torture every sentient being after death for all eternity?" For AGI researchers, "How do you know that a truly Friendly AI wouldn't in Its omnibenevolence passing human understanding reanimate every sentient being and torture them for all eternity?" For utilitarians, "How can we calculate utility, when we are responsible for the entire future lightcone not only of ourselves but of every being sufficiently like us anywhere in the universe?" For philosophers, "How can we ever know anything?" "Does anything really exist?"

It's all down to how to deal with not having an answer. The fact is, there is no ultimate foundation for anything: you will always have questions that you currently have no answer to, because it is easier to question an answer than to answer a question (as the parents of any small child know). Terror about what the unknown answers might be doesn't help. I prefer to just look directly at the problem instead of haring off after solutions I know I can't find, and then Ignore it.

...I don't think "don't engage the problem at all" is really a viable option. Once you've taken the red pill, you can't really go back up the rabbit hole, right?

You don't have to fall down it and smash out your brains at the bottom either.

I... don't think that metaphor actually connects to any choices you can actually make.

Don't get me wrong, I'm not against ignoring things. There are so many things you could pay attention to and the vast majority simply aren't worth it.

But when you find yourself afraid, or uneasy, or upset, I don't think you should ignore it. I don't think you really can.

There's got to be some thought or belief that's disturbing you (usually, caveats, blah blah), and you've got to track it down and nail it to the ground. Maybe it's a real external problem you've got to solve, maybe it's a flawed belief you've got to convincingly disprove to yourself, or at least honestly convince yourself that the problem belongs to the huge class of things that aren't worth worrying about.

But if that's the correct solution to a problem, just convincing yourself it aint worth worrying about, you've still got to arrive at that conclusion as an actual belief, by actually thinking about it. You can't just decide to believe it, cuz that'd just be belief in belief, and that doesn't really seem to work, emotionally, even for people who haven't generalized the concept of belief in belief.

Anyway, the more I've had to practice articulating what was bothering me (that bit about our/my values logically auto-cannibalizing themselves), the more I've come to actually believe that it's not worth worrying about. (It no longer feels particularly likely to really mean much, and even if it did, why would it apply to me?)

So when you said:

I prefer to just [A] look directly at the problem instead of [B] haring off after solutions I know I can't find, and then [C] Ignore it.

Yeah, that's pretty much exactly what I was doing when I wrote this post. Just that in order to effectively reach [C:Ignore], you've got to properly do [A:look at problem] first, and [B: try looking for solutions] is part of that.

Hm. So the challenge here, then, is to construct an argument with a premise that "Nothing really matters" and a conclusion of "Existential Angst" that would negate the standard objection of "If nothing matters, then I am allowed to have subjective values that things do matter, and I am not provably wrong in doing so."

This seems like it will take a bit of mental gymnastics: the bottom line of the argument is already filled in, but I will try.

So, somehow, it has to be argued that even if nothing matters, that you are not allowed to just posit subjective values.

I suppose the best argument for that might go something like:

You are not a purely rational agent that can divide things so neatly. Your brain is for chasing gazelles and arguing about who gets the most meat, not high-level theories of value. As such, it doesn't parse the difference between "subjective" and "objective" value systems in the way you want. When you say "subjective values," your brain doesn't really interpret it that way; it treats it in a manner identical to how it would treat an objective value system. What you're really doing is guarding your brain against existential angst: you give it an unfalsifiable "angst protection" term by putting an artificial label of "subjective" in front of your value system. It still doesn't really matter; you are just cleverly fooling yourself because you don't want to face your angst. That's fine if all you care about is use, but you claim to care about truth: and the truth is that nothing matters, including your so-called "Meaningful personal relationships," "Doing good," or "Being happy."

Hm. That wasn't actually as difficult as I thought it would be. Thank you, brain, for being so good at clever arguments.

I seem to have constructed something of a "stronger zombie opponent" here. I've also figured out its weak point, but I am curious to see who kills it and how.

Heh, yeah, it's kind of an odd case in which the fact that you want to write a particular bottom line before you begin is quite possibly an argument for that bottom line?

Quite honestly that zombie doesn't even seem to be animated to me. My ability to discriminate 'ises' and 'oughts' as two distinct types feels pretty damn natural and instinctive to me.

What bothered me was the question of whether my oughts were internally inconsistent.

Ah. Perhaps I talked around the issue of that zombie, rather than at it directly:

The specific issue I was getting at is that even if your moral "ought" isn't based in some state of the world (an "is"), you will treat it like it is: you will act like your "oughts" are basic, even when they aren't. You will treat your oughts as if they matter outside of your own head, because as a human brain you are good at fooling yourself.

To put it another way: would you treat your oughts any different if they DID turn out to be based in some metaphysically basic truth somehow?

If the answer is no, then you are treating your 'subjective' values the same as you would 'objective' ones. So applying the 'subjective' label doesn't pull any weight: your values don't really matter, and thus depression and angst are simply the natural path to take once you know how the world works.

(Note: I am not actually arguing something I believe here: I am just letting the zombie get in a few good swings. I don't actually think it is true and already have a couple of tracks against it. But I would be a poor rhetorical necromancer if I let my argument-zombies fall apart too easily.)

would you treat your oughts any different if they DID turn out to be based in some metaphysically basic truth somehow?

I... can't even answer that, because I can't conceive of a way in which that COULD be true. What would it even MEAN?

Still seems like a harmless corpse to me. I mean, not to knock your frankenskillz, but it seems like sewing butterfly wings onto a dead earthworm and putting it on top of a 9 volt battery. XD

I can conjure a few scenarios. Imagine that you expected to find Valutrons, subatomic particles that impose "value" onto things: the things you value are such because they have more Valutrons, and the things you don't do not. Or imagine that Omega comes up to you and tells you that there is a "true value" associated with ordinary objects. If you discovered that your values were based in something that was non-subjective, would you treat those oughts any differently?

...I guess I would get valutron-dynamics worked out, and engineer systems to yield maximum valutron output?

Except that I'd only do that if I believed it would be a handy shortcut to a higher output from my internal utility function that I already hold as subjectively "correct".

ie, if valutron physics somehow gave a low yield for making delicious food for friends, and a high yield for knifing everyone, I would still support good hosting and oppose stabbytimes.

So the answer is, apparently, even if the scenarios you conjure came to pass, I would still treat oughts and is-es as distinctly different types from each other, in the same way I do now.

But I still can't equate those scenarios with giving any meaning to "values having some metaphysically basic truth".

Although the valutron-engineering idea seems like a good idea for a fun absurdist sci-fi short story =]

I also agree that it would make a great absurdist sci-fi story. Reminds me of something Vonnegut would have written.

Well, the trick would be that it couldn't be counter to experience: you would never find yourself actually valuing, say, cooking-for-friends as more valuable than knifing-your-friends if knifing-your-friends carried more valutrons. You might expect more from cooking-for-friends and be surprised at the valutron-output for knifing-your-friends. In fact, that'd be one way to tell the difference between "valutrons cause value" and "I value valutrons.": in the latter scenario you might be surprised by valutron output, but not by your subjective values. In the former, you would actually be surprised to find that you valued certain things, which correlated with a high valutron output.

But that's pretty much where things already stand. We don't find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between "subjective" and "objective" ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.

That's one of the zombie's weak points, anyway.
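Here's a toy way to state that test in code (valutrons are fictional, and the function names are mine): under "I value valutrons" your value judgments are fixed and only the detector readings can surprise you; under "valutrons cause value" the value judgments themselves track the readings, so introspection can be surprised.

```python
# Toy contrast between the two hypotheses about fictional "valutrons".
# Everything here is invented; it only shows where surprise could show up.

my_prior_values = {"cooking_for_friends": +10, "knifing_friends": -100}

def values_if_i_value_valutrons(readings):
    # Values stay fixed; only the instrument readings can differ from expectation.
    return my_prior_values

def values_if_valutrons_cause_value(readings):
    # Values just are whatever the detector says; introspection can be surprised.
    return dict(readings)

readings = {"cooking_for_friends": 2, "knifing_friends": 500}
print(values_if_i_value_valutrons(readings))      # knifing still valued at -100
print(values_if_valutrons_cause_value(readings))  # "values" now track the readings
```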

Honestly if I ever found my values following valutron outputs in unexpected ways like that, I'd suspect some terrible joker from beyond the matrix was messing with the utility function in my brain and quite possibly with the physics too.

We don't find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between "subjective" and "objective" ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.

Right.

Which very well describes the way the type distinction of "objective" and "subjective" feels intuitively obvious, and logically sound. Alternatives aint conceivable.

That's one of the zombie's weak points, anyway.

It just doesn't seem like much of a zombie. But that makes sense, as it wasn't discovered by trying to pin down an honest sense of fear.

My zombie originally was, and I think I can sum it up as the thought that:

Maybe the same principles that identify wirehead-type states as undesirable under our values would, if completely and consistently applied, place everything and anything possible in the class of wirehead-type states.

(The simple "enjoy broccoli" was an analogy for the entire complicated human CEV.

I threw in a reference to "meaningful human relationships" not because that's my problem any more than the average person's, but because "other people" seems to have something important to do with distinguishing between a happiness we actually want and an undesirable wirehead-type state.)

How do you kill that zombie? My own solution more-or-less worked, but it was rambling and badly articulated and generally less impressive than I might like.

And yeah, the philosophical problem is at base an existential angst factory. The real problem to solve for me is, obviously, getting around the disappointing life setback I mentioned.

But laying this philosophical problem to rest with a nice logical piledriver that's epic enough to friggin incinerate it would be one thing in service of that goal.

The subjective/objective opposition in theories of value is somewhat subverted by the existence of subjective facts. There are qualia of decision-making. These include at a minimum any emotional judgments which play a role in forming a preference or a decision, and there may be others.

Whether or not there is a sense of moral rightness, distinct from emotional, logical, and aesthetic judgments, is a basic question. If the answer is yes, that implies a phenomenological moral realism - there is a separate category of moral qualia. The answer is no in various psychologically reductive theories of morality. Hedonism, as a descriptive (not yet prescriptive) theory of human moral psychology, says that all moral judgments are really pleasure/pain judgments. Nietzsche offered a slightly different reduction of everything, to "will to power", which he regarded as even more fundamental.

How this subjective, phenomenological analysis relates to the computational, algorithmic, decision-theoretic analysis of decision-making, is one of the great unaddressed questions, in all the discussion on this site about morals and preferences and utility functions. Of course, it's an aspect of the general ontological problem of consciousness. And it ought to be relevant to the discussion you're having with Annie... if you can find a way to talk about it.

Whatever your values are, suicide will not help you achieve them. Stay alive, give 5% of your income to charity, and spend the rest on whatever makes you happy. You end up doing more good than 99% of the rest of humanity.

Whatever your values are, suicide will not help you achieve them. Stay alive, give 5% of your income to charity, and spend the rest on whatever makes you happy. You end up doing more good than 99% of the rest of humanity.

Well, the return on term life insurance can be pretty big, too. You can multiply your wealth by a factor of 500 by buying a 10-year term life insurance policy, making premium payments for two years (at which point U.S. law obligates the insurer to pay out in the event of suicide) and then dying.

The blood's on your hands if they actually do this.

But what about all the people we're letting die by not donating to charity?

There's a taboo against encouraging suicide, and it's probably there for the same reason we have other deontological taboos.

I can't even do it semi-ironically? :(

....Ha! Guys, I should have made this clearer. I don't need counseling. I more-or-less fixed my problem for myself. By which I mean I could do with having it expressed for myself a bit more succinctly in a snappy, catchy sentence or two, but essentially, I got it already.

My point in bringing it to this audience was, "Hey, pretty sure generalizing fundamental techniques of human rationality shouldn't cause existential angst. Seems like a problem that comes from an incomplete application of rationality to the issue. I think I figured out how to solve it for myself, but has anyone else ever had this problem, and how did you solve it?"

And we're talking about a situation in which a being discovered that its values were internally inconsistent, and the same logic that identifies wireheading as "not what I actually want" extended to everything. Leaving the being with nothing 'worth' living for, but still capable of feeling pain.

So it wouldn't make any sense for it to care at all how its death affected the state of the universe after it was gone. The point is that there are NO states that it actually values over others, other than ending its own subjective experience of pain.

If it had any reason to value killing itself to save the world over killing itself with a world destroying bomb (so long as both methods were somehow equally quick, easy, and painless to itself), then the whole reason it was killing itself in the first place wouldn't be true.

The questions I mean to raise here are, is it even possible for a being to have a value system that logically eats itself from the inside out like that? And even if it was, I don't think human values would fit into that class. But what's the simplest, clearest way of proving that?

....Ha! Guys, I should have made this clearer. I don't need counseling.

Well, I recognized that... :P

This needn't be ironic. If I'm willing to die to give my beneficiary a comfortable living, this might be a viable strategy.

Yous missin' da point dere.

It seems that if you enjoyed doing good - or at least having done good - the problem wouldn't occur anyway. And then, why suffer the conflict?

Because even if you don't enjoy doing good, you can still value it.

This usually falls in the "enjoying having done good" category. If knowing that you are in a "better" world doesn't bring you more enjoyment, why say that you value it?

Yes, exactly. I'm glad I was at least clear enough for someone to get that point. =]

As I see it, once you accept the idea that we are just a dance of particles (as I do too), then in an important sense 'all bets are off'. A person comes up with something that works for them and goes with it. You don't have any really good reason not to become a serial murderer, and no good reason to save the world if you know how. So most of us (?) pick a set of values in line with human moral intuition and what other people pick and just go back to living. It makes us happiest. I claim you can't be secretly miserable in an existential-angsty sort of way -- there is no deeper reality which supports that. There may be deeper realities we aren't seeing that we should worry about, but they are all within the scope of values we have chosen. But I've certainly had the experience that when I'm feeling bad I get reminded of the dance-of-particles situation and it further bums me out.

I see a decision about killing yourself as (in a way) constructing your future 'contentment curve' and seeing if the area above zero is larger than the area below. Rational people who get a painful terminal illness sometimes see lots of negative and that's where physician-assisted suicide comes in. This is subject to the enormous, hard-to-emphasize-enough cognitive distortion that badly depressed people are terrible at constructing future contentment curves. Then irreversibility comes in as an argument, and the suggestion that a person should let others help them figure it out too.

Yeah, that pretty much sums it up. Especially:

This is subject to the enormous, hard-to-emphasize-enough cognitive distortion that badly depressed people are terrible at constructing future contentment curves.

Although I don't actually think getting reminded of the "dance of particles situation" does "further bum me out". I've understood since I was a kid that values are subjective. It was the thought that my values might be somehow broken by hidden inconsistency that bugged me.

What I was fearing was, if the logic of your values can identify wireheading as "not something I actually want", then what if that same logic actually extends to everything?


I find myself quite often experiencing something similar, but not with broccoli. So far there are two (general) strategies that I find to work best when facing zombies.

If we take your zombie "If nothing really matters [ie, values aren’t objective, or however I put it back then], then it doesn't matter that nothing matters. If I choose to hold something as important, I can't be wrong."

1) Asking: What is it about "values are not objective" that I find so scary? If I could change that fact into something else, what would that look like?

2) Questioning why the zombie is allowed to make a statement with such emotional force, when it seems to imply that nothing deserves emotional force.

I'm not sure how much sense I make, but that's that.


say you had a being that valued spending its life enjoying eating broccoli. Except it turns out that it didn’t really like broccoli.

This is almost a direct self-contradiction, given the utility-function formalism, but I am myself still stumped by my reading of the Metaethics sequence's discourse on Löb's theorem.

This is almost a direct self-contradiction, given the utility-function formalism

Yes, I think you would have to take it less formally here.


I am sorry. It appears I can't. I have internalized "there is only utilons" too strongly. What has been seen cannot be unseen, and all I see is a contradiction in terms.

Sorry, I meant "ignore that part". The 'being' is clearly a human. "Didn't really like" can be seen as a description of the sensory experience of taste, while "valued" can be seen as an actual relevant-to-utility value. If possible, such an agent might self-modify to like the taste of broccoli. If not, then it will just be a very sad panda.
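For what it's worth, here's a toy way to see why the two needn't contradict once they're split into separate terms (the split and all the numbers are my own illustration, not a standard formalism): let the hedonic "liking" signal and the endorsed value enter the utility as different components.

```python
# Toy separation of "liking" (moment-to-moment taste experience)
# from "valuing" (the weight the agent's utility puts on that kind of life).
# The two-term split and all numbers are invented for illustration.

LIKING_BROCCOLI_TASTE = -2     # eating broccoli is unpleasant in the moment
VALUE_OF_BROCCOLI_LIFE = 50    # the agent still endorses living that life

def utility_of_broccoli_life(meals):
    # "Doesn't like it" and "values it" coexist as separate terms in one sum.
    return VALUE_OF_BROCCOLI_LIFE + LIKING_BROCCOLI_TASTE * meals

print(utility_of_broccoli_life(meals=10))  # 30: still positive, but the conflict is real
```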


I fail to see the problem. If broccoli tastes good to me I value it. If it is healthy I value it. If it is right to eat broccoli I value it. And so with the converse of all those statements.

all I see is a contradiction in terms.

I fail to see the problem.

Success!


Don't cleave my words. I do not value it.

What I am pointing out is that to say "I do not like doing X but I value X" is a contradiction in terms when viewed with Utilon glasses.

I fail to see your problem's relevance because of the above.

I work towards things that I apparently value because worlds where I value things are the only ones that affect my utility function. It's like Pascal's Wager except it doesn't cost anything to play.
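A minimal sketch of that wager (the world labels and payoffs are mine, purely for illustration): in worlds where nothing you do matters, every action scores the same, so only the worlds where your values are real can break the tie, and acting on them costs nothing in the others.

```python
# Toy "costless Pascal's Wager" over whether my values actually matter.
# Payoffs are invented; the point is only the dominance structure.

def expected_utility(action, p_values_matter):
    payoff_if_matter = {"act_on_values": 10, "do_nothing": 0}[action]
    payoff_if_not = 0  # if nothing matters, every action is worth the same
    return p_values_matter * payoff_if_matter + (1 - p_values_matter) * payoff_if_not

for p in (0.0, 0.1, 0.9):
    print(p, expected_utility("act_on_values", p), expected_utility("do_nothing", p))
# Acting on my values never scores lower, and scores higher whenever p > 0.
```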