Vladimir_Nesov comments on Strategies for dealing with emotional nihilism - Less Wrong

28 [deleted] 10 October 2010 01:31PM


Comment author: Vladimir_Nesov 10 October 2010 02:44:39PM *  4 points [-]

"I just don't care" is a curiosity-stopper. The actions of a "nihilistic" person of the kind you describe are still very specific: they don't convulse uncontrollably, choosing to send random signals down their nerves. Thus, "all-zero utility function" is an incorrect model of the situation, making further analysis flawed.

Comment author: Eliezer_Yudkowsky 10 October 2010 06:26:23PM 9 points [-]

Agreed that an all-zero utility function is more or less just wrong.

People like this can still remember what happiness is and wish that they were happy; they can dislike feeling nihilistic.

They can still experience all sorts of things as unpleasant, such as making an effort.

A state of mind in which happiness is very difficult to obtain and drive/motivation is at an extremely low ebb is not a zero utility function.

Nonetheless I find it very easy to understand why "zero utility function" would be used in this case as a poetic metaphor.

Comment author: [deleted] 10 October 2010 03:26:40PM *  9 points [-]

Good point, and something to think about. Obviously someone who assigned truly equal value to every possible action would behave completely at random, which nobody does.

A better guess: what happens when you feel nihilistic is anhedonia. You don't get as much value or satisfaction out of the "peaks" -- experiences that once were very desirable are now less so. This results in expending much less effort to attain the most desirable things. Your ability to desire intensely is messed up.

I think you could model that by flattening out the peaks. It leaves most processes intact (you still speak in language, you still put on clothes, etc.) but it diminishes motivation, anticipation, and happiness. You can do a little goal-directed activity (rock-bottom rituals, choosing to eat or sleep) but much less than normal.
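A toy sketch of that "flattening the peaks" idea (the activities, scores, ceiling, and squashing rule are all invented for illustration):

```python
# Toy model: anhedonia as compression of the high end of a utility scale.
# The activities, baseline scores, ceiling, and squashing rule are invented.

def flatten_peaks(utility, ceiling=2.0):
    """Compress utilities above the ceiling; leave ordinary values intact."""
    if utility <= ceiling:
        return utility
    return ceiling + 0.1 * (utility - ceiling)

baseline = {"lie in bed": 1.0, "eat": 2.0, "see friends": 6.0, "big project": 10.0}
depressed = {act: flatten_peaks(u) for act, u in baseline.items()}

# Ordinary low-level activities keep their value (speech, clothes, eating),
# but the formerly desirable things lose most of theirs, so the *gap* that
# would justify effortful, goal-directed activity shrinks.
gap_before = baseline["big project"] - baseline["lie in bed"]   # large
gap_after = depressed["big project"] - depressed["lie in bed"]  # small
```

The point of the sketch is just that flattening preserves the ordering of options while shrinking the differences that motivate effort.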

Comment author: Vladimir_Nesov 10 October 2010 03:32:35PM 7 points [-]

Yes, reduced intensity and resulting disturbed balance of psychological drives is a much better description.

Comment author: NancyLebovitz 10 October 2010 02:52:03PM 4 points [-]

Her formalism may be wrong -- it probably is, since it's possible to have ordinary nihilism that permits minimal self-maintenance. For that matter, those hitting-bottom rituals are still goal-directed behavior.

Still, pervasive akrasia or high-lethargy depression or whatever you want to call it does happen, and I think the post is a good effort at addressing it.

Comment author: Vladimir_Nesov 10 October 2010 03:08:17PM 3 points [-]

It should strive to be much better; at the least, this utility-function mysticism could be avoided.

Comment author: Perplexed 12 October 2010 12:19:07AM 3 points [-]

I'm not sure exactly where in the conversation is the best place for me to inject this comment, but this may be as good a place as any.

I think that it is important to realize that only rational agents can be behaviorally modeled using a utility function. Non-rational agents, including agents beset with "depression" or "nihilism", don't necessarily even have well-defined utility functions, and if they do have them, their behavior is not controlled by expected utility in the way a rational agent's behavior is.

The success that the simple hypothesis of hyperbolic discounting has had in explaining akrasia has perhaps misled us into thinking that all departures from rationality can be modeled by simple tweaks to the standard machinery for modeling rational agents. It ain't necessarily so.
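To make the hyperbolic-discounting story concrete, here is a minimal sketch of the classic preference reversal (the reward sizes, delays, and discount rate are arbitrary choices of mine):

```python
# Sketch: hyperbolic discounting produces preference reversal (akrasia).
# Rewards, delays, and the discount rate k are arbitrary illustrative numbers.

def hyperbolic(value, delay, k=1.0):
    """Hyperbolically discounted value of a reward at the given delay."""
    return value / (1.0 + k * delay)

# A small reward available now vs. a larger reward five steps later.
# Up close, the agent grabs the smaller, sooner reward:
near_small = hyperbolic(10.0, 0.0)    # 10.0
near_large = hyperbolic(30.0, 5.0)    # 5.0

# Viewed from ten steps away, the same comparison flips and the
# larger, later reward wins -- hence plans made in advance get
# abandoned when the temptation draws near.
far_small = hyperbolic(10.0, 10.0)    # ~0.91
far_large = hyperbolic(30.0, 15.0)    # ~1.88
```

Exponential discounting, by contrast, never reverses a ranking as time passes, which is why the hyperbolic form is the standard simple model of akrasia.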

Comment author: timtyler 13 October 2010 11:08:44AM *  1 point [-]

If you drop enough of the axioms (e.g. the axiom of independence) from the expected utility formalisation you can represent the behaviour of any creature you care to imagine with a utility function.

Eventually, such a function just becomes a map between sensory inputs (including memories) and motor outputs.
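A trivial construction shows why this is so: given any fixed policy from inputs to outputs, one can always define a "utility function" that the policy maximises (this sketch is my own illustration of the point, not a standard formalism):

```python
# Any fixed policy (a map from sensory input to motor output) can be dressed
# up as utility maximisation: score the policy's own choice 1, all else 0.
# This is an illustrative construction, not a substantive model.

def rationalize(policy):
    """Return a 'utility function' over (observation, action) pairs
    that the given policy trivially maximises."""
    def utility(observation, action):
        return 1.0 if action == policy(observation) else 0.0
    return utility

# Example: a creature that always flees, whatever it observes.
always_flee = lambda obs: "flee"
u = rationalize(always_flee)

# Picking the highest-utility action merely reproduces the original
# behaviour, so this "utility function" adds no predictive content.
actions = ["flee", "fight", "freeze"]
chosen = max(actions, key=lambda a: u("predator", a))
```

Which is arguably the substance of the objection that follows: a function that can fit any behaviour whatsoever no longer explains anything.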

Comment author: RichardKennaway 13 October 2010 12:10:22PM 1 point [-]

If you drop enough of the axioms (e.g. the axiom of independence) from the expected utility formalisation you can represent the behaviour of any creature you care to imagine with a utility function.

At some point, you can't call it a utility function any more.

Eventually, such a function just becomes a map between sensory inputs (including memories) and motor outputs.

Such a hypothetical function is as useless as the supposed function, in a deterministic universe, for calculating all future states of the universe from an exact knowledge of its present.

Comment author: timtyler 13 October 2010 01:57:26PM *  1 point [-]

Richard, I think your first point is probably based on a misconception about the idea. It would still be a utility function - in that it would assign real-valued utilities to possible actions (before selecting the action with the highest utility). Being that which is maximised during action selection is what the term "utility" means.

Sure, if you go beyond that, then the word "utility" might eventually become inappropriate, but that is not what is being proposed.

I can't make much sense of the second point. Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs. They are not useless if you do things like drop the axiom of independence. Indeed, the axiom of independence is the most frequently-dropped axiom.

It is generally useful to have an abstract utility-based model that can model the behaviour of any computable creature by plugging in a utility function.

Comment author: RichardKennaway 13 October 2010 03:26:17PM 0 points [-]

Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs.

Hang on, a moment ago they were functions from outputs to values. Now they're functions from inputs to values. Which are they?

Comment author: magfrump 13 October 2010 05:35:14PM 0 points [-]

Gonna take a wild stab:

A "Utility Function" is a function from the space of (sensory inputs including memories) to the space of (functions from outputs to values).

For any given set of (sensory inputs including memories) we can call that set's image under our "Utility Function" a "utility function" and then sometimes mess up the capitalization.
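In code, the distinction I'm drawing is just currying; something like this sketch, where every name and score is invented for illustration:

```python
# magfrump's distinction, expressed as currying: a capital-U "Utility Function"
# maps sensory inputs to a lowercase "utility function" over actions.
# The example inputs, actions, and scores are all invented.

def UtilityFunction(sensory_input):
    """(sensory inputs including memories) -> (function from actions to values)."""
    def utility(action):
        # A toy scoring rule keyed off the input.
        if sensory_input == "hungry":
            return {"eat": 3.0, "sleep": 1.0}.get(action, 0.0)
        return {"eat": 1.0, "sleep": 2.0}.get(action, 0.0)
    return utility

u = UtilityFunction("hungry")  # a lowercase utility function for this input
best = max(["eat", "sleep"], key=u)
```

So "inputs to values" and "actions to values" are both loosely right, depending on whether you mean the outer function or one of its outputs.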

Is that more clear, and/or is that what was being said?

Comment author: timtyler 13 October 2010 03:50:23PM 0 points [-]

Utility functions are maps between sensory inputs (including memories) and scalar values associated with possible motor outputs.

Comment author: RichardKennaway 13 October 2010 08:47:43PM 0 points [-]

Yes, that's what I already quoted. But earlier in the same comment you said this:

It would still be a utility function - in that it would assign real-valued utilities to possible actions (before selecting the action with highest utility).

There you are saying that it maps actions to utilities. Hence my question.

I have something to say in response, but I can't until I know what you actually mean, and the version that you have just reasserted makes no sense to me.

Comment author: multifoliaterose 10 October 2010 03:22:56PM *  3 points [-]

I agree with you and Nancy Lebovitz that it's not literally the case that emotional nihilism corresponds to the trivial utility function - I think that SarahC did not intend to make this claim and was instead describing her subjective impressions of how emotional nihilism feels relative to a more common equilibrium emotional state.

Comment author: [deleted] 10 October 2010 03:17:51PM *  2 points [-]

I don't think the point of the post is about reaching complete nihilism. It's reaching a point where you more-or-less think "what difference could it make" and then stop at "I don't care". It's not exactly all utility being zero (because, like I said in my other comment, that would mean doing nothing, and there's no way out then), but it's damn near close and is a problem for just about anyone in the "nearby-nihilism" state.

Comment author: Vladimir_Nesov 10 October 2010 03:24:56PM 1 point [-]

Being indifferent doesn't mean doing nothing. How would you privilege "doing nothing" over other courses of action, if you are indifferent to everything?

Comment author: [deleted] 10 October 2010 03:29:17PM 0 points [-]

It's less consciously privileging "doing nothing" over anything else, and more looking at everything you'd usually do, not caring about any of those options, thinking up some alternatives, still not caring, and subsequently just doing nothing, possibly because it's easiest.

Comment author: Vladimir_Nesov 10 October 2010 03:33:51PM 1 point [-]

So one does still care about things being easy.

Comment author: [deleted] 10 October 2010 03:37:54PM *  0 points [-]

Possibly, so I guess it's not complete nihilism. Or it's just null-set nihilism: if nothing seems worth doing, do nothing.

Note that, in my original scenario, we considered alternative choices of action. I get the feeling a purely nihilistic engine wouldn't even do that, so I'm already arguing from the wrong point.