chaosmosis comments on Rationality Quotes May 2012 - Less Wrong
"If a person's life really isn't worth living [objectively]," then the person should stop caring about flawed concepts like objective value. "If a person's life really isn't worth living [subjectively]," then they should work on changing their subjective values, or on changing their life so that it is subjectively worth living. If neither of the above is possible, then they should kill themselves.
It's important that we recognize where the worth "comes from" as a potential solution to the problem.
This insight brought to you by my understanding of Friedrich Nietzsche. (Read his stuff!)
It's hard to say what it would even mean for moral value to be truly objective, but suppose that a person's being alive causes many people to suffer terribly. Should they stop caring about this in order to keep wanting to live?
If a person is living in inescapably miserable circumstances, changing their value system so they're not miserable anymore is easier said than done. And if it were easy, do you think it would be better to simply always change our values so that they're already met, rather than changing the world to satisfy our values?
Better to self-modify to suffer less due to not achieving your goals (yet), while keeping the same goals.
Easier said than done, unfortunately.
This doesn't make sense.
How do you retain something as a goal while removing the value that you place on it?
Don't remove the value. Remove just the experience of feeling bad due to not yet achieving the value.
If I have a value/goal of being rich, this doesn't have to mean I will feel miserable until I'm rich.
What you're implicitly doing here is divorcing goals from values (feelings are a value). Either that or you're thinking that there's something especially wrong related to negative incentives that doesn't apply to positive ones.
If you don't feel miserable when you're poor or, similarly, if you won't feel happier when you're rich, then why would you value being rich at all? If your emotions don't change in response to having or not having a certain something then that something doesn't count as a goal. You would be wanting something without caring about it, which is silly. You're saying we should remove the reasons we care about X while still pursuing X, which makes no sense.
There's something terribly wrong about the way negative incentives are implemented in humans. I think the experience of pain (and the fear or anticipation of it) is a terrible thing and I wish I could self-modify so I would feel pain as damage/danger signals, but without the affect of pain. (There are people wired like this, but I can't find the name for the condition right now.)
Similarly, I would like to get rid of the negative affect of (almost?) everything else in life: fear, grief, etc. They're the way evolution implemented negative reinforcement learning in us, but they're not the only possible way, and they're no longer needed for survival, if only we had the tools to replace them with something else.
Being rich is (as an example) an instrumental goal, not a terminal one. I want it because I will use the money to buy things and experiences that will make me feel good, much more than having the money (and not using it) would.
"pain asymbolia"
Treating it as an instrumental goal doesn't solve the problem, it just moves it back a step. Even if you wouldn't feel miserable about being poor because you magically eliminated negative incentives, you would still feel fewer of the positive incentives when you are poor than when you are rich, even though richness is just a means to feeling better. All of what I said above still applies.
(Except insofar as it might be altered by relevant differences between positive and negative incentives.)
To clarify, what I'm contending is that this would only make sense as a motivational system if you placed positive value on achieving certain goals which you hadn't yet achieved. I think you agree with this part, but I'm not sure. And I don't think we can justify treating positive incentives differently than negative ones.
I don't view the distinction between an absence of a positive incentive and the presence of a negative incentive the same way you do. I'm not even sure that I have any positive incentives which aren't derived from negative incentives.
Negative and positive feelings are differently wired in the brain. Fewer positive feelings is not the same as more negative ones. Getting rid of negative feelings is very worthwhile even without increasing positive ones.
But the same logic justifies both, even if they are drastically different in other sort of ways.
Forcing yourself to feel maximum happiness would make sense if forcing yourself to feel minimum unhappiness made sense. They both interact with utilitarianism and preference systems which are the only relevant parts of the logic. The degree or direction of the experience doesn't matter here.
If removing negative incentives is justified, then maxing out positive incentives is justified too, and that amounts to nihilism.
I mean, you could arbitrarily apply it only to certain incentives, which is desirable because that precludes the nihilism. But that feels too ad hoc, and it still would mean that you can't remove the reasons you care about something while continuing to think of it as a goal, which is part of what I was trying to get at.
So, given that I don't like nihilism or preference paralysis but I do support changing values sometimes, I guess that my overall advocacy is that values should only be modified to max out happiness / minimize unhappiness if happiness / no unhappiness is unachievable (or perhaps also if modifying those specific values helps you to achieve more value total through other routes). Maybe that's the path to an agreement between us.
If you have an insatiable positive preference, satiate it by modifying yourself to be content with what you have. If you can never be rid of a certain negative incentive, try to change your preferences so that you like it. Unfortunately, this does entail losing your initial goals. But it's not a very big loss to lose unachievable goals while still achieving the reasons the goals matter, so fulfilling your values by modifying them definitely makes sense.
I think DanArmak means modifying the negative affect we feel from not achieving the goals while keeping the desire and motivation to achieve them.
EDIT: oops, ninja'd by DanArmak. Never mind.
If you cannot change the world to satisfy your values then your values should change, is what I advocate. To answer your tradeoff example: Choose whichever one you value more, then make the other unachievable negative value go away.
And I don't know how to solve the problem I mention in my other comment below.
There's an interesting issue here.
The agent might have a constitution such that they don't place subjective value on changing their subjective values to something that would be more fulfillable. The current-agent would prefer that they not change their values. The hypothetical-agent would prefer that they have already changed their values. I was just reading the posts on Timeless Decision Theory and it seems like this is a problem that TDT would have a tough time grappling with.
I'm also feeling that it's plausible that someone is systematically neg karmaing me again.