byrnema comments on The Wannabe Rational - Less Wrong

Post author: MrHen 15 January 2010 08:09PM




Comment author: byrnema 17 January 2010 05:54:08PM *  0 points [-]

I'm not sure to what extent terminal values can be chosen or not, but it seems to me that (and the following is slightly different from what you were describing) if you became absolutely convinced that your values aren't important, it would be difficult to continue thinking your values are important. Maybe the fact that I can't be convinced of the unimportance of my values explains why I can't really be convinced there's no Framework of Objective Value, since my brain keeps outputting that this would make my values unimportant. But maybe, by the end of this thread, my brain will stop outputting that. I'm willing to do the necessary mental work.

By the way, Furcas seemed to understand the negation of value I'm experiencing via an analogy of solipsism.

Comment author: orthonormal 17 January 2010 06:17:42PM *  7 points [-]

One last time, importance ≠ universality.

If we had been Babyeaters, we would think that eating babies is the right-B thing to do. This doesn't in any way imply we should be enthusiastic or even blasé about baby-eating, because we value the right thing, not the right-B thing that expresses the Babyeaters' morality!

I understand that you can't imagine a value being important without it being completely objective and universal. But you can start by admitting that the concept of important-to-you value is at least distinct from the concept of an objective or universal value!

Imagine first that there is an objective value that you just don't care about. Easy, right? Next, imagine that there is something you care about, deeply, that just isn't an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.

Comment author: Kutta 17 January 2010 08:58:10PM *  3 points [-]

Imagine first that there is an objective value that you just don't care about. Easy, right? Next, imagine that there is something you care about, deeply, that just isn't an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.

This is the best (very) short guide to naturalistic metaethics I've read so far.

Comment author: byrnema 17 January 2010 07:17:05PM *  0 points [-]

This is very helpful. The only thing I would clarify is that the lesson I need to learn is that importance ≠ objectivity. (I'm not at all concerned about universality.)

I understand that you can't imagine a value being important without it being completely objective [...]. But you can start by admitting that the concept of important-to-you value is at least distinct from the concept of an objective or universal value!

I'm not sure. With a squirrel in the universe, I would have thought the universe was better with more nuts than with fewer. I can understand there being no objective value, but I can't understand objective value being causally or meaningfully distinct from subjective value.

Next, imagine that there is something you care about, deeply, that just isn't an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.

Hm. I have no problem with 'permission'. I just find that I don't care about caring about it. If it's not actually horrible, then let the universe fill up with it! My impression is that intellectually (not viscerally, of course) I fail to weight my subjective view of things. If some mathematical proof really convinced me that something I thought subjectively horrible was objectively good, I think I would start liking it.

(The only issue, that I mentioned before, is that a sense of moral responsibility would prevent me from being convinced by a mathematical proof to suddenly acquire beliefs that would cause me to do something I've already learned is immoral. I would have to consider the probability that I'm insane or hallucinating the proof, etc.)