All of Annie0305's Comments + Replies

...I don't think "don't engage the problem at all" is really a viable option. Once you've taken the red pill, you can't really go back up the rabbit hole, right?

You don't have to fall down it and smash out your brains at the bottom either.

I... don't think that metaphor actually connects to any choices you can make.

Don't get me wrong, I'm not against ignoring things. There are so many things you could pay attention to and the vast majority simply aren't worth it.

But when you find yourself afraid, or uneasy, or upset, I don't thin...

Honestly if I ever found my values following valutron outputs in unexpected ways like that, I'd suspect some terrible joker from beyond the matrix was messing with the utility function in my brain and quite possibly with the physics too.

We don't find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between "subjective" and "objective" ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.

...I guess I would get valutron-dynamics worked out, and engineer systems to yield maximum valutron output?

Except that I'd only do that if I believed it would be a handy shortcut to a higher output from my internal utility function that I already hold as subjectively "correct".

i.e., if valutron physics somehow gave a low yield for making delicious food for friends, and a high yield for knifing everyone, I would still support good hosting and oppose stabbytimes.

So the answer is, apparently, even if the scenarios you conjure came to pass, I would st...

asparisi
I also agree that it would make a great absurdist sci-fi story. Reminds me of something Vonnegut would have written.
asparisi
Well, the trick would be that it couldn't be counter to experience: you would never find yourself actually valuing, say, cooking-for-friends as more valuable than knifing-your-friends if knifing-your-friends carried more valutrons. You might expect more from cooking-for-friends and be surprised at the valutron-output for knifing-your-friends.

In fact, that'd be one way to tell the difference between "valutrons cause value" and "I value valutrons": in the latter scenario you might be surprised by valutron output, but not by your subjective values. In the former, you would actually be surprised to find that you valued certain things, which correlated with a high valutron output.

But that evidence is pretty much already there. We don't find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between "subjective" and "objective" ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source. That's one of the zombie's weak points, anyway.

would you treat your oughts any different if they DID turn out to be based in some metaphysically basic truth somehow?

I... can't even answer that, because I can't conceive of a way in which that COULD be true. What would it even MEAN?

Still seems like a harmless corpse to me. I mean, not to knock your frankenskillz, but it seems like sewing butterfly wings onto a dead earthworm and putting it on top of a 9 volt battery. XD

asparisi
I can conjure a few scenarios. Imagine that you expected to find Valutrons: subatomic particles that impose "value" onto things. The things you value are such because they have more Valutrons, and the things you don't value do not. Or imagine that Omega comes up to you and tells you that there is a "true value" associated with ordinary objects. If you discovered that your values were based in something that was non-subjective, would you treat those oughts any differently?

Heh, yeah, it's kind of an odd case in which the fact that you want to write a particular bottom line before you begin is quite possibly an argument for that bottom line?

Quite honestly that zombie doesn't even seem to be animated to me. My ability to discriminate 'ises' and 'oughts' as two distinct types feels pretty damn natural and instinctive to me.

What bothered me was the question of whether my oughts were internally inconsistent.

asparisi
Ah. Perhaps I talked around the issue of that zombie, rather than at it directly: the specific issue I was getting at is that even if your moral "ought" isn't based in some state of the world (an "is"), you will treat it like it is: you will act like your "oughts" are basic, even when they aren't. You will treat your oughts as if they matter outside of your own head, because as a human brain you are good at fooling yourself.

To put it another way: would you treat your oughts any different if they DID turn out to be based in some metaphysically basic truth somehow? If the answer is no, then you are treating your 'subjective' values the same as you would 'objective' ones. So applying the 'subjective' label doesn't pull any weight: your values don't really matter, and thus depression and angst are simply the natural path to take once you know how the world works.

(Note: I am not actually arguing something I believe here: I am just letting the zombie get in a few good swings. I don't actually think it is true and already have a couple of tracks against it. But I would be a poor rhetorical necromancer if I let my argument-zombies fall apart too easily.)

...I don't think "don't engage the problem at all" is really a viable option. Once you've taken the red pill, you can't really go back up the rabbit hole, right?

My original problem immediately made me think, "Okay, this conclusion is totally bumming me out, but I'm pretty sure it's coming from an incomplete application of logic". So I went with that and more-or-less solved it. I could do with having my solution more succinctly expressed, in a snappy, catchy sentence or two, but it seems to work. What I'm asking here is, has anybody else...

Richard_Kennaway
You don't have to fall down it and smash out your brains at the bottom either.

This is a basilisk that appears in many forms.

For children, it's "How do you know there isn't a monster under the bed?"

For horror readers, it's "How do you know that everyday life is anything more than a terrifyingly fragile veneer over unspeakable horrors that would instantly drive us mad if we so much as suspected their existence?"

For theists, "How do you know God in His omnibenevolence passing human understanding doesn't torture every sentient being after death for all eternity?"

For AGI researchers, "How do you know that a truly Friendly AI wouldn't in Its omnibenevolence passing human understanding reanimate every sentient being and torture them for all eternity?"

For utilitarians, "How can we calculate utility, when we are responsible for the entire future lightcone not only of ourselves but of every being sufficiently like us anywhere in the universe?"

For philosophers, "How can we ever know anything?" "Does anything really exist?"

It's all down to how to deal with not having an answer. The fact is, there is no ultimate foundation for anything: you will always have questions that you currently have no answer to, because it is easier to question an answer than to answer a question (as the parents of any small child know). Terror about what the unknown answers might be doesn't help. I prefer to just look directly at the problem instead of haring off after solutions I know I can't find, and then Ignore it.

Yes, exactly. I'm glad I was at least clear enough for someone to get that point. =]

....Ha! Guys, I should have made this clearer. I don't need counseling. I more-or-less fixed my problem for myself. By which I mean I could do with having it expressed for myself a bit more succinctly in a snappy, catchy sentence or two, but essentially, I got it already.

My point in bringing it to this audience was, "Hey, pretty sure generalizing fundamental techniques of human rationality shouldn't cause existential angst. Seems like a problem that comes from an incomplete application of rationality to the issue. I think I figured out how to solve i...

CronoDAS
Well, I recognized that... :P

Ha, I ain't exactly about to off myself any time soon! :P

I said this was a problem I more-or-less fixed for myself.

The bits of it that could be handled off of lesswrong, I did.

I'm not looking for counseling here. I'm looking to see how other people try to solve the philosophical problem.

Nisan
That's good to hear. I myself have never experienced this particular problem.

Yeah, that pretty much sums it up. Especially:

This is subject to the enormous, hard-to-emphasize-enough cognitive distortion that badly depressed people are terrible at constructing future contentment curves.

Although I don't actually think getting reminded of the "dance of particles situation" does "further bum me out". I've understood since I was a kid that values are subjective. It was the thought that my values might be somehow broken by hidden inconsistency that bugged me.

What I was fearing was, if the logic of your values can i...

Annie0305

Oh, and Paul Graham again from the same piece:

When people are bad at math, they know it, because they get the wrong answers on tests. But when people are bad at open-mindedness they don't know it.

Annie0305

"Almost certainly, there is something wrong with you if you don't think things you don't dare say out loud."

~Paul Graham