...I don't think "don't engage the problem at all" is really a viable option. Once you've taken the red pill, you can't really go back up the rabbit hole, right?

You don't have to fall down it and smash out your brains at the bottom either.

I... don't think that metaphor connects to any choices you can actually make.

Don't get me wrong, I'm not against ignoring things. There are so many things you could pay attention to and the vast majority simply aren't worth it.

But when you find yourself afraid, or uneasy, or upset, I don't think you should ignore it. I don't think you really can.

There's got to be some thought or belief that's disturbing you (usually, caveats, blah blah), and you've got to track it down and nail it to the ground. Maybe it's a real external problem you've got to solve, maybe it's a flawed belief you've got to convincingly disprove to yourself, or at least honestly convince yourself that the problem belongs to the huge class of things that aren't worth worrying about.

But if that's the correct solution to a problem, just convincing yourself it ain't worth worrying about, you've still got to arrive at that conclusion as an actual belief, by actually thinking about it. You can't just decide to believe it, 'cause that'd just be belief in belief, and that doesn't really seem to work, emotionally, even for people who haven't generalized the concept of belief in belief.

Anyway, the more I've had to practice articulating what was bothering me (the worry that our/my values logically auto-cannibalize themselves), the more I've come to actually believe that it's not worth worrying about. (It no longer feels particularly likely to really mean much, and even if it did, why would it apply to me?)

So when you said:

I prefer to just [A] look directly at the problem instead of [B] haring off after solutions I know I can't find, and then [C] Ignore it.

Yeah, that's pretty much exactly what I was doing when I wrote this post. Just that in order to effectively reach [C: ignore], you've got to properly do [A: look at the problem] first, and [B: try looking for solutions] is part of that.

Honestly, if I ever found my values following valutron outputs in unexpected ways like that, I'd suspect some terrible joker from beyond the matrix was messing with the utility function in my brain, and quite possibly with the physics too.

We don't find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between "subjective" and "objective" ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.

Right.

Which describes very well the way the type distinction between "objective" and "subjective" feels intuitively obvious and logically sound. Alternatives ain't conceivable.

That's one of the zombie's weak points, anyway.

It just doesn't seem like much of a zombie. But that makes sense, as it wasn't discovered by someone trying to pin down an honest sense of fear.

My zombie originally was, and I think I can sum it up as the thought that:

Maybe the same principles that identify wirehead-type states as undesirable under our values would, if completely and consistently applied, identify anything and everything possible as belonging to the class of wirehead-type states.

(The simple "enjoy broccoli" was an analogy for the entire complicated human CEV.

I threw in a reference to "meaningful human relationships" not because that's my problem any more than the average person's, but because "other people" seems to have something important to do with distinguishing between a happiness we actually want and an undesirable wirehead-type state.)

How do you kill that zombie? My own solution more-or-less worked, but it was rambling and badly articulated and generally less impressive than I might like.

And yeah, the philosophical problem is at base an existential angst factory. The real problem to solve for me is, obviously, getting around the disappointing life setback I mentioned.

But laying this philosophical problem to rest with a nice logical piledriver that's epic enough to friggin incinerate it would be one thing in service of that goal.

...I guess I would get valutron-dynamics worked out, and engineer systems to yield maximum valutron output?

Except that I'd only do that if I believed it would be a handy shortcut to a higher output from my internal utility function that I already hold as subjectively "correct".

I.e., if valutron physics somehow gave a low yield for making delicious food for friends, and a high yield for knifing everyone, I would still support good hosting and oppose stabbytimes.

So the answer is, apparently, even if the scenarios you conjure came to pass, I would still treat oughts and is-es as distinctly different types from each other, in the same way I do now.

But I still can't equate those scenarios with giving any meaning to "values having some metaphysically basic truth".

Although the valutron-engineering idea does seem like good material for a fun absurdist sci-fi short story =]

would you treat your oughts any different if they DID turn out to be based in some metaphysically basic truth somehow?

I... can't even answer that, because I can't conceive of a way in which that COULD be true. What would it even MEAN?

Still seems like a harmless corpse to me. I mean, not to knock your frankenskillz, but it seems like sewing butterfly wings onto a dead earthworm and putting it on top of a 9 volt battery. XD

Heh, yeah, it's kind of an odd case in which the fact that you want to write a particular bottom line before you begin is quite possibly an argument for that bottom line?

Quite honestly, that zombie doesn't even seem to be animated to me. My ability to discriminate 'ises' and 'oughts' as two distinct types feels pretty damn natural and instinctive.

What bothered me was the question of whether my oughts were internally inconsistent.

...I don't think "don't engage the problem at all" is really a viable option. Once you've taken the red pill, you can't really go back up the rabbit hole, right?

My original problem immediately made me think, "Okay, this conclusion is totally bumming me out, but I'm pretty sure it's coming from an incomplete application of logic". So I went with that and more-or-less solved it. I could do with having my solution more succinctly expressed, in a snappy, catchy sentence or two, but it seems to work. What I'm asking here is, has anybody else had to solve this problem, and how did they do it?

what difference in sensory experience do you anticipate if we are completely irrelevant on a grand scale? None?

...What? We already know that we're completely "irrelevant" on any scale, in the sense that there is no universal utility function hardwired into the laws of physics. Discriminating between oughts and is-es is pretty basic.

The question is not whether our human utility functions are universally "true". We already know they aren't, because they don't have an external truth value.

The question is, are our values internally consistent? How do you prove that they don't eat themselves from the inside out, or prove that such a problem doesn't even make sense?

Yes, exactly. I'm glad I was at least clear enough for someone to get that point. =]

...Ha! Guys, I should have made this clearer. I don't need counseling. I more-or-less fixed my problem for myself. By which I mean I could do with having it expressed for myself a bit more succinctly in a snappy, catchy sentence or two, but essentially, I got it already.

My point in bringing it to this audience was, "Hey, pretty sure generalizing fundamental techniques of human rationality shouldn't cause existential angst. Seems like a problem that comes from an incomplete application of rationality to the issue. I think I figured out how to solve it for myself, but has anyone else ever had this problem, and how did you solve it?"

And we're talking about a situation in which a being discovered that its values were internally inconsistent, and the same logic that identifies wireheading as "not what I actually want" extended to everything, leaving the being with nothing 'worth' living for but still capable of feeling pain.

So it wouldn't make any sense for it to care at all how its death affected the state of the universe after it was gone. The point is that there are NO states that it actually values over others, other than ending its own subjective experience of pain.

If it had any reason to value killing itself to save the world over killing itself with a world destroying bomb (so long as both methods were somehow equally quick, easy, and painless to itself), then the whole reason it was killing itself in the first place wouldn't be true.

The questions I mean to raise here are: is it even possible for a being to have a value system that logically eats itself from the inside out like that? And even if it were, I don't think human values would fit into that class. But what's the simplest, clearest way of proving that?

Ha, I ain't exactly about to off myself any time soon! :P

I said this was a problem I more-or-less fixed for myself.

The bits of it that could be handled off of lesswrong, I did.

I'm not looking for counseling here. I'm looking to see how other people try to solve the philosophical problem.
