# TheOtherDave comments on Not for the Sake of Happiness (Alone) - Less Wrong

35 22 November 2007 03:19AM



Comment author: 06 May 2012 03:30:34AM 0 points [-]

I'm not sure how "hedons" interact with "utilons".
I'm not saying anything at all about how they interact.
I'm merely saying that they aren't the same thing.

Comment author: 06 May 2012 03:45:15AM 0 points [-]

Oh! I didn't catch that at all. I apologize.

You've made an excellent case for them not being the same. I agree.

Comment author: 06 May 2012 03:53:21AM 0 points [-]

Cool. I thought it was confusing you earlier, but perhaps I misunderstood.

Comment author: 06 May 2012 04:00:58AM 0 points [-]

It was confusing me, yes. I considered hedons exactly equivalent to utilons.

Then you made your excellent case, and now it no longer confuses me. I revised my definition of happiness from "reality matching the utility function" to "my perception of reality matching the utility function" - which it should have been from the beginning, in retrospect.

I'd still like to know if people see happiness as something other than my new definition, but you have helped me from confusion to non-confusion, at least regarding the presence of a distinction, if not the exact nature thereof.

Comment author: 06 May 2012 05:55:31AM *  1 point [-]

(nods) Cool.

As for your proposed definition of happiness... hm.

I have to admit, I'm never exactly sure what people are talking about when they talk about their utility functions. Certainly, if I have a utility function, I don't know what it is. But I understand it to mean, roughly, that when comparing hypothetical states of the world Wa and Wb, I perform some computation F(W) on each state such that if F(Wa) > F(Wb), then I consider Wa more valuable than Wb.

Is that close enough to what you mean here?

And you are asserting, definitionally, that if that's true I should also expect that, if I'm fully aware of all the details of Wa and Wb, I will be happier in Wa.

Another way of saying this is that if O(W) is the reality that I would perceive in a world W, then my happiness in Wa is F(O(Wa)). It simply cannot be the case, on this view, that I consider a proposed state-change in the world to be an improvement without its also being the case that I would be made happier by becoming aware of that state-change actually occurring.

Am I understanding you correctly so far?

Further, if I sincerely assert about some state change that I believe it makes the world better, but it makes me less happy, it follows that I'm simply mistaken about my own internal state... either I don't actually believe it makes the world better, or it doesn't actually make me less happy, or both.

Did I get that right? Or are you making the stronger claim that I cannot in point of fact ever sincerely assert something like that?
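The view being probed above can be put in toy code. Everything below (the feature names, the numbers, the particular F and O) is an illustrative assumption of mine, not a model anyone in the thread endorses:

```python
# F is a utility function over world-states, O maps a world-state to the
# part of it the agent perceives, and the proposed identity says happiness
# in W is just F applied to O(W).

def F(world):
    """Toy utility: total value of all features of a world-state."""
    return sum(world.values())

def O(world):
    """Toy observation: the agent only perceives 'visible' features."""
    return {k: v for k, v in world.items() if not k.startswith("hidden_")}

def happiness(world):
    """The proposed identity: happiness in W is F(O(W))."""
    return F(O(world))

Wa = {"friends": 5, "hidden_suffering": -10}
Wb = {"friends": 3, "hidden_suffering": 0}

# The agent judges Wb the better world (F(Wb) > F(Wa)) yet is happier
# in Wa, because the suffering lies outside what it perceives.
assert F(Wb) > F(Wa)
assert happiness(Wa) > happiness(Wb)
```

F(W) and F(O(W)) come apart exactly when O hides part of the world, which is why the revised definition earlier in the thread appeals to perceived rather than actual reality.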

Comment author: 06 May 2012 12:54:10PM 0 points [-]

I understand it to mean, roughly, that when comparing hypothetical states of the world Wa and Wb, I perform some computation F(W) on each state such that if F(Wa) > F(Wb), then I consider Wa more valuable than Wb.

That's precisely what I mean.

Another way of saying this is that if O(W) is the reality that I would perceive in a world W, then my happiness in Wa is F(O(Wa)). It simply cannot be the case, on this view, that I consider a proposed state-change in the world to be an improvement without its also being the case that I would be made happier by becoming aware of that state-change actually occurring.

Yes.

Further, if I sincerely assert about some state change that I believe it makes the world better, but it makes me less happy, it follows that I'm simply mistaken about my own internal state... either I don't actually believe it makes the world better, or it doesn't actually make me less happy, or both. Did I get that right? Or are you making the stronger claim that I cannot in point of fact ever sincerely assert something like that?

Hm. I'm not sure what you mean by "sincerely", if those are different. I would say if you claimed "X would make the universe better" and also "Being aware of X would make me less happy", one of those statements must be wrong. I think it requires some inconsistency to claim F(Wa+X)>F(Wa) but F(O(Wa+X))<F(O(Wa)) - I changed the notation slightly, let me know if that doesn't make sense. Although! If X includes a change to F, I must additionally stipulate that the Fs must match - it's perfectly valid to say F1(Wa+X)<F1(Wa) but F2(O(Wa+X))>F2(O(Wa)), which is relatively common (Pascal's Wager comes to mind).
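Both points in this comment can be sketched in toy code. Every function, feature name, and number below is an illustrative assumption of mine:

```python
def apply_change(world, X):
    """Return a new world-state with change X applied (toy model)."""
    new = dict(world)
    new.update(X)
    return new

def O(world):
    """Observation: here the agent perceives the world fully."""
    return dict(world)

# Point 1: with a single fixed F, and an O that passes the change X
# through to perception, F(Wa+X) > F(Wa) and F(O(Wa+X)) < F(O(Wa))
# cannot both hold -- the two judgments must agree.
def F(world):
    return sum(world.values())

Wa = {"art": 1}
X = {"science": 2}
Wb = apply_change(Wa, X)
assert (F(Wb) > F(Wa)) == (F(O(Wb)) > F(O(Wa)))

# Point 2: if X also changes the utility function (F1 before, F2 after),
# the judgments can legitimately diverge -- the Pascal's Wager case.
def F1(world):  # values before the change
    return world.get("leisure", 0)

def F2(world):  # values after the change altered what the agent cares about
    return world.get("faith", 0)

Wa2 = {"leisure": 3, "faith": 0}
X2 = {"leisure": 1, "faith": 5}   # the change trades leisure for faith
Wb2 = apply_change(Wa2, X2)

assert F1(Wb2) < F1(Wa2)          # a loss by the old values...
assert F2(O(Wb2)) > F2(O(Wa2))    # ...a gain by the new ones
```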

Comment author: 06 May 2012 02:33:36PM 0 points [-]

What I mean by "sincerely" is just that I'm not lying when I assert it.
And, yes, this presumes that X isn't changing F.
I wasn't trying to be sneaky; my intention was simply to confirm that you believe F(Wa+X)>F(Wa) implies F(O(Wa+X))>F(O(Wa)), and that I hadn't misunderstood something.
And, further, to confirm that you believe that if F(W) gives the utility of a world-state for some evaluator, then F(O(W)) gives the degree to which that world-state makes that evaluator happy. Or, said more concisely: that H(O(W)) == F(O(W)) for a given observer.

Hm.

So, I agree broadly that F(Wa+X)>F(Wa) implies F(O(Wa+X))>F(O(Wa)). (Although a caveat: it's certainly possible to come up with combinations of F() and O() for which it isn't true, so this is more of an evidentiary implication than a logical one. But I think that's beside our purpose here.)

H(O(W)) = F(O(W)), though, seems entirely unjustified to me. I mean, it might be true, sure, just as it might be true that F(O(W)) is necessarily equal to various other things. But I see no reason to believe it; it feels to me like an assertion pulled out of thin air.

Of course, I can't really have any counterevidence, the way the claim is structured.

I mean, I've certainly had the experience of changing my mind about whether X makes the world better, even though observing X continues to make me equally happy -- that is, the experience of having F(Wa+X) - F(Wa) change while H(O(Wa+X)) - H(O(Wa)) stays the same -- which suggests to me that F() and H() are different functions... but you would presumably just say that I'm mistaken about one or both of those things. Which is certainly possible; I am far from incorrigible about what makes me happy, and I don't entirely understand what I believe makes the world better.

I think I have to leave it there. You are asserting an identity that seems unjustified to me, and I have no compelling reason to believe that it's true, but also no definitive grounds for declaring it false.
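The experience described above, where F shifts while H holds still, can be given a toy sketch under the assumption that H and F really are distinct functions. All names and numbers here are illustrative, not a serious model of anyone's psychology:

```python
# H is a fixed happiness response to observations; F is a revisable
# utility judgment over world-states. If H(O(W)) were identical to
# F(O(W)), their deltas could never come apart as they do below.

def H(observation):
    """Happiness response: visceral, and does not track revised beliefs."""
    return observation.get("sunset", 0)

def F_before(world):
    """Earlier belief: sunsets and industry both make the world better."""
    return world.get("sunset", 0) + world.get("industry", 0)

def F_after(world):
    """Revised belief: industry now judged harmful."""
    return world.get("sunset", 0) - world.get("industry", 0)

def O(world):
    """Full observation, for simplicity."""
    return dict(world)

Wa = {"sunset": 1, "industry": 0}
X = {"industry": 4}
Wb = {**Wa, **X}

# F's verdict on X flips when the agent changes its mind...
assert F_before(Wb) - F_before(Wa) == 4    # X judged an improvement
assert F_after(Wb) - F_after(Wa) == -4     # X judged a worsening

# ...while the happiness delta from observing X is unchanged throughout:
assert H(O(Wb)) - H(O(Wa)) == 0
```

On the identity H(O(W)) == F(O(W)) both deltas would have to move together, so an observation like this, if it is accurate, is evidence that F() and H() are different functions.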

Comment author: 06 May 2012 02:54:30PM *  0 points [-]

I believe you to be sincere when you say

I've certainly had the experience of changing my mind about whether X makes the world better, even though observing X continues to make me equally happy -- that is, the experience of having F(Wa+X) - F(Wa) change while H(O(Wa+X)) - H(O(Wa)) stays the same

but I can't imagine experiencing that. If the utility of a world-state goes down, it seems my happiness from observing that world-state must necessarily go down as well. This discrepancy causes me to believe there is a low-level difference between what you consider happiness and what I consider happiness, but I can't explain mine any further than I already have.

I don't know how else to say it, but I don't feel I'm actually making that assertion. I'm just saying: "By my understanding of hedony=H(x), awareness=O(x), and utility=F(x), I don't see any possible situation where H(O(W)) =/= F(O(W)). If they're indistinguishable, wouldn't it make sense to say they're the same thing?"

Edit: formatting

Comment author: 06 May 2012 03:51:20PM 1 point [-]

I agree that if two things are indistinguishable in principle, it makes sense to use the same label for both.

It is not nearly as clear to me that "what makes me happy" and "what makes the world better" are indistinguishable sets as it seems to be to you, so I am not as comfortable using the same label for both sets as you seem to be.

You may be right that we don't use "happiness" to refer to the same things. I'm not really sure how to explore that further; what I use "happiness" to refer to is an experiential state I don't know how to convey more precisely without in effect simply listing synonyms. (And we're getting perilously close to "what if what I call 'red' is what you call 'green'?" territory, here.)

Comment author: 07 May 2012 12:37:13AM 0 points [-]

Without a much more precise way of describing patterns of neuron-fire, I don't think either of us can describe happiness more than we have so far. Having discussed the reactions in-depth, though, I think we can reasonably conclude that, whatever they are, they're not the same, which answers at least part of my initial question.

Thanks!