When is it adaptive for an organism to be satisfied with what it has?  When does an organism have enough children and enough food?  The answer to the second question, at least, is obviously "never" from an evolutionary standpoint.  The answer to the first might occasionally be "yes", if the reproductive risks of all available options exceed their reproductive benefits.  In general, though, it is a rare organism in a rare environment whose reproductively optimal strategy is to rest with a smile on its face, feeling happy.

To a first approximation, we might say something like "The evolutionary purpose of emotion is to direct the cognitive processing of the organism toward achievable, reproductively relevant goals".  Achievable goals are usually located in the Future, since you can't affect the Past.  Memory is a useful trick, but learning the lesson of a success or failure isn't the same goal as the original event—and usually the emotions associated with the memory are less intense than those of the original event.

Then, the way organisms and brains are built right now, "true happiness" might be a chimera, a carrot dangled in front of us to make us take the next step, and then yanked out of our reach as soon as we achieve our goals.

This hypothesis is known as the hedonic treadmill.

The famous pilot studies in this domain demonstrated e.g. that past lottery winners' stated subjective well-being was not significantly greater than that of an average person, after a few years or even months.  Conversely, accident victims with severed spinal cords were not as happy as before the accident after six months—around 0.75 sd less than control groups—but they'd still adjusted much more than they had expected to adjust.

This being the transhumanist form of Fun Theory, you might perhaps say:  "Let's get rid of this effect.  Just delete the treadmill, at least for positive events."

I'm not entirely sure we can get away with this.  There's the possibility that comparing good events to not-as-good events is what gives them part of their subjective quality.  And on a moral level, it sounds perilously close to tampering with Boredom itself.

So suppose that instead of modifying minds and values, we first ask what we can do by modifying the environment.  Is there enough fun in the universe, sufficiently accessible, for a transhuman to jog off the hedonic treadmill—improve their life continuously, at a sufficient rate to leap to an even higher hedonic level before they had a chance to get bored with the previous one?

This question leads us into great and interesting difficulties.

I had a nice vivid example I wanted to use for this, but unfortunately I couldn't find the exact numbers I needed to illustrate it.  I'd wanted to find a figure for the total mass of the neurotransmitters released in the pleasure centers during an average male or female orgasm, and a figure for the density of those neurotransmitters—density in the sense of mass/volume of the chemicals themselves.  From this I could've calculated how long a period of exponential improvement would be possible—how many years you could have "the best orgasm of your life" by a margin of at least 10%, at least once per year—before your orgasm collapsed into a black hole, the total mass having exceeded the mass of a black hole with the density of the neurotransmitters.

Plugging in some random/Fermi numbers instead:

Assume that a microgram of additional neurotransmitters is released in the pleasure centers during a standard human orgasm.  And assume that the neurotransmitters have the same density as water.  Then an orgasm can reach around 10^8 solar masses before it collapses and forms a black hole, corresponding to 10^47 baseline orgasms.  If we assume that a 100mg dose of crack is as pleasurable as 10 standard orgasms, then the street value of your last orgasm is around a hundred billion trillion trillion trillion dollars.

I'm sorry.  I just had to do that calculation.

Anyway... requiring an exponential improvement eats up a factor of 10^47 in short order.  Starting from human standard and improving at 10% per year, it would take less than 1,200 years.
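For anyone who wants to redo the arithmetic, here is a minimal sketch of that Fermi calculation, using the same made-up inputs (one microgram of neurotransmitters per orgasm, at the density of water):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

rho = 1000.0         # assumed neurotransmitter density (water), kg/m^3
m_per_orgasm = 1e-9  # assumed neurotransmitter mass per orgasm (1 microgram), kg

# A black hole of mass M has Schwarzschild radius r_s = 2GM/c^2, so its mean
# density drops to rho once M = sqrt(3 c^6 / (32 pi G^3 rho)).
M_collapse = math.sqrt(3 * c**6 / (32 * math.pi * G**3 * rho))

print(M_collapse / M_sun)                   # ~1e8 solar masses
n_orgasms = M_collapse / m_per_orgasm
print(f"{n_orgasms:.1e}")                   # ~3e47 baseline orgasms
print(math.log(n_orgasms) / math.log(1.1))  # ~1,150 years of 10%-per-year improvement
```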

Of course you say, "This but shows the folly of brains that use an analog representation of pleasure.  Go digital, young man!"

If you redesigned the brain to represent the intensity of pleasure using IEEE 754 double-precision floating-point numbers, a mere 64 bits would suffice to feel pleasures up to 10^308 hedons...  in, um, whatever base you were using.

This still represents less than 7500 years of 10% annual improvement from a 1-hedon baseline, but after that amount of time, you can switch to larger floats.

Now we have lost a bit of fine-tuning by switching to IEEE-standard hedonics.  The 64-bit double-precision float has an 11-bit exponent and a 52-bit fractional part (and a 1-bit sign).  So we'll only have 52 bits of precision (16 decimal places) with which to represent our pleasures, however great they may be.  An original human's orgasm would soon be lost in the rounding error... which raises the question of how we can experience these invisible hedons, when the finite-precision bits are the whole substance of the pleasure.

We also have the odd situation that, starting from 1 hedon, flipping a single bit in your brain can make your life 10^154 times more happy.

And Hell forbid you flip the sign bit.  Talk about a need for cosmic ray shielding.
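Here is a minimal sketch of those bit flips, assuming hedons really were stored as an ordinary IEEE 754 double (the 2-hedon starting value is chosen only so that the second-highest exponent bit happens to be zero):

```python
import math
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of an IEEE 754 double and return the resulting value."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))[0]

baseline = 2.0                     # hedons
boosted = flip_bit(baseline, 61)   # flip the second-highest exponent bit
print(boosted / baseline)          # ~1.3e154 -- roughly 10^154 times as happy
print(flip_bit(baseline, 63))      # flip the sign bit: -2.0 hedons

# And the earlier figure: years of 10% annual improvement before overflowing
# a double's ~1e308 ceiling, starting from 1 hedon.
print(308 / math.log10(1.1))       # ~7,400 years
```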

But really—if you're going to go so far as to use imprecise floating-point numbers to represent pleasure, why stop there?  Why not move to Knuth's up-arrow notation?

For that matter, IEEE 754 provides special representations for +/-INF, that is to say, positive and negative infinity.  What happens if a bit flip makes you experience infinite pleasure?  Does that mean you Win The Game?

Now all of these questions I'm asking are in some sense unfair, because right now I don't know exactly what I have to do with any structure of bits in order to turn it into a "subjective experience".  Not that this is the right way to phrase the question.  It's not like there's a ritual that summons some incredible density of positive qualia that could collapse in its own right and form an epiphenomenal black hole.

But don't laugh—or at least, don't only laugh—because in the long run, these are extremely important questions.

To give you some idea of what's at stake here, Robin, in "For Discount Rates", pointed out that an investment earning 2% annual interest for 12,000 years adds up to a googol (10^100) times as much wealth; therefore, "very distant future times are ridiculously easy to help via investment".

I observed that there weren't a googol atoms in the observable universe, let alone within a 12,000-lightyear radius of Earth.

And Robin replied, "I know of no law limiting economic value per atom."
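(Both figures are easy to check; a quick sketch using only the numbers quoted above:)

```python
import math

# 2% interest compounded over 12,000 years, in orders of magnitude of wealth:
print(12_000 * math.log10(1.02))   # ~103, i.e. a bit more than a googol (10^100)
# ...versus roughly 10^80 atoms in the observable universe.
```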

If you've got an increasingly large number of bits—things that can be one or zero—and you're doing a proportional number of computations with them... then how fast can you grow the amount of fun, or pleasure, or value?

This echoes back to the questions in Complex Novelty, which asked how many kinds of problems and novel solutions you could find, and how many deep insights there were to be had.  I argued there that the growth rate is faster than linear in bits, e.g., humans can have much more than four times as much fun as chimpanzees even though our absolute brain volume is only around four times theirs.  But I don't think the growth in "depth of good insights" or "number of unique novel problems" is, um, faster than exponential in the size of the pattern.

Now... it might be that the Law simply permits outright that we can create very large amounts of subjective pleasure, every bit as substantial as the sort of subjective pleasure we get now, by the expedient of writing down very large numbers in a digital pleasure center.  In this case, we have got it made.  Have we ever got it made.

In one sense I can definitely see where Robin is coming from.  Suppose that you had a specification of the first 10,000 Busy Beaver machines—the longest-running Turing machines with 1, 2, 3, 4, 5... states.  This list could easily fit on a small flash memory card, made up of a few measly avogadros of atoms.

And that small flash memory card would be worth...

Well, let me put it this way:  If a mathematician said to me that the value of this memory card was worth more than the rest of the entire observable universe minus the card...  I wouldn't necessarily agree with him outright.  But I would understand his point of view.

Still, I don't know if you can truly grok the fun contained in that memory card, without an unbounded amount of computing power with which to understand it.  Ultradense information does not give you ultradense economic value or ultradense fun unless you can also use that information in a way that consumes few resources.  Otherwise it's just More Fun Than You Can Handle.

Weber's Law of Just Noticeable Difference says that stimuli on an intensity scale typically require a fixed proportional difference, rather than any fixed absolute interval, for the difference to be noticeable to a human or other organism.  In other words, we may demand exponential increases because our imprecise brains can't notice smaller differences.  This would suggest that our existing pleasures might already in effect possess a floating-point representation, with an exponent and a fraction—the army of actual neurons being used only to transmit an analog signal most of whose precision is lost.  So we might be able to get away with using floats, even if we can't get away with using up-arrows.
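As a toy illustration of the Weber-Fechner point (the 5% Weber fraction below is an arbitrary illustrative figure, not a measured one):

```python
import math

def jnd_steps(i_low: float, i_high: float, k: float = 0.05) -> float:
    """Number of just-noticeable steps between two intensities, assuming each
    noticeable step is a fixed fraction k of the current level (Weber's Law)."""
    return math.log(i_high / i_low) / math.log(1 + k)

# Doubling the stimulus takes the same number of noticeable steps
# whether you start at one unit or at a million units:
print(jnd_steps(1, 2))                    # ~14.2 steps
print(jnd_steps(1_000_000, 2_000_000))    # ~14.2 steps
```

The number of distinguishable levels grows only with the logarithm of intensity, which is exactly the behavior of a float's exponent field.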

But suppose that the inscrutable rules governing the substantiality of "subjective" pleasure actually require one neuron per hedon, or something like that.

Or suppose that we only choose to reward ourselves when we find a better solution, and that we don't choose to game the betterness metrics.

And suppose that we don't discard the Weber-Fechner law of "just noticeable difference", but go on demanding percentage annual improvements, year after year.

Or you might need to improve at a fractional rate in order to assimilate your own memories.  Larger brains would lay down larger memories, and hence need to grow exponentially—efficiency improvements serving to moderate the growth, but not to eliminate the exponent.

If fun or intelligence or value can only grow as fast as the mere cube of the brain size... and yet we demand a 2% improvement every year...

Then 350 years will pass before our resource consumption grows a single order of magnitude.

And yet there are only around 10^80 atoms in the observable universe.

Do the math.

(It works out to a lifespan of around 28,000 years.)
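Spelled out, with the assumptions stated above (2% annual improvement in fun, fun growing only as the cube of resource use, ~10^80 atoms to work with):

```python
import math

annual_fun_growth = 1.02    # demand 2% more fun per year
cube_law_exponent = 3       # assume fun grows only as the cube of resources used

# Resources must then grow at the cube root of the fun growth rate.
annual_resource_growth = annual_fun_growth ** (1 / cube_law_exponent)

years_per_order_of_magnitude = math.log(10) / math.log(annual_resource_growth)
print(years_per_order_of_magnitude)    # ~350 years per factor of 10 in resources

orders_of_magnitude_available = 80     # ~10^80 atoms in the observable universe
print(orders_of_magnitude_available * years_per_order_of_magnitude)   # ~28,000 years
```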

Now... before everyone gets all depressed about this...

We can still hold out a fraction of hope for real immortality, aka "emortality".  As Greg Egan put it, "Not dying after a very long time.  Just not dying, period."

The laws of physics as we know them prohibit emortality on multiple grounds.  It is a fair historical observation that, over the course of previous centuries, civilizations have become able to do things that previous civilizations called "physically impossible".  This reflects a change in knowledge about the laws of physics, not a change in the actual laws; and we cannot do everything once thought to be impossible.  We violate Newton's version of gravitation, but not conservation of energy.  It's a good historical bet that the future will be able to do at least one thing our physicists would call impossible.  But you can't bank on being able to violate any particular "law of physics" in the future.

There is just... a shred of reasonable hope, that our physics might be much more incomplete than we realize, or that we are wrong in exactly the right way, or that anthropic points I don't understand might come to our rescue and let us escape these physics (also a la Greg Egan).

So I haven't lost hope.  But I haven't lost despair, either; that would be faith.

In the case where our resources really are limited and there is no way around it...

...the question of how fast a rate of continuous improvement you demand for an acceptable quality of life—an annual percentage increase, or a fixed added amount—and the question of how much improvement you can pack into patterns of linearly increasing size—adding up to the fun-theoretic question of how fast you have to expand your resource usage over time to lead a life worth living...

...determines the maximum lifespan of sentient beings.

If you can get by with increasing the size in bits of your mind at a linear rate, then you can last for quite a while.  Until the end of the universe, in many versions of cosmology.  And you can have a child (or two parents can have two children), and the children can have children.  Linear brain size growth * linear population growth = quadratic growth, and cubic growth at lightspeed should be physically permissible.
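As a quick sanity check on that claim, under exactly those growth assumptions (linear mind size, linear population, resources reachable at lightspeed growing as the cube of time):

```python
for t in (1e2, 1e4, 1e6):    # elapsed time, arbitrary units
    demand = t * t           # linear mind size x linear population = quadratic demand
    supply = t ** 3          # volume reachable at lightspeed grows roughly as t^3
    print(f"t={t:.0e}  demand/supply={demand / supply:.0e}")   # the ratio keeps shrinking
```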

But if you have to grow exponentially, in order for your ever-larger mind and its ever-larger memories not to end up uncomfortably squashed into too small a brain—squashed down to a point, to the point of it being pointless—then a transhuman's life is measured in subjective eons at best, and more likely subjective millennia.  Though it would be a merry life indeed.

My own eye has trouble enough looking ahead a mere century or two of growth.  It's not like I can imagine any sort of me the size of a galaxy.  I just want to live one more day, and tomorrow I will still want to live one more day.  The part about "wanting to live forever" is just an induction on the positive integers, not an instantaneous vision whose desire spans eternity.

If I can see to the fulfillment of all my present self's goals that I can concretely envision, shouldn't that be enough for me?  And my century-older self will also be able to see that far ahead.  And so on through thousands of generations of selfhood until some distant figure the size of a galaxy has to depart the physics we know, one way or the other...  Should that be scary?

Yeah, I hope like hell that emortality is possible.

Failing that, I'd at least like to find out one way or the other, so I can get on with my life instead of having that lingering uncertainty.

For now, one of the reasons I care about people alive today is the thought that if creating new people just divides up a finite pool of resources available here, but we live in a Big World where there are plenty of people elsewhere with their own resources... then we might not want to create so many new people here.  Six billion now, six trillion at the end of time?  Though this is more an idiom of linear growth than exponential—with exponential growth, a factor of 10 fewer people just buys you another 350 years of lifespan per person, or whatever.

But I do hope for emortality.  Odd, isn't it?  How abstract should a hope or fear have to be, before a human can stop thinking about it?

Oh, and finally—there's an idea in the literature of hedonic psychology called the "hedonic set point", based on identical twin studies showing that identical twins raised apart have highly similar happiness levels, more so than fraternal twins raised together, people in similar life circumstances, etcetera.  There are things that do seem to shift your set point, but not much (and permanent downward shift happens more easily than permanent upward shift, what a surprise).  Some studies have suggested that up to 80% of the variance in happiness is due to genes, or something shared between identical twins in different environments at any rate.

If no environmental improvement ever has much effect on subjective well-being, the way you are now, because you've got a more or less genetically set level of happiness that you drift back to, then...

Well, my usual heuristic is to imagine messing with environments before I imagine messing with minds.

But in this case?  Screw that.  That's just stupid.  Delete it without a qualm.

 

Part of The Fun Theory Sequence

Next post: "Sensual Experience"

Previous post: "Complex Novelty"

25 comments

"[...] which begs the question [sic] of how we can experience these invisible hedons [...]"

Wh--wh--you said you were sympathetic!

an investment earning 2% annual interest for 12,000 years adds up to a googol (10^100) times as much wealth.

No, it adds up to a googol of economic units. In all likelihood the actual wealth that the investment represents will stay roughly the same, or grow and shrink within fairly small margins.

It seems you conclude with an either/or on subjective experience improvement and brain tinkering. I think it more likely that we will improve our subjective experience up to a certain point of feasibility and then start with the brain tinkering. Some will clock out by wireheading themselves, but most won't. Some will be more disposed towards brain tinkering, some will plug themselves into experience machines instead. The average person will do a little of both, trying various brain modifications the way we try drugs today. Will this be dangerous? Well, the first people to try a new drug are taking a big risk, but the guinea pigs are a small minority. And people will use experience machines, but most won't surrender to them, just like most don't die playing World of Warcraft today.

Eliezer, I'm reading Bill McKibben's "Enough" at the moment, and it is interesting to note that he asks some of the same questions that you do. It seems that he has never come across anyone seriously thinking about hedonic issues in a future with superabundance. He seems to have mostly come across the blind techno-optimists.

In fact the most interesting observation is that both you and McKibben argue strongly that removing all challenge from human life is a bad idea, though McKibben jumps to a relinquishment conclusion from this, rather than considering the possibility of re-introducing the right sort of challenge into a posthuman existence.

I don't see how removing getting-used-to is close to removing boredom. IANAneurologist, but on a surface level, they do seem to work differently - boredom is reading the same book every day and getting tired of it; habituation is getting a new book every day and not thinking "Yay, new fun" anymore.

I'm reluctant to keep habituation because, at least in some cases, it is evil. When the emotion is appropriate to the event, it's wrong for it to diminish - you have a duty to rage against the dying of the light. (Of course we need it for survival; we can't be mourning all the time.) It also looks linked to status quo bias.

Maybe, like boredom, habituation is an incentive to make life better; but it's certainly not optimal.

Does that mean you Win The Game?

I just lost.

It's premature optimization; we won't reach heaven. Anyway, do you test those ideas in practice? Theoretical falsifiability isn't enough.

the value of this memory card was worth more than the rest of the entire observable universe minus the card

I doubt this would be true. I think the value of the card would actually be close to zero (though I'm not completely sure). It does let one solve the halting problem up to 10,000 states, but it does so in time and space complexity O(busy_beaver(n)). In other words, using the entire observable universe as computing power and the card as an oracle, you might be able to solve the halting problem for 7 state machines or so. Not that good... The same goes for having the first 10,000 bits of Omega. What you really want are the bits of Tau, which directly encode whether the nth machine halts. Sure you need exponentially more of them, but your computation is then much faster.

"It's premature optimization"

Thanks, I was trying to think of exactly how to describe this series of posts, and that phrase seems concise enough. It's not that it's not interesting in its own way, but even for an already pretty speculative blog, you're really building castles on air here.

To make yet another analogy, you're trying to build the 100th floor of a huge house of cards here, when you're not even sure what the 5th floor should be like yet (I was going to say the 1st floor, but I think you've at least gotten off to a decent start).

I think this is more like planning the layout for the 100th floor, because that determines what's necessary for the architecture on floors 1-99.

That argument with the lottery winner is pretty materialistic. Why should lottery winners be happier? Feeling happiness doesn't come directly from the materialistic stuff that you own.

It's probably a better strategy to go to some Buddhist monastery and sit down and meditate if your goal is happiness. They've figured out how to mentally detach themselves from expectations enough to be able to feel pleasure through their own conscious choice.

Interestingly, you can have unboundedly many children with only quadratic population growth, so long as they are exponentially spaced. For example, give each newborn sentient a resource token, which can be used after the age of maturity (say, 100 years or so) to fund a child. Additionally, in the years 2^i every living sentient is given an extra resource token. One can show there is at most quadratic growth in the number of resource tokens. By adjusting the exponent in 2^i we can get growth O(n^{1+p}) for any nonnegative real p.

I think the assumption that a life worth living must be one of continuous happiness may be wrong. Math is great, but since happiness can only exist within humans, you must take them into account. Experience seems to show that continuous happiness is not possible even with exponential-exponential-exponential improvement of your life, wealth, or whatever. We just get used to the growth rate, and the growth rate of the growth rate, and so on.

My point here (you know, I'm just a little human) is that I may be satisfied with a life where my wealth level is like white noise. It just has to have a big enough amplitude that I won't get bored.

And the reason why amplitude does not have to grow infinitely is that my memory sucks.

PS. Awesome blog, keep up the great work.

ShardPhoenix, etc.:

"It's premature optimization"

Thanks, I was trying to think of exactly how to describe this series of posts, and that phrase seems concise enough. It's not that it's not interesting in its own way, but even for an already pretty speculative blog, you're really building castles on air here.

To make yet another analogy, you're trying to build the 100th floor of a huge house of cards here, when you're not even sure what the 5th floor should be like yet (I was going to say the 1st floor, but I think you've at least gotten off to a decent start).

I couldn't disagree more. This kind of thinking is very important - not because we need to know RIGHT NOW in order to make some immediate and pressing policy decision, but because humans like to know where things are heading, what we are eventually aiming for. Suppose someone rejects cryonics or life extension research and opts for religion on the grounds that eternity in heaven will be "infinitely" good, but human life on earth, even technologically enhanced life, is necessarily mediocre. What can one say to such objections other than something like this series of posts?

Without a sense that there is a light at the end of the secular rationalist tunnel, many - even most - will give up the fight. This is the relevance of transhumanism to today's world.

Let's say I picked the happiest moment in my life (I honestly don't know what that is, but we can ignore that for now). After the Singularity, when we can do things currently considered impossible, could I for all practical purposes rewind time and experience that moment again as if it had never happened, in order to shift my hedonic set point?

I can remember how happy I was at my fourth birthday when my mum got me a pink balloon. It was very pretty. :)

Really find your blog very interesting. I am very impressed by the work and effort you put into your post - very in-depth. I find your posts thought provoking and I couldn't ask for much more.

thanks.

Without a sense that there is a light at the end of the secular rationalist tunnel, many - even most - will give up the fight.

Thanks, I was trying to figure out exactly how to phrase that rejoinder!

"I couldn't disagree more. This kind of thinking is very important - not because we need to know RIGHT NOW in order to make some immediate and pressing policy decision, but because humans like to know where things are heading, what we are eventually aiming for. Suppose someone rejects cryonics or life extension research and opts for religion on the grounds that eternity in heaven will be "infinitely" good, but human life on earth, even technologically enhanced life, is necessarily mediocre. What can one say to such objections other than something like this series of posts?"

I'd say that if they're willing to believe something just because it sounds nice rather than because it's true, they've already given up on rationality. Is the goal to be rational and spread the truth, or to recruit people to the cause with wildly speculative optimism? I'd think just the idea of creating a super-intelligent AI that doesn't destroy the world (if that's even an issue - and I think there's a good chance that it is) is a good incentive already - there's no need to postulate a secular heaven that depends on so many things that we aren't at all sure about yet.

Shard, with respect, your comment is fine and good if we are dealing with literally perfect rationalists whose emotions do just what they should. Any less than that, and your motivation to go on working can still be torpedoed by not being able to visualize a light at the end of the tunnel.

ShardPhoenix says "I'd say that if they're willing to believe something just because it sounds nice rather than because it's true, they've already given up on rationality."

Humanity isn't neatly divided into people who have "given up on rationality" and tireless rationalists. There are just people who try to and succeed at being rational (i.e. winning) to varying extents, depending on a large complicated set of considerations including how the person is feeling and how smart they are. Even Newton was a religious fundamentalist, and even one who is trying his mightiest to be rational can flinch away from a sufficiently unpleasant truth.

ShardPhoenix then says "Is the goal to be rational and spread the truth, or to recruit people to the cause with wildly speculative optimism?"

Because we aren't perfectly rational creatures, because we try harder to win when motivated, it makes perfect sense to pursue lines of speculation which can motivate us, so long as we keep careful track of which things we actually know and which things we don't so that it doesn't slash our tires. If you think that in his "wildly speculative optimism" Eliezer has, despite all the question marks in his recent writing, claimed to know something which he shouldn't, or to suspect something more strongly than he should, then by all means point it out. If he hasn't, then the phrase "wildly speculative optimism" might not be a terribly good description of the recent series of posts.

Eliezer, does The Adaptive Stereo have an analogous application here?

To compress the idea, if you slowly turn down the strength of a signal into perception (in this case sound), you can make large, pleasant, periodic steps up without actually going anywhere. Or at least, you can go slower.

Any logical reason why this wouldn't work for hedons? 'Going digital' might nullify this effect, but in that case we just wouldn't do that, right?

Finally, I would dispute the notion that a periodic incremental increase in hedons flowing into us is how human pleasure works. The key notion here is surely not pleasure but payoff - continually getting something (even exponentially more of something) for nothing won't feel like an orgasm getting better and better.* Unless you make some serious architectural changes. And, for me at least, that would be filed under 'wirehead'. To continue the orgasm metaphor, if you could press a button (no puns please) and come, you'd quickly get bored. It might even turn you off actual sex.

The future scares me.

I know that we won't necessarily get all these billions and trillions of hedons free - we would probably seek to set up some carrots and sticks of our own etc. But still. It'd be tough not to just plug yourself in given the option. Like you say though, easier to poke holes than to propose solutions. Will ponder on.

Marcello, very well put.

*This is my intuition talking, but surely that's what we're running on here?

"Any less than that, and your motivation to go on working can still be torpedoed by not being able to visualize a light at the end of the tunnel."

I understand, and I'm hardly a perfect rationalist myself, but to me it seems that you don't need to go so far as this to motivate people. You can easily come up with post-FAI scenarios that are much preferable to the modern day without having to speculate about hedon representation in the year 12000, when we don't even know exactly what a hedon is or even what we really want in the long term. And if someone is convinced that post-singularity life can't help but be horrible (or even is just a bit dubious about the whole scenario), then I doubt such "crazy sounding" ideas are going to make them listen.

On a side note, a lot of the stuff in this post seems very closely related to wireheading to me - not that I'm necessarily against that, but I know you are, and this post almost seems to be leading up to wireheading by another name.

cubic growth at lightspeed should be physically permissible

This is an instance of a common error around here. Our current understanding of general relativity implies that you can only fit O(r^2) bits into a sphere of radius r. If you try and put more in, you eventually get a black hole and then we will have to appeal to some much more intricate understanding of physics to know what happens. There may or may not be some deep reasons for this (related to the AdS/CFT correspondence, about which I know very little).

(Of course, I wouldn't bet aggressively against future revisions of the current understanding.)
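For a sense of scale on that O(r^2) bound, here is a rough sketch using the Bekenstein-Hawking entropy formula (bits ~ A / (4 ln 2 l_p^2)); the radius is only an illustrative figure for the observable universe:

```python
import math

l_p = 1.616e-35    # Planck length, m
r = 4.4e26         # rough radius of the observable universe, m

area = 4 * math.pi * r ** 2
max_bits = area / (4 * math.log(2) * l_p ** 2)   # holographic bound on bits in the sphere
print(f"{max_bits:.1e}")   # ~3e123 bits -- scaling with r^2, not r^3
```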

And on a moral level, it sounds perilously close to tampering with Boredom itself.

I am not sure we necessarily need to shy away from tampering with our reward system. To me it seems that the whole reward system is already somewhat arbitrary, shaped by necessities, largely from our past.

We may enjoy (feel rewarded by) building our influence and power, generating and taking care of offspring, and exploring, both spatially and in depth by understanding things better. All of this already seems biased. Also, younger people seem to be more oriented towards rewards related to exploration and expansion, whereas the older we get, the more we shift our focus towards retaining, stabilising, and attempting to guarantee what we have built during our lifetimes.

Although this value set has probably been giving us an advantage for a long time, it is likely relatively slow to change, and it is based on assumptions. These same values might not be directly useful by themselves should those assumptions no longer hold, perhaps even just because of a change in our situation as the most intelligent race on the planet.

For example, if mankind's ability to develop abstract sciences were rendered useless by a superintelligence that provided us with the results far more efficiently, then feeling rewarded by re-inventing and trying to prove or disprove theories already available to us might not give us any advantage. Should we still work towards honing our skills so we can expand what we know ourselves? Perhaps, but not necessarily. Maybe it would then be more valuable to concentrate on better understanding and absorbing the information already provided to us: to concentrate on learning instead of rushing ahead to expand.

Those kinds of shifts, some subtler, some perhaps more radical, are in essence changes in our core "enjoyment" value systems.

I didn't expect that much of happiness to be "hardcoded" in genetics. The Wikipedia page https://en.wikipedia.org/wiki/Hedonic_treadmill gives 50% instead of 80% for the genetic part, but that's still much more than I thought before. (Sound of brain updating its belief network, or at least trying to.)

But there is a gap between perceived long-term "background" happiness and actually perceived instantaneous happiness. Humans are bad at unconsciously integrating. I'm pretty sure that if you asked someone every 5 minutes for a month "right now, how happy are you?" (or, more realistically, measured their happiness every 5 minutes with a small device they carried), and then asked them at the end of the month to rate their happiness over that month, the two results would be quite different.

The "background" happiness (how happy you feel overall this last month) seems harder to change through environmental change than the instantaneous happiness (how happy you feel right now). Which seems quite coherent with the fact that background happiness can be changed by events affecting your life - but not for long. Events affecting your life change your instantaneous happiness of each instant, but after a while your re-normalize your background happiness feeling level to the hardcoded level. Or it could be that, in fact, the effect on your instantaneous happiness of each instant goes decreasing. I suspect a bit of both, now, in which proportion ? I fear we'll have to wait for a real-time happiness meter to be put on identical twins to know for sure.