
Yvain's post suggested it; I just stuck it in my cache.

Yvain wrote:

Only now neuroscientists are starting to recognize a difference between "reward" and "pleasure", or call it "wanting" and "liking"... A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute - they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it). When they knocked out the "liking" system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expression, and areas of the brain thought to be correlated with pleasure wouldn't show up in the MRI. Knock out "wanting", and the rats seem to enjoy the food as much when they get it but not be especially motivated to seek it out.

They are permitted by informed consent. (A new mother may not know in detail what oxytocin does, but would have to be singularly incurious not to have asked other mothers what it's like to become a mother.)

you would allow a third party to tasp you and get you addicted to wireheading

No, I wouldn't. I said I'd require the third party to pay attention to my preferences, not just my happiness, and I've already stated my preference not to be wireheaded.

I can't help but get the feeling that you have some preconceived notions about my personal views that are preventing you from reading my comments carefully. ETA: Well, no, maybe you just believe remote stimulation of the pleasure centers of one's brain to be inherently addictive, whereas I assumed that a superintelligent being hitting my brain with remote stimulation could avoid causing addiction if it were motivated to do so.

Er, what? Please draw a clearer connection between the notion of having preferences over the way things actually are and the notion that our evolutionarily constructed bias/carrot/stick system is a 'noble lie'.

I'm not categorically against being tasped by a third party, but I'd want that third party to pay attention to my preferences, not merely my happiness. I'd also require the third party to be more intelligent than the most intelligent human who ever existed, and not by a small margin either.

I wouldn't: I have preferences about the way things actually are, not just how they appear to me or what I'm experiencing at any given moment.

That oxytocin &c. causes us to bond with and become partial to our children does not make any causally subsequent happiness less real.

note that what you said doesn't quite contradict the hypothesis

Fair point. So let me just state that as far as I can tell, the average of my DWMM2M happiness is higher than it was before my child was born, and I expect that in a counterfactual world where my spouse and I didn't want a child and consequently didn't have one, my DWMM2M happiness would not be as great as in this one. It's just that knowing what I know (including what I've learned from this site) and having been programmed by evolution to love a stupendous badass (and that stupendous badass having been equally programmed to love me back), I find that watching that s.b. unfold into a human before my eyes causes me happiness of a regularity and intensity that I personally have never experienced before.

As a parent I can report that most days my day-wise maximum moment-to-moment happiness is due to some interaction with my child.

But then, my child is indisputably the most lovable child on the planet.

(welcome thread link not necessary)