Comment author: minusdash 03 March 2015 01:23:04AM *  0 points [-]

Life, sin, disease, redness, maleness and indeed dogness "may" also be like electromagnetism. The English language may also be a fundamental part of the universe and maybe you could tell if "irregardless" or "wanna" are real English words by looking into a microscope or turning your telescope to certain parts of the sky, or maybe by looking at chicken intestines, who knows. I know some people think like this. Stuart Hameroff says that morality may be encoded into the universe at the Planck scale. So maybe that's where you should look for "good", maybe "pleasure" is there as well.

But anyway, research into electromagnetism was done using the scientific method, which means that the hypothesis had to produce predictions that were tested and replicated numerous times. What sort of experiment would you envision for testing something about "inherently pleasurable" arrangements of atoms? Would the atoms make you feel warm and fuzzy inside when you look at them? Or would you try to put that pattern into different living creatures and see if they react with their normal joyful reactions?

Comment author: johnsonmx 05 March 2015 06:44:35PM 0 points [-]

Although life, sin, disease, redness, maleness, and dogness are (I believe) inherently 'leaky' / 'fuzzy' abstractions that don't belong with electromagnetism, this is a good comment. If a hypothesis is scientific, it will make falsifiable predictions. I hope to have something more to share on this soon.

Comment author: minusdash 02 March 2015 10:09:56PM 1 point [-]

I don't like the expression "carve reality at the joints"; I think it's very vague, and it's hard to verify whether a concept carves reality there or not. The best way I can imagine this is that you have lots of events or 'things' in some description space, you notice some clusterings, and you pick those clusters as concepts. But a lot depends on which subspace you choose and on what scale you're working... 'Good' may form a cluster or may not; I don't even know how you could give evidence either way. It's unclear how you could formalize this in practice.
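To make the "clusters in a description space" picture concrete (purely illustrative; the data and the two-cluster setup here are invented), a minimal k-means sketch in which two tight groups of points become two "concepts":

```python
def kmeans2(points, iters=10):
    """Toy two-cluster k-means over 2-D points.
    Deterministic init for the sketch: first and last point
    (assumed to lie in different clusters)."""
    c = [points[0], points[-1]]
    for _ in range(iters):
        # assign each point to its nearer centroid
        groups = ([], [])
        for x, y in points:
            d = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in c]
            groups[d.index(min(d))].append((x, y))
        # move each centroid to the mean of its group
        c = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
             for g in groups]
    return sorted(c)

# two well-separated "concepts" in a 2-D description space
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids = kmeans2(data)
```

Whether 'good' forms such a cluster depends entirely on the features and scale chosen, which is the objection above in miniature.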

My view on pleasure and the concept of good is that you're trying to discover the sharp edges of these categories, whereas concepts don't work like that. Take a look at this LW post and this one from Slatestarcodex. From the second one: the concept of a behemah/dag exists because fishing and hunting exist.

Try to make it clearer what you're trying to ask. "What is pleasure really?" is a useless question. You may ask "what is going on in my body when I feel pleasure?" or "how could I induce that state again?"

You seem to be looking for some mathematical description of the pattern of pleasure that would unify pleasure in humans and in aliens with totally unknown properties (aliens that may be based on fundamentally different chemistry, or whose processes might run on the strong nuclear force instead of electromagnetism-based chemistry, or whatever). What do you really have in mind here? A formula, like one part of space giving off pulses at rate X and another part of space 1 cm away pulsating at rate Y?

You may just as well ask how we would detect alien life at all. And then I'd say "life" is a human concept, not a divine platonic object out there that you can go to and see what it really is. We even have edge cases here on Earth, like viruses or prions. But the importance of these sorts of questions disappears if you think about what you'd do with the answer. If it's "I just want to know how it really is, I can't imagine doing anything practical with the answer" then it's too vague to be answered.

Comment author: johnsonmx 03 March 2015 01:01:25AM 2 points [-]

I think we're still not seeing eye-to-eye on the possibility that valence, i.e., whatever pattern within conscious systems innately feels good, can be described crisply.

If it's clear a priori that it can't, then yes, this whole question is necessarily confused. But I see no argument to that effect, just an assertion. From your perspective, my question takes the form: "what's the thing that all dogs have in common?"- and you're trying to tell me it's misguided to look for some platonic 'essence of dogness'. Concepts don't work like that. I do get that, and I agree that most concepts are like that. But from my perspective, your assertion sounds like, "all concepts pertaining to this topic are necessarily vague, so it's no use trying to even hypothesize that a crisp mathematical relationship could exist." I.e., you're assuming your conclusion. Now, we can point to other contexts where rather crisp mathematical models do exist: electromagnetism, for instance. How do you know the concept of valence is more like 'dogness' than electromagnetism?

Ultimately, the details, or mathematics, behind any 'universal' or 'rigorous' theory of valence would depend on having a well-supported, formal theory of consciousness to start from. It's no use talking about patterns within conscious systems when we don't have a clear idea of what constitutes a conscious system. A quantitative approach to valence needs a clear ontology, which we don't have yet (Tononi's IIT is a good start, but hardly a final answer). But let's not mistake the difficulty of answering these questions for their being inherently unanswerable.

We can imagine someone making similar critiques a few centuries ago regarding whether electromagnetism was a sharply-defined concept, or whether understanding it matters. It turned out electromagnetism was a relatively sharply-defined concept: there was something to get, and getting it did matter. I suspect a similar relationship holds with valence in conscious systems. I'm not sure it does, but I think it's more reasonable to accept the possibility than not at this point.

Comment author: RichardKennaway 02 March 2015 11:20:32AM 1 point [-]

This is part of the Hard Problem of Consciousness: why is there any such thing and how does it work? It is Hard because we cannot even see what a solution would be. Even if we discovered patterns of neural activity or anything else that reliably and in great detail matched up with the experience, it seems that that still wouldn't tell us why there is such a thing as that experience, and would not suggest any test we could apply to a synthetic imitation of the patterns.

(7) If we met an alien life-form, how could we tell if it was suffering?

The world is already full of alien life-forms -- that is, life-forms radically different from yourself. How do you decide, and how should you decide, which of the following suffers? A human being with toothache; a dog that has been hit by a car; a mouse bred to grow cancers; a wasp infected by a fungus that is eating up its whole body and sprouting from its surface; a caterpillar paralysed and being eaten alive by the larvae of that wasp; a jellyfish stranded on the beach that a playful child has thrust its spade into; a fish dying from the sting of a jellyfish; a tree with the sort of burr that wood carvers prize for its ornamental patterns; parched grass in a drought. And, for that matter, a cliff face that has collapsed in a great storm; tectonic plates grinding together; a meteor burning up in the atmosphere.

Comment author: johnsonmx 02 March 2015 09:09:20PM 0 points [-]

Right- good questions.

First, I think getting a rigorous answer to this 'mystery of pain and pleasure' is contingent upon having a good theory of consciousness. It's really hard to say anything about which patterns in conscious systems lead to pleasure without a clear definition of what our basic ontology is.

Second, I've been calling this "The Important Problem of Consciousness", a riff off Chalmers' distinction between the Easy and Hard problems. I.e., if someone switched my red and green qualia in some fundamental sense it wouldn't matter; if someone switched pain and pleasure, it would.

Third, it seems to me that patternist accounts of consciousness can answer some of your questions, to some degree, just by ruling out consciousness (things can only experience suffering insofar as they're conscious). How to rank each of your examples in severity, however, is... very difficult.

Comment author: Lumifer 02 March 2015 06:05:22PM *  3 points [-]

Surely neurological processes are "arrangements of particles" too, though.

Processes are not "arrangements"; it's a dynamic vs. static difference.

Comment author: johnsonmx 02 March 2015 09:01:12PM 0 points [-]

Right. It might be a little bit more correct to speak of 'temporal arrangements of arrangements of particles', for which 'processes' is a much less awkward shorthand.

But saying "pleasure is a neurological process" seems consistent with saying "it all boils down to physical stuff- e.g., particles, eventually", and doesn't seem to necessarily imply that "you can't find a 'pleasure pattern' that's fully generalized. The information is always contextual."

Comment author: minusdash 02 March 2015 10:22:19AM 1 point [-]

Good is a complex concept, not an irreducible basic constituent of the universe. It's deeply rooted in our human stuff like metabolism (food is good), reproduction (sex is good), social environment (having allies is good) etc. We can generalize from this and say that the general pattern of "good" things is that they tend to reinforce themselves. If you feel good, you'll strive to achieve the same later. If you feel bad, you'll strive to avoid feeling that in the future. So if an experience makes more of itself, it's good; otherwise it's bad.
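That "reinforces itself" pattern is essentially what reinforcement learning formalizes. A minimal sketch (the reward values and parameters are invented for illustration): an ε-greedy agent facing two actions ends up repeating the one that "felt good":

```python
import random

def reinforce(rewards, steps=2000, eps=0.1, seed=1):
    """Toy ε-greedy agent: keep a running value estimate per action
    and mostly repeat whichever action has felt best so far."""
    rng = random.Random(seed)
    value = [0.0] * len(rewards)
    counts = [0] * len(rewards)
    for _ in range(steps):
        if rng.random() < eps:                  # occasionally explore
            a = rng.randrange(len(rewards))
        else:                                   # otherwise repeat what felt best
            a = value.index(max(value))
        counts[a] += 1
        r = rewards[a] + rng.gauss(0, 0.1)      # noisy "felt good/bad" signal
        value[a] += (r - value[a]) / counts[a]  # incremental mean update
    return counts

# action 1 pays off (1.0 vs 0.0), so it gets chosen far more often
counts = reinforce([0.0, 1.0])
```

The agent's "good" is just whatever made it repeat the experience, which is the behavioral definition being proposed above.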

Note that we could also ask: "Is there a general principle to be found with regard to which patterns within conscious systems innately feel like smelling a rose, or isn't there?" We could build rose smell detecting machines in various ways. How can you say that one is really having the experience of smelling it while another isn't?

Comment author: johnsonmx 02 March 2015 08:56:20PM 1 point [-]

Good is a complex concept, not an irreducible basic constituent of the universe. It's deeply rooted in our human stuff like metabolism (food is good), reproduction (sex is good), social environment (having allies is good) etc

It seems like you're making two very distinct assertions here: first, that valence is not a 'natural kind', that it doesn't 'carve reality at the joints', and is impossible to form a crisp, physical definition of; and second, that valence is highly connected to drives that have been evolutionarily advantageous to have. The second is clearly correct; the first just seems to be an assertion (one that I understand, and I think reasonable people can hold at this point, but that I disagree with).

Comment author: minusdash 02 March 2015 01:04:02AM *  0 points [-]

This all seems to be about the "qualia" problem. Take another example. How would you know if an alien was having the experience of seeing the color red? Well, you could show it red and see what changes. You could infer it from its behavior (for example if you trained it that red means food - if indeed the alien eats food).

Similarly you could tell that it's suffering when it does something to avoid an ongoing situation, and if later on it would very much prefer not to go under the same conditions ever again.

I don't think there is anything special about the actual mechanism and neural pattern that expresses pain or suffering in our brains. It's that pattern's relation to memories, sensory inputs and motor outputs that's important.

Probably you could even retrain the brain to consider a certain fixed brain stimulus to be pleasure even though it was previously associated with pain. It's like putting on those corrective glasses that rotate the visual input by 180°: the brain can adapt to that situation, and the person feels normal after some time.

Comment author: johnsonmx 02 March 2015 08:31:40AM *  0 points [-]

I see the argument, but I'll note that your comments seem to run contrary to the literature on this: see, e.g., Berridge on "Dissecting components of reward: ‘liking’, ‘wanting’, and learning", as summed up by Luke in The Neuroscience of Pleasure. In short, behavior, memory, and enjoyment ('seeking', 'learning', and 'liking' in the literature) all seem to be fairly distinct systems in the brain. If we consider a being with a substantially different cognitive architecture, whether through divergent evolution or design, it seems problematic to view behavior as the gold standard of whether it's experiencing pleasure or suffering. At this point it may be the most practical approach, but it's inherently imperfect.

My strong belief is that although there is substantial plasticity in how we interpret experiences as positive or negative, this plasticity isn't limitless. Some things will always feel painful; others will always feel pleasurable, given a not-too-highly-modified human brain. But really, I think this line of thinking is a red herring: it's not about the stimulus, it's about what's happening inside the brain, and any crisp/rigorous/universal principles will be found there.

Is valence a 'natural kind'? Does it 'carve reality at the joints'? Intuitions on this differ (here's a neat article about the lack of consensus about emotions). I don't think anger, or excitement, or grief carve reality at the joints- I think they're pretty idiosyncratic to the human emotional-cognitive architecture. But if anything about our emotions is fundamental/universal, I think it'd have to be their valence.

Comment author: 27chaos 01 March 2015 08:57:34PM 3 points [-]

Pleasure is not a static "arrangement of particles". Pleasure is a neurological process.

You can't find a "pleasure pattern" that's fully generalized. The information is always contextual.

This isn't a perfect articulation of my objections, but this is a difficult subject.

Comment author: johnsonmx 02 March 2015 08:12:34AM 1 point [-]

Surely neurological processes are "arrangements of particles" too, though.

I think your question gets to the heart of the matter- is there a general principle to be found with regard to which patterns within conscious systems innately feel good, or isn't there? It would seem very surprising to me if there wasn't.

Comment author: dxu 02 March 2015 01:03:55AM *  1 point [-]

Off-topic, but I notice that this post, according to the time-stamp, was apparently posted on March 1, 2015. There are comments attached to it, however, dating from 2013. Does anyone know why this is?

Comment author: johnsonmx 02 March 2015 08:09:27AM 0 points [-]

I had posted the original in 2013, and did a major revision today, before promoting it (leaving the structure of the questions intact, to preserve previous discussion referents).

I hope I haven't committed any faux pas in doing this.

The mystery of pain and pleasure

8 johnsonmx 01 March 2015 07:47PM

 

Some arrangements of particles feel better than others. Why?

We have no general theories, only descriptive observations within the context of the vertebrate brain, about what produces pain and pleasure. It seems like there's a mystery here, a general principle to uncover.

Let's try to chart the mystery. I think we should, in theory, be able to answer the following questions:


(1) What are the necessary and sufficient properties for a thought to be pleasurable?

(2) What are the characteristic mathematics of a painful thought?

(3) If we wanted to create an artificial neural network-based mind (i.e., using neurons, but not slavishly patterned after a mammalian brain) that could experience bliss, what would the important design parameters be?

(4) If we wanted to create an AGI whose nominal reward signal coincided with visceral happiness -- how would we do that?

(5) If we wanted to ensure an uploaded mind could feel visceral pleasure of the same kind a non-uploaded mind can, how could we check that? 

(6) If we wanted to fill the universe with computronium and maximize hedons, what algorithm would we run on it?

(7) If we met an alien life-form, how could we tell if it was suffering?


It seems to me these are all empirical questions that should have empirical answers. But we don't seem to have many hand-holds that could give us a starting point.

Where would *you* start on answering these questions? Which ones are good questions, and which ones aren't? And if you think certain questions aren't good, could you offer some you think are?

 

As suggested by shminux, here's some research I believe is indicative of the state of the literature (though this falls quite short of a full literature review):

Tononi's IIT seems relevant, though it only addresses consciousness and explicitly avoids valence. Max Tegmark has a formal generalization of IIT which he claims should apply to non-neural substrates. And although Tegmark doesn't address valence either, he posted a recent paper on arxiv noting that there *is* a mystery here, and that it seems topical for FAI research.
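For a rough flavor of what "integration" means information-theoretically (an intuition pump only, not IIT's actual Φ, which involves partitioning the system and comparing cause-effect repertoires), one can compute the mutual information between two parts of a system: correlated parts share information, independent parts share none. A toy sketch for two binary nodes:

```python
from math import log2

def mutual_info(joint):
    """I(A;B) in bits for two binary variables, given their 2x2 joint distribution."""
    pa = [sum(row) for row in joint]        # marginal of A
    pb = [sum(col) for col in zip(*joint)]  # marginal of B
    return sum(p * log2(p / (pa[i] * pb[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

# two perfectly correlated binary nodes: 1 bit of shared information
integrated = mutual_info([[0.5, 0.0], [0.0, 0.5]])
# two independent nodes: nothing shared
independent = mutual_info([[0.25, 0.25], [0.25, 0.25]])
```

Nothing in such a measure says anything about valence, which is exactly the gap noted above.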

Current models of emotion based on brain architecture and neurochemicals (e.g., EMOCON) are somewhat relevant, though ultimately correlative or merely descriptive, and seem to have little universalization potential.

There's also a great deal of quality literature about specific correlates of pain and happiness- e.g., Building a neuroscience of pleasure and well-being and An fMRI-Based Neurologic Signature of Physical Pain. Luke covers Berridge's research in his post, The Neuroscience of Pleasure. Short version: 'liking', 'wanting', and 'learning' are all handled by different systems in the brain. Opioids within very small regions of the brain seem to induce the 'liking' response; elsewhere in the brain, opioids only produce 'wanting'. We don't know how or why yet. This sort of research constrains a general principle, but doesn't really hint toward one.

 

In short, there's plenty of research around the topic, but it's focused exclusively on humans/mammals/vertebrates: our evolved adaptations, our emotional systems, and our architectural quirks. Nothing on general or universal principles that would address any of (1)-(7). There is interesting information-theoretic / patternist work being done, but it's highly concentrated around consciousness research.

 

---

 

Bottom line: there seems to be a critically important general principle as to what makes certain arrangements of particles innately preferable to others, and we don't know what it is. Exciting!

Comment author: capybaralet 27 January 2015 06:43:49AM 0 points [-]

These are great questions. I'm not sure they have answers. But they seem extremely pertinent to making a good AGI.

Tegmark's paper here: http://arxiv.org/pdf/1409.0813.pdf seems to be poking in the same direction.

Neglecting these questions is, IMO, tantamount to moral relativism or nihilism.

Comment author: johnsonmx 17 February 2015 11:41:52PM 0 points [-]

Thank you- that paper is extremely relevant and I appreciate the link.

To reiterate, mostly for my own benefit: As Tegmark says- whether we're talking about a foundation to ethics, or a "final goal", or we simply want to not be confused about what's worth wanting, we need to figure out what makes one brain-state innately preferable to another, and ultimately this boils down to arrangements of particles. But what makes one arrangement of particles superior to another? (This is not to give credence to moral relativism- I do believe this has a crisp answer).
