Comment author: minusdash 02 March 2015 10:09:56PM 1 point [-]

I don't like the expression "carve reality at the joints"; I think it's vague, and it's hard to verify whether a given concept actually carves there or not. The best way I can imagine this is that you have lots of events or 'things' in some description space, you notice some clusterings, and you pick those clusters out as concepts. But a lot depends on which subspace you choose and on what scale you're working at... 'Good' may form a cluster or may not; I don't even know how you could give evidence either way. It's unclear how you could formalize this in practice.
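Purely as an illustration (this is my toy sketch, not something anyone in the thread proposes concretely), the "clusters in a description space" picture can be caricatured with a minimal k-means in Python; note how the resulting "concepts" depend entirely on which features define the space:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group 'events' in a description space into k clusters."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k of the data points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = tuple(sum(xs) / len(c) for xs in zip(*c))
    return centroids, clusters

# two obvious "blobs" of events in an (invented) 2-D feature space
blob_a = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1)]
blob_b = [(5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, clusters = kmeans(blob_a + blob_b, k=2)
```

With well-separated blobs the clusters are unambiguous; the worry in the comment is precisely that for 'good' there may be no subspace and scale on which such blobs appear.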

My thoughts on pleasure and the concept of good: your problem is that you're trying to discover the sharp edges of these categories, whereas concepts don't work like that. Take a look at this LW post and this one from Slatestarcodex. From the second one, the concept of a behemah/dag exists because fishing and hunting exist.

Try to make it clearer what you're trying to ask. "What is pleasure really?" is a useless question. You may ask "what is going on in my body when I feel pleasure?" or "how could I induce that state again?"

You seem to be looking for some mathematical description of the pattern of pleasure that would unify pleasure in humans and in aliens with totally unknown properties (that may be based on fundamentally different chemistry, or whose processes may run on the strong nuclear force instead of electromagnetism-based chemistry, or whatever). What do you really have in mind here? A formula, like a part of space giving off pulses at rate X and another part of space 1 cm away pulsing at rate Y?

You may just as well ask how we would detect alien life at all. And then I'd say "life" is a human concept, not a divine platonic object out there that you can go to and see what it really is. We even have edge cases here on Earth, like viruses or prions. But the importance of these sorts of questions disappears if you think about what you'd do with the answer. If it's "I just want to know how it really is, I can't imagine doing anything practical with the answer" then it's too vague to be answered.

Comment author: johnsonmx 03 March 2015 01:01:25AM 2 points [-]

I think we're still not seeing eye-to-eye on the possibility that valence, i.e., whatever pattern within conscious systems innately feels good, can be described crisply.

If it's clear a priori that it can't, then yes, this whole question is necessarily confused. But I see no argument to that effect, just an assertion. From your perspective, my question takes the form "what's the thing that all dogs have in common?", and you're trying to tell me it's misguided to look for some platonic 'essence of dogness'; concepts don't work like that. I do get that, and I agree that most concepts are like that. But from my perspective, your assertion sounds like "all concepts pertaining to this topic are necessarily vague, so it's no use even hypothesizing that a crisp mathematical relationship could exist." I.e., you're assuming your conclusion. Now, we can point to other contexts where rather crisp mathematical models do exist: electromagnetism, for instance. How do you know the concept of valence is more like 'dogness' than like electromagnetism?

Ultimately, the details, or mathematics, behind any 'universal' or 'rigorous' theory of valence would depend on having a well-supported, formal theory of consciousness to start from. It's no use talking about patterns within conscious systems when we don't have a clear idea of what constitutes a conscious system. A quantitative approach to valence needs a clear ontology, which we don't have yet (Tononi's IIT is a good start, but hardly a final answer). But let's not mistake the difficulty of answering these questions for their being inherently unanswerable.

We can imagine someone making similar critiques a few centuries ago regarding whether electromagnetism was a sharply-defined concept, or whether understanding it matters. It turned out electromagnetism was a relatively sharply-defined concept: there was something to get, and getting it did matter. I suspect a similar relationship holds with valence in conscious systems. I'm not sure it does, but I think it's more reasonable to accept the possibility than not at this point.

Comment author: RichardKennaway 02 March 2015 11:20:32AM 1 point [-]

This is part of the Hard Problem of Consciousness: why is there any such thing and how does it work? It is Hard because we cannot even see what a solution would be. Even if we discovered patterns of neural activity or anything else that reliably and in great detail matched up with the experience, it seems that that still wouldn't tell us why there is such a thing as that experience, and would not suggest any test we could apply to a synthetic imitation of the patterns.

(7) If we met an alien life-form, how could we tell if it was suffering?

The world is already full of alien life-forms -- that is, life-forms radically different from yourself. How do you decide, and how should you decide, which of the following suffers? A human being with toothache; a dog that has been hit by a car; a mouse bred to grow cancers; a wasp infected by a fungus that is eating up its whole body and sprouting from its surface; a caterpillar paralysed and being eaten alive by the larvae of that wasp; a jellyfish stranded on the beach that a playful child has thrust its spade into; a fish dying from the sting of a jellyfish; a tree with the sort of burr that wood carvers prize for its ornamental patterns; parched grass in a drought. And, for that matter, a cliff face that has collapsed in a great storm; tectonic plates grinding together; a meteor burning up in the atmosphere.

Comment author: johnsonmx 02 March 2015 09:09:20PM 0 points [-]

Right, good questions.

First, I think getting a rigorous answer to this 'mystery of pain and pleasure' is contingent upon having a good theory of consciousness. It's really hard to say anything about which patterns in conscious systems lead to pleasure without a clear definition of what our basic ontology is.

Second, I've been calling this "The Important Problem of Consciousness", a riff off Chalmers' distinction between the Easy and Hard problems. I.e., if someone switched my red and green qualia in some fundamental sense it wouldn't matter; if someone switched pain and pleasure, it would.

Third, it seems to me that patternist accounts of consciousness can answer some of your questions, to some degree, just by ruling out consciousness (things can only experience suffering insofar as they're conscious). How to rank each of your examples in severity, however, is... very difficult.

Comment author: Lumifer 02 March 2015 06:05:22PM *  3 points [-]

Surely neurological processes are "arrangements of particles" too, though.

Processes are not "arrangements"; that's a dynamic vs. static difference.

Comment author: johnsonmx 02 March 2015 09:01:12PM 0 points [-]

Right. It might be a little bit more correct to speak of 'temporal arrangements of arrangements of particles', for which 'processes' is a much less awkward shorthand.

But saying "pleasure is a neurological process" seems consistent with saying "it all boils down to physical stuff- e.g., particles, eventually", and doesn't seem to necessarily imply that "you can't find a 'pleasure pattern' that's fully generalized. The information is always contextual."

Comment author: minusdash 02 March 2015 10:22:19AM 1 point [-]

Good is a complex concept, not an irreducible basic constituent of the universe. It's deeply rooted in our human stuff like metabolism (food is good), reproduction (sex is good), social environment (having allies is good), etc. We can generalize from this and say that the general pattern of "good" things is that they tend to reinforce themselves. If you feel good, you'll strive to achieve the same thing later. If you feel bad, you'll strive to avoid feeling that in the future. So if an experience makes more of itself, it's good; otherwise it's bad.
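The "good experiences make more of themselves" idea can be caricatured in a few lines of Python (a toy sketch with invented action names and numbers, not a model of actual reward learning):

```python
import random

def train(reward, steps=500, seed=1):
    """Toy reinforcement loop: an action that 'feels good' (positive reward)
    becomes more likely to be chosen again; one that feels bad, less likely."""
    rng = random.Random(seed)
    weights = {"a": 1.0, "b": 1.0}  # initial propensity for each action
    for _ in range(steps):
        total = sum(weights.values())
        # pick an action in proportion to its current weight
        action = "a" if rng.random() < weights["a"] / total else "b"
        # the experience reinforces (or suppresses) its own recurrence
        weights[action] = max(0.1, weights[action] + reward[action])
    return weights

# 'a' feels good (+0.05 per experience), 'b' feels bad (-0.05)
w = train({"a": +0.05, "b": -0.05})
```

After training, the "good" action dominates the agent's behavior, which is the sense in which the experience "makes more of itself".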

Note that we could also ask: "Is there a general principle to be found with regard to which patterns within conscious systems innately feel like smelling a rose, or isn't there?" We could build rose smell detecting machines in various ways. How can you say that one is really having the experience of smelling it while another isn't?

Comment author: johnsonmx 02 March 2015 08:56:20PM 1 point [-]

Good is a complex concept, not an irreducible basic constituent of the universe. It's deeply rooted in our human stuff like metabolism (food is good), reproduction (sex is good), social environment (having allies is good) etc

It seems like you're making two very distinct assertions here: first, that valence is not a 'natural kind', that it doesn't 'carve reality at the joints', and is impossible to form a crisp, physical definition of; and second, that valence is highly connected to drives that have been evolutionarily advantageous to have. The second is clearly correct; the first just seems to be an assertion (one that I understand, and I think reasonable people can hold at this point, but that I disagree with).

Comment author: minusdash 02 March 2015 01:04:02AM *  0 points [-]

This all seems to be about the "qualia" problem. Take another example. How would you know if an alien was having the experience of seeing the color red? Well, you could show it red and see what changes. You could infer it from its behavior (for example if you trained it that red means food - if indeed the alien eats food).

Similarly you could tell that it's suffering when it does something to avoid an ongoing situation, and if later on it would very much prefer not to go under the same conditions ever again.

I don't think there is anything special about the actual mechanism and neural pattern that expresses pain or suffering in our brains. It's that pattern's relation to memories, sensory inputs and motor outputs that's important.

Probably you could even retrain the brain to experience a given fixed brain stimulus as pleasure even though it was previously associated with pain. It's like putting on those corrective glasses that rotate the visual input by 180°: the brain adapts, and the person feels normal after some time.

Comment author: johnsonmx 02 March 2015 08:31:40AM *  0 points [-]

I see the argument, but I'll note that your comments seem to run contrary to the literature on this: see, e.g., Berridge on "Dissecting components of reward: ‘liking’, ‘wanting’, and learning", as summed up by Luke in The Neuroscience of Pleasure. In short, behavior, memory, and enjoyment ('wanting', 'learning', and 'liking' in the literature) all seem to be fairly distinct systems in the brain. If we consider a being with a substantially different cognitive architecture, whether through divergent evolution or design, it seems problematic to treat behavior as the gold standard of whether it's experiencing pleasure or suffering. At this point it may be the most practical approach, but it's inherently imperfect.

My strong belief is that although there is substantial plasticity in how we interpret experiences as positive or negative, this plasticity isn't limitless. Some things will always feel painful; others will always feel pleasurable, given a not-too-highly-modified human brain. But really, I think this line of thinking is a red herring: it's not about the stimulus, it's about what's happening inside the brain, and any crisp/rigorous/universal principles will be found there.

Is valence a 'natural kind'? Does it 'carve reality at the joints'? Intuitions on this differ (here's a neat article about the lack of consensus about emotions). I don't think anger, or excitement, or grief carve reality at the joints; I think they're pretty idiosyncratic to the human emotional-cognitive architecture. But if anything about our emotions is fundamental/universal, I think it'd have to be their valence.

Comment author: 27chaos 01 March 2015 08:57:34PM 3 points [-]

Pleasure is not a static "arrangement of particles". Pleasure is a neurological process.

You can't find a "pleasure pattern" that's fully generalized. The information is always contextual.

This isn't a perfect articulation of my objections, but this is a difficult subject.

Comment author: johnsonmx 02 March 2015 08:12:34AM 1 point [-]

Surely neurological processes are "arrangements of particles" too, though.

I think your question gets to the heart of the matter: is there a general principle to be found with regard to which patterns within conscious systems innately feel good, or isn't there? It would seem very surprising to me if there weren't.

Comment author: dxu 02 March 2015 01:03:55AM *  1 point [-]

Off-topic, but I notice that this post, according to the time-stamp, was apparently posted on March 1, 2015. There are comments attached to it, however, dating from 2013. Does anyone know why this is?

Comment author: johnsonmx 02 March 2015 08:09:27AM 0 points [-]

I had posted the original in 2013, and did a major revision today, before promoting it (leaving the structure of the questions intact, to preserve previous discussion referents).

I hope I haven't committed any faux pas in doing this.

Comment author: capybaralet 27 January 2015 06:43:49AM 0 points [-]

These are great questions. I'm not sure they have answers. But they seem extremely pertinent to making a good AGI.

Tegmark's paper here: http://arxiv.org/pdf/1409.0813.pdf seems to be poking in the same direction.

Neglecting these questions is, IMO, tantamount to moral relativism or nihilism.

Comment author: johnsonmx 17 February 2015 11:41:52PM 0 points [-]

Thank you; that paper is extremely relevant, and I appreciate the link.

To reiterate, mostly for my own benefit: as Tegmark says, whether we're talking about a foundation for ethics, or a "final goal", or we simply want not to be confused about what's worth wanting, we need to figure out what makes one brain-state innately preferable to another, and ultimately this boils down to arrangements of particles. But what makes one arrangement of particles superior to another? (This is not to give credence to moral relativism; I do believe this has a crisp answer.)

Comment author: johnsonmx 27 February 2014 05:44:08PM 3 points [-]

Very interesting. No objections to your main points, but a few comments on side points and conclusions:

  • You say "it's not like we know of a specific technological innovation that would solve poverty, if only someone would develop it." I would identify Greg Cochran's 'genetic spellcheck' as such a tech, along with what other people are suggesting. http://westhunt.wordpress.com/2012/02/27/typos/

  • "We might have exhausted the low-hanging fruits in our desires." I think this is right, but it's complicated. I think the Robin Hanson way to frame this could be the following: innovation has been this rising technological tide that has made it a lot easier to meet most of Maslow's hierarchy of needs. But now most of the 'gains' from innovation are made in positional goods and services, which aren't the same sort of gains as, say, flush toilets, so they don't feel "real".

Comment author: falenas108 12 May 2013 06:29:35AM 0 points [-]

A possible answer:

There are many different kinds of pain and pleasure, and trying to categorize all of them together loses information.

For starters, the difference between physical and mental pain and pleasure.

To get more nuanced, the stinging pain of a slap, the thudding pain of a punch, the searing pain of fire, and the pain from electricity are all very distinct feelings, which could have very different circuitry.

I'm not as sure about the last paragraph; I'd place it at 60% probability.

Comment author: johnsonmx 12 May 2013 07:19:55AM *  2 points [-]

On the first point: what you say is clearly right, but it's also consistent with the notion that there are certain mathematical commonalities which hold across the various 'flavors' of pleasure, and different mathematical commonalities in pain states.

Squashing the richness of human emotion into a continuum of positive and negative valence sounds like a horribly lossy transform, but I'm okay with that in this context. I expect that experiences at the 'pleasure' end of the continuum will have important commonalities 'under the hood' with others at that same end. And those commonalities will vanish, and very possibly invert, when we look at the 'agony' end.

On the second point, the evidence points to physical and emotional pain sharing many of the same circuits, and indeed, drugs which reduce physical pain also reduce emotional pain. On the other hand, as you might expect, there are some differences in the precise circuitry each type of pain activates. But by and large, the differences are subtle.
