Comment author: johnsonmx 27 February 2014 05:44:08PM 3 points [-]

Very interesting. No objections to your main points, but a few comments on side points and conclusions:

  • You say "it's not like we know of a specific technological innovation that would solve poverty, if only someone would develop it." I would identify Greg Cochran's 'genetic spellcheck' as such a tech, along with what other people are suggesting. http://westhunt.wordpress.com/2012/02/27/typos/

  • "We might have exhausted the low-hanging fruits in our desires." I think this is right, but it's complicated. I think the Robin Hanson way to frame this could be the following: innovation has been this rising technological tide that has made it a lot easier to meet most of Maslow's hierarchy of needs. But now most of the 'gains' from innovation are made in positional goods and services, which aren't the same sort of gains as, say, flush toilets, so they don't feel "real".

Comment author: falenas108 12 May 2013 06:29:35AM 0 points [-]

A possible answer:

There are many different kinds of pain and pleasure, and trying to categorize all of them together loses information.

For starters, there's the difference between physical and mental pain and pleasure.

To get more nuanced: the stinging pain of a slap, the thudding pain of a punch, the searing pain of fire, and the pain from electricity are all very distinct feelings, which could have very different circuitry.

I'm not as sure about the last paragraph; I'd place that at 60% probability.

Comment author: johnsonmx 12 May 2013 07:19:55AM *  2 points [-]

On the first point-- what you say is clearly right, but is also consistent with the notion that there are certain mathematical commonalities which hold across the various 'flavors' of pleasure, and different mathematical commonalities in pain states.

Squashing the richness of human emotion into a continuum of positive and negative valence sounds like a horribly lossy transform, but I'm okay with that in this context. I expect that experiences at the 'pleasure' end of the continuum will have important commonalities 'under the hood' with others at that same end. And those commonalities will vanish, and very possibly invert, when we look at the 'agony' end.

On the second point, the evidence points to physical and emotional pain sharing many of the same circuits, and indeed, drugs which reduce physical pain also reduce emotional pain. On the other hand, as you might expect, there are some differences in the precise circuitry each type of pain activates. But by and large, the differences are subtle.

Comment author: Qiaochu_Yuan 11 May 2013 10:21:13PM *  5 points [-]

These questions seem confused, but I'm having trouble articulating exactly why I think that. Something like "you are trying to take concepts that are appropriate when you model the world at one level of detail and applying them to a model of the world at a more detailed level, and this is a type error."

Comment author: johnsonmx 12 May 2013 02:06:27AM 2 points [-]

I understand the type of criticism generally, but could you say more about this specific case?

I'm curious if the objection stems from some mismatch of abstraction layers, or just the habit of not speaking about certain topics in certain terms.

Comment author: gjm 11 May 2013 01:07:46PM 4 points [-]

I'm not nyan_sandwich, but here is what I believe to be his point about asking for necessary and sufficient conditions.

Part of your question (maybe not all) appears to be: how should we define "pleasure"?

Aside from precise technical definitions ("an abelian group is a set A together with a function * from AxA to A, such that ..."), the meaning of a word is hardly ever accurately given by any necessary-and-sufficient conditions that can be stated explicitly in a reasonable amount of space, because that just isn't the way human minds work.

We learn the meaning of a word by observing how it's used. We see, and hear, a word like "pleasure" or "pain" applied to various things, and not to others. What our brains do with this is approximately to consider something an instance of "pleasure" in so far as it resembles other things that are called "pleasure". There's no reason why any manageable set of necessary and sufficient conditions should be equivalent to that.

Further, different people are exposed to different sets of uses of the word, and evaluate resemblance in different ways. So your idea of "pleasure" may not be the same as mine, and there's no reason why there need be any definite answer to the question of whose is better.

Typically, lots of different things will contribute to our considering something sufficiently like other instances of "pleasure" to deserve that name itself. In some particular contexts, some will be more important than others. So if you're trying to pin down a precise definition for "pleasure", the features you should concentrate on will depend on what that definition is going to be used for.

Does any of that help?

Comment author: johnsonmx 11 May 2013 07:04:39PM 1 point [-]

It does, and thank you for the reply.

How should we define "pleasure"? -- A difficult question. As you mention, it is a cloud of concepts, not a single one. It's even more difficult because there appears to be precious little driving the standardization of the word-- e.g., if I use the word 'chair' differently than others, it's obvious; people will correct me, and our usages will converge. If I use the word 'pleasure' differently than others, that won't be as obvious, because it's a subjective experience, and there'll be much less convergence toward a common usage.

But I'd say that in practice, these problems tend to work themselves out, at least enough for my purposes. E.g., if I say "think of pure, unadulterated agony" to a room of 10000 people, I think the vast majority would arrive at fairly similar thoughts. Likewise, if I asked 10000 people to think of "pure, unadulterated bliss… the happiest moment in your life", I think most would arrive at thoughts which share certain attributes, and none (<.01%) would invert answers to these two questions.

I find this "we know it when we see it" definitional approach completely philosophically unsatisfying, but it seems to work well enough for my purposes, which is to find mathematical commonalities across brain-states people identify as 'pleasurable', and different mathematical commonalities across brain-states people identify as 'painful'.

I see what you mean by "the meaning of a word is hardly ever accurately given by any necessary-and-sufficient conditions that can be stated explicitly in a reasonable amount of space, because that just isn't the way human minds work." On the other hand, all words are imperfect and we need to talk about this somehow. How about this: (1) What are the characteristic mathematics of (i.e., mathematics found disproportionately in) self-identified pleasurable brain states?

Comment author: [deleted] 11 May 2013 04:34:20AM -1 points [-]

First recommendation is to get to the bottom of what question you are actually asking. What are you actually trying to do? Do the right thing? Learn how to manipulate people? Learn how to torture? Become a pleasure delivery professional?

See disguised queries

(1) What are the necessary and sufficient properties for a thought to be pleasurable?

It feels good? It would take some pretty heavy neuroscience to say anything beyond that. Again, what are you going to do with the answer to this question? Ask that question instead.

Also note that "necessary and sufficient" is an obsolete model of concepts. See A Human's Guide to Words.

(2) What are the characteristic mathematics of a painful thought?

What does this mean? How do I calculate exactly how much pain someone will experience if I punch them? Again, ask the real question.

(3) If we wanted to create an artificial neural network-based mind (i.e., using neurons, but not slavishly patterned after a mammalian brain) that could experience bliss, what would the important design parameters be?

Um. Why would you want to do that? Is this simply a hypothetical to see if we understand the concept?

It really depends on what aspect you are interested in; you could create "pleasure" and "pain" by hacking up some kind of simple reinforcement learner, and I suppose you could shoehorn that into a neural network if you really wanted to. But why?
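
To be concrete, here's a minimal sketch of the sort of thing I mean -- a tabular reinforcement learner in Python whose entire notion of "pleasure" and "pain" is the sign of a scalar reward (all the names here are made up for illustration):

    import random
    from collections import defaultdict

    class SimpleReinforcementLearner:
        # A toy Q-learner: its only "pleasure" and "pain" are scalar rewards.
        def __init__(self, actions, lr=0.1, epsilon=0.1):
            self.q = defaultdict(float)  # (state, action) -> estimated value
            self.actions = actions
            self.lr = lr                 # learning rate
            self.epsilon = epsilon       # exploration probability

        def act(self, state):
            # Usually pick the highest-valued action; occasionally explore.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def learn(self, state, action, reward):
            # Positive reward ("pleasure") reinforces the action taken;
            # negative reward ("pain") suppresses it. Nothing more is going on.
            key = (state, action)
            self.q[key] += self.lr * (reward - self.q[key])

    # Hypothetical usage: the learner gets "hurt" for touching the stove.
    learner = SimpleReinforcementLearner(actions=["approach", "avoid"])
    chosen = learner.act("hot_stove")
    learner.learn("hot_stove", chosen, reward=-1.0)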

Note that a simple reinforcement learner "experiences" "pain" and "pleasure" in some sense, but not in the morally relevant sense. You will find that the moral aspect is much more anthropomorphic and much more complex, I think.

(4) If we wanted to create an AGI whose nominal reward signal coincided with visceral happiness -- how would we do that?

I guess you could have a little "visceral happiness" meter that gets filled up in the right conditions, but this would be a profound waste of AGI capability, and probably doesn't do what you actually wanted. What is it you actually want?

(5) If we wanted to ensure an uploaded mind could feel visceral pleasure of the same kind a non-uploaded mind can, how could we check that?

Ask them? The same way we think we know for non-uploaded minds.

(6) If we wanted to fill the universe with computronium and maximize hedons, what algorithm would we run on it?

If I wanted to turn the universe into paperclips and meaningless crap, how would I do it? Why is your question interesting? Is this simply an exercise in learning how to fill the universe with X? You could pick a less confusing X.

I feel like you might be importing a few mistaken assumptions into this whole line of questioning. I recommend that you lurk more and read some of the stuff I linked.

And if you think certain questions aren't good, could you offer some you think are?

Good question:

How would a potentially powerful optimizing process have to be constructed to be provably capable of steering towards some coherent objective(s) over the long run and through self-modifications?

My first post; please be somewhat gentle. Thanks!

Downvote preventers get downvoted.

In response to comment by [deleted] on The mystery of pain and pleasure
Comment author: johnsonmx 11 May 2013 05:46:38AM *  3 points [-]

We seem to be talking past each other, to some degree. To clarify, my six questions were chosen to illustrate how much we don't know about the mathematics and science behind psychological valence. I tried to have all of them point at this concept, each from a slightly different angle. Perhaps you interpreted them as 'disguised queries' because you thought my intent was something other than seeking clarity about how to speak about this general topic of valence, particularly outside the narrow context of the human brain?

I am not trying to "Learn how to manipulate people? Learn how to torture? Become a pleasure delivery professional?" -- my focus is entirely on speaking about psychological valence in clear terms, illustrating that there's much we don't know, and making the case that there are empirical questions about the topic that don't seem to have empirical answers. Also, in very tentative terms, I want to express the personal belief that a clear theory of exactly what states of affairs are necessary and sufficient for creating pain and pleasure may have some applicability to FAI/AGI topics (e.g., under what conditions can simulated people feel pain?).

I did not find 'necessary and sufficient', or any permutation thereof, in A Human's Guide to Words. Perhaps you'd care to explain why you didn't care for my usage?

Re: (3) and (4), I'm certain we're not speaking of the same things. I recall Eliezer writing about how creating pleasure isn't as simple as defining a 'pleasure variable' and incrementing it:

    int pleasure = 5;
    pleasure++;

I can do that on my MacBook Pro; it does not create pleasure.

There exist AGIs in design space that have the capacity to (viscerally) feel pleasure, much like humans do. There exist AGIs in design space with a well-defined reward channel. I'm asking: what principles can we use to construct an AGI which feels visceral pleasure when (and only when) its reward channel is activated? If you believe this is trivial, we are not communicating successfully.

I'm afraid we may not share common understandings (or vocabulary) on many important concepts, and I'm picking up a rather aggressive and patronizing vibe, but a genuine thanks for taking the time to type out your comment, and especially the intent in linking that which you linked. I will try not to violate too many community norms here.

Comment author: shminux 11 May 2013 03:57:38AM *  5 points [-]

Note that all worthwhile original research starts with a literature review. What have you found so far?

Comment author: johnsonmx 11 May 2013 04:16:30AM *  6 points [-]

Tononi's Phi theory seems somewhat relevant, though it only addresses consciousness and explicitly avoids valence. It does seem like something that could be adapted toward answering questions like this (somehow).

Current models of emotion based on brain architecture and neurochemicals (e.g., EMOCON) are relevant, though ultimately correlative and thus not applicable outside of the human brain.

There's also a great deal of quality literature about specific correlates of pain and happiness-- e.g., Building a neuroscience of pleasure and well-being and An fMRI-Based Neurologic Signature of Physical Pain.

In short, I've found plenty of research around the topic, but nothing that's particularly predictive outside of very constrained contexts. No generalized theories. There's some interesting stuff happening around panpsychism (e.g., see these two pieces by Chalmers), but they focus on consciousness, not valence.

My intuition is that valence will be encoded within frequency dynamics in a way that will be very amenable to mathematical analysis, but right now I'm seeking clarity about how to speak about the problem.

Edit: I'll add this to the bottom of the post

In response to How to Be Happy
Comment author: johnsonmx 15 January 2013 07:30:54AM 0 points [-]

I'd just like to say thanks for posting this. Cogent, well-researched, cheerful, and helpful.

Comment author: johnsonmx 06 December 2012 11:35:50PM *  4 points [-]

Another view of Philosophy, which I believe Russell also subscribed to (though I can't seem to find a reference at present), is that philosophy was the 'mother discipline'. It was generative. You developed your branch of Philosophy until you got your ontology and methodology sorted out, and then you stopped calling what you were doing philosophy. (This has the amusing side-effect of making anything philosophers say wrong by definition-- sometimes useful, but always wrong.)

The Natural Sciences, Psychology, Logic, Mathematics, Linguistics-- they all got their start this way.

That's how Philosophy used to work. Nowadays, I think the people who can do that type of "mucking around with complex questions of ontology and methodology" thinking have largely moved on to other disciplines. If we define Philosophy as this messily complex discipline-generating process, it no longer happens in the discipline we call "Philosophy".[1]

That said-- while I would personally enjoy the "intro to philosophy" syllabus Luke proposes, I think it's a stretch to label the course a philosophy course, much less [The One And True] Intro To Philosophy. It's cool and a great idea, but the continuity with many models (be they aspirational or descriptive) of Philosophy is fairly tenuous, and without a lot of continuity I think it'd be hard to push into established departments.[2]

If we're speaking more modestly, that philosophers should be steeped in modern science and logic and that when they're not, what they do is often worse than useless, I can certainly agree with that.

[1] E.g., Axiology.

[2] Why not call it "introduction to scientific epistemology"?

In response to Causal Reference
Comment author: johnsonmx 28 October 2012 07:59:47PM -1 points [-]

We can speak of different tiers of stuff, interacting (or not) through unknown causal mechanisms, but Occam's Razor would suggest these different tiers of stuff might actually be fundamentally the same 'stuff', just somehow viewed from different angles. (This would in turn suggest some form of panpsychism.)

In short, I have trouble seeing how we make these metaphysical hierarchizations pay rent. Perhaps that's your point also.

Comment author: joaolkf 05 March 2010 01:05:45PM *  16 points [-]

A cognitive module for cuteness only needs to make us find babies a nice thing and enhance the probability of parental care. It simply doesn’t matter if, besides doing that, the same cognitive module makes us find bunnies, or orthorhombic sulfur crystals at low temperature, cute, so long as this doesn’t have any deleterious effects. A cognitive module that could find only human babies cute, and not bunnies, is probably more evolutionarily improbable and developmentally costly than a cheaper, more universal cognitive module for cuteness with the same relevant behavioral results. Evolution only needs to shape cognition in order to generate, more or less, the right type of behavior. It DOESN’T have to, and in most cases it doesn’t, shape cognition nicely, in a way we would look at it and say “nice work”.

Comment author: johnsonmx 24 October 2012 10:55:38PM 2 points [-]

Yes, and I would say finding bunnies cuter than human babies isn't a strong argument against Dennett's hypothesis. Supernormal Stimuli are quite common in humans and non-humans.

I think this argument could be analogously phrased: "The reason why exercise makes us feel good can't be to get us to exercise more, because cocaine feels even better than exercise." Seems wrong when we put it that way.
