There is no such thing as pleasure
By saying that there is no such thing as pleasure, I don't mean that I don't enjoy anything. I mean that I can find nothing in common among all the things I do enjoy, to call "pleasure". In contrast, I can find something in common among all physically painful things. I have experienced toothache, indigestion, a stubbed toe, etc., and these experiences differ along only a few dimensions: intensity, location, sharpness, and temporal modulation are about it. I perceive a definite commonality among these experiences, and that is what I call "pain". (Metaphorical pains such as "emotional pain" or "an eyesore" are not included.)
However, I cannot find anything in common among solving an interesting problem, sex, listening to good music, or having a good meal. Not common to all of them, nor even common to any two of them. There is not even a family resemblance. This is what I mean when I say there is no such thing as pleasure. But that's just me. I know that mental constitutions vary, and I suspect they vary in more ways than anyone has yet discovered. Perhaps they vary in this matter? Are there people who do experience "pleasure", in the sense in which I do not?
Why is this a LessWrong topic? Because people often talk about "pleasure" as if there were such a thing, the obtaining of which is the reason that people seek pleasurable experiences, and the maximisation of which is what people do. But it appears to me that "pleasure" is nothing more than a label applied to disparate experiences, becoming a mere dormitive principle when used as an explanation. Does that difference result from an actual difference in mental constitution?
If there are people who do experience a definite thing common to all enjoyable experiences, this might be one reason for the attraction, to some, of utilitarian theories -- even for taking some sort of utilitarianism to be obviously, trivially true. My experience, as set out above, is certainly one reason why I find all varieties of utilitarianism a priori implausible.
Dennett's heterophenomenology
In an earlier comment, I conflated heterophenomenology in the general sense of taking introspective accounts as data to be explained rather than direct readouts of the truth, with Dennett's particular approach to explaining those data. So to correct myself, I say that it is Dennett, rather than heterophenomenology, that claims that there is no such thing as consciousness. Dennett denies that he does, but I disagree. I defend this view here.
I have to admit at this point that I have not read "Consciousness Explained". Had either of the library's copies been on the shelves last Tuesday I would have done by now, but instead I found his later book (and his most recent on the topic), "Sweet Dreams: Philosophical Obstacles to a Science of Consciousness". The subtitle suggests a drawing back from the confidence of the earlier title, as does that of the book in between. The book confirms me in my impression that the ideas of "C.E." have been in the air so long (the air of hard SF, sciblogs, and the like, not to mention Phil Goetz's recent posts) that reading the primary source 19 years on would be nothing more than an exercise in checkbox-ticking.
I'll give a brief run-through of "Sweet Dreams" and then carry on the argument.
The usefulness of correlations
I sometimes wonder just how useful probability and statistics are. There is the theoretical argument that Bayesian probability is the fundamental method of correct reasoning, and that logical reasoning is just the limit as p=0 or 1 (although that never seems to be applied at the meta-level: what is the probability that Bayes' Theorem is true?), but today I want to consider the practice.
Casinos, lotteries, and quantum mechanics: no problem. The information required for deterministic measurement is simply not available, by adversarial design in the first two cases, and by we know not what in the third. Insurance: by definition, this only works when it's impossible to predict the catastrophes insured against. No-one will offer insurance against a risk that is certain to happen, and no-one will buy it for a risk that cannot. Randomised controlled trials are the gold standard of medical testing; but over on OB, Robin Hanson points out from time to time that the marginal dollar of medical spending has little effectiveness. And we don't actually know how a lot of treatments work. Quality control: test a random sample from your production run and judge the whole batch from the results. Fine -- it may be too expensive to test every widget, or impossible if the test is destructive. But wherever someone is doing statistical quality control of how accurately jam jars are being filled to the weight stated on the label, someone else will be thinking about how to weigh every single one, and how to make the filling process more accurate. (And someone else will be trying to get the labelling regulations amended to let you sell the occasional 15-ounce pound of jam.)
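The jam-jar comparison can be put in numbers. This is a hypothetical simulation with invented figures (jar count, nominal weight, spread, sample size), not a claim about any real production line: sampling estimates what a full census would simply measure.

```python
import random
import statistics

random.seed(0)

# A hypothetical filling line: 10,000 jars, nominal 16 oz, with some
# normally distributed spread. Statistical QC weighs a sample of 50 and
# infers; weighing every jar measures the same quantities exactly.
jars = [random.gauss(16.0, 0.3) for _ in range(10_000)]

sample = random.sample(jars, 50)
sample_mean = statistics.mean(sample)                 # the inferred answer

census_mean = statistics.mean(jars)                   # weigh them all
underweight = sum(w < 15.5 for w in jars) / len(jars) # the short jars

print(round(sample_mean, 2), round(census_mean, 2), round(underweight, 3))
```

The sample gives the mean to within a fraction of an ounce at a fiftieth of the weighing effort; the census gives it exactly, and also finds every individual underweight jar rather than an estimated proportion of them.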
But when you can make real measurements, that's the way to go. Here is a technical illustration.
Information cascades in scientific practice
Here's an interesting recent paper in the British Medical Journal: "How citation distortions create unfounded authority: analysis of a citation network". (I don't know if this is freely accessible, but the abstract should be.)
From the paper:
"Objective To understand belief in a specific scientific claim by studying the pattern of citations among papers stating it."
"Conclusion Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation."
It also includes a list of specific ways in which citations were found to amplify or invent evidence.
Causality does not imply correlation
It is a commonplace that correlation does not imply causality, however eyebrow-wagglingly suggestive it may be of causal hypotheses. It is less commonly noted that causality does not imply correlation either. It is quite possible for two variables to have zero correlation, and yet for one of them to be completely determined by the other.
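The standard example is a symmetric nonlinear dependence. In this sketch, y is a deterministic function of x (its square), yet the Pearson correlation between them is essentially zero, because correlation only detects linear relationships:

```python
import random

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(100_000)]
y = [v * v for v in x]   # y is completely determined by x

# Pearson correlation, computed directly: cov(x, y) / (sd(x) * sd(y)).
n = len(x)
mx = sum(x) / n
my = sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
r = cov / (sx * sy)
print(round(r, 3))   # very close to 0
```

Knowing x tells you y exactly, but no linear predictor of y from x does better than the constant mean, which is what zero correlation means.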
Fourth London Rationalist Meeting?
It's been the first Sunday of the month so far, but I haven't seen any announcement for this month yet. There was a discussion, but no conclusion. Is anything happening?
ETA: This would have appeared a day and a half ago, but I did not notice that it had only been stored as a draft and not published. Because I was logged in, there was no visible sign that I was the only person who could see it. Feature request for this site: add a visual indication that a post is only a draft, e.g. a "Publish" link, or the words "Unpublished draft" somewhere.
Without models
Followup to: What is control theory?
I mentioned in my post testing the water on this subject that control systems are not intuitive until one has learnt to understand them. The point I am going to talk about is one of those non-intuitive features of the subject. It is (a) basic to the very idea of a control system, and (b) something that almost everyone gets wrong when they first encounter control systems.
I'm going to address just this one point, not in order to ignore the rest, but because the discussion arising from my last post has shown that this is presently the most important thing.
There is a great temptation to think that to control a variable -- that is, to keep it at a desired value in spite of disturbing influences -- the controller must contain a model of the process to be controlled and use it to calculate what actions will have the desired effect. In addition, it must measure the disturbances, or better still predict them in advance, work out what effect they will have, and take all of that into account in deciding its actions.
In terms more familiar here, the temptation is to think that to bring about desired effects in the world, one must have a model of the relevant parts of the world and predict what actions will produce the desired results.
However, this is absolutely wrong. This is not a minor mistake or a small misunderstanding; it is the pons asinorum of the subject.
Note the word "must". It is not disputed that one can use models and predictions, only that one must, that the task inherently requires it.
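The point can be demonstrated in a few lines. This is a minimal sketch with invented numbers: a bare integral controller that knows only the current error and a gain. It contains no model of the plant, and it never measures the disturbance, yet the controlled variable ends up at the reference anyway:

```python
# The controller's whole repertoire: perceive, compare, integrate the error.
def control(steps=2000, gain=0.1):
    reference = 5.0     # desired value of the controlled variable
    output = 0.0        # the controller's action
    disturbance = 0.0
    perception = 0.0
    for t in range(steps):
        if t == 1000:
            disturbance = -3.0      # an unannounced, unmeasured push
        # The environment (opaque to the controller): here the controlled
        # variable happens to be output plus disturbance, but the
        # controller nowhere relies on knowing that.
        perception = output + disturbance
        error = reference - perception
        output += gain * error      # integrate the error; nothing else
    return perception

print(round(control(), 2))  # → 5.0: the variable sits at the reference
```

When the disturbance arrives, the error grows, the integrator winds the output up to cancel it, and the error shrinks again -- all without the controller representing, predicting, or even noticing the disturbance as such.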
What is control theory, and why do you need to know about it?
This is long, but it is the shortest I could cut the material and still have a complete thought.
1. Alien Space Bats have abducted you.
In the spirit of this posting, I shall describe a magical power that some devices have. They have an intention, and certain means available to achieve that intention. They succeed in doing so, despite knowing almost nothing about the world outside. If you push on them, they push back. Their magic is not invincible: if you push hard enough, you may overwhelm them. But within their limits, they will push back against anything that would deflect them from their goal. And yet, they are not even aware that anything is opposing them. Nor do they act passively, like a nail holding something down, but instead they draw upon energy sources to actively apply whatever force is required. They do not know you are there, but they will struggle against you with all of their strength, precisely countering whatever you do. It seems that they have a sliver of that Ultimate Power of shaping reality, despite their almost complete ignorance of that reality. Just a sliver, not a whole beam, for their goals are generally simple and limited ones. But they pursue them relentlessly, and they absolutely will not stop until they are dead.
You look inside one of these devices to see how it works, and imagine yourself doing the same task...
Alien Space Bats have abducted you. You find yourself in a sealed cell, featureless but for two devices on the wall. One seems to be some sort of meter with an unbreakable cover, the needle of which wanders over a scale marked off in units, but without any indication of what, if anything, it is measuring. There is a red blob at one point on the scale. The other device is a knob next to the meter, that you can turn. If you twiddle the knob at random, it seems to have some effect on the needle, but there is no fixed relationship. As you play with it, you realise that you very much want the needle to point to the red dot. Nothing else matters to you. Probably the ASBs' doing. But you do not know what moves the needle, and you do not know what turning the knob actually does. You know nothing of what lies outside the cell. There is only the needle, the red dot, and the knob. To make matters worse, the red dot also jumps along the scale from time to time, in no particular pattern, and nothing you do seems to have any effect on it. You don't know why, only that wherever it moves, you must keep the needle aligned with it.
Solve this problem.
That is what it is like, to be one of these magical devices. They are actually commonplace: you can find them everywhere.
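The prisoner's situation can be simulated directly. In this hypothetical sketch (all numbers invented), the needle depends on the knob through a scaling the prisoner never sees, an unseen outside influence drifts about, and the red dot jumps at intervals; the prisoner's entire strategy is "nudge the knob in proportion to the gap":

```python
import random

random.seed(1)

def prisoner(steps=5000, gain=0.05):
    knob = 0.0
    red_dot = 0.0
    drift = 0.0
    hidden_scale = 2.7          # never visible from inside the cell
    worst_settled_gap = 0.0
    for t in range(steps):
        if t % 1000 == 0:
            red_dot = random.uniform(-10, 10)   # the dot jumps
        drift += random.uniform(-0.01, 0.01)    # unseen outside influence
        needle = hidden_scale * knob + drift    # unknown to the prisoner
        gap = red_dot - needle
        knob += gain * gap                      # the whole strategy
        if t % 1000 > 900:                      # check after settling
            worst_settled_gap = max(worst_settled_gap, abs(gap))
    return worst_settled_gap

print(prisoner() < 0.5)  # → True: the needle stays pinned to the dot
```

The prisoner never learns what the knob does or what moves the needle; the gap itself supplies everything needed, which is the point of the thought experiment.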