Followup to: Making Beliefs Pay Rent, Lost Purposes

Thus runs the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound?
One says, "Yes it does, for it makes vibrations in the air."
Another says, "No it does not, for there is no auditory processing in any brain."

So begins a long, acrimonious battle...

The conventional resolution is that the two are fighting over the definition of a word, and such labels do not have intrinsic definitions, only agreed-upon definitions.

Yet if you need to know about the forest for any pragmatic reason - if there is anything you plan on doing with the knowledge - then the answer is no longer a matter of mutual agreement.  If, for example, you need to know whether landmines will be set off by the tree falling, then you cannot make the landmines explode or unexplode by any possible amount of agreement about the meaning of the word "sound".  You can get the whole world to agree, one way or the other, and it still won't make a difference.

You find yourself in an unheard-falling-tree dilemma only when you become curious about a question with no pragmatic use, and no predictive consequences.  Which suggests that you may be playing loose with your purposes.

So does this mean that truth reduces to usefulness?  But this, itself, would be a purpose-loss, a subgoal stomp, a mistaking of the indicator for the indicated.  Usefulness for prediction, and demonstrated powers of manipulation, is one of the best indicators of truth.  This does not mean that usefulness is truth.  You might as well say that the act of driving to the supermarket is eating chocolate.

There is, nonetheless, a deep similarity between the pragmatic and the epistemic arts of rationality, in the matter of keeping your eye on the ball.

In pragmatic rationality, keeping your eye on the ball means holding to your purpose:  Being aware of how each act leads to its consequence, and not losing sight of utilities in leaky generalizations about expected utilities.  If you hold firmly in your mind the image of a drained swamp, you will be less likely to get lost in fighting alligators.

In epistemic rationality, keeping your eye on the ball means holding to your question:  Being aware of what each indicator says about its indicatee, and not losing sight of the original question in fights over indicators.  If you want to know whether landmines will detonate, you will not get lost in fighting over the meaning of the word "sound".

Both cases deal with leaky generalizations about conditional probabilities:  P(Y=y|X=x) is nearly but not quite 1.

In the case of pragmatic rationality: driving to the supermarket may almost always get you chocolate, but on some occasions it will not.  If you forget your final purpose and think that x=y then you will not be able to deal with cases where the supermarket is out of chocolate.

In the case of epistemic rationality: seeing a "Chocolate for sale" sign in the supermarket may almost always indicate that chocolate is available, but on some occasions it will not.  If you forget your original question and think that x=y then you will go on arguing "But the sign is up!" even when someone calls out to you, "Hey, they don't have any chocolate today!"
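The leaky-generalization point can be sketched as a toy simulation (the 95% figure and the scenario are hypothetical, chosen only to illustrate P(Y=y|X=x) being nearly but not quite 1):

```python
import random

random.seed(0)

# Hypothetical: the sign is a leaky indicator of the thing it indicates.
P_CHOCOLATE_GIVEN_SIGN = 0.95

def sign_is_up():
    # The indicator we actually observe.
    return True

def chocolate_available():
    # The indicated fact: it almost always, but not quite, follows the sign.
    return random.random() < P_CHOCOLATE_GIVEN_SIGN

trials = 10_000
mismatches = sum(
    1 for _ in range(trials)
    if sign_is_up() and not chocolate_available()
)

# Treating "sign up" as identical to "chocolate available" (thinking x=y)
# fails on roughly 5% of visits.
print(f"indicator wrong on {mismatches / trials:.1%} of trials")
```

The same sketch covers the pragmatic case: substitute "drove to the supermarket" for the indicator and "got chocolate" for the outcome, and the residual mismatch rate is exactly the leak in the generalization.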

This is a deep connection between the human arts of pragmatic and epistemic rationality...

...which does not mean they are the same thing.

8 comments

Eliezer- I like these ideas. I’m thinking a possible distinction between a seeker (one attempting to overcome bias) and a dogmatist (one attempting to defend bias) would be that a seeker takes a pragmatic rationality and looks for exceptions (thereby continuing to look for the deeper epistemic rationality) whereas a dogmatist takes a pragmatic rationality and turns it into an epistemic rationality by ignoring or redefining exceptions. Am I understanding?

'Scuse me, but isn't this trivial? Both pragmatic and epistemic instances depend on available information. If you drive to Carrefour, you need some information to tell you they're out of chocolate. And to see the 'Out of chocolate' sign, you need to have driven to Carrefour. So, dear friends, both instances depend on (a) purpose (b) information relative to the achievability of the purpose. Unless of course your purpose is 'enculage des mouches' (roughly, hair-splitting), in which case, don't go to Carrefour. Go to Tesco. PS Truth does not reduce to usefulness. Truth is a relative concept dependent on usefulness. I asked Schrodinger's cat to contribute but she was busy with her Whiskas.

Truth is a relative concept dependent on usefulness.

You're both wrong! Truth is an objective concept totally unrelated in any way to usefulness.

Is it not useful to have an accurate model of reality? Isn't that what truth is: something that helps you refine your model of reality?

Hi Richard, any relation to the punch card guy? IBM paid my salary for 35 years. Someone in one of these threads got squashed flatter than a pancake for supposedly confusing maps and territories, so let's be careful with models of reality. When I say 'dependent on usefulness', I just meant that the selectivity and level of detail of the map would depend on what you want to use it for. Not much point in going to the doctor and telling him the 'truth' about my finger, which would involve energy fields and dark matter, if what I want from him is a sticking plaster. Lovely article here on what the Romans thought was important in a map, and why it doesn't look like one that we'd find useful, or 'truthful', today. http://news.bbc.co.uk/2/hi/europe/7113810.stm

Interesting comments. I recently read Lovejoy's "Thirteen Pragmatisms", which made me think that a pragmatic view is, by necessity, purpose-driven. The question is, does that argument necessarily lead to a relativist view of the world or can it be in line with a realist perspective? This is a crucial issue for knowledge representation, and ontology-focused philosophers such as Barry Smith have strong opinions about it: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1839444

thoughts?

Truth and usefulness? I don't think that the two can be equated. Sometimes in life a lie is more useful than the truth (a nice tool against self-incrimination), so how could a useful lie be in any way truthful?

Does the lie become a truth at some conceptual level?

I disagree with the idea that usefulness is not truth. More specifically, I disagree with the idea that there's anything else you could really call truth.

Anything that you can actually test can be useful, so if it isn't useful, you can't possibly figure it out. If there is truth out there that isn't useful, we'll never know what it is. You may understand every useful thing about chocolate, but any other aspect of it you have absolutely no understanding of whatsoever.

In short, by your definition of truth, you are always wrong.