
Some of the comments in this blog have touched on the question of why we ought to seek truth.  (Thankfully not many have questioned what truth is.)  Our shaping motivation for configuring our thoughts to rationality, which determines whether a given configuration is "good" or "bad", comes from whyever we wanted to find truth in the first place.

It is written:  "The first virtue is curiosity."  Curiosity is one reason to seek truth, and it may not be the only one, but it has a special and admirable purity.  If your motive is curiosity, you will assign priority to questions according to how the questions, themselves, tickle your personal aesthetic sense.  A trickier challenge, with a greater probability of failure, may be worth more effort than a simpler one, just because it is more fun.

Some people, I suspect, may object that curiosity is an emotion and is therefore "not rational". I label an emotion as "not rational" if it rests on mistaken beliefs, or rather, on irrational epistemic conduct: "If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm." Conversely, then, an emotion which is evoked by correct beliefs or epistemically rational thinking is a "rational emotion"; and this has the advantage of letting us regard calm as an emotional state, rather than a privileged default. When people think of "emotion" and "rationality" as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren't always true, and perceptual judgments aren't always false; so it is very important to distinguish that dichotomy from "rationality". Both systems can serve the goal of truth, or defeat it, according to how they are used.

Besides sheer emotional curiosity, what other motives are there for desiring truth? Well, you might want to accomplish some specific real-world goal, like building an airplane, and therefore you need to know some specific truth about aerodynamics. Or more mundanely, you want chocolate milk, and therefore you want to know whether the local grocery has chocolate milk, so you can choose whether to walk there or somewhere else. If this is the reason you want truth, then the priority you assign to your questions will reflect the expected utility of their information—how much the possible answers influence your choices, how much your choices matter, and how much you expect to find an answer that changes your choice from its default.
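The priority rule in that paragraph (weight questions by the expected utility of their information) can be sketched as a small value-of-information calculation. All the numbers here, the probabilities, utilities, and walking costs for the chocolate-milk example, are invented for illustration.

```python
# Hypothetical value-of-information sketch for the chocolate-milk example.
# Every number below is made up for illustration.

p_has_milk = 0.7   # prior probability the local grocery has chocolate milk
u_milk = 10.0      # utility of getting chocolate milk
cost_local = 1.0   # cost of walking to the local grocery
cost_far = 3.0     # cost of walking to a farther store that surely has it

# Without checking, you commit to whichever store has higher expected utility.
eu_local = p_has_milk * u_milk - cost_local   # 0.7 * 10 - 1 = 6.0
eu_far = u_milk - cost_far                    # 10 - 3 = 7.0
eu_default = max(eu_local, eu_far)            # 7.0: walk to the far store

# With the answer in hand (say, by phoning ahead) you pick the best store
# in each possible world.
eu_informed = p_has_milk * (u_milk - cost_local) + (1 - p_has_milk) * (u_milk - cost_far)

# The value of the information is how much knowing the answer improves
# your expected utility; here, knowing can change your choice of store.
value_of_information = eu_informed - eu_default
print(round(value_of_information, 2))  # 1.4
```

With these made-up numbers, checking first is worth 1.4 utility points: exactly the sense in which answers that can change your choice deserve higher priority.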

To seek truth merely for its instrumental value may seem impure—should we not desire the truth for its own sake?—but such investigations are extremely important because they create an outside criterion of verification: if your airplane drops out of the sky, or if you get to the store and find no chocolate milk, it's a hint that you did something wrong.  You get back feedback on which modes of thinking work, and which don't.  Pure curiosity is a wonderful thing, but it may not linger too long on verifying its answers, once the attractive mystery is gone.  Curiosity, as a human emotion, has been around since long before the ancient Greeks.  But what set humanity firmly on the path of Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world.  As far as sheer curiosity goes, spinning campfire tales of gods and heroes satisfied that desire just as well, and no one realized that anything was wrong with that.

Are there motives for seeking truth besides curiosity and pragmatism?  The third reason that I can think of is morality:  You believe that to seek the truth is noble and important and worthwhile.  Though such an ideal also attaches an intrinsic value to truth, it's a very different state of mind from curiosity.  Being curious about what's behind the curtain doesn't feel the same as believing that you have a moral duty to look there.  In the latter state of mind, you are a lot more likely to believe that someone else should look behind the curtain, too, or castigate them if they deliberately close their eyes.  For this reason, I would also label as "morality" the belief that truthseeking is pragmatically important to society, and therefore is incumbent as a duty upon all.  Your priorities, under this motivation, will be determined by your ideals about which truths are most important (not most useful or most intriguing); or your moral ideals about when, under what circumstances, the duty to seek truth is at its strongest.

I tend to be suspicious of morality as a motivation for rationality, not because I reject the moral ideal, but because it invites certain kinds of trouble.  It is too easy to acquire, as learned moral duties, modes of thinking that are dreadful missteps in the dance.  Consider Mr. Spock of Star Trek, a naive archetype of rationality.  Spock's emotional state is always set to "calm", even when wildly inappropriate.  He often gives many significant digits for probabilities that are grossly uncalibrated.  (E.g.:  "Captain, if you steer the Enterprise directly into that black hole, our probability of surviving is only 2.234%."  Yet nine times out of ten the Enterprise is not destroyed.  What kind of tragic fool gives four significant digits for a figure that is off by two orders of magnitude?)  Yet this popular image is how many people conceive of the duty to be "rational"—small wonder that they do not embrace it wholeheartedly.  To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom.  People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake.

And yet if we're going to improve our skills of rationality, go beyond the standards of performance set by hunter-gatherers, we'll need deliberate beliefs about how to think with propriety.  When we write new mental programs for ourselves, they start out in System 2, the deliberate system, and are only slowly—if ever—trained into the neural circuitry that underlies System 1.  So if there are certain kinds of thinking that we find we want to avoid—like, say, biases—it will end up represented, within System 2, as an injunction not to think that way; a professed duty of avoidance.

If we want the truth, we can most effectively obtain it by thinking in certain ways, rather than others; and these are the techniques of rationality.  Some of the techniques of rationality involve overcoming a certain class of obstacles, the biases...

61 comments

Yet nine times out of ten the Enterprise is not destroyed. What kind of tragic fool gives four significant digits for a figure that is off by two orders of magnitude?

One who doesn't understand the Million To One Chance principle that operates in fictional universes. If the Star Trek universe didn't follow the laws of fiction, the Enterprise would have been blown up long ago. ;)

See also: Straw Vulcan

Maybe in ninety-eight universes out of 100 it does blow up and we just see the one that's left; and he's actually giving an accurate number. :P

The TV show version of the anthropic principle: all the episodes where the Enterprise does blow up aren't made.

Raemon:

Except one.

In the "Star Trek: Judgement Rites" game there's a spot where Spock gives ridiculously precise odds, and Kirk comments that they seem "better than usual."  Spock then clarifies that he has begun factoring Kirk's history of prevailing when the odds are against him into the calculations.

And do keep in mind that the audience doesn't necessarily see all the times that low-odds plans don't work out.

Does this sentence contain a typo?

"If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, the Way opposes your calm."

RickJS:

Thanks, Eliezer!

“Are there motives for seeking truth besides curiosity and pragmatism?”

I can think of several that have showed up in my life. I’m offering these for consideration, but not claiming these are good or bad, pure or impure etc. Some will doubtless overlap somewhat with each other and the ones stated.

  1. As a weapon. Use it to win arguments (sometimes the point of an argument is to WIN, never mind learning the truth. I've got automatic competitiveness I need to keep on a short leash). Use it to win bar room bets. Acquire knowledge about the “buttons” people have, and use it to manipulate them. Use it to thwart opposition to my plans, however sleazy. (“What are we going to do tonight, Brain?” ... )
  2. As evidence that I deserve an A in school. Even if I never have a pragmatic use for the knowledge, there is (briefly) value in demonstrably having the knowledge.
  3. As culture. I don’t think I have ever found a practical use for the facts of history ( of science, of politics, or of art ), but they participated in shaping my whole world view. Out of that, I came out of retirement and dedicated myself to saving humanity. Go figure.
  4. As a contact, as in, “I know Nick Bostrom.” (OK, that’s a bit of a stretch, but it is partly informational.)
  5. As pleasure & procreation, as in, “Cain knew his wife.” ;-)

“To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake.” Yes. I say, “Morality is for agents that can’t figure out the probable consequences of their actions.” Which includes me, of course. However, whenever I can make a good estimate, I pretty much become a consequentialist.

Seeking knowledge has, for me, an indirect but huge value. I say: Humanity needs help to survive this century, needs a LOT of help. I think Friendly AI is our best shot at getting it. And we’re missing pieces of knowledge. There may be whole fields of knowledge that we’re missing and we don’t know what they are.

I would not recommend avoiding lines of research that might enable making terribly powerful weapons. We’ve already got that problem, there’s no avoiding it. But there’s no telling what investigations will produce bits of information that will trigger some human mind into a century-class breakthrough that we had no idea we needed.

The significant digit anecdote reminds me: why does the Dow Jones give their average with 2 decimal points?

I do have a couple of problems, though:

1) It is written: "The first virtue is curiosity." - Written by whom?
2) …curiosity is an emotion… - says who?
3) To seek truth merely for its instrumental value may seem impure… – Why? To whom?
4) If we want the truth, we can most effectively obtain it by thinking in certain ways – and if you think the way I tell you to think, you’ll wind up with my truth

It is written: "The first virtue is curiosity." - Written by whom?

By Eliezer.

From TVtropes:

Star Trek The Original Series episode "This Side of Paradise". Mr. Spock has been affected by spores that release his emotional side. He and his love interest Leila Kalomi are looking at clouds.

Spock: That one looks like a dragon. You see the tail and the dorsal spines?

Leila: I've never seen a dragon.

Spock: I have. On Berengaria 7. But I've never stopped to look at clouds before. Or rainbows. I can tell you exactly why one appears in the sky, but considering its beauty has always been out of the question.

Sigh.

I know! Is the world not more beautiful when one can understand how it works?

Lets not forget, arguably the most important reason.

Because it makes us feel good.

We can feel superior to others, because we can do something that few other people can. We can collect instances where our approach is beneficial and use that to validate our self worth. And we can form a community that validates our strengths and ignores our weaknesses. All perfectly reasonable motivations (provided our satisfaction is a reasonable goal).

In my own field (Computer Vision), there are those who pursue it rationally (with rigorous mathematical analysis) and those who pursue it heuristically (creating a variety of systems and testing them on small samples). These approaches seem to mirror the determined search for truth and the pragmatic "go with what feels like it works" approaches. Without rigorously analysing them (although this may be possible) both approaches seem to deliver benefit with no clear winner in terms of delivering approaches that are practically applied or used as the basis for further work. I think it is interesting to apply this meta analysis to reason, i.e. can we scientifically determine whether approaching problems reasonably conveys advantage? Is there an optimal balance?

By "most important reason" do you mean "most compelling justification" or "predominant cause"?

I would suggest both, and I would add that I don't think this inherently diminishes the value of pursuing truth. I am increasingly of the belief that in order to be content it is necessary to pick one's community and embrace its values. What I love about this community is its willingness to question itself as much as the views of others. I think it's useful to acknowledge what we really enjoy and be hesitant of explanations that attribute objective value to enjoyable activities. Doing so risks erasing self-doubt and can lead to the adoption of strong moral values that distort our lives to such an extent that they ultimately make us miserable.

Nisan:

there are those who pursue it rationally (with rigorous mathematical analysis) and those who pursue it heuristically (creating a variety of systems and testing them on small samples). [...] both approaches seem to deliver benefit with no clear winner

"Rationality" is what I would call the meta-analysis which concludes that both approaches are equally valid in this field.

[anonymous]:

"For this reason, I would also label as "morality" the belief that truthseeking is pragmatically important to society, and therefore is incumbent as a duty upon all."

Morality doesn't need to have anything to do with society or duty. Consider the case of a rational ethical egoist, to whom acting in one's self-interest and for one's own values is virtuous.

Morality doesn't need to have anything to do with society or duty. Consider the case of a rational ethical egoist, to whom acting in one's self-interest and for one's own values is virtuous.

If that person is a human, and thinks that ethical egoism does not have anything to do with society or duty, then they are mistaken.

[anonymous]:

Why?

I'd guess because humans often contain concepts of duty and the like, and have experiences vastly contingent on social / societal contexts.

Maintaining interpersonal relationships is vital to the human condition. As Aristotle put it, "The solitary life is perhaps suitable for a god or a beast, but not for a man". Friendships are a necessary part of flourishing for humans, and aside from that we are almost always in a context where our success depends upon our interactions with others.

There is more discussion of this post here as part of the Rerunning the Sequences series.

I'll be honest, I have a serious problem with hypocrites, and so I warn everyone I know if they start heading down that path. In your article, you say that morality is perhaps the most suspect method of rationality. Yet, you yourself, by putting up these articles and arguing that everyone should use rational thought, seem to have a moral motivation for rationality. I am not saying that this is your only motivation, but it seems to be the motivation behind these posts. However, I do appreciate that you respect morality by mentioning how important it is in pursuing paths that will not result in horrible consequences. I think that maybe you should allow yourself to admit that morality is a good motivator if used with other good types of motivations to seek truth.

Here's an interesting take on the "morality" side: It may be morally incumbent on some to look behind the curtain, and not for others. Since knowing about biases can hurt people, it may well be that those who are "fit" to look behind the curtain are in fact required to be the guardians of said curtain, forbidding anyone without the proper light and knowledge from looking behind it, but acting upon the knowledge gained for the benefit of society.

..... Hence, the Conspiracy.

I am trying to win an argument, and I am having trouble defeating the following claim:

It can, under certain scenarios, be instrumental (in the sense of achieving values) to believe in something which is false -- usually by virtue of a placebo effect. For example: believing you are more likely to get a job offer than you really are, so you are confident at the job interview.

The counterargument I want to make, in my head, is that if you have the ability to deceive yourself to that extent -- to make yourself believe something that is false -- then you have the ability to believe that you won't get the job interview, but pretend that you think you will. I don't feel like that's a very solid or reassuring argument, though.

TimS:

I think the best response to the argument for instrumentally useful false beliefs is to think a little about the causal mechanism. Surely it is not the case that Omega reads your minds, sees your false confidence, and orders you hired.

As you noted, a more plausible mechanism is that the false confidence causes changes in affect (i.e. appearing confident) that are beneficial for the task. Or perhaps false over-confidence cancels false under-confidence that would have caused anxiety that would be detrimental for the task.

Once the causal chain is examined, the next thing to ask is whether the beneficial intermediate effects can be caused by something other than false belief. If so, you have answered the claim you are responding to. If not, you need to examine why you don't believe that to be possible.

You should also examine the costs of each method of achieving the intermediate effects. Even if there are other ways available, maybe self-deception is the easiest, and the costs of that particular incorrect belief are small.

royf:

"If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm."

This quote conflates "true beliefs" and what we may call "correct beliefs". True beliefs are ones which assign high probability to the truth, i.e. the actual state of things. Correct beliefs are ones which follow from an agent's priors and observations. The former are objective, the latter subjective but not irrational. If the iron has been cool the last 107 times it has approached your face, but hot this 108th time, your belief that it is cool is correct but false (perhaps better terms are needed).

Also, a belief is not binary. You may be 99.8% sure that the iron is cool and still rationally fear it. A hot iron on your face is far more costly than a needless avoidance.
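The cost asymmetry in that last point can be made explicit with a toy expected-cost comparison; the probability and cost numbers below are invented for illustration.

```python
# Toy expected-cost sketch of why a small chance of "hot" can justify fear.
# All numbers are invented for illustration.

p_hot = 0.002        # you are 99.8% confident the iron is cool
cost_burn = 1000.0   # cost of a hot iron touching your face
cost_avoid = 1.0     # cost of a needless flinch

expected_cost_if_ignored = p_hot * cost_burn  # ≈ 2.0
expected_cost_if_avoided = cost_avoid         # 1.0

# Flinching is the rational choice even though the iron is almost surely cool.
print(expected_cost_if_avoided < expected_cost_if_ignored)  # True
```

The decision flips only when the probability of "hot" drops below the cost ratio (here, 1 in 1000), which is exactly why near-certainty in "cool" need not oppose fear.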

royf:

There's an interesting duality between morality as "the belief that truthseeking is pragmatically important to society" and morality as the result of social truthseeking, which is closer to the usual sense, or rather what the usual sense would ideally be. I'd like to see this explored further if anyone has a link in mind.

The LessWrong FAQ indicated that there is value in replying to old content, so I'm posting anyway. Context might be in order, so here's what we are talking about:

I tend to be suspicious of morality as a motivation for rationality

You and I had a similar take on this bit of Yudkowsky's post. Maybe you would call my stance "truthseeking as the result of morality" instead of your "morality as the result of social truthseeking".

The problem Yudkowsky is describing sounds like it comes from entangling the "logical" archetype with "morality". This means any behavior which differs from this archetype becomes "immoral", regardless of whether it is actually Bayesian reasoning or not. Personally, I would phrase this as "declaring rationality to be (a) moral value". This specifically excludes cases where people place intrinsic value on some specific result, and then place instrumental moral value on rationality, as a tool to achieve the desired results. This is pretty much what effective altruism is doing, after all.

LessWrong FAQ

Hmm, couldn't find a link directly on this site. Figured someone else might want it too (although a google search did kind of solve it instantly).

I'm not convinced that this post actually says anything. If seeking the truth is useful for any specific reason, then people who see some benefit from it will do so and if it isn't useful then they won't. Actually writing this out has made me think both this post and my comment haven't really said much, but I think that's because this discussion is too abstract to have any real use/meaning. Ideas which are true/work will work, ideas that aren't won't, and that's all that needs to be said, never mind this business about rationality and truth and curiosity.

that's all that needs to be said

Would that this were true.

Indeed, if that were all there was to it, nothing would need to be said at all, as that's a tautology. But people manage to fail at noticing when things do / don't work anyway, and false ideas stick around a very long time.

I just find it very unlikely that the specifics of how this post is constructed have much of an effect on correcting this issue.

Ah, but the seeker needs to find out if the answer - the truth - is beneficial. You can't make a decision without knowing the answer. That's just guessing.

My friend argues that believing in an afterlife (i.e. religion) is beneficial for some people because it gives them a (patently false!) sense of "security". So why tell them it's wrong to believe such a thing?

My answer is a) the fact that there's no afterlife is the truth, as far as humans know (i.e. as far as the evidence - or lack of evidence - shows); and b) it's wrong to believe in such a falsehood - in the sense that most people with such a belief tend to be either less ethical/moral (because they'll fix up the imbalance 'later'), or irrationally over-moral or hyper-ethical because they don't want to risk their slot in eternity's gravy train. Either way, they act irrationally and abnormally, and for the wrong reasons!

I can't think of much in life that could be worse than that. What a horrible life!

[anonymous]:

It is instructive to review this essay after reading the sequence regarding metaethics, morality, and planning algorithms. It gives you a deeper insight into how "morality as a motivation" might have come about and what its flaws are.

"our probability of surviving is only 2.234%" Yet nine times out of ten the Enterprise is not destroyed.

Perhaps this is a minor nitpick or technicality, but that's probably not the best example, because keeping the same probability estimate actually makes sense in this instance. To alter it would be a form of survivorship bias. This is because there is no way he could have observed the opposite during the previous 10 attempts, since he would no longer be alive to have those memories if he had.
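That survivorship argument can be illustrated with a toy simulation (the setup and numbers are invented): conditioning on survival makes the remembered survival rate 100%, no matter how small the true probability is.

```python
import random

random.seed(42)

# Toy illustration of survivorship bias: an observer only has memories of
# the encounters they survived, so the survival rate among remembered
# encounters is always 100%, however small the true probability is.
p_survive = 0.02234   # Spock's stated per-encounter survival probability
n_trials = 100_000

outcomes = [random.random() < p_survive for _ in range(n_trials)]
true_rate = sum(outcomes) / n_trials                  # close to 0.022

remembered = [o for o in outcomes if o]               # only survived encounters persist
remembered_rate = sum(remembered) / len(remembered)
print(remembered_rate)  # 1.0
```

So an estimator who updates only on remembered encounters would drift toward certainty of survival, which is exactly the bias the comment says Spock is right to resist.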

"For this reason, I would also label as "morality" the belief that truthseeking is pragmatically important to society..."

This seems like a naive understanding of what morality is. It seems like you are referring to a certain subset of ethics, in this case utilitarianism (do what promotes the greatest good among the greatest number). But this is just one part of a class of normative ethical theories. The class to which I'm referring is consequentialism, where essentially the end justifies the means. I'd rather not get off topic here, so I'll simply state that a morality-driven pursuit of truth does not necessarily mean that the person is motivated by the "greater good".

Also, Spock's calculation is off by one order of magnitude, not two. He predicts, roughly, a 98% chance of destruction yet you say in practice, the Enterprise is destroyed 10% of the time. That's just about one order of magnitude off.

Remember that that's an 11-year-old post you're replying to.

Hey, eleven-year-old posts are just posts that lack life experience.

I think you're misinterpreting Yudkowsky. He's not saying that all ethics is pragmatic. He's saying that pragmatics is ethics. Previously in the paragraph, he listed other, non-pragmatic ethical reasons to seek truth.

As for the orders of magnitude, it's log(.9) - log(.02234) = 1.6 orders of magnitude. That's closer to 2 than to 1.
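The arithmetic in that reply can be checked directly, using the two survival probabilities from the thread (Spock's stated 2.234% versus the observed nine-in-ten):

```python
import math

p_spock = 0.02234   # Spock's stated survival probability
p_observed = 0.9    # "nine times out of ten the Enterprise is not destroyed"

# How many orders of magnitude apart the two survival probabilities are.
orders_off = math.log10(p_observed) - math.log10(p_spock)
print(round(orders_off, 1))  # 1.6
```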

"Curiosity, as a human emotion, has been around since long before the ancient Greeks."

Is that a reference to Pandora's Box or am I off base?

Elo:

Yes it is.

I am guessing that the link on "what truth is" is meant to be http://yudkowsky.net/rational/the-simple-truth

Thanks, fixed as well!

Please restore apostrophes...

"our probability of surviving" - probably extrapolated from other similar objects going through black holes. The Enterprise, because of the laws of fiction, defies the odds, but it may only mean that some other ships get destroyed even somewhat more frequently and the Enterprise has "five points... for sheer dumb luck!"

I think it's great that the apostrophes were left out. Apart from possessive apostrophes, which I think should be used, apostrophes are an extra effort (especially when texting) that adds no extra meaning or clarification.

I mean, there are minimal pairs (mostly in cases where possessive apostrophes are for some reason not used, like its - it's, who's - whose). But overall it just helps readability (speaking as a non-native).

I am not sure, but there seem to be a couple of apostrophes missing in the sentence

[...] if were going to improve our skills of rationality, go beyond the standards of performance set by hunter-gatherers, well need deliberate beliefs [...]

Truth is important because it is instrumental to all areas of life. By increasing our overall epistemic rationality, we will understand the world better, and so be able to act (or withhold action) in ways that increase our quality of life. Without epistemic rationality, instrumental rationality may be incoherent and misdirected, seeking goals that are counterproductive to the agent's and/or common wellbeing. For example, a person might highly value outcome X, and practice instrumental rationality to achieve that outcome. However, if they had a better understanding of epistemic rationality, they might no longer value outcome X and instead more highly value different outcomes. Epistemic rationality allows us to "optimize" our values.

Optimizing our values and behaviour increases common wellbeing, therefore I think truth seeking and epistemic rationality is a moral imperative for everyone. I believe that the desire for increased wellbeing is actually the most important reason for truth seeking, and since it affects everyone, it is a moral/civil duty.

Regarding the Spock probability reference, I've always imagined that TV shows and movies either take place in the parallel universe where very specific events happen to take place (e.g. the universe where the 'bad guys' miss the 'good guys' with all of their bullets despite being trained soldiers), or in the case of the Enterprise, the camera follows the adventures of the one ship that is super lucky. Perhaps the probability of survival really is 2.234 %, the Enterprise is just the 1 in 1,000 ship that keeps surviving (because who wants the camera to follow those other ships?).

Most apostrophe removals didn't cause any problems, but the "were" in the paragraph before the last one had me confused for a split second.

For every time I am curious about "how things are", I would then also like to be curious about "what to do". (Curious pragmatism)

What about moral duty to be curious?

Thanks Eliezer,
I am surprised that you only have three motivations for seeking out truth in your conclusion. Moral duty, pragmatism and curiosity. Even though you talk about manipulating the world while talking about curiosity.
I would separate curiosity, where the benefit is enjoyment at understanding, and power seeking, which allows shaping the world more efficiently.
Certainly in the scientists I know, those motivations are often mixed. The search for exotic particles in physics is closer to curiosity and the "pleasure of finding things out".
The quest of seeking the truth in applied physics to build a nuclear bomb has more to do with power seeking.

Two very different motivations, no?

This might seem obvious (whether right or wrong) to some of you, but broadly I think curiosity must be a significant factor that came along with human (likely back to early hominid) evolution. Animals are curious, sure, but don't really have a drive for knowledge outside of what is immediately practical. Humans might be the only species with even the brainpower and self-awareness to seek facts for the pleasure of knowledge, just as we have progressed into other subjective enjoyments like food and music. 

I tend to be suspicious of morality as a motivation for rationality, not because I reject the moral ideal, but because it invites certain kinds of trouble. It is too easy to acquire, as learned moral duties, modes of thinking that are dreadful missteps in the dance.

 

Yes. Morality changes every few years. What we were taught as children does not hold that much value in this day and age. Instead of morality being the base of rationality, we should keep rationality as the base of morality. But I doubt that too.

Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world—truth as an instrument

So, the motive should be to gain the ability to manipulate the world?

I think "should" isn't quite the right frame here. You can have whatever motivations you want. But it's an if-then fact about the world that science was helpful for people seeking to affect the world.

(Also note "manipulate" sometimes has negative connotations, which isn't really what's meant here)

Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world


Science is our tool for manipulating the world; science is an instrument of truth. One cannot manipulate the meaning of reality merely by defining words such as 'rational', what it is and what it isn't. Science stands in as a substitute for truth, and rationality stands in as a substitute for truth also; almost self-evidently, the definition of rationality gets substituted for science, and that becomes our working definition of rationality going forward. Take care not to define science as rationality: science is a tool, as is rationality.

In rationality, as in science, 'curiosity, pragmatism, and quasi-moral injunctions' are injected into research questions and colour our understanding of both the world and truth.

To use science as a tool is to pursue the truth, and to use rationality is to pursue the truth. We must apply a scientific approach and method to our rationality.