That which can be destroyed by the truth should *not* necessarily be
I've been turning some ideas over in my head, and I want to throw a few of them, half-formed, into the open for discussion here.
I want to draw attention to a particular class of decisions that sound much like beliefs.
| Belief | Decision |
| --- | --- |
| There is no personal god that answers prayers. | I should badger my friend about atheism. |
| Cryonics is a rational course of action. | To convince others about cryonics, I should start by explaining that if we exist in the future at all, then we can expect it to be nicer than the present on account of benevolent super-intelligences. |
| There is an objective reality. | Postmodernists should be ridiculed and ignored. |
| 1+1=2 | If I encounter a person about to jump unless he is told "1+1=3", I should not acquiesce. |
I've thrown ideas from a few different bags into the table above, and I've perhaps chosen unnecessarily inflammatory examples. There are many arguments to be had about these examples, but the point I want to make is the way in which questions about the best course of action can sound very much like questions about truth. This is dangerous, because the way in which we choose amongst decisions is radically different from the way in which we choose amongst beliefs. For a start, evaluating decisions always involves evaluating a utility function, whereas evaluating beliefs never does (unless the utility function is explicitly part of the question). By appropriate changes to one's utility function, the optimal decision in any given situation can be modified arbitrarily whilst simultaneously leaving all probability assignments to all statements fixed. This should make you immediately suspicious if you ever make a decision without consulting your utility function. There is no simple mapping from beliefs to decisions.
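To make this concrete, here is a minimal sketch of two agents who share identical probability assignments but hold different utility functions; the optimal decision flips even though no belief changes. The scenario is the badger-your-friend example from the table above, and all numbers are invented placeholders.

```python
# Two agents share identical beliefs (outcome probabilities) but value the
# outcomes differently. All numbers are illustrative placeholders.

# Shared beliefs: P(outcome | action), identical for both agents.
actions = {
    "badger_friend": {"friend_converts": 0.05,
                      "friend_alienated": 0.60,
                      "nothing_changes": 0.35},
    "stay_quiet":    {"friend_converts": 0.00,
                      "friend_alienated": 0.00,
                      "nothing_changes": 1.00},
}

def expected_utility(utility, outcome_probs):
    """Average the utility of each outcome, weighted by its probability."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs.items())

# Agent A values spreading true beliefs far above the friendship.
utility_a = {"friend_converts": 200, "friend_alienated": -10, "nothing_changes": 0}
# Agent B values the friendship far above spreading true beliefs.
utility_b = {"friend_converts": 10, "friend_alienated": -100, "nothing_changes": 0}

for name, utility in [("A", utility_a), ("B", utility_b)]:
    best = max(actions, key=lambda a: expected_utility(utility, actions[a]))
    print(f"Agent {name} chooses: {best}")
# Agent A chooses badger_friend; agent B chooses stay_quiet.
# Identical probability assignments, opposite decisions.
```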
I've noticed various friends and some people on this site making just this mistake. It's as if their love for truth and rational enquiry, which is a great thing in its own right, spills over into a conviction to act in a particular way, which itself is of questionable optimality.
In recent months there have been several posts on LessWrong about the "dark arts", which have mostly concerned using asymmetric knowledge to manipulate people. I like these posts, and I respect the moral stance implied by their name, but I fear that "dark arts" is coming to cover the much broader case of any deviation from the simple rule that decisions are always good when they sound like true beliefs. I shouldn't need to argue explicitly that there are cases in which lying or manipulating constitutes a good decision; to demand such an argument would be to privilege a very particular hypothesis (namely, that decisions are always good when they sound like true beliefs).
This brings me all the way back to the much-loved quotation, "that which can be destroyed by the truth should be". There are several ways to interpret the quote, but at least one interpretation implies the existence of a simple isomorphism from true beliefs to good decisions. Personally, I can think of lots of things that could be destroyed by the truth but should not be.
Does it matter if you don't remember?
Does it matter if you experienced pain in the past, but you don't remember it? (And there are no other side effects, etc., etc.) At one point in Accelerando, Charles Stross describes children who routinely decapitate and disembowel each other, only to be repaired (bodily and memory-wise) by the friendly local AI. This struck me as awful, but I'm suspicious of my intuition. Note that here I'm assuming pain is a terminal "bad" factor in your utility function; you can substitute for "pain" whatever you think is bad. I think there are at least two questions here:
- Is it bad for someone to be in pain if they will not remember it in the future? I think yes, because by assumption pain is a terminal "bad" node. Being relieved of future painful memories is good, but nowhere near good enough to fully compensate.
- Is it bad to have experienced pain in the past, if you don't remember it? Or, can your utility function coherently include facts about the past, even if they have no causal connection to the present? My intuition here says yes, but I'd be interested in others' thoughts. To make this concrete, imagine that you have a choice between medium pain that you will remember, or extreme pain followed by memory erasure (see the sketch after this list).
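If it helps to see the two options side by side, here is a toy sketch. The pain numbers are entirely invented, and the framing of "utility over histories" versus "utility over memories" is my own gloss on the question, not a standard formalism.

```python
# Toy comparison of the two options above.
# All numbers are illustrative assumptions, not claims about real pain.

# Option 1: medium pain, remembered afterwards.
# Option 2: extreme pain, then the memory is erased.
medium_pain, extreme_pain = -10, -50
remembered_suffering = -3   # ongoing disutility of carrying the memory

def utility_over_history(pain, remembered):
    """Utility as a function of everything that actually happened."""
    return pain + (remembered_suffering if remembered else 0)

def utility_over_memory(pain, remembered):
    """Utility as a function only of what the agent will remember."""
    return (pain + remembered_suffering) if remembered else 0

for name, u in [("history-based", utility_over_history),
                ("memory-based", utility_over_memory)]:
    option1 = u(medium_pain, remembered=True)
    option2 = u(extreme_pain, remembered=False)
    print(f"{name}: option1={option1}, option2={option2}")
# The history-based function prefers option 1 (-13 vs -50): past pain counts.
# The memory-based function prefers option 2 (-13 vs 0): erased pain counts
# for nothing. The two answers to the question come apart exactly here.
```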
When does an insight count as evidence?
Bayesianism, as it is presently formulated, concerns the evaluation of the probability of beliefs in light of some background information. In particular, given a particular state of knowledge, probability theory says that there is exactly one probability that should be assigned to any given statement. A simple corollary is that if two agents with identical states of knowledge arrive at different probabilities for a particular belief, then at least one of them is irrational.
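As a trivial illustration of this uniqueness claim: assuming two agents share a prior and the likelihoods (that is, the same state of knowledge), Bayes' theorem leaves them no freedom in the posterior. The numbers below are arbitrary.

```python
# Bayes' theorem: given the same prior and the same likelihoods (i.e. the
# same state of knowledge), the posterior is fully determined.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Any two agents who agree on these three inputs must report the same
# output, or one of them has made an arithmetic error.
print(posterior(prior=0.01, likelihood_if_true=0.9, likelihood_if_false=0.05))
# -> 0.1538...; uniquely determined by the inputs.
```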
A thought experiment. Suppose I ask you for the probability that P=NP (a famous unsolved computer science problem). It sounds like a difficult problem, I know, but thankfully all the relevant information has been provided for you: namely, the axioms of set theory! Now we know that either P=NP is provable from the axioms of set theory, or its negation is (or neither is provable, but let's ignore that case for now). The problem is that you are unlikely to solve the P=NP problem any time soon.
So, being the pragmatic rationalist that you are, you poll the world's leading mathematicians, and do some research of your own into the P=NP problem and the history of difficult mathematical problems in general, to gain insight into which group of mathematicians may be more reliable, and to what extent they may be over- or under-confident in their beliefs. After weighing all the evidence honestly and without bias, you submit your carefully-considered probability estimate, feeling like a pretty good rationalist. So you didn't solve the P=NP problem, but how could you be expected to when it has eluded humanity's finest mathematicians for decades? The axioms of set theory may in principle be sufficient to settle the question, but the structure of the proof is unknown to you, and herein lies information that would be useful indeed but is unavailable at present. You cannot be considered irrational for failing to reason from unavailable information, you say; rationality only commits you to using the information that is actually available to you, and you have done so. Very well.
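The aggregation step in this story can be made concrete. Below is a minimal sketch under the strong, purely illustrative assumptions that each mathematician's verdict is an independent signal with a known reliability; the verdicts and reliabilities are invented.

```python
import math

# Treat each mathematician's verdict on P=NP as a noisy, independent signal.
# Reliabilities and verdicts below are invented for illustration.

def update_log_odds(log_odds, says_true, reliability):
    """One Bayesian update: reliability = P(expert says X | X is the case)."""
    step = math.log(reliability / (1 - reliability))   # log likelihood ratio
    return log_odds + (step if says_true else -step)

prior = 0.5                                  # start from maximal ignorance
log_odds = math.log(prior / (1 - prior))

# (says P=NP, reliability) for each polled expert.
experts = [(False, 0.7), (False, 0.6), (True, 0.55), (False, 0.65)]
for says_p_equals_np, reliability in experts:
    log_odds = update_log_odds(log_odds, says_p_equals_np, reliability)

probability = 1 / (1 + math.exp(-log_odds))
print(f"P(P=NP) after polling: {probability:.3f}")
# ~0.16 with these invented inputs: an honest credence, not a proof.
```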
Rational lies
If I were sitting opposite a psychopath with a particular sensitivity about ants, and I knew that telling him that ants have six legs would make him jump up and start killing the people around him, then it would be difficult to justify sharing my wonderful fact about ants, regardless of whether I believe that ants really have six legs.
Or suppose I knew my friend's wife was cheating on him, but I also knew that he was terminally ill and would die within the next few weeks. The question of whether or not to inform him of my knowledge is genuinely complex, and the truth or falsity of my knowledge about his wife is only one factor in the answer. Different people may disagree about the correct course of action, but no-one would claim that the only relevant fact is the truth of the statement that his wife is cheating on him.
This is all a standard result of expected utility maximization, of course. Vocalizing or otherwise communicating a belief is itself an action, and just like any other action it has a set of possible outcomes, to which we assign probabilities as well as utilities according to our values. We then average the utilities over the possible outcomes of each action, weighted by the probability that each will actually happen, and choose the action that maximizes this expected utility. Well, that's the gist of the situation, anyway. Much has been written on this site about the implications of expected utility maximization under more exotic conditions such as mind splitting and merging, but I'm going to be talking about more mundane situations, and the point I want to make is that beliefs are very different objects from the act of communicating those beliefs.
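To render that gist as a calculation, here is a sketch of the dying-friend dilemma above as an expected utility computation; every probability and utility is an invented placeholder, not a claim about the right answer.

```python
# Expected utility maximization over speech acts. Telling the truth is just
# another action with outcomes, probabilities, and utilities. All values
# below are illustrative assumptions.

actions = {
    # action -> list of (probability, utility) pairs over possible outcomes
    "tell_him": [(0.8, -40),   # his last weeks are poisoned by the news
                 (0.2, +10)],  # he values knowing, and makes his peace
    "stay_silent": [(1.0, -5)],  # mild ongoing discomfort of withholding
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: EU = {expected_utility(outcomes):+.1f}")
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(f"chosen action: {best}")
# Note: nothing in this computation references whether the belief itself is
# true; the truth enters only through its effect on outcome probabilities.
```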