The Litany of Tarski (formulated by Eliezer, not Tarski) reads:

If the box contains a diamond,

I desire to believe that the box contains a diamond;

If the box does not contain a diamond,

I desire to believe that the box does not contain a diamond;

Let me not become attached to beliefs I may not want.

This works for a physical realist, but I have been feeling uncomfortable with it for some time now. So I have decided to reformulate it in a more instrumental way, replacing existential statements with testable predictions. I had to find a new name for it, so I call it the Litany of Instrumentarski:

If believing that there is a diamond in the box lets me find the diamond in the box,

I desire to believe that there is a diamond in the box;

If believing that there is a diamond in the box leaves me with an empty box,

I desire to believe that there is no diamond in the box;

Let me not become attached to inaccurate beliefs.

Posting it here in the hope that someone else also finds it more palatable and unassuming than straight-up realism.

EDIT: It seems to me that this modification also guides you to straight-up one-boxing on Newcomb's problem, where the original is mired in the EDT vs. CDT issues.

EDIT2: Looks like the above version resulted in people confusing the desire for accurate beliefs with the desire for diamonds. It's about accurate accounting, not about the utility of a certain form of crystallized carbon.

Maybe the first line should be modified to something like "If I later find a diamond in the box...". How about the following?

If I will find a diamond in the box,

I desire to believe that I will find a diamond in the box;

If I will find no diamond in the box,

I desire to believe that I will find no diamond in the box;

Let me not become attached to inaccurate beliefs.
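
One way to make the difference explicit (a formalization of my own, added for clarity; the symbols appear in neither litany): the original litany ties belief to a fact about the territory, while the reformulation ties it to a testable prediction about future inputs,

$$\text{Tarski: } \mathrm{Bel}(D) \leftrightarrow D, \qquad \text{Instrumentarski: } \mathrm{Bel}(O_D) \leftrightarrow O_D,$$

where $D$ stands for "the box contains a diamond" and $O_D$ for "I will find a diamond in the box". The two coincide exactly when opening the box is a reliable test for $D$.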

For some reason the editor does not let me use the <strike> tag to cross out the previous version; I'm not sure how to work around it.


It seems to me that the number of false things¹ that it is actually instrumentally useful to believe is much less than the number of false things that someone else would like me to think that it is instrumentally useful to believe in order that they may take advantage of my belief in false things.

In other words, for any belief X, if X is false and some person S wants to convince me that my believing X would be instrumentally useful to me, it is almost certainly the case that ① believing X actually isn't instrumentally useful to me, and indeed ② my believing X would put me at a disadvantage to S.

In other other words, the more you try to convince me that believing false things is good for me, the more I will conclude that you are bad for me.


¹ As opposed to merely inaccurate but usually-good-enough approximations, as found in classical physics or kindergarten hygiene instruction.

Not sure how this is related to what I posted. It was about accurate accounting, not accurate reporting of your account to someone who doesn't want an accurate account.

If my beliefs change reality, I desire to believe that my beliefs change reality. If my beliefs have no effect upon reality, I desire to believe my beliefs have no effect upon reality. Let me not become attached to inaccurate beliefs.

Further:

My beliefs exist in reality. If my beliefs, by so existing, change the outcome of reality, I desire to believe that my beliefs change reality. If my beliefs, by so existing, do not change the outcome of reality, I desire to believe that my beliefs are immaterial to reality. Let me not become attached to mind-body dualism.

You don't need to reject realism to reject the idea that beliefs can only be reflections of reality, rather than a causal part of it. The map is part of the territory. How accurately does your map represent your map?

If my beliefs, by so existing, change the outcome of reality

So, if your writing "here be dragons" on a map results in someone encountering a dragon when traveling to the mapped area (a very popular theme in SF/F), how useful is the concept of reality?

You don't need to reject realism to reject the idea that beliefs can only be reflections of reality, rather than a causal part of it.

You don't need to, no. But the concept of reality is less useful if all you mean by it is future inputs to test the accuracy of your beliefs, as opposed to a largely unchanged territory that you map. If you have to build the proverbial epicycles to keep your belief alive, you might want to consider a simpler model.

The map is part of the territory.

Is this a useful assertion? If so, how?

How accurately does your map represent your map?

Not sure what you mean by this. That beliefs can be nested? Sure. That the term "map" presumes some territory it maps? It sure does, in the realist picture. Hence my preference for the term "model" or even "belief". Of course, a realist can ask something like "but what is your model a model of [in reality]?", which to me is not a useful question if your reality changes depending on the models.

At the risk of dogpiling:

So, if it is true in reality that your writing "here be dragons" on a map results in someone encountering a dragon when traveling to the mapped area...

It happens. Probably not with dragons, but with the placebo effect and with many other things where the nonlinear second-order effect map->reality is ignored in this simplified map/territory distinction. So why make the distinction?

It is true in reality that it happens...

(Sorry.) To answer your question: for the times when it doesn't happen? I wasn't actually planning to join a debate; you might find it more productive to ask one of the people who gave more in-depth replies.

how useful is the concept of reality

Reality is a useful concept in all possible universes you might find yourself in!

Real things cause qualia, unreal things do not. No matter what you care about, this distinction will impact it.

May I ask what definition of reality you are currently using?

So, if your writing "here be dragons" on a map results in someone encountering a dragon when traveling to the mapped area (a very popular theme in SF/F), how useful is the concept of reality?

The placebo effect came up elsewhere as an example where beliefs alter reality. Similarly, self-fulfilling prophecies need not rely on magic; if I believe I'll fail at a task, I very probably alter, just by holding this belief, my odds of completing that task. The modified litany isn't "All beliefs modify reality," but "I should have accurate beliefs about which beliefs have repercussions in reality." Your dragon example is merely a demonstration of a belief which is immaterial to reality, at least for the purposes of the subject of the belief.

I believe this response suffices to answer the rest of your objections as well.

See http://lesswrong.com/lw/h69/litany_of_instrumentarski/8qht for an example of a pretty common theme in this post. Contrary to the argument presented in the comment [ETA: I misread the comment; this argument isn't actually present. My apologies!], rationality doesn't break down; a specific and faulty idea held by some rationalists breaks down.

He's talking about brains.

I was expecting to find someone commenting about beliefs whose truth-value may be hard to know but whose effect is positive nonetheless. Several examples (which I don't necessarily personally endorse):

If believing this homeopathic sugar pill works will make it work,
I desire to believe that this sugar pill works.
If believing this homeopathic sugar pill works will not make it work,
I desire to believe that this sugar pill does not work.
Let me not become attached to beliefs that do not serve me.

or

If believing in synchronicities will cause more good things to happen in my life,
I desire to believe in synchronicities.
If believing in synchronicities will not cause more good things to happen in my life,
I desire to not believe in synchronicities.
Let me not become attached to beliefs I do not want.

It appears that, if you have the ability to actually self-modify your beliefs as such, the "Litany of Instrumentarski" could be a useful way to deal with the thing where rationality breaks things like the placebo effect. Sugar pills or whatever: if you can adopt the positive sides of beliefs that are self-fulfilling prophecies (true either way you believe them, e.g. the Pygmalion effect), then that ought to be conducive to winning.

That's a good point. I guess I still have quite a ways to go to rid myself of the notion of external reality, which I was subconsciously assuming. If belief changes reality, too bad for reality. It's the accuracy of the belief that is important.

What's 'accuracy' without 'reality'?

Self-fulfilling beliefs don't mean there's no external reality; they just mean that your mind, and thus your beliefs, are part of reality and therefore capable of influencing it. If they weren't, naturally you would be unable to act on them in any case. The correct belief is, of course, "if someone believes X, X will occur. If someone believes Y, Y will occur."

EDIT: The last sentence, which is slightly tangential to the rest, has been moved (on the theory that it was attracting downvotes) to increase the signal-to-noise ratio. It still exists in the comment below, if you wish to downvote it.

This may, unfortunately, be one of those rare occasions where the instrumental value of a false belief outweighs that of knowledge.

-removed from the above post, for the curious and/or offended.

That seems like a reasonable response to shminux's post, so I'm not sure why you were at -2 (unless it was for your final sentence).

Huh, I hadn't noticed that. You're probably right; such a statement is something of an anti-applause light here on LessWrong. (And, to be fair, with good reason.)

EDIT: I think I'll remove it, actually ... I'll move it to a comment so as not to torture the poor souls who saw this cryptic conversation.

But if the belief is accurate either way, then you can basically pick whatever belief you want. This is the weird paradox of self-fulfilling prophecies, like the Pygmalion effect. So what then?

If I'm reading this correctly: if A is true but the evidence available to you points against A, you wish to believe that A is false? Or am I missing something?

if A is true

Note that this statement only makes sense if you already subscribe to physical realism, as it presumes the territory separate from any maps.

If you don't make this assumption, this statement means "at some point I will acquire evidence confirming the model based on A with very high confidence". The currently available evidence may be against A, however. This happens quite often in physics, though not in trivial ways.

For example, light was believed to be composed of particles, until the Poisson spot was discovered. There was plenty of experimental evidence for the particle theory, too. Afterwards, light was believed to be waves, and there was overwhelming evidence for this as well. Then the UV catastrophe was deduced and the photoelectric effect was discovered, demonstrating that the question "is light a wave or a particle" has a different answer depending on the manner of asking. The story is far from over at present.

I wish to believe that I will update my beliefs based on available evidence (a bit meta here).

The correct answer to "is light a wave or a particle" is "No, it is not the case that there exists 'light' that is a wave or a particle. Electromagnetism behaves according to these equations, which closely approximate wavelike behavior in these areas and closely approximate billiard balls in these areas."
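
To make "these equations" concrete (my gloss, not the commenter's): in the wavelike regime they would be Maxwell's equations, shown here in their vacuum form purely as an illustration,

$$\nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},$$

while the billiard-ball regime requires the quantized theory built on top of them.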

I think your heuristic requires too much computational power to be wielded as effectively as the original version, so it isn't worth it. The temptation to take bad black swan bets seems too great.

I don't follow. What computational power do you mean? And what bad black swan bets? A couple of examples would be great.

Maybe the first line should be modified to something like "If I later find a diamond in the box...". How about the following?

I disagree with this modification. The first one explicitly focuses on the causal effect of the belief, but the second one focuses on the temporal successors of the belief. The first is much stronger, more useful, and more general than the second.

Interesting. Do you mind elaborating?

Stronger because the second looks like a codification of "post hoc ergo propter hoc," better because the relationships are narrower, and more general because it responds well to situations where you let causation flow backwards in time. (For example, the first will let you pay in the Parfit's Hitchhiker scenario.)


I disagree with this modification. The first one explicitly focuses on the causal effect of the belief, but the second one focuses on the temporal successors of the belief. The first is much stronger, more useful, and more general than the second.

I prefer the modification, for some of the same reasons that you disagree with it. That is, because the modification is weaker, less general, actually doesn't serve to convey shminux's position, and avoids conflating instrumentality considerations with the anti-realist position.

Specifically, saying this:

If I will find no diamond in the box,

I desire to believe that I will find no diamond in the box;

... does not entail any sort of claim about the distribution of the diamond in situations in which one will not happen to, or expect to be able to, personally interact with the diamond but still cares whether diamond-containing boxes are sent to some place. I.e., it is technically compatible with:

If Sally will find a diamond in the box but I will never receive any message from Sally or the box after the box arrives at Sally,

I still desire to believe that Sally will find a diamond in the box.

(Or, you know, food rations and a terraforming device for her colonization mission.)

[This comment is no longer endorsed by its author]

Analogously, the proposition "snow is white" is true if and only if believing that snow is white has positive utility.

If you're a perfect Bayesian reasoner, believing that snow is white has positive utility iff snow is actually white, and so the above sentence simplifies to "The proposition "snow is white" is true if and only if snow is white." But you are not a perfect Bayesian reasoner, and insofar as you are imperfect, things are fuzzy.
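
Spelled out (my notation, not the commenter's): write $S$ for "snow is white" and $U(\mathrm{bel}\,S)$ for the utility of believing it. The proposed schema and the perfect-reasoner assumption are

$$\mathrm{True}(\ulcorner S \urcorner) \leftrightarrow U(\mathrm{bel}\,S) > 0, \qquad U(\mathrm{bel}\,S) > 0 \leftrightarrow S,$$

which chain together into the ordinary T-schema, $\mathrm{True}(\ulcorner S \urcorner) \leftrightarrow S$. The fuzzy cases are exactly those where the second biconditional fails for an imperfect reasoner.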

The first paragraph here is pretty much tautological; what you can disagree about is whether the cost involved, and the benefit to be gained, are ever such that you can actually gain utility by self-delusion.

More analogously, you should believe that the proposition "snow is white" is true if and only if believing that snow is white has positive utility.

There is a difference between the proposition being true and believing the proposition is true, right?

The proposition "snow is white" is true if and only if believing that snow is white has positive utility.

That's quite catchy.

It also seems to claim that snow is black if believing so has positive utility, regardless of whether or not it's actually true.

Consider, for example, if Big Brother can read your mind and will punish you horribly if you believe that snow is white. Yes, in that case it might make sense to believe that it's black (if you are capable of doing so), but that doesn't make it true.

Yes, it is basically a roundabout way of saying that you consider achieving your values more important than having an accurate map.

Right, and if people care terminally about having an accurate map, that's one of their values, so the sentence also applies to them.

Utility is not the same thing as testability. Your color detector may return the same result when pointed at snow as when pointed at a sheet of paper, but you may decide to call the former "black" and the latter "white" for utilitarian reasons, which is quite common IRL.

I'm interested in when you think the utility of beliefs diverges from their truth.

I don't believe we are talking about the same thing. I wasn't talking about utility; I was talking about testability. My operational definition of truth is the accuracy of predictions. Except for the "mathematical truths", which are well-formed finite strings of symbols.

All the time, don't you think?

Shminux,

One of the problems with your position is that physical realism is the beginning of the debate, not its end. Positions on the ontological status of physical entities have all sorts of implications elsewhere.

You yourself implicitly acknowledge as much when you said that you desire to find diamonds in the box, and want to adjust your beliefs to maximize the likelihood of such a pleasant discovery. In other words, finding diamonds means more than just evidence of accurate belief or accurate ability to make predictions - finding a (valuable) diamond has other benefits as well.

You yourself implicitly acknowledge as much when you said that you desire to find diamonds in the box

I never said that. In this example I don't care about diamonds. I desire to believe that my expectations of the number of diamonds will match the reported number of diamonds, should I bother checking. Could be one or could be none, whatever, as long as it matches.

You said:

If believing that there is a diamond in the box lets me find the diamond in the box, I desire to believe that there is a diamond in the box

This implies that you'd like to find a diamond in the box. That desire to find a diamond has nothing to do with physical pragmatism.

But if you say I've misread the emphasized portion of your quote, then I believe you. Not sure what it changes about my point that the physical realism debate exists in part to provide a firmer underpinning for other debates (like morality or preference).

This implies that you'd like to find a diamond in the box.

Only if I believe that I will find one. Actually not even that. It's the other way around. I desire to believe that I will find a diamond if and only if I will find the diamond.

I guess I sort of see where the confusion is coming from. Maybe I should rephrase it. I have edited the OP.

EDIT:

the physical realism debate exists in part to provide a firmer underpinning for other debates (like morality or preference).

Are you saying that I must subscribe to physical realism because of moral considerations?

Are you saying that I must subscribe to physical realism because of moral considerations?

No. But your position (or any position) on physical realism has implications in meta-ethics. Personally, those implications are the only reason I find the physical realism debate interesting at all.

In other words, a moral realist who is a physical anti-realist is very confused. In general, the desire of all realists is to have a consistent definition of "real" for both physical entities and moral facts. (Probably, we all desire it, but realists believe the characteristic "real" is a worthwhile label to try to apply.)

I'm confused by your stance because you seem to think one's position on physical realism has no bearing on one's moral position. Whereas I think most of the motivation for (interesting) arguments about physical realism is an outgrowth of disputes in other kinds of realism debates.

I'm confused by your stance because you seem to think one's position on physical realism has no bearing on one's moral position.

Not quite. I assert that instrumentalism/physical pragmatism gives you a cleaner path to moral considerations than physical realism. The resulting positions may or may not be the same, depending on other factors (not all physical realists have the same set of morals, either). But not getting sidetracked into what exists and what doesn't, and instead concentrating on accurate and inaccurate models of past, present and future inputs, lets you bypass a lot of rubbish along the way. Unfortunately, it does not let you avoid being strawmanned by everyone else.

Suffice it to say that I don't agree. Having a consistent definition of "exists" would help immeasurably in clarifying positions on the moral realism / anti-realism debate. And you don't do a good job of noting when you are using a word in a non-standard way (and your other interlocutors are not great at noticing that your usage is non-standard).

You do realize that the standard understandings in the moral realism debate would say that referencing wrongness to a particular (non-universal) source of judgment is an anti-realist position?

Saying that right and wrong are meaningful only given a particular social context is practically the textbook definition of moral relativism, which is an anti-realist position.

Suffice it to say that I don't agree.

That's a position, not an argument.

Having a consistent definition of "exists" would help immeasurably in clarifying positions on the moral realism / anti-realism debate.

Boooring... I care about accurate models, not choosing between two equally untestable positions.

You do realize that the standard understandings in the moral realism debate would say that referencing wrongness to a particular (non-universal) source of judgment is an anti-realist position?

Why should I care what a particular school of untestables says?

Saying that right and wrong are meaningful only given a particular social context is practically the textbook definition of moral relativism, which is an anti-realist position.

Again, I don't care about the labels, I care about accurate beliefs.