MoreOn comments on Making Beliefs Pay Rent (in Anticipated Experiences) - Less Wrong

110 Post author: Eliezer_Yudkowsky 28 July 2007 10:59PM


Comment author: MoreOn 25 February 2011 06:45:42PM *  4 points [-]

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’d broken the ice—”

She thinks: “What a butt-ugly idiot!” and gets the hell away from him.

Joe goes on happily believing that he’s smart and beautiful.

For myself, the answer is obvious: my beliefs are means to an end, not ends in themselves. They're utility producers only insofar as they help me accomplish utility-producing operations. If I were to buy stock believing that its price would go up, I'd better hope my belief pays its rent in correct anticipation, or else it goes out the door.

But for Joe? If he has utility-pumping beliefs, then why not? It’s not like he would get any smarter or prettier by figuring out he’s been a butt-ugly idiot this whole time.

Comment author: Spurlock 25 February 2011 07:40:26PM *  4 points [-]

It's sort of taken for granted here that it is in general better to have correct beliefs (though there have been some discussions as to why this is the case). It may be that there are specific (perhaps contrived) situations where this is not the case, but in general, so far as we can tell, having the map that matches the territory is a big win in the utility department.

In Joe's case, it may be that he is happier thinking he's beautiful than he is thinking he is ugly. And it may be that, for you, correct beliefs are not themselves terminal values (ends in themselves). But in both cases, having correct beliefs can still produce utility. Joe, for example, might make a better effort to improve his appearance, might be more likely to approach girls who are in his league and at his intellectual level, thereby actually finding some sort of romantic fulfillment instead of just scaring away uninterested ladies. He might also not put all his eggs in the "underwear model" and "astrophysicist" baskets career-wise. You can further twist the example to remove these advantages, but then we're just getting further and further from reality.

Overall, the consensus seems to be that wrong beliefs can often be locally optimal (meaning that giving them up might result in a temporary utility loss, or that you can lose utility by not shifting them far enough towards truth), but a maximally rational outlook will pay off in the long run.

Comment author: Manfred 25 February 2011 07:54:04PM 3 points [-]

The trouble is that this rationale leads directly to wireheading at the first chance you get - choosing to become a brain in a vat with your reward centers constantly stimulated. Many people don't want that, so those people should make their beliefs only a means to an end.

However, there are some people who would be fine with wireheading themselves, and those people will be totally unswayed by this sort of argument. If Joe is one of them... yeah, sure, a sufficiently pleasant belief is better than facing reality. In this particular case, I might still recommend that Joe face the facts, since admitting that you have a problem is the first step. If he shapes up enough, he might even get married and live happily ever after.

Comment author: TheOtherDave 25 February 2011 09:04:38PM 1 point [-]

Well, he might. Or, rather, there might be available ways of becoming smarter or prettier for which jettisoning his false beliefs is a necessary precondition.

But, admittedly, he might not.

Anyway, sure, if Joe "terminally" values his beliefs about the world, then he gets just as much utility out of operating within a VR simulation of his beliefs as out of operating in the world. Or more, if his beliefs turn out to be inconsistent with the world.

That said, I don't actually know anyone for whom this is true.

Comment author: MoreOn 25 February 2011 11:29:11PM 0 points [-]

That said, I don't actually know anyone for whom this is true.

I don't know too many theist janitors, either. Doesn't mean they don't exist.

From my perspective, it sucks to be them. But once you're them, all you can do is minimize your misery by finding some local utility maximum and staying there.

Comment author: jimrandomh 25 February 2011 10:12:36PM 5 points [-]

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

They can. They just do so very rarely, and since accepting some inaccurate beliefs makes it harder to determine which beliefs are and aren't beneficial, in practice we get the highest utility from favoring accuracy. It's very hard to keep the negative effects of a false belief contained; they tend to have subtle downsides. In the example you gave, Joe's belief that he's already smart and beautiful might be stopping him from pursuing self-improvement. But there definitely are cases where accurate beliefs are detrimental; Nick Bostrom's Information Hazards has a partial taxonomy of them.

Comment author: HonoreDB 26 February 2011 01:47:39AM 0 points [-]

I don't think it's possible for a reflectively consistent decision-maker to gain utility from self-deception, at least if you're using an updateless decision theory. Hiding an unpleasant fact F from yourself is equivalent to deciding never to know whether F is true or false, which means fixing your belief in F at your prior probability for it. But a consistent decision-maker who loses 10 utilons from believing F with probability ~1 must lose p*10 utilons for believing F with probability p.

Comment author: jimrandomh 26 February 2011 03:04:19AM *  2 points [-]

A consistent decision-maker who loses 10 utilons from believing F with probability ~1 must lose p*10 utilons for believing F with probability p.

No, this is not true. Many of the reasons why true beliefs can be bad for you involve information about your beliefs leaking out to other agents in ways other than through your actions, and there is no particular reason for this effect to be linear. For example, blocking communications from a potential blackmailer is good because knowing with probability 1.0 that you're being blackmailed is more than 5 times worse than knowing with probability 0.2 that you will be blackmailed in the future if you don't.

Comment author: HonoreDB 26 February 2011 05:12:04PM 0 points [-]

Oh, sure. By "gain utility" I meant "gain utility directly," as in the average Joe story.

Comment author: jimrandomh 26 February 2011 05:20:27PM 0 points [-]

I don't think it's linear in the average Joe story, either; if there's one threshold level of belief which changes his behavior, then utility is constant for levels of belief on either side of that threshold and discontinuous at the threshold itself.

Comment author: HonoreDB 26 February 2011 05:47:07PM 1 point [-]

A rational agent can have its behavior depend on a threshold crossing of belief, but if there's some belief that grants it utility in itself (e.g. Joe likes to believe he is attractive), the utility it gains from that belief has to be linear in the level of belief. Otherwise, Joe can get dutch-booked by a Monte Carlo plastic surgeon.

Comment author: jimrandomh 26 February 2011 05:58:54PM 0 points [-]

Otherwise, Joe can get dutch-booked by a Monte Carlo plastic surgeon.

This doesn't sound right. Could you describe the Dutch-booking procedure explicitly? Assume that believing P with probability p gives me utility U(p)=p^2+C.

Comment author: HonoreDB 26 February 2011 07:33:13PM *  0 points [-]

An additive constant seems meaningless here: if Joe gets C utilons no matter what p is, then those utilons are unrelated to p or to P; Joe's behavior should be identical if U(p)=p^2, so for simplicity I'll ignore the C.

Now, suppose Joe currently believes he is not attractive. A surgery has a .5 chance of making him attractive and a .5 chance of doing nothing. This surgery is worth U(.5)-U(0)=.25 utilons to Joe; he'll pay up to that amount for it.

Suppose instead the surgeon promises to try again, once, if the first surgery fails. Then Joe's overall chance of becoming attractive is .75, so he'll pay U(.75)-U(0)=.75^2=0.5625 for the deal.

Suppose Joe has taken the first deal, and the surgeon offers to upgrade it to the second. Joe is willing to pay up to the difference in prices for the upgrade, so he'll pay .5625-.25=.3125 for the upgrade.

Joe buys the upgrade. The surgeon performs the first surgery. Joe wakes up and learns that the surgery failed. Joe is entitled to a second surgery, thanks to that .3125-utility purchase of the upgrade. But the second surgery is now worth only .25 utility to him! The surgeon offers to buy that second surgery back from him at a cost of .26 utility. Joe accepts. Joe has spent a net of .0525 utility on an upgrade that gave him no benefit.

As a sanity check, let's look at how it would go if Joe's U(p)=p. The single surgery is worth .5. The double surgery is worth .75. Joe will pay up to .25 utility for the upgrade. After the first surgery fails, the upgrade is worth .5 utility. Joe does not regret his purchase.
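As a further sanity check, the same arithmetic can be reproduced in a few lines of Python. This is only a minimal sketch of the deal described above, under its stated assumptions: Joe values each offer by the change it makes to U(p), where p is his probability of ending up attractive, and the surgeon's buy-back offer is .26 utilons. The function name and structure are illustrative, not anything from the original discussion.

```python
# Minimal sketch: Joe values a deal by the change it makes to U(p), where p is
# his probability of being attractive. Names and structure are illustrative.
def dutch_book(U):
    single = U(0.5) - U(0.0)           # one surgery: .5 chance of success
    double = U(0.75) - U(0.0)          # surgery plus one retry: .75 chance
    upgrade_price = double - single    # what Joe will pay to upgrade the deal
    # The first surgery fails; the unused retry is now just a single surgery.
    retry_value = U(0.5) - U(0.0)
    buyback = 0.26                     # the surgeon's buy-back offer
    joe_sells = buyback > retry_value
    net_loss = upgrade_price - buyback if joe_sells else 0.0
    return single, double, upgrade_price, retry_value, net_loss

print(dutch_book(lambda p: p ** 2))  # (0.25, 0.5625, 0.3125, 0.25, ~0.0525): Dutch-booked
print(dutch_book(lambda p: p))       # (0.5, 0.75, 0.25, 0.5, 0.0): no regret
```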

Comment author: jimrandomh 26 February 2011 08:25:59PM *  2 points [-]

You're missing the fact that how much Joe values the surgery depends on whether or not he expects to be told whether it worked afterward. If Joe expects to have the surgery but to never find out whether or not it worked, then its value is U(0.5)-U(0)=0.25. On the other hand, if he expects to be told whether it worked or not, then he ends up with a belief-score of either 0 or 1, not 0.5, so its value is (0.5*U(1.0) + 0.5*U(0)) - U(0) = 0.5.

Suppose Joe is uncertain whether he's attractive or not - he assigns it a probability of 1/3. Someone offers to tell him the true answer. If Joe's utility-of-belief function is U(p)=p^2, then being told the answer is worth ((1/3)*U(1) + (2/3)*U(0)) - U(1/3) = ((1/3)*1 + (2/3)*0) - (1/9) = 2/9, so he takes the offer. If on the other hand his utility-of-belief function were U(p)=sqrt(p), then being told the information would be worth ((1/3)*sqrt(1) + (2/3)*sqrt(0)) - sqrt(1/3) = -0.244, so he plugs his ears.
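As a minimal sketch, that value-of-learning calculation can be written out directly (the function name is illustrative; U is the utility-of-belief function and the prior is 1/3, as above):

```python
# Minimal sketch: expected change in utility-of-belief from being told whether
# the fact is true, starting from a prior of 1/3. Convex U favors finding out;
# concave U favors plugging your ears.
from math import sqrt

def value_of_learning(U, prior):
    # With probability `prior` the belief resolves to 1, otherwise to 0.
    expected_after = prior * U(1.0) + (1 - prior) * U(0.0)
    return expected_after - U(prior)

print(value_of_learning(lambda p: p ** 2, 1 / 3))   # 2/9 ~ 0.222: take the offer
print(value_of_learning(lambda p: sqrt(p), 1 / 3))  # ~ -0.244: plug your ears
```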

Comment author: NancyLebovitz 25 February 2011 10:21:16PM 0 points [-]

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

Is there a difference between utility and anticipated experiences? I can see a case that utility is the probability of anticipated, desired experiences, but for the purposes of this argument, I don't think that makes for an important difference.

Comment author: MoreOn 25 February 2011 11:19:03PM 0 points [-]

"Smart and beautiful" Joe is being Pascal's-mugged by his own beliefs. His anticipated experiences lead to exorbitantly high utility. When failure costs (relatively) little, it subtracts little utility by comparison.

I suppose you could use the same argument for the lottery-playing Joe. And you would realize that people like Joe, on average, are worse off. You wouldn't want to be Joe. But once you are Joe, his irrationality looks different from the inside.

Comment author: JGWeissman 25 February 2011 11:17:30PM 0 points [-]

In this example, Joe's belief that he's smart and beautiful does pay rent in anticipated experience. He anticipates a favorable reaction if he approaches a girl with his gimmick and pickup line. As it happens, his inaccurate beliefs are paying rent in inaccurate anticipated experiences, and he goes wrong epistemically by not noticing that his actual experience differs from his anticipated experience and by failing to update his beliefs accordingly.

The virtue of making beliefs pay rent in anticipated experience protects you from forming incoherent beliefs, maps that do not correspond to any territory. Joe's beliefs are coherent, correspond to a part of the territory, and are persistently wrong.

Comment author: MoreOn 25 February 2011 11:24:56PM 0 points [-]

If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.

In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?

Comment author: JGWeissman 25 February 2011 11:32:24PM *  0 points [-]

In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?

No, for an example of beliefs that don't pay rent in any anticipated experience, see the first 3 paragraphs of this article:

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

Comment author: MoreOn 25 February 2011 11:34:40PM *  1 point [-]

Two people have semantically different beliefs.

Both beliefs lead them to anticipate the same experience.

EDIT: In other words, two people might think they have different beliefs, but when it comes to anticipated experiences, they have similar enough beliefs about the properties of sound waves, falling trees, recorders, and so on that they anticipate the same experience.

Comment author: JGWeissman 25 February 2011 11:53:11PM 2 points [-]

Two people have semantically different beliefs.

Taboo "semantically".

See also the example of The Dragon in the Garage, as discussed in the followup article.

Comment author: MoreOn 26 February 2011 12:31:18AM 0 points [-]

Taboo'ed. See edit.

Although I have a bone to pick with the whole "belief in belief" business, right now I'll concede that people actually do carry beliefs around that don't lead to anticipated experiences. Wulky Wilkinsen being a "post-utopian" (as interpreted from my current state of knowing 0 about Wulky Wilkinsen and post-utopians) is a belief that doesn't pay any rent at all, not even a paper that says "moneeez."

Comment author: Steven_Bukal 27 June 2011 07:41:47PM 1 point [-]

If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.

Or they pay you with forged bills. You think you'll be able to deposit them at the bank and spend them to buy stuff, but what actually happens is the bank freezes your account and the teller at the store calls the police on you.

Comment author: buybuydandavis 21 September 2011 09:43:35AM *  3 points [-]

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

I think you've hit on one of the conceptual weaknesses of many Rationalists. Beliefs can pay rent in many ways, but Rationalists tend to only value the predictive utility of beliefs and pooh-pooh the other utilities of belief. Comfort utility: it makes me feel good to believe it. Social utility: people will like me for believing it. Efficacy utility: I can be more effective if I believe it.

Predictive truth is a means to value, and even if it is a value in itself, it's surely not the only value. Instead of pooh-poohing other types of utility, to convince people you need to use that predictive utility to analyze how the other utilities can best be fulfilled.