Econ/Game theory question
This puzzled me. I'm pretty sure it's one of those unsolvable questions, but I'd want to know if it's not.
Two members of the species Homo economicus, A and B, live next to each other. A wants to buy an easement (a right to cross B's property, without which he cannot bring anything onto his lot) from B so that he can develop his property. B, under the law, has an absolute right to exclude A, meaning that nothing happens unless B agrees to it. The cost to B of granting this easement is $5 - it's over a fairly remote part of his land and he's not using it for anything else. A values the easement at $500,000, because he's got a sweet spot to build his dream house, if only he could get construction equipment and whatnot to it. A and B know each other's costs and values. They are "rational" and purely self-interested, and bargaining costs nothing. What's the outcome? I'm guessing it's "Anywhere between $5 and $500k," or "There is no deal unless one party can credibly commit to being irrational." But I'm really not sure.
This could be asked as "In a bilateral monopoly situation where the seller's reservation price is $5 and the buyer's is $500,000, what is the predicted outcome?" But I figured the concrete example might make it easier to reason about.
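For what it's worth, one textbook benchmark - though not necessarily the answer to the puzzle - is the Nash bargaining solution: with transferable utility, costless bargaining, and equal bargaining power, the parties simply split the surplus. A minimal sketch using the numbers from the example:

```python
# Nash bargaining benchmark for the easement example. One standard prediction
# under strong assumptions (transferable utility, equal bargaining power,
# costless bargaining) -- not a claim that it settles the question above.

seller_cost = 5          # B's reservation price: his cost of granting the easement
buyer_value = 500_000    # A's valuation of the easement

surplus = buyer_value - seller_cost      # gains from trade: $499,995
price = seller_cost + surplus / 2        # equal split of the surplus

print(f"Surplus from trade: ${surplus:,.2f}")   # $499,995.00
print(f"Predicted price:    ${price:,.2f}")     # $250,002.50
```

Under those assumptions a deal always happens, at a price halfway between the two reservation prices; whether those assumptions describe two agents who each know the other's values and can hold out indefinitely is exactly what's in question.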
Now that I've written this, I'm tempted to develop a "True price fallacy" and its implications for utilitarian measurement. But that's a separate matter entirely.
Experiment Idea Thread - Spring 2011
This is an idea that just occurred to me. We have a large community of people who think about scientific problems recreationally, many of whom are in no position to go around investigating them. Hopefully, however, some other community members are in a position to go around investigating them, or know people who are. The idea here is to allow people to propose relatively specific ideas for experiments, which can be upvoted if people think they are wise, and can be commented on and refined by others. Grouping them together in an easily identifiable, organized way in which people can provide approval and suggestions seems like it may actually help advance human knowledge, and with its high sanity waterline and (kind of) diverse group of readers, this community seems like an excellent place to implement this idea.
These should be relatively practical, with an eye towards providing some aspiring grad student or professor with enough of an idea that they could go implement it. You should explain the general field (physics, AI, evolutionary psychology, economics, psychology, etc.) as well as the question the experiment is designed to investigate, in as much detail as you are reasonably capable of.
If this idea proves popular, a new thread can be started whenever one of these reaches 500 comments, or quarterly, depending on demand. I expect this to help people refine their understanding of various sciences, and if it ever gets turned into even a few good experiments, it will prove immensely worthwhile.
I think it's best to make these distinct from the general discussion thread because they have a very narrow purpose. I'll post an idea or two of my own to get things started. I'd also encourage people to post not only experiment ideas, but criticism and suggestions regarding this thread concept. I'd also suggest that people upvote or downvote this post if they think this is a good or bad idea, to better establish whether future implementations will be worthwhile.
Hollow Adjectives
[This is a draft intended to be developed into a top-level post - it wouldn't feel wrong to post it as one right now, but it doesn't quite feel ready either. I am not entirely sure how to end it, or whether I could generalize better at the end. I kind of like the ending I have, but I'm not sure the overall point is coherent enough. Thoughts/suggestions/criticism would all be appreciated. ETA: The problem here may be that this is actually a follow-up (or a footnote) to another article I've been thinking of about weasel words and the art of misleading through language; related to my earlier post on Not Technically Lying]
When I was a teenager, I remember hearing a couple of riddles that I thought were neat:
"Could God draw a square circle?"
"Could God create a stone so large that even He could not lift it?"
Let me just disclaim that this post has pretty much nothing to do with religion. I just think these are great examples that many people may be familiar with. That said, consider: does either of these riddles pose a threat to the existence of an omnipotent God?
The answer, as will be clear on a full exposition, is a resounding "No." These are terrible, awful, misleading arguments, and the second one illustrates a relatively common trick used to sneak past an audience's intellectual defenses.
These riddles both fail to provide relevant counterexamples for exactly the same reason, even though the second may seem to make more sense. The first is simpler: a square circle is not a thing. We can put the words next to each other, but there is simply no way to translate the phrase "square circle" into some kind of expectation or thing in the real world, in the way one could translate "red barn" or "white unicorn" into an expected observation. It is impossible for anything to be both square and circular, so the fact that God cannot do something that cannot be done does not limit His omnipotence. By the same token, God could not create a married bachelor (using the strict definitions of the terms), as a bachelor is an unmarried man. The inability to violate the law of non-contradiction is not a legitimate refutation of omnipotence. If we taboo "square circle," there isn't really a meaningful way of describing the thing you are insisting God be able to draw.
"A stone so large that God cannot lift it," is exactly the same thing as a square circle. It sounds like a problem, since it's showing that God can't create a big enough stone. But an omnipotent being could presumably lift an object of any arbitrary size. Therefore, no stone could ever meet these criteria. If we taboo "so large that God cannot lift it," there is no actual weight you could describe such a stone as having. Presumably, God could lift a stone that weighted 3^^^3 tons, or even 3^^^^^^3 tons. You've created a hollow adjective: a descriptor whose actual meaning makes an argument self-evidently bad, but which is appealing if you don't actually think about it. It's not Not Technically Lying, because it isn't untrue, it's meaningless, which makes it harder to detect (though less common).
This is an extreme example. Usually, hollowness allows a speaker to be vague enough that they sound like they have a point, when a clear definition of their terms would show otherwise. Offenses in common language are usually a bit less egregious. "The president hasn't done enough to fix the economy" comes to mind as an example. What, exactly, should he have done? There has probably never been a president in history whom people would generally agree had done "enough to fix the economy"; indeed, most economists would question the president's power to seriously influence such things. "Hasn't the president failed to end the recession?" may be technically true, but it isn't really useful to call someone a failure for not doing something he lacks the power to do. This example is merely illustrative; it is often easy to create descriptors that make your conclusion seem foregone, despite their actual lack of substance.
Using such slanted terms is among the darker of the Dark Arts. It plays on its audience not by appealing to the irrational vagaries of the human mind; such efforts are, at least, often transparent. Rather, it masquerades as a rational argument, requiring complex nuance to refute. For those who are not disposed to disagree, it can escape the defense mechanisms of even a cautious mind. Understanding this concept can make it far easier to pinpoint the error in some beguiling arguments.
Planning a series: discounting utility
I'm planning a top-level post (probably two or three or more) on when agent utility should not be part of utilitarian calculations - which seems to be an interesting and controversial topic given some recent posts. I'm looking for additional ideas, and particularly counterarguments. Also hunting for article titles. The series would look something like the following - noting that obviously this summary does not have much room for nuance or background argument. I'm assuming moral antirealism, with the selection of utilitarianism as an implemented moral system.
Intro - Utilitarianism has serious, fundamental measurement problems, and sometimes substantially contradicts our intuitions. One solution is to say our intuitions are wrong - this isn't quite right (i.e. a morality can't be "wrong") unless our intuitions are internally inconsistent, which I do not think is the problem. This is particularly problematic because agents (especially with high self modification capacities) may face socially undesirable incentives. I argue that a better solution is to ignore or discount the utility of certain agents in certain circumstances. This better fits general moral intuitions. (There remains a debate as to whether Morality A might be better than Morality B when Morality B better matches our general intuitions - I don't want to get into this, as I'm not sure there's a non-circular meaning of "better" as applied to morality that does not relate to moral intuitions.)
1 - First, expressly anti-utilitarian utility can be disregarded. Most cases of this are fairly simple and bright-line. No matter how much Bob enjoys raping people, the utility he derives from doing so is irrelevant - unless he drinks the utilitarian Kool-Aid and only, for example, engages in rape fantasies (in which case his utility is counted; the issue is not that his desire is bad, it's that his actions are). This gets into some slight line-drawing problems with, for example, utility derived from competition (as one may delight in defeating people - this probably survives, particularly since it is all consensual).
1.5 - The above point is also related to the issue of discounting the future utility of such persons; I'm trying to figure out whether it belongs in this sequence. The example I plan to use (which makes pretty much the entire point) is as follows. You have some chocolate ice cream you have to give away. You can give it either to a small child or to a person who has just brutally beaten and molested that child. The child kind of likes chocolate ice cream; vanilla is his favorite flavor, but chocolate's OK. The adult absolutely, totally loves chocolate ice cream; it's his favorite food in the world. I, personally, give the kid the ice cream, and I think well over 90% of the general population would do the same. On the other hand, if the adult were simply someone who had an interest in molesting children but scrupulously never acted on it, I would not discount his utility so cheerfully. This may belong as a separate post of its own on the utility value of punishment. I'd be interested in feedback on it.
2 - Finally, and trickiest, is the problem of utility conditioned on false beliefs. Take two examples: an African village stoning a child to death because they think she's a witch who has made it stop raining, and the same village curing that witch-hood by ritually dunking her in holy water (or by some other innocuous procedure). In the former case, there's massive disutility that occurs because people think it will solve a problem that it won't (I'm also a little unclear on what it would mean for the utility of the many to "outweigh" the utility of the one, but that's an issue I'll address in the intro article). In the latter, there's minimal disutility (maybe even positive utility), even though the ritual is equally impotent. The best answer seems to be that utility conditioned on false beliefs should be ignored to the extent that it is so conditioned. Many people (myself included) celebrate religious holidays with no belief whatsoever in the underlying religion - there is substantial value in the gathering of family and community. Similarly, there is some value to the gathering of the community in both village cases; in the stoning it doesn't outweigh the costs, in the baptism it very well might.
3 - (tentative) How this approach coincides with the unweighted approach in the long term. Basically, if we ignore certain kinds of utility, we will encourage agents to pursue other kinds of utility (if you can't burn witches to improve your harvest, perhaps you'll learn how to rotate crops better). The utility they pursue is likely to be of only somewhat lower value to them (or higher value in some cases, if they're imperfect, i.e. human). However, it will be of non-negative value to others. Thus, a policy-maker employing adjusted utilitarianism is likely to obtain better outcomes from an unweighted perspective. I'm not sure this point is correct or cogent.
I'm aware that at least some of this runs against Less Wrong canon. I'm curious whether people have counterarguments, objections, counterexamples, or general feedback on whether this would be a desirable series to spell out.
The Bias You Didn't Expect
There are few places where society values rational, objective decision-making as much as it does in judges. While there is a rather cynical school of thought called legal realism that says the law is really based on quirks of individual psychology - "what the judge had for breakfast" - there's a broad social belief that the decisions of judges are unbiased. And where they aren't unbiased, they're biased for Big, Important, Bad reasons, like racism or classism or politics.
It turns out that legal realism is totally wrong. It's not what the judge had for breakfast; it's how recently the judge had breakfast. A new study (media coverage) of Israeli judges shows that, when making parole decisions, they grant parole about 65% of the time just after meal breaks, with the rate falling almost all the way to 0% right before breaks and at the end of the day (i.e. as far from the last break as possible). The decline between the two points is roughly linear.
Why people reject science
From the NYTimes. The central point:
Humans, he argues, are hard-wired to reject scientific conclusions that run counter to their instinctive belief that someone or something is out to get them.
Approaching Infinity
[This is from a very neat example my real analysis professor used some years ago. While I'm fairly confident it's neat, I'm not certain it's top-level-post-worthy. The general point is about problems with applying concepts involving infinity to reality; any advice on content (or formatting!) would be greatly welcomed. My math education basically ended after a few upper division courses, so it's possible there are some notational schemes or methods I am ignorant of.
I think this is a fun little exercise, if nothing more.]
The concept of "infinity" and "infinite series" and sets get thrown around a lot in mathematics and some of philosophy. It's worth trying to put the concept of infinity in perspective before we try to think of things in the real world being "infinite." Warning: this post will involve numbers that are literally too large to comprehend. But that's the point.
Let us define an operator, /X\ ("triangle-X"): /X\ = X raised to the X power, X times. Thus, /2\ = 2^2^2 = 2^4 = 16.
//2\\ ("2-triangle-2") would do this operation twice. Thus, it would equal /16\, the value of which we'll get to in a minute.
We now introduce a new operator, [X] ("square-X"). [X] = triangle-X-triangle-X, i.e. X inside of /X\ triangles. [2] = ////////////////2\\\\\\\\\\\\\\\\
We can introduce another operator, [X> ("pentagon-X"). [X> = X inside of [X] squares (I believe this would be "square-X-square-X").
...
[Edited for clarity]
I'll spare you the next [X] operators and go right to (X) ("circle-X"). Technically, it's whole-lot-of-sides-polygon-X - we could continue this process indefinitely - but we'll call it circle-X, because that's as far as we're going. (X) follows the process that took us from triangle to square to pentagon, iterated an additional [X] times.
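For readers who find recursion easier to follow than nested brackets, here is a minimal Python sketch of the first few operators, assuming a left-to-right reading of "X raised to the X power X times" (for X = 2 either reading gives 16). Only the very smallest inputs can actually be evaluated, which is, of course, the point:

```python
# A sketch of the first few operators, assuming a left-to-right reading of
# "X raised to the X power X times" (for X = 2 either reading gives 16).
# Only the very smallest inputs can actually be evaluated -- which is the point.

def triangle(x):
    """/X\\: raise x to the x-th power, x times over."""
    result = x
    for _ in range(x):
        result = result ** x
    return result

def iterate(op, x, times):
    """Apply an operator to x repeatedly, e.g. //2\\ = iterate(triangle, 2, 2)."""
    for _ in range(times):
        x = op(x)
    return x

def square(x):
    """[X]: x wrapped in triangle(x) triangles -- so [2] is sixteen nested triangles around 2."""
    return iterate(triangle, x, triangle(x))

def pentagon(x):
    """[X>: x wrapped in square(x) squares."""
    return iterate(square, x, square(x))

print(triangle(2))   # 16
print(triangle(1))   # 1 -- the only other argument that finishes quickly
# Don't call square(2): already its second nested triangle, /16\, has ~2*10^19 digits.
```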
I'll be honest. This got kind of meaningless a bit before [X]. Let's start trying to construct what [2] equals, and you'll see why.
/2\ = 16. So //2\\ = /16\ = 16^16^16^...^16 (sixteen successive exponentiations). Using some very rough approximations, we can say this is about 10^(2x10^19), or a one followed by twenty billion billion zeroes. //16\\ is thus that number raised to the power of itself, that many times over. My math education could be more complete, but I am not aware of another way to denote such a number. To say it could not be written in scientific notation on a universe-sized sheet of paper is probably a colossal understatement. And after we calculate that number, we still have to repeat the process thirteen more times to get [2]. We could theoretically keep going until we got to (2); (2) is a number that cannot be meaningfully expressed, understood, or calculated by any means that exist today. And there's still ((2)) after that.
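If you want to sanity-check the "twenty billion billion zeroes" figure without ever holding the number itself, you can track base-10 logarithms instead. A minimal sketch, again assuming the left-to-right reading used above:

```python
# Sanity-checking the "twenty billion billion zeroes" figure by tracking
# base-10 logarithms instead of the numbers themselves (same left-to-right
# reading as in the sketch above).
import math

def log10_triangle(x):
    """Return log10(/X\\) without ever materializing /X\\."""
    log_result = math.log10(x)
    for _ in range(x):
        log_result *= x      # log10(r ** x) = x * log10(r)
    return log_result

print(log10_triangle(2))    # ~1.2, i.e. /2\ = 16
print(log10_triangle(16))   # ~2.2e19: //2\\ is a 1 followed by roughly twenty billion billion zeroes
```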
Now, imagine that this period (.) represents zero. Imagine drawing a line from that point to a point on the near surface of the sun, which represents infinity (yes, this is improper - it's a finite line - but the point is visualization, so understatement really isn't an issue here). (2) lies within the parentheses surrounding that period, and that's an understatement of how close it is to zero. It isn't even 1/(2) of an inch from that period.
Remember this the next time you ponder the meaning, use, or existence of an infinite set, infinite repetitions, or an infinite time.
Assuming Nails
Tangential followup to Defeating Ugh Fields in Practice.
Somewhat related to Privileging the Hypothesis.
Edited to add:
I'm surprised by negative/neutral reviews. This means that either I'm simply wrong about what counts as interesting, or I haven't expressed my point very well. Based on commenter response, I think the problem is the latter. In the next week or so, expect a much more concise version of this post that expresses my point about epistemology without the detour through a criticism of economics.
At the beginning of my last post, I was rather uncharitable to neoclassical economics:
If I had to choose a single piece of evidence off of which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it's this article about financial incentives and medication compliance.... [to maintain that this theory is correct] is to crush reality into a theory that cannot hold it.
Some mistook this to mean that I believe neoclassical economists honestly, explicitly believe that all people are always totally rational. But, to quote Rick Moranis, "It's not what you think. It's far, far worse." The problem is that they often take the complex framework of neoclassical economics and assume that a valid deduction within this framework is a valid deduction about the real world. However, deductions within any given framework are entirely uninformative unless the framework corresponds to reality. Because such deductions are internally valid, we often give them far more weight than they are due: testing the fit of a theoretical framework to reality is hard, while a valid deduction within a framework feels so very satisfying. But even if you have a fantastically engineered hammer, you cannot go around assuming everything you want to use it on is a nail. It is all too common for experts to assume that their framework applies cleanly to the real world simply because it works so well in its own world.
If this concept doesn't make perfect sense, that's what the rest of this post is about: spelling out exactly how we go wrong when we misuse the essentially circular models of many sciences, and why this matters. We will begin with the one discipline in which this problem does not occur: mathematics, the paragon of "pure" academic disciplines. This is principally because mathematics appears to have perfect conformity with reality, with no research or experimentation needed to ensure that conformity. The entire system of mathematics exists, in a sense, in its own world. You could sit in a windowless room (perhaps one with a supercomputer) and, theoretically, derive every major theorem of mathematics, given the proper axioms. The answers to the most difficult unsolved problems in mathematics were determined the moment the terms and operators within them were defined - once you say a "circle" is "the set of points equidistant from a given center," you have already determined every single digit of pi. The problem is finding out exactly how this model works - making calculations and deductions within it. In the case of mathematics, for whatever reason, the model conforms perfectly to the real world, so any valid mathematical deduction is a valid deduction in the real world.
This is not the case in any true science, which by necessity must rely on experiment and observation. Every science operates off of some simplified model of the world, at least with our current state of knowledge. This creates two avenues of progress: discoveries within the model, which allow one to make predictions about the world, and refinements of the model, which make such predictions more accurate. If we have an internally consistent framework, theoretical manipulation within our model will never show us our error, because our model is circular and functions outside the real world. It would be like trying to predict a stock market crash by analyzing the rules of Monopoly, except that it doesn't feel absurd. There's nothing wrong with the model qua the model, the problem is with the model qua reality, and we have to look at both of them to figure that out.
Economics is one of the fields that most suffers from this problem. Our mathematician in his windowless room could generate models of international exchange rates without ever having seen currency, once we gave him the appropriate definitions and assumptions. However, when we try using these models to forecast the future, life gets complicated. No amount of experimenting within our original model will fix this without looking at the real world. At best, we come up with some equations that appear to conform to what we observe, but we run the risk that the correspondence is incidental or that there were some (temporarily) constant variables we left out that will suddenly cease to be constant and break the whole model. It is all too easy to forget that the tremendous rigor and certainty we feel when we solve the equations of our model does not translate into the real world. Getting the "right" answer within the model is not the same thing as getting the real answer.
As an obvious practical example, an individual with a serious excess of free time could develop a model of economics which assumes that agents are rational paper-clip maximizers - that agents are rational and their ultimate concern is maximizing the number of existing paper-clips. Given even more free time and a certain amount of genius, you could even model the behaviour of irrational paper-clip maximizers, so long as you had a definition of "irrational." But however refined these models are, they will remain entirely useless unless you actually have some paper-clip maximizers whose behaviour you want to predict. And even then, you would need to evaluate your predictions after they succeed or fail. Developing a great hammer is relatively useless if the thing you need to build must be put together with screws.
There is an obvious difference in the magnitude of this problem between the sciences, and it seems to track the difficulty of experimenting within them. In harder sciences where experiments are fairly straightforward, like physics and chemistry, it is not terribly difficult to make models that conform well with reality. The bleeding edge of, say, physics tends to lie in areas that are either extremely hard to observe, like the subatomic, or extremely computation-intensive. In softer sciences, experiments are very difficult, and our models rely much more on powerful assumptions, social values, and armchair reasoning.
As humans, we are both bound and compelled to use the tools we have at our disposal. The problem here is one of uncertainty. We know that most of our assumptions in economics are empirically off, but we don't know how far off, or how much that matters when we make predictions. The framework nevertheless seeps into the very core of our model of reality itself. We cannot feel the disconnect when we try to make predictions; a well-designed model feels so complete that there is no sensation of error when we apply it. This is likely because we are applying the model correctly - it just doesn't apply to reality. This leads people to have high degrees of certainty and yet frequently be wrong. It would not surprise me if the failure of many experts to appreciate the model-reality gap is responsible for a large proportion of incorrect predictions.
This, unfortunately, is not the end of the problem. It gets much worse when you add a normative element into your model, when you get to call some things, "efficient" or "healthful," or "normal," or "insane." There is also a serious question as to whether this false certainty is preferable to the vague unfalsifiability of even softer social sciences. But I shall save these subjects for future posts.
Defeating Ugh Fields In Practice
Unsurprisingly related to: Ugh fields.
If I had to choose a single piece of evidence off of which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it's this article about financial incentives and medication compliance. In short, offering people small cash incentives vastly improves their adherence to life-saving medical regimens. That's right. For a significant number of people, a small chance at winning $10-100 can be the difference between whether or not they stick to a regimen that has a very good chance of saving their life. This technique has even shown promise in getting drug addicts and psychiatric patients to adhere to their regimens, for as little as a $20 gift certificate. This problem, in the aggregate, is estimated to cost about 5% of total health care spending - $100 billion - and that may not properly account for the utility lost by those who are harmed beyond repair. To claim that people are making a reasoned decision between the payoffs of taking and not taking their medication, and yet can be persuaded to change their behaviour by a payoff of about $900 a year (or less), is to crush reality into a theory that cannot hold it. This is doubly true when you consider that some of these people were fairly affluent.
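To put the size of that incentive in perspective, here is a back-of-the-envelope sketch. The daily win probabilities are purely illustrative assumptions (the article's exact odds aren't reproduced here), chosen only to show how a small daily lottery works out to several hundred dollars a year:

```python
# Back-of-the-envelope expected value of a small daily medication lottery.
# The win probabilities below are illustrative assumptions, not figures from
# the article; the point is only that the stakes amount to a few dollars a day.

p_small, prize_small = 0.15, 10    # assumed: 15% daily chance of winning $10
p_large, prize_large = 0.01, 100   # assumed: 1% daily chance of winning $100

daily_ev = p_small * prize_small + p_large * prize_large    # $2.50 per day
annual_ev = daily_ev * 365                                  # ~$912 per year

print(f"Expected daily payout:  ${daily_ev:.2f}")
print(f"Expected annual payout: ${annual_ev:,.0f}")
```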
A likely explanation of this detrimental irrationality is something close to an Ugh field. It must be miserable having a life-threatening illness. Being reminded of it by taking a pill every single day (or more frequently) is not pleasant. Then there's the question of whether you already took the pill. Because if you take it twice in one day, you'll end up in the hospital. And Heaven forfend your treatment involves needles. Thus, people avoid taking their medicine because the process becomes so unpleasant, even though they know they really should be taking it.
As this experiment shows, this serious problem has a simple and elegant solution: make taking their medicine fun. As one person in the article describes it, using a low-reward lottery made taking his meds "like a game"; he couldn't wait to check the dispenser to see if he'd won (and take his meds again). Instead of thinking about how they have some terrible condition, patients get excited thinking about how they could be winning money. The Ugh field has been demolished, with the once-feared procedure now associated with a tried-and-true intermittent reward system. It also wouldn't surprise me in the least if people who are unlikely to adhere to a medical regimen are the kind of people who really enjoy playing the lottery.
Deception and Self-Doubt
A little while ago, I argued with a friend of mine over the efficiency of the Chinese government. I admitted he was clearly better informed on the subject than I was. At one point, however, he claimed that the Chinese government executes fewer people than the US government. This statement is flat-out wrong; China executes ten times as many people as the US, if not far more. It's a blatant lie. I called him on it, and he copped to it. The outcome is beside the point. Why does it matter that he lied? In this case, it provides weak evidence that the basics of his position were wrong - that he knew the point he was arguing was, at least on some level, incorrect.
The fact that a person is willing to lie indefensibly in order to support their side of an argument shows that they have put "winning" the argument at the top of their priorities. Furthermore, they've decided, based on the evidence they have available, that lying was a more effective way to advance their argument than telling the truth. While exceptions obviously exist, if you believe that lying to a reasonably intelligent audience is the best way of advancing your claim, this suggests that you know your claim is ill-founded, even if you don't admit this fact to yourself.