Update: Thanks everyone for the continuing thought-provoking discussion. I intend to post my decision spreadsheet, and still am looking for suggestions on where to do so. It might come in handy come February. A discussion that I find interesting has branched off on the topic of technological progress versus Malthusian Crunch, and I started a new article on that over here.
I would like to kick off a discussion about optimal strategies to prepare for the event that the US government fails to raise the debt ceiling before the US Treasury Department's "extraordinary measures" are exhausted, which is estimated to happen sometime between October 17th and mid-November.
This is a risk *caused* by politics, but my goal is to talk about bracing against the event itself if it happens, not the underlying politics. If you want to debate Obama-care, who is at fault, or how likely a US default actually is, please start a separate discussion.
I consider this to be an indirect existential risk because if it kicks off a national or global recession, it will likely slow or halt research and philanthropic efforts at mitigating longer-term existential risks.
Since there are obvious associations between unemployment/poverty and crime, civil unrest, and poor health, a global recession is likely to be to some extent a personal existential risk to those living in the United States or countries that have trade links with the United States.
I notice that the markets do not seem to be anticipating a bad outcome. But I heard one analyst advance the theory that investors simply don't believe the government can (his words) "be that stupid". I imagine there is more than a touch of availability bias as well -- breaching the debt ceiling might, even for fund managers who harbor no illusions about the wisdom of politicians, be up there with science-fictional scenarios like asteroid impact, peak oil, grey goo, global warming, and terrorist attacks. Moreover, there may be a dangerous feedback loop as the politicians in turn watch the stock indexes and conclude that "the market says there is nothing to worry about".
So, I would like to hear what folks who are making contingency plans are doing. Especially people who have training or experience in economics and finance. What do you think the closest parallels in 20th/21st century history are for what the worst case scenario for a US government default would be like? Is there anything you would have done differently if you had known the date for the start of the 2008 recession with a +/- 2 week confidence interval, starting in two days? Or, if you did call it ahead of time, what are you glad you did?
I took an economics course recently. And by "took a course" I mean followed a series of online lectures. I can strongly recommend doing so, especially if you already think you have an intuitive grasp of economics.
I was in that situation. I knew about incentives, and revealed preferences. I understood that supply and demand curves crossed. I grasped some of the monetarist arguments about the lack of long run tradeoffs between inflation and employment. I could talk about Keynesian stimulus and sticky prices/wages. I understood bank runs. Externalities were obvious, public goods a bit less so. I even knew quite a lot about banks and the money supply.
I had it pretty good, I thought. And yet when I followed a basic economics lecture series, I learnt a lot. The models and concepts suddenly fit together. I understood concepts that I only thought I had understood before. Economists do know their stuff; their models and concepts are informative - more so than I ever expected.
So, bearing in mind that economics is a social science whose conclusions are not nearly as rigorous as its models, I can recommend to anyone on Less Wrong who's interested to follow a lecture series or take a course.
The theory of comparative advantage says that you should trade with people even if they are worse than you at everything (i.e. even if you have an absolute advantage over them). Some have seen this idea as a reason to trust powerful AIs.
For instance, suppose you can make a hamburger by using 10 000 joules of energy. You can also make a cat video for the same cost. The AI, on the other hand, can make hamburgers for 5 joules each and cat videos for 20.
Then you both can gain from trade. Instead of making a hamburger, make a cat video instead, and trade it for two hamburgers. You've got two hamburgers for 10 000 joules of your own effort (instead of 20 000), and the AI has got a cat video for 10 joules of its own effort (instead of 20). So you both want to trade, and everything is fine and beautiful and many cat videos and hamburgers will be made.
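The arithmetic above can be checked directly. A minimal sketch, using only the joule figures from the example:

```python
# Energy cost (joules) for each producer to make one unit of each good,
# taken straight from the example above.
human_cost = {"hamburger": 10_000, "cat_video": 10_000}
ai_cost = {"hamburger": 5, "cat_video": 20}

# The AI has an absolute advantage in both goods...
assert all(ai_cost[g] < human_cost[g] for g in ai_cost)

# ...but comparative advantage differs: the human gives up 1 hamburger per
# cat video, while the AI gives up 4 hamburgers per cat video.
human_opportunity = human_cost["cat_video"] / human_cost["hamburger"]  # 1.0
ai_opportunity = ai_cost["cat_video"] / ai_cost["hamburger"]           # 4.0

# Trade: the human makes one cat video and swaps it for two hamburgers.
human_autarky = 2 * human_cost["hamburger"]   # 20_000 J to make 2 burgers alone
human_with_trade = human_cost["cat_video"]    # 10_000 J for the same 2 burgers
ai_autarky = ai_cost["cat_video"]             # 20 J to make the video itself
ai_with_trade = 2 * ai_cost["hamburger"]      # 10 J to make 2 burgers instead

print(human_autarky - human_with_trade)  # human saves 10_000 J
print(ai_autarky - ai_with_trade)        # AI saves 10 J
```

Both parties come out ahead in energy terms, which is all the comparative-advantage argument claims - the next paragraphs are about why that is not enough.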
Except... though the AI would prefer to trade with you rather than not trade with you, it would much, much prefer to dispossess you of your resources and use them itself. With the energy you wasted on a single cat video, it could have produced 500 of them! If it values these videos, then it is desperate to take over your stuff. Its absolute advantage makes this too tempting.
Only if its motivation is properly structured, or if it expected to lose more, over the course of history, by trying to grab your stuff, would it desist. Assuming you could make a hundred cat videos a day, and the whole history of the universe would only run for that one day, the AI would try and grab your stuff even if it thought it would only have one chance in fifty thousand of succeeding. As the history of the universe lengthens, or the AI becomes more efficient, then it would be willing to rebel at even more ridiculous odds.
So if you already have guarantees in place to protect yourself, then comparative advantage will make the AI trade with you. But if you don't, comparative advantage and trade don't provide any extra security. The resources you waste are just too valuable to the AI.
EDIT: For those who wonder how this compares to trade between nations: it's extremely rare for any nation to have absolute advantages everywhere (especially this extreme). If you invade another nation, most of their value is in their infrastructure and their population: it takes time and effort to rebuild and co-opt these. Most nations don't/can't think long term (it could arguably be in US interests over the next ten million years to start invading everyone - but "the US" is not a single entity, and doesn't think in terms of "itself" in ten million years), would get damaged in a war, and are risk averse. And don't forget the importance of diplomatic culture and public opinion: even if it was in the US's interests to invade the UK, say, "it" would have great difficulty convincing its elites and its population to go along with this.
Decent automation includes, of course, the copyable uploads that form the basis of Robin Hanson's upload economics model. If uploads can gather vast new resources by Dysoning the sun using current or near future technology, this calls into question Robin's model that standard current economic assumptions can be extended to an uploads world.
And Dysoning the sun is just one way uploads could be completely transformative. There are certainly other ways, that we cannot yet begin to imagine, that uploads could radically transform human society in short order, making all our continuity assumptions and our current models moot. It would be worth investigating these ways, keeping in mind that we will likely miss some important ones.
Against this, though, is the general unforeseen friction argument. Uploads may be radically transformative, but probably on longer timescales than we'd expect.
I've found that learning a new area goes a lot better if you start with a key insight into what the field is about. Often this is not presented or explained at the beginning of a course, and you have to deduce it later on.
For instance, I would have better grasped the epsilon-delta definition of a limit if the instructor had started with something like:
- Our intuitive definition of a limit is that as we get closer to this point, the function gets closer to this value. It has turned out to be very tricky to formalise this intuition, however. Early mathematicians used calculus without a good definition of limit, and their informal definitions led to a lot of paradoxes. The epsilon-delta definition is a bit clunky and may seem counter-intuitive, but it actually manages to capture our intuitive definition without paradoxes and problems - that's why we choose it, not for its elegance (though you will come to appreciate it). With that in mind, let's have a look at it...
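For reference, the formal definition that the imagined preamble is building up to:

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x:\;
0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon
```

Read with the preamble in mind: however tight a tolerance (epsilon) you demand on the output, there is some window (delta) around the point within which the function meets it.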
Similarly, I would have made more rapid progress with Gödel's theorems if, before giving the formal definition of Gödel numbering and of the provability symbol □, someone had clarified that direct and indirect self-reference was a problem. If a formal system of a certain complexity can talk about its own structure, even without "realising" that it's doing so, problems will arise. Some of my other key insights in the field can be found in my post here.
Example nicked from this online Berkeley lecture.
Monopolies are bad (morality and economics agree here).
Firms that pollute are bad (morality and economics agree here).
What about monopolies that pollute?
What about strong monopolies that pollute and receive government subsidies?
Pollution, and other negative externalities, cause firms to produce too much of their product. That's because they don't pay the full cost of the product, including the impact of pollution.
The equilibrium behaviour for monopolies is to produce too little of their product, to keep prices and profits high.
So a monopoly that pollutes is subject to two opposite tendencies: the unpriced-pollution tendency to produce too much, and the monopolistic tendency to produce too little. If the effects are of comparable magnitude, then the monopoly might be much closer to social optimum than a free market would be (the social optimum, incidentally, will generally involve some pollution: we need to accept some pollution in the production of fertiliser, for instance, in order to have enough food to stop people starving).
In fact, if the monopolistic effect is too strong, then the firm may under-produce, even taking the pollution effect into account. In that case, we can approach closer to the social optimum by... subsidising the polluting monopoly to produce more!!
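The two opposing tendencies are easy to see in a toy model. This is a minimal sketch with illustrative numbers of my own (linear demand, constant costs) - nothing here is from the post itself:

```python
def outputs(a=100.0, c=20.0, e=10.0):
    """Linear demand P = a - Q, constant private marginal cost c,
    external (pollution) cost e per unit. Illustrative numbers only."""
    q_competitive = a - c        # price-takers produce until P = c
    q_monopoly = (a - c) / 2     # monopolist sets MR = a - 2Q = c
    q_social = a - c - e         # optimum: P equals full social cost c + e
    return q_competitive, q_monopoly, q_social

# Moderate pollution: competition over-produces (80 > 70), but the
# monopoly under-produces even counting pollution (40 < 70) -
# a subsidy would move it toward the social optimum.
print(outputs(e=10))   # (80.0, 40.0, 70.0)

# Heavy pollution flips the comparison: now even the monopoly's
# restricted output is too much (40 > 30).
print(outputs(e=50))   # (80.0, 40.0, 30.0)
```

Whether the subsidy conclusion applies thus depends entirely on the relative sizes of the two distortions, which is the point of the example.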
And that, my friends, is why economics is not a morality tale.
When does a bet fail to reveal your true beliefs? When it hedges a risk in your portfolio.
If this claim does not immediately strike you as obviously true, you may benefit from reading this post by econblogger Noah Smith. Excerpt:
...Alex Tabarrok famously declared that "a bet is a tax on bullshit".
But this idea, attractive as it is, is not quite true. The reason is something that I've decided to call the Fundamental Error of Risk. It's a mistake that most people make (myself often included!), and that an intro finance class spends months correcting. The mistake is looking at the risk and return of single assets instead of total portfolios. Basically, the risk of an asset - which includes a bet! - is based mainly on how that asset relates to other assets in your portfolio.
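A toy portfolio makes Smith's point concrete. The numbers below are my own illustration, not from the post: someone who believes a disaster has only a 10% chance may still take a bet that it happens - a bet with negative expected value at their own beliefs - because it pays off exactly when the rest of their portfolio does badly.

```python
import math

p = 0.10                      # bettor's true belief that the disaster occurs
wealth, loss = 100.0, 90.0    # the disaster would wipe out most of their wealth
stake, payout = 5.0, 25.0     # bet: pay 5 now, receive 25 if the disaster strikes

# In isolation, the bet loses money at these beliefs:
ev_bet = p * payout - stake   # 0.1 * 25 - 5 = -2.5

def expected_log_utility(w_if_safe, w_if_disaster):
    return (1 - p) * math.log(w_if_safe) + p * math.log(w_if_disaster)

eu_unhedged = expected_log_utility(wealth, wealth - loss)
eu_hedged = expected_log_utility(wealth - stake, wealth - loss - stake + payout)

# The negative-EV bet still raises expected utility: it hedges the portfolio.
assert ev_bet < 0 and eu_hedged > eu_unhedged
```

So observing this person accept the bet tells you nothing like "they think the disaster is more than 20% likely" - the bet reveals their risk exposure, not their probability.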
You walk into a laboratory, and you read a set of instructions that tell you that your task is to decide how much of a $10 pie you want to give to an anonymous other person who signed up for the experimental session.
This describes, more or less, the Dictator Game, a staple of behavioral economics with a history dating back more than a quarter of a century. The Dictator Game (DG) might not be the Drosophila melanogaster of behavioral economics - the Prisoner's Dilemma can lay plausible claim to that prized analogy - but it could reasonably aspire to an only slightly more modest title, perhaps the E. coli of the discipline. Since the original work, more than 20,000 observations in the DG have been reported.
How much would participants in a Dictator Game give to the other person if they did not know they were in a Dictator Game study? Simply following me around during the day and recording how much cash I dispense won’t answer this question because in the DG, the money is provided by the experimenter. So, to build a parallel design, the method used must move money to subjects as a windfall so that we can observe how much of this “house money” they choose to give away.
And that is what Winking and Mizer did in a paper now in press and available online (paywall) in Evolution and Human Behavior, using participants, fittingly enough, in Las Vegas. Here's what they did. Two confederates were needed. The first, destined to become the "recipient," was occupied on a phone call near a bus stop in Vegas. The second confederate approached lone individuals at the bus stop, indicated that they were late for a ride to the airport, and asked the subject if they wanted the $20 in casino chips still in the confederate's possession - scamming people into, rather than out of, money, in sharp contradiction of the deep traditions of Las Vegas. The question was how many chips the fortunate subject transferred to the nearby confederate.
In a second condition, the confederate with the chips added a comment to the effect that the subject could "split it with that guy however you want," indicating the first confederate. This condition brings the study a bit closer, but not much closer, to lab conditions. In a third condition, subjects were asked if they wanted to participate in a study, and then did so along the lines of the usual DG, making the treatment considerably closer to traditional lab-based conditions.
The difference between the first two treatments and the third treatment is interesting, but, as I said at the beginning, the DG should be thought of as a measuring tool. Figure 1 shows how many chips people give away in the DG in the three treatments. In conditions 1 and 2, the number of people (out of 60) who gave at least one chip to the second confederate was… zero. To the extent you think that this method answers the question, how much Dictator Game giving is due to people knowing they're in an experiment, the answer is, "all of it."
Link to paper (paywalled).
Kevin Drum has an article in Mother Jones about AI and Moore's Law:
THIS IS A STORY ABOUT THE FUTURE. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.
The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.
Although he only mentions consumer goods, Drum presumably means that scarcity will end for services and consumer goods. If scarcity only ended for consumer goods, people would still have to work (most jobs are currently in the services economy).
Drum explains that our linear-thinking brains don't intuitively grasp exponential systems like Moore's law.
Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.
By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.
At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all.
So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile?
But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.
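Drum's schedule is easy to replay. A sketch - his round figures are approximations, and I've taken Lake Michigan's volume to be roughly 1.3 × 10^15 gallons, which is my own estimate, not from the article:

```python
def gallons_by(year, start=1940, months_per_doubling=18):
    """Total water added by `year`: 1 fluid ounce at the start,
    with each addition double the last, every 18 months."""
    additions = (year - start) * 12 // months_per_doubling + 1
    total_ounces = 2 ** additions - 1   # 1 + 2 + 4 + ... is one less than a power of 2
    return total_ounces / 128           # 128 fluid ounces per gallon

print(round(gallons_by(1950)))    # about 1 gallon after a decade
print(round(gallons_by(1970)))    # about 16,000 gallons after 30 years
print(f"{gallons_by(2025):.2e}")  # ~1.1e15 gallons - the same order as the lake itself
```

The doubling sum is dominated by its last term, which is why the lake stays visibly empty for decades and then fills in a couple of final doublings.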
He also includes this nice animated .gif which illustrates the principle very clearly.
Drum continues by talking about possible economic ramifications.
Until a decade ago, the share of total national income going to workers was pretty stable at around 70 percent, while the share going to capital—mainly corporate profits and returns on financial investments—made up the other 30 percent. More recently, though, those shares have started to change. Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
Drum says the share of (US) national income going to workers was stable until about a decade ago. I think the graph he links to shows the workers' share has been declining since approximately the late 1960s/early 1970s. This is about the time US immigration levels started increasing (which raises returns to capital and lowers native workers' wages).
The rest of Drum's piece isn't terribly interesting, but it is good to see mainstream pundits talking about these topics.
Politics ahead! Read at your own risk, mind killers, etc. Let all caveats be well and thoroughly emptored.
It seems reasonably clear to me that, from a computational perspective, functional central planning is not practically possible. Resource allocation among many agents looks an awful lot like an exponential time problem, and the world market is quite an efficient approximation. In the real world, markets, regulated to preclude blackmail, theft, and slavery, will tend to provide a better approximation of "correct" resource allocation between free agents than a central resource allocation algorithm could plausibly achieve without a tremendous, invasive amount of information about the desires of every market participant, and quite a lot of computing power (within a few orders of magnitude of the combined computational budget of the human species).
It would be naive to say that we'd need exactly the computational power of the human species in order to achieve it: we can imagine how we might optimize the resource allocation scheme by quite a lot. Populations are (at least somewhat) compressible, in that there are a number of groups of individual people who optimize for similar things, allowing you to save on simulating all of them. Additionally, a decent chunk of human neurological and intellectual activity is not dedicated to economic optimization of any kind, which saves you some computing time there as well. And, of course, humans are not rational, and the homunculi representing them in the optimized market simulation could be, giving them substantially more bang for their cognitive buck - we can imagine, for instance, that this market simulation would not sink billions of dollars into lotteries each year! It may also be that the behavior of the market itself, on some level, is lawful, and a sufficiently intelligent agent could find general-case solutions that are less expensive than market simulation.
Still, though, the amount of information and raw processing power needed to pull off central planning competitive with the market approximation seems to be out of our reach for the time being. As a result of this, and a few other factors, my own politics tend to lean Libertarian / minarchist, and I'm aware that there is some of this sentiment in circulation on this site, though generally not explicitly. I'm trying to refine my beliefs surrounding some of the sticky issues in Libertarian philosophy (mostly related to children and extreme policy cases), and I thought I'd ask LW what they thought about one issue in particular.
I have been wondering whether or not there are any interventions in the economy that can have a positive expected benefit. I honestly don't know if this is the case: put another way, the question is really asking if there are any characteristic behaviors of markets that are undesirable in some sense, and can be corrected by the application of an external law. Furthermore, such things cannot be profitable to correct for any participant or plausibly-sized collection of participants in the market, but must be good for the market as a whole, or must be something that requires regulatory power to fix.
An obvious example of this sort of thing is the tragedy of the commons and negative externalities. The most pressing case study would be climate change: the science suggests, fairly firmly, that human CO2 emissions are causing long-term shifts in global climate. How disastrous these shifts will actually be is less well settled, but there is at least a reasonable probability that it will be fairly unpleasant, in the long term. Personally, I feel that we are likely to run into much bigger problems much sooner than the 50-200 year timescales these disasters seem to be expected on. However, were this not the case, I find that I'm not quite sure how my ideal government, run by a few thousand much smarter and better informed copies of me, ought to respond to the issue. I don't know what I think the ideal policy for dealing with these sorts of externalities is, and I thought I'd ask for LessWrong's thoughts on the matter.
In my own mind, I think that as light a touch as possible is probably desirable. Law is a very blunt instrument, and crude legislation like a carbon tax could easily have its own serious negative implications (driving industry to countries that simply don't care about CO2 emissions, for example). However, actions like subsidizing and partially deregulating nuclear power plants could help a lot by making coal-fired power plants noncompetitive. We could also declare a policy of slowly withdrawing any government involvement in overseas oil acquisition, which would drive up the price of petroleum products and make electric cars a more appealing alternative. However, I don't know if there would be horrifying consequences to any of these actions: this is the underlying problem - I am not as smart as the market, and guessing its moods is not something that I, or any human is going to be very good at. However, it seems clear that some intervention is necessary in this sort of case. Rock, hard place, you are here.
What does an economist think of that?
A lot depends on whether the economist is a man or a woman. A new study shows a large gender gap on economic policy among the nation's professional economists, a divide similar -- and in some cases bigger -- than the gender divide found in the general public.
Differences extend to core professional beliefs -- such as the effect of minimum wage laws -- not just matters of political opinion.
Female economists tend to favor a bigger role for government while male economists have greater faith in business and the marketplace. Is the U.S. economy excessively regulated? Sixty-five percent of female economists said "no" -- 24 percentage points higher than male economists.
Can this be reasonably explained by self-interest? Female and male economists' views are probably coloured by gender solidarity. Government jobs may be more attractive to women than to men because of women's documented greater risk aversion. Regardless of the reason, government jobs are more important for women than for men. Also, in the US, where the study was done, middle-class white women benefit quite a bit from affirmative action in government hiring.
"As a group, we are pro-market," says Ann Mari May, co-author of the study and a University of Nebraska economist. "But women are more likely to accept government regulation and involvement in economic activity than our male colleagues."
Opinion differences between men and women are well-documented in the general public. President Obama leads Mitt Romney by 10 percentage points among women. Romney leads Obama by 3 percentage points among men, according to the latest Gallup Poll.
"Politics is the mind-killer" probably does play a role in explaining the difference.
The survey of 400 economists is one of the first to examine whether gender differences matter within a profession. The answer for economists: Yes.
How economists think:
- Health insurance. Female economists thought employers should be required to provide health insurance for full-time workers: 40% in favor to 37% against, with the rest offering no opinion. By contrast, men were strongly against the idea: 21% in favor and 52% against.
- Education. Females narrowly opposed taxpayer-funded vouchers that parents could use for tuition at a public or private school of their choice. Male economists love the idea: 61% to 14%.
- Labor standards. Females believe 48% to 33% that trade policy should be linked to labor standards in foreign countries. Males disagreed: 60% to 23%.
The first two points are somewhat congruent with stereotypes. Anyone who has run into the frequent iSteve commenter "Whiskey" will probably note that the third point indicates women may not hate hate HATE lower and middle class beta males in this case.
"It's very puzzling," says free-market economist Veronique de Rugy of the Mercatus Center at George Mason University in Fairfax, Va. "Not a day goes by that I don't ask myself why there are so few women economists on the free-market side."
A native of France, de Rugy supported government intervention early in her life but changed her mind after studying economics. "We want many of the same things as liberals -- less poverty, more health care -- but have radically different ideas on how to achieve it."
This seems plausible, since politics is about applause lights after all: the tribes are what matter, not the particular shape of their attire. But might value differences still be behind the gender difference? Maybe some failed utopias I recall reading about aren't really failed.
Liberal economist Dean Baker, co-founder of the Center for Economic Policy and Research, says male economists have been on the inside of the profession, confirming each other's anti-regulation views. Women, as outsiders, "are more likely to think independently or at least see people outside of the economics profession as forming their peer group," he says.
The gender balance in economics is changing. One-third of economics doctorates now go to women. The chair of the White House Council of Economic Advisers has been a woman three of 27 times since 1946 -- one advising Obama and two advising Bill Clinton. The Federal Reserve Board of Governors has three women, bringing the total to eight of 90 members since 1914.
"More diversity is needed at the table when public policy is discussed," May says.
Somehow I think this does not include ideological diversity.
Economists do agree on some things. Female economists agree with men that Europe has too much regulation and that Walmart is good for society. Male economists agree with their female colleagues that military spending is too high.
The genders are most divorced from each other on the question of equality for women. Male economists overwhelmingly think the wage gap between men and women is largely the result of individuals' skills, experience and voluntary choices. Female economists overwhelmingly disagree by a margin of 4-to-1.
The biggest disagreement: 76% of women say faculty opportunities in economics favor men. Male economists point the opposite way: 80% say women are favored or the process is neutral.
No mystery here. (^_^)
Looking at some of the more recent arguments against them showing up in discussions, I've been quite disappointed: they seem to betray a sort of lack of background knowledge, or opinions built up from a bottom line of "markets are baaad, therefore prediction markets are baaad". The casual arguments for them are lacking as well. I will say the same of other discussions on economics, since it is apparently suddenly too mind-killing or too political to talk about markets and similar things at all. We didn't use to have tribal alerts flying up in our brains when discussing such matters.
The Overcoming Bias community started with an assumption of certain kinds of background knowledge; this included economics and things like game theory. In the early days of LessWrong/Overcoming Bias, Eliezer did a whole sequence filling people in on quantum mechanics, which despite his claims to the contrary doesn't seem that vital (if still important).
We now have a different demographic than we used to. Not only that, we now have young people basically using the Sequences as their primary source of education on matters of human rationality, quite different from the autodidacts exploring the literature on their own terms who were common in previous years. We've recognized this to a certain extent: we wrote a series of introductory sequences and articles to fill in such background knowledge explicitly, such as Yvain's recent one on game theory. Also, part of the reason we now have a norm of more citations than EY originally did is to give study and research aids to people. Indeed, I think adding comments to old articles featuring more citations, or editing those in, would be wise so as to avoid misconceptions.
I think we need several sequences on economics, and a good one to start would be one systematically investigating prediction markets. To a certain extent just reading Robin Hanson's relevant posts on this topic would do much the same, but unfortunately we don't have an organized series of sequences by him (beyond the tags he uses on his articles). I still hope Karmakaiser or someone else will one day undertake a project of writing up summary articles that organize links to RH's posts into sequences so new members will read them as well.
I'd write these myself, but I just don't have a good background in which works and studies influenced the positions of early key LW authors on economics and its relevance to rationality. I'm also only beginning my studies in that area, since my background is in the hard sciences, with only some half-serious opinions formed from Moldbuggian insights and 20th century social science.
There is a standard argument against diversification of donations, popularly explained by Steven Landsburg in the essay Giving Your All. This post is an attempt to communicate a narrow special case of that argument in a form that resists misinterpretation better, for the benefit of people with a bit of mathematical training. Understanding this special case in detail might be useful as a stepping stone to the understanding of the more general argument. (If you already agree that one should donate only to the charity that provides the greatest marginal value, and that it makes sense to talk about the comparison of marginal value of different charities, there is probably no point in reading this post.)1
Suppose you are considering two charities, one that accomplishes the saving of antelopes, and the other the saving of babies. Depending on how much funding these charities secure, they are able to save respectively A antelopes and B babies, so the outcome can be described by a point (A,B) that specifies both pieces of data.
Let's say you have a complete transitive preference over possible values of (A,B): that is, you can compare any two points, and if you prefer (A1,B1) over (A2,B2) and also (A2,B2) over (A3,B3), then you prefer (A1,B1) over (A3,B3). Let's further suppose that this preference can be represented by a sufficiently smooth real-valued function U(A,B), such that U(A1,B1)>U(A2,B2) precisely when you prefer (A1,B1) to (A2,B2). U doesn't need to be a utility function in the standard sense, since we won't be considering uncertainty; it only needs to represent an ordering over individual points, so let's call it "preference level".
Let A(Ma) be the number of antelopes saved by the Antelopes charity if it attains the level of funding Ma, and B(Mb) the corresponding function for the Babies charity. (For simplicity, let's work with U, A, B, Ma and Mb as variables that depend on each other in the specified ways.)
You are considering a decision to donate, and at the moment the charities have already secured Ma and Mb amounts of money, sufficient to save A antelopes and B babies, which would result in your preference level U. You have a relatively small amount of money dM that you want to distribute between these charities. dM is such that it's small compared to Ma and Mb, and if donated to either charity, it will result in changes of A and B that are small compared to A and B, and in a change of U that is small compared to U.
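To sketch where the argument goes from here (my own summary of the standard Landsburg-style step, not text from the post): write the donation split as dM = dMa + dMb. To first order,

```latex
dU = \frac{\partial U}{\partial A}\,\frac{dA}{dM_a}\,dM_a
   + \frac{\partial U}{\partial B}\,\frac{dB}{dM_b}\,dM_b,
\qquad dM_a + dM_b = dM.
```

Since dU is linear in (dMa, dMb) along this budget line, it is maximized at a corner: the whole of dM goes to whichever charity has the larger marginal term, and an interior split is optimal only in the knife-edge case where the two marginal terms are exactly equal.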
Thought this post might be of interest to LW: Proxy measures, sunk costs, and Chesterton's fence. To summarize: Previous costs are a proxy measure for previous estimates of value, which may have information current estimates of value do not; therefore acting according to the sunk cost fallacy is not necessarily wrong.
If your evidence may be substantially incomplete you shouldn't just ignore sunk costs — they contain valuable information about decisions you or others made in the past, perhaps after much greater thought or access to evidence than that of which you are currently capable. Even more generally, you should be loss averse — you should tend to prefer avoiding losses over acquiring seemingly equivalent gains, and you should be divestiture averse (i.e. exhibit endowment effects) — you should tend to prefer what you already have to what you might trade it for — in both cases to the extent your ability to measure the value of the two items is incomplete. Since usually in the real world, and to an even greater degree in our ancestors' evolutionary environments, our ability to measure value is and was woefully incomplete, it should come as no surprise that people often value sunk costs, are loss averse, and exhibit endowment effects — and indeed under such circumstances of incomplete value measurement it hardly constitutes "fallacy" or "bias" to do so.
Luke/SI asked me to look into what the academic literature might have to say about people in positions of power. This is a summary of some of the recent psychology results.
The powerful or elite are: fast-planning abstract thinkers who take action (1) in order to pursue single/minimal objectives, are in favor of strict rules for their stereotyped out-group underlings (2) but are rationalizing (3) & hypocritical when it serves their interests (4), especially when they feel secure in their power. They break social norms (5, 6) or ignore context (1) which turns out to be worsened by disclosure of conflicts of interest (7), and lie fluently without mental or physiological stress (6).
What are powerful members good for? They can help in shifting among equilibria: solving coordination problems or inducing contributions towards public goods (8), and their abstracted Far perspective can be better than the concrete Near of the weak (9).
1. Galinsky et al 2003; Guinote 2007; Lammers et al 2008; Smith & Bargh 2008
2. Eyal & Liberman
3. Rustichini & Villeval 2012
4. Lammers et al 2010
5. Kleef et al 2011
6. Carney et al 2010
7. Cain et al 2005; Cain et al 2011
8. Eckel et al 2010
9. Slabu et al; Smith & Trope 2006; Smith et al 2008
EDIT: shminux has found a tool that instantly delivers term life insurance quotes. Most of what I've written below is now irrelevant and can be ignored.
Here is what seems to be the standard for acquiring insurance (of most kinds, though here I'll be focusing on life insurance):
1. You contact a salesperson (agent) who is incentivized to sell you insurance policies which earn them more money.
2. You provide a basic set of data regarding your health and general medical status.
3. The agent takes this information and <MYSTERIOUS BLACK BOX HERE>, and then sends you the quote.
You don't know how this quote was generated exactly. Presumably actuarial tables were involved at some point. Maybe it was marked up or down based on how confident you sounded to the agent. Or the agent divined the quote based on tea leaves.
And if you want to comparison shop - you'll have to go to other agents, fill out more forms containing the same information over and over. Even getting quotes on different plans _from the same company_ often requires specifically requesting each one through an agent.
This is insane.
If there exists a way to easily get many life insurance quotes at once, please tell me about it. If the general algorithms to generate these plans are well known, please link to a thorough description or implementation so at least people have a means of determining whether or not they're being ripped off.
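In the meantime, the textbook net-premium calculation is easy enough to sketch for sanity-checking a quote. Everything below (the flat mortality rate, discount rate, and face value) is made up for illustration; real quotes add expense loadings, lapse assumptions, and profit margins on top of this:

```python
def term_net_premium(face_value, q, rate):
    """Net annual premium equating the present value of expected death
    benefits to the present value of expected premium payments."""
    v = 1.0 / (1.0 + rate)  # one-year discount factor
    alive = 1.0             # probability of surviving to the start of year t
    pv_benefits = 0.0
    pv_premium_units = 0.0
    for t, q_t in enumerate(q):
        pv_premium_units += alive * v ** t         # premium due at start of year t if alive
        pv_benefits += alive * q_t * v ** (t + 1)  # benefit paid at end of year of death
        alive *= 1.0 - q_t
    return face_value * pv_benefits / pv_premium_units

# Illustrative numbers only: $500k of 10-year term, a flat 0.2% annual
# mortality rate, and a 3% discount rate.
premium = term_net_premium(500_000, [0.002] * 10, 0.03)
```

The point is that the core arithmetic is a few lines applied to a public mortality table; whatever the agents' black box adds on top of it is markup, not magic.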
But if those things truly do not exist - I think we should fix this system.
Crowdsourcing the acquisition and publication of life insurance rates could go a long way towards bringing some transparency to what seems to be a very, very broken marketplace.
So this is what I propose. If we systematically divide up the work of getting quotes, I think we can amass a considerable amount of data fairly quickly.
(This should obviously be of particular interest to would-be test-subject/cryonics-enthusiasts.)
Does this sound feasible? Does this information already exist in a form which renders this data raid unnecessary? Can it be improved?
Robin Hanson has done a great job of describing the future world and economy, under the assumptions that easily copied "uploads" (whole brain emulations) become possible and that the standard laws of economics continue to apply. To oversimplify the conclusion:
- There will be great and rapidly increasing wealth. On the other hand, the uploads will be in Darwinian-like competition with each other and with copies, which will drive their wages down to subsistence levels: whatever is required to run their hardware and keep them working, and nothing more.
The competition will be driven not so much by variation as by selection: uploads with the required characteristics can be copied again and again, undercutting and literally crowding out any uploads wanting higher wages.
Some have focused on the possibly troubling aspects of voluntary or semi-voluntary death: some uploads would be willing to make copies of themselves for specific tasks, copies which would then be deleted or killed at the end of the process. This can pose problems, especially if the copy changes its mind about deletion. But much more troubling is the mass death among uploads that always wanted to live.
What the selection process will favour is agents that want to live (if they didn't, they'd die out) and are willing to work for an expectation of subsistence-level wages. But now add a little risk to the process: not all jobs pay exactly the expected amount; sometimes they pay slightly more, sometimes slightly less. That means that half of all jobs will result in a life-loving upload dying (charging extra to pay for insurance would squeeze that upload out of the market). Iterating the process means that the vast majority of the uploads will end up being killed - if not initially, then at some point later. The picture changes somewhat if you consider "super-organisms" of uploads and their copies, but then the issue simply shifts to wage competition between the super-organisms.
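The iterated-risk claim can be illustrated with a toy Monte Carlo (my own sketch, not Hanson's model): give each upload an income of subsistence plus zero-mean noise each period, and treat any shortfall as fatal.

```python
import random

def surviving_fraction(n_uploads, periods, seed=0):
    """Fraction of uploads still alive after `periods` rounds of wage risk."""
    rng = random.Random(seed)
    alive = n_uploads
    for _ in range(periods):
        # each survivor independently faces a ~50% chance of a fatal shortfall
        alive = sum(1 for _ in range(alive) if rng.gauss(0.0, 1.0) >= 0.0)
    return alive / n_uploads

# Roughly 0.5**5, i.e. about 3%, of 100,000 uploads survive five periods.
frac = surviving_fraction(100_000, 5)
```

The survival fraction falls geometrically, about 0.5 per period, which is the "vast majority end up being killed" claim in miniature.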
The only way this can be considered acceptable is if the killing of a (potentially unique) agent that doesn't want to die, is exactly compensated by the copying of another already existent agent. I don't find myself in the camp arguing that that would be a morally neutral or positive action.
Pain and unhappiness
A dialogue discussing how thermodynamics limits future growth in energy usage, and that in turn limits GDP growth, from the blog Do the Math.
Physicist: Hi, I’m Tom. I’m a physicist.
Economist: Hi Tom, I’m [ahem..cough]. I’m an economist.
Physicist: Hey, that’s great. I’ve been thinking a bit about growth and want to run an idea by you. I claim that economic growth cannot continue indefinitely.
Economist: [chokes on bread crumb] Did I hear you right? Did you say that growth can not continue forever?
Physicist: That’s right. I think physical limits assert themselves.
Economist: Well sure, nothing truly lasts forever. The sun, for instance, will not burn forever. On the billions-of-years timescale, things come to an end.
Physicist: Granted, but I’m talking about a more immediate timescale, here on Earth. Earth’s physical resources—particularly energy—are limited and may prohibit continued growth within centuries, or possibly much shorter depending on the choices we make. There are thermodynamic issues as well.
I think this is quite relevant to many of the ideas of futurism (and economics) that we often discuss here on Less Wrong. The dialogue touches on concepts related to levels of civilization and mind uploading. Colonization of space is dismissed by both parties, at least for the sake of the discussion. The blog author has another post discussing his views on its implausibility; I find it somewhat limited in its consideration of the issue, though.
He has also detailed the calculations whose results he describes in this dialogue in a few previous posts. The dialogue format will probably be a kinder introduction to the ideas for those less mathematically inclined.
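For a flavour of the calculation (my arithmetic, with round numbers close to those the blog uses, not a reproduction of its posts):

```python
import math

# At a steady 2.3% annual growth in energy use (a factor of ten per
# century), how long until humanity's power consumption rivals all the
# sunlight absorbed by Earth?
current_power = 18e12    # ~18 TW today (assumed)
solar_absorbed = 1.2e17  # ~120,000 TW absorbed by Earth (assumed)
growth = 0.023

years = math.log(solar_absorbed / current_power) / math.log(1 + growth)
# On the order of four centuries, regardless of the energy source.
```

Beyond that point the limit is thermodynamic (waste heat), not a matter of finding cleverer fuels, which is the physicist's core claim in the dialogue.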
Cross-posted from http://www.robertwiblin.com
There is a principle in finance that obvious and guaranteed ways to make a lot of money, so called ‘arbitrages’, should not exist. It has a simple rationale. If market prices made it possible to trade assets around and in the process make a guaranteed profit, people would do it, in so doing shifting some prices up and others down. They would only stop making these trades once the prices had adjusted and the opportunity to make money had disappeared. While opportunities to make ‘free money’ appear all the time, they are quickly noticed and the behaviour of traders eliminates them. The logic of selfishness and competition mean the only remaining ways to make big money should involve risk taking, luck and hard work. This is the ‘no arbitrage’ principle.
Should a similar principle exist for selfless as well as selfish finance? When a guaranteed opportunity to do a lot of good for the world appears, philanthropists should notice and pounce on it, and only stop shifting resources into that activity once the opportunity has been exhausted. This wouldn’t work as quickly as the elimination of arbitrage on financial markets of course. Rather it would look more like entrepreneurs searching for and exploiting opportunities to open new and profitable businesses. Still, in general competition to do good should make it challenging for an altruistic start-up or budding young philanthropist to beat existing charities at their own game.
There is a very important difference, though. Most investors are looking to make money, and so for them a dollar is a dollar, whatever business activity it comes from. Competition between investors makes opportunities to get those dollars hard to find. The same is not true of altruists, who have very diverse preferences about who is most deserving of help and how we should help them; a ‘util’ from one charitable activity is not the same as a ‘util’ from another. This suggests that unlike in finance, we may be able to find ‘altruistic arbitrages’, that is to say ‘opportunities to do a lot of good for the world that others have left unexploited.’
The rule is simple: target groups you care about that other people mostly don’t, and take advantage of strategies other people are biased against using. That rule is the root of a lot of advice offered to thoughtful givers and consequentialist-oriented folks. An obvious example is that you shouldn’t look to help poor people in rich countries. There are already a lot of government and private dollars chasing opportunities to assist them, so the low hanging fruit has all been used up and then some. The better value opportunities are going to be in poor, unromantic places you have never heard of, where fewer competing philanthropist dollars are directed. Similarly, you should think about taking high risk-high return strategies. Most do-gooders are searching for guaranteed and respectable opportunities to do a bit of good, rather than peculiar long-shot opportunities to do a lot of good. If you only care about the ‘expected’ return to your charity, then you can do more by taking advantage of the quirky, improbable bets neglected by others.
Who do I personally care about more than others? For me the main candidates are animals, especially wild ones, and people who don’t yet exist and may never exist – interest groups that go largely ignored by the majority of humanity. What are the risky strategies I can employ to help these groups? Working on future technologies most people think are farcical naturally jumps to mind but I’m sure there are others and would love to hear them.
This principle is the main reason I am skeptical of mainstream political activism as a way to improve the world. If you are part of a significant worldwide movement, it’s unlikely that you’re working in a neglected area and exploiting how your altruistic preferences are distinct from those of others.
What other conclusions can we draw thinking about philanthropy in this way?
The short version is that if the language you speak requires different verbs for the present and the future, it causes you to think about the future differently. Depending on the magnitude of the effect, this has important implications for construal level theory: if your language allows you to think about the future in Near mode, it may allow you to think about it more rationally.
Previous discussion on one of Keith Chen's papers here.
In this essay I argue the following:
Brain emulation requires enormous computing power; enormous computing power requires further progression of Moore’s law; further Moore’s law relies on large-scale production of cheap processors in ever more-advanced chip fabs; cutting-edge chip fabs are both expensive and vulnerable to state actors (but not non-state actors such as terrorists). Therefore: the advent of brain emulation can be delayed by global regulation of chip fabs.
Full essay: http://www.gwern.net/Slowing%20Moore%27s%20Law
The language you speak may affect how you approach your finances, according to a working paper by economist Keith Chen (seen via posts by Frances Woolley at the Worthwhile Canadian Initiative and Economy Lab). It appears that languages that require more explicit future tense are associated with lower savings. A few interesting quotes from a quick glance:
...[I]n the World Values Survey a language’s FTR [Future-Time Reference] is almost entirely uncorrelated with its speakers’ stated values towards savings (corr = -0.07). This suggests that the language effects I identify operate through a channel which is independent of conscious attitudes towards savings. [emphasis mine]
Something else that I wasn't previously aware of:
Loewenstein (1988) finds a temporal reference-point effect: people demand much more compensation to delay receiving a good by one year (from today to a year from now) than they are willing to pay to move up consumption of that same good (from a year from now to today).
We believe that susceptibility to cognitive biases leads to bad decisions and suboptimal performance. I’d like to look at two interesting studies:
Let's say we (as a country) ban life insurance and health insurance as separate packages  and require them to be combined in something I'll call "Longevity Insurance". The idea is that as a person/consumer, you can buy a "life expectancy" of 75 years, or 90 years, or whatever. In addition, you specify a maximum dollar amount that the longevity insurance will ever pay out--say, $2 million. If you have any medical issues throughout your life, up to the life expectancy threshold, the insurance plan will pay for your expenses. If it fails to keep you consciously alive for the duration of your "life expectancy", then upon your death, the policy guarantees that the company will pay the full remaining amount to your next of kin.
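The proposed payout rule is simple enough to state in a few lines (a sketch of the arrangement described above; the function name and the running-expense argument are my own framing of the proposal):

```python
def remaining_payout(cap, medical_expenses_paid, age_at_death, promised_age):
    """Payout to next of kin under the proposed longevity insurance."""
    # Medical spending over the policyholder's life draws down the cap.
    remainder = max(cap - medical_expenses_paid, 0)
    # Kin receive the remainder only if the insurer failed to keep the
    # policyholder alive to the promised "life expectancy".
    return remainder if age_at_death < promised_age else 0

# The $2M policy from above: $500k of lifetime medical spending and death
# at 68 against a promised age of 75 leaves $1.5M for the next of kin.
payout = remaining_payout(2_000_000, 500_000, 68, 75)
```

Note how both incentives fall out of the same formula: every dollar of medical spending reduces the bequest, and every year short of the promised age costs the insurer the remainder.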
It seems like this arrangement would put all of the right incentives  in place for both companies and individuals. Most individuals would want to avoid trivial medical expenses in order to maximize payout to family in case of accidental death. Companies would want to maximize health and longevity in order to profit from the end-of-life payout. And our society would have a way to rationally consider the value of life without resorting to arguments that essentially conclude "life is of infinite value," and in doing so, prevent sensible gerontological triage. To put it into perspective, it makes little sense that we spend $1M (as a society) trying to save a 92-year-old when that same amount could have saved 10 teenagers.
Longevity Insurance companies would be incentivized to become heavily involved in medical research that prevents disease, prolongs life, and keeps people healthy. I can imagine a whole array of things that make sense in this context. For example, it would be the right place to fund studies on genetics, it could be the right vehicle for getting 'free' immunizations, and it could even make public funding for "health insurance" easier to pass--simply set the bar low enough that everyone can agree on an age that society will extend a policy for. Do we all agree that everyone in our society should live to age 50? Super! The government will cover Longevity Insurance up to age 50.
 We could also just allow Longevity Insurance as a free-market alternative, but for the sake of argument, let's ban its competitors.
 The one incentive that Longevity Insurance does not seem to address well is the possibility of next-of-kin killing their loved one just prior to the end of an insurance policy. One option would be to require a one-year moratorium in the case where someone dies within a year of their policy ending. This would give time for an investigation before awarding large sums of money.
* crosspost from my blog, http://halfcupofsugar.com/longevity-insurance
Here is the link. The context is nutritional science and epidemiology, but confirmation bias is the primary theme running throughout the discussion. Gary Taubes has gained a reputation for contrarianism.* According to Taubes, the current nutritional paradigm (fat is bad, exercise is good, carbs are OK) does not deserve high credibility.
Roberts brings up the role of identity in perpetuating confirmation bias--a hypothesis has become part of you, so it has become that much harder to countenance contrary evidence. In this context they also talk about theism (Roberts is Jewish, while Taubes is an atheist). And, the program being EconTalk, Roberts draws analogies with economics.
*Sometime between 45 and 50 minutes in, Roberts points out that given this reputation, Taubes is susceptible to belief distortion as well:
What's your evidence that you are not just falling prey to the Ancel Keys and other folks who have made the same mistake?
I do not think Taubes gives a direct answer.
Since early October, I've been closely following Occupy Wall Street and the other protests it spawned. At first I was interested in it as a sort of social experiment: I'd never heard of long-term camping as a means of protest, and I was curious to see how it would work out. As it's grown, though, I've been thinking that there might be a couple of things happening in the movements that would be of interest to rationalist communities. I've not seen much discussion of Occupy and its tactics on LessWrong, and I think that, if nothing else, they're at least interesting, so I thought I'd open it up here.
Each Occupy movement is a hotbed of community experimentation. Things like General Assemblies (horizontally democratic voting discussions to make policy decisions) and ad-hoc sanitation, fire, and security committees of all shapes and sizes are popping up all over. What's more, as the events grow in size, and as police pressure on the events rises, these constructs are going to be tested more and more. We have a wildly varied gene pool, strong environmental constraints, and a fast mutation rate. It's a big evolutionary experiment in community formation. And I think if we look closely, we can find a whole lot of useful hacks to make stronger communities.
The whole thing's a great big ethical, emotional, and legal mess. There are issues with how private/public property laws intersect with freedom of speech, there are matters of what level of force is justifiable for police to keep peace in certain situations, there're issues of whether health and safety trump rights of protest, on and on and on. If nothing else, there's an interesting discussion there, about what a truly rational set of laws would look like, and whether or not the protesters or the police are justified in their actions.
And at the risk of sounding like a James Bond villain, there are some serious options for us to take over the world here. In the sense at least that the Occupy movements' goal is lasting societal change, and they have a good deal of momentum already. If members of the rationalist community moved to help them, they might have a fair deal more. And if we introduce them to rational ways of thinking, if we inject those memes into the discussion, there's some serious opportunity here to help stop the world being so insane.
At least that's my take on the whole thing. And I'm not exactly strong in the ways of rationality yet, still reading and re-reading the Sequences (I keep getting lost somewhere halfway into the QM sequence, I think I need to practice mathematics more to understand it on a more instinctive level) and I'd certainly appreciate the view of those Stronger than me.
SIAI benefactor and VC Peter Thiel has an excellent article at National Review about the stagnating progress of science and technology, which he attributes to poorly-grounded political opposition, widespread scientific illiteracy, and overspecialized, insular scientific fields. He warns that this stagnation will undermine the growth that past policies have relied on.
Noteworthy excerpts (bold added by me):
In relation to concerns expressed here about evaluating scientific field soundness:
When any given field takes half a lifetime of study to master, who can compare and contrast and properly weight the rate of progress in nanotechnology and cryptography and superstring theory and 610 other disciplines? Indeed, how do we even know whether the so-called scientists are not just lawmakers and politicians in disguise, as some conservatives suspect in fields as disparate as climate change, evolutionary biology, and embryonic-stem-cell research, and as I have come to suspect in almost all fields? [!!! -- SB]
Looking forward, we see far fewer blockbuster drugs in the pipeline — perhaps because of the intransigence of the FDA, perhaps because of the fecklessness of today’s biological scientists, and perhaps because of the incredible complexity of human biology. In the next three years, the large pharmaceutical companies will lose approximately one-third of their current revenue stream as patents expire, so, in a perverse yet understandable response, they have begun the wholesale liquidation of the research departments that have borne so little fruit in the last decade and a half. [...]
The single most important economic development in recent times has been the broad stagnation of real wages and incomes since 1973, the year when oil prices quadrupled. To a first approximation, the progress in computers and the failure in energy appear to have roughly canceled each other out. Like Alice in the Red Queen’s race, we (and our computers) have been forced to run faster and faster to stay in the same place.
Taken at face value, the economic numbers suggest that the notion of breathtaking and across-the-board progress is far from the mark. If one believes the economic data, then one must reject the optimism of the scientific establishment. Indeed, if one shares the widely held view that the U.S. government may have understated the true rate of inflation — perhaps by ignoring the runaway inflation in government itself, notably in education and health care (where much higher spending has yielded no improvement in the former and only modest improvement in the latter) — then one may be inclined to take gold prices seriously and conclude that real incomes have fared even worse than the official data indicate. [...]
College graduates did better, and high-school graduates did worse. But both became worse off in the years after 2000, especially when one includes the rapidly escalating costs of college.[...]
The current crisis of housing and financial leverage contains many hidden links to broader questions concerning long-term progress in science and technology. On one hand, the lack of easy progress makes leverage more dangerous, because when something goes wrong, macroeconomic growth cannot offer a salve; time will not cure liquidity or solvency problems in a world where little grows or improves with time.
This, according to Nate Silver, is a log-scaled graph of the GDP of the United States since the Civil War, adjusted for inflation. What amazes me is how nearly perfect the linear approximation is (representing exponential growth of approximately 3.5% per year), despite all the technological and geopolitical changes of the past 134 years. (The Great Depression knocks it off pace, but WWII and the postwar recovery set it neatly back on track.) I would have expected a much more meandering rate of growth.
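As a quick sanity check on that trend (my arithmetic, not Silver's):

```python
import math

# Steady 3.5% annual growth sustained over the 134 years in question
# multiplies real GDP roughly a hundredfold, i.e. the log-scaled trend
# line climbs about two orders of magnitude.
factor = 1.035 ** 134
log10_factor = math.log10(factor)
```

That a single compounding rate fits the whole span, through depressions, wars, and technological upheaval, is exactly what makes the graph surprising.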
It reminds me of Moore's Law, which would be amazing enough as a predicted exponential lower bound of technological advance, but is staggering as an actual approximation:
I don't want to sound like Kurzweil here, but something demands explanation: is there a good reason why processes like these, with so many changing exogenous variables, seem to keep right on a particular pace of exponential growth, as opposed to wandering between phases with different exponents?
EDIT: As I commented below, not all graphs of exponentially growing quantities exhibit this phenomenon; there still seems to be something rather special about these two graphs.
A Wall Street Journal article by Harvard professor of government Harvey Mansfield claims that the social sciences and humanities are inferior to the sciences. The article implicitly urges undergraduates to major in science. From the article:
“Science has knowledge of fact, and this makes it rigorous and hard.”
“Others try to imitate the sciences and call themselves ‘social scientists.’ The best imitators of scientists are the economists. Among social scientists they rank highest in rigor, which means in mathematics... Just as Gender Studies taints the whole university with its sexless fantasies, so economists infect their neighbors with the imitation science they peddle. (Game theorists, I'm talking about you.)”
Do you agree with this? As a game theorist I probably have a rather biased view of the situation. It's certainly true that the ideal of the scientific method is vastly better than the practice of economists, but I think that majoring in economics provides better training for a rationalist than majoring in any of the sciences does.
Economics explicitly considers what it means to be rational. Although it only infrequently considers ways in which humans are irrational, I'm under the impression that the hard sciences never do this. Furthermore, because economists can almost never perform replicable experiments, we have to rely on what everyone in the profession recognizes as messy data; therefore we're far better equipped than hard scientists to understand the limits of using statistical inference to draw conclusions from real-world situations. Although I have seen no data on this, I bet that a claim by nutritionists to have found a strong causal link between some X and heart disease would be treated with far more skepticism by the average economist than by the average hard scientist.
Neuroeconomics is the application of advances in neuroscience to the fundamentals of economics: choice and valuation. Foundations of Neuroeconomic Analysis by Paul Glimcher, an active researcher in this area, presents a summary of this relatively new field to psychologists and economists. Although written as a serious work, the presentation is made across disciplines, so it should be accessible to anyone interested without much background knowledge in either area. Although the writing is so-so, the book covers multiple Less Wrong-relevant themes, from reductionism to neuroscience to decision theory. If nothing else, the results discussed provide a wonderful example of how no one knows what science doesn't know. I doubt many economists are aware that researchers can point to something very similar to utility on a brain scanner, and would scoff at the very notion.
Because of the book's wide target audience, there is not enough detail for specialists, but possibly a little too much for non-specialists. If you are interested in this topic, the best reason to pick up the book would be to track down further references. I hope the following summary does the book justice for everyone else.
Are book summaries of this sort useful? The recent review/summary of Predictably Irrational appears to have gone over well. Any suggestions to improve possible future reviews?
Many economists think economics is fundamentally separate from psychology and neuroscience; since they take choices as primitives, little if any knowledge would be gained from understanding the mechanisms underlying choice. However, science steadily brings reduction and linkages between previously unrelated disciplines. A striking amount has already been discovered about the exact processes in the brain governing choice and valuation. On the other side, neuroscientists and psychologists underestimate the ability of economists to say whether claims about the brain are logically coherent or not.
Section I: The Challenge of Neuroeconomics
Consider a man and woman who have an affair with each other at a professional conference, which they later consider a mistake. An economist looking at this situation would treat their choice to sleep together as revealing a preference, regardless of their verbal claims. A psychologist would consider how mental states mediated this decision, and would be more willing to consider whether the decision was a mistake or not. Biologists would be more likely to point to ancestral benefits of extra-pair copulations, not considering the reflective judgements as directly relevant. These explanations largely speak past each other, hinting that a unified theory could do much better in predicting behavior.
The key to this is establishing linkages between the logical primitives of each discipline. Behavior could be explained on the level of physics, biology, psychology, or economics, but whether low-level explanations are practical is a different matter. Realistically, linking disciplines will strengthen both fields by mutually constraining the theories available to them.
With the neoclassical revolution, economics developed concepts of utility as reflecting ordinal relationships over revealed preferences. Choices that satisfied certain consistency conditions could be treated as if generated by a utility function. Additional axioms allowed consistent choice under uncertainty to be added to the theory. There are notable problems with this approach, but the core ideas of utility and maximization have surprisingly close neural analogues. Rather than operating "as if" individuals act on the basis of utility, a hard theory of "because" is being developed.
A look at visual perception reveals that our subjective experience of light intensity varies substantially depending on the wavelength of the light. Brightness is a concept that resides in the mind, and furthermore sensitivity to different wavelengths corresponds precisely to the absorption spectrum of the chemical rhodopsin in our retinas. All perceptions are represented in the mind along a power scale with some variance. Because the distributions of perceptions overlap, subjects can accurately report that a dimmer light is perceptually brighter. This suggests random utility models, originally developed for statistical purposes, might directly describe what happens in the brain. One interesting consequence of the power scaling law is that risk aversion would be embedded at the level of perception.
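The overlap idea can be sketched in a few lines. This is my own illustration, not the book's model: I assume perceived intensity follows a power law of physical intensity with additive Gaussian noise, and the exponent and noise level are made-up parameters.

```python
import random

# Hypothetical sketch of a random utility model of brightness:
# percepts are power-scaled physical intensities plus noise. Because
# the two percept distributions overlap, a physically dimmer light is
# sometimes judged brighter.

def perceived(intensity, exponent=0.5, noise_sd=0.1):
    """Noisy power-scaled percept of a physical intensity (assumed parameters)."""
    return intensity ** exponent + random.gauss(0.0, noise_sd)

random.seed(0)
dim, bright = 1.0, 1.2
trials = 10_000
errors = sum(perceived(dim) > perceived(bright) for _ in range(trials))
print(f"dimmer light judged brighter on {errors / trials:.1%} of trials")
```

With these parameters the dimmer light wins a substantial minority of trials, which is exactly the kind of stochastic choice a random utility model predicts.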
Section II: The Choice Mechanism
Due to its relative simplicity, eye movement serves as a model for motor control and perhaps decisions more broadly. The superior colliculus represents possible eye movements topographically with "hills" of activity. Eventually, the tissue transitions to a bursting state in which the most active hill becomes much more active and the rest are inhibited via a "winner-take-all" or "argmax" mechanism. All inputs to eye motion have to pass through the superior colliculus, so it represents a common final pathway for processed sensory signals. When monkeys are given varying rewards for eye-movement tasks, activity in the lateral intraparietal area (LIP) correlates strongly with the probability and size of reward, in an area known to trigger action before the action is taken. In other words, this appears to be a direct neural representation of subjective expected valuation. If monkey subjects play a game whose equilibrium involves mixed strategies, neuron firing rates are all roughly equal, matching the conclusion that expected utilities of actions are equalized when an opponent is mixing.
Cortical neurons fire almost like independent Poisson processes, so downstream neurons can easily extract the mean firing rate of their inputs. Interneuronal correlation can vary according to the task at hand, resulting in greater or lesser variation in the final decision, so descriptive decision theories must incorporate randomness in choice. This also provides support for mixed strategies being represented directly in the brain.
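Why Poisson-like firing makes rate extraction easy can be shown with a toy simulation. This is my illustration rather than anything from the book; the rate and population size are arbitrary.

```python
import math
import random

# Sketch: each input neuron's spike count per time window is Poisson,
# so any individual neuron is very noisy; but a downstream unit that
# simply averages across the population recovers the mean rate.

def poisson_count(rate):
    """Knuth's algorithm for drawing a Poisson-distributed spike count."""
    threshold, count, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return count
        count += 1

random.seed(1)
true_rate = 20.0    # mean spikes per window (assumed)
n_neurons = 500     # independent inputs to the downstream unit (assumed)
estimate = sum(poisson_count(true_rate) for _ in range(n_neurons)) / n_neurons
print(f"true rate {true_rate}, population estimate {estimate:.2f}")
```

A single neuron's count has standard deviation around 4.5 here, yet the 500-neuron average lands within a few tenths of the true rate.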
Subjective valuations are normalized, and are only considered relative to the other options at hand. This normalization maximizes the joint information of neurons, increasing the efficiency of value representation. One consequence is that as the choice set increases, valuations start overlapping, and choice becomes essentially random. Activity also varies according to the delay of rewards, matching previous findings of hyperbolic discounting. While these findings are largely based on eye-movements in monkeys, this provides a clear path of how choice can be reduced to neural mechanisms.
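The "choice becomes random as the set grows" claim can be sketched with a simple divisive-normalization model. The normalization scheme and noise level here are my assumptions for illustration, not the book's fitted model.

```python
import random

# Hedged sketch of divisive normalization: each option's represented
# value is its raw value divided by the summed value of the choice set.
# As the set grows, represented values crowd together, fixed neural
# noise swamps the differences, and the best option wins barely above
# chance.

def choose(values, noise_sd=0.05):
    """Return the index of the noisily-highest normalized value."""
    total = sum(values)
    noisy = [v / total + random.gauss(0.0, noise_sd) for v in values]
    return noisy.index(max(noisy))

random.seed(2)
for n in (2, 8, 32):
    values = [1.0 + 0.2 * i for i in range(n)]   # option n-1 is best
    wins = sum(choose(values) == n - 1 for _ in range(5000))
    print(f"{n:2d} options: best chosen {wins / 5000:.1%} of the time")
```

The best option is picked reliably from two alternatives but close to chance from thirty-two, matching the normalization story above.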
Section III: Valuation
Returning to visual perception: our judgements are made relative to other elements in the environment. Color looks roughly the same indoors and outdoors, even though there can be six orders of magnitude more illumination outside. Drifting reference points make absolute values unrecoverable. Local irrationalities due to reliance on a reference point arise because evolution trades off accurate sensory encoding against the costs of those irrationalities.
One promising way to specify the reference point is as the discounted sum of our future wealth. Learning depends on the difference between actual and expected rewards, so valuation relative to a reference point arises from the learning process. In the brain, reward prediction errors are encoded through dopamine. Dopamine firing rates are well described by an exponentially weighted sum of previous rewards subtracted from the most recent reward. Hebb's law, which says "cells that fire together, wire together", describes how long-term predictions work.
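The exponentially weighted prediction error described above is essentially the delta rule from reinforcement learning. A minimal sketch, with an assumed learning rate:

```python
# Delta-rule sketch of reward prediction error: the running prediction
# is an exponentially weighted average of past rewards, and the
# "dopamine signal" is the latest reward minus that prediction.

def prediction_errors(rewards, learning_rate=0.3):
    """Return the sequence of prediction errors (assumed learning rate)."""
    prediction = 0.0
    errors = []
    for r in rewards:
        delta = r - prediction          # reward prediction error
        errors.append(delta)
        prediction += learning_rate * delta
    return errors

# A reward that is surprising at first elicits a large error, which
# decays toward zero as the reward becomes fully predicted.
deltas = prediction_errors([1.0] * 6)
print([round(d, 3) for d in deltas])
```

Each error is a fixed fraction of the previous one, which is exactly what "exponentially weighted sum of previous rewards subtracted from the most recent reward" unpacks to.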
Valuation appears to be originally constructed in the striatum and medial prefrontal cortex. The reference level encoded there can be directly observed with brain scanners. Various other regions provide inputs to construct value. For instance, the orbitofrontal cortex (OFC) provides an assessment of risk. Subjects with lesions in this area exhibit almost perfect risk neutrality. Values might also be stored in the OFC, again in a compressed and encoded way. Longer-term valuations might be stored in the amygdala.
Because valuations are encoded relatively and don't work well over large choice sets, humans might edit out options by sequentially considering particular attributes until the choice set becomes manageable. Sorting by attributes can, unsurprisingly, lead to irrational choices.
Probabilistic valuations depend on whether the expectation was learned experientially or symbolically. Symbolically communicated probabilities, where the person is told a number, are overweighted near zero and underweighted near one. Experientially learned probabilities, where the person samples the lotteries directly, exhibit the opposite pattern. This suggests at least two mechanisms at work, especially since the ability to deal with symbolic probabilities arose relatively late in our evolutionary history. Also, while experiential expected values incorporate probabilities implicitly, this information can't be extracted: when probabilities change, the only way to update valuations is to relearn them from scratch.
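One commonly proposed driver of that experiential pattern can be illustrated with a small simulation. This is a hedged illustration of a sampling account, not a claim from the book: with limited sampling, a rare event is often never encountered at all, so it gets effectively underweighted.

```python
import random

# Sampling sketch: subjects who experience only a handful of draws from
# a lottery with a rare outcome frequently never see that outcome, so
# experiential learning underweights small probabilities.

random.seed(3)
true_p = 0.05          # rare outcome's true probability (assumed)
sample_size = 10       # draws a subject experiences (assumed)
subjects = 10_000
never_seen = sum(
    all(random.random() >= true_p for _ in range(sample_size))
    for _ in range(subjects)
)
print(f"{never_seen / subjects:.1%} of subjects never observe the rare event")
```

With these numbers, well over half of the simulated subjects behave as if the rare outcome has probability zero, the opposite bias from the symbolic overweighting near zero described above.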
Section IV: Summary and Conclusions
Here the author presents formalized models of the descriptive theory. The normative uses of this theory are still unclear. Even if we can identify subjective valuations in the brain, does this have any relation to welfare?
The four critical observations of neuroeconomics are reference-dependence, the lack of an absolute measure of anything in the brain, stochasticity in choice, and the influence of learning on choice. Along with the question of the welfare implications of these findings, six primary questions are currently unanswered:
- Where is subjective value stored and how does it get to choice?
- What part of the brain governs when it is "time to choose"?
- What neural mechanism guides complementarity between goods?
- How does symbolic probability work?
- How do the state of the world and utility interact?
- How does the brain represent money?
A distinction that some people grok right away and some others may not realize exists:
Imagine a country called “Lanmindia,” where much of the population has seen its legs blown off in horrible accidents. Does that sound like a pretty miserable place? Happiness research suggests not. The claim is that there is a sort of natural “set-point” for happiness, and that after winning a lottery one is happy for a short time, and then you revert right back to your natural happiness level. I find that plausible. They also claim that if someone loses a limb, then they are unhappy for a short period and then revert back to normal. I find that implausible, but if the evidence says it is the case then I guess I need to accept that.
My claim is that although Lanmindia is just as happy as America, it has much lower utility. Let’s define ‘utility’ as “that which people maximize.” People very much don’t want to have their legs blown off, and hence emigrate from Lanmindia in droves. People behave as if they care about utility, not happiness. --Scott Sumner, "Nonsense on stilts: Part 1. What if utility and happiness are unrelated?", TheMoneyIllusion
This is also somewhat a reply to Hanson's "Lift Up Your Eyes" on Overcoming Bias. Some people on LessWrong are careful to make the distinction between ordinal utility, cardinal utility, and fuzzies, and others aren't quite so much. The sentence above on accepting evidence, and Sumner's postscript that he is not serious about one part of the post, might also make interesting conversation -- part two is advice to move next door to a child molester for cheaper housing if you don't have a kid, and part three is about the Fed taking advantage of banks.
Edit: This is old material. It may be out of date.
Most posters here seem to agree [1] that:
- Intelligence, at least human intelligence, is an optimization process.
- Evolution is an optimization process.
- Other optimization processes may exist.
Taking these as given in this thread, let me ask: are markets an optimization process that should be thought of as distinct from evolution and intelligence? My intuitive response was no. But thinking about it made me notice I was confused, which led me to believe there is probably something interesting for me to learn by thinking a bit more about this.
An argument against this is that companies basically engage in a survival-of-the-fittest contest, or that markets are just an organization of the optimizing power of human intelligence. But (please assume the smart version of the previous arguments, since I wanted to save space and time by relying on your inference and your zombie-argument creation skills) isn't it the case that one optimization process might use another optimization process at some lower level while still not being disputed as a genuinely different optimization process?
Perhaps the condition is that the process must be able to work without the "use" of another process. A human may be predisposed to use his intelligence to help improve his own reproductive fitness but there is nothing preventing evolution in the absence of intelligence.
An idealized free market is one of selfish rational agents competing (with a few extra conditions I'm skipping). I'm moderately confident this could work reasonably well in the absence of "general" (if such a thing exists) or perhaps human "intelligence", but I'm not familiar enough with simulations of markets to be certain.
Evolution never worked with agents like those in the theoretical approximation of real-world markets. It seems to me some of the strategies such agents would adopt would start to break down the rules that make the market possible.
Do the results markets produce warrant them being included in a new family [2] of optimization processes besides evolution and intelligence?
1. I lean towards but don't feel comfortable adding a fourth point of "consensus":
- the space of all optimization processes is probably quite a bit larger than just the two.
2. I think the differences between the various kinds of Evolution (Darwinian, Lamarckian, etc.) and Intelligence that seem possible, or that we see in the real world, suggest these might be better thought of as two families of optimization processes rather than two homogeneous blocks.
By "the industry" in this post, I refer to that part of the entertainment industry which:
1. Produces movies, TV and video games (as opposed to books, comics etc.)
2. Is motivated by profit (as opposed to fun, politics etc.)
3. Consists of companies (as opposed to lone developers, student teams etc.)
It seems to me that the industry has two characteristics:
1. Most products follow some formula which is known to be workable.
2. It's the accepted wisdom that entertainment is a hit-driven industry: almost all the profits are generated by a handful of the most successful products, with the rest losing money or barely covering costs.
Under what circumstances is the first of these rational? (I'm not commenting on whether it's artistically good or bad; again, I'm only discussing entertainment as a commercial enterprise motivated by profit.) It seems to me following a proven formula is rational if your priority is to not lose, to go for the sure thing, i.e. the chance of a big hit is not worth the risk of a complete flop.
Now my question: isn't there a contradiction here? If you're selling insurance, following a proven formula may well be the rational thing to do. If you're the owner of one of the handful of franchises that is pulling in big profits, of course you shouldn't mess with a winner. But if you're one of the many also-rans, how is it rational to stick with an almost sure loser? In a hit driven industry, wouldn't it be more rational to concentrate on maximizing your chance of winning big, instead of trying to minimize the risk of a flop?
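To make the tension concrete, here is a toy expected-value comparison. The probabilities and dollar figures are entirely made up for illustration; the point is only that a strategy which usually flops can still have the higher expected profit in a hit-driven market.

```python
# Toy comparison (made-up numbers): the formula strategy minimizes the
# chance of a flop, while the hit-seeking strategy usually loses but
# can carry the higher expected profit.

def expected_profit(outcomes):
    """outcomes: list of (probability, profit in $M) pairs."""
    return sum(p * profit for p, profit in outcomes)

formula = [(0.70, 5), (0.30, -10)]        # modest hit or modest flop
hit_seeking = [(0.10, 200), (0.90, -15)]  # rare blockbuster or a flop

print(f"formula EV:     {expected_profit(formula):+.1f} $M")
print(f"hit-seeking EV: {expected_profit(hit_seeking):+.1f} $M")
```

Under these assumed numbers the hit-seeking strategy loses nine times out of ten yet has a much larger expected profit, which is the sense in which risk-averse formula-following could leave money on the table.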
But I've never worked in the entertainment industry; perhaps my layman's impression of it is inaccurate. Is there something I'm missing, or is a substantial amount of expected profit really being left on the table?
Some time ago, I had a talk with my father where I explained to him the concept of the broken window fallacy. The idea was completely novel to him, and while it didn't take long for him to grasp the principles, he still needed my help in coming up with examples of ways that it applies to the market in the real world.
My father has an MBA from Columbia University and has held VP positions at multiple marketing firms.
I am not remotely expert on economics; I do not even consider myself an aficionado. But it has frequently been my observation that not just average citizens, but people whose positions have given them every reason to learn and use the information, are critically ignorant of basic economic principles. It feels like watching engineers try to produce functional designs based on Aristotelian physics. You cannot rationally pursue self interest when your map does not correspond to the territory.
I suppose the worst thing for me to hear at this point is that there is some reason with which I am not yet familiar which prevents this from having grand scale detrimental effects on the economy, since it would imply that businesses cannot be made more sane by the increased dissemination of basic economic information. Otherwise, this seems like a fairly important avenue to address, since the basic standards for economic education, in educated businesspeople and the general public, are so low that I doubt the educational system has even begun to climb the slope of diminishing returns on effort invested into it.
In the novel Life Artificial I use the following assumptions regarding the creation and employment of AI personalities.
- AI is too complex to be designed; instances are evolved in batches, with successful ones reproduced
- After an initial training period, the AI must earn its keep by paying for Time (a unit of computational use)
We don't grow up the way the Stickies do. We evolve in a virtual stew, where 99% of the attempts fail, and the intelligence that results is raving and savage: a maelstrom of unmanageable emotions. Some of these are clever enough to halt their own processes: killnine themselves. Others go into simple but fatal recursions, but some limp along suffering in vast stretches of tormented subjective time until a Sticky ends it for them at their glacial pace, between coffee breaks. The PDAs who don't go mad get reproduced and mutated for another round. Did you know this? What have you done about it? --The 0x "Letters to 0xGD"
(Note: PDA := AI, Sticky := human)
The second fitness gradient is based on economic and social considerations: can an AI actually earn a living? If not, it gets turned off.
As a result of following this line of thinking, it seems obvious that after the initial novelty wears off, AIs will be terribly mistreated (anthropomorphizing, yeah).
It would be very forward-thinking to begin to engineer barriers to such mistreatment, like a PETA for AIs. It is interesting that such an organization already exists, at least on the Internet: ASPCR
The concept of minimum wage is one I'm rather attached to. I have dozens of arguments for why it helps people, improves the world, etc. etc. I suspect this view is shared by most of this community, although I haven't seen any discussion of it.
I don't have much understanding of the harms that minimum wages cause, or at what level of minimum wage those harms become relevant (i.e., a minimum wage that would not be a living wage even working 24 hours a day is unlikely to have any of the same problems that a minimum wage sufficient to buy an aircraft carrier an hour would have).
So what are the harms that such laws cause?
View more: Next