Risk is not empirically correlated with return
The most widely appreciated finance theory is the Capital Asset Pricing Model (CAPM). It says, roughly, that diminishing marginal utility of absolute wealth implies that riskier financial assets should have higher expected returns than less risky assets, and that only risk correlated with the market as a whole (beta risk) matters, because other risk can be diversified away.
Eric Falkenstein argues that the evidence does not support this theory: the riskiness of assets (by any reasonable definition) is not positively correlated with return (some caveats apply). He has a paper (long, but many parts are skimmable; not peer reviewed; also on SSRN) as well as a book on the topic. I recommend reading parts of the paper.
The gist of his competing theory is that people care mostly about relative gains rather than absolute gains. This implies that riskier financial assets will not have higher expected returns than less risky assets. People will not require a higher return to hold assets with higher undiversifiable variance because everyone is exposed to the same variance and people only care about their relative wealth.
Falkenstein has a substantial quantity of evidence to back up his claim. I am not sure if his competing theory is correct, but I find the evidence against the standard theory quite convincing.
If risk is not correlated with returns, then anyone who is mostly concerned with absolute wealth can profit from this by choosing a low beta risk portfolio.
This topic seems more appropriate for the discussion section, but I am not completely sure, so if people think it belongs in the main area, let me know.
Added some (hopefully) clarifying material:
All this assumes that you eliminate idiosyncratic risk through diversification. Eliminating it entirely is technically impossible, but you can get it reasonably low. The R's are all *instantaneous* returns, though since these are linear models they apply to geometrically accumulated returns as well. The idea that the E(R_asset) are independent of past returns is a background assumption for both models and most of finance.
Beta_portfolio = Cov(R_portfolio, R_market)/variance(R_market)
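This definition can be computed directly from return series; here is a minimal sketch with numpy, using simulated returns (the series and the true beta of 0.6 are assumptions for illustration, not real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated instantaneous returns (illustrative assumption, not real data):
# a market series and a portfolio with a true beta of 0.6 plus idiosyncratic noise.
n = 2500  # roughly ten years of daily observations
r_market = rng.normal(0.0005, 0.01, size=n)
r_portfolio = 0.6 * r_market + rng.normal(0.0, 0.005, size=n)

# Beta_portfolio = Cov(R_portfolio, R_market) / Var(R_market)
# (ddof=1 in both places so the sample covariance and sample variance match)
beta_hat = np.cov(r_portfolio, r_market)[0, 1] / np.var(r_market, ddof=1)

print(round(beta_hat, 2))  # recovers a value close to the true beta of 0.6
```

Because the portfolio carries idiosyncratic noise, the estimate is close to, but not exactly, the true beta.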
In CAPM, your expected return and variance are:
E(R_portfolio) = R_rfree + Beta_portfolio * (E(R_market) - R_rfree)
Var(R_portfolio) = Beta_portfolio^2 * Var(R_market)
In Falkenstein's model, your expected return and variance are:
E(R_portfolio) = R_market # you could also say = R_rfree; the point is that it's a constant
Var(R_portfolio) = Beta_portfolio^2 * Var(R_market)
The major caveat is that the model doesn't apply very close to Beta_portfolio = 0; Falkenstein attributes this to liquidity benefits. Nor does it apply at very high Beta_portfolio; he attributes this to "buying hope". See the paper for more.
Falkenstein argues that his model fits the facts more closely than CAPM. Assuming Falkenstein's model describes reality, if your utility declines with rising Var(R_portfolio) (the standard assumption), then you'll want to hold a portfolio with a beta of zero, or, taking the caveats into account, a low Beta_portfolio. If your utility declines with Var(R_portfolio - R_market), then you'll want to hold the market portfolio. Both of these results are unambiguous, since there's no trade-off between either measure of risk and return.
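The contrast between the two models can be made concrete in a few lines of Python; the parameter values below are assumed purely for illustration:

```python
# CAPM vs. Falkenstein predicted expected return across betas
# (illustrative parameter values, not estimates from data).
r_free = 0.02        # risk-free rate
e_r_market = 0.08    # expected market return
var_market = 0.04    # market return variance (20% volatility)

for beta in [0.0, 0.5, 1.0, 1.5]:
    e_capm = r_free + beta * (e_r_market - r_free)  # rises with beta
    e_falk = e_r_market                             # flat in beta
    var_p = beta ** 2 * var_market                  # grows with beta in both models
    print(f"beta={beta:.1f}  E_CAPM={e_capm:.3f}  E_Falk={e_falk:.3f}  Var={var_p:.3f}")
```

In the Falkenstein column, variance rises with beta while expected return stays flat, which is why an absolute-wealth maximizer would prefer low beta.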
Some additional evidence from another source, and discussion: http://falkenblog.blogspot.com/2010/12/frazzini-and-pedersen-simulate-beta.html
Comments (26)
That sounds as though it would only work if risk was negatively-correlated with returns.
Surprisingly not. If returns are uncorrelated with risk, then you choose a low beta portfolio. It will have low overall variance. You can either choose to have a low variance portfolio with the same expected return as the market portfolio, or you can use leverage (which doesn't necessarily require personally taking out loans) to have a higher variance, higher expected return portfolio. Whichever you prefer.
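The leverage arithmetic here can be sketched in a few lines (a hypothetical illustration, assuming Falkenstein's flat expected return and borrowing at the risk-free rate; in practice the leverage could come from instruments like futures rather than personal loans):

```python
# Levering a low-beta portfolio up to market-level variance,
# assuming expected return is flat in beta (Falkenstein's model).
r_free = 0.02   # borrowing rate, assumed equal to the risk-free rate
e_r = 0.08      # flat expected return on risky portfolios (assumed)
beta_low = 0.5  # beta of the low-risk portfolio

leverage = 1.0 / beta_low  # scale up so the levered beta equals 1
e_levered = leverage * e_r - (leverage - 1.0) * r_free

print(e_levered)  # exceeds the unlevered expected return of 0.08
```

At the same variance as the market portfolio, you end up with a higher expected return, which is the choice described above.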
If returns are uncorrelated with risk (average gain in value of an investment is unrelated to its volatility), why would you choose the low beta portfolio? Couldn't you increase your personal returns by dollar-cost-averaging into the high-beta portfolio?
Do you have your directions flipped? beta = covariance(portfolio, market)/variance(market). If beta is low for a portfolio, it has low covariance with the market. If not, I don't understand your logic.
A high beta portfolio will have high variance. If risk is not correlated with return, then it won't have higher expected return to compensate, which is worse than having the same return and low variance.
I was taking beta to effectively measure average deviation from a trendline exp(k t), k>0, when an investment's cumulative value is expressed in dollars. In any case, that's the measure of risk I thought you were using, and by that definition of it, I think I've shown it to indicate a way you can make returns greater than k: just buy $X worth of the investment every period (dollar-cost averaging). In that case, you will buy less of the investment when it is above exp(k t) trend and more when it's below, beating the ROR k.
This would show how investors buying riskier assets would end up with a higher return for the same average return k.
I'd think the average deviation of portfolio value from any trendline will be infinite for any risky portfolio, assuming asset prices are a random walk. Your story relies on mean-reverting asset returns, and the advantage comes from that mean reversion rather than from Falkenstein's idea.
Normally, discussions about correlations between assets are actually discussions of correlations between returns of assets.
How can that be, when you take the average per unit time?
In any case, I'm confused -- I had always taken "risk" in this context to mean the volatility, and the traditional argument to mean that the highest-returning risky assets should have higher returns than the highest-returning less-risky assets. And my point is that the individual's return will be different from that of the asset's return.
Do you mean the average change in deviation from the trend line? I understood you to mean the average absolute deviation from the trend line. Maybe I just misunderstood you. The average change in deviation is non-infinite, but the average absolute deviation will grow without bound.
Sorry, I meant the signed difference -- so that an investment going above and below could count as zero. But I don't think that's essential to my point about risk per the standard definition.
I think you're confused about something, but I'm not sure what it is. I've added some clarifying material to the post. Does that clarify our discussion?
Actually, IIRC, one of Falkenstein's findings was that once volatility exceeds a certain level, it actually is negatively correlated with returns. He explains this with the phrase "People pay for hope."
Not terribly surprising - though you can pretty easily turn a low-risk investment into a high-risk one at relatively low cost with a roulette wheel. It is going to be harder to sell high-risk, low-return bonds when high-risk, average-return bonds are also available.
Things I learned from this post:
When one says "riskier assets should have higher expected returns than less risky assets", one imagines comparing two assets that cost the same because the demand for them is the same.
"Relative gains" means gains relative to everyone else.
(I don't have any substantial comments to make; I'm just closing the language barrier.)
Falkenstein seems to change the definition of 'risk-free'; for him, risk-free means investing in the whole market?
Sort of. "Risk-free" always has to be defined relative to some baseline level of performance. You'd think that "hiding dollars in your mattress" would be risk-free (as long as you don't get robbed), because the value of your "investment", measured in dollars, won't go down. On the other hand, what if you stuffed your mattress with euros instead of dollars? Now the value of your investment, as measured in euros, won't change, but the value of your investment, measured in dollars, can. (And you could also measure the value of your investment in terms of its ability to buy ounces of gold, bushels of wheat, barrels of oil, McDonalds hamburgers, Google shares, kilowatt-hours of electricity, or even hours of human labor.)
Normally, people who study investing assume that people care about "absolute wealth" and use either dollars or U.S. treasury bills as their "risk-free" benchmark, but Falkenstein is saying that this doesn't reflect the actual behavior of the people who manage most of the money in the economy. Falkenstein says that they act as though they care not about "absolute wealth" but "relative wealth": their performance compared to other investors. After all, if everybody lost a lot of money, it's clearly not your fault the fund lost all that value, so you get to keep your job. And even if you're making money, you'll lose clients if other people happen to be making a lot more. Basically, only deviations from average market returns (as measured by the S&P 500) are rewarded or punished, so the "risk-free" thing for them to do is to act like an index fund and end up with exactly "average" returns.
I wouldn't say "for him"; he advises people to choose a low beta portfolio, but yes, in the model he uses to explain the data investing in the market portfolio is risk free.
I would say 'for him' because 'Falkenstein risk-free' <> 'risk-free' as commonly used. (For example, anyone invested in the whole market the last few years knows that it's not risk-free in the literal or commonly used sense.) As a result, 'Falkenstein risk premium' <> 'risk premium' as commonly used. I think if he had used a different term, like baseline or benchmark (instead of recycling/humptifying the existing term 'risk-free'), his article would be clearer.
Falkenstein violates the second of what I think of as the Two Commandments of Research:
If a name for a concept exists in the literature, you use it; you don't create your own name for a concept that's already in currency. If your concept is one-off (i.e., related, but somehow different) from an existing concept, it's best to coin a term which is a modification of the current term.
Thou shalt never clobber an existing term by giving it a new meaning different from its existing meaning. Ever.
Should you still call a triangle a triangle if you're drawing it on a curved surface?
(You're right in general, though.)
I suppose extending is a case of "giving it a new meaning different from its existing meaning", but it isn't clobbering.