OK, you like EMH so much that you think 9 students from one professor all outperforming for decades is cherry picking and data mining. I think it is finding a small group of people who claim to be learning from someone who has empirically verified methods, and who, when they apply these methods, get the predicted results consistently for decades. I think characterizing this as cherry picking and data mining is more likely to be a bad explanation for what is being seen than is mine, which is that they are doing what they say they are doing, and it is working.
Even a broad index fund is "managed." The conditions for being listed are quite stringent, and involve "survival bias" filters: if stocks fall below a certain value they are delisted. I actually don't think that the difficulty of beating the SP500 is much of a proof of EMH so much as it is proof that very straightforward standards applied on a slow timescale capture almost all of the value available from managing a portfolio. I think people investing more broadly than the SP500, people investing with people who come into their living rooms seeking "angel" investors, do a lot worse. If the market were efficient in principle, then one wouldn't need the SP500 or even the NASDAQ seal of approval to wind up with results at the market mean. If using your brain is required to pick the SP500 over the living-room pitchman, then in principle, using your brain is required to get reasonable results.
I think if a proposition of efficiency is to be proved true, it is not by looking at the average performance of every Tom, Dick, and Harry and noticing that, with mathematical necessity, they tend to have the same mean as the market, which of course they comprise. I think a proper proof of efficiency requires showing in detail that there are no consistent outliers of high performance: that funds with decades-long records of outperformance occur at the proper rate to be consistent with pure luck. Indeed, to show that while it appears that some people predictably outperform, for all these actors past performance is no predictor of future performance, and that the hangers-on who joined Buffett in the 60s or 70s or 80s or 90s after seeing his record THOUGHT their outperformance was due to their identifying a winner, but that it was consistent with just pure dumb continuous luck.
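To make "the proper rate to be consistent with pure luck" concrete, here is a back-of-envelope null model. The fund count and the 50% per-year figure are assumptions chosen purely for illustration, not data from this discussion:

```python
# Sketch of a pure-luck null model: if each fund independently has a coin-flip
# chance of beating the index in any given year, how many funds should show an
# unbroken N-year winning streak? Both parameters below are assumed, not measured.
n_funds = 10_000          # hypothetical number of funds observed
p_beat_one_year = 0.5     # assumed per-year probability of beating the index by luck

for streak_years in (10, 20, 30, 40):
    expected = n_funds * p_beat_one_year ** streak_years
    print(f"{streak_years}-year streak: ~{expected:.4f} funds expected by luck alone")
```

Under these assumptions, luck alone produces a handful of 10-year streaks but essentially no 30- or 40-year streaks; comparing numbers like these against the actual record is the kind of test a proper proof of efficiency would need.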
I think there is a gigantic difference between "we cannot prove that there is alpha" and "the most likely explanation of what we see is that there is no alpha."
As to identifying bubbles that were not bubbles, the only bubbles I have identified are tech and real estate. I identified a "bubble" in a small company stock (Conductus) where a company with no real products generated excitement by talking about how they were getting into the cellular industry, driving their stock price from 3 to about 80 before they crashed back down to 3. I shorted them at about 70, took my returns a few weeks later at 60 or so; they proceeded to rise to 80 and then within a year drop back to 2.5. I identified another mispricing in NHCS, where numerically they were spinning out a company which was being completely undervalued in their current stock price. I asked others, "can this really be true?" They said only that in general, yeah, stuff like that happens. I bought a few thousand dollars' worth and, a few months later, made the 20% or so return that seemed to be lying on the table.
The main sense in which the market seems efficient is that prices are predominantly set using sensible analyses, presumably because those who do not follow a proven technique of picking sensible prices do not survive, so the main component of market efficiency is that the processes for beating the market are broadly exercised and dominate the market. So it is hard to do better than free-riding on that. But does it turn out that some people do better at that process than others? I think the best explanation for what we see is that yes, some do, and that the fact that they are a smallish minority is not because they are just the tail of a random distribution, but because, of mathematical necessity, beating the average significantly can only be done by a minority.
Anyway, thanks for sticking with it and explaining your position to me.
OK you like EMH so much that you think 9 students from one professor all outperforming for decades is cherry picking and data mining.
To expand even further on my critique: you are placing a huge amount of weight on 9 students, of unknown veracity, out of an unknown number of students (itself out of an unknown number of millions of people who have tried to beat the market over the past century), who have not released audited records much less ones comparing them to indexing, who started half a century ago (which is the investing dark ages compared to wha...
In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.
TLDR mathy version:
let f(m,t) be the population that can be supported using the fraction m of Earth's theoretical resource limit that we can exploit at technology level t
let t = k(x) be the technology level at year x
let p(x) be population at year x
What conditions must the constant m and the functions f(m,k(x)), k(x), and p(x) satisfy in order to ensure that p(x) - f(m,k(x)) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
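A minimal sketch of how the comparison could be made concrete, assuming (purely for illustration) exponential forms for k(x) and p(x) and a made-up theoretical limit and exploitable fraction; none of these numbers or functional forms come from the argument above:

```python
# Toy model only: the functional forms and constants here are assumptions chosen
# to make f, k, and p concrete, not estimates of real quantities.

def k(x, base_year=2024, growth=0.02):
    """Hypothetical technology level, growing exponentially from 1.0 at base_year."""
    return (1 + growth) ** (x - base_year)

def f(m, t, theoretical_limit=11e9):
    """Population supportable using fraction m of the theoretical limit at technology level t."""
    return m * theoretical_limit * t

def p(x, base_year=2024, p0=8.1e9, growth=0.009):
    """Hypothetical population trajectory, growing exponentially from p0 at base_year."""
    return p0 * (1 + growth) ** (x - base_year)

m = 0.8  # assumed fraction of the theoretical resource limit we can exploit

for year in range(2030, 2131, 20):
    surplus = f(m, k(year)) - p(year)
    sign = "capacity > population" if surplus > 0 else "population > capacity"
    print(f"{year}: {sign} by {abs(surplus) / 1e9:.2f} billion")
```

Whether the inequality holds for all future x then turns entirely on the growth rates and limits you plug in, which is exactly where the empirical data would enter.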
Long version:
Here I would like to explore the evidence for and against the possibility that the following assertions are true:
Please note: I'm not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:
Predictions about the dangers of nanotech made in the 1980s have shown no signs of coming true. Yet there is no known logical or physical reason why they can't come true, so we don't ignore it. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.
Shouldn't we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation[2] of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?
Footnotes:
1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalinization plants).
2: This is a hard question. I'm not asking which catastrophe is the most likely to happen ever while holding everything else constant (the possible ones will be tied for 1 and the impossible ones will be tied for 0). I'm asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are the upper bound on the curve representing the Malthusian Crunch, and which curves are the lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that's worth), you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don't have a formal answer for how to transform that into an optimal allocation of resources between mitigation efforts, but that would be the next step.
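For concreteness, here is a rough sketch of the survival-curve comparison described in this footnote. The hazard rates and functional forms are made-up placeholders, not estimates of any real risk:

```python
import math

def survival(hazard, years):
    """S(t) = exp(-cumulative hazard), approximated with yearly steps."""
    cumulative, curve = 0.0, []
    for t in range(years):
        cumulative += hazard(t)
        curve.append(math.exp(-cumulative))
    return curve

def constant_hazard(t):
    return 0.001                    # a steady background risk (made-up rate)

def rising_hazard(t):
    return 0.0002 * (1.05 ** t)     # a risk conditioned on some maturing technology (made-up rate)

s_const = survival(constant_hazard, 100)
s_rise = survival(rising_hazard, 100)

for year in (25, 50, 75, 100):
    print(f"year {year}: constant-risk survival {s_const[year - 1]:.3f}, "
          f"rising-risk survival {s_rise[year - 1]:.3f}")
```

With these placeholder numbers the rising-hazard curve starts out above the constant-hazard curve and later drops below it, which is the sense in which the biggest threat over the next 50 years need not be the biggest threat over the 50 years after that.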