A proposed inefficiency in the Bitcoin markets

3 Liron 27 December 2013 03:48AM
Salviati: Simplicio, do you think the Bitcoin markets are efficient?

Simplicio: If you'd asked me two years ago, I would have said yes. I know hindsight is 20/20, but even at the time, I think the fact that relatively few people were trading it would have risen to prominence in my analysis.

Salviati: And what about today?

Simplicio: Today, it seems like there's no shortage of trading volume. The hedge funds of the world have heard of Bitcoin, and had their quants do their fancy analyses on it, and they actively trade it.

Salviati: Well, I'm certainly not a quant, but I think I've spotted a systematic market inefficiency. Would you like to hear it?

Simplicio: Nah, I'm good.

Salviati: Did you hear what I said? I think I've spotted an exploitable pattern of price movements in a $10 billion market. If I'm right, it could make us a lot of money.

Simplicio: Sure, but you won't convince me that whatever pattern you're thinking of is a "reliable" one.

Salviati: Come on, you don't even know what my argument is.

Simplicio: But I know how your argument is going to be structured. First you're going to identify some property of Bitcoin prices in past data. Then you'll explain some causal model you have which supposedly accounts for why prices have had that property in the past. Then you'll say that your model will continue to account for that same property in future Bitcoin prices.

Salviati: Yeah, so? What's wrong with that?

Simplicio: The problem is that you are not a trained quant, and therefore, your brain is not capable of bringing a worthwhile property of Bitcoin prices to your attention.

Salviati: Dude, I just want to let you know because this happens often and no one else is ever going to say anything: you're being a dick.

Simplicio: Look, quants are good at their job. To a first approximation, quants are like perfect Bayesian reasoners who maintain a probability distribution over the "reliability" of every single property of Bitcoin prices that you and I are capable of formulating. So this argument you're going to make to me, a quant has already made to another quant, and the other quant has incorporated it into his hedge fund's trading algorithms.

Salviati: Fine, but so what if quants have already figured out my argument for themselves? We can make money on it too.

Simplicio: No, we can't. I told you I'm pretty confident that the market is efficient, i.e. anti-inductive, meaning the quants of the world haven't left behind any reliable patterns that an armchair investor like you can detect and profit from.

Salviati: Would you just shut up and let me say my argument?

Simplicio: Whatever, knock yourself out.

Salviati: Ok, here goes. Everyone knows Bitcoin prices are volatile, right?

Simplicio: Yeah, highly volatile. But at any given moment, you don't know if the volatility is going to move the price up or down next. From your state of knowledge, it looks like a random walk. If today's Bitcoin price is $1000, then tomorrow's price is as likely to be $900 as it is to be $1100.

Salviati: I agree that the Random Walk Hypothesis provides a good model of prices in efficient markets, and that the size of each step in a random walk provides a good model of price volatility in efficient markets.

Simplicio: See, I told you you wouldn't convince me.

Salviati: Ah, but my empirical observation of Bitcoin prices is inconsistent with the Random Walk Hypothesis. So I'm led to conclude that the Bitcoin market is not efficient.

Simplicio: What do you mean "inconsistent"?

Salviati: I mean Bitcoin's past prices don't look much like a random walk. They look more like a random walk on a log scale. If today's price is $1000, then tomorrow's price is equally likely to be $900 or $1111. So if I buy $1000 of Bitcoin today, I expect to have 0.5($900) + 0.5($1111) = $1005.50 tomorrow.
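Salviati's asymmetry can be checked with a quick simulation. This is a minimal sketch (mine, not anything from the dialogue), assuming symmetric multiplicative steps of ×0.9 or ÷0.9 per day:

```python
import random

def simulate_log_walk(start=1000.0, factor=0.9, steps=1, trials=100_000, seed=0):
    """Average final value when each day the price is multiplied by
    `factor` or divided by it with equal probability (a random walk
    on the log of the price)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        price = start
        for _ in range(steps):
            price = price * factor if rng.random() < 0.5 else price / factor
        total += price
    return total / trials

# One step from $1000: the outcomes are $900 and ~$1111.11, each with
# probability 0.5, so the expected value is ~$1005.56 -- a little above
# the starting price even though the median outcome stays flat at $1000.
print(simulate_log_walk())
```

The median outcome is unchanged, but the arithmetic mean drifts upward; that gap between median and mean is exactly the asymmetry Salviati is pointing at.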

Simplicio: How do you know that? Did you write a script to loop through Bitcoin's daily closing price on Mt. Gox and simulate the behavior of a Bayesian reasoner with a variable-step-size random-walk prior and a second Bayesian reasoner with a variable-step-size log-random-walk prior, and thus calculate a much higher Bayesian Score for the log-random-walk model?

Salviati: Yeah, I did.

Simplicio: That's very virtuous of you.

[This is a fictional dialogue. The truth is, I was too lazy to do that. Can someone please do that? I would much appreciate it. --Liron.]
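In that spirit, here is a minimal first pass at the comparison, with synthetic log-walk data standing in for the Mt. Gox series and simple maximum-likelihood fits standing in for full Bayesian scores (all assumptions mine):

```python
import math
import random

def loglik_additive(prices, sigma):
    """Log-likelihood of the series under a Gaussian random walk in price."""
    ll = 0.0
    for p0, p1 in zip(prices, prices[1:]):
        d = p1 - p0
        ll += -0.5 * (d / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return ll

def loglik_log(prices, sigma):
    """Log-likelihood under a Gaussian random walk in log(price); the
    -log(p1) term converts the log-price density into a price density,
    so the two models are scored on the same variable."""
    ll = 0.0
    for p0, p1 in zip(prices, prices[1:]):
        d = math.log(p1 / p0)
        ll += (-0.5 * (d / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               - math.log(p1))
    return ll

def fit_sigma(diffs):
    """Maximum-likelihood step size for a zero-mean Gaussian walk."""
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Synthetic stand-in for the Mt. Gox closing prices: a genuine
# log-scale random walk, so we know what the right answer looks like.
rng = random.Random(42)
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] * math.exp(rng.gauss(0, 0.05)))

lin_sigma = fit_sigma([b - a for a, b in zip(prices, prices[1:])])
log_sigma = fit_sigma([math.log(b / a) for a, b in zip(prices, prices[1:])])

print("additive random walk:", loglik_additive(prices, lin_sigma))
print("log random walk:     ", loglik_log(prices, log_sigma))
# On log-walk data, the log model should come out (much) higher.
```

Swapping in the real daily closing prices for the synthetic series would be the actual test Simplicio describes.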

Salviati: So, have I convinced you that the market is inefficient after all?

Simplicio: Well, you've empirically demonstrated that the log Random Walk Hypothesis was a good model for predicting Bitcoin prices in the past. But that's just a historical pattern. My original point was that you're not qualified to evaluate which historical patterns are *reliable* patterns. The Bitcoin markets are full of pattern-annihilating forces, and you're not qualified to evaluate which past-data-fitting models are eligible for future-data-fitting.

Salviati: Ok, I'm not saying you have to believe that the future accuracy of log-Random-Walk will probably be higher than the future accuracy of linear Random Walk. I'm just saying you should perform a Bayesian update in the direction of that conclusion.

Simplicio: Ok, but the only reason the update has nonzero strength is because I assigned an a priori chance of 10% to the set of possible worlds wherein Bitcoin markets were inefficient, and that set of possible worlds gives a higher probability that a model like your log-Random-Walk model would fit the price data well. So I update my beliefs to promote the hypothesis that Bitcoin is inefficient, and in particular that it is inefficient in a log-Random-Walk way.

Salviati: Thanks. And hey, guess what: I think I've traced the source of the log-Random-Walk regularity.

Simplicio: I'm surprised you waited this long to mention that.

Salviati: I figured that if I mentioned it earlier, you'd snap back about how efficient markets sever the causal connection between would-be price-regularity-causing dynamics, and actual prices.

Simplicio: Fair enough.

Salviati: Anyway, the reason Bitcoin prices follow a log-Random-Walk is because they reflect the long-term Expected Value of Bitcoin's actual utility.

Simplicio: Bitcoin has no real utility.

Salviati: It does. It's liquid in novel, qualitatively different ways. It's kind of anonymous. It's a more stable unit of account than the official currencies of some countries.

Simplicio: Come on, how much utility is all that really worth in expectation?

Salviati: I don't know. The Bitcoin economy could be anywhere from hundreds of millions of dollars, to trillions of dollars. Our belief about the long-term future value of a single BTC is spread out across a range whose 90% confidence interval is something like [$10, $100,000] for 1BTC.

Simplicio: Are you saying it's spread out over the interval [$10, $100,000] in a uniform distribution?

Salviati: Nope, it's closer to a bell curve centered at $1000 on a log scale. It gives equal probability of ~10% both to the $10-100 range and to the $10,000-100,000 range.
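Salviati's belief distribution can be written down explicitly. A minimal sketch, assuming (my parameters, not the dialogue's) a normal belief over log10(price) centered at $1000 with the stated [$10, $100,000] 90% interval; with those numbers the two decade-wide tails each come out nearer 15% than the dialogue's rough ~10%:

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Belief over log10(price): a bell curve centered at log10($1000) = 3.
# Sigma is chosen so that [$10, $100,000], i.e. [1, 5] on the log10
# scale, is the 90% interval: 1.645 * sigma = 2.
mu, sigma = 3.0, 2.0 / 1.645

def prob_range(lo_dollars, hi_dollars):
    """Probability mass the belief assigns to a dollar range."""
    return (norm_cdf(math.log10(hi_dollars), mu, sigma)
            - norm_cdf(math.log10(lo_dollars), mu, sigma))

print(prob_range(10, 100))          # the $10-$100 decade...
print(prob_range(10_000, 100_000))  # ...gets the same mass as $10k-$100k
```

The two tails are exactly equal by the symmetry of the bell curve on the log scale, which is the property Salviati is appealing to.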

Simplicio: How do you know that everyone's beliefs are shaped like that?

Salviati: Because everyone has a causal model in their head with a node for "order of magnitude of Bitcoin's value", and that node varies in the characteristically linear fashion of a Bayes net.

Simplicio: I don't feel confident in that explanation.

Salviati: Then take whatever explanation you give yourself to explain the effectiveness of Fermi estimates. Those output a bell curve on a log scale too, and it seems like estimating Bitcoin's future value should have a lot of methodology in common with doing back-of-the-envelope calculations about the blast radius of a nuclear bomb.

Simplicio: Alright.

Salviati: So the causality of Bitcoin prices roughly looks like this:

[Beliefs about order of magnitude of Bitcoin's future value] --> [Beliefs about Bitcoin's future price] --> [Trading decisions]

Simplicio: Okay, I see how the first node can fluctuate a lot in reaction to daily news events, and that would have a disproportionately high effect on the last node. But how can an efficient market avoid that kind of log-scale fluctuation? Efficient markets always reflect a consensus estimate of an asset's price, and it's rational to arrive at an estimate that fluctuates on a log scale!

Salviati: Actually, I think a truly efficient market shouldn't just skip around across orders of magnitude just because expectations of future prices do. I think truly efficient markets show some degree of "drag", which should be invisible in typical cases like publicly-traded stocks, but becomes noticeable in cases of order-of-magnitude value-uncertainty like Bitcoin.

Simplicio: So you think you're the only one smart enough to notice that it's worth trading Bitcoin so as to create drag on Bitcoin's log-scale random walk?

Salviati: Yeah, I think maybe I am.


Salviati is claiming that his empirical observations show a lack of drag on Bitcoin price shifts, which would be actionable evidence of inefficiency. Discuss.

To Learn Critical Thinking, Study Critical Thinking

26 gwern 07 July 2012 11:50PM

Critical thinking courses may increase students’ rationality, especially if they do argument mapping.

The following excerpts are from “Does philosophy improve critical thinking skills?”, Ortiz 2007.

1 Excerpts

This thesis makes a first attempt to subject the assumption that studying [Anglo-American analytic] philosophy improves critical thinking skills to rigorous investigation.

…Thus the second task, in Chapter 3, is to articulate and critically examine the standard arguments that are raised in support of the assumption (or rather, would be raised if philosophers were in the habit of providing support for the assumption). These arguments are found to be too weak to establish the truth of the assumption. The failure of the standard arguments leaves open the question of whether the assumption is in fact true. The thesis argues at this point that, since the assumption is making an empirical assertion, it should be investigated using standard empirical techniques as developed in the social sciences. In Chapter 4, I conduct an informal review of the empirical literature. The review finds that evidence from the existing empirical literature is inconclusive. Chapter 5 presents the empirical core of the thesis. I use the technique of meta-analysis to integrate data from a large number of empirical studies. This meta-analysis gives us the best yet fix on the extent to which critical thinking skills improve over a semester of studying philosophy, general university study, and studying critical thinking. The meta-analysis results indicate that students do improve while studying philosophy, and apparently more so than general university students, though we cannot be very confident that this difference is not just the result of random variation. More importantly, studying philosophy is less effective than studying critical thinking, regardless of whether one is being taught in a philosophy department or in some other department. Finally, studying philosophy is much less effective than studying critical thinking using techniques known to be particularly effective such as LAMP.

continue reading »

Exploring the Idea Space Efficiently

22 Elithrion 08 April 2012 04:28AM

Simon is writing a calculus textbook. Since there are a lot of textbooks on the market, he wants to make his distinctive by including a lot of original examples. To do this, he decides to first check what sorts of examples are in some of the other books, and then make sure to avoid those. Unfortunately, after skimming through several other books, he finds himself completely unable to think of original examples—his mind keeps returning to the examples he's just read instead of coming up with new ones.

What he's experiencing here is another aspect of priming or anchoring. The way it appears to happen in my brain is that it decides to anchor on the examples it's already seen and explore the idea-space from there, moving from an idea only to ideas that are closely related to it (similarly to a depth-first search)

At first, this search strategy might not seem so bad—in fact, it's ideal if there is one best solution and the closer you get to it the better. For example, if you were shooting arrows at a target, all you'd need to consider is how close to the center you can hit. Where we run into problems, however, is trying to come up with multiple solutions (such as multiple examples of the applications of calculus), or trying to come up with the best solution when there are many plausible solutions. In these cases, our brain's default search algorithm will often grab the first idea it can think of and try to refine it, even if what we really need is a completely different idea.
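A toy illustration of the difference (my construction, not the post's): greedy "refine the idea you already have" search versus the same search restarted from scattered points across the idea space.

```python
import math
import random

def hill_climb(f, start, step=0.1, iters=300, seed=0):
    """Greedy local refinement: accept only nearby moves that improve f."""
    rng = random.Random(seed)
    x = start
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if f(cand) > f(x):
            x = cand
    return x

def quality(x):
    # An idea-space with two peaks: a mediocre idea near 0,
    # and a much better one near 5, separated by a valley.
    return math.exp(-x ** 2) + 3 * math.exp(-(x - 5) ** 2)

# Anchored on the familiar idea (start at 0), refinement never escapes:
anchored = hill_climb(quality, start=0.0)
print(round(anchored, 2))   # stuck near the mediocre peak at 0

# Deliberately scattering starting points across the idea space:
starts = range(-8, 9, 2)
best = max((hill_climb(quality, s) for s in starts), key=quality)
print(round(best, 2))       # finds the better peak near 5
```

The anchored search only ever refines the idea it started with; the restarts are the code equivalent of forcing yourself to begin from somewhere other than the examples you just read.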

continue reading »

The Bias You Didn't Expect

92 Psychohistorian 14 April 2011 04:20PM

There are few places where society values rational, objective decision making as much as it values it in judges. While there is a rather cynical discipline called legal realism that says the law is really based on quirks of individual psychology, "what the judge had for breakfast," there's a broad social belief that the decisions of judges are unbiased. And where they aren't unbiased, they're biased for Big, Important, Bad reasons, like racism or classism or politics.

It turns out that legal realism is totally wrong. It's not what the judge had for breakfast. It's how recently the judge had breakfast. A new study (media coverage) on Israeli judges shows that, when making parole decisions, they grant about 65% of requests just after meal breaks, with the rate falling almost all the way to 0% right before breaks and at the end of the day (i.e. as far from the last break as possible). There's a relatively linear decline between the two points.

continue reading »

Dead men tell tales: falling out of love with SIA

2 Stuart_Armstrong 18 February 2011 02:10PM

SIA is the Self Indication Assumption, an anthropic theory about how we should reason about the universe given that we exist. I used to love it; the argument that I've found most convincing about SIA was the one I presented in this post. Recently, I've been falling out of love with SIA, and moving more towards a UDT version of anthropics (objective probabilities and total impact of your decision being of a specific type, including in all copies of you and enemies with the same decision process). So it's time I revisit my old post, and find the hole.

The argument rested on the plausible sounding assumption that creating extra copies and killing them is no different from if they hadn't existed in the first place. More precisely, it rested on the assumption that if I was told "You are not one of the agents I am about to talk about. Extra copies were created to be destroyed," it was exactly the same as hearing  "Extra copies were created to be destroyed. And you're not one of them."

But I realised that from the UDT/TDT perspective, there is a great difference between the two situations, if I have the time to update decisions in the course of the sentence. Consider the following three scenarios:

  • Scenario 1 (SIA):

Two agents are created, then one is destroyed with 50% probability. Each living agent is entirely selfish, with utility linear in money, and the dead agent gets nothing. Every survivor will be presented with the same bet. Then you should take the SIA 2:1 odds that you are in the world with two agents. This is the scenario I was assuming.

  • Scenario 2 (SSA):

Two agents are created, then one is destroyed with 50% probability. Each living agent is entirely selfish, with utility linear in money, and the dead agent is altruistic towards his survivor. This is similar to my initial intuition in this post. Note that every agent has the same utility: "as long as I live, I care about myself, but after I die, I'll care about the other guy", so you can't distinguish them based on their utility. As before, every survivor will be presented with the same bet.

Here, once you have been told the scenario, but before knowing whether anyone has been killed, you should pre-commit to taking 1:1 odds that you are in the world with two agents. And in UDT/TDT precommitting is the same as making the decision.
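Scenario 1's 2:1 odds come straight from counting survivors. A minimal sketch (mine, not from the post), tallying where a randomly chosen survivor finds themselves:

```python
import random

rng = random.Random(0)
survivors_in_two_agent_worlds = 0
total_survivors = 0

# Run the setup many times: two agents are created, then with 50%
# probability one of them is destroyed.
for _ in range(100_000):
    survivors = 2 if rng.random() < 0.5 else 1
    total_survivors += survivors
    if survivors == 2:
        survivors_in_two_agent_worlds += 2

# Fraction of survivors who are in a two-agent world:
print(survivors_in_two_agent_worlds / total_survivors)  # ~2/3, i.e. 2:1 odds
```

Weighting by survivors gives SIA's 2:1; weighting each world equally regardless of survivor count gives Scenario 2's 1:1, which is exactly where the two scenarios part ways.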

continue reading »

Statistical Prediction Rules Out-Perform Expert Human Judgments

68 lukeprog 18 January 2011 03:19AM

A parole board considers the release of a prisoner: Will he be violent again? A hiring officer considers a job candidate: Will she be a valuable asset to the company? A young couple considers marriage: Will they have a happy marriage?

The cached wisdom for making such high-stakes predictions is to have experts gather as much evidence as possible, weigh this evidence, and make a judgment. But 60 years of research has shown that in hundreds of cases, a simple formula called a statistical prediction rule (SPR) makes better predictions than leading experts do. Or, more exactly:

When based on the same evidence, the predictions of SPRs are at least as reliable as, and are typically more reliable than, the predictions of human experts for problems of social prediction.1

For example, one SPR developed in 1995 predicts the price of mature Bordeaux red wines at auction better than expert wine tasters do. Reaction from the wine-tasting industry to such wine-predicting SPRs has been "somewhere between violent and hysterical."

How does the SPR work? This particular SPR is called a proper linear model, which has the form:

P = w1(c1) + w2(c2) + w3(c3) + ... + wn(cn)

The model calculates the summed result P, which aims to predict a target property such as wine price, on the basis of a series of cues. Above, cn is the value of the nth cue, and wn is the weight assigned to the nth cue.2

In the wine-predicting SPR, c1 reflects the age of the vintage, and other cues reflect relevant climatic features where the grapes were grown. The weights for the cues were assigned on the basis of a comparison of these cues to a large set of data on past market prices for mature Bordeaux wines.3
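To make that recipe concrete, here is a sketch of fitting a proper linear model by least squares. The cues, weights, and data are invented for illustration; this is not the actual wine-price model from the post:

```python
import numpy as np

# Hypothetical cues: c1 = vintage age, c2 = growing-season temperature,
# c3 = harvest rainfall.  The "true" weights below generate the data.
rng = np.random.default_rng(0)
true_w = np.array([1.5, 4.0, -2.0])

# Synthetic stand-in for "past market data": 200 wines with known cues,
# and noisy prices produced by the weighted sum of those cues.
cues = rng.normal(size=(200, 3))
prices = cues @ true_w + rng.normal(scale=0.5, size=200)

# "Assigning the weights on the basis of past data" = least squares.
w, *_ = np.linalg.lstsq(cues, prices, rcond=None)
print(np.round(w, 1))   # recovers weights close to [1.5, 4.0, -2.0]

# Predicting a new wine is just P = w1*c1 + w2*c2 + w3*c3:
new_wine = np.array([2.0, 1.0, 0.5])
print(new_wine @ w)
```

The prediction step is deliberately boring, a single weighted sum, which is the point: the SPR's reliability comes from the weights being disciplined by data rather than by expert intuition.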

There are other ways to construct SPRs, but rather than survey these details, I will instead survey the incredible success of SPRs.

continue reading »

How are critical thinking skills acquired? Five perspectives

9 matt 22 October 2010 02:29AM

Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking; Debate tools: an experience report

In "How are critical thinking skills acquired? Five perspectives", Tim van Gelder discusses acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.

In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis.   This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice.  We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems.   To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach.   In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students.   (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)

LW has been introduced to argument mapping before

SIA won't doom you

8 Stuart_Armstrong 25 March 2010 05:43PM

Katja Grace has just presented an ingenious model, claiming that SIA combined with the great filter generates its own variant of the doomsday argument. Robin echoed this on Overcoming Bias. We met soon after Katja had come up with the model, and I signed up to it, saying that I could see no flaw in the argument.

Unfortunately, I erred. The argument does not work in the form presented.

First of all, there is the issue of time dependence. We are not just a human level civilization drifting through the void in blissful ignorance about our position in the universe. We know (approximately) the age of our galaxy, and the time elapsed since the big bang.

How is this relevant? It is relevant because all arguments about the great filter are time-dependent. Imagine we had just reached consciousness and human-level civilization, by some fluke, two thousand years after the creation of our galaxy, by an evolutionary process that took two thousand years. We see no aliens around us. In this situation, we have no reason to suspect any great filter; if we asked ourselves "are we likely to be the first civilization to reach this stage?" then the answer is probably yes. No evidence for a filter.

Imagine, instead, that we had reached consciousness a trillion years into the life of our galaxy, again via an evolutionary process that took two thousand years, and we see no aliens or traces of aliens. Then the evidence for a filter is overwhelming; something must have stopped all those previous likely civilizations from emerging into the galactic plane.

So neither of these civilizations can be included in our reference class (indeed, the second one can only exist if we ourselves are filtered!). So the correct reference class to use is not "the class of all potential civilizations in our galaxy that have reached our level of technological advancement and seen no aliens", but "the class of all potential civilizations in our galaxy that have reached our level of technological advancement at around the same time as us and seen no aliens". Indeed, SIA, once we update on the present, cannot tell us anything about the future.

But there's more.

continue reading »

Necessary, But Not Sufficient

44 pjeby 23 March 2010 05:11PM

There seems to be something odd about how people reason in relation to themselves, compared to the way they examine problems in other domains.

In mechanical domains, we seem to have little problem with the idea that things can be "necessary, but not sufficient".  For example, if your car fails to start, you will likely know that several things are necessary for the car to start, but not sufficient for it to do so.  It has to have fuel, ignition, and compression, and oxygen...  each of which in turn has further necessary conditions, such as an operating fuel pump, electricity for the spark plugs, electricity for the starter, and so on.

And usually, we don't go around claiming that "fuel" is a magic bullet for fixing the problem of car-not-startia, or argue that if we increase the amount of electricity in the system, the car will necessarily run faster or better.

For some reason, however, we don't seem to apply this sort of necessary-but-not-sufficient thinking to systems above a certain level of complexity...  such as ourselves.

When I wrote my previous post about the akrasia hypothesis, I mentioned that there was something bothering me about the way people seemed to be reasoning about akrasia and other complex problems.  And recently, with taw's post about blood sugar and akrasia, I've realized that the specific thing bothering me is the absence of causal-chain reasoning there.

continue reading »

The Presumptuous Philosopher's Presumptuous Friend

3 PlaidX 05 October 2009 05:26AM

One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.

"Which hotel are we in, I wonder?" you ask.

"The big one, obviously" says the presumptuous philosopher. "Because of anthropic reasoning and all that. Million to one odds."

"Rubbish!" you scream. "Rubbish and poppycock! We're just as likely to be in any hotel omega builds, regardless of the number of observers in that hotel."

"Unless there are no observers, I assume you mean" says the presumptuous philosopher.

"Right, that's a special case where the number of observers in the hotel matters. But except for that it's totally irrelevant!"

"In that case," says the presumptuous philosopher, "I'll make a deal with you. We'll go outside and check, and if we're at the small hotel I'll give you ten bucks. If we're at the big hotel, I'll just smile smugly."

"Hah!" you say. "You just lost an expected five bucks, sucker!"

You run out of the room to find yourself in a huge, ten-thousand-story atrium, filled with throngs of yourselves and smug-looking presumptuous philosophers.
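The disputed "expected five bucks" turns entirely on which rule sets P(small hotel); a quick arithmetic sketch (framing mine):

```python
# The deal: $10 to you if you're in the small hotel, nothing otherwise.

# "Equal probability per hotel" (your rule): P(small) = 1/2.
p_small_equal = 0.5
print(p_small_equal * 10)   # expected $5.00 -- the "expected five bucks"

# Observer-weighted (the philosopher's rule, the story's million-to-one):
p_small_sia = 1 / 1_000_001
print(p_small_sia * 10)     # ~$0.00001 -- effectively nothing
```

The atrium outside is the story's way of settling which probability was the right one to use.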
