A proposed inefficiency in the Bitcoin markets

3 Liron 27 December 2013 03:48AM
Salviati: Simplicio, do you think the Bitcoin markets are efficient?

Simplicio: If you'd asked me two years ago, I would have said no. I know hindsight is 20/20, but even at the time, I think the fact that relatively few people were trading it would have risen to prominence in my analysis.

Salviati: And what about today?

Simplicio: Today, it seems like there's no shortage of trading volume. The hedge funds of the world have heard of Bitcoin, and had their quants do their fancy analyses on it, and they actively trade it.

Salviati: Well, I'm certainly not a quant, but I think I've spotted a systematic market inefficiency. Would you like to hear it?

Simplicio: Nah, I'm good.

Salviati: Did you hear what I said? I think I've spotted an exploitable pattern of price movements in a $10 Billion market. If I'm right, it could make us a lot of money.

Simplicio: Sure, but you won't convince me that whatever pattern you're thinking of is a "reliable" one.

Salviati: Come on, you don't even know what my argument is.

Simplicio: But I know how your argument is going to be structured. First you're going to identify some property of Bitcoin prices in past data. Then you'll explain some causal model you have which supposedly accounts for why prices have had that property in the past. Then you'll say that your model will continue to account for that same property in future Bitcoin prices.

Salviati: Yeah, so? What's wrong with that?

Simplicio: The problem is that you are not a trained quant, and therefore, your brain is not capable of bringing a worthwhile property of Bitcoin prices to your attention.

Salviati: Dude, I just want to let you know because this happens often and no one else is ever going to say anything: you're being a dick.

Simplicio: Look, quants are good at their job. To a first approximation, quants are like perfect Bayesian reasoners who maintain a probability distribution over the "reliability" of every single property of Bitcoin prices that you and I are capable of formulating. So this argument you're going to make to me, a quant has already made to another quant, and the other quant has incorporated it into his hedge fund's trading algorithms.

Salviati: Fine, but so what if quants have already figured out my argument for themselves? We can make money on it too.

Simplicio: No, we can't. I told you I'm pretty confident that the market is efficient, i.e. anti-inductive, meaning the quants of the world haven't left behind any reliable patterns that an armchair investor like you can detect and profit from.

Salviati: Would you just shut up and let me say my argument?

Simplicio: Whatever, knock yourself out.

Salviati: Ok, here goes. Everyone knows Bitcoin prices are volatile, right?

Simplicio: Yeah, highly volatile. But at any given moment, you don't know if the volatility is going to move the price up or down next. From your state of knowledge, it looks like a random walk. If today's Bitcoin price is $1000, then tomorrow's price is as likely to be $900 as it is to be $1100.

Salviati: I agree that the Random Walk Hypothesis provides a good model of prices in efficient markets, and that the size of each step in a random walk provides a good model of price volatility in efficient markets.

Simplicio: See, I told you you wouldn't convince me.

Salviati: Ah, but my empirical observation of Bitcoin prices is inconsistent with the Random Walk Hypothesis. So I'm led to conclude that the Bitcoin market is not efficient.

Simplicio: What do you mean "inconsistent"?

Salviati: I mean Bitcoin's past prices don't look much like a random walk. They look more like a random walk on a log scale. If today's price is $1000, then tomorrow's price is equally likely to be $900 or $1111. So if I buy $1000 of Bitcoin today, I expect to have 0.5($900) + 0.5($1111) = $1005.50 tomorrow.
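[A quick numerical check of Salviati's arithmetic, as a sketch: the $100 linear step and 10% log step below are the round numbers from the dialogue, not values fitted to real price data.]

```python
# Simulate one day under each model, starting from today's $1000 price.
import numpy as np

rng = np.random.default_rng(0)
today = 1000.0
n = 1_000_000

# Linear random walk: tomorrow is today plus a symmetric dollar-sized step.
linear = today + rng.choice([-100.0, 100.0], size=n)

# Log random walk: tomorrow is today times a symmetric multiplicative step,
# equally likely to be 0.9x ($900) or (1/0.9)x (~$1111).
log_walk = today * rng.choice([0.9, 1.0 / 0.9], size=n)

print(linear.mean())    # ~1000.00: zero expected gain
print(log_walk.mean())  # ~1005.56: the upward drift Salviati computes
```

[The log walk is symmetric around $1000 on a log scale, so only its arithmetic mean drifts upward: the up-move gains more dollars than the down-move loses.]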

Simplicio: How do you know that? Did you write a script to loop through Bitcoin's daily closing price on Mt. Gox and simulate the behavior of a Bayesian reasoner with a variable-step-size random-walk prior and a second Bayesian reasoner with a variable-step-size log-random-walk prior, and thus calculate a much higher Bayesian Score for the log-random-walk model?

Salviati: Yeah, I did.

Simplicio: That's very virtuous of you.

[This is a fictional dialogue. The truth is, I was too lazy to do that. Can someone please do that? I would much appreciate it. --Liron.]
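[A minimal sketch of what that script might look like. Two loud assumptions: `mtgox_daily_closes.csv` is a hypothetical one-column file of daily closing prices, and each model's step size is fit by maximum likelihood instead of being integrated over a variable-step-size prior, a crude stand-in for the full Bayesian comparison.]

```python
# Compare how well a linear random walk and a log random walk fit a
# series of daily closing prices, scored as log-likelihoods.
import numpy as np

def gaussian_log_likelihood(steps, sigma):
    """Sum of log N(step; 0, sigma^2) over all observed steps."""
    return np.sum(-0.5 * (steps / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi)))

def score_linear_walk(closes):
    """Log-likelihood of the prices under a random walk in price space."""
    steps = np.diff(closes)
    return gaussian_log_likelihood(steps, steps.std())

def score_log_walk(closes):
    """Log-likelihood under a random walk in log-price space. The extra
    -log(price) terms are the Jacobian of the log transform, so both models
    assign densities to the same variable: the next day's price."""
    steps = np.diff(np.log(closes))
    return gaussian_log_likelihood(steps, steps.std()) - np.sum(np.log(closes[1:]))

closes = np.loadtxt("mtgox_daily_closes.csv")  # hypothetical data file
print("linear random walk:", score_linear_walk(closes))
print("log random walk:   ", score_log_walk(closes))
# If Salviati is right, the second score should be substantially higher.
```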

Salviati: So, have I convinced you that the market isn't anti-inductive now?

Simplicio: Well, you've empirically demonstrated that the log Random Walk Hypothesis was a good model for predicting Bitcoin prices in the past. But that's just a historical pattern. My original point was that you're not qualified to evaluate which historical patterns are *reliable* patterns. The Bitcoin markets are full of pattern-annihilating forces, and you're not qualified to evaluate which past-data-fitting models are eligible for future-data-fitting.

Salviati: Ok, I'm not saying you have to believe that the future accuracy of log-Random-Walk will probably be higher than the future accuracy of linear Random Walk. I'm just saying you should perform a Bayesian update in the direction of that conclusion.

Simplicio: Ok, but the only reason the update has nonzero strength is because I assigned an a priori chance of 10% to the set of possible worlds wherein Bitcoin markets were inefficient, and that set of possible worlds gives a higher probability that a model like your log-Random-Walk model would fit the price data well. So I update my beliefs to promote the hypothesis that Bitcoin is inefficient, and in particular that it is inefficient in a log-Random-Walk way.

Salviati: Thanks. And hey, guess what: I think I've traced the source of the log-Random-Walk regularity.

Simplicio: I'm surprised you waited this long to mention that.

Salviati: I figured that if I mentioned it earlier, you'd snap back about how efficient markets sever the causal connection between would-be price-regularity-causing dynamics and actual prices.

Simplicio: Fair enough.

Salviati: Anyway, the reason Bitcoin prices follow a log-Random-Walk is because they reflect the long-term Expected Value of Bitcoin's actual utility.

Simplicio: Bitcoin has no real utility.

Salviati: It does. It's liquid in novel, qualitatively different ways. It's kind of anonymous. It's a more stable unit of account than the official currencies of some countries.

Simplicio: Come on, how much utility is all that really worth in expectation?

Salviati: I don't know. The Bitcoin economy could be anywhere from hundreds of millions of dollars to trillions of dollars. Our belief about the long-term future value of a single BTC is spread out across a range whose 90% confidence interval is something like [$10, $100,000].

Simplicio: Are you saying it's spread out over the interval [$10, $100,000] in a uniform distribution?

Salviati: Nope, it's closer to a bell curve centered at $1000 on a log scale. It gives equal probability of ~10% both to the $10-100 range and to the $10,000-100,000 range.

Simplicio: How do you know that everyone's beliefs are shaped like that?

Salviati: Because everyone has a causal model in their head with a node for "order of magnitude of Bitcoin's value", and that node varies in the characteristically linear fashion of a Bayes net.

Simplicio: I don't feel confident in that explanation.

Salviati: Then take whatever explanation you give yourself for the effectiveness of Fermi estimates. Those output a bell curve on a log scale too, and it seems like estimating Bitcoin's future value should have a lot of methodology in common with doing back-of-the-envelope calculations about the blast radius of a nuclear bomb.

Simplicio: Alright.

Salviati: So the causality of Bitcoin prices roughly looks like this:

[Beliefs about order of magnitude of Bitcoin's future value] --> [Beliefs about Bitcoin's future price] --> [Trading decisions]

Simplicio: Okay, I see how the first node can fluctuate a lot in reaction to daily news events, and that would have a disproportionately high effect on the last node. But how can an efficient market avoid that kind of log-scale fluctuation? Efficient markets always reflect a consensus estimate of an asset's price, and it's rational to arrive at an estimate that fluctuates on a log scale!

Salviati: Actually, I think a truly efficient market shouldn't skip around across orders of magnitude just because expectations of future prices do. I think truly efficient markets show some degree of "drag", which should be invisible in typical cases like publicly-traded stocks, but becomes noticeable in cases of order-of-magnitude value-uncertainty like Bitcoin.

Simplicio: So you think you're the only one smart enough to notice that it's worth trading Bitcoin so as to create drag on Bitcoin's log-scale random walk?

Salviati: Yeah, I think maybe I am.


Salviati is claiming that his empirical observations show a lack of drag on Bitcoin price shifts, which would be actionable evidence of inefficiency. Discuss.

The Centre for Applied Rationality: a year later from a (somewhat) outside perspective

40 Swimmer963 27 May 2013 06:31PM

I recently had the privilege of being a CFAR alum volunteering at a later workshop, which is a fascinating thing to do, and it put me in a position both to evaluate how much of a difference the first workshop actually made in my life, and to see how the workshops themselves have evolved.

Exactly a year ago, I attended one of the first workshops, back when they were still inexplicably called “minicamps”. I wasn't sure what to expect, and I especially wasn't sure why I had been accepted. But I bravely bullied the nursing faculty staff until they reluctantly let me switch a day of clinical around, and later stumbled off my plane into the San Francisco airport in a haze of exhaustion. The workshop spat me out three days later, twice as exhausted, with teetering piles of ideas and very little time or energy to apply them. I left with a list of annual goals, which I had never bothered to have before, and a feeling that more was possible–this included the feeling that more would have been possible if the workshop had been longer and less chaotic, if I had slept more the week before, if I hadn't had to rush out on Sunday evening to catch a plane and miss the social. 

Like I frequently do on Less Wrong the website, I left the minicamp feeling a bit like an outsider, but also a bit like I had come home. As well as my written goals, I made an unwritten pre-commitment to come back to San Francisco later, for longer, and see whether I could make the "more is possible" in my head more specific. Of the thirteen written goals on my list, I fully accomplished only four and partially accomplished five, but I did make it back to San Francisco, at the opportunity cost of four weeks of hospital shifts.

A week or so into my stay, while I shifted around between different rationalist shared houses and attempted to max out interesting-conversations-per-day, I found out that CFAR was holding another May workshop. I offered to volunteer, proved my sincerity by spending 6 hours printing and sticking nametags, and lived on site for another 4-day weekend of delightful information overload and limited sleep.

Before the May 2012 workshop, I had a low prior that any four-day workshop could be life-changing in a major way. A four-year nursing degree, okay–I've successfully retrained my social skills and my ability to react under pressure by putting myself in particular situations over and over and over and over again. Four days? Nah. Brains don't work that way. 

In my experience, it's exceedingly hard for the human brain to do anything deliberately. In Kahneman-speak, habits are System 1, effortless and automatic. Doing things on purpose involves System 2, effortful and a bit aversive. I could have had a much better experience in my final intensive care clinical if I'd thought to open up my workshop notes and tried to address the causes of aversions, or use offline time to train habits, or, y'know, do anything on purpose instead of floundering around trying things at random until they worked.

(Then again, I didn't apply concepts like System 1 and System 2 to myself a year ago. I read 'Thinking Fast and Slow' by Kahneman and 'Rationality and the Reflective Mind' by Stanovich as part of my minicamp goal to 'read 12 hard nonfiction books this year', most of which came from the CFAR recommended reading list. My preceptor had no idea what I was saying when I explained to her that she was running particular nursing skills on System 1, because they were ingrained at the level of habit, while I was running the same tasks on System 2 in working memory, because they were new and confusing to me, and that this was why I appeared to have poor time management: System 2 takes forever to do anything. This terminology might have helped. Oh, for the world where everyone knows all jargon!)

...And here I am, setting aside a month of my life to think only about rationality. I can't imagine that my counterfactual self-who-didn't-attend-in-May-2012 would be here. I can't imagine that being here now will have zero effect on what I'm doing in a year, or ten years. Bingo. I did one thing deliberately!

So what was the May 2013 workshop actually like?

The curriculum has shifted around a lot in the past year, and I think with 95% probability that it's now more concretely useful. (Speaking of probabilities, the prediction markets during the workshop seemed to flow better and be more fun and interesting this time, although this may just show that I used to be more averse to games in general and betting in particular. In that case, yay for partly-cured aversions!)

The classes are grouped in an order that allows them to build on each other usefully, and they've been honed by practice into forms that successfully teach skills, instead of just putting words in the air and on flipcharts. For example, having a personal productivity system like GTD came across as a culturally prestigious thing at the last workshop, but there wasn't a lot of useful curriculum on it. Of course, I left on this trip wanting to spend my offline month creating a GTD system better than paper to-do lists taped to walls, so I have both motivation and a low threshold for improvement.

There are also some completely new classes, including "Againstness training" by Valentine, which seems to relate to some of the 'reacting under pressure' stuff in interesting ways, and gave me vocabulary and techniques for something I've been doing inefficiently by trial and error for a good part of my life.

In general, there are more classes about emotions, both how to deal with them when they're in the way and how to use them when they're the best tool available. Given that none of us are Spock, I think this is useful. 

Rejection therapy has morphed into a less terrifying and more helpful form with the awesome name of CoZE (Comfort Zone Expansion). I didn't personally find the original rejection therapy all that awful, but some people did, and that problem is largely solved. 

The workshops are vastly more orderly and organized. (I like to think I contributed to this slightly with my volunteer skills of keeping the fridge stocked with water bottles and calling restaurants to confirm orders and make sure food arrived on time.) Classes began and ended on time. The venue stayed tidy. The food was excellent. It was easier to get enough sleep. Etc. The May 2012 venue had a pool, and this one didn't, which made exercise harder for addicts like me. CFAR staff are talking about solving this. 

The workshops still aren't an easy environment for introverts. The negative parts of my experience in May 2012 were mostly because of this. It was easier this time, because as a volunteer I could skip classes if I started to feel socially overloaded, but periods of quiet alone time had to be effortfully carved out of the day, and at an opportunity cost of missing interesting conversations. I'm not sure if this problem is solvable without either making the workshops longer (to space the material out), and thus less accessible for people with jobs, or cutting out curriculum. Either would impose a cost on the extroverts who don't want an hour at lunch to meditate or go running alone or read a sci-fi book, etc.

In general, I found the May 2012 workshop too short and intense–we had material thrown at us at a rate far exceeding the usual human idea-digestion rate. Keeping in touch via Skype chats with other participants helped. CFAR now does official followups with participants for six weeks following the workshop. 

Meeting the other participants was, as usual, the best part of the weekend. The group was quite diverse, although I was still the only health care professional there. (Whyyy???? The health care system needs more rationality so badly!) The conversations were engaging. Many of the participants seem eager to stay in touch. The May 2012 workshop has a total of six people still on the Skype chats list, which is a 75% attrition rate. CFAR is now working on strategies to help people who want to stay in touch do it successfully. 

Conclusions?

I thought the May 2012 workshop was awesome. I thought the May 2013 workshop was about an order of magnitude more awesome. I would say that now is a great time to attend a CFAR workshop...except that the organization is financially stable and likely to still be around in a year and producing even better workshops. So I'm not sure. Then again, rationality skills have compound interest–the value of learning some new skills now, even if they amount more to vocab words and mental labels than superpowers, compounds over the year that you spend seeing all the books you read and all the opportunities you have in that framework. I'm glad I went a year ago instead of this May. I'm even more glad I had the opportunity to see the new classes and meet the new participants a year later. 


Amanda Knox: post mortem

23 gwern 20 October 2011 04:10PM

Continuing my interest in tracking real-world predictions, I notice that the recent acquittal of Knox & Sollecito offers an interesting opportunity - specifically, many LessWrongers gave probabilities for guilt back in 2009 in komponisto’s 2 articles:

  1. You Be the Jury: Survey on a Current Event
  2. The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom

Both were interesting exercises, and it’s time to do a followup. Specifically, there are at least 3 new pieces of evidence to consider:

  1. the failure of any damning or especially relevant evidence to surface in the ~2 years since (see also: the hope function)
  2. the independent experts’ report on the DNA evidence
  3. the freeing of Knox & Sollecito, and continued imprisonment of Rudy Guede (with reduced sentence)

Point 2 particularly struck me (the press attributes much of the acquittal to the expert report, an acquittal I had not expected), but other people may find the other 2 points or unmentioned news more weighty.


Two Truths and a Lie

59 Psychohistorian 23 December 2009 06:34AM

Response to Man-with-a-hammer syndrome.

It's been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I'd like to posit a simple way to spot such errors, with the caveat that it may not work for every case.

There's an old game called Two Truths and a Lie. I'd bet almost everyone's heard of it, but I'll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people, people get points for not being fooled. That's it. I'd like to propose a rationalist's version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theory-Of-Everythings, and Perfectly General Explanations. It's almost as simple.

Say you have a theory about human behaviour. Get a friend to do a little research and assert three factual claims about how people behave that your theory would realistically apply to. At least one of these claims must be false. See if you can explain every claim using your theory before learning which one's false. 

If you can come up with a convincing explanation for all three statements, you must be very cautious when using your One Theory. If it can explain falsehoods, there's a very high risk you're going to use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not consistent enough to do so reliably. You must exercise the utmost caution in applying your One Theory, if not abandon reliance on it altogether. If, on the other hand, you can't come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there's at least a chance you're on to something.

Come to think of it, this is an excellent challenge to any proponent of a Big Idea. Give them three purported facts, at least one of which is false, and see if their Idea can discriminate. Just remember to be ruthless when they get it wrong; it doesn't prove their idea is totally wrong, only that reliance upon it would be.

Edited to clarify: My argument is not that one should simply abandon a theory altogether. In some cases, this may be justified, if all the theory has going for it is its predictive power, and you show it lacks that, toss it. But in the case of broad, complex theories that actually can explain many divergent outcomes, this exercise should teach you not to rely on that theory as a means of inference. Yes, you should believe in evolution. No, you shouldn't make broad inferences about human behaviour without any data because they are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.

The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom

42 komponisto 13 December 2009 04:16AM

Note: The quantitative elements of this post have now been revised significantly.

Followup to: You Be the Jury: Survey on a Current Event

All three of them clearly killed her. The jury clearly believed so as well which strengthens my argument. They spent months examining the case, so the idea that a few minutes of internet research makes [other commenters] certain they're wrong seems laughable

- lordweiner27, commenting on my previous post

The short answer: it's very much like how a few minutes of philosophical reflection trump a few millennia of human cultural tradition.

Wielding the Sword of Bayes -- or for that matter the Razor of Occam -- requires courage and a certain kind of ruthlessness. You have to be willing to cut your way through vast quantities of noise and focus in like a laser on the signal.

But the tools of rationality are extremely powerful if you know how to use them.

Rationality is not easy for humans. Our brains were optimized to arrive at correct conclusions about the world only insofar as that was a necessary byproduct of being optimized to pass the genetic material that made them on to the next generation. If you've been reading Less Wrong for any significant length of time, you probably know this by now. In fact, around here this is almost a banality -- a cached thought. "We get it," you may be tempted to say. "So stop signaling your tribal allegiance to this website and move on to some new, nontrivial meta-insight."

But this is one of those things that truly do bear repeating, over and over again, almost at every opportunity. You really can't hear it enough. It has consequences, you see. The most important of which is: if you only do what feels epistemically "natural" all the time, you're going to be, well, wrong. And probably not just "sooner or later", either. Chances are, you're going to be wrong quite a lot.
