Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance

4 Ander 13 January 2015 08:02PM

LessWrong is where I learned about Bitcoin, several years ago, and my greatest regret is that I did not investigate it sooner, and that people here did not yell at me louder that it was important and worth a look.  In that spirit, I will do so now.

 

First of all, several caveats:

* You should not go blindly buying anything that you do not understand.  If you don't know about Bitcoin, you should start by reading about its history, reading Satoshi's whitepaper, etc.  I will assume that the readers who continue past this point have a decent idea of what Bitcoin is.

* Under absolutely no circumstances should you invest money into Bitcoin that you cannot afford to lose.  "Risk money" only!  That means that if you were to lose 100% of your money, it would not particularly damage your life.  Do not spend money that you will need within the next several years, or ever.  You might in fact want to mentally write off the entire thing as a 100% loss from the start, if that helps.

* Even more strongly, under absolutely no circumstances whatsoever should you borrow money in order to buy Bitcoins, such as using margin, credit card loans, your student loan, etc.  This is much like taking out a loan, going to a casino, and betting it all on black at the roulette wheel.  You would either get very lucky or potentially ruin your life.  It's not worth it, this is reality, and there are no laws of the universe preventing you from losing.

* This post is not "investment advice".

* I own Bitcoins, which makes me biased.  You should update to reflect that I am going to present a pro-Bitcoin case.

 

So why is this potentially a time to buy Bitcoins?  One could think of markets like a pendulum, where price swings from one extreme to another over time, with a very high price corresponding to over-enthusiasm, and a very low price corresponding to despair.  As Benjamin Graham's parable (often retold by Warren Buffett) has it, Mr. Market is like a manic depressive.  One day he walks into your office exuberant, and offers to buy your stocks at a high price.  Another day he is depressed and will sell them for a fraction of that.

The root cause of this phenomenon is confirmation bias.  When things are going well, and the fundamentals of a stock or commodity are strong, the price is driven up, and this results in a positive feedback loop.  Investors take the price increase as confirmation of their belief that things are going well, which reinforces their bias.  The process repeats and builds upon itself during a bull market, until it reaches a point of euphoria in which bad news is completely ignored or disbelieved.

The same process happens in reverse during a price decline, or bear market.  Investors receive the feedback that the price is going down => things are bad, and good news is ignored and disbelieved.  Both of these processes run away for a while until they reach enough of an extreme that the "smart money" (the best-informed and most intelligent agents in the system) realizes that the process has gone too far and switches sides.

 

Bitcoin at this point is certainly somewhere in the despair side of the pendulum.  I don't want to imply in any way that it is not possible for it to go lower.  Picking a bottom is probably the most difficult thing to do in markets, especially before it happens, and everyone who has claimed that Bitcoin was at a bottom for the past year has been repeatedly proven wrong.  (In fact, I feel a tremendous amount of fear in sticking my neck out to create this post, well aware that I could look like a complete idiot weeks or months or years from now and utterly destroy my reputation, yet I will continue anyway).

 

First of all, let's look at the fundamentals of Bitcoin.  On one hand, things are going well.

 

Use of Bitcoin (network effect):

One measurement of Bitcoin's value is the strength of its network effect.  By Metcalfe's law, the value of a network is proportional to the square of the number of nodes in the network.

http://en.wikipedia.org/wiki/Metcalfe%27s_law

Over the long term, Bitcoin's price has generally followed this law (though with wild swings to both the upside and downside as the pendulum swings). 
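
As a toy illustration of what "following Metcalfe's law" means quantitatively (my own sketch; the user counts are made up, and a real analysis would fit the constant k to historical data):

```python
# Toy illustration of Metcalfe's law: network value proportional to n^2.
# The scaling constant k would have to be fit empirically.
def metcalfe_value(n_users: float, k: float = 1.0) -> float:
    return k * n_users ** 2

# Doubling the number of users predicts a 4x increase in network value:
print(metcalfe_value(2e5) / metcalfe_value(1e5))  # -> 4.0
```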

In terms of network effect, Bitcoin is doing well.

 

Bitcoin transactions are hitting all time highs (28-day average of the number of transactions):

https://blockchain.info/charts/n-transactions-excluding-popular?timespan=2year&showDataPoints=false&daysAverageString=28&show_header=true&scale=0&address=

 

The number of Bitcoin addresses is hitting all time highs:

https://blockchain.info/charts/n-unique-addresses?timespan=2year&showDataPoints=false&daysAverageString=28&show_header=true&scale=0&address=

 

Merchant adoption continues to hit new highs:

BitPay/Coinbase continue to report 10% monthly growth in the number of merchants that accept Bitcoin.

Prominent companies that began accepting Bitcoin in the past year include Dell, Overstock, PayPal, and Microsoft.

 

On the other hand, due to the sustained price decline, many Bitcoin businesses that started up in the past two years with venture capital funding have shut down.  This is more an effect of the price decline than a cause, however.  In the past month especially there have been a number of bearish news stories, such as BitPay laying off employees, the exchanges Vault of Satoshi and CEX.io deciding to shut down, and the exchange Bitstamp being hacked and shut down for 3 days, though it ultimately came back up without losing customer funds.

 

The cost to mine a Bitcoin is commonly seen as one indicator of price.   Note that the cost to mine a Bitcoin does not directly determine the *value* or usefulness of a Bitcoin.   I do not believe in the labor theory of value: http://en.wikipedia.org/wiki/Labor_theory_of_value

However, there is a stabilizing effect in commodities, in which over time, the price of an item will often converge towards the cost to produce it due to market forces. 

 

If a Bitcoin is being priced at a value much greater than the cost (in mining equipment and electricity) to create it, people will invest in mining equipment.  This results in increased 'difficulty' of mining and drives down the amount of Bitcoin that you can create with a particular piece of mining equipment.  (The number of Bitcoins created network-wide is fixed per unit of time, and thus the more mining equipment that exists, the less Bitcoin each miner will get.)

If Bitcoin is being priced at a value below the cost to create it, people will stop investing in mining equipment.  This may be a signal that the price is getting too low, and could rise.
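
To see how this pencils out, here is a back-of-the-envelope sketch (my own illustration; the hardware specs, electricity rate, and difficulty are made-up round numbers, not real January 2015 data):

```python
# A block takes difficulty * 2**32 hashes on average, so a miner's expected
# share of the block reward is proportional to its hashrate.

def btc_per_day(hashrate: float, difficulty: float, reward: float = 25.0) -> float:
    blocks_per_day = hashrate * 86400 / (difficulty * 2**32)
    return blocks_per_day * reward

def daily_profit(hashrate, difficulty, btc_price, watts, usd_per_kwh):
    revenue = btc_per_day(hashrate, difficulty) * btc_price
    electricity = watts / 1000 * 24 * usd_per_kwh
    return revenue - electricity

# Hypothetical 2 TH/s ASIC drawing 1100 W at $0.10/kWh, with Bitcoin at $230:
print(daily_profit(2e12, 4.4e10, 230, 1100, 0.10))  # ~$2.60/day, before hardware
```

Whether that margin ever repays the hardware price depends entirely on how fast difficulty grows, which is exactly the feedback described above.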

 

Historically, the one period of time where Bitcoin was priced significantly below the cost to produce it was in late 2011.  It was noted on LessWrong.  The price has not currently fallen to quite the same extent as it did back then (which may indicate that it has further to fall), however the current price relative to the mining cost indicates we are very much in the bearish side of the pendulum.

 

It is difficult to calculate an exact cost to mine a Bitcoin, because this depends on the exact hardware used, your cost of electricity, and a prediction of the future difficulty adjustments that will occur.  However, we can make estimates with websites such as http://www.vnbitcoin.org/bitcoincalculator.php

According to this site, no currently available Bitcoin miner will ever give you back as much money as it cost, factoring in the hardware cost and electricity cost.   More efficient miners which have not yet been released are estimated to pay off in about a year, and only if difficulty grows extremely slowly.

 

There are two important breakpoints when discussing Bitcoin mining profitability.  The first is the point at which your return is enough that it pays for both the electricity and the hardware.  The second is the point at which you make more than your electricity costs, but cannot recover the hardware cost.

 

For example, let's say Alice pays $1000 for Bitcoin mining equipment.  Every day, this mining equipment can return $10 worth of Bitcoin, but it costs $5 of electricity to run.  Her gain for the day is $5, and it would take 200 days at this rate before the mining equipment paid for itself.  Once she has made the decision to purchase the mining equipment, the money spent on the miner is a sunk cost.  The money spent on electricity is not a sunk cost: she still faces the decision, every day, of whether or not to run her mining equipment.  The optimal decision is to continue to run the miner as long as it returns more than the electricity cost.

Over time, the payout she will receive from this hardware will decline, as the difficulty of mining Bitcoin increases.  Eventually, her payout will decline below the electricity cost, and she should shut the miner down.  At this point, if her total gain from running the equipment was higher than the hardware cost, it was a good investment.  If it did not recoup its cost, then it was worse than simply spending the money buying Bitcoin on an exchange in the first place.
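
Here is a minimal simulation of Alice's decision problem (the 1%/day revenue decay standing in for difficulty growth is an assumed number, chosen purely for illustration):

```python
# Run the miner each day that revenue beats electricity (hardware is sunk),
# then check whether the accumulated gain ever repaid the hardware cost.

hardware_cost = 1000.0
electricity_per_day = 5.0
revenue = 10.0   # dollars/day of mined coins at the start
decay = 0.99     # assumed: revenue shrinks 1%/day as difficulty rises

total_gain, day = 0.0, 0
while revenue > electricity_per_day:
    total_gain += revenue - electricity_per_day
    revenue *= decay
    day += 1

print(f"Shut down on day {day}; net mining gain ${total_gain:.0f}")
print("Good investment" if total_gain > hardware_cost
      else "Worse than just buying coins on an exchange")
```

Under these assumed numbers Alice shuts down around day 69 with roughly $155 of net gain, far short of the $1000 hardware cost; she would have done better buying coins outright.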

 

This process creates a feedback into the market price of Bitcoins.  Imagine that Bitcoin investors have two choices: either they can buy Bitcoins (the commodity which has already been produced by others), or they can buy miners and produce Bitcoins for themselves.   If the Bitcoin price falls sufficiently that mining equipment will not recover its costs over time, investment money that would have gone into miners instead goes into Bitcoin, helping to support the price.  As you can see from mining cost calculators, we have passed this point already.  (In fact, we passed it months ago.)

 

The second breakpoint is when the Bitcoin price falls so low that it falls below the electricity cost of running mining equipment.  We have passed this point for many of the less efficient ways to mine.  For example, Cointerra recently shut down its cloud mining pool because it was losing money.  We have not yet passed this point for more recent and efficient miners, but we are getting fairly close to it. Crossing this point has occurred once in Bitcoin's history, in late 2011 when the price bottomed out near $2, before giving birth to the massive bull run of 2012-2013 in which the price rose by a factor of 500.

 

Market Sentiment: 

I was not active in Bitcoin back in 2011, so I cannot compare the present time to the sentiment at the November 2011 bottom.  However, sentiment currently is the worst that I have seen by a significant margin. Again, this does not mean that things could not get much, much worse before they get better!  After all, sentiment has been growing worse for months now as the price declines, and everyone who predicted that it was as bad as it could get and the price could not possibly go below $X has been wrong.  We are in a feedback loop which is strongly pumping bearishness into all market participants, and that feedback loop can continue and has continued for quite a while.

 

A look at market indicators tells us that Bitcoin is very, very oversold, almost historically oversold.  Again, this does not mean that it could not get worse before it gets better. 

 

As I write this, the price of Bitcoin is $230.  For perspective, this is down over 80% from the all time high of $1163 in November 2013.  It is still higher than the roughly $100 level it spent most of mid 2013 at.

* The average price of a Bitcoin since the last time it moved is $314.

https://www.reddit.com/r/BitcoinMarkets/comments/2ez90b/and_the_average_bitcoin_cost_basis_is/

The current price is about .73 of this figure.  This is very low historically, but not the lowest it has ever been; the lowest was about .39 in late 2011.

 

* Short interest (the number of Bitcoins that were borrowed and sold, and must be rebought later) hit all time highs this week, according to data on the exchange Bitfinex, at more than 25000 Bitcoins sold short:

http://www.bfxdata.com/swaphistory/totals.php

 

* Weekly RSI (relative strength index), an indicator which tells if a stock or commodity is 'overbought' or 'oversold' relative to its history, just hit its lowest value ever.
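
For readers who have not met the indicator, here is a minimal sketch of the standard Wilder RSI computation (readings below 30 are conventionally called "oversold"):

```python
# Relative Strength Index with Wilder's smoothing. Values near 0 mean deeply
# oversold, near 100 deeply overbought, relative to the instrument's history.
# Assumes len(closes) > period.

def rsi(closes, period=14):
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period   # seed with a simple average
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period  # Wilder smoothing
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)
```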

 

Many indicators are telling us that Bitcoin is at or near historical levels in terms of the depth of this bear market.  In percentage terms, the price decline is surpassed only by the November 2011 low.  In terms of length, the current decline is more than twice as long as the previous longest bear market.

 

To summarize: At the present time, the market is pricing in a significant probability that Bitcoin is dying.

But there are some indicators (such as # of transactions) which say it is not dying.  Maybe it continues down into oblivion, and the remaining fundamentals which looked bullish turn downwards and never recover.  Remember that this is reality, and anything can happen, and nothing will save you.

 

 

Given all of this, we now have a choice.  People have often compared Bitcoin to making a bet in which you have a 50% chance of losing everything, and a 50% chance of making multiples (far more than 2x) of what you started with. 

There are times when the payout on that bet is much lower, when everyone is euphoric and has been convinced by the positive feedback loop that they will win.  And there are times when the payout on that bet is much higher, when everyone else is extremely fearful and is convinced it will not pay off. 
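
To make the bet framing concrete: with probability p of multiplying your stake by (1+b) and probability (1-p) of losing it all, the expected value and the Kelly-optimal bankroll fraction are simple to compute.  The numbers below are assumptions for illustration, not my estimate of Bitcoin's odds:

```python
# Expected value of the stylized all-or-nothing bet, plus the Kelly fraction
# f* = (p*b - (1-p)) / b, the bankroll share maximizing long-run growth.

def expected_value(p: float, b: float, stake: float = 1.0) -> float:
    return p * (1 + b) * stake   # the losing branch contributes zero

def kelly_fraction(p: float, b: float) -> float:
    return (p * b - (1 - p)) / b

p, b = 0.5, 4.0   # assumed: 50% chance of a 5x return, 50% chance of zero
print(expected_value(p, b))   # 2.5 -- positive expectation per unit staked
print(kelly_fraction(p, b))   # 0.375 -- even so, bet well under half
```

Note that even under these generous assumed odds, Kelly sizing says to risk only a modest fraction of your bankroll -- the quantitative version of the "risk money only" caveat above.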

 

This is a time to be good rationalists, and investigate a possible opportunity, comparing the present situation to historical examples, and making an informed decision.   Either Bitcoin has begun the process of dying, and this decline will continue in stages until it hits zero (or some incredibly low value that is essentially the same for our purposes), or it will live.  Based on the new all time high being hit in number of transactions, and ways to spend Bitcoin, I think there is at least a reasonable chance it will live.  Enough of a chance that it is worth taking some money that you can 100% afford to lose, and making a bet.  A rational gamble that there is a decent probability that it will survive, at a time when a large number of others are betting that it will fail.

 

And then once you do that, try your hardest to mentally write it off as a complete loss, like you had blown the money on a vacation or a consumer good, and now it is gone, and then wait a long time.

 

 

Roles are Martial Arts for Agency

140 Eneasz 08 August 2014 03:53AM

A long time ago I thought that Martial Arts simply taught you how to fight – the right way to throw a punch, the best technique for blocking and countering an attack, etc. I thought training consisted of recognizing these attacks and choosing the correct responses more quickly, as well as simply faster/stronger physical execution of same. It was later that I learned that the entire purpose of martial arts is to train your body to react with minimal conscious deliberation, to remove “you” from the equation as much as possible.

The reason is of course that conscious thought is too slow. If you have to think about what you’re doing, you’ve already lost. It’s been said that if you had to think about walking to do it, you’d never make it across the room. Fighting is no different. (It isn’t just fighting either – anything that requires quick reaction suffers when exposed to conscious thought. I used to love Rock Band. One day when playing a particularly difficult guitar solo on expert I nailed 100%… except “I” didn’t do it at all. My eyes saw the notes, my hands executed them, and nowhere was I involved in the process. It was both exhilarating and creepy, and I basically dropped the game soon after.)

You’ve seen how long it takes a human to learn to walk effortlessly. That's a situation with a single constant force, an unmoving surface, no agents working against you, and minimal emotional agitation. No wonder it takes hundreds of hours, repeating the same basic movements over and over again, to attain even a basic level of martial mastery. To make your body react correctly without any thinking involved. When Neo says “I Know Kung Fu” he isn’t surprised that he now has knowledge he didn’t have before. He’s amazed that his body now reacts in the optimal manner when attacked without his involvement.

All of this is simply focusing on pure reaction time – it doesn’t even take into account the emotional terror of another human seeking to do violence to you. It doesn’t capture the indecision of how to respond, the paralysis of having to choose between outcomes which are all awful and you don’t know which will be worse, and the surge of hormones. The training of your body to respond without your involvement bypasses all of those obstacles as well.

This is the true strength of Martial Arts – eliminating your slow, conscious deliberation and acting while there is still time to do so.

Roles are the Martial Arts of Agency.

When one is well-trained in a certain Role, one defaults to certain prescribed actions immediately and confidently. I’ve acted as a guy standing around watching people faint in an overcrowded room, and I’ve acted as the guy telling people to clear the area. The difference was in one I had the role of Corporate Pleb, and the other I had the role of Guy Responsible For This Shit. You know the difference between the guy at the bar who breaks up a fight, and the guy who stands back and watches it happen? The former thinks of himself as the guy who stops fights. They could even be the same guy, on different nights. The role itself creates the actions, and it creates them as an immediate reflex. By the time corporate-me is done thinking “Huh, what’s this? Oh, this looks bad. Someone fainted? Wow, never seen that before. Damn, hope they’re OK. I should call 911.” enforcer-me has already yelled for the room to clear and whipped out a phone.

Roles are the difference between Hufflepuffs gawking when Neville tumbles off his broom (Protected), and Harry screaming “Wingardium Leviosa” (Protector). Draco insulted them afterwards, but it wasn’t a fair insult – they never had the slightest chance to react in time, given the role they were in. Roles are the difference between Minerva ordering Hagrid to stay with the children while she forms troll-hunting parties (Protector), and Harry standing around doing nothing while time slowly ticks away (Protected). Eventually he switched roles. But it took Agency to do so. It took time.

Agency is awesome. Half this site is devoted to becoming better at Agency. But Agency is slow. Roles allow real-time action under stress.

Agency has a place of course. Agency is what causes us to decide that Martial Arts training is important, that has us choose a Martial Art, and then continue to train month after month. Agency is what lets us decide which Roles we want to play, and practice the psychology and execution of those roles. But when the time for action is at hand, Agency is too slow. Ensure that you have trained enough for the next challenge, because it is the training that will see you through it, not your agenty conscious thinking.

 

As an aside, most major failures I’ve seen recently are when everyone assumed that someone else had the role of Guy In Charge If Shit Goes Down. I suggest that, in any gathering of rationalists, they begin the meeting by choosing one person to be Dictator In Extremis should something break. Doesn’t have to be the same person as whoever is leading. Would be best if it was someone comfortable in the role and/or with experience in it. But really there just needs to be one. Anyone.

cross-posted from my blog

17 Rules to Make a Definition that Avoids the 37 Ways of Words Being Wrong

15 mathnerd314 22 February 2014 05:16AM

Eliezer's writing style of A->B, then A, then B, though generally clear, results in a large amount of redundancy.

In this post, I have attempted to halve the number of rules you need to remember. The bracketed numbers refer to the rules from the original post.

So, without further ado, a good definition for a word:

  1. can be shown to be wrong [37] and is not the final [13] authority [18, 19]
  2. has strong justifications [33] for the word's existence [32] and its particular definition [20], which leave no room for an argument [17, 22]
  3. agrees with conventional usage [4]
  4. explains what context the word depends on [36]
  5. limits its scope to avoid overlap with other meanings [25]
  6. does not assume that definitions are the best way of giving words semantics [12]
  7. directs a complex mental paintbrush [35] to paint detailed pictures of the thing you're trying to think about [23]
  8. is a brain inference aid [13] that refers to and instructs one on how to find a specific/unique [24] similarity cluster [21] that is apparent from empirical experience [28, 29, 30], the cluster's size being inversely proportional to the word's length [31]
  9. is not a binary category [9, 11] and cannot be used for deductive inference [27]
  10. requires observing only [14] a few [3] real-world [1] properties that can be easily [5] verified [2] and are less abstract [6] than the word being defined (in particular, the definition cannot be circular [16])
  11. is not just a list of random properties [10, 21]
  12. contains no negated properties [10, 33]
  13. specifies exhaustively all of the correct connotations of the word [25, 26]
  14. makes the properties of a random object satisfying the definition be nearly independent [34]
  15. has examples [6] which satisfy the definition, including the original example(s) that motivated the definition being given [15] and typical/conventional examples [7]
  16. tells you which examples are more typical or less typical [9]
  17. captures enough characteristics of the examples to identify non-members [8]

And there you go: 17 rules. Follow them all and you can't use words wrongly.

The Bottom Line

49 Eliezer_Yudkowsky 28 September 2007 05:47PM

There are two sealed boxes up for auction, box A and box B.  One and only one of these boxes contains a valuable diamond.  There are all manner of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable.  There is a blue stamp on one box, for example, and I know that boxes which contain diamonds are more likely than empty boxes to show a blue stamp.  Or one box has a shiny surface, and I have a suspicion—I am not sure—that no diamond-containing box is ever shiny.

Now suppose there is a clever arguer, holding a sheet of paper, and he says to the owners of box A and box B:  "Bid for my services, and whoever wins my services, I shall argue that their box contains the diamond, so that the box will receive a higher price."  So the box-owners bid, and box B's owner bids higher, winning the services of the clever arguer.

The clever arguer begins to organize his thoughts.  First, he writes, "And therefore, box B contains the diamond!" at the bottom of his sheet of paper.  Then, at the top of the paper, he writes, "Box B shows a blue stamp," and beneath it, "Box A is shiny", and then, "Box B is lighter than box A", and so on through many signs and portents; yet the clever arguer neglects all those signs which might argue in favor of box A.  And then the clever arguer comes to me and recites from his sheet of paper:  "Box B shows a blue stamp, and box A is shiny," and so on, until he reaches:  "And therefore, box B contains the diamond."

continue reading »

Three ways CFAR has changed my view of rationality

102 Julia_Galef 10 September 2013 06:24PM

The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases.

But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.)

 

1. We think less in terms of epistemic versus instrumental rationality.

Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful.

Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.)

In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance: Noticing your emotional reactions, and being able to shift them if it would be useful. Doing thought experiments. Noticing and overcoming learned helplessness. Visualizing in concrete detail. Preventing yourself from flinching away from a thought. Rewarding yourself for mental habits you want to reinforce. 

These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries like, "How can we be more metacognitive?"

 

2. We think more in terms of a modular mind.

The human mind isn't one coordinated, unified agent, but rather a collection of different processes that often aren't working in sync, or even aware of what each other is up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it's so much simpler and more natural to talk about "what I believe" or "what I want," even if technically there is no single "I" doing the believing or wanting. And for many purposes that kind of approximation is fine. 

But a rationality-for-humans usually can't rely on that shorthand. Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.

Many of the techniques we're developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something ("We're not going to die if we go talk to that stranger"). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it's uneasy about the conclusion at which System 2 has arrived.

This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it's true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our "Delegating to yourself" class.

 

3. We're more focused on emotions.

There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.

It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that "rationalist = emotionless robot". But now when I encounter that misconception, it just makes me want to smile, because I'm thinking to myself: "If you had any idea how much time we spend at CFAR talking about our feelings…"

Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it's instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into "playful" mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of "seeking" (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.  

And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you're tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.

We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals ("What if I knew that my novel would never sell well?") and observing your resultant emotions, to get a clearer picture of your utility function. 

And emotions-as-data can be a valuable check on your System 2's conclusions. One of our standard classes is "Goal Factoring," which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits some other way.

Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that's a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.

Conclusion

I'll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from "traditional" rationality in that we're concerned with determining which general methods systematically perform well, rather than defending some set of methods as "rational" on a priori criteria alone. So CFAR's picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts. 

Biases of Intuitive and Logical Thinkers

27 pwno 13 August 2013 03:50AM

Any intuition-dominant thinker who's struggled with math problems, or logic-dominant thinker who's struggled with small talk, knows how difficult and hopeless the experience feels. For a long time I was an intuition thinker; then I developed a logical thinking style, and soon it ended up dominating -- granting me the luxury of experiencing both kinds of struggles. I eventually learned to apply the thinking style better optimized for the problem I was facing. Looking back, I realized why I kept sticking to one extreme.

I hypothesize that one-sided thinkers develop biases and tendencies that prevent them from improving their weaker mode of thinking. These biases cause a positive feedback loop that further skews thinking styles in the same direction.

The reasons why one style might be overdeveloped and the other underdeveloped vary greatly. Genes have a strong influence, but environment also plays a large part. A teacher may have inspired you to love learning science at a young age, causing you to develop a thinking style better for learning science. Or maybe you grew up very physically attractive and found socializing with your peers a lot more rewarding than studying after school, causing you to foster a thinking style better for navigating social situations. Environment can be changed to help develop certain thinking styles, but that should be supplementary to exposing and understanding the biases you already have. Entering an environment that penalizes your thinking style can be uncomfortable, stressful and frustrating if you are unprepared. (Such a painful experience is part of why these biases cause a positive feedback loop: it makes us avoid environments that require the opposite thinking style.)

Despite genetic predisposition and environmental circumstances, there's room for improvement and exposing these biases and learning to account for them is a great first step.

Below is a list of a few biases that worsen our ability to solve a certain class of problems and keep us from improving our underdeveloped thinking style.


Intuition-dominant Biases


Overlooking crucial details

Details matter in order to understand technical concepts. Overlooking a word or sentence structure can cause complete misunderstanding -- a common blunder for intuition thinkers.

Intuition is really good at making fairly accurate predictions without complete information, enabling us to navigate the world without having a deep understanding of it. As a result, intuition trains us to experience the feeling we understand something without examining every detail. In most situations, paying close attention to detail is unnecessary and sometimes dangerous. When learning a technical concept, every detail matters and the premature feeling of understanding stops us from examining them.

This bias is one that's more likely to go away once you realize it's there. You often don't know what details you're missing after you've missed them, so merely remembering that you tend to miss important details should prompt you to take closer examinations in the future.

Expecting solutions to sound a certain way

The Internship has a great example of this bias (and a few others) in action. The movie is about two middle-aged unemployed salesmen (intuition thinkers) trying to land an internship with Google. Part of Google's selection process has the two men participate in several technical challenges. One challenge required the men and their team to find a software bug. In a flash of insight, Vince Vaughn's character, Billy, shouts "Maybe the answer is in the question! Maybe it has something to do with the word bug. A fly!" After enthusiastically making several more word associations, he turns to his team and insists they take him seriously.

Why is it believable to the audience that Billy can be so confident about his answer?

Billy's intuition made an association between the challenge question and riddle-like questions he's heard in the past. When Billy used his intuition to find a solution, his confidence in a riddle-like answer grew. Intuition recklessly uses irrelevant associations as reasons for narrowing down the space of possible solutions to technical problems. When associations pop into your mind, it's a good idea to legitimize those associations with supporting reasons.

Not recognizing precise language

Intuition thinkers are multi-channel learners -- all senses, thoughts and emotions are used to construct a complex database of clustered knowledge to predict and understand the world. With robust information-extracting ability, correct grammar/word-usage is, more often than not, unnecessary for meaningful communication.

Communicating technical concepts in a meaningful way requires precise language. Connotation and subtext are stripped away so words and phrases can purely represent meaningful concepts inside a logical framework. Intuition thinkers communicate with imprecise language, gathering meaning from context to compensate. This makes it hard for them to recognize when to turn off their powerful information extractors.

This bias explains part of why so many intuition thinkers dread math "word problems". Introducing words and phrases rich with meaning and connotation sends their intuition running wild. It's hard for them to find correspondences between words in the problem and variables in the theorems and formulas they've learned.

The noise intuition brings makes it hard to think clearly. It's hard for intuition thinkers to tell whether their automatic associations should be taken seriously. Without a reliable way to discern, wrong interpretations of words go undetected. For example, without any physics background, an intuition thinker may read the statement "Matter can have both wave and particle properties at once" and believe they completely understand it. Unrelated associations of what matter, wave and particle mean, blindly take precedence over technical definitions.

The slightest uncertainty about what a sentence means should raise a red flag. Going back and finding correspondence between each word and how it fits into a technical framework will eliminate any uncertainty.

Believing their level of understanding is deeper than what it is

Intuition works on an unconscious level, making intuition thinkers unaware of how they know what they know. Not surprisingly, their best tool to learn what it means to understand is intuition. The concept "understanding" is a collection of associations from experience. You may have learned that part of understanding something means being able to answer questions on a test with memorized factoids, or knowing what to say to convince people you understand, or just knowing more facts than your friends. These are not good methods for gaining a deep understanding of technical concepts.

When intuition thinkers optimize for understanding, they're really optimizing for a fuzzy idea of what they think understanding means. This often leaves them believing they understand a concept when all they've done is memorize some disconnected facts. Not knowing what it feels like to have deeper understanding, they become conditioned to always expect some amount of surprise. Even at their maximum felt level of understanding, they have less confidence than logical thinkers do at theirs. This lower confidence disincentivizes intuition thinkers from investing in learning technical concepts, further keeping their logical thinking style underdeveloped.

One way I overcame this tendency was to constantly ask myself "why" questions, like a curious child bothering their parents. The technique helped me uncover what used to be unknown unknowns that made me feel overconfident in my understanding.


Logic-dominant Biases


Ignoring information they cannot immediately fit into a framework

Logical thinkers have and use intuition -- problem is they don't feed it enough. They tend to ignore valuable intuition-building information if it doesn't immediately fit into a predictive model they deeply understand. While intuition thinkers don't filter out enough noise, logical thinkers filter too much.

For example, if a logical thinker doesn't have a good framework for understanding human behavior, they're more likely to ignore visual input like body language and fashion, or auditory input like tone of voice and intonation. Human behavior is complicated; no framework to date can make perfectly accurate predictions about it. Intuition can build powerful models despite working with many confounding variables.

Bayesian probability enables logical thinkers to build predictive models from noisy data without having to use intuition. But even then, the first step of making a Bayesian update is data collection.
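
As a concrete instance of the kind of explicit update meant here (my example, not the author's):

```python
# One Bayesian update in odds form: posterior odds = prior odds * likelihood ratio.

def bayes_update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    odds = prior / (1 - prior) * (p_e_if_true / p_e_if_false)
    return odds / (1 + odds)

# Assumed numbers: prior 0.3 that a listener is engaged; they lean in, which
# we take to be 0.8 likely if engaged and 0.2 likely if not.
print(bayes_update(0.3, 0.8, 0.2))  # ~0.63 posterior probability of engagement
```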

Combating this tendency requires you to pay attention to input you normally ignore. Supplement your broader attentional scope with a researched framework as a guide. Say you want to learn how storytelling works. Start by grabbing resources that teach storytelling and learn the basics. Out in the real world, pay close attention to sights, sounds, and feelings when someone starts telling a story, and try matching that sensory input to the storytelling elements you've learned about. Once the basics are picked up subconsciously by habit, your conscious attention will be freed up to make new and more subtle observations.

Ignoring their emotions

Emotional input is difficult to factor, especially because you're emotional at the time. Logical thinkers are notorious for ignoring this kind of messy data, consequently starving their intuition of emotional data. Being able to "go with your gut feelings" is a major function of intuition that logical thinkers tend to miss out on.

Your gut can predict if you'll get along long-term with a new SO, or what kind of outfit would give you more confidence in your workplace, or if learning tennis in your free time will make you happier, or whether you prefer eating a cheeseburger over tacos for lunch. Logical thinkers don't have enough data collected about their emotions to know what triggers them. They tend to get bogged down and misled by objective yet trivial details they do manage to factor in. A weak understanding of their own emotions also leads to a weaker understanding of others' emotions. You can become a better empathizer by better understanding yourself.

You could start from scratch and build your own framework, but self-assessment biases will impede productivity. Learning an existing framework is a more realistic solution. You can find resources with some light googling and I'm sure CFAR teaches some good ones too. You can improve your gut feelings too. One way is making sure you're always consciously aware of the circumstances you're in when experiencing an emotion.

Making rules too strict

Logical thinkers build frameworks in order to understand things. When adding a new rule to a framework, there's motivation to make the rule strict. The stricter the rule, the more predictive power, the better the framework. But when the domain you're trying to understand contains multivariable chaotic phenomena, strict rules are likely to break. The result is something like the current state of macroeconomics: a bunch of logical thinkers preoccupied with elegant models and theories that hold only under assumptions that make them useless in practice.

Following rules that are too strict can have bad consequences. Imagine John the salesperson is learning how to make better first impressions and has built a rough framework so far. John has a rule that smiling always helps make people feel welcomed the first time they meet him. One day he makes a business trip to Russia to meet a prospective client. The moment he meets his Russian client, he flashes a big smile and continues to smile despite negative reactions. After a few hours of talking, his client reveals she felt he wasn't trustworthy at first and almost called off the meeting. It turns out that in Russia, smiling at strangers is a sign of insincerity. John's strict rule didn't account for cultural differences, preventing him from updating on his client's reaction and putting him in a risky situation.

The desire to hold onto strict rules can make logical thinkers susceptible to confirmation bias too. If John made an exception to his smiling rule, he'd feel less confident about his knowledge of making first impressions, subsequently making him feel bad. He may also have to amend some other rule that relates to the smiling rule, which would further hurt his framework and his feelings.

When feeling the urge to add a new rule, take note of the circumstances in which the evidence for the rule was found. Add exceptions that limit the rule's predictive power to similar circumstances. Another option is to entertain multiple conflicting rules simultaneously, shifting weight from one to the other after gathering more evidence.

continue reading »

Second major sequence now available in audio format

22 Rick_from_Castify 31 January 2013 05:25AM

The sequence "A Human's Guide to Words" is now available as a professionally read podcast.

We have started working on the large "Reductionism" sequence which includes both the "Joy in the Merely Real" and the "Zombies" sub-sequences.  They should be available in a couple of weeks.

Godel's Completeness and Incompleteness Theorems

34 Eliezer_Yudkowsky 25 December 2012 01:16AM

Followup to: Standard and Nonstandard Numbers

So... last time you claimed that using first-order axioms to rule out the existence of nonstandard numbers - other chains of numbers besides the 'standard' numbers starting at 0 - was forever and truly impossible, even unto a superintelligence, no matter how clever the first-order logic used, even if you came up with an entirely different way of axiomatizing the numbers.

"Right."

How could you, in your finiteness, possibly know that?

"Have you heard of Godel's Incompleteness Theorem?"

Of course! Godel's Theorem says that for every consistent mathematical system, there are statements which are true within that system, which can't be proven within the system itself. Godel came up with a way to encode theorems and proofs as numbers, and wrote a purely numerical formula to detect whether a proof obeyed proper logical syntax. The basic trick was to use prime factorization to encode lists; for example, the ordered list <3, 7, 1, 4> could be uniquely encoded as:

2^3 * 3^7 * 5^1 * 7^4

And since prime factorizations are unique, and prime powers don't mix, you could inspect this single number, 210,039,480, and get the unique ordered list <3, 7, 1, 4> back out. From there, going to an encoding for logical formulas was easy; for example, you could use the 2 prefix for NOT and the 3 prefix for AND and get, for any formulas Φ and Ψ encoded by the numbers #Φ and #Ψ:

¬Φ = 2^2 * 3^#Φ

Φ ∧ Ψ = 2^3 * 3^#Φ * 5^#Ψ
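
A minimal sketch of the prime-power list encoding (my own illustration of the scheme described above, not Godel's notation):

```python
# Encode a list as a product of prime powers; recover it by factoring.

def primes(n):
    """First n primes, by trial division (fine for a toy example)."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(lst):
    code = 1
    for p, x in zip(primes(len(lst)), lst):
        code *= p ** x
    return code

def decode(code, length):
    # The list length must be supplied, since trailing zero exponents
    # leave no trace in the factorization.
    out = []
    for p in primes(length):
        exp = 0
        while code % p == 0:
            code //= p
            exp += 1
        out.append(exp)
    return out

print(encode([3, 7, 1, 4]))    # 210039480
print(decode(210039480, 4))    # [3, 7, 1, 4]
```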

It was then possible, by dint of crazy amounts of work, for Godel to come up with a gigantic formula of Peano Arithmetic □(p, c) meaning, 'P encodes a valid logical proof using first-order Peano axioms of C', from which directly followed the formula □c, meaning, 'There exists a number P such that P encodes a proof of C' or just 'C is provable in Peano arithmetic.'

Godel then put in some further clever work to invent statements which referred to themselves, by having them contain sub-recipes that would reproduce the entire statement when manipulated by another formula.

And then Godel's Statement encodes the statement, 'There does not exist any number P such that P encodes a proof of (this statement) in Peano arithmetic' or in simpler terms 'I am not provable in Peano arithmetic'. If we assume first-order arithmetic is consistent and sound, then no proof of this statement within first-order arithmetic exists, which means the statement is true but can't be proven within the system. That's Godel's Theorem.

"Er... no."

No?

"No. I've heard rumors that Godel's Incompleteness Theorem is horribly misunderstood in your Everett branch. Have you heard of Godel's Completeness Theorem?"

Is that a thing?

"Yes! Godel's Completeness Theorem says that, for any collection of first-order statements, every semantic implication of those statements is syntactically provable within first-order logic. If something is a genuine implication of a collection of first-order statements - if it actually does follow, in the models pinned down by those statements - then you can prove it, within first-order logic, using only the syntactical rules of proof, from those axioms."

continue reading »

Standard and Nonstandard Numbers

31 Eliezer_Yudkowsky 20 December 2012 03:23AM

Followup to: Logical Pinpointing

"Oh! Hello. Back again?"

Yes, I've got another question. Earlier you said that you had to use second-order logic to define the numbers. But I'm pretty sure I've heard about something called 'first-order Peano arithmetic' which is also supposed to define the natural numbers. Going by the name, I doubt it has any 'second-order' axioms. Honestly, I'm not sure I understand this second-order business at all.

"Well, let's start by examining the following model:"

"This model has three properties that we would expect to be true of the standard numbers - 'Every number has a successor', 'If two numbers have the same successor they are the same number', and '0 is the only number which is not the successor of any number'.  All three of these statements are true in this model, so in that sense it's quite numberlike -"

And yet this model clearly is not the numbers we are looking for, because it's got all these mysterious extra numbers like C and -2*.  That C thing even loops around, which I certainly wouldn't expect any number to do.  And then there's that infinite-in-both-directions chain which isn't connected to anything else.

"Right, so, the difference between first-order logic and second-order logic is this:  In first-order logic, we can get rid of the ABC - make a statement which rules out any model that has a loop of numbers like that.  But we can't get rid of the infinite chain underneath it.  In second-order logic we can get rid of the extra chain."

continue reading »

By Which It May Be Judged

35 Eliezer_Yudkowsky 10 December 2012 04:26AM

Followup to: Mixed Reference: The Great Reductionist Project

Humans need fantasy to be human.

"Tooth fairies? Hogfathers? Little—"

Yes. As practice. You have to start out learning to believe the little lies.

"So we can believe the big ones?"

Yes. Justice. Mercy. Duty. That sort of thing.

"They're not the same at all!"

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

- Susan and Death, in Hogfather by Terry Pratchett

Suppose three people find a pie - that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone's ideas about what is "fair".

continue reading »
