
Regret, Hindsight Bias and First-Person Experience

Stabilizer 20 April 2014 02:10AM

Here is an experience that I often have: I'm walking down the street, perfectly content, and all of a sudden some memory pops into my stream of consciousness. The memory triggers some past circumstance where I did not act completely admirably. Immediately following this, there is often regret. Regret of the form: "I should've studied harder for that class", "I should've researched my options better before choosing my college", "I should've asked that girl out", "I shouldn't have been such an asshole to her" and so on. So this is regret of the kind: "Well, of course, I should've done X. But I did Y. And now here I am."

This is classic hindsight bias. Looking back into the past, it seems clear what my course of action should've been. But it wasn't at all that clear in the past.

So, I've come up with a technique to attenuate this kind of hindsight-bias driven regret.

First of all, tune in to your current experience. What is it like to be here, right here and right now, doing the things you're doing? Start zooming out: think about the future and what you're going to be doing tomorrow, next week, next month, next year, 5 years later. Is it at all clear what choices you should make? Sure, you have some hints: take care of your health, save money, maybe work harder at your job. But nothing very specific. Tune in to the difficulties of carrying out even definitely good things. You told yourself that you'd definitely go running today, but you didn't. In first-person mode, it is really hard to know what to do, to know how to do it, and to actually do it.

Now, think back to the person you were in the past, when you made the choices that you're regretting. Try to imagine the particular place and time when you made that choice. Try to feel into what it was like. Try to color in the details: the ambient lighting of the room, the clothes you and others were wearing, the sounds and the smells. Try to feel into what was going on in your mind. Usually it turns out that you were confused and pulled in many different directions and, all said and done, you had to make a choice and you made one.

Now realize that back then you were facing exactly the kinds of uncertainties and confusions you are feeling now. In the first-person view there are no certainties; there are only half-baked ideas, hunches, gut feelings, mish-mash theories floating in your head, fragments of things you read and heard in different places.

Now think back to the regrettable decision you made. Is it fair to hold that decision against yourself with such moral force?

Meetup : Washington DC: Singing

0 rocurley 19 April 2014 04:43PM

Discussion article for the meetup : Washington DC: Singing

WHEN: 20 April 2014 03:00:00PM (-0400)

WHERE: National Portrait Gallery, Washington, DC 20001, USA

We'll be meeting up to go singing!

Because this is probably not a good idea in the portrait gallery, we'll meet there, and then head out somewhere (Archives probably) after we've rendezvoused.


Mathematics and saving lives

2 NancyLebovitz 19 April 2014 01:32PM

A high school student with an interest in math asks whether he's obligated on utilitarian grounds to become a doctor.

The commenters pretty much say that he isn't, but now I'm wondering: if you go into reasonably pure math, what areas or specific problems would be most likely to contribute the most towards saving lives?

[LINK] U.S. Views of Technology and the Future

1 Gunnar_Zarncke 18 April 2014 09:22PM

I just found this on slashdot:

"U.S. Views of Technology and the Future - Science in the next 50 years" by the Pew Research Center

This report emerges from the Pew Research Center’s efforts to understand public attitudes about a variety of scientific and technological changes being discussed today. The time horizons of these technological advances span from today’s realities—for instance, the growing prevalence of drones—to more speculative matters such as the possibility of human control of the weather. 

This is especially interesting in comparison to the recent posts on forecasting, which focused on expert forecasts.

What I found most notable was the public opinion on their use of future technology:

% who would do the following if possible...

50% ride in a driverless car

26% use brain implant to improve memory or mental capacity

20% eat meat grown in a lab

Don't they know Eutopia is Scary? I'd guess that if these technologies really become available and are reliable, only the elderly will be unable to overcome their preconceptions. And everybody will eat artificial meat if it is cheaper, healthier, and tastes the same (and testers confirm this).

 

[link] Guide on How to Learn Programming

3 peter_hurford 18 April 2014 05:08PM

I've recently seen a lot of interest in people who are looking to learn programming.  So I put together a quick guide with lots of help from other people: http://everydayutilitarian.com/essays/learn-code

Let me know (via comments here or email - peter@peterhurford.com) if you try this guide, so I can get feedback on how it goes for you.

Also, feel free to reach out to me with comments on how to improve the guide – I'm still relatively new to programming myself and have not yet implemented all these steps personally. I'd cross-post it here, but I want to keep the document up-to-date and it would be much easier to do that in just one place.

Weekly LW Meetups

0 FrankAdamek 18 April 2014 03:53PM

This meetup summary was posted to LW main on April 11th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Bostrom versus Transcendence

8 Stuart_Armstrong 18 April 2014 08:31AM

How long will Alcor be around?

23 Froolow 17 April 2014 03:28PM

The Drake equation for cryonics is pretty simple: work out all the things that need to happen for cryonics to succeed one day, estimate the probability of each thing occurring independently, then multiply all those numbers together. Here’s one example of the breakdown from Robin Hanson. According to the 2013 LW survey, LW believes the average probability that cryonics will be successful for someone frozen today is 22.8% assuming no major global catastrophe. That seems startlingly high to me – I put the probability at at least two orders of magnitude lower. I decided to unpick some of the assumptions behind that estimate, particularly focussing on assumptions which I could model.

Every breakdown includes a component for 'the probability that the company you freeze with goes bankrupt' for obvious reasons. In fact, the probabilities of bankruptcy and of global catastrophe are particularly interesting because they are the only terms which are 'time dependent' in the usual Drake equation. What I mean by this is that if you know your body will be frozen intact forever, then it doesn't matter to you when effective unfreezing technology is developed (except to the extent you might have a preference to live in a particular time period). By contrast, if you know safe unfreezing techniques will definitely be developed one day, it matters very much to you that it occurs sooner rather than later, because if you unfreeze before the development of these techniques then they are totally wasted on you.

The probability of bankruptcy is also very interesting because – I naively assumed last week – we must have excellent historical data on the probability of bankruptcy given the size, age and market penetration of a given company. From this – I foolishly reasoned – we must be able to calculate the actual probability of the ‘bankruptcy’ component in the Cryo-Drake equation and slightly update our beliefs.

I began by searching for the expected lifespan of an average company and got two estimates which I thought would be a useful upper- and lower-bound. Startup companies have an average lifespan of four years. S&P 500 companies have an average lifespan of fifteen years. My logic here was that startups must be the most volatile kind of company, S&P 500 must be the least volatile and cryonics firms must be somewhere in the middle. Since the two sources only report the average lifespan, I modelled the average as a half-life. The results really surprised me; take a look at the following graph:

(http://imgur.com/CPoBN9u.jpg)
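(For readers who want to play with the numbers, here is a minimal Python sketch of the half-life model above. The 4-year and 15-year figures come from the sources cited; the function name and the sample horizons are just illustrative, and the graph linked above may use slightly different assumptions.)

```python
# Minimal sketch of the half-life model: the average lifespan from each
# source is treated as a half-life, so survival decays exponentially.
def survival_probability(years, half_life):
    """P(the company still exists after `years` years)."""
    return 0.5 ** (years / half_life)

for label, half_life in [("startup-like (4y)", 4), ("S&P-500-like (15y)", 15)]:
    row = {t: round(survival_probability(t, half_life), 3) for t in (10, 20, 40, 80)}
    print(label, row)
```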

Even assuming cryonics firms are as well managed as S&P 500 companies, a 22.8% chance of success depends on every single other factor in the Drake equation being absolutely certain AND unfreezing technology being developed in 37 years.

But I noticed I was confused; Alcor has been around forty-ish years. Assuming it started life as a small company, the chance of that happening was one in ten thousand. That both Alcor AND The Cryonics Institute have been successfully freezing people for forty years seems literally beyond belief. I formed some possible hypotheses to explain this:

  1. Many cryo firms have been set up, and I only know about the successes (a kind of anthropic argument)
  2. Cryonics firms are unusually well-managed
  3. The data from one or both of my sources was wrong
  4. Modelling an average life expectancy as a half-life was wrong
  5. Some extremely unlikely event that is still more likely than the one-in-a-billion chance my model predicts – for example, the BBC article is an April Fool’s joke that I don’t understand.

I’m pretty sure I can rule out 1; if many cryo firms were set up I’d expect to see four lasting twenty years and eight lasting ten years, but in fact we see one lasting about five years and two lasting indefinitely. We can also probably rule out 2; if cryo firms were demonstrably better managed than S&P 500 companies, the CEO of Alcor could go and run Microsoft and use the pay differential to support cryo research (if he was feeling altruistic). Since I can’t do anything about 5, I decided to focus my analysis on 3 and 4. In fact, I think 3 and 4 are both correct explanations; my source for the S&P 500 companies counted dropping out of the S&P 500 as a company ‘death’, when in fact you might drop out because you got taken over, because your industry became less important (but kept existing) or because other companies overtook you – your company can’t do anything about Facebook or Apple displacing it from the S&P 500, but Facebook and Apple don’t make it any more likely to fail. Additionally, modelling as a half-life must have been flawed; a company that has survived one hundred years and a company that has survived one year are not equally likely to collapse!

Consequently I searched Google Scholar for a proper academic source. I found one, but I should introduce the following caveats:

  1. It is UK data, so may not be comparable to the US (my understanding is that the US is a lot more forgiving of a business going bankrupt, so the UK businesses may liquidate slightly less frequently).
  2. It uses data from 1980. As well as being old data, there are specific reasons to believe that this time period overestimates the true survival of companies. For example, the mid-1980’s was an economic boom in the UK and 1980-1985 misses both major UK financial crashes of modern times (Black Wednesday and the Sub-Prime Crash). If the BBC is to be believed, the trend has been for companies to go bankrupt more and more frequently since the 1920’s.

I found it really shocking that this question was not better studied. Anyway, the key table that informed my model was this one, which unfortunately seems to break the website when I try to embed it. The source is Dunne, Paul, and Alan Hughes. "Age, size, growth and survival: UK companies in the 1980s." The Journal of Industrial Economics (1994): 115-140.

You see on the left the size of the company in 1980 (£1 in 1980 is worth about £2.5 now). On the top is the size of the company in 1985, with additional columns for ‘taken over’, ‘bankrupt’ or ‘other’. Even though a takeover might signal the end of a particular product line within a company, I have only counted bankruptcies as representing a threat to a frozen body; it is unlikely Alcor will be bought out by anyone unless they have an interest in cryonics.

The model is a Discrete Time Markov Chain analysis in five-year increments. What this means is that I start my hypothetical cryonics company at <£1m and then allow it to either grow or go bankrupt at the rate indicated in the article. After the first period I look at the new size of the company and allow it to grow, shrink or go bankrupt in accordance with the new probabilities. The only slightly confusing decision was what to do with takeovers. In the end I decided to ignore takeovers completely, and redistribute the probability mass they represented to all other survival scenarios.
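To make the mechanics concrete, here is a minimal sketch of this kind of Markov chain analysis. The size classes mirror the Dunne and Hughes table, but since I couldn't embed the table, the transition probabilities below are illustrative placeholders, not the paper's actual figures.

```python
import numpy as np

# Size classes (in 1980 GBP) plus an absorbing 'bankrupt' state.
states = ["<1m", "1-4m", "4-16m", ">16m", "bankrupt"]

# P[i][j] = probability of moving from class i to class j over one 5-year step.
# PLACEHOLDER numbers: takeovers are ignored, with their probability mass
# redistributed across the survival states, as described above.
P = np.array([
    [0.55, 0.25, 0.05, 0.00, 0.15],  # <1m
    [0.15, 0.55, 0.20, 0.02, 0.08],  # 1-4m
    [0.02, 0.15, 0.60, 0.17, 0.06],  # 4-16m
    [0.00, 0.02, 0.15, 0.79, 0.04],  # >16m
    [0.00, 0.00, 0.00, 0.00, 1.00],  # bankruptcy is absorbing
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # start as a <1m company
for step in range(1, 9):                     # 8 steps of 5 years = 40 years
    state = state @ P
    print(f"year {5 * step:3d}: P(still alive) = {1 - state[-1]:.3f}")
```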

The results are astonishingly different:

(http://imgur.com/CkQirYD.jpg)

Now your body can remain preserved for 415 years and still have a 22.8% chance of revival (assuming all other probabilities are certain). Perhaps more usefully, if you estimate the year you expect revival to occur, you can read across the x axis to find the probability that your cryo company will still exist by then. For example, in the OvercomingBias link above, Hanson estimates that this will occur in 2090, meaning he should probably assign something like a 0.65 probability to his cryo company still being around.

Remember you don’t actually need to estimate the year YOUR revival will occur, but only the year in which the first successful revival proves that cryogenically frozen bodies are ‘alive’ in a meaningful sense and therefore receive protection under the law in case your company goes bankrupt. In fact, you could instead estimate the year Congress passes a ‘right to not-death’ law which would protect your body in the event of a bankruptcy even before routine unfreezing, or the year when brain-state scanning becomes advanced enough that it doesn’t matter what happens to your meatspace body because a copy of your brain exists on the internet.

My conclusion is that the survival of your cryonics firm is a lot more likely than the average person in the street thinks, but probably a lot less likely than you think if you are strongly into cryonics. This is probably not news to you; most of you will be aware of over-optimism bias, and have tried to correct for it. Hopefully these concrete numbers will be useful next time you consider the Cryo-Drake equation and the net present value of investing in cryonics.

Meetup : Urbana-Champaign: Planning and Re-planning

1 Manfred 17 April 2014 05:56AM

Discussion article for the meetup : Urbana-Champaign: Planning and Re-planning

WHEN: 20 April 2014 12:00:00PM (-0500)

WHERE: 412 W. Elm St, Urbana, IL

When things get complicated enough, you have to plan them in advance or they fail. You need blueprints and logistics before you can build a skyscraper. On a personal level, good plans improve our chances of success at anything we can make a plan for.

One trouble with plans is that once you've made them they're sticky. What kind of life to lead, what to study, when to marry - we inherit plans about these things from the past and we don't always rethink them when appropriate.


The usefulness of forecasts and the rationality of forecasters

0 VipulNaik 17 April 2014 03:49AM

Suppose we have a bunch of (forecasted value, actual value) pairs for a given quantity, with different measured actual values at different times. An example would be GDP growth rate measures in different years: for each year, we have a forecasted value and an actual value. How do we judge the usefulness of the forecasts at predicting the value? Here, we discuss a few related measures: accuracy, bias, and dependency (specifically, correlation).

Accuracy

The accuracy of a forecast refers to how far, on average, the forecast is from the actual value. Two typical ways of measuring the accuracy are:

  • Compute the mean absolute error: Take the arithmetic mean (average) of the absolute values of the errors for each forecast.
  • Compute the root mean square error: Take the square root of the arithmetic mean of the squares of the errors.

The size of the error, measured in either of these ways, is a rough estimate of how accurate the forecasts are in general (the larger the error, the less accurate the forecast). Note that an error of zero represents a perfectly accurate forecast.
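As a concrete illustration, here is how the two error measures might be computed; the forecast and actual series below are made up.

```python
import numpy as np

forecast = np.array([2.0, 3.1, 1.8, 2.5, 3.0])  # hypothetical forecasts (e.g. GDP growth, %)
actual   = np.array([1.7, 3.5, 1.2, 2.6, 2.4])  # hypothetical actual values

errors = forecast - actual
mae  = np.mean(np.abs(errors))        # mean absolute error
rmse = np.sqrt(np.mean(errors ** 2))  # root mean square error
print(mae, rmse)                      # both are zero only for perfectly accurate forecasts
```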

Note that this is a global measure of accuracy. But it may be the case that forecasts are more accurate when the actual values are at a particular level, and less accurate when they are at a different level. There are mathematical models to test for this.

Bias

When we ask whether the forecast is biased, we're interested in knowing whether the size of the error in the positive direction systematically exceeds the size of the error in the negative direction. One method for estimating this is to compute the mean signed difference (i.e., take the arithmetic mean of errors for individual forecasts without taking the absolute value). If this comes out as zero, then the forecasting is unbiased. If it comes out as positive, the forecasts are biased in the positive direction, whereas if it comes out as negative, the forecasts are biased in the negative direction.
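On the same toy data as the accuracy example above, the mean signed difference is just the average of the raw errors:

```python
import numpy as np

forecast = np.array([2.0, 3.1, 1.8, 2.5, 3.0])
actual   = np.array([1.7, 3.5, 1.2, 2.6, 2.4])

mean_signed_difference = np.mean(forecast - actual)
print(mean_signed_difference)  # > 0: biased upward; < 0: biased downward; ~0: no additive bias
```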

The above is a start, but it's not good enough. In particular, the error could come out nonzero simply because of random fluctuations rather than bias. We'd need to complicate the model somewhat in order to make probabilistic or quantitative assessments to get a sense of whether or how the forecasts are really biased.

Again, the above is a global measure of bias. But it may be the case that there are different biases for different values. There are mathematical models to test for this.

Are accuracy and bias related? Yes, in the obvious sense that the degree of inaccuracy gives an upper bound on the degree of bias. For instance, the mean absolute error gives an upper bound on the absolute value of the mean signed difference. So a perfectly accurate forecast is also unbiased. However, we can have fairly inaccurate forecasts that are unbiased. For instance, a forecast that always guesses the mean of the distribution of actual values will be inaccurate but have zero bias.

The above discusses additive bias. There may also be multiplicative bias. For instance, the forecasted value may be reliably half the actual value. In this case, doubling the forecasted value allows us to obtain the actual value. There could also be forms of bias that are not captured in either way.

Dependency and correlation

Ideally, what we want to know is not so much whether the forecasts themselves are accurate or biased, but whether we can use them to generate new forecasts that are good. So what we want to know is: once we correct for bias (of all sorts, not just additive or multiplicative), how accurate is the new forecast? Another way of framing this is: what exactly is the nature of dependency between the variable representing the forecasted value and the variable representing the actual value?

Testing for the nature of the dependency between variables is a hard problem, particularly if we don't have a prior hypothesis for the nature of the dependency. If we do have a hypothesis, and the relation is linear in unknown parameters, we can use the method of ordinary least squares regression (or another suitable regression) to find the best fit. And we can measure the goodness of that fit through various statistical indicators.

In the case of linear regression (i.e., trying to fit using a linear functional dependency between the variables), the square of the correlation between the variables is the R² of the regression, and offers a decent measure of how close the variables are to being linearly related. A correlation of 1 implies an R² of 1, and implies that the variables are perfectly correlated, or equivalently, that a linear function with positive slope is a perfect fit. A correlation of -1 also implies an R² of 1, and would mean that a linear function with negative slope is a perfect fit. A correlation of zero means that the variables are completely uncorrelated.

Note also that linear regression covers both additive and multiplicative bias (and combinations thereof) and is often good enough to capture the most basic dependencies.

If the value of R² for the linear regression is zero, that means the variables are uncorrelated. Although independent implies uncorrelated, uncorrelated does not imply independent, because there may be other nonlinear dependencies that miraculously give zero correlation. In fact, uncorrelated does not imply independent even if the variables are both normally distributed. As a practical matter, a correlation of zero is often taken as strong evidence that neither variable tells us much about the other. This is because even if the relationship isn't linear, the existence of some relationship makes a nonzero correlation more plausible than an exact zero correlation. For instance, if the variables are positively related (higher forecasted values predict higher actual values) we expect a positive correlation and a positive R². If the variables are negatively related (higher forecasted values predict lower actual values) we expect a negative correlation, but still a positive R².

For the trigonometrically inclined: The Pearson correlation coefficient, simply called the correlation here, measures the cosine of the angle between a vector based on the forecasted values and a vector based on the actual values. The vector based on the forecasted values is obtained by starting with the vector of the forecasted values and subtracting from each coordinate the mean forecasted value. Similarly, the vector based on the actual values is obtained by starting with the vector of the actual values and subtracting from each coordinate the mean actual value. The R² value is the square of the correlation, and measures the proportion of variance in one variable that is explained by the other (this is sometimes referred to as the coefficient of determination). 1 − R² represents the square of the sine of the angle between the vectors, and represents how alienated the vectors are from each other. A correlation of 1 means the vectors are collinear and point in the same direction, a positive correlation less than 1 means they form an acute angle, a zero correlation means they are at right angles, a negative correlation greater than -1 means they form an obtuse angle, and a correlation of -1 means the vectors are collinear and point in opposite directions.
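For readers who prefer code to trigonometry, here is a quick check on made-up data that the cosine of the angle between the two centred vectors really is the Pearson correlation (numpy's corrcoef computes the latter directly):

```python
import numpy as np

forecast = np.array([2.0, 3.1, 1.8, 2.5, 3.0])
actual   = np.array([1.7, 3.5, 1.2, 2.6, 2.4])

f = forecast - forecast.mean()  # centred forecast vector
a = actual - actual.mean()      # centred actual vector

cosine  = f @ a / (np.linalg.norm(f) * np.linalg.norm(a))
pearson = np.corrcoef(forecast, actual)[0, 1]
print(cosine, pearson)  # equal up to floating-point error
print(pearson ** 2)     # the R² of the linear regression
```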

Usefulness versus rationality

The simplest situation is where the forecasts are completely accurate. That's perfect. We don't need to worry about doing better.

In the case that the forecasts are not accurate, and if we have had the luxury of crunching the numbers and figuring out the nature of dependency between the forecasted and actual values, we'd want a situation where the actual value can be reliably predicted from the forecasted value, i.e., the actual value is a (known) function of the forecasted value. A simple case of this is where the actual value and forecasted value have a correlation of 1. This means that the actual value is a known linear function of the forecasted value. (UPDATE: This process of using a known linear function to correct for systematic additive and multiplicative bias is known as Theil's correction). So the forecasted value itself is not good, but it allows us to come up with a good forecast.

What would it mean for a forecast to be unimprovable? Essentially, it means that the best value we can forecast based on the forecasted value is the forecasted value. Wait, what? What we mean is that the forecasters aren't leaving any money on the table: if they could improve the forecast simply by correcting for a known bias, they have already done so. Note that a forecast being unimprovable does not say anything directly about the R² value. Rather, the unimprovability suggests that the best functional fit between the forecasted and the actual value would be the identity function (actual value = forecasted value). For the linear regression case, it suggests that the slope for the linear regression is 1 and the intercept is 0. Or at any rate, that they are close enough. Note that a forecast can be unimprovable and still be completely useless: a forecast that always guesses the mean of the distribution of actual values cannot be improved upon, but tells us nothing we didn't already know.

The following table captures the logic (note that the two rows just describe the extreme cases, rather than the logical space of all possibilities).

The forecast, once improved upon, is perfect:

  • The forecast cannot be improved upon: the forecasted value equals the actual value.
  • The forecast can be improved upon: the forecasted value predicts the actual value perfectly, but is not itself perfect. For instance, they could have a correlation of 1, in which case the prediction would be via a linear function.

The forecast, even after improvement, is useless at the margin (i.e., it does not give us information we didn't already have from knowledge of the existing distribution of actual values):

  • The forecast cannot be improved upon: the forecast just involves perfectly guessing the mean of the distribution of actual values (assuming that the distribution is known in advance; if it's not, then things become even more murky).
  • The forecast can be improved upon: the actual value is independent of the forecast, and it does not involve simply guessing the mean.

Note that if forecasters are rational, then we should be in the column "The forecast cannot be improved upon" and therefore between the extreme case that the forecast is already perfect and that the forecast just involves guessing the mean of the distribution (assuming that the distribution is known in advance).

So there are two real and somewhat distinct questions about the value of forecasts:

  • (The question whose extreme answers give the rows): How useful are the forecasts, in the sense that, once we extract all the information from them by correcting for bias and applying the appropriate functional form, how accurate are the new forecasts?
  • (The question whose answers give the columns): How rational are the forecasters, in the sense of how close are their forecasts to the most useful forecasts that can be extracted from those forecasts? (Note that even if the forecasts cannot be improved upon, that doesn't mean the forecasts are rational in the broader sense of making the best guess in terms of all available information, but it is in any case consistent with rationality in this broader sense).

Background reading

For more background, see the Wikipedia pages on forecast bias and bias of an estimator and the content linked therein.

LINK-Cryonics Institute documentary

0 polymathwannabe 16 April 2014 10:44PM

"WE WILL LIVE AGAIN looks inside the unusual and extraordinary operations of the Cryonics Institute. The film follows Ben Best and Andy Zawacki, the caretakers of 99 deceased human bodies stored at below freezing temperatures in cryopreservation. The Institute and Cryonics Movement were founded by Robert Ettinger who, in his nineties and long retired from running the facility, still self-publishes books on cryonics, awaiting the end of his life and eagerly anticipating the next."

http://www.iht.com/2014/04/15/we-will-live-again/

Meetup : Ugh Fields

1 evand 16 April 2014 04:32PM

Discussion article for the meetup : Ugh Fields

WHEN: 17 April 2014 07:00:00PM (-0400)

WHERE: 2411 N Roxboro St 27704

We'll be discussing Ugh Fields: what they are, how they keep you from accomplishing stuff, and how to recognize and reduce them. As always, RSVPs are appreciated but not required. We encourage you to show up around 7, and we'll start on-topic content at 7:30. If you're feeling energetic about it, there's a relevant article. Afterwards, we will probably meander over to Fullsteam and be sociable.


Stories for exponential growth

1 VipulNaik 16 April 2014 03:15PM

Disclaimer: This is a collection of some simple stories for exponential growth. I've tried to list the main ones, but I might well have missed some, and I welcome feedback.

The topic of whether and why growth trends are exponential has been discussed on LessWrong before. For instance, see the previous LessWrong posts Why are certain trends so precisely exponential? and Mathematical simplicity bias and exponential functions. The purpose of this post is to explore some general theoretical reasons for expecting exponential growth, and the assumptions that these models rely on. I'll look at economic growth, population dynamics, and technological growth.

TL;DR

  1. Exponential growth (or decay) arises from a situation where the change in level (or growth rate) is proportional to the level. This can be modeled by either a continuous or a discrete differential equation.
  2. Feedback based on proportionality is usually part of the story, but could occur directly for the measured variable or in a hidden variable that affects the measured variable.
  3. In a simplified balanced economic growth model, growth is exponential because the addition to capital stock in a given year is proportional to output in that year, depreciation rate is constant, and output next year is proportional to capital stock this year.
  4. In a simple population dynamics model, growth is exponential under the assumption that the average number of kids per person stays constant.
  5. An alternative story of exponential growth is that performance is determined by multiplying many quantities, and we can work to make proportional improvements in the quantities one after the other. This can explain roughly exponential growth but not close-to-precise exponential growth.
  6. Stories of intra-industry or inter-industry coordination can explain a more regular exponential growth pattern than one might otherwise expect.

#1: Exponential arises from change in level (or growth rate) being proportional to the level

Brief mathematical introduction for people who have a basic knowledge of calculus. Suppose we're trying to understand how a quantity x (this could be national GDP of a country, or the price of 1 GB of NAND flash, or any other indicator) changes as a function of time t. Exponential growth means that we can write:

x = C a^t

where C > 0, a > 1 (exponential decay would mean a < 1). More conventionally, it is written in the form:

x = C e^(kt)

where C > 0, k > 0 (exponential decay would mean k < 0). The two forms are related as follows: a = e^k.

The key feature of the exponential function is that for any t, the quotient x(t+1)/x(t) is a constant independent of t (the constant in question being a). In other words, the proportional gain is the same over all time periods.

Exponential growth arises as the solution to the (continuous, ordinary, first-order first-degree) differential equation:

dx/dt = kx

This says that the instantaneous rate of change is proportional to the current value.

We can also obtain exponential growth as the solution to the discrete differential equation:

Δx = (a − 1) x

where Δx denotes the difference x(t+1) − x(t) (the discrete derivative of x with respect to t). What this says is that the discrete change in x is proportional to x.

To summarize, exponential growth arises as a solution to both continuous and discrete differential equations where the rate of change is proportional to the current level. The mathematical calculations work somewhat differently, but otherwise, the continuous and discrete situations are qualitatively similar for exponential growth.
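A tiny numerical check of the discrete case (the initial level and growth factor below are arbitrary): applying the discrete differential equation step by step reproduces the closed form x = C a^t.

```python
C, a = 100.0, 1.05  # arbitrary initial level and per-period growth factor

x = C
for t in range(1, 6):
    x = x + (a - 1) * x                          # discrete differential equation: Δx = (a - 1) x
    print(t, round(x, 4), round(C * a ** t, 4))  # the recurrence matches the closed form
```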

#2: Feedback based on proportionality is usually part of the story, but the phenomenon could occur in a visible or hidden process

The simplest story for why a particular indicator grows exponentially is that the growth rate is determined directly in proportion with the value at a given point in time. One way of framing this is that there is feedback from the level of the indicator to the rate of change of the indicator. To get a good story for exponential growth, therefore, we need a good story for why the feedback should be in the form of direct proportionality, rather than some other functional form.

However, we can imagine a subtly different story of exponential growth. Namely, the indicator itself is not the root of the phenomenon at all, but simply a reflection of other hidden variables, and the phenomenon of exponential growth is happening at the level of these hidden variables. For instance, visible indicator x might be determined as 0.82y² for a hidden variable y, and it might be that the variable y is the one that experiences feedback from its level to its rate of change. I believe this is conceptually similar to (though not mathematically the same as) hidden Markov models.

One LessWrong comment offered this sort of explanation: perhaps the near-perfect exponential growth of US GDP, and its return to an earlier trend line after deviation during some years, suggests that population growth is the hidden variable that drives long-run trends in GDP. The question of whether economic growth should revert to an earlier trend line after a shock is a core question of macroeconomics with a huge but inconclusive literature; see Arnold Kling's blog post titled Trend vs. Random Walk.

#3: A bare-bones model of balanced economic growth (balanced growth version of Harrod-Domar model)

Let's begin with a very basic model of economic growth. This is not to be applied directly in the understanding of real-world economies. Rather, it's meant to give us a crude idea of where exponentiality comes from.

In this model, an economy produces a certain output Y in a given year (Y changes from year to year). The economy consumes part of the output, and saves the rest of it to add to its capital stock K. Suppose the following hold:

  1. The fraction of output produced that is converted to additional capital stock is constant from year to year (i.e., the propensity to save is constant).
  2. The (fractional) rate of depreciation of capital stock (i.e., the fraction of capital stock that is lost every year due to depreciation) is constant.
  3. The amount of output produced in a given year is proportional to the capital stock at the end of the previous year, with the constant of proportionality not changing across years.

We have two variables here, output and capital stock, linked by proportionality relationships between them and between their year-on-year changes. When we work out the algebra, we'll discover that both variables grow exponentially in tandem.

The above assumptions describe a balanced growth model, where the shape and nature of the economy do not change. It just keeps growing in size, with all the quantities growing together in the same proportion. Economies may initially be far from a desirable steady state, or may be stuck in a low-savings steady state. Also note that if the rate of depreciation of capital stock exceeds the rate at which new capital stock is added, the economy will decay rather than grow exponentially.
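Here is a minimal numerical sketch of the balanced growth story; the savings rate, depreciation rate, and output-to-capital ratio below are made up.

```python
s, d, v = 0.2, 0.05, 0.5  # savings rate, depreciation rate, output/capital ratio
K = 100.0                 # initial capital stock

for year in range(1, 6):
    Y = v * K              # assumption 3: output proportional to capital stock
    K = K + s * Y - d * K  # assumptions 1 and 2: constant savings and depreciation
    print(year, round(Y, 2), round(K, 2))

# Capital (and hence output) grows by the constant factor 1 + s*v - d each
# year: exponential growth when s*v > d, exponential decay when s*v < d.
print(1 + s * v - d)  # 1.05 with these numbers
```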

If you're interested in actual models of economic growth used in growth theory and development economics, read up on the Harrod-Domar model and its variants such as the Ramsey-Cass-Koopmans model, AK model, and Solow-Swan model. For questions surrounding asymptotic convergence, check out the Inada conditions.

#4: Population dynamics

The use of exponential models for population growth is justified under the assumption that the number of children per woman who survive to adulthood remains constant. Assume a 1:1 sex ratio, and assume that women have an average of 3 kids who survive to adulthood. In that case, with every generation, the population multiplies by a factor of 3/2 = 1.5. After n generations, the population would be (1.5)n times the original population. This is of course a grossly oversimplified model, but it covers the rationale for exponential growth. In practice, the number of surviving children per woman varies over time due to a combination of fertility changes and changes in age-specific mortality rates.

The dynamics are even simpler to understand for bacteria in a controlled environment such as a petri dish. Bacteria are unicellular organisms and they reproduce by binary fission: a given bacterium splits into two new bacteria. As long as there are ample resources, a bacterium may split into two after an average interval of 1 hour. In that case, we expect the number of bacteria in the petri dish to double every hour.

#5: A large number of factors that multiply together to determine the quantity

Here is a somewhat different story for exponential growth that a number of people have proposed independently. In a recent comment, Ben Kuhn wrote:

One story for exponential growth that I don't see you address (though I didn't read the whole post, so forgive me if I'm wrong) is the possibility of multiplicative costs. For example, perhaps genetic sequencing would be a good case study? There seem to be a lot of multiplicative factors there: amount of coverage, time to get one round of coverage, amount of DNA you need to get one round of coverage, ease of extracting/preparing DNA, error probability... With enough such multiplicative factors, you'll get exponential growth in megabases per dollar by applying the same amount of improvement to each factor sequentially (whereas if the factors were additive you'd get linear improvement).

Note that in order for this growth to come out as close to exponential, it's important that the marginal difficulty, or time, or cost, of addressing the factors is about the same. For instance, if the overall indicator we are interested in is a product pqrs, it may be that in a given year, we can zero in on one of the four factors and reduce that by 5%, but it doesn't matter which one.

A slightly more complicated story is that the choice of what factor we can work on at a given stage is constrained, but the best marginal choices at all stages are roughly as good in proportional terms. For instance, maybe, for our product pqrs, the best way to start is by reducing p by 5%. But once we are done with that, next year the best option is to reduce q by 5%. And then, once that's done, the lowest-hanging fruit is to reduce r by 5%. This differs subtly from the previous one in that we're forced from outside in the decision of what factor to work on at the current margin, but the proportional rate of progress still stays constant.
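A toy illustration of the multiplicative story: the factor values below are arbitrary, and each year we cut just one factor by 5%, yet the product falls by the same 5% every year.

```python
import math

factors = [10.0, 8.0, 5.0, 2.0]  # the product p*q*r*s we want to drive down

for year in range(1, 9):
    factors[year % len(factors)] *= 0.95       # improve one factor this year
    print(year, round(math.prod(factors), 2))  # the product falls by exactly 5% per year
```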

However, in the real world, it's highly unlikely that the proportional gains quite stay constant. I mean, if we can reduce p by 5% in the first year and q by 5% in the second year, what really gets in the way of reducing both together? Is it just a matter of throwing more money at the problem?

By the way, one example of rapid progress that does seem to closely hew to the multiplicative model is the progress made on linear programming algorithms. Linear programming involves a fair number of algorithms within algorithms. For instance, solving certain types of systems of linear equations is a major subroutine invoked in the most time-critical component of linear programming.

My overall conclusion is that multiplicative stories are good for explaining why growth is very roughly close to exponential, but they are not strong enough by themselves to explain a very precise exponential growth trend. However, when combined with stories about regularization, they could explain what a priori seems an unexpectedly close to precise exponential.

#6: The story of coordination and regularization

Some people have argued that the reason Moore's law (and similar computing paradigms) has held for sufficiently long periods of history is explicit industry roadmaps such as the International Technology Roadmap for Semiconductors. I believe that a roadmap cannot bootstrap the explanation for growth being exponential. If roadmaps could dictate reality so completely, why didn't the roadmap decide on even faster exponential growth, or perhaps superexponential growth? No, the reason for exponential growth must come from some more fundamental factors.

But explicit or implicit roadmaps and industry expectations can explain why progress was so close to being precisely exponential. I offer one version of the story.

In a world where just one company is involved with research, manufacturing, and selling to the public, the company would try to invest according to what they expected consumer demand to be (see my earlier post for more on this). Since there aren't strong reasons to believe that consumer needs grow exponentially, nor are there good reasons to believe that progress at resolving successive barriers is close to precisely exponential, an exponential growth story here would be surprising.

Suppose now that the research and manufacturing processes are handled by different types of companies. Let's also suppose that there are many different companies competing at the research level and many different companies competing at the manufacturing level. The manufacturing companies need to make plans for how much to produce and how much raw material to keep handy for the next year, and these plans require having an idea of how far research will progress.

Since no individual manufacturer controls any individual researcher, and since the progress of individual research companies can be erratic, the best bet for manufacturers is to make plans based on estimates of how far researchers are expected to go, rather than on any individual research company's promise. And a reasonable way to make such an estimate is to have an industry-wide roadmap that serves a coordinating purpose. Manufacturers have an incentive to follow the roadmap, because deviating in either direction might result in them having factories that don't produce the right sort of stuff, or have too much or too little capacity. The research companies also have incentives to meet the targets, and in particular, to neither overshoot nor undershoot too much. The reasons for not undershooting are obvious: they don't want to be left behind. But why not overshoot? Since the manufacturers are basing their plans on the technology they expect,  a research company overshooting might result in technologies that aren't ready for implementation, so the advantage is illusory. On the other hand, the costs of overshooting (in terms of additional expenditures on research) are all too real.

Thus, the benefits of coordination between different parts of the "supply chain" (in this case, the ideas and the physical manufacturing) lead to greater regularization of the growth trend than one would expect otherwise. If there are reasons to believe that growth is roughly exponential (the multiplicative story could be one such reason) then this could lead to it being far more precisely exponential.

The above explanation is highly speculative and I don't have strong confidence in it.

PS on algorithm improvement

  • If the time taken for an algorithm is described as a sum of products, then only the factors of the summands that dominate in the big-oh sense matter. For simplicity, let's assume that the time taken is a sum of products that are all of the same order as one another.
  • To improve by a given constant of proportionality the time complexity of an algorithm where the time taken is a sum of products that are of the same order of magnitude, one strategy is to improve each summand by that constant of proportionality. Alternatively, we could improve some summands by a lot more, in which case we'd have to determine the overall improvement as the appropriate weighted average.
  • To improve a particular summand by a particular constant of proportionality, we may improve any one factor of that summand by that constant of proportionality. Or, we may improve all factors of that summand by constants that together multiply to the desired constant of proportionality.

Open Thread April 16 - April 22, 2014

4 Tenoke 16 April 2014 07:05AM

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Different time horizons for forecasting

1 VipulNaik 16 April 2014 03:30AM

Disclaimer: This post contains some very preliminary thoughts on a topic that I believe would be of interest to some people here. There are probably better expositions on the subject that I haven't been able to find. If you know of such expositions, I'd appreciate being pointed to them.

There are qualitative differences between the types of forecasting that are feasible, or most suitable, for different time horizons. In this post, I discuss some of the possibilities for such time horizons and the forecasts that can be made for them.

The present (today)

Predicting the present doesn't involve prediction so much as it involves measurement. But that doesn't mean it's a slam dunk: one still needs to make a lot of measurements to come up with precise and accurate quantities. One cannot simply count the entire population of a region in one stroke. Doing so requires planning and a detailed infrastructure. And in many cases, it's not possible to measure perfectly, so we measure in part and then use theory (such as sampling theory) to extrapolate from there.

The very near future (tomorrow)

The very near future differs from the present in that it cannot be measured directly, but measuring it is often no more complicated than measuring the present. In a discrete model, it's the next step beyond the present. An example of a tomorrow prediction is: "what restaurants will be open in the city of Chicago tomorrow?" For any restaurant to be open tomorrow, it is most likely either already operating today, or has applied to open tomorrow. In either case, a good stock-taking of the situation today would give a clear idea of what's in store for tomorrow. Another example is when people make projections about employment or GDP based on asking people about their estimated workforce sizes or production levels in the near future.

Predictions about the near future involve a combination of the following:

  • assuming persistence from the present
  • asking people for their intentions and estimates
  • identifying and adjusting for any major sources of difference between today and tomorrow. In the restaurant case, an example of a major source of difference would be if "tomorrow" happened to be a major festival where restaurants customarily closed.

Who forecasts the very near future? As it turns out, a lot of people. I gave examples of economic indicator estimates based on surveys of representative samples of the economy. Also, I believe (I don't have an inside view here) that industry associations and trade journals function this way: they get data from all their members on their production plans, then they pool together the data and publish comprehensive information so that the industry as a whole is well-informed about production plans, and can think a step ahead. (SEMI might be an example).

The near but not very near future, or a few steps down the line

For the future that's a little farther out than tomorrow, simply assuming persistence or asking people isn't good enough. Persistence doesn't work because even though each day is highly correlated to the next, the correlation weakens as we separate the days out more and more. Asking people for their intentions doesn't work because people themselves are reacting to each other. For inanimate systems, different components of the system interact with each other.
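As a toy illustration of how persistence degrades, consider an AR(1) process, one of the simplest models of a quantity where each day is correlated with the previous one. The correlation between today and k steps ahead decays geometrically (the persistence parameter below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.9, 100_000  # persistence parameter and number of simulated days

x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()  # AR(1): today = rho * yesterday + noise

for k in (1, 5, 10, 20):
    corr = np.corrcoef(x[:-k], x[k:])[0, 1]
    print(k, round(corr, 3), round(rho ** k, 3))  # empirical vs. theoretical decay rho**k
```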

This is probably the time horizon where some sort of formal model or computer simulation works best. For instance, weather models for the next 5 days or so perform somewhat better than the fallback options of persistence and climatology, and in the 5-10 day range they perform somewhat but not a lot better than climatology. Beyond 10 days, climatology generally wins.

Similarly, this sort of modeling might work well for estimating GDP changes over two or three quarters, because the model can account for how the changes in one quarter (the very near future) will have ripple effects for another quarter, and then another.

The problem with such models is that they quickly lose coherence. Small variations in initial assumptions, to a level that we cannot hope to measure precisely, start having huge potential ripple effects. Model uncertainty also gets in the way. The range of possibilities is so large that we might as well get to more general long-term models.

What is the value of making such predictions? The case of weather prediction is obvious: predicting extreme weather events saves lives, and even making more mundane predictions can help people plan their outdoor events and travel and can help transportation services better manage their services. Similar predictions in the economic or business realm can also help.

The organizations who specialize in this sort of prediction tend to be the same as the ones predicting the very near future, probably because they have all the data already, and so it's easiest for them to run the relevant models.

The medium-term future

This is the part of the future where general domain-specific phenomena might be useful. In the case of weather, the medium-term future is general climatology: how warm are summers, and how cold are winters? When does a place get rain?

Computer simulations have decohered, and formal models that are sufficiently realistic in the short term get too complicated. So what do we use? General domain-specific phenomena, including information about equilibrating and balancing influences and positive and negative feedback mechanisms. Trend extrapolation, in the (rare?) cases that it's justified. Reality checks based on considerations of the sizes and growth potentials of different industries and markets.

The medium-term future is the time horizon where:

  • New companies can be started
  • City-level transportation systems can be built
  • Companies can make large-scale capital investments in new product lines and begin reaping the profits from them
  • Government policies, such as overhauls to health care legislation or migration policy, can be implemented and their initial effects be seen

My very crude sense is that this is the highest-leverage area for improvements in forecasting capabilities at the current margin. It's far out enough that major preparatory, preventative, and corrective steps can be taken. It's near enough that the results can actually be seen and can be used to incentivize current decision makers. It's far enough that direct simulation or intricate models don't stay coherent, but near enough that intuitions derived from present conditions, combined with general domain-specific knowledge, continue to be broadly valid.

The long-term future

The dividing line between the medium-term and long-term future is unclear. One possible way of distinguishing between the two is that the medium-term future is heavily grounded in timelines. It's specifically interested in asking what will happen in a particular interval of time, or when a particular milestone will be achieved. With the long-term future, on the other hand, timelines are too fuzzy to even be useful. Rather, we are interested simply in filling in the details of what it might look like. A discussion of a world that's 3 degrees Celsius warmer, or of space travel, or of a post-singularity world, or of a world that is solar-powered, might fit this "long-term" moniker. Robin Hanson's discussion of long-term growth and the multiple modes of such growth also fits this "long-term" category.

With the long-term future, simply painting futuristic visions, informed by a broad understanding of theory to separate the plausible from the implausible, might be a better bet than reasoning outward from the present moment in time or from the "climatology" of the world today. Indeed, as I noted in my discussion of Megamistakes, there may well be a negative correlation between having a clear vision of the future in that sense and being able to make good timed predictions for the medium term.

With the long-term future, are there, or should there be, incentives to be accurate? No. Rather, the incentives may be in the direction of painting plausible (even if improbable) future scenarios, with the dual goal of preparing for them and influencing the probability of achieving them. This means dampening the probability of the catastrophic scenarios (even if they're low-probability to begin with) and increasing the probability of, perhaps even directly working towards, the good scenarios. On the good-scenario side, a futurist with a rosy vision of the future might write a science fiction or speculative science book that, a generation or two later, inspires an entrepreneur, scientist, or engineer to go build one of those highly futuristic items.

Nick Beckstead's research on the overwhelming importance of shaping the far future makes the relevant philosophical arguments.

I could probably split up the long term further. I'm not sure what some natural ways of performing such a split might be, and I also don't think it's relevant for my purposes, because most long-term forecasts are hard to evaluate anyway.

PS: My post on the logarithmic timeline was a result of similar thinking, but the two posts ended up being on different topics. This post is about the qualitative differences between time horizons; that post is about having a standard to compare forecasts for different time intervals in the future.

Group Rationality Diary, April 16-30

4 therufs 16 April 2014 03:04AM

This is the public group instrumental rationality diary for April 16-30.

It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.

Previous diary: April 1-15

Rationality diaries archive

Using the logarithmic timeline to understand the future

3 VipulNaik 16 April 2014 02:00AM

Disclaimer: I think what I've said is sufficiently obvious and basic that I really doubt that it's original, but I can't easily find any other source that lays out the points I made here. If you are aware of such a source, please let me know in the comments here and I'll credit it. I'd also be happy to be pointed to any relevant literature. It's also possible that I'm overlooking some obvious rejoinders that render my claims wrong or irrelevant; if so, I appreciate criticism on that front.

The logarithmic timeline is a timeline where time is presented on a logarithmic scale. Note that this differs from the idea of plotting logarithms of quantities with respect to time (a common practice when understanding exponential growth). In those plots, the vertical axis (the dependent variable plotted as a function of time) is plotted logarithmically. With the logarithmic timeline, the time axis itself is plotted logarithmically. If we're plotting quantities as a function of time, then using a logarithmic timeline has an effect that's in many ways the opposite of the effect of using a logarithmic scale for the quantity being plotted.

Wikipedia has a page on the logarithmic timeline (see also this detailed logarithmic timeline of the universe and this timeline of the far future), but I haven't seen the topic discussed much in the context of forecasting precision and accuracy, so I thought I'd do a post on it (I'll list some relevant literature I found at the end of the post).

TL;DR

Here's an overview of the sections of the post:

  1. What the logarithmic timeline means for understanding forecasts, and how it differs from the linear timeline.
  2. Crudely, the logarithmic timeline is useful because uncertainties accumulate over time, with the amount of uncertainty accumulated being roughly proportional to how far out we are in the future.
  3. Mathematically, the logarithmic timeline is suitable for processes whose time evolution is functionally described in terms of the product of time with a parameter whose precise value we are uncertain about.
  4. The logarithmic timeline can also be important for the asymptotic analysis of more general functional forms, if the dominant term behaves in the manner described in #3.
  5. I don't know if the logarithmic timeline is correctly calibrated for comparing the value of particular levels of forecasting precision and accuracy.
  6. The logarithmic timeline is related to hyperbolic discounting.
  7. If using the logarithmic timeline, point estimates for how far out in time something will happen should be averaged using geometric means rather than arithmetic means. Similar averaging would need to be done for interval estimates or probability distribution estimates for the time variable.
  8. I don't know if empirical evidence bears out the intuition that forecast accuracy should be time-independent if we use the logarithmic timeline.

#1: What the logarithmic timeline means for understanding forecasts

First off, we take the present as the origin point for the logarithmic timeline. There are other logarithmic timelines that are better suited for other purposes: using the origin of the universe as the starting point is better suited for physics. But when it comes to understanding forecasts based on our best knowledge of what has transpired so far, the present is the natural origin.

Let's first understand the implicit assumption embedded in the use of a linear timeline for understanding forecasts. With a linear timeline, a statement of the form "technological milestone x will happen in year 2017" has the same prima facie precision as a statement of the form "technological milestone y will happen in year 2057", despite the fact that the year 2017 is (as of the time of this writing) just 3 years in the future and the year 2057 is 43 years in the future. But a little reflection shows that this doesn't jibe with intuition: making predictions to single years 43 years in advance is more impressive than making predictions to single years a mere 3 years in advance. Similarly, saying that a particular technological innovation will happen between 2031 and 2035 involves making a more precise statement than saying that a particular technological innovation will happen between 2015 and 2019.

We want a timeline where the equivalent in the far future of a near-future year is an interval comprising more than one year. But there are many such choices of monotone functions. I believe that the logarithmic one is best. In other words, I'm advocating for a situation where you find "between 5 and 10 years from now" as precise as "between 14 and 28 years from now", i.e., it is the quotient of the endpoint to the startpoint (the multiplicative distance) that matters rather than the difference between them (the additive distance).

But why use the logarithm rather than some other monotone transformation? I proffer some reasons below.

#2: A crude explanation for the logarithmic timeline

If you're mathematically sophisticated, skip ahead straight to the math.

Here's a crude explanation. Suppose you're trying to estimate the time in which the cost per base pair of DNA sequencing drops to 1/8 of its current level. You have an estimate that it takes between 4 and 11 years to halve. So the natural thing to do is to say: "To get to 1/8, it has to go through three halvings. In the best case, that's 3 times 4 equals 12 years. In the worst case, that's 3 times 11 equals 33 years. So it will happen between 12 and 33 years from now."

Note that the length of the interval for getting to 1/8 is 33 - 12 = 21, three times the length of the interval for getting to half (11 - 4 = 7). But the ratio of the upper to the lower endpoint is the same in both cases (namely 11/4).

None of the numbers above are significant; I chose them for the benefit of people who prefer worked numerical examples before, or instead of, delving into mathematical formalism.
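
If you'd like to play with the arithmetic, here is a minimal Python sketch of the example above (the 4-to-11-year halving range is the made-up figure from the example):

```python
# Interval arithmetic for repeated halvings of DNA sequencing cost.
# One halving is estimated to take between 4 and 11 years (made-up numbers).
low, high = 4, 11

for halvings in (1, 2, 3):
    earliest, latest = halvings * low, halvings * high
    print(f"{halvings} halving(s): between {earliest} and {latest} years; "
          f"length = {latest - earliest}, ratio = {latest / earliest:.2f}")

# The interval length grows (7, 14, 21 years), but the ratio of the
# endpoints stays constant at 11/4 = 2.75 -- the width on the
# logarithmic timeline.
```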

Note also that while this particular example had an exponential process, we don't need the process to be exponential per se for the broad dynamics here to apply. We do need some mathematical conditions, but they aren't tied to the process being exponential (in fact, exponential versus linear isn't a robust distinction for this context because either can be turned to the other via a monotone transformation). I turn to the mathematical formalism next.

#3: The math: logarithmic timeline is natural for a fairly general functional form of evolution with time

Consider a quantity y whose variation with time t (with t = 0 marking the current time) is given by the general functional form:

y = f(kt)

where f is a monotone increasing function, and k is a parameter that we have some uncertainty about. Let's say we know that a < k < b for some known positive constants a and b. We now need to answer a question of the form "at what time will y reach a specific value y1?"

Since f is monotone increasing, it is invertible, so solving for t we obtain:

t = f^(-1)(y1)/k

There's uncertainty about the value of k. So t ranges between the possibilities f^(-1)(y1)/b and f^(-1)(y1)/a. In particular, if we divide the endpoint of the interval by the starting point, we get b/a, a quantity independent of the value of y1. Thus, the use of the logarithmic timeline is a robust choice.

What sort of functional forms match the above description? Many. For instance:

  • A linear functional form y = kt + c where k is a positive constant and c is a constant. Note that even though there are two parameters here, the value of c is determined by evaluating at t = 0 knowing the present value, and is not a source of uncertainty.
  • An exponential functional form y = Ce^(kt) where C and k are positive constants. Note that even though there are two parameters here, the value of C is determined by evaluating at t = 0 knowing the present value, and is not a source of uncertainty.
  • A quadratic functional form y = (kt)^2 + c where k is a positive constant and c is a constant. Note that even though there are two parameters here, the value of c is determined by evaluating at t = 0 knowing the present value, and is not a source of uncertainty.

Of course, not every functional form is of this type. For instance, consider the functional form y = t^k. Here, the parameter is in the exponent and does not interact multiplicatively with time. Therefore, the logarithmic timeline does not work.
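
As a sanity check, here is a short Python sketch (with an illustrative parameter range a = 0.5, b = 2 that I've made up) verifying that the three functional forms above give an arrival-time interval with constant endpoint ratio b/a, while the counterexample y = t^k does not:

```python
import math

a, b = 0.5, 2.0  # assumed uncertainty range for the parameter k (made up)

# Inverted arrival times t = f^(-1)(y1)/k for the three forms above,
# taking c = 0 and C = 1 for simplicity.
inverses = {
    "linear y = kt":           lambda y1, k: y1 / k,
    "exponential y = Ce^(kt)": lambda y1, k: math.log(y1) / k,
    "quadratic y = (kt)^2":    lambda y1, k: math.sqrt(y1) / k,
}

for name, t in inverses.items():
    for y1 in (10, 1000):
        # Latest arrival time (k = a) divided by earliest (k = b).
        print(f"{name}, y1 = {y1}: ratio = {t(y1, a) / t(y1, b)}")  # always 4.0

# Counterexample y = t^k, so t = y1^(1/k): the ratio depends on y1.
for y1 in (10, 1000):
    print(f"y = t^k, y1 = {y1}: ratio = {y1 ** (1/a) / y1 ** (1/b):.0f}")
```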

#4: Asymptotic significance of the logarithmic timeline

A functional form may involve a sum of multiple functions, each involving a different parameter. It does not precisely fit the framework above. However, for sufficiently large t, one piece of the functional form dominates, and if that piece has the form described above, everything works well. For instance, consider a functional form with two parameters:

y = e^(kt) + mt + c

Both k and m are parameters with known ranges (c is determined from them and the value at 0). For sufficiently large t, however, this looks close enough to y = e^(kt) that we can use that as an approximation and find that the logarithmic timeline works well enough. Thus, the logarithmic timeline could be asymptotically significant.
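
A quick numeric illustration, with made-up values (k ranging over (0.5, 1), m fixed at 100 for simplicity rather than given its own range): solving e^(kt) + mt = y1 for t at the two ends of the k range, the ratio of arrival times approaches b/a = 2 as the target y1 grows and the exponential term dominates:

```python
import math

a, b, m = 0.5, 1.0, 100.0  # made-up range for k; m fixed for simplicity

def arrival_time(y1, k):
    """Solve e^(k*t) + m*t = y1 for t by bisection (left side is increasing)."""
    lo, hi = 0.0, 1.0
    while math.exp(k * hi) + m * hi < y1:
        hi *= 2  # grow the bracket until it contains the solution
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if math.exp(k * mid) + m * mid < y1 else (lo, mid)
    return (lo + hi) / 2

for y1 in (1e3, 1e6, 1e12):
    print(f"y1 = {y1:g}: ratio = {arrival_time(y1, a) / arrival_time(y1, b):.3f}")
# The printed ratios climb toward b/a = 2 as y1 grows.
```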

#5: Does the logarithmic timeline correctly measure the benefits of a particular level of forecasting precision?

We've given above a reason why the logarithmic timeline correctly measures precision from the perspective of forecasting ability. But what about the perspective of the value of forecasting? Does knowing that something will happen between 5 years and 10 years from now deliver the same amount of value as knowing that something will happen between 14 years and 28 years from now? Unfortunately, I don't have a clear way of thinking about this question, but I can think of plausible intuitions supporting the logarithmic timeline choice: the farther out in the future we are talking, the less valuable it is to know exact dates, and ratios just happen to capture that lower level of value correctly.

#6: Relation with hyperbolic discounting

Gunnar_Zarncke points out in a comment that the logarithmic timeline is related to hyperbolic discounting, a particular form of discounting the future that bears close empirical relation with how people view the future. Hyperbolic discounting gives differential weight 1/t to a time instant t in the future. This relates with the logarithmic timeline because d(ln t)/dt = 1/t. This could potentially be used to provide a rational basis for hyperbolic discounting, vindicating the rationality of human intuition.

A follow-up comment by Gunnar_Zarncke links to an earlier LessWrong comment of his that in turn links to research showing that people's subjective perception of time fits the logarithmic timeline model.

#7: Point estimates and geometric means

Another implication of the logarithmic timeline is that if we have a collection of different point estimates for points in time when a specific milestone will be attained, the appropriate method of averaging is the geometric mean rather than the arithmetic mean. The geometric mean is the averaging notion that corresponds to taking the arithmetic mean on the logarithmic scale.

For instance, if three people are asked for a project estimate, and they give the numbers of 2 years, 8 years, and 32 years, then the geometric mean estimate is the cube root of 2 × 8 × 32, which turns out to be 8. The arithmetic mean estimate is (2 + 8 + 32)/3 = 14.
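
In code, the same computation looks like this (a trivial Python sketch of the example above):

```python
import math

estimates = [2, 8, 32]  # the three point estimates, in years

geometric = math.prod(estimates) ** (1 / len(estimates))
arithmetic = sum(estimates) / len(estimates)
print(f"geometric mean: {geometric:.1f}")    # 8.0
print(f"arithmetic mean: {arithmetic:.1f}")  # 14.0

# The geometric mean is exactly the arithmetic mean on the log scale:
log_mean = math.exp(sum(math.log(x) for x in estimates) / len(estimates))
assert abs(geometric - log_mean) < 1e-9
```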

Note that, thanks to the AM-GM inequality, the geometric mean is never larger than the arithmetic mean, and they're equal only when all the quantities being averaged are equal to each other to begin with. This suggests that, if people tend to be optimistic about how quickly things will happen when they use arithmetic means, they'll appear even more optimistic when using geometric means. On the other hand, the logarithmic timeline might also result in the optimism not seeming so bad.

Similar geometric averaging would need to be done for interval estimates or probability distribution estimates for the time variable.

#8: Empirically, is forecast accuracy time-independent once we switch to the logarithmic timeline?

I consider this the most important question. Namely, as an empirical matter, are people about as good at figuring out whether something will happen between 5 and 10 years from now as they are at figuring out whether something will happen between 14 and 28 years from now?

I do believe that empirical evidence confirms what intuition knows: on the linear timeline, forecast accuracy decays. Thus, for instance, when people are asked for the precise year when something will happen, estimates for things farther out in the future are less accurate. When people are asked to estimate GDP per capita values, estimates far out in the future are worse than near-term estimates. But how much worse are the long term forecasts? Is the worsening in keeping with the logarithmic timeline story?

Note that if the general functional form I described above correctly describes a process, then the logarithmic timeline story is validated theoretically, but the empirical question is still open.

Most research I'm aware of just looks at estimates within specified intervals, such as "what will the GDP growth rate be in a given year?" I suspect an analysis of the data from these experiments might allow us to judge the hypothesis of constant accuracy on the logarithmic timeline, but I don't think just looking at their abstracts would settle the hypothesis. I'd welcome suggestions on possible tests based on already existing data.

Note also that if existing research uses arithmetic means to aggregate estimates for "how far out in the future" something will happen, we'll have to get back to the source data and use geometric means instead.

There may be research on the subject of evaluating forecast accuracy using a logarithmic timeline (most research on the logarithmic timeline relates to the history of the universe and evolution, rather than the future of humanity or technology). I haven't been able to locate it, and I'd love it if people in the comments pointed me to it.

Potentially relevant literature: I skimmed the paper Forecasting the growth of complexity and change by Theodore Modis, Technology Forecasting and Social Change, Vol. 69, 2002 (377-404), available online (gated) here. I haven't been able to locate an ungated version. The paper uses a logarithmic timeline for the past, taking the present as the origin. A quick skim did not lead me to believe it overlapped with the points I made here. Incidentally, Modis has been critical of Ray Kurzweil's singularity forecast.

See also the discussion at the end of #6 (hyperbolic discounting) linking to the paper On the perception of time by F. Thomas Bruss and Ludger Ruschendorf.

Addendum: To clarify the relation between logarithmic timeline, logarithmic scales, linear functions, power functions, and exponential functions, the table below gives, in its cells, the type of function we'd end up graphing:

| Growth rate of quantity with respect to time | Ordinary scale | Logarithmic timeline | Logarithmic scale for quantity, ordinary timeline | Logarithmic scale for both |
|---|---|---|---|---|
| Linear | Linear | Exponential | Logarithmic | Linear with slope 1 |
| Power function | Power function | Exponential | Logarithmic | Linear |
| Exponential | Exponential | Double exponential | Linear | Exponential |

The effect of effectiveness information on charitable giving

14 Unnamed 15 April 2014 04:43PM

A new working paper by economists Dean Karlan and Daniel Wood, The Effect of Effectiveness: Donor Response to Aid Effectiveness in a Direct Mail Fundraising Experiment.

The Abstract:

We test how donors respond to new information about a charity’s effectiveness. Freedom from Hunger implemented a test of its direct marketing solicitations, varying letters by whether they include a discussion of their program’s impact as measured by scientific research. The base script, used for both treatment and control, included a standard qualitative story about an individual beneficiary. Adding scientific impact information has no effect on whether someone donates, or how much, in the full sample. However, we find that amongst recent prior donors (those we posit more likely to open the mail and thus notice the treatment), large prior donors increase the likelihood of giving in response to information on aid effectiveness, whereas small prior donors decrease their giving. We motivate the analysis and experiment with a theoretical model that highlights two predictions. First, larger gift amounts, holding education and income constant, is a proxy for altruism giving (as it is associated with giving more to fewer charities) versus warm glow giving (giving less to more charities). Second, those motivated by altruism will respond positively to appeals based on evidence, whereas those motivated by warm glow may respond negatively to appeals based on evidence as it turns off the emotional trigger for giving, or highlights uncertainty in aid effectiveness.

In the experimental condition (for one of the two waves of mailings), the donors received a mailing with this information about the charity's effectiveness:

In order to know that our programs work for people like Rita, we look for more than anecdotal evidence. That is why we have coordinated with independent researchers [at Yale University] to conduct scientifically rigorous impact studies of our programs. In Peru they found that women who were offered our Credit with Education program had 16% higher profits in their businesses than those who were not, and they increased profits in bad months by 27%! This is particularly important because it means our program helped women generate more stable incomes throughout the year.

These independent researchers used a randomized evaluation, the methodology routinely used in medicine, to measure the impact of our programs on things like business growth, children's health, investment in education, and women's empowerment.

In the control condition, the mailing instead included this paragraph:

Many people would have met Rita and decided she was too poor to repay a loan. Five hungry children and a small plot of mango trees don’t count as collateral. But Freedom from Hunger knows that women like Rita are ready to end hunger in their own families and in their communities.

Meetup : Boston - Two Parables on Language and Philosophy

1 Vika 15 April 2014 12:10PM

Discussion article for the meetup : Boston - Two Parables on Language and Philosophy

WHEN: 20 April 2014 03:30:00PM (-0400)

WHERE: MIT, 25 Ames St, Cambridge, MA

Sam Rosen will continue his talk from March 23 with Parable 2 on Language and Philosophy, starting at 4pm.

Cambridge/Boston-area Less Wrong meetups start at 3:30pm, and have an alternating location:

  • 1st Sunday meetups are at Citadel in Porter Sq, at 98 Elm St, apt 1, Somerville.

  • 3rd Sunday meetups are in MIT's building 66 at 25 Ames St, room 156. Room number subject to change based on availability; signs will be posted with the actual room number.

(We also have last Wednesday meetups at Citadel at 7pm.)

Our default schedule is as follows:

—Phase 1: Arrival, greetings, unstructured conversation.

—Phase 2: The headline event. This starts promptly at 4pm, and lasts 30-60 minutes.

—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.

—Phase 4: Dinner.

Discussion article for the meetup : Boston - Two Parables on Language and Philosophy

My Heartbleed learning experience and alternative to poor quality Heartbleed instructions.

13 aisarka 15 April 2014 08:15AM

Due to the difficulty of finding high-quality Heartbleed instructions, I have discovered that perfectly good, intelligent rationalists either didn't do all that was needed and ended up with a false sense of security, or did things that increased their risk without realizing it and needed to take some additional steps.  Part of the problem is that organizations who write for end users do not specialize in computer security and vice versa, so many of the Heartbleed instructions for end users had issues.  The issues range from conflicting and confusing information to outright ridiculous hype.

As an IT person and a rationalist, I knew better than to jump to the proposing solutions phase before researching [1].  Recognizing the need for well thought out Heartbleed instructions, I spent 10-15 hours sorting through the chaos to create more comprehensive Heartbleed instructions.  I'm not a security expert, but as an IT person who has read about computer security out of a desire for professional improvement and also out of curiosity, and who is familiar with various research issues, cognitive biases, logical fallacies, etc., I am not clueless either.

This is a major event that some sources are calling one of the worst security problems ever to happen on the Internet [2].  It has been proven to be more than a theoretical risk (four people hacked the keys to the castle out of Cloudflare's challenge in just one day) [3], it has been badly exploited (900 Canadian social insurance numbers were leaked today) [4], and some evidence exists that it may have been used for spying for a long time (EFF found evidence of someone spying on IRC conversations) [5].  In light of all this, I think it's important to share my compilation of Heartbleed instructions just so that a better list of instructions is out there.

More importantly, this disaster is a very rare rationality learning opportunity: reflecting on our behavior and comparing it with what we realize we should have done after becoming more informed may help us see patches of irrationality that could harm us during future disasters.  For that reason, I did some rationality checks on my own behavior by asking myself a set of questions.  I have of course included the questions.

 

Heartbleed Research Challenges this Post Addresses:

  - There are apparent contradictions between sources about which sites were affected by Heartbleed, which sites have updated for Heartbleed, which sites need a password reset, and whether to change your passwords now or wait until the company has updated for Heartbleed.  For instance, Yahoo said Facebook was not vulnerable. [6] LastPass said Facebook was confirmed vulnerable and recommended a password update. [7]

  - Companies are putting out a lot of "fluffspeek"*, which makes it difficult to figure out which of your accounts have been affected, and which companies have updated their software.

  - Most sources *either* specialize in writing for end-users *or* are credible sources on computer security, not both.

  - Different articles have different sets of Heartbleed instructions.  None of the articles I saw contained every instruction.

  - A lot of what's out there is just ridiculous hype. [8]

 

Disclaimer

I am not a security specialist, nor am I certified in any security-related area.  I am an IT person who has randomly read a bunch of security literature over the last 15 years, but there *is* a definite quality difference between an IT person who has read security literature and a professional who is dedicated to security.  I can't give you any guarantees (though I'm not sure it's wise to accept that from the specialists either).  Another problem here is time.  I wanted to act ASAP.  With hackers on the loose, I do not think it wise to invest the time it would take me to create a Gwern style masterpiece.  This isn't exactly slapped together, but I am working within time constraints, so it's not perfect.  If you have something important to protect, or have the money to spend, consult a security specialist.

 

Compilation of Heartbleed Instructions


  Beware fraudulent password reset emails and shiny Heartbleed fixes.

  With all the real password reset emails going around, there are a lot of scam artists out there hoping to sneak in some dupes.  A lot of people get confused.  It doesn't mean you're stupid.  If you clicked a nasty link, or even if you're not sure, call the company's fraud department immediately.  That's why they're there. [9]  Always be careful about anything that seems too good to be true, as the scam artists have also begun to advertise Heartbleed "fixes" as bait.


  If the site hasn't done an update, it's risky to change your password.

  Why: This may increase your risk.  If Heartbleed isn't fixed, any new password you type in could be stolen, and a lot of criminals are probably doing whatever they can to exploit Heartbleed right now since they just found out about it.  "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  If you use digital password storing, consider whether it is secure.

  Some digital password storing software is way better than others.  I can't recommend one, but be careful which one you choose.  Also, check them for Heartbleed.


  If you already changed your password, and then a site updates or says "change your password", do it again.

  Why change it twice?: If you changed it before the update, you were sending that new password over a connection with a nasty security flaw.  Consider that password "potentially stolen" and make a new one.  "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  If a company says "no need to change your password" do you really want to believe them?

  There's a perverse incentive for companies to tell you "everything is fine" when in fact it is not fine, because nobody wants to be seen as having bad security on their website.  Also, if someone did steal your password through this bug, it's not traceable to the bug.  Companies could conceivably claim "things are fine" without much accountability.  "Exploitation of this bug leaves no traces of anything abnormal happening to the logs." [11] I do not know whether, in practice, companies respond to similar perverse incentives, or if some unknown thing keeps them in check, but I have observed plenty of companies taking advantage of other perverse incentives.  Health care rescission for instance.  That affected much more important things than data.


  When a site has done a Heartbleed update, *then* change your password.

  That's the time to do it. "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  Security Questions

  Nothing protected your mother's maiden name or the street you grew up on from Heartbleed any more than your passwords or other data.  A stolen security question can be a much bigger risk than a stolen password, especially if you used the same one on multiple different accounts.  When you change your password, also consider whether you should change your security questions.  Think about changing them to something hard to guess, unique to that account, and remember that you don't have to fill out your security questions with accurate information.  If you filled the questions out in the last two years, there's a risk that they were stolen, too.


  How do I know if a site updated?

 

  Method One:

    Qualys SSL Labs, an information security provider, created a free SSL Server Test.  Just plug in the domain name and Qualys will generate a report.  Yes, it checks the certificate, too.  (Very important.)

    Qualys Server Test

 

  Method Two:

    CERT, a major security flaw advisory publisher, listed some (not all!) of the sites that have updated.  If you want a list, you should use CERT's list, not other lists. 

    CERT's List

    Why CERT's list?  Hearing "not vulnerable" on some news website's list does not mean that any independent organization verified that the site was fine, nor that an independent organization even has the ability to verify that the site has been safe for the entire last two years.  If anyone can do that job, it would be CERT, though I am not aware of any tests of their abilities in that regard.  Also, there is no fluffspeek*.


  Method Three:

    Search the site itself for the word "Heartbleed" and read the articles that come up.  If the site had to do a Heartbleed update, change your password.  Here's the quick way to search a whole site in Google (do not add "www"):

    site:websitename.com Heartbleed


  If an important site hasn't updated yet:

  If you have sensitive data stored there, don't log into that site until it's fixed.  If you want to protect it, call them up and try to change your password by phone or lock the account down.  "Stick to reputable websites and services, as those sites are most likely to have addressed the vulnerability right away." [10]


  Check your routers, mobile phones, and other devices.

  Yes, really. [13] [14]


  If you have even the tiniest website:

  Don't think "There's nothing to steal on my website".  Spammers always want to get into your website.  Hackers make software that exploits bugs and can share or sell that software.  If a hacker shares a tool that exploits Heartbleed and your site is vulnerable, spammers will get the tool and could make a huge mess out of everything.  That can get you blacklisted and disrupt email, it can get you removed from Google search engine results, it can disrupt your online advertising ... it can be a mess.

  Get a security expert involved to look for all the places where Heartbleed may have caused a security risk on your site, preferably one who knows about all the different services that your website might be using.  "Services" meaning things like a vendor that you pay so your website can send bulk text messages for two-factor authentication, or a free service that lets users do "social sign on" to log into your site with an external service like Yahoo.  The possibilities for Heartbleed to cause problems on your website, through these kinds of services, are really pretty enormous.  Both paid services and free services could be affected.

  A sysadmin needs to check the server your site is on to figure out if it's got the Heartbleed bug and update it.

  Remember to check your various web providers like domain name registration services, web hosting company, etc.


Rationality Learning Opportunity (The Questions)

We won't get many opportunities to think about how we react in a disaster.  For obvious ethical reasons, we can't exactly create disasters in order to test ourselves.  I am taking the opportunity to reflect on my reactions and am sharing my method for doing this.  Here are some questions I asked myself which are designed to encourage reflection.  I admit to having made two mistakes at first: failing to apply rigorous skepticism to each news source right from the very first article I read, and underestimating the full extent of what it would take to address the issue.  What saved me was noticing my confusion.

  When you first heard about Heartbleed, did you fail to react?  (Normalcy bias)

  When you first learned about the risk, what probability did you assign to being affected by it?  What probability do you assign now?  (Optimism bias)

  Were you surprised to find out that someone in your life did not know about Heartbleed, and regret not telling them when it had occurred to you to tell them?  (Bystander effect)

  What did you think it was going to take to address Heartbleed?  Did you underestimate what it would take to address it competently?  (Dunning-Kruger effect)

  After reading news sources on Heartbleed instructions, were you surprised later that some of them were wrong?

  How much time did you think it would take to address the issue?  Did it take longer?  (Planning fallacy)

  Did you ignore Heartbleed?  (Ostrich effect)


*Fluffspeek:

Companies, of course, want to present a respectable face to customers, so most of them are not just coming out and saying "We were affected by Heartbleed.  We have updated.  It's time to change your password now."  Instead, some have been writing fluff like:

  "We see no evidence that data was stolen."

  According to the company that found this bug, Heartbleed doesn't leave a trail in the logs. [15] If someone did steal your password, would there be evidence anyway?  Maybe some companies really were able to rule that out somehow.  Positivity bias, a type of confirmation bias, is an important possibility here.  Maybe, like many humans, these companies simply failed to "Look into the dark" [16] and think of alternate explanations for the evidence they're seeing (or not seeing, which can sometimes be evidence [17], but not useful evidence in this case).

  "We didn't bother to tell you whether we updated for Heartbleed, but it's always a good idea to change your password however often."

  Unless you know each website has updated for Heartbleed, there's a chance that you're going to go out and send your new passwords right through a bunch of websites' Heartbleed security holes as you're changing them.  Now that Heartbleed is big news, every hacker and script kiddie on planet earth probably knows about it, which means there are probably way more people trying to steal passwords through Heartbleed than before.  Which is the greater risk?  Entering in a new password while the site is leaking passwords in a potentially hacker-infested environment, or leaving your potentially stolen password there until the site has updated?  Worse, if people *did not* change their password after the update because they already changed it *before* the update, they've got a false sense of security about the probability that their password was stolen.  Maybe some of these companies updated for Heartbleed before saying that.  Maybe the bug was completely non-applicable for them.  Regardless, I think end users deserve to know that updating their password before the Heartbleed update carries a risk.  Users need to be told whether an update has been applied.  As James Lynn wrote for Forbes, "Forcing customers to guess or test themselves is just negligent." [8]

"Fluffspeek" is a play on "leetspeek", a term used to describe bits of text full of numbers and symbols that is attributed to silly "hackers".  Some PR fluff may be a deliberate attempt to exploit others, similar in some ways to the manipulation techniques popular among black hat hackers, called social engineering.  Even when it's not deliberate, this kind of garbage is probably about as ugly to most people with half a brain as "I AM AN 31337 HACKER!!!1", so is still fitting.

 

References:

 1. http://lesswrong.com/lw/ka/hold_off_on_proposing_solutions/

 2. http://money.cnn.com/2014/04/09/technology/security/Heartbleed-bug/

 3. http://blog.cloudflare.com/the-results-of-the-cloudflare-challenge

 4. http://www.cra-arc.gc.ca/gncy/sttmnt2-eng.html

 5. https://www.eff.org/deeplinks/2014/04/wild-heart-were-intelligence-agencies-using-Heartbleed-november-2013

 6. http://finance.yahoo.com/blogs/breakout/Heartbleed-security-flaw--how-to-protect-yourself-172552932.html

 7. https://lastpass.com/Heartbleed/?h=facebook.com

 8. Forbes.com "Avoiding Heartbleed Hype, What To Do To Stay Safe" (I can't link to this for some reason but you can do a search.)

 9. http://www.net-security.org/secworld.php?id=16671

 10. http://www.cnbc.com/id/101569136

 11. http://Heartbleed.com/

 12. https://community.norton.com/t5/Norton-Protection-Blog/Heartbleed-Bug-What-You-Need-to-Know-and-Security-Tips/ba-p/1120128

 13. http://online.wsj.com/news/articles/SB10001424052702303873604579493963847851346

 14. Forbes.com "A Billion Smartphone Users May Be Affected by the Heartbleed Security Flaw" (I can't link to this for some reason but you can do a search.)

 15. http://Heartbleed.com/

 16. http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/

 17. http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/

Meetup : Moscow meet up

1 Yuu 15 April 2014 05:12AM

Discussion article for the meetup : Moscow meet up

WHEN: 20 April 2014 04:00:00AM (+0400)

WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16

We will have:

  • Unicorns: false but useful beliefs, report.

  • Boundaries of rationality, discussion.

  • Cognitive behavioural therapy as a framework for daily rationality, report.

  • How to avoid multitasking and main issues of semantics, report.

We gather in the Yandex office; you need the first revolving door under the archway. Here is an additional guide on how to get there: link. You can fill out this one-minute form (in Russian) to share your contact information.

We start at 16:00 and sometimes finish at night. Please note that we first gather near the entrance and then go inside together.

Discussion article for the meetup : Moscow meet up

Unfriendly Natural Intelligence

7 Gunnar_Zarncke 15 April 2014 05:05AM

Related to: UFAI, Paperclip maximizer, Reason as memetic immune disorder

A discussion with Stefan (cheers, didn't get your email, please message me) during the European Community Weekend Berlin fleshed out an idea I had toyed around with for some time:

If a UFAI can wreak havoc by driving simple goals to extremes, then driving human desires to extremes should likewise cause problems. And we should already see this.

Actually we do. 

We know that just following our instincts on eating (sugar, fat) is unhealthy. We know that stimulating our pleasure centers more or less directly (drugs) is dangerous. We know that playing certain games can lead to comparable addiction. And the recognition of this has led to a large number of more or less fine-tuned anti-memes, e.g. dieting, early drug prevention, helplines. These memes steering us away from such behaviors were selected for because they provided aggregate benefits to the (members of) social (sub)systems they are present in.

Many of these memes have become so self-evident that we don't recognize them as such. Some are essential parts of highly complex social systems. What is the general pattern? Did we catch all the critical cases? Are the existing memes well-suited for the task? How are they related? Many are probably deeply woven into our culture and traditions.

Did we miss any anti-memes? 

This last question really is at the core of this post. I think we lack some necessary memes keeping new exploitations of our desires in check. Some new exploitations result from our society a) having developed the capacity to exploit our desires and b) having the scientific knowledge to know how to do so.

continue reading »

Earnings of economics majors: general considerations

4 JonahSinick 14 April 2014 10:23PM

Some liberal arts majors make more money than others, but by far the ones who make the most are economics majors. The 2013-2014 Payscale Salary Report reports the following figures. The second column is median starting salary and the third is median mid-career salary, in thousands of dollars.


| Major | Median starting salary ($K) | Median mid-career salary ($K) |
|---|---|---|
| Economics | 50 | 96 |
| Political Science | 41 | 77 |
| Philosophy | 39 | 78 |
| History | 39 | 71 |
| English Literature | 40 | 71 |
| Psychology | 36 | 60 |
| Sociology | 37 | 55 |

This trend is robust, and I'll give more supporting data as an appendix at the end of the post.

The fact that economics majors make so much more is often taken to mean that majoring in economics raises future earnings. Is this true? In this post I'll discuss some general considerations relevant to determining this, and discuss the sort of data that one might try to use to resolve the question. In future posts, I'll offer some such data, with analysis and discussion.

I'd welcome any other ideas for testing the hypotheses, as well as pushback on the conceptual framework, and/or alternative hypotheses.

continue reading »

Meetup : Melbourne Social Meetup (Note: change of location!)

0 Maelin 14 April 2014 01:52PM

Discussion article for the meetup : Melbourne Social Meetup (Note: change of location!)

WHEN: 18 April 2014 06:30:00PM (+1000)

WHERE: 2 Oranna Court, Glen Waverley, Victoria, Australia

PLEASE NOTE: CHANGE OF LOCATION

April's regular Social meetup is on this month, but our usual venue is no longer available, so we're going to experiment with the location. This month we are in Glen Waverley (see below for transport arrangements).

Social meetups are casual affairs where we chat and play games. We usually arrange some form of take-away for dinner for any who want to be part of it, but feel free to bring your own dinner if you'd prefer. The official start time is 18:30 but you won't upset anything if you turn up later on.

For this month, we are in Glen Waverley. If you're coming by public transport, catch a train on the Glen Waverley line and get off at Glen Waverley, then you can call either me (Richard, 0421-231-789) or Scott (0432-862-932) and we'll do a quick run to the station. Otherwise, there's plenty of parking nearby.

Hope to see you there! :)

Discussion article for the meetup : Melbourne Social Meetup (Note: change of location!)

[Requesting advice] Problems with optimizing my life as a high school student

11 Optimal 14 April 2014 01:07PM

I am writing this because I believe I need advice and direction from people who can understand my problems. This is my first post on Less Wrong, and I am new to practicing serious writing/rationality in general, so please alert me if I have made any glaring mistakes in this text or in my decisions/beliefs. I will begin by describing myself and my situation.

(This article turned out a lot longer than I thought it would, and it might be hard to follow as a result. I urge you to skim through it once, reading the first sentence of each paragraph, before reading it in full.)

I am a 16 year old male currently enrolled in an online high school that will remain nameless. My story will be very familiar for most of you: I want to help ensure that the invention of self-improving AI will benefit humanity (and myself, particularly), and I am devoting my entire life to this single goal. This is only possible because I am in a highly favorable position, having a safe home, loving family, secure financial support, internet access, and a tremendous amount of unrestrained free time.

My free time is the result of my relatively undemanding online school plus my unrestrictive parents. To give you an idea of how significant it is: for several days, I could do nothing but play video games and look at porn. And I mean nothing: I could rush right through my online lessons, avoid all exercise and sunlight, stay up until 4AM, and have (unhealthy) food brought to my room. Nobody would stop me from maintaining such self-destructive habits. I could go on doing those things for years. And that is exactly what I did, starting when I was age 11 and ending when I was age 15.

For most of the past year, I have been dedicated to overhauling my life, eliminating 'negative' (self-destructive, shortsighted, unproductive) habits and introducing more positive (healthy, considerate of the future, productive) ones. I did this, of course, because I learned about the profound implications of the technological singularity. I decided that I needed to be a healthy, knowledgeable, and productive person to maximize my chances of being able to experience the joys of future technologies. I'm sure that many of you can identify with that sentiment, although I doubt that anyone could have been lazier than me.

The past year was easily the most important year of my life, and will likely remain so for quite a while. As you may have guessed, it was also the most difficult time of my life. The first 5-6 months were particularly painful, mostly because of my severe addiction to internet porn. During that time, I was putting most of my effort into eliminating negative habits. I still added many positive habits, the most prominent being programming, reading (fiction only) offline, exercise, healthy eating, and meditation. Many of my habits fluctuated; I experimented a lot. There was some constant change, however, in the most important habits: average time spent on the computer for entertainment gradually decreased, while time spent on programming and reading increased in turn. 

I would say that I succeeded at overhauling my life. Unfortunately, because my sole goal was 'reduce negative time, increase positive time', my 'positive' time is not nearly as positive as it could be. Sometimes I find myself staring at a programming e-book for an hour or more and learning nothing. Despite its relative ease, schoolwork often causes me to become stressed quickly. I had been practicing mindfulness meditation for 20-40 minutes a day, but I recently reduced and then removed that habit because it almost never helped me. Reading, exercise, and healthy eating were the only habits that always stuck with me no matter how badly I felt.

The most essential habit I built was the habit of tracking my habits. That is, I created a spreadsheet in OpenOffice to keep track of the time I spent on various activities every day. This was a very good thing to do: it motivated me when I was struggling to control my habits, and it now allows me to view my overall progress. These statistics are very helpful in getting a picture of my life and of my habits, so I will provide an abridged/condensed version of the entire spreadsheet collection. For each month, the average time I spent daily on each activity is shown. Numbers in bold indicate highly inaccurate measurements, taken from months wherein I mostly abstained from activity tracking.

(imgur version if it does not display properly)

'Reading offline' means either nonfiction or fiction (it was mostly fiction). 'Schoolwork' often meant programming assignments. Video games count as leisure computer use. For most of 2013 I only did game programming; this was before I realized that 'AI programming' was more important than 'any programming'. Until recently, I was adding leisure computer use time much too gratuitously: I erroneously categorized it as 'any time spent on the computer not covered by other activities'. The statistics for most of 2013 are slightly flawed as a result. All of the recorded daily activity times probably had a margin of error of around 15%. Also, the monthly averages are not good indicators of how I scheduled my activities; in December, for example, I did not play video games for 15-20 minutes every day (having more spaced-out longer sessions instead), but my art practice was always 30-80 minutes a day.

Some patterns/trends here are obvious (programming), while others are more random (schoolwork). Programming and reading are obviously the dominant activities in my life. Until late 2013, I only read fiction. For better or worse, I recently realized that reading fiction and practicing art are, from a productivity/time-management perspective, equivalent to playing video games and watching television. I had abstained from activity tracking for most of Jan-Mar as an experiment, but I estimate that I was reading fiction for at least 3 hours every day during most of that period (Kkat is to blame.) This is only slightly odd, because around New Year's I was starting to focus on maximizing daily programming time, bringing the average up to over 3 hours. If you were wondering just how demanding my online school can be, the 44-min average recorded (over about a week) in January should give you an idea.

As I said before: I have been increasing the time I spend on positive activities, but the activities are not nearly as positive as they could be. I've tried practicing mindfulness many times, in various forms, to increase my productivity and happiness, but I could never consistently get it to work well. I know that quality > quantity here, and that I should study/work mindfully and efficiently instead of simply pouring time into the activity.

I used to put just enough time into productive activities to achieve the set 'daily minimum time' (different for all activities, it was always 40-80 minutes for programming and 15-30 for art) and be satisfied. I don't see it that way now; no matter how much time I put into a productive activity, I cannot partake in an 'unproductive' activity without thinking "this time could be used in a more future-benefiting way". This is a big problem, because I am making my leisure time less leisurely and, by pouring time into the productive activities, making them less productive and more stressful. I am also aware of the fact that my present happiness only matters because it increases my productivity/general capability and therefore my chances of experiencing some kind of 'happy singularity'. This makes fun time even more difficult, because I am thinking that I could instead perform my productive activities in a more fun/mindful way, reducing the need for unproductive fun activities.

I recently found an article here that describes, almost exactly, this problem of mine. Reading that nearly blew my mind because I had never explicitly realized the problem before. I quote:

So I'm really not recommending that you try this mindhack. But if you already have spikes of guilt after bouts of escapism, or if you house an arrogant disdain for wasting your time on TV shows, here are a few mantras you can latch on to to help yourself develop a solid hatred of fun (I warn you that these are calibrated for a 14 year old mind and may be somewhat stale):

  • When skiing, partying, or generally having a good time, try remembering that this is exactly the type of thing people should have an opportunity to do after we stop everyone from dying.
  • When doing something transient like watching TV or playing video games, reflect upon how it's not building any skills that are going to make the world a better place, nor really having a lasting impact on the world.
  • Notice that if the world is to be saved then it really does need to be you who saves it, because everybody else is busy skiing, partying, reading fantasy, or dying in third world countries.

(Warning: the following sentences contain opinions.) The worst part is that this seems to be the right thing to do. There is a decent possibility that infinite happiness (or at least, happiness much greater than what could be experienced in a traditional human lifetime) can be experienced via friendly ASI; we should work towards achieving that instead of prioritizing any temporary happiness. But present happiness increases present productivity, so a sort of happiness/productivity balance needs to be struck. Kaj_Sotala, in the comments of the previously linked post, provides a strong argument against hating fun:

The main mechanism here seems to be that guilt not only blocks the relaxation, it also creates negative associations around the productive things - the productivity becomes that nasty uncomfortable reason why you don't get to do fun things, and you flinch away from even thinking about the productive tasks, since thinking about them makes you feel more guilty about not already doing them. Which in turn blocks you from developing a natural motivation to do them.

This feeling is so strong for me because nearly all of my productivity is based on guilt. Especially in the first six months of my productive transformation, I was training myself to feel very guilty when performing negative activities or when failing to perform positive ones. A lot of the time, I only did productive things because I knew I would feel bad if I did otherwise. There was no other way, really; at the time my negative habits were so pronounced that extreme action was required. But my most negative habits are defeated now, and because of my guilt-inducing strategy I cannot find a balance between happiness and productivity. Based on the above quote, the important thing is to make productive activities have a positive mental association. They have negative associations mostly because they are tiring, frustrating, or fruitless, or because they stop you from performing more fun activities.

One apparent solution is to perform all productive tasks mindfully/leisurely and give up unproductive fun activities completely (the most logical choice if human akrasia is not considered.) The other solution is to perform productive tasks mindfully, and have structured, guilt-free periods of leisure time. Based on others' comments here, the second solution is more practical, but I still have a hard time accepting unproductivity and enjoying productivity. My habit of activity tracking makes this worse; I can literally see the 'lost' minutes when I choose to partake in a leisure-time activity.

In the past few weeks, I have been partaking in less leisure time than I ever have before. I have only played video games because other people drag me into them and I am too uncertain to resist, and I always use my designated 'leisure computer use' time in the most 'fun-efficient' way possible (this has been the case for several months). That means avoiding mind-numbing activities like browsing reddit or 4chan, instead choosing to experience more soulful things that I have always held dear, like music, art, and certain other fantasies. But even then, I feel that I could be doing something more beneficial.

Here is where I need advice and other opinions: how much structured leisure time should I allocate, to achieve the optimal happiness/productivity balance? Would it be practical to attempt to give up structured 'fun time' completely, optimizing productive activities to be more mindful and leisurely? (See activity tracker: I would be able to give up all leisure time, but I would find it much harder to optimize productive time.) How much structured 'fun time' do you think established or upcoming AI researchers regularly allocate, and how does this affect their happiness/productivity balance?

I have established two of my problems: I cannot enjoy fun things and I am not a very good autodidact. I'm not only bad at studying individual topics: I often do not study consistently, glossing over sections or bouncing between books/exercises. I've proven that I definitely learn best by doing, but it's most often hard to find things to do, especially when dealing with more theoretical topics. I'm also never entirely sure of what topics I should be studying. For example: should I read books and take courses about machine learning, or wait until I finish statistics? Should I become competent at competition programming/algorithms before studying cognitive science, or will competition programming skills not even help me at all? Should I not even be asking the above questions, instead just doing everything at once? It's those kinds of questions without answers that make me think that I really don't know what I'm doing, and that college can't come soon enough.

My second request for advice is this: what would you recommend for me to do, to improve my studying habits in the face of uncertainty? How can I choose and maintain a good 'course sequence'? How should I make designated studying time less stressful and more efficient?  Also, based on the averages I provided, should I adjust how much time I am spending on different activities?

And so my main points are concluded. Like I said, I'm not very experienced in rationality, writing, or serious conversation with intelligent people, so I apologize if anything I just said seems erroneous. I do hope that my (perceived) issues can be at least partially resolved as a result of writing this.

I'm not done here, though: I have a few other concerns, these ones about high school and college. My current online school is a favorable learning environment: it is flexible, not overwhelmingly difficult or trivially easy, and easy to exploit when it is sensible to do so. My online schooling provides me with an exceptional degree of freedom; I would never go back to a physical school and give it all up to a broken system. I recently found out about Stanford University Online High School, however, and this challenged my opinion of my current school. My third concern is whether or not I should (attempt to) switch schools. I have good reasons supporting either choice, and I am unsure. I urge you to visit that link to learn about the school if you have not done so already.

Allow me to point out the most important difference: compared to Stanford OHS lessons, my current lessons seem dull and tedious. Stanford OHS lessons are more based on intellectually stimulating and personally engaging activities, in contrast to the more straightforward memorization tasks of (most of) my current school's lessons. At least, this seems to be the case, based on my (probably biased) observations and predictions. I'm not condemning my current school; they are actually trying to get more intellectually stimulating and personally engaging features in, but I can't seem to benefit from any of it. I am about to load up on AP courses, however, which may end up providing more beneficial and engaging work (or just more difficult memorization tasks). Also, enrolling in Stanford OHS would greatly reduce my free time and freedoms when dealing with school, and I might dislike the required video-conferences.

There are other, more defined problems with the Stanford OHS approach. For one, I would need to rush to apply: I would have to take the SAT in less than a month, much earlier than I had originally planned (we've contacted Stanford OHS already, they said that they will allow me to apply after May 1 if I am taking the SAT on May 3.) As a result, I may earn an unsatisfactory grade on the SAT (consider the average scores here). Apparently, they also require recommendations in applications (not very easy to acquire when you're in online school.) Despite those things, I believe I would have a good chance of being accepted, taking into consideration all of my other favorable traits aside from SAT scores or recommendations.

I might be more favored by top colleges if I graduated from the Stanford OHS as opposed to my current school. On the other hand, my capability to self-educate outside of the system will be a hook for colleges, especially if I can complete MOOCs and read college-level textbooks, so perhaps I should maximize free time by staying with my current school. Back on the first hand, I have proven myself to be an inefficient self-educator, so a more structured approach may work better. Either way, after graduating, I am going to apply to the some of the most prominent computer-science programs (no, I'm not going only by that one list). Carnegie Mellon would be my first choice, mostly because of its proximity to home.

And so my last set of questions is formed: Should I attempt to enroll in Stanford OHS? If not, should I indeed be focusing mostly on studying AI-related topics and working on software projects? Either way: assuming I have a >3.7 GPA, >700 SAT scores, and relevant AP courses/tests completed, would I have a decent chance of being accepted to one of the high-ranking computer-science colleges?

Well, that will be all for today. If this were any other internet community, I would be very surprised if anyone read the whole thing. Even if I don't receive any helpful answers, I at least gained some writing skill points.

Meetup : Canberra: Life Hacks Part 2

0 DanielFilan 14 April 2014 01:11AM

Discussion article for the meetup : Canberra: Life Hacks Part 2

WHEN: 25 April 2014 06:00:00PM (+1000)

WHERE: ANU Arts Centre

In Life Hacks Part 1, we discussed life hacks and everyone picked one to try out. In this meetup, we will discuss how well they worked, what it was like to try them, and any unexpected upsides or downsides. Even if you couldn't make it to the previous meetup, come anyway - you might well discover a life hack that you want to try! As always, vegan snacks will be provided.

In unrelated news, the Less Wrong Australia Mega-Meetup is coming up! It will be an awesome event filled with awesome people doing awesome things! Learn more and register here: http://lesswrong.com/r/discussion/lw/k23/meetup_lw_australia_megameetup/

General meetup info:

If you use Facebook, please join our group: https://www.facebook.com/groups/lwcanberra/

Structured meetups are held on the second Saturday and fourth Friday of each month from 6 pm until late at the XSite (home of the XSA), located upstairs in the ANU Arts Centre - http://www.anuxsa.org/wp-content/uploads/2010/11/XSite-Map-First-Amendment.jpg

There will be LWers at the Computer Science Students Association's weekly board games night, held on Wednesdays from 7 pm in the CSIT building, room N101.

Discussion article for the meetup : Canberra: Life Hacks Part 2

European Community Weekend in Berlin Impressions Thread

9 Gunnar_Zarncke 13 April 2014 08:33PM

The European Community Weekend in Berlin is over and was a full success.

This is not a report of the event, but a place where you can, for example, comment on the event, link to photos, or share whatever else you want.

I'm not the organizer of the meetup, but I was there, and for me it was a great event: I met many energetic, compassionate, and in general awesome people, and enjoyed great presentations, workshops, and a very positive atmosphere.

Cheers to all participants!

Gunnar

PS. I understand that the organizers will upload the presentations, and perhaps some report of the results, some time later. Those may or may not be linked from this post.


Beware technological wonderland, or, why text will dominate the future of communication and the Internet

11 VipulNaik 13 April 2014 05:34PM

Disclaimer: The views expressed here are speculative. I don't have a claim to expertise in this area. I welcome pushback and anticipate there's a reasonable chance I'll change my mind in light of new considerations.

One of the interesting ways that many 20th century forecasts made of the future went wrong is that they posited huge physical changes in the way life was organized. For instance, they posited huge changes in these dimensions:

  • The home living arrangements of people. Smart homes and robots were routinely foreseen over time horizons where progress towards those ends would later turn out to be negligible.
  • Overoptimistic and overpessimistic scenarios for energy sources merged in strange ways: people believed the world would run out of oil by now, yet at the same time envisioned nuclear-powered flight and home electricity.
  • Overoptimistic visions of travel: People thought humans would be sending out regular manned missions to the solar system planets, and space colonization would be on the agenda by now.
  • The types of products that would be manufactured. New products ranging from synthetic meat to room temperature superconductors were routinely prophesied to happen in the near future. Some of them may still happen, but they'll take a lot longer than people had optimistically expected.

At the same time, they considerably underestimated the informational changes in the world:

  • With the exception of forecasters specifically studying computing trends, most missed the dramatic growth of computing and the advent of the Internet and World Wide Web.
  • Most people didn't appreciate the extent of the information and communication revolution and how it would coexist with a world that looked physically indistinguishable from the world of 30 years ago. Note that I'm looking here at the most advanced First World places, and ignoring the point that many places (particularly in China) have experienced huge physical changes as a result of catch-up growth.

My LessWrong post on megamistakes discusses these themes somewhat in #1 (the technological wonderland and timing point) and #2 (the exceptional case of computing).

What about predictions within the informational realm? I detect a similar bias. It seems that prognosticators and forecasters tend to give undue weight to heavyweight technologies (such as 3D videoconferencing) and ignore the fact that the bulk of the production and innovation has been focused on text and, to a somewhat lesser extent, on images (often used to augment and interweave with the text). In this article, I lay out the pro-text position. I don't have high confidence in the views expressed here, and I look forward to critical pushback that changes my mind.

Text: easier to produce

One great thing about text is its lower production cost. To the extent that production volume is small and dominated by a few big players, high-quality video and audio play an important role. But as the Internet "democratizes" content production, it's a lot easier for a lot of people to contribute text than audio or video content.

Some advantages of text from the creation perspective:

  • It's far easier to edit and refine. This is a particularly big issue because with audio and video, you need to rehearse, do retakes, or do heavy editing in order to make something coherent come out. The barriers to text are lower.
  • It's easier to upload and store. Text takes less space, and uploading it to a network or sending it to a friend takes less bandwidth (see the back-of-envelope comparison after this list).
  • People are (rightly or wrongly) less concerned about putting their best foot forward with text. People often spend a lot of time selecting their very best photos, even for low-stakes situations like social networks. With text, they are relatively less inhibited, because no individual piece of text represents them as persons as much as their physical appearance or mannerisms do. This allows people to create a lot more text. Note that Snapchat may be an exception that proves the rule: people flocked to it because its impermanence made them less inhibited about sharing. But its impermanence also means it does not add to the stock of Internet content. And it's still images, not videos.
  • It's easy to copy and paste.
  • As an ergonomic matter, typing all day long, although fatiguing, consumes less energy than talking all day long.
  • Text can be created in fits and bursts. Audio or video needs to be recorded more or less in one continuous sitting.
  • You can't play background music while having a video conversation or recording audio or video content.
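
To make the storage and bandwidth points in the list above concrete, here is a back-of-envelope comparison in Python. All the per-item sizes and the connection speed are rough illustrative assumptions, not measurements:

    # Rough comparison of text vs. photo vs. video payloads.
    # All sizes below are illustrative assumptions, not measurements.
    KB, MB = 1024, 1024 ** 2

    payloads = {
        "text post (~500 words)": 3 * KB,
        "compressed smartphone photo": 2 * MB,
        "one minute of 720p video": 60 * MB,
    }

    UPLOAD_SPEED = 128 * KB  # assumed slow uplink: ~1 Mbit/s

    for name, size in payloads.items():
        seconds = size / UPLOAD_SPEED
        print(f"{name}: {size / KB:,.0f} KB, ~{seconds:.0f} s to upload")

    # text post (~500 words): 3 KB, ~0 s to upload
    # compressed smartphone photo: 2,048 KB, ~16 s to upload
    # one minute of 720p video: 61,440 KB, ~480 s to upload

Whatever the exact numbers, the text payload is three to four orders of magnitude smaller, which is why text remains easy to create and share even on poor connections.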

Text: easier to consume and share

Text is also easier to consume and share.

  • Standardization of format and display methods makes the consumption experience similar across devices.
  • Low storage and bandwidth costs make it easy to consume over poor Internet connections and on a range of devices.
  • Text can be read at the user's own pace. People who are slow at grasping the content can take time. People who are fast can read very quickly.
  • Text can be copied, pasted, modified, and reshared with relative ease.
  • Text is easier to search (this refers both to searching within a given piece of text and to locating a text based on some part of it or some attributes of it).
  • You can't play background music while consuming audio-based content, but you can do it while consuming text.
  • Text can more easily be translated to other languages.

On the flip side, reading text requires you to have your eyes glued to the screen, which reduces your flexibility of movement. But because you can take breaks at will, it's not a big issue. Audiobooks do offer the advantage that you can move around (e.g., cook in the kitchen) while listening, and some people who work from home are quite fond of audiobooks for that purpose. In general, the benefits of text seem to outweigh the costs.

Text generates more flow-through effects

Holding consumers' willingness to pay constant, text-based content is likely to generate greater flow-through effects because of its ability to foster more discussion and criticism and to be modified and reused for other purposes. This is related to the point that video and audio consumption on the Internet generally tends to substitute for TV and cinema trips, which are largely pure consumption rather than intermediate steps to further production. Text, on the other hand, has a bigger role in work-related activity.

Augmented text

When I say that text plays a major role, I don't mean that long ASCII strings are the be-all-and-end-all of computing and the Internet. Rather, interweaving a richer set of expressive and semantically powerful symbols into text is very important to harnessing text's full power. It really is a lot different to read The New York Times in HTML than it would be to read the plain text of the article on a monochrome screen. The presence of hyperlinks, share buttons, the occasional image, sidebars with more related content, etc. adds a lot of value.

Consider Facebook posts. These are text-based, but they allow text to be augmented in many ways:

  • Inline weblinks are automatically hyperlinked when you submit the post (though at present it's not possible to edit the anchor text to show something different from the weblink).
  • Hashtags can be used, and link to auto-generated Facebook pages listing recent uses of the hashtag.
  • One can tag friends and Facebook groups and pages, subject to some restrictions. For friends tagged, the anchor text can be shortened to any one word in their name.
  • One can attach links, photos, and files of some types. By default, the first weblink that one uses in the post is automatically attached, though this setting can be overridden. The attached link includes a title, summary, and thumbnail.
  • One can set a location for the post.
  • One can set the timing of publication of a post.
  • Smileys are automatically rendered when the post is published.
  • It's possible to edit the post later and make changes (except to attachments?). People can see the entire edit history.
  • One can promote one's own post at a cost.
  • One can delete the post.
  • One can decide who is allowed to view the post (and also restrict who can comment on the post).
  • One can identify who one is with at the time of posting.
  • One can add a rich set of "verbs" to specify what one is doing.

Consider the actions that people reading the posts can perform:

  • Like the post.
  • Comment on the post. Comments automatically include link previews, and they can also be edited later (with edit histories available). Comments can also be used to share photos.
  • Share the post.
  • Select the option to get notifications on updates (such as further comments) on the post.
  • Like comments on the post.
  • Report posts or mark them as spam.
  • View the edit history of the post and comments.
  • For posts with restrictions on who can view them, see who can view the post.
  • View a list of others who re-shared the post.

If you think about it, this system, although it basically relies on text, has augmented text in a lot of ways with the intent of facilitating more meaningful communication. You may find some of the augmentations of little use to you, but each feature probably has at least a few hundred thousand people who greatly benefit from it. (If nobody uses a feature, Facebook axes it).
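
As a concrete illustration, here is a minimal sketch in Python of one of the simplest augmentations listed above: auto-hyperlinking inline weblinks when a post is submitted. This is a toy under my own assumptions, not Facebook's actual implementation:

    import re

    # Naive URL matcher; a real system handles many more edge cases.
    URL_PATTERN = re.compile(r'(https?://[^\s<>"]+)')

    def linkify(post_body: str) -> str:
        """Wrap bare URLs in a post body with HTML anchor tags."""
        return URL_PATTERN.sub(r'<a href="\1">\1</a>', post_body)

    print(linkify("Register here: http://example.com/signup"))
    # Register here: <a href="http://example.com/signup">http://example.com/signup</a>

Even this trivial feature involves judgment calls (what counts as a URL? should trailing punctuation be included?), which hints at why augmentations accumulate gradually rather than arriving all at once.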

I suspect that the world ten years from now will feature text that is richly augmented relative to today's, much as the text of today is richly augmented compared to what it was back in 2006. Unfortunately, I can't predict any very specific innovations (if I could, I'd be busy programming them, not writing a post on LessWrong). And it might very well be the case that the low-hanging fruit with respect to augmenting text has already been picked.

Why didn't all the text augmentation happen at once? None of the augmentations are hard to program in principle. The probable reasons are:

  • Training users: The augmented text features need a loyal userbase that supports and implements them. So each augmentation needs to be introduced gradually in order to give users onboarding time. Even if Facebook in 2006 knew exactly what features they would eventually have in 2014, and even if they could code all the features in 2006, introducing them all at once might scare users because of the dramatic increase in complexity.
  • Deeper insight into what features are actually desirable: One can come up with a huge list of features and augmentations of text that might in principle be desirable, but only a small fraction of them pass a cost-benefit analysis (where the cost is the increased complexity of the user interface). Discovering what features work is often a matter of trial and error.
  • Performance in terms of speed and reliability: Each augmentation adds an extra layer of code, reducing the performance in terms of speed and reliability. As computers and software have gotten faster and more powerful, and the Internet companies' revenue has increased (giving them more leeway to spend more for server space), investments in these have become more worthwhile.
  • Focus on userbase growth: Companies were spending their resources in growing their userbase rather than adding features. Note that this is the main point that is likely to change soon: the userbase is within an order of magnitude of being the whole world population.

Images

Images play an important role along with text. Indeed, websites such as 9GAG rely on images, and others like Buzzfeed heavily mix text and images.

I think images will continue to grow in importance on the Internet. But the role of images, as it is likely to unfold, is probably quite different from what futurists generally envisage. We're not talking of a future dominated by professionally done (or even amateurishly done) 16-megapixel photography. Rather, we're talking of images that are used to convey basic information or make a memetic point. Consider that many of the most widely shared images are the standard template images for memes. The number of such template images is much smaller than the number of memes made from them: meme creators just use a standard image, and their own contribution is the text at the top and bottom of the meme. Thus, even while the Internet uses images, the production at the margin largely involves text. The picture is scaffolding. Webcomics (I'm personally most familiar with SMBC and XKCD, but there are other more popular ones) are at the more professional end, but they too illustrate a similar point: it's often the value of the ideas being creatively expressed, rather than the realism of the imagery, that delivers value.
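
To make the "picture is scaffolding" point concrete, here is a minimal sketch of a meme generator using the Pillow imaging library. The template filename is a placeholder, and a real generator would use a large outlined Impact-style font rather than the default one:

    from PIL import Image, ImageDraw, ImageFont

    def make_meme(template_path: str, top: str, bottom: str, out_path: str) -> None:
        """Overlay top and bottom captions on a standard template image."""
        img = Image.open(template_path)
        draw = ImageDraw.Draw(img)
        font = ImageFont.load_default()  # placeholder; real memes use big outlined fonts

        width, height = img.size
        draw.text((10, 10), top.upper(), fill="white", font=font)
        draw.text((10, height - 25), bottom.upper(), fill="white", font=font)
        img.save(out_path)

    # make_meme("template.jpg", "one does not simply", "make a meme", "meme.png")

The image is fixed; the marginal creative input is the two strings of text, which is exactly the sense in which meme production is largely textual.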

One trend that was big in the early days of the Internet, then died down, and now seems to be reviving is the animated GIF. Animated GIFs allow people to convey simple ideas that cannot be captured in still images, without having to create a video. They also use a lot less bandwidth for consumers and web hosts than videos. Again, we see that the future is about economically using simple representations to convey ideas or memes rather than technologically awesome photography.

Quantitative estimates

Here's what Martin Hilbert wrote in How Much Information is There in the "Information Society" (p. 3):

It is interesting to observe that the kind of content has not changed significantly since the analog age: despite the general perception that the digital age is synonymous with the proliferation of media-rich audio and videos, we find that text and still images capture a larger share of the world’s technological memories than before the digital age.5 In the early 1990s, video represented more than 80 % of the world’s information stock (mainly stored in analog VHS cassettes) and audio almost 15 % (audio cassettes and vinyl records). By 2007, the share of video in the world’s storage devices decreased to 60 % and the share of audio to merely 5 %, while text increased from less than 1 % to a staggering 20 % (boosted by the vast amounts of alphanumerical content on internet servers, hard-disks and databases. The multi-media age actually turns out to be an alphanumeric text age, which is good news if you want to make life easy for search engines.

I had come across this quote as part of a preliminary investigation for MIRI into the world's distribution of computation (though I did not highlight it there, since it was relatively unimportant to that work). As another data point, Facebook claims that it needed 700 TB (as of October 2013) to store all the text-based status updates and comments plus relevant semantic information on users that would be indexed by Facebook Graph Search once it was extended to posts and comments. Contrast this with a few petabytes of storage needed for all their photos (see also here), despite the fact that one photo takes up a lot more space than one text-based update.

Beautiful text

The Internet looks a lot more beautiful today than it did ten years ago. Why? Small, incremental changes in the way that text is displayed have played a role. New fonts, new WordPress themes, a new Wikipedia or Facebook layout, all conspire to provide a combination of greater usability and greater aesthetic appeal. Also, as processors and bandwidth have improved, some layouts that may have been impractical earlier have been made possible. The block tile layout for websites has caught on quite a bit, inspired by an attempt to create a unified smooth browsing experience across a range of different devices (from small iPhone screens to large monitors used by programmers and data analysts).

Notice that it's the versatility of text that has allowed it to be upgraded. Videos created the old way would have to be redone to take advantage of new display technologies. But since text is stored as text, it can easily be rendered in a new font.

The wonders of machine learning

I've noticed personally, and some friends have remarked to me, that Google Search, GMail, and Facebook have gotten a lot better in recent years in many small incremental ways despite no big leaps in the overall layout and functioning of the services. Facebook shows more relevant ads, makes better friend suggestions, and has a much more relevant news feed. Google Search is scarily good at autocompletion. GMail search is improving at autocompletion too, and the interface continues to improve. Many of these improvements are the results of continuous incremental improvement, but there's some reason to believe that the more recent changes are driven in part by application of the wonders of machine learning (see here and here for instance).

Futurists tend to think of the benefits of machine learning in terms of qualitatively new technologies, such as image recognition, video recognition, object recognition, audio transcription, etc. And these are likely to happen, eventually. But my intuition is that futurists underestimate the proportion of the value from machine learning that is intermediated through improvement in the existing interfaces that people already use (and that high-productivity people use more than average), such as their Facebook news feed or GMail or Google Search.

A place for video

Video will continue to be good for many purposes. The watching of movies will continue to migrate from TV and the cinema hall to the Internet, and the quantity watched may also increase because people will spend less in money and time costs. Educational and entertainment videos will continue to be watched in increasing numbers. Note that these effects largely amount to the substitution of one medium for another, plus a raw increase in quantity, rather than paradigm shifts in the nature of people's activities.

Video chatting, through tools such as Skype or Google Talk/Hangouts, will probably continue to grow. These will serve as important complements to text-based communication. People do want to see their friends' faces from time to time, even if they carry out the bulk of their conversation in text. As Internet speeds improve around the world, the trivial inconveniences in the way of video communication will diminish.

But these will not drive the bulk of the value people derive from having computing devices or being connected to the Internet. And they will be an even smaller fraction of the value added for the most productive people, or for the activities with maximum flow-through effects. Simply put, video just doesn't deliver enough information per unit of bandwidth and human inconvenience.

Progress in video may be similar to progress in memes and animated GIFs: there may be more use of animation to quickly create videos expressing simple ideas. Animated video hasn't taken off yet. Xtranormal shut down. The RSA Animate style made waves in some circles, but hasn't caught on widely. It may be that the code for simple video creation hasn't yet been cracked. Or it may be that if people are bothering to watch video, they might as well watch something that delivers video's unique benefits, and animated video offers little advantage over text, memes, animated GIFs, and webcomics. This remains to be seen. I've also heard of Vine (a service owned by Twitter for sharing very short videos), and that might be another direction for video growth, but I don't know enough about Vine to comment.

What about 3D video?

High definition video has made good progress in relative terms, as cameras, Internet bandwidth, and computer video playing abilities have improved. It'll be increasingly common to watch high definition videos on one's computer screen or (for those who can afford it) on a large flatscreen TV.

What about 3D video? If full-blown 3D video could magically appear all of a sudden with a low-cost implementation for both creators and consumers, I believe it would be a smashing success. In practice, however, the path to getting there would be more tortuous. And the relevant question is whether intermediate milestones in that direction would be rewarding enough to producers and consumers to make the investments worth it. I doubt that they would, which is why it seems to me that, despite the fact that a lot of 3D video stuff is technically feasible today, it will still probably take several decades (I'm guessing at least 20 years, probably more than 30 years) to become one of the standard methods of producing and consuming content. For it to even begin, it's necessary that improvements in hardware continue apace to the point that initial big investments in 3D video start becoming worthwhile. And then, once started, we need an ever-growing market to incentivize successive investments in improving the price-performance tradeoff (see #4 in my earlier article on supply, demand, and technological progress). Note also that there may be a gap of a few years, perhaps even a decade or more, between 3D video becoming mainstream for big budget productions (such as movies) and 3D video being common for Skype or Google Hangouts or their equivalent in the later era.

Fractional value estimates

I recently asked my Facebook friends for their thoughts on the fraction of the value they derived from the Internet that was attributable to the ability to play and download videos. I received some interesting comments there that helped confirm initial aspects of my hypothesis. I would welcome thoughts from LessWrongers on the question.

Thanks to some of my Facebook friends who commented on the thread and offered their thoughts on parts of this draft via private messaging.

Meetup : LW Australia Mega-Meetup

4 Ruby 13 April 2014 11:23AM

Discussion article for the meetup : LW Australia Mega-Meetup

WHEN: 09 May 2014 05:00:00PM (+1000)

WHERE: Kanangra Drive, Gwandalan NSW 2259

The organisers of LW Melbourne, LW Sydney, and LW Canberra are elated to announce the first-ever Less Wrong Australia Mega-Meetup!

What: Rationality-themed Weekend Retreat
Where: Point Wolstoncroft Sports and Recreation Centre, NSW
When: May 9-11, Friday evening - Sunday afternoon
Cost: $250*

*$280 after April 25

Enjoy and grow in the company of others who are committed to improving their rationality and to personal growth. The schedule is laden with sessions on rationality skills, revision and teaching of CFAR modules, prediction markets, lightning talks, and all round enlightenment.

The retreat will take place at an idyllic location on the eastern foreshore of Lake Macquarie. Expect glorious outdoors, BBQ, beer, boardgames, bushwalking, and activities selected by popular choice from rock climbing, archery, canoeing, kayaking, and sailing.

Registration is open now: http://goo.gl/425hyo

Discussion article for the meetup : LW Australia Mega-Meetup

Meetup : Buffalo LW - Thursday Meetup

0 StonesOnCanvas 13 April 2014 12:21AM

Discussion article for the meetup : Buffalo LW - Thursday Meetup

WHEN: 17 April 2014 07:00:00PM (-0400)

WHERE: Panera Bread 765 Elmwood Avenue, Buffalo, NY

Buffalo LW meetups occur twice a month (even when I fail to post on LW). Visit the meetup.com page for details: http://www.meetup.com/Less-Wrong-Buffalo/

Discussion article for the meetup : Buffalo LW - Thursday Meetup

Evaluating GiveWell as a startup idea based on Paul Graham's philosophy

13 VipulNaik 12 April 2014 02:04PM

Effective altruism is a growing movement, and a number of organizations (mostly foundations and nonprofits) have been started in the domain. One of the very first of these organizations, and arguably the most successful and influential, has been charity evaluator GiveWell. In this blog post, I examine the early history of GiveWell and see what factors in this early history helped foster its success.

My main information source is GiveWell's original business plan (PDF, 86 pages). I'll simply refer to this as the "GiveWell business plan" later in the post and will not link to the source each time. If you're interested in what the GiveWell website looked like at the time, you can browse the website as of early May 2007 here.

To provide more context to GiveWell's business plan, I will look at it in light of Paul Graham's pathbreaking article How to Get Startup Ideas. The advice there is targeted at early-stage startups. GiveWell doesn't quite fit the "for-profit startup" mold, but GiveWell in its early stages was a nonprofit startup of sorts. Thus, it is instructive to see just how closely GiveWell's choices were in line with Paul Graham's advice.

There's one obvious way that this analysis is flawed and inconclusive: I do not systematically compare GiveWell with other organizations. There is no "control group" and no possibility of isolating individual aspects that predicted success. I intend to write additional posts later on the origins of other effective altruist organizations, after which a more fruitful comparison can be attempted. I think it's still useful to start with one organization and understand it thoroughly. But keep this limitation in mind before drawing any firm conclusions, or believing that I have drawn firm conclusions.

The idea: working on a real problem that one faces personally and is acutely familiar with, that is of deep interest to a (small) set of people right now, and that could eventually be of interest to many people

Graham writes (emphasis mine):

The very best startup ideas tend to have three things in common: they're something the founders themselves want, that they themselves can build, and that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way.

Why is it so important to work on a problem you have? Among other things, it ensures the problem really exists. It sounds obvious to say you should only work on problems that exist. And yet by far the most common mistake startups make is to solve problems no one has.

[...]

When a startup launches, there have to be at least some users who really need what they're making—not just people who could see themselves using it one day, but who want it urgently. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. Which means you have to compromise on one dimension: you can either build something a large number of people want a small amount, or something a small number of people want a large amount. Choose the latter. Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type.

Imagine a graph whose x axis represents all the people who might want what you're making and whose y axis represents how much they want it. If you invert the scale on the y axis, you can envision companies as holes. Google is an immense crater: hundreds of millions of people use it, and they need it a lot. A startup just starting out can't expect to excavate that much volume. So you have two choices about the shape of hole you start with. You can either dig a hole that's broad but shallow, or one that's narrow and deep, like a well.

Made-up startup ideas are usually of the first type. Lots of people are mildly interested in a social network for pet owners.

Nearly all good startup ideas are of the second type. Microsoft was a well when they made Altair Basic. There were only a couple thousand Altair owners, but without this software they were programming in machine language. Thirty years later Facebook had the same shape. Their first site was exclusively for Harvard students, of which there are only a few thousand, but those few thousand users wanted it a lot.

When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they'll use it even when it's a crappy version one made by a two-person startup they've never heard of? If you can't answer that, the idea is probably bad. [3]

You don't need the narrowness of the well per se. It's depth you need; you get narrowness as a byproduct of optimizing for depth (and speed). But you almost always do get it. In practice the link between depth and narrowness is so strong that it's a good sign when you know that an idea will appeal strongly to a specific group or type of user.

But while demand shaped like a well is almost a necessary condition for a good startup idea, it's not a sufficient one. If Mark Zuckerberg had built something that could only ever have appealed to Harvard students, it would not have been a good startup idea. Facebook was a good idea because it started with a small market there was a fast path out of. Colleges are similar enough that if you build a facebook that works at Harvard, it will work at any college. So you spread rapidly through all the colleges. Once you have all the college students, you get everyone else simply by letting them in.

GiveWell in its early history seems like a perfect example of this:

  • Real problem experienced personally: The problem of figuring out how and where to donate money was a personal problem that the founders experienced firsthand as customers, so they knew there was a demand for something like GiveWell.
  • Of deep interest to some people: The people who started GiveWell had a few friends who were in a similar situation: they wanted to know where best to donate money, but did not have enough resources to do a full-fledged investigation. The number of such people may have been small, but since they were intending to donate thousands of dollars each, there were enough of them with a deep interest in GiveWell's offerings.
  • Could eventually be of interest to many people: Norms around evidence and effectiveness could change gradually as more people started identifying as effective altruists. So, there was a plausible story for how GiveWell might eventually influence a large number of donors across the range from small donors to billionaires.

Quoting from the GiveWell business plan (pp. 3-7, footnotes removed; bold face in original):

GiveWell started with a simple question: where should I donate?

We wanted to give. We could afford to give. And we had no prior commitments to any particular charity; we were just looking for the channel through which our donations could help people (reduce suffering; increase opportunity) as much as possible.

The first step was to survey our options. We found that we had more than we could reasonably explore comprehensively. There are 2,625 public charities in the U.S. with annual budgets over $100 million, 88,812 with annual budgets over $1 million. Restricting ourselves to the areas of health, education (excluding universities), and human services, there are 480 with annual budgets over $100 million, 50,505 with annual budgets over $1 million.

We couldn’t explore them all, but we wanted to find as many as possible that fit our broad goal of helping people, and ask two simple questions: what they do with donors’ money, and what evidence exists that their activities help people?

Existing online donor resources, such as Charity Navigator, give only basic financial data and short, broad mission statements (provided by the charities and unedited). To the extent they provide metrics, they are generally based on extremely simplified, problematic assumptions, most notably the assumption that the less a charity spends on administrative expenses, the better. These resources could not begin to help us with our questions, and they weren’t even very useful in narrowing the field (for example, even if we assumed Charity Navigator’s metrics to be viable, there are 1,277 total charities with the highest possible rating, 562 in the areas of health, education and human services).

We scoured the Internet, but couldn’t find the answers to our questions either through charities’ own websites or through the foundations that fund them. It became clear to us that answering these questions was going to be a lot of work. We formed GiveWell as a formal commitment to doing this work, and to putting everything we found on a public website so other donors wouldn’t have to repeat what we did. Each of the eight of us chose a problem of interest (malaria, microfinance, diarrheal disease, etc.) – this was necessary in order to narrow our scope – and started to evaluate charities that addressed the problem.

[...]

We immediately found that there are enormous opportunities to help people, but no consensus whatsoever on how to do it best. [...]

Realizing that we were trying to make complex decisions, we called charities and questioned them thoroughly. We wanted to see what our money was literally being spent on, and for charities with multiple programs and regions of focus we wanted to know how much of their budget was devoted to each. We wanted to see statistics – or failing that, stories – about people who’d benefited from these programs, so we could begin to figure out what charities were pursuing the best strategies. But when we pushed for these things, charities could not provide them.

They responded with surprise (telling us they rarely get questions as detailed as ours, even from multi-million dollar donors) and even suspicion (one executive from a large organization accused Holden of running a scam, though he wouldn’t explain what sort of scam can be run using information about a charity’s budget and activities). See Appendix A for details of these exchanges. What we saw led us to conclude that charities were neither accustomed to nor capable of answering our basic questions: what do you do, and what is the evidence that it works?

This is why we are starting the Clear Fund, the world’s first completely transparent charitable grantmaker. It’s not because we were looking for a venture to start; everyone involved with this project likes his/her current job. Rather, the Clear Fund comes simply from a need for a resource that doesn’t exist: an information source to help donors direct their money to where it will accomplish the most good.

We feel that the questions necessary to decide between charities aren’t being answered or, largely, asked. Foundations often focus on new projects and innovations, as opposed to scaling up proven ways of helping people; and even when they do evaluate the latter, they do not make what they find available to foster dialogue or help other donors (see Appendix D for more on this). Meanwhile, charities compete for individual contributions in many ways, from marketing campaigns to personal connections, but not through comparison of their answers to our two basic questions. Public scrutiny, transparency, and competition of charities’ actual abilities to improve the world is thus practically nonexistent. That makes us worry about the quality of their operations – as we would for any set of businesses that doesn’t compete on quality – and without good operations, a charity is just throwing money at a problem.

[...]

With money and persistence, we believe we can get the answers to our questions – or at least establish the extent to which different charities are capable of answering them. If we succeed, the tremendous amount of money available for solving the world’s problems will become better spent, and the world will reap enormous benefits. We believe our project will accomplish the following:
1. Help individual donors find the best charities to give to. [...]

2. Foster competition to find the best ways of improving the world. [...]

3. Foster global dialogue between everyone interested – both amateur and professional – in the best tactics for improving the world.
[...]

4. Increase engagement and participation in charitable causes. [...]

All of the benefits above fall under the same general principle. The Clear Fund will put a new focus on the strategies – as opposed to the funds – being used to attack the world’s problems.

How do you know if the idea is scalable? You just gotta be the right person

We already quoted above GiveWell's reasons for believing that their idea could eventually influence a large volume of donations. But how could we know at the time whether their beliefs were reasonable? Graham writes (emphasis mine):

How do you tell whether there's a path out of an idea? How do you tell whether something is the germ of a giant company, or just a niche product? Often you can't. The founders of Airbnb didn't realize at first how big a market they were tapping. Initially they had a much narrower idea. They were going to let hosts rent out space on their floors during conventions. They didn't foresee the expansion of this idea; it forced itself upon them gradually. All they knew at first is that they were onto something. That's probably as much as Bill Gates or Mark Zuckerberg knew at first.

Occasionally it's obvious from the beginning when there's a path out of the initial niche. And sometimes I can see a path that's not immediately obvious; that's one of our specialties at YC. But there are limits to how well this can be done, no matter how much experience you have. The most important thing to understand about paths out of the initial idea is the meta-fact that these are hard to see.

So if you can't predict whether there's a path out of an idea, how do you choose between ideas? The truth is disappointing but interesting: if you're the right sort of person, you have the right sort of hunches. If you're at the leading edge of a field that's changing fast, when you have a hunch that something is worth doing, you're more likely to be right.

How well does GiveWell fare in terms of the potential of the people involved? Were the people who founded GiveWell (specifically Holden Karnofsky and Elie Hassenfeld) the "right sort of person" to found GiveWell? It's hard to give an honest answer that's not clouded by information available in hindsight. But let's try. On the one hand, neither of the co-founders had direct experience working with nonprofits. However, they had both worked in finance and the analytical skills they employed in the financial industry may have been helpful when they switched to analyzing evidence and organizations in the nonprofit sector (see the "Our qualifications" section of the GiveWell business plan). Arguably, this was more relevant to what they wanted to do with GiveWell than direct experience with the nonprofit world. Overall, it's hard to say (without the benefits of hindsight or inside information about the founders) that the founders were uniquely positioned, but the outside view indicators seem generally favorable.

Post facto, there seems to be some evidence that GiveWell's founders exhibited good aesthetic discernment. But this is based on GiveWell's success, so invoking that as a reason is a circular argument.

Schlep blindness?

In a different essay titled Schlep Blindness, Graham writes:

There are great startup ideas lying around unexploited right under our noses. One reason we don't see them is a phenomenon I call schlep blindness. Schlep was originally a Yiddish word but has passed into general use in the US. It means a tedious, unpleasant task.

[...]

One of the many things we do at Y Combinator is teach hackers about the inevitability of schleps. No, you can't start a startup by just writing code. I remember going through this realization myself. There was a point in 1995 when I was still trying to convince myself I could start a company by just writing code. But I soon learned from experience that schleps are not merely inevitable, but pretty much what business consists of. A company is defined by the schleps it will undertake. And schleps should be dealt with the same way you'd deal with a cold swimming pool: just jump in. Which is not to say you should seek out unpleasant work per se, but that you should never shrink from it if it's on the path to something great.

[...]

How do you overcome schlep blindness? Frankly, the most valuable antidote to schlep blindness is probably ignorance. Most successful founders would probably say that if they'd known when they were starting their company about the obstacles they'd have to overcome, they might never have started it. Maybe that's one reason the most successful startups of all so often have young founders.

In practice the founders grow with the problems. But no one seems able to foresee that, not even older, more experienced founders. So the reason younger founders have an advantage is that they make two mistakes that cancel each other out. They don't know how much they can grow, but they also don't know how much they'll need to. Older founders only make the first mistake.

It could be argued that schlep blindness was the reason nobody else had started GiveWell before GiveWell. Most people weren't even thinking of doing something like this, because the idea seemed like so much work that nobody went near it. Why then did GiveWell's founders select the idea? There's no evidence to suggest that Graham's "ignorance" remedy was the reason. Rather, the GiveWell business plan explicitly embraces complexity. In fact, one of their early section titles is Big Problems with Complex Solutions. It seems that the GiveWell founders found the challenge more exciting than deterring. Lack of intimate knowledge of the nonprofit sector might have been a factor, but it probably wasn't a driving one.

Competition

Graham writes:

Because a good idea should seem obvious, when you have one you'll tend to feel that you're late. Don't let that deter you. Worrying that you're late is one of the signs of a good idea. Ten minutes of searching the web will usually settle the question. Even if you find someone else working on the same thing, you're probably not too late. It's exceptionally rare for startups to be killed by competitors—so rare that you can almost discount the possibility. So unless you discover a competitor with the sort of lock-in that would prevent users from choosing you, don't discard the idea.

If you're uncertain, ask users. The question of whether you're too late is subsumed by the question of whether anyone urgently needs what you plan to make. If you have something that no competitor does and that some subset of users urgently need, you have a beachhead.

[...]

You don't need to worry about entering a "crowded market" so long as you have a thesis about what everyone else in it is overlooking. In fact that's a very promising starting point. Google was that type of idea. Your thesis has to be more precise than "we're going to make an x that doesn't suck" though. You have to be able to phrase it in terms of something the incumbents are overlooking. Best of all is when you can say that they didn't have the courage of their convictions, and that your plan is what they'd have done if they'd followed through on their own insights. Google was that type of idea too. The search engines that preceded them shied away from the most radical implications of what they were doing—particularly that the better a job they did, the faster users would leave.

A crowded market is actually a good sign, because it means both that there's demand and that none of the existing solutions are good enough. A startup can't hope to enter a market that's obviously big and yet in which they have no competitors. So any startup that succeeds is either going to be entering a market with existing competitors, but armed with some secret weapon that will get them all the users (like Google), or entering a market that looks small but which will turn out to be big (like Microsoft).

Did GiveWell enter a crowded market? As Graham suggests above, it depends heavily on how you define the market. Charity Navigator existed at the time, and GiveWell and Charity Navigator compete to serve certain donor needs. But they are also sufficiently different. Here's what GiveWell said about Charity Navigator in the GiveWell business plan:

Existing online donor resources, such as Charity Navigator, give only basic financial data and short, broad mission statements (provided by the charities and unedited). To the extent they provide metrics, they are generally based on extremely simplified, problematic assumptions, most notably the assumption that the less a charity spends on administrative expenses, the better. These resources could not begin to help us with our questions, and they weren’t even very useful in narrowing the field (for example, even if we assumed Charity Navigator’s metrics to be viable, there are 1,277 total charities with the highest possible rating, 562 in the areas of health, education and human services)

In other words, GiveWell did enter a market with existing players, indicating that there was a need for things in the broad domain that GiveWell was offering. At the same time, what GiveWell offered was sufficiently different that it was not bogged down by the competition.

Incidentally, in recent times, people from Charity Navigator have been critical of GiveWell and other "effective altruism" proponents. Their critique has itself come in for some criticism, and some people have argued that it may be a response to GiveWell's growth leading to it moving the same order of magnitude of money as Charity Navigator (see the discussion here for more). Indeed, in 2013, GiveWell surpassed Charity Navigator in money moved through the website, though we don't have clear evidence on whether GiveWell is cutting into Charity Navigator's growth.

Other precursors (of sorts) to GiveWell, mentioned by William MacAskill in a Facebook comment, are the Poverty Action Lab and the Copenhagen Consensus.

How prescient was GiveWell?

With the benefit of hindsight, how impressive do we find GiveWell's early plans in predicting its later trajectory? Note that prescience in predicting the later trajectory could also be interpreted as rigidity of plan and unwillingness to change. But since GiveWell appears to have been quite a success, there is a prior in favor of prescience being good (what I mean is that if GiveWell had failed, the fact that they predicted all the things they'd do would be the opposite of impressive, but given their success, the fact that they predicted things in advance also indicates that they chose good strategy from the outset).

Note that I'm certainly not claiming that a startup's failure to predict the future should be a big strike against it. As long as the organization can adapt to and learn from new information, it's fine. But of course, getting more things right from the start is better to the extent it's feasible.

By and large, both the vision and the specific goals outlined in the plan were quite prescient. I noted the following differences between the plan then and the reality as it transpired:

  • In the plan, GiveWell said it would try to identify top charities in a few select areas (they listed seven areas) and refrain from comparing very different domains. Over the years, they have moved more in the direction of directly comparing different domains and offering a few top charities culled across all domains. Even though they seem to have been off in their plan, they were directionally correct compared to what existed. They were already consolidating different causes within the same broad category. For instance, they write (GiveWell business plan, p. 21):

     

    A charity that focuses on fighting malaria and a charity that focuses on fighting tuberculosis are largely aiming for the same end goal – preventing death – and if one were clearly better at preventing death than the other, it would be reasonable to declare it a better use of funds. By contrast, a charity that focuses on creating economic opportunity has a fundamentally different end goal. It may be theoretically possible to put jobs created and lives saved in the same terms (and there have been some attempts to create metrics that do so), but ultimately different donors are going to have very different perspectives on whether it’s more worthwhile to create a certain number of jobs or prevent a certain number of deaths.

  • GiveWell didn't predict clearly enough that it would evolve into a more "foundation"-like entity. Note that at the time of the business plan, they envisioned themselves as deriving their negotiating power with nonprofits from their role as grantmakers. They then transitioned to deriving their power largely from their role as recommenders of top charities. Then, around 2012, following the collaboration with Good Ventures, they switched back to grantmaker mode, but in a far grander way than they'd originally envisaged.
  • At the time of the GiveWell business plan, they saw small donors as their main source of money moved. In recent years, as they moved toward more "foundation"-like behavior, they seem to have started shifting attention to influencing the giving decisions of larger donors. This might be purely due to the unpredictable fact that they joined hands with the Good Ventures foundation, rather than due to any systemic or predictable reasons. It remains to be seen whether they influence more donations by very large donors in the future. Another aspect of this is that GiveWell's original business plan was more ambitious about influencing the large number of small donors out there than (I think) GiveWell is now.
  • GiveWell seems to have moved away from a focus on examining individual charities to understanding the landscape sufficiently well to directly identify the best opportunities, and then to comparing broad causes. The GiveWell business plan, on the other hand, repeatedly talked about "pitting charities against each other" (p. 11) as their main focal activity. In recent years, however, GiveWell has started stepping back and concentrating more on using their big picture understanding of the realm to more efficiently identify the very best opportunities rather than evaluating all relevant charities and causes. This is reflected in their conversation notes as well as the GiveWell Labs initiative. After creating GiveWell Labs, they have shifted more in the direction of thinking at the level of causes rather than individual interventions.

The role of other factors in GiveWell's success

Was GiveWell destined to succeed, or did it get lucky? I believe a mix of both: GiveWell was bound to succeed in some measure, but a number of chance factors played a role in its achieving success to its current level. A recent blog post by GiveWell titled Our work on outreach contains some relevant evidence. The one single person who may have been key to GiveWell's success is the ethicist and philosopher Peter Singer. Singer is a passionate advocate of the idea that people are morally obligated to donate money to help the world's poorest people. Singer played a major role in GiveWell's success in the following ways:

  • Singer both encouraged people to give and directed people interested in giving to GiveWell's website when they asked him where they should give.
  • Singer was an inspiration for many effective giving organizations. He is credited as an inspiration by Oxford ethicist Toby Ord and his wife physician Bernadette Young, who together started Giving What We Can, a society promoting effective giving. Giving What We Can used GiveWell's research for its own recommendations and pointed people to the website. In addition, Singer's book The Life You Can Save also inspired the creation of the eponymous organization. Giving What We Can was a starting point for related organizations in the nascent effective altruism movement, including 80000 Hours, the umbrella group The Centre for Effective Altruism, and many other resources.
  • Cari Tuna and her husband (and Facebook co-founder) Dustin Moskovitz read about GiveWell in The Life You Can Save by Peter Singer around the same time they met Holden through a mutual friend. Good Ventures, the foundation set up by Tuna and Moskovitz, has donated several million dollars to GiveWell's recommended charities (over 9 million USD in 2013), and the two organizations have collaborated somewhat. More in this blog post by Cari Tuna.

The connection of GiveWell to the LessWrong community might also have been important, though less so than Peter Singer. It could have been due to the efforts of a few people interested in GiveWell who discussed it on LessWrong. Jonah Sinick's LessWrong posts about GiveWell (mentioned in GiveWell's post about their work on outreach) are an example (full disclosure: Jonah Sinick is collaborating with me on Cognito Mentoring). Note that although only about 3% of donations made through GiveWell are explicitly attributable to LessWrong, GiveWell has received a lot of intellectual engagement from the LessWrong community and other organizations and individuals connected with the community.

How should the above considerations modify our view of GiveWell's success? I think the key thing GiveWell did correctly was become a canonical go-to reference for directing donors toward good giving decisions. By staking out that space early on, they were able to capitalize on Peter Singer. Also, it's not just GiveWell that benefited from Peter Singer — we can also argue that Singer's arguments were made more effective by the existence of GiveWell. The first line of counterargument to Singer's claim is that most charities aren't cost-effective. Singer's being able to point to a resource that helps identify good charities made people take his argument more seriously.

I think that GiveWell's success at making itself the canonical source was more important than the specifics of their research. But the specifics may have been important in convincing a sufficiently large critical mass of influential people to recommend GiveWell as a canonical source, so the factors are hard to disentangle.

Would something like GiveWell have existed if GiveWell hadn't existed? How would the effective altruism movement be different?

These questions are difficult to explore, and discussing them would take us too far afield. This post on the Effective Altruists Facebook thread offers an interesting discussion. The upshot is that, although Giving What We Can was started two years after GiveWell, people involved with its early history say that the core ideas of looking at cost-effectiveness and recommending the very best places to donate money were mooted before its formal inception, some time around 2006 (before GiveWell had been formally created). At the time, the people involved were unaware of GiveWell. William MacAskill says that GWWC might have done more work on the cost-effectiveness side had GiveWell not already been doing it.

I ran this post by Jonah Sinick and also emailed a draft to the GiveWell staff. I implemented some of their suggestions, and am grateful to them for taking the time to comment on my draft. Any responsibility for errors, omissions, and misrepresentations is solely mine.

How relevant are the lessons from Megamistakes to forecasting today?

7 VipulNaik 12 April 2014 04:53AM

Disclaimer: This post contains unvetted off-the-cuff thoughts. I've included quotes from the book in a separate quote dump post to prevent this post from getting too long. Read the intro and the TL;DR if you want a quick idea of what I'm saying.

As part of a review of the track record of forecasting and the sorts of models used for it, I read the book Megamistakes: Forecasting and the Myth of Rapid Technological Change (1989) by Steven P. Schnaars (here's a review of the book by the Los Angeles Times from back when it was published). I conducted my review in connection with contract work for the Machine Intelligence Research Institute, but the views expressed here are solely mine and have not been vetted by MIRI. Note that this post is not a full review of the book. Instead, it simply discusses some aspects of the book I found relevant.

The book is a critique of past forecasting efforts. The author identifies many problems with these forecasting efforts, and offers suggestions for improvement. But the book was written in 1989, when the Internet was just starting out and the World Wide Web didn't exist. Thus, the book's suggestions and criticisms may be outdated in one or more of these three ways:

  • Some of the suggestions in the book were mistaken, and this has become clearer based on evidence gathered since the publication of the book: I don't think the book was categorically mistaken on any count. The author was careful to hedge appropriately in cases where the evidence wasn't very strongly in a particular direction. But point #1 below is in the direction of the author not giving appropriate weight to a particular aspect of his analysis.
  • Some of the suggestions or criticisms in the book don't apply today because the sorts of predictions being made today are of a different nature: We'll argue this to be the case in #2 below.
  • Some of the suggestions in the book are already implemented routinely by forecasters today, so they don't make sense as criticisms even though they continue to be valid guidelines. We'll argue this to be the case in #3 below.

I haven't been able to locate any recent work of the author where he assesses his own work in light of new evidence; if any readers can find such material, please link to it in the comments.

TL;DR

  1. A number of the technologies that Schnaars notes were predicted to happen before 1989 and didn't, have in fact happened since then. This doesn't contradict anything Schnaars wrote. In fact, it agrees with many of his claims. But it does seem to be connotatively different from the message that Schnaars seems keen on pushing in the book. It seems that the main issue with many predictions is one of timing, rather than a fundamental flaw in the vision of the future being suggested. For instance, in the realm of concerns about unfriendly AI, it may be that the danger of AGI will be imminent in 2145 AD rather than 2045 AD, but the basic concerns espoused by Yudkowsky could still be right.
  2. Schnaars does note that trends related to computing are the exception to technological forecasting being way too optimistic: computing-related trends seem to him to often be right or only modestly optimistic. In 1989, the exceptional nature of computing may have seemed like only a minor point in a book about many other failed technological forecasts. In 2014, the point is anything but minor. To the extent that there are systematic reasons for computing being different from the other technological realms where Schnaars notes a bad track record of forecasting, his critique isn't too relevant. The one trend that grows exponentially, in line with bullish expectations, will come to dominate the rest eventually. And to the extent that software eats the world, it could spill over into other trends as well.
  3. A lot of the suggestions offered by Schnaars (particularly suggestions on diversification, field testing, and collecting feedback) are routinely implemented by many top companies today, and even more so by the top technology companies. This isn't necessarily because they read him. It's probably largely because it's a lot easier to implement those suggestions in today's world with the Internet.

#1: The criticism of "technological wonderland": it's all about timing, honey!

Schnaars is critical of forecasters for being too enamored with the potential of a technology and replacing hard-nosed realism with wishful thinking based on what they'd like the technology to do. Two important criticisms he makes in this regard are:

  • Forecasters often naively extrapolate price-performance curves, ignoring both economic and technological hurdles.
  • Forecasters often focus more on what is possible rather than what people actually want as consumers. They ignore the fact that new product ideas that sound cool may not deliver enough value to end users to be worth the price tag.

The criticism remains topical today. Futurists today often extrapolate trends such as Moore's law far into the future, to the point where there's considerable uncertainty both surrounding the technological feasibility and the economic incentives. A notable example here is Ray Kurzweil, well-known futurist and author of The Singularity is Near. Kurzweil's prediction record is decidedly mixed. An earlier post of mine included a lengthy discussion of the importance of economic incentives in facilitating technological improvement. I'd drafted that post before reading Megamistakes, and the points I make there aren't too similar to the specific points in the book, but it is in the same general direction.

Schnaars notes, but in my view, gives insufficient emphasis to the following point: Many of the predictions he grades aren't fundamentally misguided at a qualitative level. They're just wrong on timing. In fact, a number of them have been realized in the 25 years since. Some others may be realized over the next 25 years, and yet more may be realized over the next 100 years. And some may be realized centuries from now. What the predictions got wrong was timing, in the following two senses:

  • Due to naive extrapolation of price-performance curves, forecasters underestimate the time needed to attain specific price-performance milestones. For instance, they might think that you'd get a certain kind of technological product for $300 by 1985, but it might actually come to market at that price only in 2005.
  • Because of their own obsession with technology, forecasters overestimate the reservation prices (i.e., the maximum price at which consumers are willing to buy a technological product). Thus, even when a particular price-performance milestone is attained, it fails to lead to the widespread use of the technology that forecasters had estimated.

The gravity you assign to this error depends heavily on the purpose of the forecast. If it's for a company deciding whether to invest a few million dollars in research and development, then being off by a couple of decades is a ruinous proposition. If you're trying to paint a picture of the long term future, on the other hand, a few decades here and there need not be a big deal. Schnaars seems to primarily be addressing the first category.
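To see how naive extrapolation translates into timing errors, here is a small Python sketch with hypothetical numbers (a $3000 product, a $300 target price, and two candidate annual decline rates — none of these figures come from the book):

```python
import math

# Years for a price to fall from `start` to `target` at a constant
# annual rate of decline.

def years_to_price(start, target, annual_decline):
    return math.log(target / start) / math.log(1 - annual_decline)

print(years_to_price(3000, 300, 0.30))  # optimistic forecast: ~6.5 years
print(years_to_price(3000, 300, 0.10))  # actual decline: ~21.9 years
```

The milestone is still reached in both cases; the optimistic forecaster is wrong mainly about when, which is exactly the kind of timing error discussed above.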

Schnaars makes the point about timing in more detail here (pp. 120-121) (emphasis mine):

A review of past forecasts for video recorders and microwave ovens illustrates the length of time required for even the most successful innovations to diffuse through a mass market. It also refutes the argument that we live in times of ever faster change. Both products were introduced into commercial markets shortly after World War II. Both took more than twenty years to catch fire in a large market. The revolution was characterized more by a series of fits and starts than by a smooth unfolding pattern. The ways in which they achieved success suggests something other than rapid technological change. Progress was slow, erratic, and never assured. And this applies to two of the most successful innovations of the past few decades! The record for less successful innovations is even less impressive.

The path to success for each of those products was paved with a mix of expected and unexpected events. First, it was widely known that to be successful it was necessary to get costs down. But as costs fell, other factors came into play. Microwave ovens looked as if they were going to take off in the late 1960s, when consumer advocates noted that the ovens leaked radiation when dropped from great heights. The media dropped the "great heights" part of the research, and consumers surmised that they would be purchasing a very dangerous product. Consumers decided to cook with heat for a few years longer. Similarly, the success of video recorders is usually attributed to Sony's entry with Betamax in the mid-1970s. But market entries went on for years with the video recorder. Various interpretations of the product were introduced onto the market throughout the 1960s and 1970s. A review of these entries clearly reveals that the road to success for the VCR was far more rocky than the forecasts implied. Even for successful innovations, which are exceptions to begin with, the timing of market success and the broad path the product will follow are often obscured from view.

One example where Schnaars notes that timing is the main issue is that of fax machines (full quote in the quote dump).

Here are some technologies that Schnaars notes as failed predictions, but that have, in the intervening years (1989-2014), emerged in roughly the predicted form. Full quotes from the book are in the quote dump.

  • Computerphones (now implemented as smartphones, though the original vision involved landline rather than mobile phones).
  • Picture phones, specifically the AT&T PicturePhone (now implemented as smartphones and also as computers with built-in webcams, though note again that the original vision involved landline phones). See Wikipedia for more.
  • Videotex (an early offering whose functionality is now included in GUI-based browsers accessing the World Wide Web and other Internet services).

An interesting general question that this raises, and that I don't have an offhand answer to, is whether there is a tradeoff between having a clear qualitative imagination about what a technology might look like once matured, and having a realistic sense of what will happen in the next few years. If that's the case, the next question would be what sort of steps the starry-eyed futurist types can take to integrate realistic timing into their vision, and/or how people with a realistic sense of timing can acquire the skill of imagining the future without jeopardizing their realism about the short term.

#2: Computing: the exception that eviscerates the rule?

Schnaars acknowledges computing as the exception (pp. 123-124) (emphasis mine, longer version of quote in the quote dump):

Most growth market forecasts, especially those for technological products, are grossly optimistic. The only industry where such dazzling predictions have consistently come to fruition is computers. The technological advances in this industry and the expansion of the market have been nothing short of phenomenal. The computer industry is one of those rare instances where optimism in forecasting seems to have paid off. Even some of the most boastful predictions have come true. In other industries, such optimistic forecasts would have led to horrendous errors. In computers they came to pass.

[...]

The most fascinating aspect of those predictions is that in almost any other industry they would have turned out to be far too optimistic. Only in the computer industry did perpetual boasting turn out to be accurate forecasting, until the slowdown of the mid-1980s.

The tremendous successes in the computer industry illustrate an important point about growth market forecasting. Accurate forecasts are less dependent on the rate of change than on the consistency and direction of change. Change has been rampant in computers; but it has moved the industry consistently upward. Technological advances have reduced costs, improved performance, and, as a result, expanded the market. In few other industries have prices declined so rapidly, opening up larger and larger markets, for decades. Consequently, even the most optimistic predictions of market growth have been largely correct. In many slower growth industries, change has been slower but has served to whipsaw firms in the industry rather than push the market forward. In growth market forecasting, rapid change in one direction is preferable to smaller erratic changes.

This is about the full extent to which Schnaars discusses the case of computing. His failure to discuss it more deeply seems like a curious omission. In particular, I would have been curious to see whether he had an explanation for why computing has turned out so different, and whether this was due to the fundamental nature of computing or just a lucky historical accident. Further, to the extent that Schnaars believed that computing was fundamentally different, how did he fail to see the long-run implications in terms of how computing would eventually become a dominating factor in all forms of technological progress?

So what makes computing different? I don't have a strong view, but I think that the general-purpose nature and wide applicability of computing may have been critical. A diverse range of companies and organizations knew that they stood to benefit from the improvement of computing technology. This gave them greater incentives to pool and share larger amounts of resources. Radical predictions, such as Moore's law, were given the status of guidelines for the industry. Moreover, improvements in computing technology affected the backend costs of development, and the new technologies did not have to be sold to end consumers. So end consumers' reluctance to change habits was not a bottleneck to computing progress.

Contrast this with a narrower technology such as picture phones. Picture phones were a separate technology developed by a phone company, whose success heavily depended on what that company's consumers wanted. Whether AT&T succeeded or failed with the picture phone, most other companies and organizations didn't care.

Indeed, when the modern equivalents of picture phones, computerphones, and Videotex finally took off, they did so as small addenda to a thriving low-cost infrastructure of general-purpose computing.

The lessons from Megamistakes suggest that converting the technological fruits of advances in computing into products that consumers use can be a lot trickier and more erratic than simply making advances in computing.

I also think there's a strong possibility that the accuracy of computing forecasts may be declining, and that the problems that Schnaars outlines in his book (namely, consumers not finding the new technology useful) will start biting computing. For more, see my earlier post.

#3: Main suggestions already implemented nowadays?

Some of the suggestions that Schnaars makes on the strategy front are listed in Chapter 11 (Strategic Alternatives to Forecasting) and include:

  1. Robust Strategies: If a firm cannot hope to ascertain what future it will face, it can develop a strategy that is resilient no matter which of many outcomes occurs (p. 163).
  2. Flexible Strategies: Another strategy for dealing with an uncertain future is to remain flexible until the future becomes clearer (p. 165).
  3. Multiple Coverage Strategies: Another alternative to forecasting growth markets is to pursue many projects simultaneously (p. 167).

I think that (2) and (3) in particular have increased a lot in the modern era, and (1) has too, though less obviously. This is particularly true in the software and Internet realm, where one can field-test many different experiments over the Internet. But it's also true for manufacturing, as better point-of-sale information and a supply chain that records information accurately at every stage allow for rapid changes to production processes (cf. just in time manufacturing). The example of clothing retailer Zara is illustrative: they measure fashion trends in real time and change their manufacturing choices in response to these trends. In his book Everything is Obvious: Once You Know the Answer, Duncan Watts uses the phrase "measure and react" for this sort of strategy.

Other pieces of advice that Schnaars offers, that I think are being followed to a greater extent today than back in his time, partly facilitated by greater information flow and more opportunities for measurement, collaboration, and interaction:

  • Start Small: Indeed, a lot of innovation today is either done by startups or by big companies trying out small field tests of experimental products. It's very rarely the case that a company invests a huge amount in something before shipping or field-testing it. Facebook started out at Harvard in February 2004 and gradually ramped up to a few other universities, and only opened to the general public in September 2006 (see their timeline).
  • Take Lots of Tries: The large numbers of failed startups as well as shelved products in various "labs" of Google, Facebook, and other big companies are testimony to this approach.
  • Enter Big: Once something has been shown to work, the scaling up can be very rapid in today's world, due to rapid information flows. Facebook got to a billion users in under a decade of operation. When they roll out a new feature, they can start small, but once the evidence is in that it's working, they can roll it out to everybody within months.
  • Setting Standards of Uniformity: It's easier than before to publicly collaborate in an open fashion on standards. There are many successful examples that form the infrastructure of the Internet, most of them based on open source technologies. Some recent examples of successful collaborative efforts include Schema.org (between search engines), OpenID (between major Internet email ID providers and other identity providers such as Facebook), Internet.org (between Facebook and cellphone manufacturing companies), and the Open Compute Project.
  • Developing the Necessary Infrastructure: Big data companies preemptively get new data center space before the need for it starts kicking in. Data center space is particularly nice because server power and data storage are needed for practically all their operations, and therefore are agnostic to what specific next steps the companies will take. This fits in with the "Flexible Strategy" idea.
  • Ensuring a Supply of Complementary Products: This isn't uniformly followed, but arguably the most successful companies have followed it. Google expanded into Maps, News, and email long before people were clamoring for it. They got into the phone operating system business with Android and the web browser business with Chrome. Facebook has been more focused on its core business of social networking, but it too has been supporting complementary initiatives such as internet.org to boost global Internet connectivity.
  • Lowering Prices: Schnaars cites the example of Xerox, which sidestepped the problem of the high prices of machines by leasing them instead of selling them. Something similar is done in the context of smartphones today.

Schnaars' closing piece of advice is (p. 183):

Assume that the Future Will Be Similar to the Present

Is this good advice, and are companies and organizations today following it? I think it's both good advice and bad advice. On the one hand, Google was able to succeed with GMail because they correctly forecast that disk space would soon be cheap enough to make GMail economical. In this case, it was their ability to see the future as different from the present that proved to be an asset. Similarly, Paul Graham describes good startup ideas as ones created by people who live in the future rather than the present.

At the same time, the best successes do assume that the future won't look physically too different from the present. And unless there is a strong argument in favor of a particular way in which the future will look different, planning based on the present might be the best one can hope for. GMail wasn't based on a fundamental rethinking of human behavior. It was based on the assumption that most things would remain similar, but Internet connectivity and bandwidth would improve and disk space costs would reduce. Both assumptions were well-grounded in the historical record of technology trends, and both were vindicated by history.

Thanks to Luke Muehlhauser (MIRI director) for recommending the book and to Jonah Sinick for sending me his notes on the book. Neither of them have vetted this post.

Quote dump

To keep the main post short, I'm publishing a dump of relevant quotes from the book separately, in a quote dump post.


Quote dump for "megamistakes"

1 VipulNaik 12 April 2014 04:53AM

This post contains a dump of the less important quotes from Megamistakes that I omitted from the main post in order to keep it short.

#1: Quote dumps related to bad timing

On the fax machine, quote from p. 57 of the book:

Originally, the facsimile machine, or sending mail by phone, was targeted toward business customers. Although a bright future was predicted for those devices, it took nearly twenty years for the product to exhibit rapid growth. Twenty years ago, in 1968, Xerox and Magnavox were joined by Litton, Stewart-Warner, and a host of other entrants in an attempt to garner the lion's share of this growth market. They all believed it was "on the verge of a boom akin to that of the office copier." One executive predicted that the sales would climb to 500,000 units in just a few years, even though only 4,000 were currently in use for business correspondence.

The innovation failed to catch fire. It was too expensive and took too long to send a single document — ten 8 1/2-by-11-inch sheets in an entire day! No wonder companies called the courier.

By 1987 the situation had changed. Prices had been reduced dramatically, and performance had increased. Driven by consumer and small business purchases, sales skyrocketed to nearly 300,000 units in 1987. After twenty years, the balance between price and performance finally stirred a growth market for this innovation and warranted the optimism that had originally been applied to it. In the case of facsimile machines, change came much more slowly than expected.

The book later notes (p. 118):

Facsimile machines lingered for decades until technological advances enhanced the benefits and allowed price declines to spawn a larger market. Only now, after many false starts, has the market for facsimile machines exploded. The balance between price and performance has been struck. The machines send documents faster and cheaper, and are now a "hot" product.

Here are some of the technologies that Schnaars notes as failed predictions, but that have since emerged in a form approximately similar to what was predicted:

  • Computerphones (now implemented as smartphones, though the original vision involved landline rather than mobile phones). Quote from p. 83:

    The computerphone, a recent innovation which married the data processing capabilities of the personal computer to the voice and data communications capabilities of the modern telephone, has also failed to excite consumer interest, although initial expectations for the product were high. Entrepreneurial firms such as Zaisan Corp. introduced reasonably priced, powerful computerphones in 1984. Other firms followed. It was expected to be a classic case of large firms following smaller pioneers into the market.

    The market for computerphones was expected to reach $1 billion a year within a few years. In 1984 Business Week reported: "Most analysts are predicting fast growth in the computer-phone market." Most experts were wrong. The market never materialized. The makers of the machines were unable to convince business buyers that they needed the product. A PC with a modern modem seemed to do the job just fine. A sales pitch that argued that the computerphone eliminated desktop clutter proved unpersuasive. Less than a year later Business Week reexamined the computerphone market. One computer retailer reflecting on his lack of success selling the product notes: "People see it as an expensive PC with a phone on it — with no need for it." Others felt that computerphone manufacturers had not really figured out what the market wanted. Such severe shortcomings are likely to dampen enthusiasm for the product for years to come.

  • Picture phones (now implemented as smartphones and also as computers with built-in webcams, though note again that the original vision involved landline phones). Quote from pp. 86-87:

    One of the most stunning failures of high technology was AT&T's picture telephone. Although expectations for the product were high, and many experts considered it a near certainty that the innovation would revolutionize many markets, it has so far served very few customers after years of intense effort. The company had been working on the product since at least the 1930s. The invention of the transistor in 1948 allowed the product to be reduced to a more manageable size. Commercial service was to begin in 1964, the same year the product was featured at the New York World's Fair.

    The advantages of the picture phone were clear. You could meet face to face with customers and colleagues without the expense and bother of business travel. In the long term, electronic meetings would render personal meetings obsolete. Ultimately, the picture phone would serve the home market, where household callers could not only speak but speak and be seen.

    With the picture telephone, salesmen would travel electronically rather than physically. They could literally see their customers without leaving their desks. Productivity would increase while travel expenses declined. Besides, according to one expert, central business districts were dispersing. Soon it would be difficult for cities to support public transportation.

    Other advantages would also accrue to users. With a keyboard they could tie into mainframe computers and work out problems at their desks. The picture phone would also serve as a precursor to videotex. Customers could view airline reservations, stock quotations, and a host of other databases.

    Picture phone service was installed at Union Carbide as a test. The company loved it. It cut down on interoffice visits and kept business meetings on business topics.

    World's Fair visitors were awed by the picture phone. So were the forecasters. A 1969 article in The Bell Laboratories Record noted that "just as the telephone has revolutionized human habits of communicating and made a major contribution to the quality of human life, many of us at Bell Labs believe that the PICTUREPHONE service, the service that lets people see as well as hear each other, offers potential benefits to mankind of the same magnitude." Advertisements in the business press of the late 1960s announced the product's features and pictured the desktop model. It was predicted that there would be 100,000 picturephones in use by 1975. By the 1980s "these phones would be widely used by the general public — perhaps replacing some form of transportation, such as trips to local stores to examine merchandise before making purchases." Study after study predicted the same opportunities for the picture telephone.

    A study entitled "A Long Look Ahead," conducted for AT&T by the Institute for the Future in mid-1969, proved no exception. Big changes were in store for AT&T. "The world of 1985," the study warned, "will be markedly different than today's." There would be 3 million picture telephones in use in the United States, generating revenues of $5 billion. [...]

    What happened to the picture telephone was far different from what was predicted. Customers may have been awed by the product but they were also awed by its price. When it came to paying for the service they decided to forgo the video portion of the product. They decided to just listen rather than look and listen.

    The product was not killed, however. In the early 1980s it evolved into the Picturephone meeting service. Special rooms were set up where the "leading edge" companies could hold face-to-face meetings without actually meeting face to face. Mostly, the service offered the same benefits as those offered nearly twenty years earlier.

    The meeting service met mostly with failure. Video teleconferencing never made a dent in business travel. Other business services offered by AT&T soared while the picturephone service slumped. Pressing the flesh proved insurmountably superior to pressing buttons. John Naisbitt's contention that high technology leads to a higher demand for personal contact, what he calls "high touch," certainly rings true in the case of the picturephone.

    The advantages of the picturephone over ordinary telephone service were questionable and surely expensive. In the early 1970s some blamed a recession for the product's slow start. But the product's problems lay deeper. Post-mortems usually attribute the product's demise to high initial costs. But high costs alone did not destroy the picturephone. What really killed it was that it offered a benefit that was awesome and amusing but essentially unwanted. There is little need to see a person over the phone for perfunctory personal and business communications. Furthermore, in situations where personal contact is crucial, seeing people over the phone is a poor substitute for meeting them in person. Consequently, the video phone was really competing with the traditional telephone, not "in-the-flesh" meetings. Furthermore, the traditional audio telephone proved to be a more than sufficient medium. Sensibly and successfully, that is where the emphasis is now placed. Enhancing the audio telephone has led to cost-effective benefits and profitable services. The lack of benefits for the picture telephone over the voice-only telephone, coupled with the much higher price that had to be charged for the service, disconnected the prospects for this technological product.

  • Videotex (an early offering whose functionality is now included in, or rather superseded by, GUI-based browsers accessing the World Wide Web and other Internet services). Quoting from p. 82:

    A more recent example is videotex, widely hailed as a growth market in the early 1980s. It has penetrated the French market — although the French have heavily subsidized the technology. In the United States it has gone nowhere. Most consumers have no strong desire to manipulate checking accounts electronically or scan data bases in their spare time. And they are certainly unwilling to pay a hefty fee for the equipment necessary to do so. Meanwhile, providers of those services search tirelessly for a service that consumers will find beneficial.

#2: Computing: the gaping-hole exception to the rule?

Full quote (pp. 123-124) (emphasis mine):

Most growth market forecasts, especially those for technological products, are grossly optimistic. The only industry where such dazzling predictions have consistently come to fruition is computers. The technological advances in this industry and the expansion of the market have been nothing short of phenomenal. The computer industry is one of those rare instances where optimism in forecasting seems to have paid off. Even some of the most boastful predictions have come true. In other industries, such optimistic forecasts would have led to horrendous errors. In computers they came to pass.

Integrated circuits were widely expected to create wondrous products and stunning growth markets. Over the past few decades there have been numerous calls for integrated circuits. As early as 1962 many foresaw the potential of these devices. It was widely, and correctly, predicted that integrated circuits would follow the time-honored pattern of increasing sales volume and declining unit costs as the technology was transferred to larger markets. In 1962 John W. Mauchly, one of the innovators of early computer technology, predicted: "By the 1980s businessmen will be carrying personal computers around in their pockets." Given the widespread use of portable and laptop computers, and the fact that at the time (1962) computers had not been widely diffused even to business, his prediction is amazingly accurate. It was not unusual, however. Throughout the 1960s, there were equally glowing forecasts for computer gear.

Similarly, Fortune reported in 1962: "These exquisite artifacts [microprocessors] may alter the electronics industry, economically as well as technologically, as dramatically as did the transistor." They did. Unlike other sectors of the economy, technological changes in computers were dramatic, even if they were largely expected to occur.

In 1968, in response to critics who saw the end of growth in the computer industry, Thomas J. Watson of IBM noted that "there doesn't seem to be any real limit to the growth of the computer industry."

Finally, in 1973, Intel's Robert N. Noyce stated that "the potential applications [for microcomputers] are almost unlimited."

The most fascinating aspect of those predictions is that in almost any other industry they would have turned out to be far too optimistic. Only in the computer industry did perpetual boasting turn out to be accurate forecasting, until the slowdown of the mid-1980s.

The tremendous successes in the computer industry illustrate an important point about growth market forecasting. Accurate forecasts are less dependent on the rate of change than on the consistency and direction of change. Change has been rampant in computers; but it has moved the industry consistently upward. Technological advances have reduced costs, improved performance, and, as a result, expanded the market. In few other industries have prices declined so rapidly, opening up larger and larger markets, for decades. Consequently, even the most optimistic predictions of market growth have been largely correct. In many slower growth industries, change has been slower but has served to whipsaw firms in the industry rather than push the market forward. In growth market forecasting, rapid change in one direction is preferable to smaller erratic changes.

Meetup : Christchurch, NZ Inaugural Meetup

3 free_rip 12 April 2014 02:37AM

Discussion article for the meetup : Christchurch, NZ Inaugural Meetup

WHEN: 27 April 2014 04:30:00PM (+1200)

WHERE: James Hight Library, University of Canterbury, Room 901

Join us for the first Christchurch meetup!

It will be held in room 901, James Hight Library at the University of Canterbury. Just head in the front (only) doors to James Hight, find an elevator (don't take the stairs, they only go up one floor), go up to floor 9 and you'll find room 901 along one of the walls. If you're unsure of how to get there feel free to comment below, PM me for directions or PM me for a cell number in case you get lost.

I'll be there from 4pm, so arrive anytime 4-4.30 to chat and meet people. We'll start the planned activities at 4.30 and go for as long as people would like.

As for what the planned activities are to be - I'm open to suggestions! As a default, we'll start with a few introduction games, move onto discussing an article or two, and finish up with some fun (quick to learn) card games like Story Wars.

Discussion article for the meetup : Christchurch, NZ Inaugural Meetup

Supply, demand, and technological progress: how might the future unfold? Should we believe in runaway exponential growth?

13 VipulNaik 11 April 2014 07:07PM

Warning: This is a somewhat long-winded post with a number of loosely related thoughts and no single, cogent thesis. I have included a TL;DR after the introduction, listing the main points. All corrections and suggestions are greatly appreciated.

It's commonly known, particularly to LessWrong readers, that in the world of computer-related technology, key metrics have been doubling fairly quickly, with doubling times ranging from 1 to 3 years for most metrics. The most famous paradigmatic example is Moore's law, which predicts that the number of transistors on integrated circuits doubles approximately every two years. The law itself stood up quite well until about 2005, but broke down after that (see here for a detailed overview of the breakdown by Sebastian Nickel). Another similar proposed law is Kryder's law, which looks at the doubling of hard disk storage capacity. Chapters 2 and 3 of Ray Kurzweil's book The Singularity is Near go into detail regarding the technological acceleration (for an assessment of Kurzweil's prediction track record, see here).
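To get an intuition for what these doubling times mean, here is a minimal Python sketch; the 1-3 year doubling times are just the illustrative range mentioned above, not measurements of any particular metric:

```python
# Minimal sketch: compound growth of a metric under a fixed doubling time.

def growth_factor(years, doubling_time):
    """Multiplicative growth after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time)

for doubling_time in (1, 2, 3):
    print(f"doubling time {doubling_time}y: "
          f"10-year growth = {growth_factor(10, doubling_time):,.0f}x")
# doubling time 1y: 10-year growth = 1,024x
# doubling time 2y: 10-year growth = 32x
# doubling time 3y: 10-year growth = 10x
```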

One of the key questions facing futurists, including those who want to investigate the Singularity, is the question of whether such exponential-ish growth will continue for long enough for the Singularity to be achieved. Some other reasonable possibilities:

  • Growth will continue for a fairly long time, but slow down to a linear pace and therefore we don't have to worry about the Singularity for a very long time.
  • Growth will continue but converge to an asymptotic value (well below the singularity threshold) beyond which improvements aren't possible. Therefore, growth will progressively slow down but still continue as we come closer and closer to the asymptotic value.
  • Growth will come to a halt, because there is insufficient demand at the margin for improvement in the technology.

Ray Kurzweil strongly adheres to the exponential-ish growth model, at least for the duration necessary to reach computers that are thousands of times as powerful as humanity (that's what he calls the Singularity). He argues that although individual paradigms (such as Moore's law) eventually run out of steam, new paradigms tend to replace them. In the context of computational speed, efficiency, and compactness, he mentions nanotechnology, 3D computing, DNA computing, quantum computing, and a few other possibilities as candidates for what might take over once Moore's law is exhausted for good.

Intuitively, I find the assumption of continued exponential growth implausible. I hasten to add that I'm mathematically literate, so it's certainly not the case that I fail to appreciate the nature of exponential growth — in fact, I believe my skepticism is rooted in the fact that I do understand exponential growth. I do think the issue is worth investigating, both from the angle of whether the continued improvements are technologically feasible, and from the angle of whether there will be sufficient incentives for people to invest in achieving the breakthroughs. In this post, I'll go over the economics side of it, though I'll include some technology-side considerations to provide context.

TL;DR

I'll make the following general points:

  1. The industries that rely on knowledge goods tend to have long-run downward-sloping supply curves.
  2. Industries based on knowledge goods exhibit experience curve effects: what matters is cumulative demand rather than demand in a given time interval. The irreversibility of creating knowledge goods creates a dynamic different from that in other industries.
  3. What matters for technological progress is what people investing in research think future demand will be like. Bubbles might actually be beneficial if they help lay the groundwork of investment that is helpful for many years to come, even though the investment wasn't rational for individual investors.
  4. Each stage of investment requires a large enough number of people with just the right level of willingness to pay (see the PS for more). A diverse market, with people at various intermediate stages of willingness to pay, is crucial for supporting a technology through its stages of progress.
  5. The technological challenges confronted at improving price-performance tradeoffs may differ for the high, low, and middle parts of the market for a given product. The more similar these challenges, the faster progress is likely to be (because the same research helps with all the market segments together).
  6. The demand-side story most consistent with exponential technological progress is one where people's desire for improvement in the technologies they are using is proportional to the current level of those technologies. But this story seems inconsistent with the facts: people's appetite for improvement probably declines once technologies get good enough. This creates problems for the economic incentive side of the exponential growth story.
  7. Some exponential growth stories require a number of technologies to progress in tandem. Progress in one technology helps facilitate demand for another complementary technology in this story. Such progress scenarios are highly conjunctive, and it is likely that actual progress will fall far short of projected exponential growth.

#1: Short versus long run for supply and demand

In the short run, supply curves are upward-sloping and demand curves are downward-sloping. In particular, this means that when the demand curve expands (more people wanting to buy the item at the same price), that causes an increase in price and an increase in quantity traded (rising demand creates shortages at the current price, motivating suppliers to increase supplies and also charge more money given the competition between buyers). Similarly, if the supply curve expands (more of the stuff getting produced at the same price), that causes a decrease in price and an increase in quantity traded. These are robust empirical observations that form the bread and butter of microeconomics, and they're likely true in most industries.

In the long run, however, things become different because people can reallocate their fixed costs. The more important the allocation of fixed costs is to determining the short-run supply curve, the greater the difference between short-run supply curves based on choices of fixed cost allocation. And in particular, if there are increasing returns to scale on fixed costs (for instance, a factory that produces a million widgets costs less than 1000 times as much as a factory that produces a thousand widgets) and fixed costs contribute a large fraction of production costs, then the long-run supply curve might end up being downward-sloping. An industry where the long-run supply curve is downward-sloping is called a decreasing cost industry (see here and here for more). (My original version of this para was incorrect; see CoItInn's comment and my response below it for more.)
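Here's a toy numerical sketch of how sublinear fixed costs produce a downward-sloping long-run average cost curve; the exponent and dollar figures are arbitrary illustrative assumptions, not data from any industry:

```python
# Toy model of a decreasing cost industry: fixed costs scale sublinearly
# with planned capacity, so average cost per unit falls as output rises.

def average_cost(quantity, base_fixed=10_000, scale_exponent=0.7, marginal=2.0):
    """Average cost per unit when fixed costs grow as quantity**scale_exponent."""
    fixed = base_fixed * (quantity / 1_000) ** scale_exponent
    return fixed / quantity + marginal

for q in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{q:>9,} units: average cost ${average_cost(q):.2f}")
# 1,000 units cost $12.00 each; 1,000,000 units cost about $3.26 each.
```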

#2: Introducing technology, the arrow of time, and experience curves

The typical explanation for why some industries are decreasing cost industries is the fixed costs of investment in infrastructure that scale sublinearly with the amount produced. For instance, running ten flights from New York to Chicago costs less than ten times as much as running one flight. This could be because the ten flights can share some common resources such as airport facilities or even airplanes, and also they can offer backups for one another in case of flight cancellations and overbooking. The fixed costs of setting up a factory that can produce a million hard drives a year are less than 1000 times the fixed cost of setting up a factory that can produce a thousand hard drives a year. A mass transit system for a city of a million people costs less than 100 times as much as a mass transit system for a city of the same area with 10,000 people. These explanations for decreasing cost have only a moderate level of time-directionality. When I talk of time-directionality, I am thinking of questions like: "What happens if demand is high in one year, and then falls? Will prices go back up?" It is true that some forms of investment in infrastructure are durable, and therefore, once the infrastructure has already been built in anticipation of high demand, costs will continue to stay low even if demand falls back. However, much of the long-term infrastructure can be repurposed, causing prices to go back up. If demand for New York-Chicago flights reverts to low levels, the planes can be diverted to other routes. If demand for hard drives falls, the factory producing them can (at some refurbishing cost) produce flash memory or chips or something totally different. As for intra-city mass transit systems, some are easier to repurpose than others: buses can be sold, and physical train cars can be sold, but the rail lines are harder to repurpose. In all cases, there is some time-directionality, but not a lot.

Technology, particularly the knowledge component thereof, is probably an exception of sorts. Knowledge, once created, is very cheap to store, and very hard to destroy in exchange for other knowledge. Consider a decreasing cost industry where a large part of the efficiency of scale arises because larger demand volumes justify bigger investments in research and development that lower production costs permanently (regardless of actual future demand volumes). Once the "genie is out of the bottle" with respect to the new technologies, the lower costs will remain — even in the face of flagging demand. However, flagging demand might stall further technological progress.

This sort of time-directionality is closely related to (though not the same as) the idea of experience curve effects: instead of looking at the quantity demanded or supplied per unit time in a given time period, it's more important to consider the cumulative quantity produced and sold, and the economies of scale arise with respect to this cumulative quantity. Thus, people who have been in the business for ten years enjoy a better price-performance tradeoff than people who have been in the business for only three years, even if they've been producing the same amount per year.
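As a concrete illustration, here is a minimal sketch of the classic experience curve model (Wright's law), under which unit cost falls by a fixed percentage with each doubling of cumulative output; the 20% learning rate and $100 first-unit cost are illustrative assumptions:

```python
import math

# Wright's law: unit cost falls by a fixed fraction (the "learning rate")
# with each doubling of cumulative production.

def unit_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.20):
    """Cost of the nth unit, given cumulative units produced so far."""
    b = math.log(1 - learning_rate) / math.log(2)  # progress exponent (negative)
    return first_unit_cost * cumulative_units ** b

for n in (1, 2, 4, 8, 1_000_000):
    print(f"unit #{n:>9,}: ${unit_cost(n):.2f}")
# Each doubling (1 -> 2 -> 4 -> 8) cuts the cost by 20%; by unit one
# million, cumulative experience has pushed the cost to about $1.17.
```

Note that cost here depends only on cumulative output, not on the production rate in any given year — which is the time-directionality described above.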

The concept of price skimming is also potentially relevant.

#3: The genie out of the bottle, and gaining from bubbles

The "genie out of the bottle" character of technological progress leads to some interesting possibilities. If suppliers think that future demand will be high, then they'll invest in research and development that lowers the long-run cost of production, and those lower costs will stick permanently, even if future demand turns out to be not too high. This depends on the technology not getting lost if the suppliers go out of business — but the technology will probably survive, given that suppliers are unlikely to want to destroy cost-lowering technologies. Even if they go out of business, they'll probably sell the technology to somebody who is still in business (after all, selling their technology for a profit might be their main way of recouping some of the costs of their investment). Assuming you like the resulting price reductions, this could be interpreted as an argument in favor of bubbles, at least if you ignore the long-term damage that these might impose on people's confidence to invest. In particular, the tech bubble of 1998-2001 spurred significant investments in Internet infrastructure (based on false premises) as well as in the semiconductor industry, permanently lowering the prices of these, and facilitating the next generation of technological development. However, the argument also ignores the fact that the resources spent on the technological development could instead have gone to other, even more valuable technological developments. That's a big omission, and probably destroys the case entirely, except for rare situations where some technologies have huge long-term spillovers despite insufficient short-term demand for a rational for-profit investor to justify investment in the technology.

#4: The importance of market diversity and of valuable intermediate milestones

The crucial ingredient needed for technological progress is that demand from a segment with just the right level of purchasing power should be sufficiently high. A small population that's willing to pay exorbitant amounts won't spur investments in cost-cutting: for instance, if production costs are $10 per piece and 30 people are willing to pay $100 per piece, then pushing production costs down from $10 to $5 per piece yields a net gain of only $150 — a pittance compared to the existing profit of $2700. On the other hand, if there are 300 people willing to pay $10 per piece, existing profit is zero whereas the profit arising from reducing the cost to $5 per piece is $1500. On the third hand, people willing to pay only $1 per piece are useless in terms of spurring investment to reduce the price to $5, since they won't buy it anyway.
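The arithmetic above can be packaged into a small function; this is a minimal sketch that treats each segment as paying its full willingness to pay:

```python
# Extra profit a supplier gains from a unit-cost reduction, for a market
# segment of `buyers` who all share one willingness to pay.

def gain_from_cost_cut(buyers, willingness_to_pay, old_cost, new_cost):
    if willingness_to_pay < new_cost:
        return 0  # this segment is priced out even after the cost cut
    old_profit = buyers * max(willingness_to_pay - old_cost, 0)
    new_profit = buyers * (willingness_to_pay - new_cost)
    return new_profit - old_profit

print(gain_from_cost_cut(30, 100, 10, 5))   # 150: rich niche, weak incentive
print(gain_from_cost_cut(300, 10, 10, 5))   # 1500: mass market at the margin
print(gain_from_cost_cut(3000, 1, 10, 5))   # 0: still priced out
```

The incentive to cut costs is driven by the segment sitting right at the margin, not by the richest or the largest segment per se.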

Building on the preceding point, the market segment that plays the most critical role in pushing the frontier of technology can change as the technology improves. Initially, when prices are too high, the segment that pushes technology further would be the small high-paying elite (the early adopters). As prices fall, the market segment that plays the most critical role becomes less elite and less willing to pay. In a sense, the market segments willing to pay more are "freeriding" off the others — they don't care enough to strike a tough bargain, but they benefit from the lower prices resulting from the others who do. Also, market segments for whom the technology is still too expensive are also benefiting in terms of future expectations. Poor people who couldn't afford mobile phones in 1994 benefited from the rich people who generated demand for the phones in 1994, and the middle-income people who generated demand for the phones in 2004, so that now, in 2014, the phones are cost-effective for many of the poor people.

It becomes clear from the above that the continued operation of technological progress depends on the continued expansion of the market into segments that are progressively larger and willing to pay less. Note that the new populations don't have to be different from the old ones — it could happen that the earlier population has a sea change in expectations and demands more from the same suppliers. But it seems like the effect would be greater if the population size expanded and the willingness to pay declined in a genuine sense (see the PS). Note, however, that if the willingness to pay for the new population was dramatically lower than that for the earlier one, there would be too large a gap to bridge (as in the example above, going from customers willing to pay $100 to customers willing to pay $1 would require too much investment in research and development and may not be supported by the market). You need people at each intermediate stage to spur successive stages of investment.

A closely related point is that even though improving a technology by a huge factor (such as 1000X) could yield gains that would, on paper, justify the cost of investment, the costs in question may be too large and the uncertainty too high to justify the investment. What would make it worthwhile is if intermediate milestones were profitable. This is related to the point about gradual expansion of the market from a small number of buyers with high willingness to pay to a large number of buyers with low willingness to pay.

In particular, the vision of the Singularity is very impressive, but simply having that kind of end in mind 30 years down the line isn't sufficient for commercial investment in the technological progress that would be necessary. The intermediate goals must be enticing enough.

#5: Different market segments may face different technological challenges

There are two ends at which technological improvement may occur: the frontier end (of the highest capacity or performance that's available commercially) and the low-cost end (the lowest cost at which something useful is available). To some extent, progress at either end helps with the other, but the relationship isn't perfect. The low-cost end caters to a larger mass of low-paying customers and the high-cost end caters to a smaller number of higher-paying customers. If progress on either end complements the other, that creates a larger demand for technological progress on the whole, with each market segment freeriding off the other. If, on the other hand, progress at the two ends requires distinct sets of technological innovations, then overall progress is likely to be slower.

In some cases, we can identify more than two market segments based on cost, and the technological challenge for each market segment differs.

Consider the case of USB flash drives. We can broadly classify the market into three segments:

  • At the high end, there are 1 TB USB 3.0 flash drives worth $3000. These may appeal to power users who like to transfer or back up movies and videos using USB drives regularly.
  • In the middle (which is what most customers in the First World, and their equivalents elsewhere in the world, would consider) are flash drives in the 16-128 GB range with prices ranging from $10-100. These are typically used to transfer documents and install software, with the occasional transfer of a movie.
  • At the "low" end are flash drives with 4 GB or less of storage space. These are sometimes ordered in bulk for organizations and distributed to individual members. They may be used by people who are highly cash-constrained (so that even a $10 cost is too much) and don't anticipate needing to transfer huge files over a USB flash drive.

The cost challenges in the three market segments differ:

  • At the high end, the challenges of miniaturization of the design dominate.
  • At the middle, NAND flash memory is a critical determinant of costs.
  • At the low end, the critical factor determining cost is the fixed cost of production, including the cost of packaging. Reducing these costs would presumably require cheaper, more automated, more efficient production and packaging.

Progress in all three areas is somewhat related but not too much. In particular, the middle is the part that has seen the most progress over the last decade or so, perhaps because demand in this sector is most robust and price-sensitive, or because the challenges there are the ones that are easiest to tackle. Note also that the definitions of the low, middle, and high end are themselves subject to change. Ten years ago, there wasn't really a low or high end (more on this in the historical anecdote below). More recently, some disk space values have moved from the high end to the middle end, and others have moved from the middle end to the low end.

#6: How does the desire for more technological progress relate to the current level of a technology? Is it proportional, as per the exponential growth story?

Most of the discussion of laws such as Moore's law and Kryder's law focus on the question of technological feasibility. But demand-side considerations matter, because that's what motivates investments in these technologies. In particular, we might ask: to what extent do people value continued improvements in processing speed, memory, and hard disk space, directly or indirectly?

The answer most consistent with exponential growth is that whatever level you are currently at, you pine for having more in a fixed proportion to what you currently have. For instance, for hard disk space, one theory could be that if you can buy x GB of hard disk space for $1, you'd be really satisfied only with 3x GB of hard disk space for $1, and that this relationship will continue to hold whatever the value of x. This model relates to exponential growth because it means that the incentives for proportional improvement remain constant with time. It doesn't imply exponential growth (we still have to consider technological hurdles) but it does take care of the demand side. On the other hand, if the model were false, it wouldn't falsify exponential growth, but it should make us more skeptical of claims that exponential growth will continue to be robustly supported by market incentives.
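Here is a minimal sketch of the contrast between this proportional desire model and a satiating alternative; the multiplier k and the satiation cap are arbitrary illustrative parameters:

```python
# Two demand-side models for a technology metric x (e.g., GB per dollar).
# Proportional desire: the desired level is always a fixed multiple of the
# current level, so the incentive to improve never decays. Satiating
# desire: wants are capped, so incentives shrink once x is "good enough".

def desired_proportional(x, k=3.0):
    return k * x

def desired_satiating(x, k=3.0, cap=1_000.0):
    return min(k * x, cap)

for x in (10, 100, 1_000, 10_000):
    print(f"x={x:>6}: proportional gap {desired_proportional(x) / x:.1f}x, "
          f"satiating gap {desired_satiating(x) / x:.2f}x")
# The proportional gap stays at 3.0x forever; the satiating gap collapses
# toward (and below) 1x once the technology passes the cap.
```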

How close is the proportional desire model to reality? I think it's a bad description. I'll take a couple of examples to illustrate.

  • Hard disk space: When I started using computers in the 1990s, I worked on a computer with a 270 MB hard disk (including space for the operating system). The hard disk really did get full just with ordinary documents, spreadsheets, and a few games played on monochrome screens — no MP3s, no photos, no videos, no books stored as PDFs, and minimal Internet browsing support. When I bought a computer in 2007, it had 120 GB (105 GB accessible), and when I bought a computer last year, it had 500 GB (450 GB accessible). I can say quite categorically that the experiences are qualitatively different. I no longer have to think about disk space when downloading PDFs, books, or music — though keeping local copies of movies and videos might still give me pause in the aggregate. I downloaded a 10 GB offline version of Wikipedia, and its disk space requirements gave me only slight pause. Do I clamor for an even larger hard disk? Given that I like to store videos, movies, and offline Wikipedia, I'd be happy if the next computer I buy (maybe 7-10 years down the line?) had a few terabytes of storage. But the issue lacks anything like the urgency that running out of disk space had back in the day. I probably wouldn't be willing to pay much for improvements in disk space at the margin. And I'm probably at the "use more disk space" extreme of the spectrum — many of my friends have machines with 120 GB hard drives and are nowhere near running out. Basically, the strong demand imperative that once existed for improving hard drive capacity no longer exists (here's a Facebook discussion I initiated on the subject).
  • USB flash drives: In 2005, I bought a 128 MB USB flash drive for about $50. At the time, services like Dropbox didn't exist and the Internet wasn't very reliable, so USB flash drives were the best way to both back up and transfer files. I would often come close to running out of space on my flash drive just transferring essential items. In 2012, I bought two 32 GB USB flash drives for a total of $32. I used one of them to back up all my documents plus a number of my favorite movies, and still had a few GB to spare. The flash drives do prove inadequate for transferring large numbers of videos and movies, but those are niche needs that most people don't have. It's not clear to me that people would pay much for a 1 TB USB flash drive (a few friends I polled on Facebook listed reservation prices for one ranging from $45 to $85; currently, $85 is the approximate price of a 128 GB USB flash drive; here's the Facebook discussion). At the same time, it's not clear that lowering the cost of production for the 32 GB USB flash drive would significantly increase the number of people who buy one. On either end, therefore, the incentives for innovation seem low.

#7: Complementary innovation and high conjunctivity of the progress scenario

The discussion of the hard disk and USB flash drive examples suggests one way to rescue the proportional desire and exponential growth views. Namely, the problem isn't that people's desires aren't growing fast enough, but that complementary innovations aren't happening fast enough. In this view, if processor speeds improved dramatically, the new applications this enabled would revive the demand for extra hard disk space and NAND flash memory. Possibilities in this direction include highly redundant backup systems (including peer-to-peer backup), extensive internal logging of activity (so that any accidental changes can be easily located and undone), extensive offline caching of websites (so that temporary lack of connectivity has minimal impact on browsing experience), and applications that rely on large hard disk caches to complement memory for better performance.

This rescues continued exponential growth, but at a high price: we now need a number of different technologies to be progressing simultaneously. If any one of them slows down, demand for the others can flag. The growth scenario becomes highly conjunctive (a lot of particular things need to happen simultaneously), and it's highly unlikely to remain reliably exponential over the long run.
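To see how quickly conjunctive scenarios become improbable, here is a toy calculation in Python. The probabilities are chosen purely for illustration: if each of n complementary technologies independently keeps pace in a given period with probability p, the chance that the whole conjunction holds for t consecutive periods is (p^n)^t.

    # Toy model of a conjunctive growth scenario: exponential progress requires
    # all n complementary technologies to keep pace, each doing so in any given
    # period with independent probability p. All numbers are illustrative.

    def conjunction_survival(p=0.9, n=4, periods=10):
        per_period = p ** n            # all n technologies keep pace this period
        return per_period ** periods   # ...and do so in every period

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            print(f"{n} linked technologies: chance the trend survives "
                  f"10 periods = {conjunction_survival(n=n):.2%}")

Even with each technology fairly reliable (p = 0.9 per period), eight linked technologies give the trend only about a 0.02% chance of holding for ten periods; that is the sense in which conjunctive scenarios are fragile.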

I personally think there's some truth to the complementary innovation story, but the flagging of demand in absolute terms is also an important part of it. In other words, even if home processors did get a lot faster, it's not clear that the creative applications this would enable would generate enough demand to spur innovation in other sectors. And even if that's true at the current margin, I'm not sure how long it will remain true.

This blog post was written in connection with contract work I am doing for the Machine Intelligence Research Institute, but represents my own views and has not been vetted by MIRI. I'd like to thank Luke Muehlhauser (MIRI director) for spurring my interest in the subject, Jonah Sinick and Sebastian Nickel for helpful discussions on related matters, and my Facebook friends who commented on the posts I've linked to above.

Comments and suggestions are greatly appreciated.

PS: In the discussion of different market segments, I argued that the presence of larger populations with lower willingness to pay might be crucial in creating market incentives to further improve a technology. It's worth emphasizing here that the absolute size of the incentive depends on the population more than on the willingness to pay. If the product cost falls from $10 to $5, the extra profit from a population of 300 people willing to pay at least $10 is 300 × $5 = $1500, regardless of the precise amount each is willing to pay. But as an empirical matter, accessing larger populations requires going to lower levels of willingness to pay (that's what it means to say that demand curves slope downward). Moreover, the current distribution of disposable wealth (and of willingness to experiment with technology) around the world is such that population size increases hugely as we go down the rungs of willingness to pay. Finally, the proportional gain from reducing production costs is higher for populations with lower willingness to pay, and proportional gains may often be better proxies for the incentives to invest than absolute gains.
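Here is that arithmetic as a small Python sketch. The 300-person, $10-to-$5 figures are from the paragraph above; the low-end segment's population and prices are hypothetical:

    # Innovation incentives across market segments. The absolute incentive is
    # population * per-unit cost reduction; the relative gain is that reduction
    # as a share of the unit cost. Low-end numbers are made up for illustration.

    segments = [
        # (name, population, unit cost, per-unit cost reduction)
        ("high end (from the text)", 300, 10.0, 5.0),  # 300 * $5 = $1,500
        ("hypothetical low end", 30_000, 2.0, 1.5),
    ]

    for name, population, unit_cost, reduction in segments:
        absolute = population * reduction
        relative = reduction / unit_cost
        print(f"{name}: absolute incentive ${absolute:,.0f}, "
              f"relative cost reduction {relative:.0%}")

With these illustrative numbers, the larger population dominates the absolute incentive ($45,000 versus $1,500) even though its willingness to pay is far lower, and the relative cost reduction (75% versus 50%) is also larger at the low end.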

I made some minor edits to the TL;DR, replacing "downward-sloping supply curves" with "downward-sloping demand curves" and replacing "technological progress" with "exponential technological progress". Apologies for not having proofread the TL;DR carefully before.

Weekly LW Meetups

2 FrankAdamek 11 April 2014 04:09PM

This summary was posted to LW main on April 4th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Meetup: Philadelphia, April 12, 1PM

2 NancyLebovitz 11 April 2014 08:24AM

WHEN: 12 April 2014 1:00 PM

WHERE: Philadelphia

The meetup is at Nam Phuong (11th and Broad) at 1:00 PM. This is a Saturday (change from the previous Sunday meetups).

Discussion prompt: http://slatestarcodex.com/2013/03/17/not-just-a-mere-political-issue/

Discussion group/mailing list

Meetup : Phoenix/ASU Less Wrong

1 Danny_Hintze 10 April 2014 05:39AM

Discussion article for the meetup : Phoenix/ASU Less Wrong

WHEN: 12 April 2014 10:00:00AM (-0700)

WHERE: 300 E Orange Mall, Tempe, AZ 85281

We will be meeting up at Hayden Library. We're going to try a Saturday morning meetup to mix up the states of mind and see if we can bring some new people out of the woodwork.

We will probably continue what has so far been a productive search for topics of disagreement. We will also likely be discussing the Data Science Coursera study group.

Discussion article for the meetup : Phoenix/ASU Less Wrong

Meetup : Urbana-Champaign Scantily Attended Meetups Rerun

1 Mestroyer 10 April 2014 05:06AM

Discussion article for the meetup : Urbana-Champaign Scantily Attended Meetups Rerun

WHEN: 13 April 2014 12:00:00PM (-0500)

WHERE: 300 S Goodwin Ave Apt 102, Urbana.

Two meetups I can remember didn't have enough people at them to make them what they could have been. By the power of disjunction, I bet that one of them was only scantily attended because people couldn't make it that week, even though it is a worthwhile topic.

So, two topics for this meetup: Nomic (a game where you vote to change any of the rules) and Folk Wisdom (how to extract real wisdom from it).

Discussion article for the meetup : Urbana-Champaign Scantily Attended Meetups Rerun

Meetup : Washington DC Games meetup

2 rocurley 10 April 2014 04:35AM

Discussion article for the meetup : Washington DC Games meetup

WHEN: 13 April 2014 03:00:00PM (-0400)

WHERE: National Portrait Gallery, Washington, DC 20001, USA

We'll be meeting to hang out and play games.

Discussion article for the meetup : Washington DC Games meetup
