
Meetup : West LA—Honor and Glory

OpenThreadGuy 24 April 2014 10:24PM

Discussion article for the meetup : West LA—Honor and Glory

WHEN: 30 April 2014 06:59:00PM (-0700)

WHERE: 11066 Santa Monica Blvd, Los Angeles, CA

How to Find Us: Go into this Del Taco. I will bring a Rubik's Cube. The presence of a Rubik's Cube will be strong Bayesian evidence of the presence of a Less Wrong meetup.

Parking is completely free. There is a sign that claims there is a 45-minute time limit, but it is not enforced.

Discussion: Honor, next to courage, is the manliest virtue, but manliness and chivalry are dead, and so one wonders (when one is not me) whether honor matters anymore, whether it brings goodness to the world or whether it is merely another status signal. I claim that it is still a virtue even in these depraved and decadent times, that the function of honor is a commitment device: it is the ability to make promises, to fail gracefully if one should fail, to be punished with harm rather than death when one makes a reasonable mistake, and to grow in reputation and in stature. But while it can be true and beautiful, it can be exploited, by mountebanks and scavengers alike. For honor is inextricably tied to reputation, and no commitment device can give a man enough control over what others think of him to reliably prevent atrocities.

Recommended Reading:

NB: No prior knowledge of or exposure to Less Wrong is necessary; this will be generally accessible. However, I will tolerate no slights nor aspersions on my ancestry.

You must thoroughly research this.


How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

mszegedy 24 April 2014 09:41PM

It took reading The Things They Carried for the third time for me to realize that it contained something very valuable to rationalists. In "The Logical Fallacy of Generalization from Fictional Evidence," EY explains how using fiction as evidence is bad not only because it's deliberately wrong in particular ways to make it more interesting, but more importantly because it does not provide a probabilistic model of what happened, and gives at best a bit or two of evidence that looks like a hundred or more bits.

In The Things They Carried, Tim O'Brien not only explains (more or less) this, but also has his own solution to the problem that actually works, i.e. gives the reader a useful probabilistic model of what happened in a way that actually interests the reader. He does this by telling his stories many times, changing significant things about them. Readers are not inclined to read a list of probabilities, but they are inclined to read a bunch of short stories. He talks about this practice a lot in the book itself, writing, "All you can do is tell it one more time, patiently, adding and subtracting, making up a few things to get at the real truth. … You can tell a true war story if you just keep on telling it." He always says "war story", but the principle generalizes. At one point, he has a character represent the forces that act on conventional writing, telling a storyteller that he cannot say that he doesn't know what happened, and that he cannot insert any analysis.

O'Brien also writes about a lot of other things I don't want to mention more than briefly here, such as the specific ways in which the model that conventional war stories give of war is wrong, and specific ways in which the audience misinterprets stories. I recommend the book very much, especially if you think writing "tell multiple short stories" fiction is a great idea and want to do it.

I apologize if this post has been made before.

Meetup : Washington DC Singing Meetup

0 rocurley 24 April 2014 05:27PM

Discussion article for the meetup : Washington DC Singing Meetup

WHEN: 27 April 2014 03:00:00PM (-0400)

WHERE: National Portrait Gallery, Washington DC

Same as before, but this time for real!


Open Thread April 23 - 29, 2014

-6 pinyaka 24 April 2014 04:50PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

The Cryonics Strategy Space

3 Froolow 24 April 2014 04:11PM

In four paragraphs I’m going to claim, “It is highly likely reading this article will increase your chance of living forever”. I’m pretty sure you won’t disagree with me. First, however, I’d like to talk about how much I don’t like Monopoly.

I play a lot of Monopoly, because I am forced into it – against my will – by friends, family, work-related bonding etc. I understand this is a controversial opinion, but I really, really don’t like Monopoly – there is very little scope for creative play. In fact, there is so little scope for creative play that I spotted I could win at Monopoly, in a probabilistic sense, by going online and looking up the optimal allocation of houses to properties and the valuation of houses in the ‘bargaining’ mid-game. For a while, the fact that nobody but me played ‘perfect’ Monopoly meant I won nearly every game, and I felt much better about playing because games tended to conclude more quickly when one player was a soulless, utility-hungry robot – it left me more time to concentrate on the stuff I actually enjoyed, which was socialising.

But Monopoly, despite being an almost completely deterministic dice-rolling game, hides unexpected complexity; a salutary lesson for an aspiring rationalist. Winning the game was completely secondary to my actual aim, which was forcing the game to take as little time as possible. I realised a few months ago that it didn’t matter who won, as long as somebody won quickly, and it was very unlikely the strategy optimised for one player was the same as the strategy optimised for all of them. As a consequence, I reran the computer simulations I built and developed an optimal ‘turn reducing’ strategy (it won’t surprise you to know that the basic rule is ‘play with as much variance as you possibly can’; having one maverick player lowers the average number of turns to the first bankruptcy, and bankruptcy is gamebreaking in Monopoly).
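As an illustration of the variance point, here is a toy sketch of the kind of simulation involved. To be clear, this is not my actual Monopoly engine: the model and every number in it are invented. Each player's wealth takes a symmetric random walk whose step size stands in for how much variance their strategy courts, and the game 'breaks' at the first bankruptcy:

    import random

    def turns_to_first_bankruptcy(step_sizes, bankroll=1500, max_turns=10000):
        # Toy model: each player's wealth takes a symmetric random walk;
        # a bigger step stands in for a higher-variance strategy. Returns
        # the turn at which any player first goes bankrupt (bankruptcy is
        # game-breaking, as in Monopoly).
        wealth = [bankroll] * len(step_sizes)
        for turn in range(1, max_turns + 1):
            for i, step in enumerate(step_sizes):
                wealth[i] += random.choice((step, -step))
                if wealth[i] <= 0:
                    return turn
        return max_turns  # capped: nobody went bankrupt in time

    def mean(xs):
        return sum(xs) / len(xs)

    random.seed(0)
    cautious = mean([turns_to_first_bankruptcy([50, 50, 50, 50]) for _ in range(500)])
    maverick = mean([turns_to_first_bankruptcy([400, 50, 50, 50]) for _ in range(500)])
    print("all cautious: %.0f turns; one maverick: %.0f turns" % (cautious, maverick))

Even in this crude model, the table with one high-variance player reaches its first bankruptcy far sooner on average than the table of four cautious players.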

I agree that I could lower the number of turns even more by simply flipping over the board and storming out when someone suggests I play, but let’s assume I am also trying to balance a nebulously-defined but nonetheless real value of ‘not losing all my friends’, which is satisfied when I play a risky-but-exciting strategy and not satisfied when I constantly demand to play games I find fun. The point is, I had what Kuhn calls a ‘paradigm shift’ – once I realised that my goal when playing Monopoly was not to win as quickly as possible, but to ensure anyone wins as quickly as possible I was able to greatly, greatly increase my utility with no troublesome side-effects.

I’m relating this story to you because noticing my aims and strategy weren’t perfectly aligned improved my experience of Monopoly without doing anything difficult like hacking my motivation, and I’m sure you have similar stories of paradigm shifts improving your experience of a certain event (I hear people talk about the day they discovered coding was fun once they learned the rules, or maths was awesome once they got past the spadework. That has yet to happen to me, but my experience of thrashing my friends at a children’s board game means I can totally relate). What’s striking about these paradigm shifts is how obvious the conclusion seems in retrospect, and how opaque it seemed before the lightbulb moment. With that in mind, let me make a claim you might find concerning: “The aims and strategy of people who want to live forever are highly likely to be out of alignment”. In particular, from what I read on LW and other pro-cryo communities, the strategy-space explored is vastly smaller than the strategy-space of all possible cryonics strategies. Indeed, the strategy space explored by people who want to live forever is – in some ways – smaller than that explored by me while trying to get out of playing tedious board games. I’m going to talk about that strategy space a little in this article, mostly with the aim of triggering a ‘lightbulb moment’ – if there are any to be had – in readers a lot more committed to cryonics than me. To draw an obvious conclusion: if there are such lightbulb moments to be had, it is highly likely reading this article will increase your chance of living forever by increasing the size of the cryonics strategy space you consider.

That the strategy space explored is small is pretty hard to disagree with; there is the option to freeze or not-freeze in the first place, the choice between Alcor and The Cryonics Institute (or possibly KryoRuss), the choice of your full body or just your head, and – maybe – the question of whether to hang on for plastination or begin investing in cryonics insurance now. As far as I can tell, more ‘fringe’ options are not discussed with very much regularity. A search of the LW archives turned up this thread, which was along similar lines but didn’t trigger anything like the discussion I thought it would; this surprises me – when the ‘prize’ for picking a marginal improvement in your cryonics strategy that doubles your chance of revivification is that you double your chance of living forever, I’m highly surprised the cryonics strategy space has not been exhaustively searched at this point, certainly amongst people who turn rationality into an art form.

For example, there are at least three ways I can think of to raise your chance of being successfully frozen:

  • (Sensible) Redundancy cryonics: Make redundant copies of the information you intend to preserve. For example, MRI scans of your brain and detailed notes on your reactions to certain stimuli. In the event that current technology almost-but-not-quite preserves information in the brain, your notes and images might help future scientists reconstruct your personality. You might even go further and send hippocampal slices to multiple cryonics facilities, gambling on the fact that the increased probability of at least one facility’s survival outweighs the lower probability of revivification from a single hippocampal slice.

  • (Sensible) Diversified cryonics: In addition to cryonics, employ one or more other strategies which might result in you living forever but which are as completely uncorrelated with the success or failure of cryonics as you can manage, given that ‘the complete destruction of the earth on a molecular level by a malevolent alien race’ correlates with many bad outcomes and few good ones. I actually have a list of about ten of these, which I will happily make available on request (i.e. I’ll write another discussion post about them if people are interested), but I don’t want the whole discussion of this post to be about this one single issue, which it was when I tried the content of the post out on my friend. This is about the cryonics strategy-space only, not the living-forever strategy space, which is much bigger.

  • (Inadvisable) Suicide cryonics: Calculate the point at which your belief in the utility of cryonics outweighs the expected utility of the rest of your life (this will likely come a few seconds before the average age of death in your demographic). Kill yourself in the most cryonics-friendly way you can imagine, which I suspect will involve injecting yourself with toxic cryoprotectants on top of a platform suspended over a large vat of liquid nitrogen, so that when you collapse, you collapse into the nitrogen and freeze yourself (which should limit the amount of time the dead brain is at body temperature). If you are not concerned about your body, you should also try to decapitate yourself as you fall, to raise the surface-area-to-volume ratio of the object you are trying to freeze.

Here are three ways that raise your chance of successfully remaining frozen:

  • (Sensible) Positive cryonics: Lobby for laws that ensure the government protects your body. Either lobby for these laws directly (I talked about a ‘right to not-death’ in my last post on this subject) or promise to report to future!USA’s equivalent of the Department of Defence to see if they can weaponise any microbes on you after you’re unfrozen. Remember that we’re talking in terms of expected utility here; the chance that such lobbying is effective is minute, but it might be an effective way to spend your twilight years if you would otherwise be unproductive.

  • (Sensible, but worryingly immoral) Negative cryonics: Sabotage as many cryonics labs as possible before going under, or lobby for laws that make it illegal to freeze yourself which only come into force after you die. This raises the chances that you are the James Bedford of modern cryonics and society has a particular interest in keeping your body safe. Note that though sabotaging an entire lab is difficult and illegal, trashing the field of cryonics itself is pretty easy and socially high-status because people already think it’s pretty weird – you’d predict that at least some detractors of cryonics are actually extremely pro-cryonics and trying to raise their chances of being kept frozen as a cultural curiosity rather than as only one of millions of corpsicles.

  • (Sensible if your name is Lex Luthor, otherwise implausible) Ninja cryonics: Build a cryonics pod yourself, with enough liquid nitrogen to keep you frozen for several thousand years, known only to the highly trusted individual who transfers your cryo-preserved body from Alcor to this location (if you could somehow get yourself into an unprotected far-earth orbit after freezing, this would be perfect). Hope that your pod is discovered by friendly future-humans before you run out of coolant. This is insurance against the possibility that society destroys all cryonics labs somehow and then later regrets it (although, now I think about it, someone following this strategy certainly wouldn’t tell anyone about it on a public forum…)

Here are three ways that raise your chance of successfully being revived:

  • (Sensible if legal) Compound-interest cryonics: Devote a small chunk of your resources towards a fund which you expect to grow faster than the rate of inflation, with exponential growth (the simplest example would be a bank account with a variable rate that pays epsilon percent higher than the rate of inflation in perpetuity). Sign a contract saying the person(s) who revive you receive the entire pot. Since after a few thousand years the pot will nominally contain almost all the money in the world, this strategy will eventually incentivise almost the entire world to dedicate itself to seeking your revival. If postscarcity happens before unfreezing, the strategy will not work, but it merely collapses into the conventional cryonics problem and therefore costs you no more than the opportunity cost of spending the capital in the fund before you die. (Although apparently this is illegal.) A toy calculation of how fast such a fund compounds appears after this list.

  • (Sensible) Cultural-value cryonics: Freeze yourself with something which is relatively cheap now, but which you predict might be worth a lot of money in the future. I suspect that – for example – rare earth metals or gold might be a decent guess at something that will increase in value whatever society does, but the real treasure trove will be things like first editions of books you expect might become classics in the future, original paintings by artists who might become very trendy in the 25th Century, or photographs of an important historic event which will become disputed or lionised in the future (my best bet would be anything involving the relationship between China and America if we’re talking a few centuries, and pre-technology parts of Africa if we’re talking millennia). It’s hard to believe even a post-singularity society won’t have some social signalling remaining, so you’ve got a respectable chance of finding a buyer for these artefacts. These fantastically valuable artefacts will be used to pay your way in a society where – thanks to the Flynn effect – you will have an IQ which breaks the curve at the ‘dangerously stupid’ end, and you might not be able to survive otherwise. Be careful nobody knows you’re doing this, otherwise your cryopod will be raided like an Egyptian tomb! Even disregarding this financial advice, it might be a good idea to ensure you freeze yourself with e.g. a beloved pet, or the complete works of Shakespeare. This ensures that even if future society is totally different to what you were expecting, you will still have some information-age artefacts to protect you from culture shock.

  • (Inadvisable and high-risk) Game-theory cryonics: Set up an alarm on your cryonics pod that thaws you out after five hundred years. This is insurance against the possibility that society is able to unfreeze you but chooses not to, since no society would just let you die (you hope). You could go more supervillain-y than this by planting a deadly bomb somewhere, timed to go off in five hundred years unless you enter a 128-digit disarming key. This should incentivise society to develop revivification processes as a matter of urgency. Bear in mind that if it is easier for future society to develop extremely strong counter-cryptography or radiation shielding, your plan may backfire, as research that would have been undertaken in cryopreservation is redeployed to stop your diabolical scheme.
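To see why the compound-interest option above is potent, here is the promised toy compounding calculation. It assumes, heroically, a fund that reliably pays a fixed margin above inflation forever; the 1% margin and the horizons are invented for illustration:

    def real_multiple(epsilon_percent, years):
        # Inflation-adjusted growth of a fund paying epsilon percent
        # above inflation, compounded annually.
        return (1 + epsilon_percent / 100.0) ** years

    for years in (100, 500, 1000, 2000):
        print("%5d years at 1%% real: %.3gx" % (years, real_multiple(1.0, years)))

Even at a bare 1% real return, the fund multiplies roughly 2.7-fold in a century, about 21,000-fold in a millennium, and about 4×10^8-fold in two millennia – the sense in which it nominally comes to swallow almost all the money in the world, provided the legal and institutional assumptions hold.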

I think most of these strategies have never been written about before, and those that have been written about have all been throwaway thought experiments on LW. Given that the space of cryonics strategies is much bigger than the region cryonics advocates appear to instinctively gravitate towards, I conclude that it is very unlikely there has been a serious effort to optimise the cryonics process beyond the scientific advances made by Alcor (and hence it is very unlikely we have all hit upon the optimal strategy by chance). This is especially true because the optimal strategy in some cases depends on the probability that the future resembles certain kinds of predictions, and I know people on LW disagree over those predictions. For example, the ratio of culturally-valuable artefacts to sanity-preserving artefacts you should take with you probably depends on the relative likelihood you assign that a post-scarcity or post-singularity world will be the one to revive you. I’m not in a very good position to make that particular judgement myself, but I am in a good position to say that there is a very real opportunity cost to considering a narrow strategy space when considering life-extending strategies, just as there is an opportunity cost to considering over-narrow Monopoly strategies. In the first case, the impact of your decision might result in you throwing your life away. In the second, it only feels like it does.

Less Wrong Business Networking Google Group

3 moridinamael 24 April 2014 02:45PM

Following on JoshuaFox's thread polling for interest in business networking between Less Wrong community members, a Less Wrong Networking Google group has been created. If you're interested in discovering potential business opportunities with other Less Wrong users – people with whom you may have reason to assume you share some philosophical and ideological alignment – please consider joining the group.

As Gunnar_Zarncke proposed, please consider modifying your user page to indicate your participation.

Please tell me if there's a best practice I should be doing with regards to this Google group that I'm not doing.

Meetup : Sunday Meetup

0 StonesOnCanvas 24 April 2014 02:42AM

Discussion article for the meetup : Sunday Meetup

WHEN: 04 May 2014 04:00:00PM (-0400)

WHERE: Bidwell Park Bidwell and Elmwood, Buffalo, NY

Buffalo-area Less Wrong meetups are held on the first Sunday and third Thursday of every month (although you should always check the meetup page for up-to-date information: http://www.meetup.com/Less-Wrong-Buffalo/).

Hi everyone, I was thinking we could try holding a meetup outdoors for once (weather permitting) at Bidwell Park. If it's raining, we'll meet at the Elmwood Panera Bread instead.

Oftentimes, we use "obviously wrong" beliefs as examples of what to avoid and why having correct beliefs is important. Beliefs like homeopathy or creationism are really tempting punching bags, but as Scott Alexander points out in "The Cowpox of Doubt" (http://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/), focusing on these easy targets can make it seem like most wrong beliefs are easy to spot and easy to avoid. With this in mind, let us kick off the spring with our own version of a spring cleaning: "epistemic spring cleaning."


LessWrongWiki User Pages Underutilized; Tag Proposal

5 Gunnar_Zarncke 23 April 2014 09:50PM

I think the LessWrong user pages are underutilized. There isn't even a wiki page describing them (or at least I can't find one; otherwise I would have linked it here).

The user page is what is shown when you click on a username, as you would do to see that user's other contributions and karma, or to send a personal message. These user pages are maintained and editable in the wiki but embedded in the blog view. This link makes these pages highly visible in everyday LW browsing and could create an awareness of LWers' preferences.

Example: me in LW vs. me in LWWiki

Currently only 3 of the top ten posters have one (including EY). And this despite it being so easy to create them via the LWWiki (switch to the Wiki via the link in the nav bar and then click on your name). I guess it's partly because the sync feature is relatively new.

JoshuaFox recently proposed to indicate your interest in business networking on the user page. But that is only one piece of information you could put there. Other information I'd like to see includes the tags proposed at the LW Community Weekend Berlin, which could indicate:

  • your openness to personal messages
  • your willingness to answer questions (on specific topics); one special sub-case might be willingness to proofread posts of non-native speakers (the welcome page mentions four people explicitly, three of whom have not posted recently)
  • whether you operate under Crocker's rules (a tag seen very often on the Berlin badges)
  • other information of this kind, like offers of help, dating, ...
ADDED: ete provided a template:

I've made a template for UserInfo. Very open to suggestions on parameters, default text, ordering, category names. It's a wiki, so you're welcome to improve it if you feel you have something to add, or let me know and I'll do the editing.

Try it out by adding {{UserInfo |network= |questions= |proofread= |messages= |helpwith= |crocker= }} to your user page, or see the quick documentation for more details.

Meetup : Urbana-Champaign, Consciousness

0 Mestroyer 23 April 2014 04:54PM

Discussion article for the meetup : Urbana-Champaign, Consciousness

WHEN: 27 April 2014 12:00:00PM (-0500)

WHERE: 300 S Goodwin Ave Apt 102, Urbana

Yes, really.

WHAT: What is consciousness? What things are conscious?

Recommended reading: http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/ http://www.utilitarian-essays.com/consciousness.html

Feel free to post and add anything to the recommended reading, as long as it is short enough to read before Sunday. Don't post entire Hofstadter books.

WHERE: 300 S Goodwin Ave Apt 102, Urbana. The door directly to my ground-floor apartment, on which you can knock and I will hear it, is at the North-West corner of the building. Do not attempt to enter through the building's main door, because that requires keycard access, and I will not be waiting there to let you in. If you have trouble getting in, call me at (907) 590 0079.

WHEN: 12pm Sunday.


Meetup : Less Wrong Montreal - Easy Lifehacks

1 bartimaeus 23 April 2014 04:42PM

Discussion article for the meetup : Less Wrong Montreal - Easy Lifehacks

WHEN: 28 April 2014 07:00:00PM (-0400)

WHERE: 3459 Mctavish, Montréal, QC

There are lots of little things you can do to gain massive improvements in your life, and some of these tricks aren't widely known. Let's pool together any life hacks we've accumulated over the years, for the benefit of everyone. Hopefully, everyone will leave with something new to try!

The meetup is in a different room than the last few times; I'll update the event once I know exactly where it is. If anyone can't find it, PM me and I'll give you my phone number.


Ergonomics Revisited

4 diegocaleiro 22 April 2014 09:57PM

Continuation of: Spend Money on Ergonomics, by Kevin

 

Three years have elapsed since Kevin wisely told us to spend money on treating our bodies well. It may be time to check for new gadgets, to verify what has worked and what has not, etc.

If you have purchased an item for this purpose, share your experience here; if you intend to buy one and don't know which, ask here.

Nick Bostrom uses a mouse that looks like a plane controller joystick. 

I've seen keyboards that bend sideways, that are concave, that are convex, and that look like a sphere. 

At FHI, dozens of books are used so that computer screens stay at eye level or above. 

But I am no expert and I have not looked myself, nor would know how to. So please share in the comments the best knowledge about:

Keyboards

Mice

Chairs

Balls to sit on

Pillows

Beds/Mattresses etc...

Screens - Size, position, brightness etc... 

Other household office items - Stairs, Handles, Shower etc... 

 

Keeping organs alive on their own [link]

1 polymathwannabe 22 April 2014 05:45PM

"A new medical device is keeping hearts warm and beating during transport, something that could be a major breakthrough in transplant history."

Video (the episode contains other news as well):

http://www.aljazeera.com/programmes/techknow/2014/04/heart-box-201442013591803545.html

The track record of survey-based macroeconomic forecasting

3 VipulNaik 22 April 2014 04:57AM

I'm interested in forecasting, and one of the areas where plenty of forecasting has been done is macroeconomic indicators. This post looks at what's known about macroeconomic forecasting.

Macroeconomic indicators such as total GDP, GDP per capita, inflation, unemployment, etc. are reported through direct measurement every so often (on a yearly, quarterly, or monthly basis). A number of organizations publish forecasts of these values, and the forecasts can eventually be compared against the actual values. Some of these forecasts are consensus forecasts: they involve polling a number of experts on the subject and aggregating the responses (for instance, by taking an arithmetic mean or geometric mean or appropriate weighted variant of either). We can therefore try to measure the usefulness of the forecasts and the rationality of the forecasters.
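For concreteness, here is a minimal sketch of those three aggregation rules applied to a made-up panel of forecasts. The numbers and weights are invented; each real consensus publisher has its own methodology:

    import math

    def arithmetic_mean(xs):
        return sum(xs) / len(xs)

    def geometric_mean(xs):
        # Requires strictly positive inputs.
        return math.exp(sum(math.log(x) for x in xs) / len(xs))

    def weighted_mean(xs, ws):
        # ws might, for example, reflect each forecaster's past accuracy.
        return sum(w * x for x, w in zip(xs, ws)) / sum(ws)

    panel = [2.1, 2.4, 1.8, 3.0, 2.2]    # hypothetical GDP-growth forecasts (%)
    weights = [1.0, 2.0, 1.0, 0.5, 1.5]  # hypothetical reliability weights
    print(arithmetic_mean(panel))        # 2.3
    print(geometric_mean(panel))         # ~2.27
    print(weighted_mean(panel, weights)) # 2.25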

Why might we want to measure this usefulness and rationality? There could be two main motivations:

  1. A better understanding of macroeconomic indicators and whether and how we can forecast them well.
  2. A better understanding of forecasting as a domain as well as the rationality of forecasters and the inherent difficulties in forecasting.

My interest in the subject stems largely via (2) rather than (1): I'm trying to understand just how valuable forecasting is. However, the research I cite has motivations that involve some mix of (1) and (2).

Within (2), our interest might be in studying:

  • The usefulness and rationality of individual forecasts (that are part of the consensus) in absolute terms.
  • The usefulness and rationality of the consensus forecast.
  • The usefulness and rationality of individual forecasts relative to the consensus forecasts (treating the consensus forecast as a benchmark for how easy the forecasting task is).

The macroeconomic forecasting discussed here generally falls in the "near but not very near future" category in the framework I outlined in a recent post.

Here is a list of regularly published macroeconomic consensus forecasts. The table is taken from Wikipedia (I added the table to Wikipedia).

  • Consensus Economics,[2][3] publishing the Consensus Forecasts™: more than 700 individuals surveyed;[2][3] 85 countries covered[2][3] (member countries of the G-7 industrialized nations, Asia Pacific, Eastern Europe, and Latin America);[2][3] published monthly;[2][3] forecasts made up to 24 months ahead; started October 1989.[4]
  • FocusEconomics,[5] publishing the FocusEconomics Consensus Forecast:[6] several hundred individuals surveyed;[6] more than 70 countries covered[6] (Asia, Eastern Europe, Euro Area, Latin America, Nordic economies);[6] published monthly;[6] forecast horizon not stated; started 1998.[7]
  • Blue Chip Publications division of Aspen Publishers,[8] publishing the Blue Chip Economic Indicators:[8] 50+ individuals surveyed;[8] 1 country covered (United States); published monthly;[8] forecast horizon not stated; started 1976.[8]
  • Federal Reserve Bank of Philadelphia, publishing the Survey of Professional Forecasters:[9][10] a few hundred individuals surveyed; 1 country covered (United States); published quarterly;[9] forecasts for the next 6 quarters, plus a few more long-range forecasts; started 1968.[9][10]
  • European Central Bank, publishing the ECB Survey of Professional Forecasters:[11][12] number of individuals surveyed not stated; covers Europe; published quarterly;[11] forecasts for two quarters and six quarters from now, plus the current and next two years; started 1999.[11][12]
  • Federal Reserve Bank of Philadelphia, publishing the Livingston Survey:[13] number of individuals surveyed not stated; 1 country covered (United States);[13] published bi-annually (June and December every year);[13] forecasts for two bi-annual periods (6 months and 12 months from now), plus some forecasts for two years; started 1946.[13]

Strengths and weaknesses of the different surveys

  • Time series available: The surveys that have been around longer, such as the Livingston Survey (started 1946), Survey of Professional Forecasters (started 1968) and the Blue Chip Economic Indicators (started 1976) have accumulated a larger time series of data. This allows for more interesting analysis.
  • Number of regions for which macroeconomic indicators are forecast: The surveys that cover a larger number of countries, such as the Consensus Forecasts™ (85 countries) and the FocusEconomics Consensus Forecast (over 70 countries), can be used to study hypotheses about differences in the accuracy of and bias in forecasts across countries.
  • Time that people are asked to forecast ahead, frequency of forecast, and number of different forecasts (at different points in time) for the same indicator: Surveys differ in how far ahead people have to forecast, how frequently the forecasts are published, and the number of different times a particular quantity is forecast. For instance, the Consensus Forecasts™ includes forecasts for the next 24 months, and is published monthly. So we have 24 different forecasts of any given quantity, with the forecasts made at time points separated by a month each. This is at the upper end. The Survey of Professional Forecasters publishes at a quarterly frequency and includes macroeconomic indicator forecasts for the next 6 quarters. This is a similar time interval to the Consensus Forecasts™, but a smaller number of forecasts for the same quantity because of a lower frequency of publication.
  • Evaluation of individual versus consensus forecasts: For some forecasts (such as those published by the Survey of Professional Forecasters), the published information includes individual forecasts, so we can measure the usefulness and rationality of individual forecasts rather than that of the consensus forecast. For others, such as Consensus Forecasts™, only the consensus is available, so only more limited tests are possible. Note that the question of the value of individual forecasts and the question of the value of the consensus forecast are both important questions.

The history of research based on consensus forecast sources

There has been a gradual shift in what consensus forecasts are used in research studying forecasts:

  • Early research on macroeconomic forecasting, in the 1970s, began with a few people collecting their own data by polling experts.
  • In the 1980s, the Livingston bi-annual survey was used as a major data source by researchers.
  • In the late 1980s and through the 1990s, researchers switched to the Survey of Professional Forecasters and the Blue Chip Economic Indicators Survey, with the focus shifting to the latter more over time. Note that the Blue Chip Economic Indicators had been started only in 1976, so it's natural that it took some time for people to have enough data from it to publish research.
  • In the 2000s, research based on Consensus Forecasts™ was added to the mix. Note that Consensus Economics started out in 1989, so it's understandable that research based on it took a while to start getting published.

There has also been a gradual shift in views about forecast accuracy:

  • Early literature in the 1970s and early 1980s found evidence of inaccuracy and bias in forecasts.
  • In the 1990s, as the literature started looking at forecasts that polled more people and had higher frequency, the view shifted in the direction of consensus forecasts having very little inaccuracy and bias, whereas the topic of bias in individual forecasts is more hotly contested.

Tabulated bibliography (not comprehensive, but intended to cover a reasonably representative sample)

Each entry gives the paper, the forecast data used, and its conclusion about the efficiency and bias of individual and consensus forecasts:

  • McNees (1978), own data (3 people, 4 quarterly forecasts): Some forecasts are biased, and forecasters are not rational.
  • Figlewski and Wachtel (1981), Livingston Survey: Inflationary expectations are more consistent with the adaptive expectations hypothesis than the rational expectations hypothesis. The paper was critiqued by Dietrich and Joines (1983), and the authors responded in Figlewski and Wachtel (1983).
  • Keane and Runkle (1990), Survey of Professional Forecasters (called the ASA-NBER survey at the time): Individual forecasters appear rational, although rationality is not established conclusively. Methodological problems are noted with past literature arguing for irrationality and bias in individual forecasts.
  • Swidler and Ketchler (February 1990), Blue Chip Economic Indicators: Consensus forecasts are unbiased and efficient. Does not appear to look at individual forecasts.
  • Batchelor and Dua (November 1991), Blue Chip Economic Indicators: Consensus forecasts are unbiased, but some individual forecasts are biased.
  • Ehrbeck and Waldmann (1996), North-Holland Economic Forecasts: From the abstract: "Professional forecasters may not simply aim to minimize expected squared forecast errors. In models with repeated forecasts the pattern of forecasts reveals valuable information about the forecasters even before the outcome is realized. Rational forecasters will compromise between minimizing errors and mimicking prediction patterns typical of able forecasters. Simple models based on this argument imply that forecasts are biased in the direction of forecasts typical of able forecasters. Our models of strategic bias are rejected empirically as forecasts are biased in directions typical of forecasters with large mean squared forecast errors. This observation is consistent with behavioral explanations of forecast bias."
  • Stark (1997), Survey of Professional Forecasters: Attempts to replicate, for the Survey of Professional Forecasters, the results of Lamont (1995) for the Business Week survey that forecasters get more radical as they gain experience. Finds that the results do not replicate, and posits an explanation for this.
  • Laster, Bennett, and Geoum (1999), Blue Chip Economic Indicators: Individual forecasters are biased. The paper describes a theory for how such bias might be rational given the incentives facing forecasters. The empirical data is a sanity check rather than the focus of the paper.
  • Batchelor (2001) (ungated early draft here), Consensus Forecasts™: Does not discuss bias in Consensus Forecasts™ per se, but notes that it is better than the IMF and OECD forecasts and that incorporating information from those forecasts does not improve upon Consensus Forecasts™.
  • Ottaviani and Sorensen (2006), no forecast data (discusses a general theoretical model): From the abstract: "We develop and compare two theories of professional forecasters’ strategic behavior. The first theory, reputational cheap talk, posits that forecasters endeavor to convince the market that they are well informed. The market evaluates their forecasting talent on the basis of the forecasts and the realized state. If the market expects forecasters to report their posterior expectations honestly, then forecasts are shaded toward the prior mean. With correct market expectations, equilibrium forecasts are imprecise but not shaded. The second theory posits that forecasters compete in a forecasting contest with pre-specified rules. In a winner-take-all contest, equilibrium forecasts are excessively differentiated."
  • Batchelor (2007), Consensus Forecasts™: Consensus forecasts are unbiased; some individual forecasts are biased. But the persistent optimism and pessimism of some forecasters seems inconsistent with existing models of rational bias.
  • Ager, Kappler, and Osterloh (2009) (ungated version), Consensus Forecasts™: There are consistently biased forecasts for some countries, but not for all. A lack of information efficiency is more severe for GDP forecasts than for inflation forecasts.

The following overall conclusions seem to emerge from the literature:

  • For mature and well-understood economies such as that of the United States, consensus forecasts are not notably biased or inefficient. In cases where they miss the mark, this can usually be attributed to issues of insufficient information or shocks to the economy.
  • There may, however, be some countries, particularly those whose economies are not sufficiently well understood, where the consensus forecasts are more biased.
  • The evidence on whether individual forecasts are biased or inefficient is murkier, but the research generally points in the direction of some individual forecasts being biased. Some people have posited a "rational bias" theory where forecasters have incentives to choose a value that is plausible but not the most likely, in order to maximize their chances of getting a successful unexpected prediction. We can think of this as an example of product differentiation. Other sources and theories of rational bias have also been posited, but there is no consensus in the literature on whether and how these are sufficient to explain observed individual bias.
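As a footnote on method: the standard unbiasedness check in this literature is a Mincer-Zarnowitz-style regression of realized values on forecasts, where an unbiased, efficient forecast should give an intercept near 0 and a slope near 1. A minimal sketch with made-up data (a real study would also report standard errors and test whether forecast errors are predictable from information available at forecast time):

    def ols(x, y):
        # Ordinary least squares fit: y ~ a + b * x.
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        a = my - b * mx
        return a, b

    # Hypothetical (forecast, realized) inflation pairs, in percent:
    forecast = [2.0, 2.5, 1.8, 3.1, 2.2, 2.7]
    realized = [2.3, 2.4, 2.0, 3.3, 2.1, 3.0]
    a, b = ols(forecast, realized)
    print("intercept = %.2f, slope = %.2f" % (a, b))  # unbiased: a near 0, b near 1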

Some addenda

  • A Forbes article recommends using the standard sources for forecasts to business people who need economic forecasts for their business plan, rather than aiming for something more fancy.
  • There are some other forecasts I didn't list here, such as the Greenbook forecasts, the IMF's World Economic Outlook, and the OECD Economic Outlook. As far as I could make out, these are not generated through a consensus forecast procedure; they involve some combination of models and human judgment and discussion. The bibliography I tabulated above includes Batchelor (2001), which found that the Consensus Forecasts™ outperformed the OECD and IMF forecasts. Some research on the Greenbook forecasts can be found in the footnotes on the Wikipedia page about the Greenbook. I didn't think these were sufficiently germane to be included in the main bibliography.

How do you approach the problem of social discovery?

16 InquilineKea 21 April 2014 09:05PM

As in, how do you find the right people to talk to? Presumably, they would have personality fit with you, and be high on both intelligence and openness. Furthermore, they would be at the point in their life where they are willing to spend time with you (although sometimes you can learn a lot from people simply by friending them on Facebook and observing their feeds from time to time).

Historically, I've made myself extremely stalkable on the Internet. In retrospect, I believe that this "decision" is on the order of one of the very best decisions I've ever made in my life, and has made me better at social discovery than most people I know, despite my combination of social anxiety and Asperger's. In fact, if a more extroverted non-Aspie could do the same thing, I think they could do WONDERS with developing an online profile.

I've also realized more that social discovery is often more rewarding when done with teenagers. You can do so much to impact teenagers, and they often tend to be a lot more open to your ideas/musings (just as long as you're responsible).

But I've wondered - how else have you done it? Especially in real life? What are some other questions you ask with respect to social discovery? I tend to avoid real life for social discovery simply because it's extremely hit-and-miss, but I've discovered (from Richard Florida's books) that the Internet often strengthens real-life interaction because it makes it so much easier to discover other people in real life (and then it's in real life when you can really get to know people).

Meetup : Ottawa - How to Run a Successful Less Wrong Meetup Group

0 amacfie 21 April 2014 05:41PM

Discussion article for the meetup : Ottawa - How to Run a Successful Less Wrong Meetup Group

WHEN: 30 April 2014 07:30:00PM (-0400)

WHERE: Royal Oak on the Canal, 221 Echo Dr, Ottawa, ON K1S 1N1, Canada

We'll go through and discuss the How to Run a Successful Less Wrong Meetup Group document, try some fun activities, and go meta: a meetup about meetups. There'll be a "LW" sign on the table.


Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]

13 Dr_Manhattan 21 April 2014 04:55PM

http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html

Very surprised no one has linked to this yet:

TL;DR: AI is a very underfunded existential risk.

Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm to the cause. Also, too bad this was the Huffington Post rather than something more respectable. With some thought I think we could've gotten the list to be more inclusive and found a better publication; still, I think this is pretty huge.

 

Meetup : Effective Altruism 102 (NYC)

0 Raemon 21 April 2014 02:53PM

Discussion article for the meetup : Effective Altruism 102 (NYC)

WHEN: 26 April 2014 05:00:00PM (-0400)

WHERE: 851 Park Place, Brooklyn NY 11216

It's been a while since we explicitly discussed Effective Altruism. The movement has changed a lot in the past couple years:

  • There's a much more deliberate focus on entrepreneurship
  • Givewell is spinning off Givewell Labs to explore more complex but high-payoff opportunities
  • MIRI has shifted focus, emphasizing math workshops and outreach to current-generation Narrow AI Safety experts in addition to Artificial General Intelligence researchers

Early Saturday evening, April 26th, we'll have a series of short talks about the state of the movement and opportunities you can pursue, including:

  1. How to think strategically about doing good.
  2. How to switch careers effectively.
  3. Recent updates by organizations in the Effective Altruism movement.

WHEN+WHERE: Saturday, April 26th, 5:00 PM - 6:30 PM, Highgarden House, 851 Park Place, Brooklyn NY 11216


Meetup : Munich Meetup

0 cadac 21 April 2014 01:36PM

Discussion article for the meetup : Munich Meetup

WHEN: 11 May 2014 02:00:00PM (+0200)

WHERE: Theresienstraße 41, 80333 München

You are invited to come to the May Munich LW Meetup! One of our regulars will give a short talk, probably about meditation. Everyone is welcome to bring articles to discuss, rationality-related games to play, etc. Like in April, we're planning to meet outside the mathematics building at the LMU. Depending on the weather, we'll stay outside or occupy a free room inside the math department. Whoever brings food for the group is awesome. :) It goes without saying that newcomers are very welcome.


Open thread, 21-27 April 2014

4 Metus 21 April 2014 10:54AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Thread started before the end of the last thread to encourage Monday as the first day.

Utilitarian discernment bleg

-1 VipulNaik 20 April 2014 11:33PM

People who're engaging in learning partly or wholly for the explicit purpose of building human capital need to be strategic about their learning choices. Only some subjects develop human capital useful to the person's goals. Within each subject, only some subtopics develop useful human capital. Even within a particular course, the material covered in some weeks could be highly relevant, while the material covered in other weeks may not be relevant at all. Therefore, learners need to be discerning in figuring out what material to focus their learning effort on and what material to just skim, or even ignore.

Such discernment is most relevant for self-learners who are unconstrained by formal mastery requirements of courses. Self-learners may of course be motivated by many concerns other than human capital acquisition. In particular, they may be learning for pure consumptive reasons, or to signal their smarts to friends. But at any rate, they have more flexibility than people in courses and therefore they can gain more from better discernment.

Those who're taking courses primarily for signaling purposes need to acquire sufficient mastery to attain their desired grade, but even here, they have considerable flexibility:

  • People who're already able to get the top grade without stretching themselves too much have flexibility in how to allocate additional time. Should they try to acquire some more mastery of the entire curriculum, or delve deeper into one topic?
  • People who're far from getting a top grade may have the same grade-per-unit-effort payoff from delving deep into one subtopic or acquiring a shallow understanding of many topics. Considerations regarding long-term human capital acquisition can then help them decide what path to pursue among paths that confer roughly similar signaling benefits.

What self-learners and people with some flexibility in a formal learning situation need is what I call utilitarian discernment: the ability to figure out what stuff to concentrate on. Ideally, they should be able to figure this out relatively easily:

  1. Sequencing within the course: Important topics are often foundational and therefore done early on.
  2. Relative time and emphasis placed on topics should give an indicator of their relative importance.
  3. Important topics should be explicitly marked as important by course texts, videos, and syllabi.
  4. Important topics should receive more emphasis in end-of-course assessments.
  5. Important topics should be more frequently listed as prerequisites in follow-on courses covering the sort of material the learner wants to do next.
  6. The learner can consult friends and websites: This includes more advanced students and subject matter experts, as well as online sources such as Quora and Less Wrong.

The above work better than nothing, but I think they still leave a lot to be desired. Some obvious pitfalls:

  1. Sequencing within the course: While important topics are often done early on because they are foundational, they are sometimes done later because they rely on a synthesis of other knowledge.
  2. Relative time and emphasis: Often, courses place more time and emphasis on more difficult topics than more important ones. There's also the element of time and emphasis being placed on topics that subject matter experts find interesting or relevant, rather than topics that are relevant to somebody who does not intend to pursue a lifetime of research in the subject but is learning it to apply it in other subjects. Note also that the signaling story would suggest that more time and emphasis would be given to topics that do a better job at sorting and ranking students' relevant general abilities than to topics that teach relevant knowledge and skills.
  3. Important topics marked as important: This is often the case, but it too fails, because what is important to teachers may differ from what is important to students.
  4. Emphasis in end-of-course assessments: The relative weight given to topics in end-of-course assessments is often in proportion to the time spent on the topics rather than their relative importance, bringing us back to (2).
  5. Important topics should be more frequently listed as prerequisites: This would work well if somebody actually compiled and combined prerequisites for all follow-on courses, but this is a labor-intensive exercise that few people have engaged in.
  6. The learner can consult friends and websites: Friends who lack strong subject matter knowledge may simply be guessing or giving too much weight to their personal beliefs and experiences. Many of them may not even remember the material enough to offer an informed judgment. Those who have subject matter knowledge may be too focused on academic relevance within the subject rather than real-world relevance outside it. People may also be biased (in either direction) about how a particular topic taught them general analytical skills because they fail to consider other counterfactual topics that could have achieved a similar effect.

In light of these pitfalls, I'm interested in developing general guidelines for improving one's utilitarian discernment. For this purpose, I list some example head-to-head contest questions. I'd like it if commenters indicated a clear choice of winner for each head-to-head contest (you don't have to indicate a choice of winner for every one, but I would prefer a clear choice rather than lots of branch cases within each contest), then explained their reasoning and how somebody without an inside view or relevant expertise could have come to the same conclusion. For some of the choices I've listed, I think the winner should be clear, whereas for others, the contest is closer. Note that the numbering in this list is independent of the preceding numbering.

  1. Middle school and high school mathematics: Manipulating fractions (basic arithmetic operations on fractions) versus solving quadratic equations (you may assume that the treatment of quadratic equations does not require detailed knowledge of fractions)
  2. High school physics: Classical mechanics versus geometrical optics
  3. Precalculus/functions: Logarithmic and exponential functions versus trigonometric functions
  4. Differential calculus: Conceptual definition of derivative as a limit of a difference quotient versus differentiation of trigonometric functions
  5. Integral calculus and applications: Integration of rational functions versus solution strategy for separable differential equations
  6. Physical chemistry: Stoichiometry versus chemical kinetics
  7. Basic biology: Cell biology versus plant taxonomy
  8. Microeconomics: Supply and demand curves versus adverse selection

PS: The examples chosen here are all standard topics in the sciences and social sciences ranging from middle school to early college, but my question is more general. I didn't have enough domain knowledge to come up with quick examples of self-learning head-to-head contests for other domains or for learning at other stages of life, but feel free to discuss these in the comments.

Human capital or signaling? No, it's about doing the Right Thing and acquiring karma

17 VipulNaik 20 April 2014 09:04PM

There's a huge debate among economists of education on whether the positive relationship between educational attainment and income is due to human capital, signaling, or ability bias. But what do the students themselves believe? Bryan Caplan has argued that students' actions (for instance, their not sitting in for free on classes and their rejoicing at class cancellation) suggest a belief in the signaling model of education. At the same time, he notes that students may not fully believe the signaling model, and that shifting in the direction of that belief might improve individual educational attainment.

Still, something seems wrong about the view that most people believe in the signaling model of education. While their actions are consistent with that view, I don't think they frame it quite that way. I don't think they usually think of it as "education is useless, but I'll go through it anyway because that allows me to signal to potential employers that I have the necessary intelligence and personality traits to succeed on the job." Instead, I believe that people's model of school education is linked to the idea of karma: they do what the System wants them to do, because that's their duty and the Right Thing to do. Many of them also expect that if they do the Right Thing, and fulfill their duties well, then the System shall reward them with financial security and a rewarding life. Others may take a more fatalistic stance, saying that it's not up to them to judge what the System has in store for them, but they still need to do the Right Thing.

The case of the devout Christian

Consider a reasonably devout Christian who goes to church regularly. For such a person, going to church and living a life in accordance with (his understanding of) Christian ethics is part of what he's supposed to do. God will take care of him as long as he does his job well. In the long run, God will reward good behavior and doing the Right Thing, but it's not for him to question God's actions.

Such a person might look bemused if you asked him, "Are you a practicing Christian because you believe in the prudential value of Christian teachings (the 'human capital' theory) or because you want to give God the impression that you are worthy of being rewarded (the 'signaling' theory)?" Why? Partly because the person attributes omniscience, omnipotence, and omnibenevolence to God, so that the very idea of drawing a conceptual distinction between what's right and how to impress God seems wrong. Yes, he does expect that God will take care of him and reward him for his goodness (the "signaling" theory). Yes, he also believes that the Christian teachings are prudent (the "human capital" theory). But to him, these are not separate theories but just parts of the general belief in doing right and letting God take care of the rest.

Surely not all Christians are like this. Some might be extreme signalers: they may be deliberately trying to optimize for (what they believe to be) God's favor and maximizing the probability of making the cut to Heaven. Others might believe truly in the prudence of God's teachings and think that any rewards that flow are because the advice makes sense at the worldly level (in terms of the non-divine consequences of actions) rather than because God is impressed by the signals they're sending him through those actions. There are also a number of devout Christians I personally know who, regardless of their views on the matter, would be happy to entertain, examine, and discuss such hypotheses without feeling bemused. Still, I suspect the majority of Christians don't separate the issue, and many might even be offended at second-guessing God.

Note: I selected Christianity and a male sex just for ease of description; similar ideas apply to other religions and the female sex. Also note that in theory, some religious sects emphasize free will and others emphasize determinism more, but it's not clear to me how much effect this has on people's mental models on the ground.

The schoolhouse as church: why human capital and signaling sound ridiculous

Just as many people believe in following God's path and letting Him take care of the rewards, many people believe that by doing the Right Thing educationally (being a Good Student and jumping through the appropriate hoops through correctly applied sincere effort) they're doing their bit for the System. These people might be bemused at the cynicism involved in separating out "human capital" and "signaling" theories of education.

Again, not everybody is like this. Some people are extreme signalers: they openly claim that school builds no useful skills, but grades are necessary to impress future employers, mates, and society at large. Some are human capital extremists: they openly claim that the main purpose is to acquire a strong foundation of knowledge, and they continue to do so even when the incentive from the perspective of grades is low. Some are consumption extremists: they believe in learning because it's fun and intellectually stimulating. And some strategically combine these approaches. Yet, none of these categories describe most people.

I've had students who worked considerably harder on courses than the bare minimum effort needed to get an A. This is despite the fact that they aren't deeply interested in the subject, don't believe it will be useful in later life, and aren't likely to remember it for too long anyway. I think that the karma explanation fits best: people develop an image of themselves as Good Students who do their duty and fulfill their role in the system. They strive hard to fulfill that image, often going somewhat overboard beyond the bare minimum needed for signaling purposes, while still not trying to learn in ways that optimize for human capital acquisition. There are of course many other people who claim to aspire to the label of Good Student because it's the Right Thing, and consider it a failing of virtue that they don't currently qualify as Good Students. Of course, that's what they say, and social desirability bias might play a role in individuals' statements, but the very fact that people consider such views socially desirable indicates the strong societal belief in being a Good Student and doing one's academic duty.

If you presented the signaling hypothesis to self-identified Good Students they'd probably be insulted. It's like telling a devout Christian that he's in it only to curry favor with God. At the same time, the human capital hypothesis might also seem ridiculous to them in light of their actual actions and experiences: they know they don't remember or understand the material too well. Thinking of it as doing their bit for the System because it's the Right Thing to do seems both noble and realistic.

The impressive success of this approach

At the individual level, this works! Regardless of the relative roles of human capital, signaling, and ability bias, people who go through higher levels of education and get better grades tend to earn more and get higher-status jobs than others. People who transform themselves from bad students into good students often see rewards both academically and in later life in the form of better jobs. This could again be human capital, signaling, or ability bias. The ability bias explanation is plausible because turning from a bad student into a good student requires a lot of ability: about as much as being a good student from the get-go, or perhaps even more, because transforming oneself is a difficult task.

Can one do better?

Doing what the System commands can be reasonably satisfying, and even rewarding. But for many people, and particularly for the people who do the most impressive things, it's not necessarily the optimal path. This is because the System isn't designed to maximize every individual's success or life satisfaction, or even to optimize things for society as a whole. It's based on a series of adjustments driven by squabbling between competing interests. It could be a lot worse, but a motivated person could do better.

Also note that being a Good Student is fundamentally different from being a Good Worker. A worker, whether directly serving customers or reporting to a boss, is producing stuff that other people value. So, at least in principle, being a better worker translates to more gains for the customers. This means that a Good Worker is contributing to the System in a literal sense, and by doing a better job, directly adds more value. But this sort of reasoning doesn't apply to Good Students, because the actions of students qua students aren't producing direct value. Their value is largely their consumption value to the students themselves and their instrumental value to the students' current and later life choices.

Many of the qualities that define a Good Student are qualities that are desirable in other contexts as well. In particular, good study habits are valuable not just in school but in any form of research that relies on intellectual comprehension and synthesis (this may be an example of the human capital gains from education, except that I don't think most students acquire good study habits). So, one thing to learn from the Good Student model is good study habits. General traits of conscientiousness, hard work, and willingness to work beyond the bare minimum needed for signaling purposes are also valuable to learn and practice.

But the Good Student model breaks down when it comes to acquiring perspective about how to prioritize between different subjects, and how to actually learn and do things of direct value. A common example is perfectionism. The Good Student may spend hours practicing calculus to get a perfect score on the test, far beyond what's necessary to get an A in the class or an AP BC 5, and yet not acquire a conceptual understanding of calculus or learn calculus in a way that would stick. Such a student has acquired a lot of karma, but has failed from both the human capital perspective (in not acquiring durable human capital) and the signaling perspective (in spending more effort than is needed for the signal). In an ideal world, material would be taught in a way that one can score highly on tests if and only if it serves useful human capital or signaling functions, but this is often not the case.

Thus, I believe it makes sense to critically examine the activities one is pursuing as a student, and ask: "does this serve a useful purpose for me?" The purpose could be human capital, signaling, pure consumption, or something else (such as networking). Consider the following four extreme answers a student may give to why a particular high school or college course matters:

  • Pure signaling: A follow-up might be: "how much effort would I need to put in to get a good return on investment as far as the signaling benefits go?" And then one has to stop at that level, rather than overshoot or undershoot.
  • Pure human capital: A follow-up might be: "how do I learn to maximize the long-term human capital acquired and retained?" In this world, test performance matters only as feedback rather than as the ultimate goal of one's actions. Rather than trying to practice for hours on end to get a perfect score on a test, more effort will go into learning in ways that increase the probability of long-term retention in ways that are likely to prove useful later on. (As mentioned above, in an ideal world, these goals would converge).
  • Pure consumption: A follow-up might be: "how much effort should I put in in order to get the maximum enjoyment and stimulation (or other forms of consumptive experience), without feeling stressed or burdened by the material?"
  • Pure networking: A follow-up might be: "how do I optimize my course experience to maximize the extent to which I'm able to network with fellow students and instructors?"

One might also believe that some combination of these explanations applies. For instance, a mixed human capital-cum-signaling explanation might recommend that one study all topics well enough to get an A, and then concentrate on acquiring a durable understanding of the few subtopics that one believes are needed for long-term knowledge and skills. For instance, a mastery of fractions matters a lot more than a mastery of quadratic equations, so a student preparing for a middle school or high school algebra course might choose to learn both at a basic level but get a really deep understanding of fractions. Similarly, in calculus, having a clear idea of what a function and a derivative mean matters a lot more than knowing how to differentiate trigonometric functions, so a student may superficially understand all aspects (to get the signaling benefits of a good grade) but dig deep into the concept of functions and the conceptual definition of derivatives (to acquire useful human capital). By thinking clearly about this, one may realize that perfecting one's ability to differentiate complicated trigonometric function expressions or integrate complicated rational functions may not be valuable from either a human capital perspective or a signaling perspective.

Ultimately, the changes wrought by consciously thinking about these issues are not too dramatic. Even though the System is suboptimal, it's locally optimal in small ways and one is constrained in one's actions in any case. But the changes can nevertheless add up to lead one to be more strategic and less stressed, do better on all fronts (human capital, signaling, and consumption), and discover opportunities one might otherwise have missed.

LSD, Meditation, Enlightenment, and Ego Death

6 Fink 20 April 2014 07:41PM

A little background information first: I'm a computer science/neuroscience dual major in my junior year of university. AGI is what I really want to work on, and I'm especially interested in Goertzel's OpenCog. Unfortunately I do not have nearly the understanding of the human mind I would like, let alone the knowledge of how to make a new one.

DavidM's post on meditation is particularly interesting to me. I've been practicing mindfulness-based meditation techniques for some time now and I've seen some solid results, but the concept of 'enlightenment' has always appealed to me, and I've always wanted to know if such a thing existed. I have been practicing his technique for a few weeks now and, although it is difficult, I believe I understand what he means by 'vibrations' in your attentional focus.

I've experimented with psilocybin mushrooms for about a year now. Mostly for fun, sometimes to better understand my own brain. Light doses have enhanced my perception and led me to re-evaluate my life from a different perspective, although I am never as clear-headed as I would like.

I've read that LSD provides a 'cleaner' experience while avoiding some of the thought-loops of mushrooms; it also lasts much longer. Stanislav Grof once said that LSD can be to psychology what the microscope is to biology: with deep introspection we can view our thoughts coalesce. After months of looking for a reliable producer and several 'look-alike' drugs I finally obtained a few doses of LSD. Satisfied that it was the real thing, I took a single dose and fell into my standard meditation session, trying to keep my concentration on the breath.

I experienced what Wikipedia calls 'ego death'. That is, I felt my 'self' splitting into the individual sub-components that formed consciousness. Acid is well-known for causing synaesthesia, and as I fell deeper into meditation I felt like I could actually see the way sensory experiences interacted with cognitive heuristics and rose to the level of conscious perception. I felt that I could see what 'I' really was, what Douglas Hofstadter referred to as a 'strange loop' looking back on itself, with my perception switching between sensory input, memories, and thought patterns resonating in frequency with DavidM's 'vibrations'. Of course I was under the effects of a hallucinogenic drug, but I felt my experience was quite lucid.

DavidM hasn't posted in years, which is a shame because I really want to see his third article and ask him more about it. I will continue practicing his enlightenment meditation techniques in an attempt to foster these experiences without the use of drugs. Has anyone here had experiences with psychedelic drugs or transcendental meditation? If so, could you tell me about them?

Meetup : Utrecht

1 SoerenMind 20 April 2014 10:14AM

Discussion article for the meetup : Utrecht

WHEN: 03 May 2014 05:00:00PM (+0200)

WHERE: Utrecht

A growing number of rationalists and effective altruists are joining us to share ideas and help each other to be rational, to improve themselves and to make the world a better place as effectively as possible.

Agenda

The full agenda is to be determined later, but at least we will talk about the charity evaluator GiveWell (http://www.givewell.org/). GiveWell is looking for outstanding giving opportunities: where to give in order to do the most good per dollar or euro spent. How could that be possible? How does GiveWell (try to) do that? If there is another topic you would like to present or discuss with the group, please add it here: https://docs.google.com/document/d/16bBtla1iVzkJjie-JK7Ozb9Ao8SbyJ9U924XyaEXTqY/edit . There is room for your questions, personal discussions, small talk, etc.

Everyone is invited, and new people will be warmly welcomed! Location is to be determined, probably Utrecht.

If you have trouble finding us, for this time you can reach Imma at 0612001233, since I will be abroad.

Discussion article for the meetup : Utrecht

Southern California FAI Workshop

13 Coscott 20 April 2014 08:55AM

This Saturday, April 26th, we will be holding a one day FAI workshop in southern California, modeled after MIRI's FAI workshops. We are a group of individuals who, aside from attending some past MIRI workshops, are in no way affiliated with the MIRI organization. More specifically, we are a subset of the existing Los Angeles Less Wrong meetup group that has decided to start working on FAI research together. 

The event will start at 10:00 AM, and the location will be:

USC Institute for Creative Technologies
12015 Waterfront Drive
Playa Vista, CA 90094-2536.

This first workshop will be open to anyone who would like to join us. If you are interested, please let us know in the comments or by private message. We plan to have more of these in the future, so if you are interested but unable to make this event, please also let us know. You are welcome to decide to join at the last minute. If you do, still comment here, so we can give you the necessary phone numbers.

Our hope is to produce results that will be helpful for MIRI, and so we are starting off by going through the MIRI workshop publications. If you will be joining us, it would be nice if you read the papers linked to here, here, here, here, and here before Saturday. Reading all of these papers is not necessary, but it would be nice if you take a look at one or two of them to get an idea of what we will be doing.

Experience in artificial intelligence will not be at all necessary, but experience in mathematics probably is. If you can follow the MIRI publications, you should be fine. Even if you are under-qualified, there is very little risk of holding anyone back or otherwise having a negative impact on the workshop. If you think you would enjoy the experience, go ahead and join us.

This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic. Rather, the focus will be on the abstract mathematical design of a system capable of having reflexively consistent goals, performing naturalistic induction, et cetera.

Food and refreshments will be provided for this event, courtesy of MIRI.

Economics majors and earnings: further exploration

3 JonahSinick 20 April 2014 03:15AM

In Earnings of economics majors: general considerations I presented data showing that economics majors make substantially more money (20%-50%+) than majors in other liberal arts. I gave five hypotheses, each of which could partially account for the wage gap. These are possible differences between the majors in:

  1. Human capital acquisition.
  2. Acquisition of a desire to make money.
  3. Pre-existing ability as measured by tests.
  4. Pre-existing desire to make money.
  5. Signaling.

I discussed a priori reasons for believing that they might be significant, and how one might go about testing the hypotheses and the extent to which they explain the wage gap.

Having examined available data, I believe that with the possible exception of #3, based on publicly available information, there's a huge amount of uncertainty as to the roles of these factors in explaining the wage gap. In many cases there is data suggesting the presence of effects, but the data is not robust and the sizes of the effects are entirely unclear. Furthermore, the hypotheses are not exhaustive: other factors (such as those mentioned at the very end of this post) plausibly play a role, making it difficult to reason in the fashion "factors A, B and C play very small roles, therefore factor D must play a large role." 

I was originally hoping that there would be a simple, clear-cut case for or against majoring in economics increasing earnings (relative to other liberal arts), but resolving the question would seem to be a major research project. Still, I hope that this post can help students who are contemplating majoring in economics or another liberal art get a feel for the "lay of the land," and some of the points therein may be actionable for particular individuals.

I'll address each hypothesis in turn.

This post is very long. If you're short on time or attention, consider scanning over the subtopic headings and reading the sections that look most interesting. As usual, I'd appreciate any relevant thoughts, particularly if you're a former economics major.

continue reading »

Regret, Hindsight Bias and First-Person Experience

8 Stabilizer 20 April 2014 02:10AM

Here is an experience that I often have: I'm walking down the street, perfectly content, and all of a sudden some memory pops into my stream of consciousness. The memory triggers some past circumstance where I did not act completely admirably. Immediately following this, there is often regret. Regret of the form: "I should've studied harder for that class", "I should've researched my options better before choosing my college", "I should've asked that girl out", "I shouldn't have been such an asshole to her" and so on. So this is regret which is of the kind: "Well, of course, I should've done X. But I did Y. And now here I am."

This is classic hindsight bias. Looking back into the past, it seems clear what my course of action should've been. But it wasn't at all that clear in the past.

So, I've come up with a technique to attenuate this kind of hindsight-bias driven regret.

First of all, tune in to your current experience. What is it like to be here, right here and right now, doing the things you're doing? Start zooming out: think about the future and what you're going to be doing tomorrow, next week, next month, next year, 5 years later. Is it at all clear what choices you should make? Sure, you have some hints: take care of your health, save money, maybe work harder at your job. But nothing very specific. Tune in to the difficulties of carrying out even definitely good things. You told yourself that you'd definitely go running today, but you didn't. In first-person mode, it is really hard to know what to do, to know how to do it, and to actually do it.

Now, think back to the person you were in the past, when you made the choices that you're regretting. Try to imagine the particular place and time when you made that choice. Try to feel into what it was like. Try to color in the details: the ambient lighting of the room, the clothes you and others were wearing, the sounds and the smells. Try to feel into what was going on in your mind. Usually it turns out that you were confused and pulled in many different directions and, all said and done, you had to make a choice and you made one.

Now realize that back then you were facing exactly the kinds of uncertainties and confusions you are feeling now. In the first-person view there are no certainties; there are only half-baked ideas, hunches, gut feelings, mish-mash theories floating in your head, fragments of things you read and heard in different places.

Now think back to the regrettable decision you made. Is it fair to hold that decision against yourself with such moral force?

Meetup : Washington DC: Singing

0 rocurley 19 April 2014 04:43PM

Discussion article for the meetup : Washington DC: Singing

WHEN: 20 April 2014 03:00:00PM (-0400)

WHERE: National Portrait Gallery, Washington, DC 20001, USA

We'll be meeting up to go singing!

Because this is probably not a good idea in the portrait gallery, we'll meet there, and then head out somewhere (Archives probably) after we've rendezvoused.

Discussion article for the meetup : Washington DC: Singing

Mathematics and saving lives

2 NancyLebovitz 19 April 2014 01:32PM

A high school student with an interest in math asks whether he's obligated on utilitarian grounds to become a doctor.

The commenters pretty much say that he isn't, but now I'm wondering: if you go into reasonably pure math, which areas or specific problems would be likely to contribute the most towards saving lives?

[LINK] U.S. Views of Technology and the Future

2 Gunnar_Zarncke 18 April 2014 09:22PM

I just found this on slashdot:

"U.S. Views of Technology and the Future - Science in the next 50 years" by the Pew Research Center

This report emerges from the Pew Research Center’s efforts to understand public attitudes about a variety of scientific and technological changes being discussed today. The time horizons of these technological advances span from today’s realities—for instance, the growing prevalence of drones—to more speculative matters such as the possibility of human control of the weather. 

This is interesting, especially in comparison to the recent posts on forecasting, which focused on expert forecasts.

What I found most notable was the public opinion on their use of future technology:

% who would do the following if possible...

50% ride in a driverless car

26% use brain implant to improve memory or mental capacity

20% eat meat grown in a lab

Don't they know Eutopia is Scary? I'd guess that if these technologies really become available and are reliable, only the elderly will be unable to overcome their preconceptions. And everybody will eat artificial meat if it is cheaper, healthier, and tastes the same (and testers confirm this).


[link] Guide on How to Learn Programming

4 peter_hurford 18 April 2014 05:08PM

I've recently seen a lot of interest in people who are looking to learn programming.  So I put together a quick guide with lots of help from other people: http://everydayutilitarian.com/essays/learn-code

Let me know (via comments here or email - peter@peterhurford.com) if you try this guide, so I can get feedback on how it goes for you.

Also, feel free to reach out to me with comments on how to improve the guide – I'm still relatively new to programming myself and have not yet implemented all these steps personally. I'd cross-post it here, but I want to keep the document up-to-date and it would be much easier to do that in just one place.

Weekly LW Meetups

0 FrankAdamek 18 April 2014 03:53PM

This meetup summary was posted to LW main on April 11th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

Bostrom versus Transcendence

10 Stuart_Armstrong 18 April 2014 08:31AM

How long will Alcor be around?

28 Froolow 17 April 2014 03:28PM

The Drake equation for cryonics is pretty simple: work out all the things that need to happen for cryonics to succeed one day, estimate the probability of each thing occurring independently, then multiply all those numbers together. Here’s one example of the breakdown from Robin Hanson. According to the 2013 LW survey, LW believes the average probability that cryonics will be successful for someone frozen today is 22.8% assuming no major global catastrophe. That seems startlingly high to me – I put the probability at at least two orders of magnitude lower. I decided to unpick some of the assumptions behind that estimate, particularly focussing on assumptions which I could model.

EDIT: This needs a health warning; here be overconfidence dragons. There are psychological biases that can lead you to estimating these numbers badly based on the number of terms you're asked to evaluate, statistical biases that lead to correlated events being evaluated independently by these kind of models and overall this can lead to suicidal overconfidence if you take the nice neat number these equations spit out as gospel.

Every breakdown includes a component for 'the probability that the company you freeze with goes bankrupt' for obvious reasons. In fact, the probability of bankruptcy (and global catastrophe) are particularly interesting terms because they are the only terms which are 'time-dependent' in the usual Drake equation. What I mean by this is that if you know your body will be frozen intact forever, then it doesn't matter to you when effective unfreezing technology is developed (except to the extent you might have a preference to live in a particular time period). By contrast, if you know safe unfreezing techniques will definitely be developed one day, it matters very much to you that it occurs sooner rather than later, because if you unfreeze before the development of these techniques then they are totally wasted on you.

The probability of bankruptcy is also very interesting because – I naively assumed last week – we must have excellent historical data on the probability of bankruptcy given the size, age and market penetration of a given company. From this – I foolishly reasoned – we must be able to calculate the actual probability of the ‘bankruptcy’ component in the Cryo-Drake equation and slightly update our beliefs.

I began by searching for the expected lifespan of an average company and got two estimates which I thought would be a useful upper- and lower-bound. Startup companies have an average lifespan of four years. S&P 500 companies have an average lifespan of fifteen years. My logic here was that startups must be the most volatile kind of company, S&P 500 must be the least volatile and cryonics firms must be somewhere in the middle. Since the two sources only report the average lifespan, I modelled the average as a half-life. The results really surprised me; take a look at the following graph:

(http://imgur.com/CPoBN9u.jpg)

Even assuming cryonics firms are as well managed as S&P 500 companies, a 22.8% chance of success depends on every single other factor in the Drake equation being absolutely certain AND unfreezing technology being developed in 37 years.
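For concreteness, here is a minimal sketch of the half-life reading of "average lifespan" (my reconstruction; the original calculation isn't shown, so treat the exact construction as an assumption): a company with half-life h survives t years with probability 0.5^(t/h).

```python
# Half-life model of company survival: P(survive t years) = 0.5 ** (t / half_life).
# The 4-year and 15-year figures are the startup and S&P 500 bounds from the text.
for half_life, label in [(4, "startup"), (15, "S&P 500")]:
    for t in (10, 20, 40, 80):
        p = 0.5 ** (t / half_life)
        print(f"{label:8s} (half-life {half_life:2d}y): P(survive {t:2d}y) = {p:.4f}")
```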

But I noticed I was confused; Alcor has been around forty-ish years. Assuming it started life as a small company, the chance of that happening was one in ten thousand. That both Alcor AND The Cryonics Institute have been successfully freezing people for forty years seems literally beyond belief. I formed some possible hypotheses to explain this:

  1. Many cryo firms have been set up, and I only know about the successes (a kind of anthropic argument)
  2. Cryonics firms are unusually well-managed
  3. The data from one or both of my sources was wrong
  4. Modelling an average life expectancy as a half-life was wrong
  5. Some extremely unlikely event that is still more likely than the one in a billion chance my model predicts – for example, the BBC article is an April Fool's joke that I don't understand.

I’m pretty sure I can rule out 1; if many cryo firms were set up I’d expect to see four lasting twenty years and eight lasting ten years, but in fact we see one lasting about five years and two lasting indefinitely. We can also probably rule out 2; if cryo firms were demonstrably better managed than S&P 500 companies, the CEO of Alcor could go and run Microsoft and use the pay differential to support cryo research (if he was feeling altruistic). Since I can’t do anything about 5, I decided to focus my analysis on 3 and 4. In fact, I think 3 and 4 are both correct explanations; my source for the S&P 500 companies counted dropping out of the S&P 500 as a company ‘death’, when in fact you might drop out because you got taken over, because your industry became less important (but kept existing) or because other companies overtook you – your company can’t do anything about Facebook or Apple displacing them from the S&P 500, but Facebook and Apple don’t make you any more likely to fail. Additionally, modelling as a half-life must have been flawed; a company that has survived one hundred years and a company that has survived one year are not equally likely to collapse!

Consequently I searched Google Scholar for a proper academic source. I found one, but I should introduce the following caveats:

  1. It is UK data, so may not be comparable to the US (my understanding is that the US is a lot more forgiving of a business going bankrupt, so the UK businesses may liquidate slightly less frequently).
  2. It uses data from 1980. As well as being old data, there are specific reasons to believe that this time period overestimates the true survival of companies. For example, the mid-1980’s was an economic boom in the UK and 1980-1985 misses both major UK financial crashes of modern times (Black Wednesday and the Sub-Prime Crash). If the BBC is to be believed, the trend has been for companies to go bankrupt more and more frequently since the 1920’s.

I found it really shocking that this question was not better studied. Anyway, the key table that informed my model was this one, which unfortunately seems to break the website when I try to embed it. The source is Dunne, Paul, and Alan Hughes. "Age, size, growth and survival: UK companies in the 1980s." The Journal of Industrial Economics (1994): 115-140.

You see on the left the size of the company in 1980 (£1 in 1980 is worth about £2.5 now). On the top is the size of the company in 1985, with additional columns for ‘taken over’, ‘bankrupt’ or ‘other’. Even though a takeover might signal the end of a particular product line within a company, I have only counted bankruptcies as representing a threat to a frozen body; it is unlikely Alcor will be bought out by anyone unless they have an interest in cryonics.

The model is a Discrete Time Markov Chain analysis in five-year increments. What this means is that I start my hypothetical cryonics company at <£1m and then allow it to either grow or go bankrupt at the rate indicated in the article. After the first period I look at the new size of the company and allow it to grow, shrink or go bankrupt in accordance with the new probabilities. The only slightly confusing decision was what to do with takeovers. In the end I decided to ignore takeovers completely, and redistribute the probability mass they represented to all other survival scenarios.
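To make the procedure concrete, here is a sketch of this kind of discrete time Markov chain computation in Python. The transition probabilities below are placeholders, not the Dunne-Hughes figures (their table isn't reproduced here); only the structure follows the description above, with an absorbing bankruptcy state and the takeover probability mass redistributed to survival states.

```python
import numpy as np

# States: three size bands plus an absorbing "bankrupt" state.
# Each row gives PLACEHOLDER five-year transition probabilities and sums to 1.
P = np.array([
    # small   medium  large   bankrupt
    [0.55,    0.20,   0.05,   0.20],   # small  (<£1m)
    [0.10,    0.60,   0.20,   0.10],   # medium
    [0.02,    0.13,   0.80,   0.05],   # large
    [0.00,    0.00,   0.00,   1.00],   # bankrupt (absorbing)
])

state = np.array([1.0, 0.0, 0.0, 0.0])  # start as a small company
for period in range(1, 21):             # 20 periods of 5 years = 100 years
    state = state @ P
    survival = 1.0 - state[3]
    if period % 4 == 0:
        print(f"after {5 * period:3d} years: P(still exists) = {survival:.2f}")
```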

The results are astonishingly different:

(http://imgur.com/CkQirYD.jpg)

Now your body can remain preserved for 415 years for a 22.8% chance of revival (assuming all other probabilities are certain). Perhaps more usefully, if you estimate the year you expect revival to occur, you can read across the x axis to find the probability that your cryo company will still exist by then. For example, in the OvercomingBias link above, Hanson estimates that this will occur in 2090, meaning he should probably assign something like a 0.65 chance to the probability that his cryo company is still around.

Remember that you don't actually need to estimate the year YOUR revival will occur, but only the first year in which a successful revival proves that cryogenically frozen bodies are 'alive' in a meaningful sense and therefore receive protection under the law in case your company goes bankrupt. In fact, you could instead estimate the year Congress passes a 'right to not-death' law which would protect your body in the event of a bankruptcy even before routine unfreezing, or the year when brain-state scanning becomes advanced enough that it doesn't matter what happens to your meatspace body because a copy of your brain exists on the internet.

My conclusion is that the survival of your cryonics firm is a lot more likely than the average person in the street thinks, but probably a lot less likely than you think if you are strongly into cryonics. This is probably not news to you; most of you will be aware of over-optimism bias, and will have tried to correct for it. Hopefully these concrete numbers will be useful next time you consider the Cryo-Drake equation and the net present value of investing in cryonics.

Meetup : Urbana-Champaign: Planning and Re-planning

1 Manfred 17 April 2014 05:56AM

Discussion article for the meetup : Urbana-Champaign: Planning and Re-planning

WHEN: 20 April 2014 12:00:00PM (-0500)

WHERE: 412 W. Elm St, Urbana, IL

When things get complicated enough, you have to plan them in advance or they fail. You need blueprints and logistics before you can build a skyscraper. On a personal level, good plans improve our chances of success at anything we can make a plan for.

One trouble with plans is that once you've made them they're sticky. What kind of life to lead, what to study, when to marry - we inherit plans about these things from the past and we don't always rethink them when appropriate.

Discussion article for the meetup : Urbana-Champaign: Planning and Re-planning

The usefulness of forecasts and the rationality of forecasters

0 VipulNaik 17 April 2014 03:49AM

Suppose we have a bunch of (forecasted value, actual value) pairs for a given quantity (with different measured actual values at different times). An example would be GDP growth rate measures in different years. For each year, we have a forecasted value and an actual value. So we have a bunch of (forecasted value, actual value) pairs, one for each year. How do we judge the usefulness of the forecasts at predicting the value? Here, we discuss a few related measures: accuracy, bias, and dependency (specifically, correlation).

Accuracy

The accuracy of a forecast refers to how far, on average, the forecast is from the actual value. Two typical ways of measuring the accuracy are:

  • Compute the mean absolute error: Take the arithmetic mean (average) of the absolute values of the errors for each forecast.
  • Compute the root mean square error: Take the square root of the arithmetic mean of the squares of the errors.

The size of the error, measured in either of these ways, is a rough estimate of how accurate the forecasts are in general (the larger the error, the less accurate the forecast). Note that an error of zero represents a perfectly accurate forecast.
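As a concrete illustration, here is a minimal Python sketch of both measures (the forecasted and actual numbers are made up):

```python
import math

# Hypothetical (forecasted, actual) pairs, e.g., GDP growth rates by year.
forecasted = [2.1, 3.0, 1.5, 2.8, 2.2]
actual     = [2.4, 2.5, 1.9, 3.1, 1.8]

errors = [f - a for f, a in zip(forecasted, actual)]

mean_absolute_error = sum(abs(e) for e in errors) / len(errors)
root_mean_square_error = math.sqrt(sum(e * e for e in errors) / len(errors))

print(mean_absolute_error)     # average size of the error, ignoring sign
print(root_mean_square_error)  # penalizes large errors more heavily
```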

Note that this is a global measure of accuracy. But it may be the case that forecasts are more accurate when the actual values are at a particular level, and less accurate when they are at a different level. There are mathematical models to test for this.

Bias

When we ask whether the forecast is biased, we're interested in knowing whether the size of the error in the positive direction systematically exceeds the size of the error in the negative direction. One method for estimating this is to compute the mean signed difference (i.e., take the arithmetic mean of errors for individual forecasts without taking the absolute value). If this comes out as zero, then the forecasting is unbiased. If it comes out as positive, the forecasts are biased in the positive direction, whereas if it comes out as negative, the forecasts are biased in the negative direction.

The above is a start, but it's not good enough. In particular, the error could come out nonzero simply because of random fluctuations rather than bias. We'd need to complicate the model somewhat in order to make probabilistic or quantitative assessments to get a sense of whether or how the forecasts are really biased.
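Here is the corresponding sketch for the mean signed difference (same made-up numbers as in the accuracy sketch above):

```python
forecasted = [2.1, 3.0, 1.5, 2.8, 2.2]
actual     = [2.4, 2.5, 1.9, 3.1, 1.8]

errors = [f - a for f, a in zip(forecasted, actual)]
mean_signed_difference = sum(errors) / len(errors)

# Positive: systematic overestimates; negative: systematic underestimates;
# near zero: no additive bias detected (though with few data points,
# random fluctuation can easily mask or mimic bias).
print(mean_signed_difference)
```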

Again, the above is a global measure of bias. But it may be the case that there are different biases for different values. There are mathematical models to test for this.

Are accuracy and bias related? Yes, in the obvious sense that the degree of inaccuracy gives an upper bound on the degree of bias. In particular, for instance, the mean absolute error gives an upper bound on the mean signed difference. So a perfectly accurate forecast is also unbiased. However, we can have fairly inaccurate forecasts that are unbiased. For instance, a forecast that always guesses the mean of the distribution of actual values will be inaccurate but have zero bias.

The above discusses additive bias. There may also be multiplicative bias. For instance, the forecasted value may be reliably half the actual value. In this case, doubling the forecasted value allows us to obtain the actual value. There could also be forms of bias that are not captured in either way.

Dependency and correlation

Ideally, what we want to know is not so much whether the forecasts themselves are accurate or biased, but whether we can use them to generate new forecasts that are good. So what we want to know is: once we correct for bias (of all sorts, not just additive or multiplicative), how accurate is the new forecast? Another way of framing this is: what exactly is the nature of dependency between the variable representing the forecasted value and the variable representing the actual value?

Testing for the nature of the dependency between variables is a hard problem, particularly if we don't have a prior hypothesis for the nature of the dependency. If we do have a hypothesis, and the relation is linear in unknown parameters, we can use the method of ordinary least squares regression (or another suitable regression) to find the best fit. And we can measure the goodness of that fit through various statistical indicators.

In the case of linear regression (i.e., trying to fit using a linear functional dependency between the variables), the square of the correlation between the variables is the R² of the regression, and offers a decent measure of how close the variables are to being linearly related. A correlation of 1 implies an R² of 1, and implies that the variables are perfectly correlated, or equivalently, that a linear function with positive slope is a perfect fit. A correlation of -1 also implies an R² of 1, and means that a linear function with negative slope is a perfect fit. A correlation of zero means that the variables are completely uncorrelated.

Note also that linear regression covers both additive and multiplicative bias (and combinations thereof) and is often good enough to capture the most basic dependencies.

If the value of R² for the linear regression is zero, that means the variables are uncorrelated. Although independent implies uncorrelated, uncorrelated does not imply independent, because there may be other nonlinear dependencies that miraculously give zero correlation. In fact, uncorrelated does not imply independent even if the variables are both normally distributed. As a practical matter, a correlation of zero is often taken as strong evidence that neither variable tells us much about the other. This is because even if the relationship isn't linear, the existence of some relationship makes a nonzero correlation more plausible than an exact zero correlation. For instance, if the variables are positively related (higher forecasted values predict higher actual values) we expect a positive correlation and a positive R². If the variables are negatively related (higher forecasted values predict lower actual values) we expect a negative correlation, but still a positive R².

For the trigonometrically inclined: The Pearson correlation coefficient, simply called the correlation here, measures the cosine of the angle between a vector based on the forecasted values and a vector based on the actual values. The vector based on the forecasted values is obtained by starting with the vector of the forecasted values and subtracting from each coordinate the mean forecasted value. Similarly, the vector based on the actual values is obtained by starting with the vector of the actual values and subtracting from each coordinate the mean actual value. The R² value is the square of the correlation, and measures the proportion of variance in one variable that is explained by the other (this is sometimes referred to as the coefficient of determination). 1 − R² represents the square of the sine of the angle between the vectors, and represents how alienated the vectors are from each other. A correlation of 1 means the vectors are collinear and point in the same direction, a positive correlation less than 1 means they form an acute angle, a zero correlation means they are at right angles, a negative correlation greater than -1 means they form an obtuse angle, and a correlation of -1 means the vectors are collinear and point in opposite directions.
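For readers who want to compute these quantities directly, here is a Python sketch (numbers made up again; it also verifies the cosine interpretation above):

```python
import numpy as np

forecasted = np.array([2.1, 3.0, 1.5, 2.8, 2.2])
actual     = np.array([2.4, 2.5, 1.9, 3.1, 1.8])

r = np.corrcoef(forecasted, actual)[0, 1]  # Pearson correlation
r_squared = r ** 2                         # proportion of variance explained

# The same correlation, computed as the cosine of the angle between
# the mean-centered vectors, as described above.
u = forecasted - forecasted.mean()
v = actual - actual.mean()
cosine = u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(r, r_squared, cosine)  # r and cosine agree
```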

Usefulness versus rationality

The simplest situation is where the forecasts are completely accurate. That's perfect. We don't need to worry about doing better.

In the case that the forecasts are not accurate, and if we have had the luxury of crunching the numbers and figuring out the nature of dependency between the forecasted and actual values, we'd want a situation where the actual value can be reliably predicted from the forecasted value, i.e., the actual value is a (known) function of the forecasted value. A simple case of this is where the actual value and forecasted value have a correlation of 1. This means that the actual value is a known linear function of the forecasted value. (UPDATE: This process of using a known linear function to correct for systematic additive and multiplicative bias is known as Theil's correction). So the forecasted value itself is not good, but it allows us to come up with a good forecast.
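A minimal sketch of this kind of correction, under the assumption that past (forecast, actual) pairs are available to fit the line (numbers made up):

```python
import numpy as np

past_forecasts = np.array([1.0, 2.0, 3.0, 4.0])
past_actuals   = np.array([2.1, 4.0, 5.9, 8.1])  # roughly actual = 2 * forecast

# Least-squares fit: actual ≈ slope * forecast + intercept.
slope, intercept = np.polyfit(past_forecasts, past_actuals, 1)

new_forecast = 5.0
corrected = slope * new_forecast + intercept  # ~10: multiplicative bias corrected
print(corrected)
```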

What would it mean for a forecast to be unimprovable? Essentially, it means that the best value we can forecast based on the forecasted value is the forecasted value. Wait, what? What we mean is that the forecasters aren't leaving any money on the table: if they could improve the forecast simply by correcting for a known bias, they have already done so. Note that a forecast being unimprovable does not say anything directly about the R² value. Rather, the unimprovability suggests that the best functional fit between the forecasted and the actual value would be the identity function (actual value = forecasted value). For the linear regression case, it suggests that the slope for the linear regression is 1 and the intercept is 0. Or at any rate, that they are close enough. Note that a forecast can be completely useless and still unimprovable.

The following table captures the logic (note that the two rows just describe the extreme cases, rather than the logical space of all possibilities).

| | The forecast cannot be improved upon | The forecast can be improved upon |
| --- | --- | --- |
| The forecast, once improved upon, is perfect | The forecasted value equals the actual value. | The forecasted value predicts the actual value perfectly, but is not itself perfect. For instance, they could have a correlation of 1, in which case the prediction would be via a linear function. |
| The forecast, even after improvement, is useless at the margin (i.e., it does not give us information we didn't already have from knowledge of the existing distribution of actual values) | The forecast just involves perfectly guessing the mean of the distribution of actual values (assuming that the distribution is known in advance; if it's not, then things become even more murky). | The actual value is independent of the forecast, and it does not involve simply guessing the mean. |

Note that if forecasters are rational, then we should be in the column "The forecast cannot be improved upon" and therefore between the extreme case that the forecast is already perfect and that the forecast just involves guessing the mean of the distribution (assuming that the distribution is known in advance).

So there are two real and somewhat distinct questions about the value of forecasts:

  • (The question whose extreme answers give the rows): How useful are the forecasts, in the sense that, once we extract all the information from them by correcting for bias and applying the appropriate functional form, how accurate are the new forecasts?
  • (The question whose answers give the columns): How rational are the forecasters, in the sense of how close are their forecasts to the most useful forecasts that can be extracted from those forecasts? (Note that even if the forecasts cannot be improved upon, that doesn't mean the forecasts are rational in the broader sense of making the best guess in terms of all available information, but it is in any case consistent with rationality in this broader sense).

Background reading

For more background, see the Wikipedia pages on forecast bias and bias of an estimator and the content linked therein.

LINK-Cryonics Institute documentary

0 polymathwannabe 16 April 2014 10:44PM

"WE WILL LIVE AGAIN looks inside the unusual and extraordinary operations of the Cryonics Institute. The film follows Ben Best and Andy Zawacki, the caretakers of 99 deceased human bodies stored at below freezing temperatures in cryopreservation. The Institute and Cryonics Movement were founded by Robert Ettinger who, in his nineties and long retired from running the facility, still self-publishes books on cryonics, awaiting the end of his life and eagerly anticipating the next."

http://www.iht.com/2014/04/15/we-will-live-again/

Meetup : Ugh Fields

1 evand 16 April 2014 04:32PM

Discussion article for the meetup : Ugh Fields

WHEN: 17 April 2014 07:00:00PM (-0400)

WHERE: 2411 N Roxboro St 27704

We'll be discussing Ugh Fields: what they are, how they keep you from accomplishing stuff, and how to recognize and reduce them. As always, RSVPs are appreciated but not required. We encourage you to show up around 7, and we'll start on-topic content at 7:30. If you're feeling energetic about it, there's a relevant article. Afterwards, we will probably meander over to Fullsteam and be sociable.

Discussion article for the meetup : Ugh Fields

Stories for exponential growth

1 VipulNaik 16 April 2014 03:15PM

Disclaimer: This is a collection of some simple stories for exponential growth. I've tried to list the main ones, but I might well have missed some, and I welcome feedback.

The topic of whether and why growth trends are exponential has been discussed on LessWrong before. For instance, see the previous LessWrong posts Why are certain trends so precisely exponential? and Mathematical simplicity bias and exponential functions. The purpose of this post is to explore some general theoretical reasons for expecting exponential growth, and the assumptions that these models rely on. I'll look at economic growth, population dynamics, and technological growth.

TL;DR

  1. Exponential growth (or decay) arises from a situation where the change in level (or growth rate) is proportional to the level. This can be modeled by either a continuous or a discrete differential equation.
  2. Feedback based on proportionality is usually part of the story, but could occur directly for the measured variable or in a hidden variable that affects the measured variable.
  3. In a simplified balanced economic growth model, growth is exponential because the addition to capital stock in a given year is proportional to output in that year, depreciation rate is constant, and output next year is proportional to capital stock this year.
  4. In a simple population dynamics model, growth is exponential under the assumption that the average number of kids per person stays constant.
  5. An alternative story of exponential growth is that performance is determined by multiplying many quantities, and we can work to make proportional improvements in the quantities one after the other. This can explain roughly exponential growth but not close-to-precise exponential growth.
  6. Stories of intra-industry or inter-industry coordination can explain a more regular exponential growth pattern than one might otherwise expect.

#1: Exponential growth arises from change in level (or growth rate) being proportional to the level

Brief mathematical introduction for people who have a basic knowledge of calculus. Suppose we're trying to understand how a quantity x (this could be national GDP of a country, or the price of 1 GB of NAND flash, or any other indicator) changes as a function of time t. Exponential growth means that we can write:

x = Ca^t

where C > 0, a > 1 (exponential decay would mean a < 1). More conventionally, it is written in the form:

x = Ce^(kt)

where C > 0, k > 0 (exponential decay would mean k < 0). The two forms are related as follows: a = e^k.

The key feature of the exponential function is that for any t, the quotient x(t + 1)/x(t) is a constant independent of t (the constant in question being a). In other words, the proportional gain is the same over all time periods.

Exponential growth arises as the solution to the (continuous, ordinary, first-order first-degree) differential equation:

dx/dt = kx

This says that the instantaneous rate of change is proportional to the current value.

We can also obtain exponential growth as the solution to the discrete differential equation:

Δ x = (a - 1)x

where Δ x denotes the difference x(t + 1) - x(t) (the discrete derivative of x with respect to t). What this says is that the discrete change in x is proportional to x.

To summarize, exponential growth arises as a solution to both continuous and discrete differential equations where the rate of change is proportional to the current level. The mathematical calculations work somewhat differently, but otherwise, the continuous and discrete situations are qualitatively similar for exponential growth.
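As a quick numerical check (my own illustration, with made-up values of C and a), iterating the discrete rule Δx = (a - 1)x reproduces the closed form x = Ca^t exactly:

```python
C, a = 100.0, 1.05  # initial level and growth factor (made-up values)

x = C
for t in range(1, 11):
    x += (a - 1) * x           # discrete change proportional to the current level
    closed_form = C * a ** t   # the closed-form solution x = C * a^t
    print(t, round(x, 6), round(closed_form, 6))  # the two values agree at every t
```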

#2: Feedback based on proportionality is usually part of the story, but the phenomenon could occur in a visible or hidden process

The simplest story for why a particular indicator grows exponentially is that the growth rate is determined directly in proportion with the value at a given point in time. One way of framing this is that there is feedback from the level of the indicator to the rate of change of the indicator. To get a good story for exponential growth, therefore, we need a good story for why the feedback should be in the form of direct proportionality, rather than some other functional form.

However, we can imagine a subtly different story of exponential growth. Namely, the indicator itself is not the root of the phenomenon at all, but simply a reflection of other hidden variables, and the phenomenon of exponential growth is happening at the level of these hidden variables. For instance, visible indicator x might be determined as being 0.82y^2 for a hidden variable y, and it might be that the variable y is the one that experiences feedback from its level to its rate of change. I believe this is conceptually similar to (though not mathematically the same as) hidden Markov models.

One LessWrong comment offered this sort of explanation: perhaps the near-perfect exponential growth of US GDP, and its return to an earlier trend line after deviation during some years, suggests that population growth is the hidden variable that drives long-run trends in GDP. The question of whether economic growth should revert to an earlier trend line after a shock is a core question of macroeconomics with a huge but inconclusive literature; see Arnold Kling's blog post titled Trend vs. Random Walk.

#3: A bare-bones model of balanced economic growth (balanced growth version of Harrod-Domar model)

Let's begin with a very basic model of economic growth. This is not to be applied directly in the understanding of real-world economies. Rather, it's meant to give us a crude idea of where exponentiality comes from.

In this model, an economy produces a certain output Y in a given year (Y changes from year to year). The economy consumes part of the output, and saves the rest of it to add to its capital stock K. Suppose the following hold:

  1. The fraction of output produced that is converted to additional capital stock is constant from year to year (i.e., the propensity to save is constant).
  2. The (fractional) rate of depreciation of capital stock (i.e., the fraction of capital stock that is lost every year due to depreciation) is constant.
  3. The amount of output produced in a given year is proportional to the capital stock at the end of the previous year, with the constant of proportionality not changing across years.

We have two variables here, output and capital stock, linked by proportionality relationships between them and between their year-on-year changes. When we work out the algebra, we'll discover that both variables grow exponentially in tandem.

The above describes a balanced growth model, where the shape and nature of the economy do not change. It just keeps growing in size, with all the quantities growing together in the same proportion. Economies may initially be far from a desirable steady state, or may be stuck in a low-savings steady state. Also note that if the rate of depreciation of capital stock exceeds the rate at which new capital stock is added, the economy will decay rather than grow exponentially.
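Here is a minimal simulation of these three assumptions (parameter values are made up; in this setup the growth rate settles at s·v − δ, where s is the savings rate, δ the depreciation rate, and v the output-to-capital ratio):

```python
s, delta, v = 0.3, 0.05, 0.5  # savings rate, depreciation, output/capital ratio (made up)

K = 100.0      # initial capital stock
prev_Y = None
for year in range(1, 11):
    Y = v * K                     # assumption 3: output proportional to last year's capital
    K = K * (1 - delta) + s * Y   # assumptions 1 and 2: constant saving and depreciation
    if prev_Y is not None:
        print(year, round(Y / prev_Y - 1, 6))  # settles at s * v - delta = 0.10
    prev_Y = Y
```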

If you're interested in actual models of economic growth used in growth theory and development economics, read up on the Harrod-Domar model and its variants such as the Ramsey-Cass-Koopmans model, AK model, and Solow-Swan model. For questions surrounding asymptotic convergence, check out the Inada conditions.

#4: Population dynamics

The use of exponential models for population growth is justified under the assumption that the number of children per woman who survive to adulthood remains constant. Assume a 1:1 sex ratio, and assume that women have an average of 3 kids who survive to adulthood. In that case, with every generation, the population multiplies by a factor of 3/2 = 1.5. After n generations, the population would be (1.5)^n times the original population. This is of course a grossly oversimplified model, but it covers the rationale for exponential growth. In practice, the number of surviving children per woman varies over time due to a combination of fertility changes and changes in age-specific mortality rates.

The dynamics are even simpler to understand for bacteria in a controlled environment such as a petri dish. Bacteria are unicellular organisms and they reproduce by binary fission: a given bacterium splits into two new bacteria. As long as there are ample resources, a bacterium may split into two after an average interval of 1 hour. In that case, we expect the number of bacteria in the petri dish to double every hour.
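In code, both examples are simple compound growth (a sketch using the numbers from the text):

```python
# Human population: 3 surviving children per woman and a 1:1 sex ratio
# means each generation multiplies the population by 3/2.
population = 1_000_000
for generation in range(1, 6):
    population *= 1.5
    print(f"generation {generation}: {population:,.0f}")

# Bacteria doubling every hour: N(t) = N0 * 2**t.
n0, hours = 1000, 8
print(f"after {hours} hours: {n0 * 2 ** hours:,} bacteria")
```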

#5: A large number of factors that multiply together to determine the quantity

Here is a somewhat different story for exponential growth that a number of people have proposed independently. In a recent comment, Ben Kuhn wrote:

One story for exponential growth that I don't see you address (though I didn't read the whole post, so forgive me if I'm wrong) is the possibility of multiplicative costs. For example, perhaps genetic sequencing would be a good case study? There seem to be a lot of multiplicative factors there: amount of coverage, time to get one round of coverage, amount of DNA you need to get one round of coverage, ease of extracting/preparing DNA, error probability... With enough such multiplicative factors, you'll get exponential growth in megabases per dollar by applying the same amount of improvement to each factor sequentially (whereas if the factors were additive you'd get linear improvement).

Note that in order for this growth to come out as close to exponential, it's important that the marginal difficulty, or time, or cost, of addressing the factors is about the same. For instance, if the overall indicator we are interested in is a product pqrs, it may be that in a given year, we can zero in on one of the four factors and reduce that by 5%, but it doesn't matter which one.

A slightly more complicated story is that the choice of what factor we can work on at a given stage is constrained, but the best marginal choices at all stages are roughly as good in proportional terms. For instance, maybe, for our product pqrs, the best way to start is by reducing p by 5%. But once we are done with that, next year the best option is to reduce q by 5%. And then, once that's done, the lowest-hanging fruit is to reduce r by 5%. This differs subtly from the previous one in that we're forced from outside in the decision of what factor to work on at the current margin, but the proportional rate of progress still stays constant.
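A toy illustration of this multiplicative story (my own sketch, with made-up factors): an indicator that is the product of four factors, where each year the lowest-hanging fruit is a 5% reduction in one factor in rotation, declines by a steady 5% per year, i.e., exponentially.

```python
factors = [10.0, 4.0, 2.5, 8.0]  # made-up cost factors p, q, r, s

def indicator(fs):
    product = 1.0
    for f in fs:
        product *= f
    return product

prev = indicator(factors)
for year in range(1, 9):
    factors[(year - 1) % len(factors)] *= 0.95  # reduce one factor by 5%, in rotation
    current = indicator(factors)
    print(year, round(current / prev, 4))  # constant ratio 0.95 each year: exponential decline
    prev = current
```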

However, in the real world, it's highly unlikely that the proportional gains quite stay constant. I mean, if we can reduce p by 5% in the first year and q by 5% in the second year, what really gets in the way of reducing both together? Is it just a matter of throwing more money at the problem?

By the way, one example of rapid progress that does seem to closely hew to the multiplicative model is the progress made on linear programming algorithms. Linear programming involves a fair number of algorithms within algorithms. For instance, solving certain types of systems of linear equations is a major subroutine invoked in the most time-critical component of linear programming.

My overall conclusion is that multiplicative stories are good for explaining why growth is very roughly close to exponential, but they are not strong enough by themselves to explain a very precise exponential growth trend. However, when combined with stories about regularization, they could explain what a priori seems like an unexpectedly precise exponential.

#6: The story of coordination and regularization

Some people have argued that the reason Moore's law (and similar computing paradigms) has held for sufficiently long periods of history is explicit industry roadmaps such as the International Technology Roadmap for Semiconductors. I believe that a roadmap cannot bootstrap the explanation for growth being exponential. If roadmaps could dictate reality so completely, why didn't the roadmap decide on even faster exponential growth, or perhaps superexponential growth? No, the reason for exponential growth must come from more fundamental factors.

But explicit or implicit roadmaps and industry expectations can explain why progress was so close to being precisely exponential. I offer one version of the story.

In a world where just one company is involved with research, manufacturing, and selling to the public, the company would try to invest according to what they expected consumer demand to be (see my earlier post for more on this). Since there aren't strong reasons to believe that consumer needs grow exponentially, nor are there good reasons to believe that progress at resolving successive barriers is close to precisely exponential, an exponential growth story here would be surprising.

Suppose now that the research and manufacturing processes are handled by different types of companies. Let's also suppose that there are many different companies competing at the research level and many different companies competing at the manufacturing level. The manufacturing companies need to make plans for how much to produce and how much raw material to keep handy for the next year, and these plans require having an idea of how far research will progress.

Since no individual manufacturer controls any individual researcher, and since the progress of individual research companies can be erratic, the best bet for manufacturers is to make plans based on estimates of how far researchers are expected to go, rather than on any individual research company's promise. And a reasonable way to make such an estimate is to have an industry-wide roadmap that serves a coordinating purpose. Manufacturers have an incentive to follow the roadmap, because deviating in either direction might result in them having factories that don't produce the right sort of stuff, or have too much or too little capacity. The research companies also have incentives to meet the targets, and in particular, to neither overshoot nor undershoot too much. The reasons for not undershooting are obvious: they don't want to be left behind. But why not overshoot? Since the manufacturers are basing their plans on the technology they expect, a research company overshooting might result in technologies that aren't ready for implementation, so the advantage is illusory. On the other hand, the costs of overshooting (in terms of additional expenditures on research) are all too real.

Thus, the benefits of coordination between different parts of the "supply chain" (in this case, the ideas and the physical manufacturing) lead to greater regularization of the growth trend than one would expect otherwise. If there are reasons to believe that growth is roughly exponential (the multiplicative story could be one such reason) then this could lead to it being far more precisely exponential.

The above explanation is highly speculative and I don't have strong confidence in it.

PS on algorithm improvement

  • If the time taken for an algorithm is described as a sum of products, then only the factors of the dominant summands (in the big-O sense) matter. For simplicity, let's assume that the time taken is a sum of products that are all of the same order as one another.
  • To improve by a given constant of proportionality the time complexity of an algorithm where the time taken is a sum of products of the same order of magnitude, one strategy is to improve each summand by that constant of proportionality. Alternatively, we could improve some summands by a lot more, in which case we'd have to determine the overall improvement as the appropriate weighted average.
  • To improve a particular summand by a particular constant of proportionality, we may improve any one factor of that summand by that constant of proportionality. Or, we may improve all factors of that summand by constants that together multiply to the desired constant of proportionality.
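
A numeric sketch of the last two bullets (the running-time numbers are made up for illustration):

    # Time taken: T = a*b + c*d, two summands of the same order.
    a, b, c, d = 4.0, 6.0, 8.0, 3.0  # illustrative per-factor costs
    T = a * b + c * d                # 24 + 24 = 48

    # Improve one factor of each summand by the same constant (2x):
    T1 = (a / 2) * b + (c / 2) * d   # 12 + 12 = 24, a 2x overall improvement

    # Improve both factors of one summand only (4x on that summand):
    T2 = (a / 2) * (b / 2) + c * d   # 6 + 24 = 30; the overall improvement is
                                     # the weighted average, 48 / 30 = 1.6x
    print(T, T1, T2)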

Open Thread April 16 - April 22, 2014

4 Tenoke 16 April 2014 07:05AM

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Different time horizons for forecasting

1 VipulNaik 16 April 2014 03:30AM

Disclaimer: This post contains some very preliminary thoughts on a topic that I believe would be of interest to some people here. There are probably better expositions on the subject that I haven't been able to find. If you know of such expositions, I'd appreciate being pointed to them.

There are qualitative differences between the types of forecasting that are feasible, or most suitable, for different time horizons. In this post, I discuss some of the possibilities for such time horizons and the forecasts that can be made over them.

The present (today)

Predicting the present doesn't involve prediction so much as it involves measurement. But that doesn't mean it's a slam dunk: one still needs to make a lot of measurements to come up with precise and accurate quantities. One cannot simply count the entire population of a region in one stroke. Doing so requires planning and a detailed infrastructure. And in many cases, it's not possible to measure perfectly, so we measure in part and then use theory (such as sampling theory) to extrapolate from there.
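
As a minimal sketch of that measure-and-extrapolate step (all the numbers here are invented), this is how sampling theory scales a partial count up to a population-level estimate with an error bar:

    import math

    # Illustrative survey: 2,000 households sampled out of 1,000,000 total,
    # with an average of 2.6 residents per sampled household.
    n_sampled = 2000
    total_households = 1_000_000
    sample_mean = 2.6  # residents per sampled household
    sample_sd = 1.3    # standard deviation within the sample

    # Scale up, ignoring the finite-population correction for simplicity.
    population_estimate = sample_mean * total_households
    standard_error = sample_sd / math.sqrt(n_sampled) * total_households

    print(f"estimate: {population_estimate:,.0f} "
          f"+/- {1.96 * standard_error:,.0f} (95% CI)")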

The very near future (tomorrow)

The very near future differs from the present in that it cannot be measured directly, but measuring it is often no more complicated than measuring the present. In a discrete model, it's the next step beyond the present. An example of a tomorrow prediction is: "what restaurants will be open in the city of Chicago tomorrow?" Any restaurant that will be open tomorrow most likely is either already operating today or has applied to open tomorrow. In either case, a good stock-taking of the situation today would give a clear idea of what's in store for tomorrow. Another example is when people make projections about employment or GDP based on asking people about their estimated workforce sizes or production levels in the near future.

Predictions about the near future involve a combination of the following:

  • assuming persistence from the present
  • asking people for their intentions and estimates
  • identifying and adjusting for any major sources of difference between today and tomorrow. In the restaurant case, an example of a major source of difference would be if "tomorrow" happened to be a major festival where restaurants customarily closed.

Who forecasts the very near future? As it turns out, a lot of people. I gave examples of economic indicator estimates based on surveys of representative samples of the economy. Also, I believe (I don't have an inside view here) that industry associations and trade journals function this way: they get data from all their members on their production plans, then they pool together the data and publish comprehensive information so that the industry as a whole is well-informed about production plans, and can think a step ahead. (SEMI might be an example).

The near but not very near future, or a few steps down the line

For the future that's a little farther out than tomorrow, simply assuming persistence or asking people isn't good enough. Persistence doesn't work because even though each day is highly correlated to the next, the correlation weakens as we separate the days out more and more. Asking people for their intentions doesn't work because people themselves are reacting to each other. For inanimate systems, different components of the system interact with each other.

This is probably the time horizon where some sort of formal model or computer simulation works best. For instance, weather models for the next 5 days or so perform somewhat better than the fallback options of persistence and climatology, and in the 5-10 day range they perform somewhat but not a lot better than climatology. Beyond 10 days, climatology generally wins.
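
For concreteness, here is a sketch of those two fallback forecasts (the temperature series is synthetic):

    import statistics

    # Two baseline forecasts for a daily temperature series:
    #   persistence: tomorrow will look like today
    #   climatology: tomorrow will look like the long-run average
    temps = [14, 15, 17, 16, 13, 12, 15, 18, 17, 16]  # synthetic daily highs

    def persistence_forecast(history):
        return history[-1]               # repeat the latest observation

    def climatology_forecast(history):
        return statistics.mean(history)  # long-run average

    print(persistence_forecast(temps), round(climatology_forecast(temps), 1))

A weather model earns its keep only to the extent that its errors beat both of these baselines.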

Similarly, this sort of modeling might work well for estimating GDP changes over two or three quarters, because the model can account for how the changes in one quarter (the very near future) will have ripple effects for another quarter, and then another.

The problem with such models is that they quickly lose coherence. Small variations in initial assumptions, to a level that we cannot hope to measure precisely, start having huge potential ripple effects. Model uncertainty also gets in the way. The range of possibilities is so large that we might as well get to more general long-term models.
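
This is the classic phenomenon of sensitivity to initial conditions. A standard toy demonstration (the logistic map, not an actual forecasting model) shows two trajectories that start one part in a million apart and end up bearing no resemblance to each other within a few dozen steps:

    # Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4): two starting
    # points differing by one part in a million diverge completely.
    r = 4.0
    x, y = 0.400000, 0.400001
    for step in range(1, 41):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print(step, round(x, 4), round(y, 4), "gap:", round(abs(x - y), 4))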

What is the value of making such predictions? The case of weather prediction is obvious: predicting extreme weather events saves lives, and even making more mundane predictions can help people plan their outdoor events and travel and can help transportation services better manage their services. Similar predictions in the economic or business realm can also help.

The organizations that specialize in this sort of prediction tend to be the same as the ones predicting the very near future, probably because they already have all the data, and so it's easiest for them to run the relevant models.

The medium-term future

This is the part of the future where general domain-specific phenomena might be useful. In the case of weather, the medium-term future is general climatology: how warm are summers, and how cold are winters? When does a place get rain?

Computer simulations have decohered, and formal models that are sufficiently realistic in the short term get too complicated. So what do we use? General domain-specific phenomena, including information about equilibrating and balancing influences and positive and negative feedback mechanisms. Trend extrapolation, in the (rare?) cases where it's justified. Reality checks based on considerations of the sizes and growth potentials of different industries and markets.

The medium-term future is the time horizon where:

  • New companies can be started
  • City-level transportation systems can be built
  • Companies can make large-scale capital investments in new product lines and begin reaping the profits from them
  • Government policies, such as overhauls to health care legislation or migration policy, can be implemented and their initial effects be seen

My very crude sense is that this is the highest-leverage area for improvements in forecasting capabilities at the current margin. It's far out enough that major preparatory, preventative, and corrective steps can be taken. It's near enough that the results can actually be seen and can be used to incentivize current decision makers. It's far enough out that direct simulation or intricate models don't stay coherent, but near enough that intuitions derived from present conditions, combined with general domain-specific knowledge, continue to be broadly valid.

The long-term future

The dividing line between the medium-term and long-term future is unclear. One possible way of distinguishing between the two is that the medium-term future is heavily grounded in timelines. It's specifically interested in asking what will happen in a particular interval of time, or when a particular milestone will be achieved. With the long-term future, on the other hand, timelines are too fuzzy to even be useful. Rather, we are interested simply in filling in the details of what it might look like. A discussion of a world that's 3 degrees Celsius warmer, or of space travel, or of a post-singularity world, or of a world that is solar-powered, might fit this "long-term" moniker. Robin Hanson's discussion of long-term growth and the multiple modes of such growth also fits this "long-term" category.

With the long-term future, simply painting futuristic visions, informed by a broad understanding of theory to separate the plausible from the implausible, might be a better bet than reasoning outward from the present moment in time or from the "climatology" of the world today. Indeed, as I noted in my discussion of Megamistakes, there may well be a negative correlation between having a clear vision of the future in that sense and being able to make good timed predictions for the medium term.

With the long-term future, are there, or should there be, incentives to be accurate? No. Rather, the incentives may be in the direction of painting plausible (even if improbable) future scenarios, with the dual goal of preparing for them and influencing the probability of achieving them. This means dampening the probability of the catastrophic scenarios (even if they're low-probability to begin with) and increasing the probability of, perhaps even directly working towards, the good scenarios. On the good-scenario side, a futurist with a rosy vision of the future might write a science fiction or speculative science book that, a generation or two later, inspires an entrepreneur, scientist, or engineer to go build one of those highly futuristic items.

Nick Beckstead's research on the overwhelming importance of shaping the far future makes the relevant philosophical arguments.

I could probably split up the long term further. I'm not sure what some natural ways of performing such a split might be, and I also don't think it's relevant for my purposes, because most long-term forecasts are hard to evaluate anyway.

PS: My post on the logarithmic timeline was a result of similar thinking, but the two ended up being on different topics. This post is about the qualitative differences between time horizons; that post is about having a standard for comparing forecasts over different time intervals in the future.
