
Ergonomics Revisited

1 diegocaleiro 22 April 2014 09:57PM

Continuation of: Spend Money on Ergonomics, by Kevin

 

Three years have elapsed since Kevin wisely told us to spend money on treating our bodies well. It may be time to check for new gadgets and to take stock of what has worked and what has not.

If you have purchased an item for this purpose, or intend to buy one and don't know which, tell us here or ask here.

Nick Bostrom uses a mouse that looks like a plane controller joystick. 

I've seen keyboards that bend sideways, that are concave, that are convex, and that look like a sphere. 

At FHI, dozens of books are used so that computer screens stay at eye level or above. 

But I am no expert and I have not looked into it myself, nor would I know how to. So please share in the comments the best knowledge about:

Keyboards

Mice

Chairs

Balls to sit on

Pillows

Beds/Mattresses, etc.

Screens - Size, position, brightness etc... 

Other household office items - Stairs, Handles, Shower etc... 

 

[Link] Keeping organs alive on their own

1 polymathwannabe 22 April 2014 05:45PM

"A new medical device is keeping hearts warm and beating during transport, something that could be a major breakthrough in transplant history."

Video (the episode contains other news as well):

http://www.aljazeera.com/programmes/techknow/2014/04/heart-box-201442013591803545.html

The track record of survey-based macroeconomic forecasting

4 VipulNaik 22 April 2014 04:57AM

I'm interested in forecasting, and one of the areas where plenty of forecasting has been done is macroeconomic indicators. This post looks at what's known about macroeconomic forecasting.

Macroeconomic indicators such as total GDP, GDP per capita, inflation, unemployment, etc. are reported through direct measurement every so often (on a yearly, quarterly, or monthly basis). A number of organizations publish forecasts of these values, and the forecasts can eventually be compared against the actual values. Some of these forecasts are consensus forecasts: they involve polling a number of experts on the subject and aggregating the responses (for instance, by taking an arithmetic mean or geometric mean or appropriate weighted variant of either). We can therefore try to measure the usefulness of the forecasts and the rationality of the forecasters.
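To make the aggregation step concrete, here is a minimal sketch of the three pooling rules just mentioned. The forecast values and the accuracy weights are made up for illustration; none of the surveys discussed below publishes its weighting scheme in this form.

```python
import numpy as np

# Hypothetical panel: five individual forecasts of next-year inflation (%).
forecasts = np.array([1.8, 2.1, 2.4, 1.9, 2.6])

# Equal-weight arithmetic mean: the most common "consensus" definition.
arithmetic = forecasts.mean()

# Geometric mean: an alternative for strictly positive quantities.
geometric = np.exp(np.log(forecasts).mean())

# Weighted mean, e.g. weighting panelists by past accuracy (weights made up).
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
weighted = np.average(forecasts, weights=weights)

print(arithmetic, geometric, weighted)
```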

Why might we want to measure this usefulness and rationality? There could be two main motivations:

  1. A better understanding of macroeconomic indicators and whether and how we can forecast them well.
  2. A better understanding of forecasting as a domain as well as the rationality of forecasters and the inherent difficulties in forecasting.

My interest in the subject stems largely from (2) rather than (1): I'm trying to understand just how valuable forecasting is. However, the research I cite has motivations that involve some mix of (1) and (2).

Within (2), our interest might be in studying:

  • The usefulness and rationality of individual forecasts (that are part of the consensus) in absolute terms.
  • The usefulness and rationality of the consensus forecast.
  • The usefulness and rationality of individual forecasts relative to the consensus forecasts (treating the consensus forecast as a benchmark for how easy the forecasting task is).
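To make the "relative to the consensus" comparison concrete, here is a minimal sketch (with simulated data, not data from any of the surveys below) that scores each individual forecaster against the consensus benchmark by root mean squared error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated panel: 5 forecasters x 20 quarters, plus the realized values.
actuals = rng.normal(2.0, 1.0, size=20)
individual = actuals + rng.normal(0.0, 0.8, size=(5, 20))  # idiosyncratic errors
consensus = individual.mean(axis=0)

def rmse(pred, truth):
    return np.sqrt(np.mean((pred - truth) ** 2))

benchmark = rmse(consensus, actuals)
for i, row in enumerate(individual):
    # Ratio above 1: this forecaster did worse than the consensus benchmark.
    print(f"forecaster {i}: relative RMSE {rmse(row, actuals) / benchmark:.2f}")
```

When errors are purely idiosyncratic, as simulated here, the consensus beats most individuals simply because averaging cancels noise; that is worth keeping in mind when interpreting "individuals underperform the consensus" results.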

The macroeconomic forecasting discussed here generally falls in the "near but not very near future" category in the framework I outlined in a recent post.

Here is a list of regularly published macroeconomic consensus forecasts. The table is taken from Wikipedia (I added the table to Wikipedia).

| Organization | Forecast name | Individuals surveyed | Countries covered | Countries/regions covered | Frequency | Forecast horizon | Start date |
|---|---|---|---|---|---|---|---|
| Consensus Economics | Consensus Forecasts™ | More than 700 | 85 | Member countries of the G-7 industrialized nations, Asia Pacific, Eastern Europe, and Latin America | Monthly | 24 months | October 1989 |
| FocusEconomics | FocusEconomics Consensus Forecast | Several hundred | More than 70 | Asia, Eastern Europe, Euro Area, Latin America, Nordic economies | Monthly | ? | 1998 |
| Blue Chip Publications division of Aspen Publishers | Blue Chip Economic Indicators | 50+ | 1 | United States | Monthly | ? | 1976 |
| Federal Reserve Bank of Philadelphia | Survey of Professional Forecasters | A few hundred | 1 | United States | Quarterly | 6 quarters, plus a few longer-range forecasts | 1968 |
| European Central Bank | ECB Survey of Professional Forecasters | ? | ? | Europe | Quarterly | Two and six quarters ahead, plus the current and next two calendar years | 1999 |
| Federal Reserve Bank of Philadelphia | Livingston Survey | ? | 1 | United States | Bi-annually (June and December) | 6 and 12 months ahead, plus some forecasts two years out | 1946 |

Strengths and weaknesses of the different surveys

  • Time series available: The surveys that have been around longer, such as the Livingston Survey (started 1946), Survey of Professional Forecasters (started 1968) and the Blue Chip Economic Indicators (started 1976) have accumulated a larger time series of data. This allows for more interesting analysis.
  • Number of regions for which macroeconomic indicators are forecast: The surveys that cover a larger number of countries, such as the Consensus Forecasts™ (85 countries) and the FocusEconomics Consensus Forecast (over 70 countries), can be used to study hypotheses about differences in the accuracy of and bias in forecasts by country.
  • Time that people are asked to forecast ahead, frequency of forecast, and number of different forecasts (at different points in time) for the same indicator: Surveys differ in how far ahead people have to forecast, how frequently the forecasts are published, and the number of different times a particular quantity is forecast. For instance, the Consensus Forecasts™ includes forecasts for the next 24 months and is published monthly, so we have 24 different forecasts of any given quantity, with the forecasts made at time points separated by a month each. This is at the upper end. The Survey of Professional Forecasters publishes at a quarterly frequency and includes macroeconomic indicator forecasts for the next 6 quarters. This covers a similar time interval to the Consensus Forecasts™ but yields a smaller number of forecasts for the same quantity because of the lower frequency of publication.
  • Evaluation of individual versus consensus forecasts: For some forecasts (such as those published by the Survey of Professional Forecasters), the published information includes individual forecasts, so we can measure the usefulness and rationality of individual forecasts rather than just that of the consensus forecast. For others, such as Consensus Forecasts™, only the consensus is available, so only more limited tests are possible. Note that the question of the value of individual forecasts and the question of the value of the consensus forecast are both important.

The history of research based on consensus forecast sources

There has been a gradual shift in what consensus forecasts are used in research studying forecasts:

  • Early research on macroeconomic forecasting, in the 1970s, began with a few people collecting their own data by polling experts.
  • In the 1980s, the Livingston bi-annual survey was used as a major data source by researchers.
  • In the late 1980s and through the 1990s, researchers switched to the Survey of Professional Forecasters and the Blue Chip Economic Indicators Survey, with the focus shifting to the latter more over time. Note that the Blue Chip Economic Indicators had been started only in 1976, so it's natural that it took some time for people to have enough data from it to publish research.
  • In the 2000s, research based on Consensus Forecasts™ was added to the mix. Note that Consensus Economics started out in 1989, so it's understandable that research based on it took a while to start getting published.

There has also been a gradual shift in views about forecast accuracy:

  • Early literature in the 1970s and early 1980s found evidence of inaccuracy and bias in forecasts.
  • In the 1990s, as the literature started looking at forecasts that polled more people and were published at higher frequency, the view shifted in the direction of consensus forecasts having very little inaccuracy and bias, while the question of bias in individual forecasts remained more hotly contested.
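A standard tool behind many of these accuracy-and-bias results is the Mincer-Zarnowitz regression: regress realized values on forecast values and test whether the intercept is 0 and the slope is 1. A minimal sketch with simulated data (the papers tabulated below use variations and refinements of this idea):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: 40 quarterly forecasts and the realized values (%).
forecasts = rng.normal(2.0, 0.5, size=40)
actuals = forecasts + rng.normal(0.0, 0.3, size=40)  # unbiased by construction

# Mincer-Zarnowitz regression: actual = a + b * forecast + error.
X = sm.add_constant(forecasts)
fit = sm.OLS(actuals, X).fit()
print(fit.params)  # estimates of (a, b)

# Joint test of a = 0 and b = 1; rejection suggests biased or inefficient forecasts.
print(fit.f_test("const = 0, x1 = 1"))
```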

Tabulated bibliography (not comprehensive, but intended to cover a reasonably representative sample)

| Paper | Forecast used | Conclusion about efficiency and bias of individual and consensus forecasts |
|---|---|---|
| McNees (1978) | Own data (3 people, 4 quarterly forecasts) | Some forecasts are biased, and forecasters are not rational. |
| Figlewski and Wachtel (1981) | Livingston Survey | Inflationary expectations are more consistent with the adaptive expectations hypothesis than the rational expectations hypothesis. The paper was critiqued by Dietrich and Joines (1983), and the authors responded in Figlewski and Wachtel (1983). |
| Keane and Runkle (1990) | Survey of Professional Forecasters (called the ASA-NBER survey at the time) | Individual forecasters appear rational, although rationality is not established conclusively. Methodological problems are noted with past literature arguing for irrationality and bias in individual forecasts. |
| Swidler and Ketchler (February 1990) | Blue Chip Economic Indicators | Consensus forecasts are unbiased and efficient. Does not appear to look at individual forecasts. |
| Batchelor and Dua (November 1991) | Blue Chip Economic Indicators | Consensus forecasts are unbiased, but some individual forecasts are biased. |
| Ehrbeck and Waldmann (1996) | North-Holland Economic Forecasts | From the abstract: "Professional forecasters may not simply aim to minimize expected squared forecast errors. In models with repeated forecasts the pattern of forecasts reveals valuable information about the forecasters even before the outcome is realized. Rational forecasters will compromise between minimizing errors and mimicking prediction patterns typical of able forecasters. Simple models based on this argument imply that forecasts are biased in the direction of forecasts typical of able forecasters. Our models of strategic bias are rejected empirically as forecasts are biased in directions typical of forecasters with large mean squared forecast errors. This observation is consistent with behavioral explanations of forecast bias." |
| Stark (1997) | Survey of Professional Forecasters | Attempts to replicate, for the Survey of Professional Forecasters, the results of Lamont (1995) for the Business Week survey that forecasters get more radical as they gain experience. Finds that the results do not replicate, and posits an explanation for this. |
| Laster, Bennett, and Geoum (1999) | Blue Chip Economic Indicators | Individual forecasters are biased. The paper describes a theory for how such bias might be rational given the incentives facing forecasters. The empirical data is a sanity check rather than the focus of the paper. |
| Batchelor (2001) (ungated early draft here) | Consensus Forecasts™ | Does not discuss bias in Consensus Forecasts™ per se, but notes that it is better than the IMF and OECD forecasts and that incorporating information from those forecasts does not improve upon Consensus Forecasts™. |
| Ottaviani and Sorensen (2006) | (none; discusses a general theoretical model) | From the abstract: "We develop and compare two theories of professional forecasters' strategic behavior. The first theory, reputational cheap talk, posits that forecasters endeavor to convince the market that they are well informed. The market evaluates their forecasting talent on the basis of the forecasts and the realized state. If the market expects forecasters to report their posterior expectations honestly, then forecasts are shaded toward the prior mean. With correct market expectations, equilibrium forecasts are imprecise but not shaded. The second theory posits that forecasters compete in a forecasting contest with pre-specified rules. In a winner-take-all contest, equilibrium forecasts are excessively differentiated." |
| Batchelor (2007) | Consensus Forecasts™ | Consensus forecasts are unbiased; some individual forecasts are biased. But the persistent optimism and pessimism of some forecasters seems inconsistent with existing models of rational bias. |
| Ager, Kappler, and Osterloh (2009) (ungated version) | Consensus Forecasts™ | There are consistently biased forecasts for some countries, but not for all. A lack of information efficiency is more severe for GDP forecasts than for inflation forecasts. |

The following overall conclusions seem to emerge from the literature:

  • For mature and well-understood economies such as that of the United States, consensus forecasts are not notably biased or inefficient. In cases where they miss the mark, this can usually be attributed to insufficient information or shocks to the economy.
  • There may, however, be some countries, particularly those whose economies are not sufficiently well understood, where the consensus forecasts are more biased.
  • The evidence on whether individual forecasts are biased or inefficient is more murky, but the research generally points in the direction of some individual forecasts being biased. Some people have posited a "rational bias" theory where forecasters have incentives to choose a value that is plausible but not the most likely in order to maximize their chances of getting a successful unexpected prediction. We can think of this as an example of product differentiation. Other sources and theories of rational bias have also been posited, but there is no consensus in the literature on whether and how these are sufficient to explain observed individual bias.

Some addenda

  • A Forbes article recommends that business people who need economic forecasts for their business plans use the standard sources rather than aiming for something fancier.
  • There are some other forecasts I didn't list here, such as the Greenbook forecasts, IMF's World Economic Outlook, and OECD Economic Outlook. As far as I could make out, these are not generated through a consensus forecast procedure. They involve some combination of models and human judgment and discussion. The bibliography I tabulated above includes Batchelor (2001), which found that the Consensus Forecasts™ outperformed the OECD and IMF forecasts. Some research on the Greenbook forecasts can be found in the footnotes on the Wikipedia page about Greenbook. I didn't think these were sufficiently germane to be included in the main bibliography.

How do you approach the problem of social discovery?

15 InquilineKea 21 April 2014 09:05PM

As in, how do you find and meet the right people to talk to? Presumably, they would have personality fit with you, and be high on both intelligence and openness. Furthermore, they would be at the point in their life where they are willing to spend time with you (although sometimes you can learn a lot from people simply by friending them on Facebook and just observing their feeds from time to time).

Historically, I've made myself extremely stalkable on the Internet. In retrospect, I believe that this "decision" is on the order of one of the very best decisions I've ever made in my life, and has made me better at social discovery than most people I know, despite my dual social anxiety and Asperger's. In fact, if a more extroverted non-Aspie could do the same thing, I think they could do WONDERS with developing an online profile.

I've also increasingly realized that social discovery is often more rewarding when done with teenagers. You can do so much to impact teenagers, and they often tend to be a lot more open to your ideas/musings (as long as you're responsible).

But I've wondered - how else have you done it? Especially in real life? What are some other questions you ask with respect to social discovery? I tend to avoid real life for social discovery simply because it's extremely hit-and-miss, but I've discovered (from Richard Florida's books) that the Internet often strengthens real-life interaction because it makes it so much easier to discover other people in real life (and then it's in real life when you can really get to know people).

Meetup : Ottawa - How to Run a Successful Less Wrong Meetup Group

0 amacfie 21 April 2014 05:41PM

Discussion article for the meetup : Ottawa - How to Run a Successful Less Wrong Meetup Group

WHEN: 30 April 2014 07:30:00PM (-0400)

WHERE: Royal Oak on the Canal, 221 Echo Dr, Ottawa, ON K1S 1N1, Canada

We'll go through and discuss the How to Run a Successful Less Wrong Meetup Group document, try some fun activities, and go meta: a meetup about meetups. There'll be a "LW" sign on the table.


Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]

9 Dr_Manhattan 21 April 2014 04:55PM

http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html

Very surprised no one has linked to this yet:

TL;DR: AI is a very underfunded existential risk.

Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm to the cause. Also, too bad this was the Huffington Post rather than somewhere more respectable. With some thought I think we could've gotten the list to be more inclusive and found a better publication; still, I think this is pretty huge.

 

Meetup : Effective Altruism 102 (NYC)

0 Raemon 21 April 2014 02:53PM

Discussion article for the meetup : Effective Altruism 102 (NYC)

WHEN: 26 April 2014 05:00:00PM (-0400)

WHERE: 851 Park Place, Brooklyn NY 11216

It's been a while since we explicitly discussed Effective Altruism. The movement has changed a lot in the past couple years:

• There's a much more deliberate focus on entrepreneurship

• Givewell is spinning off Givewell Labs to explore more complex but high payoff opportunities

• MIRI has shifted focus, emphasizing math workshops and outreach to current-generation Narrow AI Safety experts in addition to Artificial General Intelligence researchers

Early Saturday evening, April 26th, we'll have a series of short talks about the state of the movement and opportunities you can pursue, including:

1) How to think strategically about doing good.

2) How to switch careers effectively.

3) Recent updates by organizations in the Effective Altruism movement.

WHEN+WHERE: Saturday, April 26th, 5:00 PM - 6:30 PM, Highgarden House, 851 Park Place, Brooklyn NY 11216


Meetup : Munich Meetup

0 cadac 21 April 2014 01:36PM

Discussion article for the meetup : Munich Meetup

WHEN: 11 May 2014 02:00:00PM (+0200)

WHERE: Theresienstraße 41, 80333 München

You are invited to come to the May Munich LW Meetup! One of our regulars will give a short talk, probably about meditation. Everyone is welcome to bring articles to discuss, rationality-related games to play, etc. Like in April, we're planning to meet outside the mathematics building at the LMU. Depending on the weather, we'll stay outside or occupy a free room inside the math department. Whoever brings food for the group is awesome. :) It goes without saying that newcomers are very welcome.


Open thread, 21-27 April 2014

2 Metus 21 April 2014 10:54AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Thread started before the end of the last thread to encourage Monday as the first day.

Utilitarian discernment bleg

0 VipulNaik 20 April 2014 11:33PM

People who're engaging in learning partly or wholly for the explicit purpose of acquiring human capital need to be strategic about their learning choices. Only some subjects develop human capital useful to the person's goals. Within each subject, only some subtopics develop useful human capital. Even within a particular course, the material covered in some weeks could be highly relevant, while the material covered in other weeks may not be relevant at all. Therefore, learners need to be discerning in figuring out what material to focus their learning effort on and what material to just skim, or even ignore.

Such discernment is most relevant for self-learners who are unconstrained by formal mastery requirements of courses. Self-learners may of course be motivated by many concerns other than human capital acquisition. In particular, they may be learning for pure consumptive reasons, or to signal their smarts to friends. But at any rate, they have more flexibility than people in courses and therefore they can gain more from better discernment.

Those who're taking courses primarily for signaling purposes need to acquire sufficient mastery to attain their desired grade, but even here, they have considerable flexibility:

  • People who're already able to get the top grade without stretching themselves too much have flexibility in how to allocate additional time. Should they try to acquire some more mastery of the entire curriculum, or delve deeper into one topic?
  • People who're far from getting a top grade may have the same grade-per-unit-effort payoff from delving deep into one subtopic or acquiring a shallow understanding of many topics. Considerations regarding long-term human capital acquisition can then help them decide what path to pursue among paths that confer roughly similar signaling benefits.

What self-learners and people with some flexibility in a formal learning situation need is what I call utilitarian discernment: the ability to figure out what stuff to concentrate on. Ideally, they should be able to figure this out relatively easily:

  1. Sequencing within the course: Important topics are often foundational and therefore done early on.
  2. Relative time and emphasis placed on topics should give an indicator of their relative importance.
  3. Important topics should be explicitly marked as important by course texts, videos, and syllabi.
  4. Important topics should receive more emphasis in end-of-course assessments.
  5. Important topics should be more frequently listed as prerequisites in follow-on courses covering the sort of material the learner wants to do next.
  6. The learner can consult friends and websites: This includes more advanced students and subject matter experts, as well as online sources such as Quora and Less Wrong.

The above heuristics work better than nothing, but I think they still leave a lot to be desired. Some obvious pitfalls:

  1. Sequencing within the course: While important topics are often done early on because they are foundational, they are sometimes done later because they rely on a synthesis of other knowledge.
  2. Relative time and emphasis: Often, courses place more time and emphasis on more difficult topics than more important ones. There's also the element of time and emphasis being placed on topics that subject matter experts find interesting or relevant, rather than topics that are relevant to somebody who does not intend to pursue a lifetime of research in the subject but is learning it to apply it in other subjects. Note also that the signaling story would suggest that more time and emphasis would be given to topics that do a better job at sorting and ranking students' relevant general abilities than to topics that teach relevant knowledge and skills.
  3. Important topics marked as important: This is often the case, but it too fails, because what is important to teachers may differ from what is important to students.
  4. Emphasis in end-of-course assessments: The relative weight given to topics in end-of-course assessments is often in proportion to the time spent on the topics rather than to their relative importance, bringing us back to (2).
  5. Important topics should be more frequently listed as prerequisites: This would work well if somebody actually compiled and combined prerequisites for all follow-on courses, but this is a labor-intensive exercise that few people have engaged in.
  6. The learner can consult friends and websites: Friends who lack strong subject matter knowledge may simply be guessing or giving too much weight to their personal beliefs and experiences. Many of them may not even remember the material enough to offer an informed judgment. Those who have subject matter knowledge may be too focused on academic relevance within the subject rather than real-world relevance outside it. People may also be biased (in either direction) about how a particular topic taught them general analytical skills because they fail to consider other counterfactual topics that could have achieved a similar effect.

In light of these pitfalls, I'm interested in developing general guidelines for improving one's utilitarian discernment. For this purpose, I list some example head-to-head contest questions. I'd like it if commenters indicated a clear choice of winner for each head-to-head contest (you don't have to indicate a choice of winner for every one, but I would prefer a clear choice rather than lots of branch cases within each contest), then explained their reasoning and how somebody without an inside view or relevant expertise could have come to the same conclusion. For some of the choices I've listed, I think the winner should be clear, whereas for others, the contest is closer. Note that the numbering in this list is independent of the preceding numbering.

  1. Middle school and high school mathematics: Manipulating fractions (basic arithmetic operations on fractions) versus solving quadratic equations (you may assume that the treatment of quadratic equations does not require detailed knowledge of fractions)
  2. High school physics: Classical mechanics versus geometrical optics
  3. Precalculus/functions: Logarithmic and exponential functions versus trigonometric functions
  4. Differential calculus: Conceptual definition of derivative as a limit of a difference quotient versus differentiation of trigonometric functions
  5. Integral calculus and applications: Integration of rational functions versus solution strategy for separable differential equations
  6. Physical chemistry: Stoichiometry versus chemical kinetics
  7. Basic biology: Cell biology versus plant taxonomy
  8. Microeconomics: Supply and demand curves versus adverse selection

PS: The examples chosen here are all standard topics in the sciences and social sciences ranging from middle school to early college, but my question is more general. I didn't have enough domain knowledge to come up with quick examples of self-learning head-to-head contests for other domains or for learning at other stages of life, but feel free to discuss these in the comments.

Human capital or signaling? No, it's about doing the Right Thing and acquiring karma

16 VipulNaik 20 April 2014 09:04PM

There's a huge debate among economists of education on whether the positive relationship between educational attainment and income is due to human capital, signaling, or ability bias. But what do the students themselves believe? Bryan Caplan has argued that students' actions (for instance, their not sitting in for free on classes and their rejoicing at class cancellation) suggest a belief in the signaling model of education. At the same time, he notes that students may not fully believe the signaling model, and that shifting in the direction of that belief might improve individual educational attainment.

Still, something seems wrong about the view that most people believe in the signaling model of education. While their actions are consistent with that view, I don't think they frame it quite that way. I don't think they usually think of it as "education is useless, but I'll go through it anyway because that allows me to signal to potential employers that I have the necessary intelligence and personality traits to succeed on the job." Instead, I believe that people's model of school education is linked to the idea of karma: they do what the System wants them to do, because that's their duty and the Right Thing to do. Many of them also expect that if they do the Right Thing, and fulfill their duties well, then the System shall reward them with financial security and a rewarding life. Others may take a more fateful stance, saying that it's not up to them to judge what the System has in store for them, but they still need to do the Right Thing.

The case of the devout Christian

Consider a reasonably devout Christian who goes to church regularly. For such a person, going to church, and living a life in accordance with (his understanding of) Christian ethics is part of what he's supposed to do. God will take care of him as long as he does his job well. In the long run, God will reward good behavior and doing the Right Thing, but it's not for him to question God's actions.

Such a person might look bemused if you asked him, "Are you a practicing Christian because you believe in the prudential value of Christian teachings (the "human capital" theory) or because you want to give God the impression that you are worthy of being rewarded (the "signaling" theory)?" Why? Partly, because the person attributes omniscience, omnipotence, and omnibenevolence to God, so that the very idea of having a conceptual distinction between what's right and how to impress God seems wrong. Yes, he does expect that God will take care of him and reward him for his goodness (the "signaling" theory). Yes, he also believes that the Christian teachings are prudent (the "human capital" theory). But to him, these are not separate theories but just parts of the general belief in doing right and letting God take care of the rest.

Surely not all Christians are like this. Some might be extreme signalers: they may be deliberately trying to optimize for (what they believe to be) God's favor and maximizing the probability of making the cut to Heaven. Others might believe truly in the prudence of God's teachings and think that any rewards that flow are because the advice makes sense at the worldly level (in terms of the non-divine consequences of actions) rather than because God is impressed by the signals they're sending him through those actions. There are also a number of devout Christians I personally know who, regardless of their views on the matter, would be happy to entertain, examine, and discuss such hypotheses without feeling bemused. Still, I suspect the majority of Christians don't separate the issue, and many might even be offended at second-guessing God.

Note: I selected Christianity and a male sex just for ease of description; similar ideas apply to other religions and the female sex. Also note that in theory, some religious sects emphasize free will and others emphasize determinism more, but it's not clear to me how much effect this has on people's mental models on the ground.

The schoolhouse as church: why human capital and signaling sound ridiculous

Just as many people believe in following God's path and letting Him take care of the rewards, many people believe that by doing the Right Thing educationally (being a Good Student and jumping through the appropriate hoops through correctly applied sincere effort) they're doing their bit for the System. These people might be bemused at the cynicism involved in separating out "human capital" and "signaling" theories of education.

Again, not everybody is like this. Some people are extreme signalers: they openly claim that school builds no useful skills, but grades are necessary to impress future employers, mates, and society at large. Some are human capital extremists: they openly claim that the main purpose is to acquire a strong foundation of knowledge, and they continue to do so even when the incentive from the perspective of grades is low. Some are consumption extremists: they believe in learning because it's fun and intellectually stimulating. And some strategically combine these approaches. Yet, none of these categories describe most people.

I've had students who worked considerably harder on courses than the bare minimum effort needed to get an A. This is despite the fact that they aren't deeply interested in the subject, don't believe it will be useful in later life, and aren't likely to remember it for too long anyway. I think that the karma explanation fits best: people develop an image of themselves as Good Students who do their duty and fulfill their role in the system. They strive hard to fulfill that image, often going somewhat overboard beyond the bare minimum needed for signaling purposes, while still not trying to learn in ways that optimize for human capital acquisition. There are of course many other people who claim to aspire to the label of Good Student because it's the Right Thing, and consider it a failing of virtue that they don't currently qualify as Good Students. Of course, that's what they say, and social desirability bias might play a role in individuals' statements,  but the very fact that people consider such views socially desirable indicates the strong societal belief in being a Good Student and doing one's academic duty.

If you presented the signaling hypothesis to self-identified Good Students they'd probably be insulted. It's like telling a devout Christian that he's in it only to curry favor with God. At the same time, the human capital hypothesis might also seem ridiculous to them in light of their actual actions and experiences: they know they don't remember or understand the material too well. Thinking of it as doing their bit for the System because it's the Right Thing to do seems both noble and realistic.

The impressive success of this approach

At the individual level, this works! Regardless of the relative roles of human capital, signaling, and ability bias, people who go through higher levels of education and get better grades tend to earn better and get more high-status jobs than others. People who transform themselves from being bad students to good students often see rewards both academically and in later life in the form of better jobs. This could again be human capital, signaling, or ability bias. The ability bias explanation is plausible because it requires a lot of ability to turn from a bad student into a good student, about the same as it does to be a good student from the get-go or perhaps even more because transforming oneself is a difficult task.

Can one do better?

Doing what the System commands can be reasonably satisfying, and even rewarding. But for many people, and particularly for the people who do the most impressive things, it's not necessarily the optimal path. This is because the System isn't designed to maximize every individual's success or life satisfaction, or even to optimize things for society as a whole. It's based on a series of adjustments driven by squabbling between competing interests. It could be a lot worse, but a motivated person could do better.

Also note that being a Good Student is fundamentally different from being a Good Worker. A worker, whether directly serving customers or reporting to a boss, is producing stuff that other people value. So, at least in principle, being a better worker translates to more gains for the customers. This means that a Good Worker is contributing to the System in a literal sense, and by doing a better job, directly adds more value. But this sort of reasoning doesn't apply to Good Students, because the actions of students qua students aren't producing direct value. Their value is largely their consumption value to the students themselves and their instrumental value to the students' current and later life choices.

Many of the qualities that define a Good Student are qualities that are desirable in other contexts as well. In particular, good study habits are valuable not just in school but in any form of research that relies on intellectual comprehension and synthesis (this may be an example of the human capital gains from education, except that I don't think most students acquire good study habits). So, one thing to learn from the Good Student model is good study habits. General traits of conscientiousness, hard work, and willingness to work beyond the bare minimum needed for signaling purposes are also valuable to learn and practice.

But the Good Student model breaks down when it comes to acquiring perspective about how to prioritize between different subjects, and how to actually learn and do things of direct value. A common example is perfectionism. The Good Student may spend hours practicing calculus to get a perfect score in the test, far beyond what's necessary to get an A in the class or an AP BC 5, and yet not acquire a conceptual understanding of calculus or learn calculus in a way that would stick. Such a student has acquired a lot of karma, but has failed from both the human capital perspective (in not acquiring durable human capital) and the signaling perspective (in spending more effort than is needed for the signal). In an ideal world, material would be taught in a way that one can score highly on tests if and only if it serves useful human capital or signaling functions, but this is often not the case.

Thus, I believe it makes sense to critically examine the activities one is pursuing as a student, and ask: "does this serve a useful purpose for me?" The purpose could be human capital, signaling, pure consumption, or something else (such as networking). Consider the following four extreme answers a student may give to why a particular high school or college course matters:

  • Pure signaling: A follow-up might be: "how much effort would I need to put in to get a good return on investment as far as the signaling benefits go?" And then one has to stop at that level, rather than overshoot or undershoot.
  • Pure human capital: A follow-up might be: "how do I learn to maximize the long-term human capital acquired and retained?" In this world, test performance matters only as feedback rather than as the ultimate goal of one's actions. Rather than trying to practice for hours on end to get a perfect score on a test, more effort will go into learning in ways that increase the probability of long-term retention in ways that are likely to prove useful later on. (As mentioned above, in an ideal world, these goals would converge).
  • Pure consumption: A follow-up might be: "how much effort should I put in in order to get the maximum enjoyment and stimulation (or other forms of consumptive experience), without feeling stressed or burdened by the material?"
  • Pure networking: A follow-up might be: "how do I optimize my course experience to maximize the extent to which I'm able to network with fellow students and instructors?"

One might also believe that some combination of these explanations applies. For instance, a mixed human capital-cum-signaling explanation might recommend that one study all topics well enough to get an A, and then concentrate on acquiring a durable understanding of the few subtopics that one believes are needed for long-term knowledge and skills. For instance, a mastery of fractions matters a lot more than a mastery of quadratic equations, so a student preparing for a middle school or high school algebra course might choose to learn both at a basic level but get a really deep understanding of fractions. Similarly, in calculus, having a clear idea of what a function and derivative means matters a lot more than knowing how to differentiate trigonometric functions, so a student may superficially understand all aspects (to get the signaling benefits of a good grade) but dig deep into the concept of functions and the conceptual definition of derivatives (to acquire useful human capital). By thinking clearly about this, one may realize that perfecting one's ability to differentiate complicated trigonometric function expressions or integrate complicated rational functions may not be valuable from either a human capital perspective or a signaling perspective.

Ultimately, the changes wrought by consciously thinking about these issues are not too dramatic. Even though the System is suboptimal, it's locally optimal in small ways and one is constrained in one's actions in any case. But the changes can nevertheless add up to lead one to be more strategic and less stressed, do better on all fronts (human capital, signaling, and consumption), and discover opportunities one might otherwise have missed.

LSD, Meditation, Enlightenment, and Ego Death

6 Fink 20 April 2014 07:41PM

A little background information first: I'm a computer science/neuroscience dual-major in my junior year of university. AGI is what I really want to work on, and I'm especially interested in Goertzel's OpenCog. Unfortunately I do not have nearly the understanding of the human mind I would like, let alone the knowledge of how to make a new one.

DavidM's post on meditation is particularly interesting to me. I've been practicing mindfulness-based meditation techniques for some time now and I've seen some solid results but the concept of 'enlightenment' was always appealing to me, and I've always wanted to know if such a thing existed. I have been practicing his technique for a few weeks now and although it is difficult I believe I understand what he means by 'vibrations' in your attentional focus.

I've experimented with psilocybin mushrooms for about a year now. Mostly for fun, sometimes for better understanding my own brain. Light doses have enhanced my perception and led me to re-evaluate my life from a different perspective, although I am never as clear-headed as I would like.

I've read that LSD provides a 'cleaner' experience while avoiding some of the thought-loops of mushrooms, it also lasts much longer. Stanislav Grof once said that LSD can be to psychology what the microscope is to biology, with deep introspection we can view our thoughts coalesce. After months of looking for a reliable producer and several 'look-alike' drugs I finally obtained a few doses of LSD. Satisfied that it was the real thing I took a single dose and fell into my standard meditation session, trying to keep my concentration on the breath.

I experienced what Wikipedia calls 'ego death'. That is, I felt my 'self' splitting into the individual sub-components that form consciousness. Acid is well known for causing synaesthesia, and as I fell deeper into meditation I felt like I could actually see the way sensory experiences interacted with cognitive heuristics and rose to the level of conscious perception. I felt that I could see what 'I' really was, what Douglas Hofstadter referred to as a 'strange loop' looking back on itself, with my perception switching between sensory input, memories, and thought patterns resonating in frequency with DavidM's 'vibrations'. Of course I was under the effects of a hallucinogenic drug, but I felt my experience was quite lucid.

DavidM hasn't posted in years which is a shame because I really want to see his third article and ask him more about it. I will continue practicing his enlightenment meditation techniques in an attempt to try to foster these experiences without the use of drugs. Has anyone here had experiences with psychedelic drugs or transcendental meditation? If so, could you tell me about them?

Meetup : Utrecht

1 SoerenMind 20 April 2014 10:14AM

Discussion article for the meetup : Utrecht

WHEN: 03 May 2014 05:00:00PM (+0200)

WHERE: Utrecht

A growing number of rationalists and effective altruists are joining us to share ideas and to help each other be rational, improve ourselves, and make the world a better place as effectively as possible.

Agenda

The full agenda is to be determined later, but at least we will talk about the charity evaluator GiveWell (http://www.givewell.org/). GiveWell is looking for outstanding giving opportunities: where to give in order to do the most good per dollar or euro spent. How could that be possible? How does GiveWell (try to) do that? If there is another topic you would like to present or discuss with the group, please add the topic here: https://docs.google.com/document/d/16bBtla1iVzkJjie-JK7Ozb9Ao8SbyJ9U924XyaEXTqY/edit . There is room for your questions, personal discussions, smalltalk, etc.

Everyone is invited, and new people will be warmly welcomed! Location is to be determined, probably Utrecht.

If you have trouble finding us, this time you can reach Imma at 0612001233, since I will be abroad.


Southern California FAI Workshop

13 Coscott 20 April 2014 08:55AM

This Saturday, April 26th, we will be holding a one day FAI workshop in southern California, modeled after MIRI's FAI workshops. We are a group of individuals who, aside from attending some past MIRI workshops, are in no way affiliated with the MIRI organization. More specifically, we are a subset of the existing Los Angeles Less Wrong meetup group that has decided to start working on FAI research together. 

The event will start at 10:00 AM, and the location will be:

USC Institute for Creative Technologies
12015 Waterfront Drive
Playa Vista, CA 90094-2536.

This first workshop will be open to anyone who would like to join us. If you are interested, please let us know in the comments or by private message. We plan to have more of these in the future, so if you are interested but unable to make this event, please also let us know. You are welcome to decide to join at the last minute. If you do, still comment here, so we can give you the necessary phone numbers.

Our hope is to produce results that will be helpful for MIRI, and so we are starting off by going through the MIRI workshop publications. If you will be joining us, it would be nice if you read the papers linked to here, here, here, here, and here before Saturday. Reading all of these papers is not necessary, but it would be nice if you take a look at one or two of them to get an idea of what we will be doing.

Experience in artificial intelligence will not be at all necessary, but experience in mathematics probably is. If you can follow the MIRI publications, you should be fine. Even if you are under-qualified, there is very little risk of holding anyone back or otherwise having a negative impact on the workshop. If you think you would enjoy the experience, go ahead and join us.

This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic. Rather, the focus will be on the abstract mathematical design of a system capable of having reflectively consistent goals, performing naturalistic induction, et cetera.

Food and refreshments will be provided for this event, courtesy of MIRI.

Economics majors and earnings: further exploration

3 JonahSinick 20 April 2014 03:15AM

In Earnings of economics majors: general considerations I presented data showing that economics majors make substantially more money (20%-50%+) than majors in other liberal arts. I gave five hypotheses, each of which could partially account for the wage gap. These are possible differences between the majors in:

  1. Human capital acquisition.
  2. Acquisition of a desire to make money.
  3. Pre-existing ability as measured by tests.
  4. Pre-existing desire to make money.
  5. Signaling.

I discussed a priori reasons for believing that they might be significant, and how one might go about testing the hypotheses and the extent to which they explain the wage gap.

Having examined the available data, I believe that, with the possible exception of #3, there's a huge amount of uncertainty, based on publicly available information, as to the roles of these factors in explaining the wage gap. In many cases there is data suggesting the presence of effects, but the data is not robust and the sizes of the effects are entirely unclear. Furthermore, the hypotheses are not exhaustive: other factors (such as those mentioned at the very end of this post) plausibly play a role, making it difficult to reason in the fashion "factors A, B and C play very small roles, therefore factor D must play a large role."

I was originally hoping that there would be a simple, clearcut case for or against majoring in economics increasing earnings (relative to other liberal arts), but resolving the question would seem to be a major research project. Still, I hope that this post can help students who are contemplating majoring in economics or another liberal art get a feel for the "lay of the land," and some of the points therein may be actionable for particular individuals. 

I'll address each hypothesis in turn.

This post is very long. If you're short on time or attention, consider scanning over the subtopic headings and reading the sections that look most interesting. As usual, I'd appreciate any relevant thoughts, particularly if you're a former economics major.

continue reading »

Regret, Hindsight Bias and First-Person Experience

8 Stabilizer 20 April 2014 02:10AM

Here is an experience that I often have: I'm walking down the street, perfectly content and all of a sudden some memory pops into my stream of consciousness. The memory triggers some past circumstance where I did not act completely admirably. Immediately following this, there is often regret. Regret of the form like: "I should've studied harder for that class", "I should've researched my options better before choosing my college", "I should've asked that girl out", "I shouldn't have been such an asshole to her" and so on. So this is regret which is of the kind: "Well, of course, I should've done X. But I did Y. And now here I am."

This is classic hindsight bias. Looking back into the past, it seems clear what my course of action should've been. But it wasn't at all that clear in the past.

So, I've come up with a technique to attenuate this kind of hindsight-bias driven regret.

First of all, tune in to your current experience. What is it like to be here, right here and right now, doing the things you're doing? Start zooming out: think about the future and what you're going to be doing tomorrow, next week, next month, next year, 5 years later. Is it at all clear what choices you should make? Sure, you have some hints: take care of your health, save money, maybe work harder at your job. But nothing very specific. Tune in to the difficulties of carrying out even definitely good things. You told yourself that you'd definitely go running today, but you didn't. In first-person mode, it is really hard to know what to do, to know how to do it, and to actually do it.

Now, think back to the person you were in the past, when you made the choices that you're regretting. Try to imagine the particular place and time when you made that choice. Try to feel into what it was like. Try to color in the details: the ambient lighting of the room, the clothes you and others were wearing, the sounds and the smells. Try to feel into what was going on in your mind. Usually it turns out that you were confused and pulled in many different directions and, all said and done, you had to make a choice and you made one.

Now realize that back then you were facing exactly the kinds of uncertainties and confusions you are feeling now. In the first-person view there are no certainties; there are only half-baked ideas, hunches, gut feelings, mish-mash theories floating in your head, fragments of things you read and heard in different places.

Now think back to the regrettable decision you made. Is it fair to hold that decision against yourself with such moral force?

Meetup : Washington DC: Singing

0 rocurley 19 April 2014 04:43PM

Discussion article for the meetup : Washington DC: Singing

WHEN: 20 April 2014 03:00:00PM (-0400)

WHERE: National Portrait Gallery, Washington, DC 20001, USA

We'll be meeting up to go singing!

Because singing is probably not a good idea inside the Portrait Gallery, we'll meet there and then head out somewhere (probably the Archives) after we've rendezvoused.


Mathematics and saving lives

2 NancyLebovitz 19 April 2014 01:32PM

A high school student with an interest in math asks whether he's obligated on utilitarian grounds to become a doctor.

The commenters pretty much say that he isn't, but now I'm wondering: if you go into reasonably pure math, which areas or specific problems would be most likely to contribute the most toward saving lives?

[LINK] U.S. Views of Technology and the Future

2 Gunnar_Zarncke 18 April 2014 09:22PM

I just found this on slashdot:

"U.S. Views of Technology and the Future - Science in the next 50 years" by the Pew Research Center

This report emerges from the Pew Research Center’s efforts to understand public attitudes about a variety of scientific and technological changes being discussed today. The time horizons of these technological advances span from today’s realities—for instance, the growing prevalence of drones—to more speculative matters such as the possibility of human control of the weather. 

This is interesting, especially in comparison to the recent posts on forecasting, which focused on expert forecasts.

What I found most notable was the public's opinion on their own use of future technology:

% who would do the following if possible...

50% ride in a driverless car

26% use brain implant to improve memory or mental capacity

20% eat meat grown in a lab

Don't they know Eutopia is Scary? I'd guess that if these technologies really become available and are reliable, only the elderly will be unable to overcome their preconceptions. And everybody will eat artificial meat if it is cheaper, healthier, and tastes the same (and the testers confirm this).

 

[link] Guide on How to Learn Programming

4 peter_hurford 18 April 2014 05:08PM

I've recently seen a lot of interest from people who are looking to learn programming. So I put together a quick guide with lots of help from other people: http://everydayutilitarian.com/essays/learn-code

Let me know (via comments here or email - peter@peterhurford.com) if you try this guide, so I can get feedback on how it goes for you.

Also, feel free to reach out to me with comments on how to improve the guide – I'm still relatively new to programming myself and have not yet implemented all these steps personally. I'd cross-post it here, but I want to keep the document up-to-date and it would be much easier to do that in just one place.

Weekly LW Meetups

0 FrankAdamek 18 April 2014 03:53PM

This meetup summary was posted to LW main on April 11th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

Bostrom versus Transcendence

10 Stuart_Armstrong 18 April 2014 08:31AM

How long will Alcor be around?

27 Froolow 17 April 2014 03:28PM

The Drake equation for cryonics is pretty simple: work out all the things that need to happen for cryonics to succeed one day, estimate the probability of each thing occurring independently, then multiply all those numbers together. Here’s one example of the breakdown from Robin Hanson. According to the 2013 LW survey, the average LW estimate of the probability that cryonics will be successful for someone frozen today is 22.8%, assuming no major global catastrophe. That seems startlingly high to me – I put the probability at least two orders of magnitude lower. I decided to unpick some of the assumptions behind that estimate, particularly focussing on assumptions which I could model.

Every breakdown includes a component for ‘the probability that the company you freeze with goes bankrupt’ for obvious reasons. In fact, the probabilities of bankruptcy (and global catastrophe) are particularly interesting terms because they are the only terms which are ‘time dependent’ in the usual Drake equation. What I mean by this is that if you know your body will be frozen intact forever, then it doesn’t matter to you when effective unfreezing technology is developed (except to the extent you might have a preference to live in a particular time period). By contrast, if you know safe unfreezing techniques will definitely be developed one day, it matters very much to you that it occurs sooner rather than later, because if you unfreeze before the development of these techniques then they are totally wasted on you.

The probability of bankruptcy is also very interesting because – I naively assumed last week – we must have excellent historical data on the probability of bankruptcy given the size, age and market penetration of a given company. From this – I foolishly reasoned – we must be able to calculate the actual probability of the ‘bankruptcy’ component in the Cryo-Drake equation and slightly update our beliefs.

I began by searching for the expected lifespan of an average company and got two estimates which I thought would be a useful upper- and lower-bound. Startup companies have an average lifespan of four years. S&P 500 companies have an average lifespan of fifteen years. My logic here was that startups must be the most volatile kind of company, S&P 500 must be the least volatile and cryonics firms must be somewhere in the middle. Since the two sources only report the average lifespan, I modelled the average as a half-life. The results really surprised me; take a look at the following graph:

(http://imgur.com/CPoBN9u.jpg)

Even assuming cryonics firms are as well managed as S&P 500 companies, a 22.8% chance of success depends on every single other factor in the Drake equation being absolutely certain AND unfreezing technology being developed in 37 years.
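
To make the half-life model concrete, here is a minimal Python sketch. It assumes company death is a memoryless process, so the probability of surviving t years with a half-life of h years is 0.5^(t/h); the half-lives below are illustrative stand-ins for the startup and S&P 500 figures, not exact fits to the sources.

    # Survival probability under a memoryless half-life model of company death.
    def survival_probability(years, half_life):
        return 0.5 ** (years / half_life)

    # Illustrative half-lives loosely bracketing startups and S&P 500 companies.
    for half_life in (4.0, 15.0):
        for years in (10, 40, 100):
            p = survival_probability(years, half_life)
            print(f"half-life {half_life}y, survive {years}y: p = {p:.2e}")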

But I noticed I was confused; Alcor has been around forty-ish years. Assuming it started life as a small company, the chance of that happening was one in ten thousand. That both Alcor AND The Cryonics Institute have been successfully freezing people for forty years seems literally beyond belief. I formed some possible hypotheses to explain this:

  1. Many cryo firms have been set up, and I only know about the successes (a kind of anthropic argument)
  2. Cryonics firms are unusually well-managed
  3. The data from one or both of my sources was wrong
  4. Modelling an average life expectancy as a half-life was wrong
  5. Some extremely unlikely event that is still more likely than the one in a billion chance my model predicts – for example the BBC article is an April Fool’s joke that I don’t understand.

I’m pretty sure I can rule out 1; if many cryo firms were set up I’d expect to see four lasting twenty years and eight lasting ten years, but in fact we see one lasting about five years and two lasting indefinitely. We can also probably rule out 2; if cryo firms were demonstrably better managed than S&P 500 companies, the CEO of Alcor could go and run Microsoft and use the pay differential to support cryo research (if he was feeling altruistic). Since I can’t do anything about 5, I decided to focus my analysis on 3 and 4. In fact, I think 3 and 4 are both correct explanations; my source for the S&P 500 companies counted dropping out of the S&P 500 as a company ‘death’, when in fact you might drop out because you got taken over, because your industry became less important (but kept existing) or because other companies overtook you – your company can’t do anything about Facebook or Apple displacing it from the S&P 500, but Facebook and Apple don’t make it any more likely to fail. Additionally, modelling as a half-life must have been flawed; a company that has survived one hundred years and a company that has survived one year are not equally likely to collapse!

Consequently I searched Google Scholar for a proper academic source. I found one, but I should introduce the following caveats:

  1. It is UK data, so may not be comparable to the US (my understanding is that the US is a lot more forgiving of a business going bankrupt, so the UK businesses may liquidate slightly less frequently).
  2. It uses data from 1980. As well as being old data, there are specific reasons to believe that this time period overestimates the true survival of companies. For example, the mid-1980’s was an economic boom in the UK and 1980-1985 misses both major UK financial crashes of modern times (Black Wednesday and the Sub-Prime Crash). If the BBC is to be believed, the trend has been for companies to go bankrupt more and more frequently since the 1920’s.

I found it really shocking that this question was not better studied. Anyway, the key table that informed my model was this one, which unfortunately seems to break the website when I try to embed it. The source is Dunne, Paul, and Alan Hughes. "Age, size, growth and survival: UK companies in the 1980s." The Journal of Industrial Economics (1994): 115-140.

You see on the left the size of the company in 1980 (£1 in 1980 is worth about £2.5 now). On the top is the size of the company in 1985, with additional columns for ‘taken over’, ‘bankrupt’ or ‘other’. Even though a takeover might signal the end of a particular product line within a company, I have only counted bankruptcies as representing a threat to a frozen body; it is unlikely Alcor will be bought out by anyone unless they have an interest in cryonics.

The model is a Discrete Time Markov Chain analysis in five-year increments. What this means is that I start my hypothetical cryonics company at <£1m and then allow it to either grow or go bankrupt at the rate indicated in the article. After the first period I look at the new size of the company and allow it to grow, shrink or go bankrupt in accordance with the new probabilities. The only slightly confusing decision was what to do with takeovers. In the end I decided to ignore takeovers completely, and redistribute the probability mass they represented to all other survival scenarios.
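
For readers who want to see the mechanics, here is a minimal sketch of a discrete-time Markov chain of this kind. The size buckets and transition probabilities below are made-up placeholders, not the Dunne and Hughes figures; the point is only to show how the five-year steps compound.

    import numpy as np

    # States: three size buckets plus an absorbing 'bankrupt' state.
    # Transition probabilities per five-year step are illustrative placeholders.
    P = np.array([
        [0.55, 0.25, 0.05, 0.15],   # small: stay, grow, grow a lot, fail
        [0.10, 0.60, 0.20, 0.10],   # medium
        [0.02, 0.13, 0.80, 0.05],   # large
        [0.00, 0.00, 0.00, 1.00],   # bankrupt is absorbing
    ])

    dist = np.array([1.0, 0.0, 0.0, 0.0])   # start as a small company
    for step in range(1, 9):                # eight 5-year steps = 40 years
        dist = dist @ P
        print(f"year {5 * step}: P(still alive) = {1 - dist[3]:.3f}")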

The results are astonishingly different:

(http://imgur.com/CkQirYD.jpg)

Now your body can remain preserved for 415 years and still have a 22.8% chance of revival (assuming all other probabilities are certain). Perhaps more usefully, if you estimate the year you expect revival to occur you can read across the x axis to find the probability that your cryo company will still exist by then. For example in the OvercomingBias link above, Hanson estimates that this will occur in 2090, meaning he should probably assign something like a 0.65 chance to the probability his cryo company is still around.

Remember you don’t actually need to estimate the year YOUR revival will occur, but only the year in which the first successful revival proves that cryogenically frozen bodies are ‘alive’ in a meaningful sense and therefore receive protection under the law in case your company goes bankrupt. In fact, you could instead estimate the year Congress passes a ‘right to not-death’ law which would protect your body in the event of a bankruptcy even before routine unfreezing, or the year when brain-state scanning becomes advanced enough that it doesn’t matter what happens to your meatspace body because a copy of your brain exists on the internet.

My conclusion is that the survival of your cryonics firm is a lot more likely than the average person in the street thinks, but probably a lot less likely than you think if you are strongly into cryonics. This is probably not news to you; most of you will be aware of over-optimism bias, and have tried to correct for it. Hopefully these concrete numbers will be useful next time you consider the Cryo-Drake equation and the net present value of investing in cryonics.

Meetup : Urbana-Champaign: Planning and Re-planning

1 Manfred 17 April 2014 05:56AM

Discussion article for the meetup : Urbana-Champaign: Planning and Re-planning

WHEN: 20 April 2014 12:00:00PM (-0500)

WHERE: 412 W. Elm St, Urbana, IL

When things get complicated enough, you have to plan them in advance or they fail. You need blueprints and logistics before you can build a skyscraper. On a personal level, good plans improve our chances of success at anything we can make a plan for.

One trouble with plans is that once you've made them they're sticky. What kind of life to lead, what to study, when to marry - we inherit plans about these things from the past and we don't always rethink them when appropriate.

Discussion article for the meetup : Urbana-Champaign: Planning and Re-planning

The usefulness of forecasts and the rationality of forecasters

0 VipulNaik 17 April 2014 03:49AM

Suppose we have a bunch of (forecasted value, actual value) pairs for a given quantity (with different measured actual values at different times). An example would be GDP growth rate measures in different years. For each year, we have a forecasted value and an actual value. So we have a bunch of (forecasted value, actual value) pairs, one for each year. How do we judge the usefulness of the forecasts at predicting the value? Here, we discuss a few related measures: accuracy, bias, and dependency (specifically, correlation).

Accuracy

The accuracy of a forecast refers to how far, on average, the forecast is from the actual value. Two typical ways of measuring the accuracy are:

  • Compute the mean absolute error: Take the arithmetic mean (average) of the absolute values of the errors for each forecast.
  • Compute the root mean square error: Take the square root of the arithmetic mean of the squares of the errors.

The size of the error, measured in either of these ways, is a rough estimate of how accurate the forecasts are in general (the larger the error, the less accurate the forecast). Note that an error of zero represents a perfectly accurate forecast.
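
As a concrete illustration, here is a short Python sketch computing both measures; the (forecast, actual) pairs are made-up numbers, not data from any real forecasting exercise.

    import numpy as np

    # Hypothetical forecasted and actual GDP growth rates, one pair per year.
    forecasts = np.array([2.1, 3.0, 1.8, 2.5, 2.9, 1.5])
    actuals   = np.array([1.9, 3.4, 1.2, 2.6, 2.4, 1.1])

    errors = forecasts - actuals
    mae  = np.mean(np.abs(errors))        # mean absolute error
    rmse = np.sqrt(np.mean(errors ** 2))  # root mean square error
    print(mae, rmse)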

Note that this is a global measure of accuracy. But it may be the case that forecasts are more accurate when the actual values are at a particular level, and less accurate when they are at a different level. There are mathematical models to test for this.

Bias

When we ask whether the forecast is biased, we're interested in knowing whether the size of the error in the positive direction systematically exceeds the size of the error in the negative direction. One method for estimating this is to compute the mean signed difference (i.e., take the arithmetic mean of errors for individual forecasts without taking the absolute value). If this comes out as zero, then the forecasting is unbiased. If it comes out as positive, the forecasts are biased in the positive direction, whereas if it comes out as negative, the forecasts are biased in the negative direction.

The above is a start, but it's not good enough. In particular, the error could come out nonzero simply because of random fluctuations rather than bias. We'd need to complicate the model somewhat in order to make probabilistic or quantitative assessments to get a sense of whether or how the forecasts are really biased.
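
As a sketch of how such an assessment might go, one can compute the mean signed difference and then run a one-sample t-test of the errors against zero; the t-test is one simple choice among many, and the numbers below are the same made-up pairs as above.

    import numpy as np
    from scipy import stats

    forecasts = np.array([2.1, 3.0, 1.8, 2.5, 2.9, 1.5])
    actuals   = np.array([1.9, 3.4, 1.2, 2.6, 2.4, 1.1])

    signed_errors = forecasts - actuals
    bias = np.mean(signed_errors)   # mean signed difference

    # Does the mean error differ from zero by more than random
    # fluctuation would plausibly produce?
    t_stat, p_value = stats.ttest_1samp(signed_errors, 0.0)
    print(bias, t_stat, p_value)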

Again, the above is a global measure of bias. But it may be the case that there are different biases for different values. There are mathematical models to test for this.

Are accuracy and bias related? Yes, in the obvious sense that the degree of inaccuracy gives an upper bound on the degree of bias. In particular, for instance, the mean absolute error gives an upper bound on the mean signed difference. So a perfectly accurate forecast is also unbiased. However, we can have fairly inaccurate forecasts that are unbiased. For instance, a forecast that always guesses the mean of the distribution of actual values will be inaccurate but have zero bias.

The above discusses additive bias. There may also be multiplicative bias. For instance, the forecasted value may be reliably half the actual value. In this case, doubling the forecasted value allows us to obtain the actual value. There could also be forms of bias that are not captured in either way.

Dependency and correlation

Ideally, what we want to know is not so much whether the forecasts themselves are accurate or biased, but whether we can use them to generate new forecasts that are good. So what we want to know is: once we correct for bias (of all sorts, not just additive or multiplicative), how accurate is the new forecast? Another way of framing this is: what exactly is the nature of dependency between the variable representing the forecasted value and the variable representing the actual value?

Testing for the nature of the dependency between variables is a hard problem, particularly if we don't have a prior hypothesis for the nature of the dependency. If we do have a hypothesis, and the relation is linear in unknown parameters, we can use the method of ordinary least squares regression (or another suitable regression) to find the best fit. And we can measure the goodness of that fit through various statistical indicators.

In the case of linear regression (i.e., trying to fit using a linear functional dependency between the variables), the square of the correlation between the variables is the R^2 of the regression, and offers a decent measure of how close the variables are to being linearly related. A correlation of 1 implies an R^2 of 1, and implies that the variables are perfectly correlated, or equivalently, that a linear function with positive slope is a perfect fit. A correlation of -1 also implies an R^2 of 1, and means that a linear function with negative slope is a perfect fit. A correlation of zero means that the variables are completely uncorrelated.

Note also that linear regression covers both additive and multiplicative bias (and combinations thereof) and is often good enough to capture the most basic dependencies.
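
A minimal sketch of such a regression, again on made-up numbers, using scipy:

    from scipy import stats

    forecasts = [2.1, 3.0, 1.8, 2.5, 2.9, 1.5]
    actuals   = [1.9, 3.4, 1.2, 2.6, 2.4, 1.1]

    # Regress actual values on forecasted values.
    fit = stats.linregress(forecasts, actuals)
    print(fit.slope, fit.intercept)
    print(fit.rvalue ** 2)   # R^2, the square of the correlation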

If the value of R^2 for the linear regression is zero, that means the variables are uncorrelated. Although independent implies uncorrelated, uncorrelated does not imply independent, because there may be other nonlinear dependencies that miraculously give zero correlation. In fact, uncorrelated does not imply independent even if the variables are both normally distributed. As a practical matter, a correlation of zero is often taken as strong evidence that neither variable tells us much about the other. This is because even if the relationship isn't linear, the existence of some relationship makes a nonzero correlation more plausible than an exact zero correlation. For instance, if the variables are positively related (higher forecasted values predict higher actual values) we expect a positive correlation and a positive R^2. If the variables are negatively related (higher forecasted values predict lower actual values) we expect a negative correlation, but still a positive R^2.

For the trigonometrically inclined: The Pearson correlation coefficient, simply called the correlation here, measures the cosine of the angle between a vector based on the forecasted values and a vector based on the actual values. The vector based on the forecasted values is obtained by starting with the vector of the forecasted values and subtracting from each coordinate the mean forecasted value. Similarly, the vector based on the actual values is obtained by starting with the vector of the actual values and subtracting from each coordinate the mean actual value. The R^2 value is the square of the correlation, and measures the proportion of variance in one variable that is explained by the other (this is sometimes referred to as the coefficient of determination). 1 - R^2 represents the square of the sine of the angle between the vectors, and represents how alienated the vectors are from each other. A correlation of 1 means the vectors are collinear and point in the same direction, a positive correlation less than 1 means they form an acute angle, a zero correlation means they are at right angles, a negative correlation greater than -1 means they form an obtuse angle, and a correlation of -1 means the vectors are collinear and point in opposite directions.

Usefulness versus rationality

The simplest situation is where the forecasts are completely accurate. That's perfect. We don't need to worry about doing better.

In the case that the forecasts are not accurate, and if we have had the luxury of crunching the numbers and figuring out the nature of dependency between the forecasted and actual values, we'd want a situation where the actual value can be reliably predicted from the forecasted value, i.e., the actual value is a (known) function of the forecasted value. A simple case of this is where the actual value and forecasted value have a correlation of 1. This means that the actual value is a known linear function of the forecasted value. (UPDATE: This process of using a known linear function to correct for systematic additive and multiplicative bias is known as Theil's correction). So the forecasted value itself is not good, but it allows us to come up with a good forecast.
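
A sketch of this correction step, assuming we have a set of historical (forecast, actual) pairs (made-up numbers again) on which to fit the line:

    from scipy import stats

    past_forecasts = [2.1, 3.0, 1.8, 2.5, 2.9, 1.5]
    past_actuals   = [1.9, 3.4, 1.2, 2.6, 2.4, 1.1]

    fit = stats.linregress(past_forecasts, past_actuals)

    # Correct a new forecast for the estimated additive and
    # multiplicative bias. If the forecasts were already unimprovable,
    # slope would be ~1 and intercept ~0, and this would change nothing.
    new_forecast = 2.0
    corrected = fit.intercept + fit.slope * new_forecast
    print(corrected)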

What would it mean for a forecast to be unimprovable? Essentially, it means that the best value we can forecast based on the forecasted value is the forecasted value. Wait, what? What we mean is that the forecasters aren't leaving any money on the table: if they could improve the forecast simply by correcting for a known bias, they have already done so. Note that a forecast being unimprovable does not say anything directly about the R2 value. Rather, the unimprovability suggests that the best functional fit between the forecasted and the actual value would be the identity function (actual value = forecasted value). For the linear regression case, it suggests that the slope for the linear regression is 1 and the intercept is 0. Or at any rate, that they are close enough. Note that a forecast that's completely useless is unimprovable.

The following captures the logic (note that the two cases just describe the extremes, rather than the logical space of all possibilities).

Case 1: The forecast, once improved upon, is perfect.

  • If the forecast cannot be improved upon: the forecasted value equals the actual value.
  • If the forecast can be improved upon: the forecasted value predicts the actual value perfectly, but is not itself perfect. For instance, they could have a correlation of 1, in which case the prediction would be via a linear function.

Case 2: The forecast, even after improvement, is useless at the margin (i.e., it does not give us information we didn't already have from knowledge of the existing distribution of actual values).

  • If the forecast cannot be improved upon: the forecast just involves perfectly guessing the mean of the distribution of actual values (assuming that the distribution is known in advance; if it's not, then things become even more murky).
  • If the forecast can be improved upon: the actual value is independent of the forecast, and the forecast does not involve simply guessing the mean.

Note that if forecasters are rational, then we should be in the "cannot be improved upon" case, and therefore somewhere between the extreme where the forecast is already perfect and the extreme where the forecast just involves guessing the mean of the distribution (assuming that the distribution is known in advance).

So there are two real and somewhat distinct questions about the value of forecasts:

  • (The question whose extreme answers give the two main cases above): How useful are the forecasts, in the sense that, once we extract all the information contained in them by correcting for bias and applying the appropriate functional form, how accurate are the new forecasts?
  • (The question whose answers give the sub-cases): How rational are the forecasters, in the sense of how close are their forecasts to the most useful forecasts that can be extracted from those forecasts? (Note that even if the forecasts cannot be improved upon, that doesn't mean the forecasts are rational in the broader sense of making the best guess in terms of all available information, but it is in any case consistent with rationality in this broader sense).

Background reading

For more background, see the Wikipedia pages on forecast bias and bias of an estimator and the content linked therein.

LINK-Cryonics Institute documentary

0 polymathwannabe 16 April 2014 10:44PM

"WE WILL LIVE AGAIN looks inside the unusual and extraordinary operations of the Cryonics Institute. The film follows Ben Best and Andy Zawacki, the caretakers of 99 deceased human bodies stored at below freezing temperatures in cryopreservation. The Institute and Cryonics Movement were founded by Robert Ettinger who, in his nineties and long retired from running the facility, still self-publishes books on cryonics, awaiting the end of his life and eagerly anticipating the next."

http://www.iht.com/2014/04/15/we-will-live-again/

Meetup : Ugh Fields

1 evand 16 April 2014 04:32PM

Discussion article for the meetup : Ugh Fields

WHEN: 17 April 2014 07:00:00PM (-0400)

WHERE: 2411 N Roxboro St 27704

We'll be discussing Ugh Fields: what they are, how they keep you from accomplishing stuff, and how to recognize and reduce them. As always, RSVPs are appreciated but not required. We encourage you to show up around 7, and we'll start on-topic content at 7:30. If you're feeling energetic about it, there's a relevant article. Afterwards, we will probably meander over to Fullsteam and be sociable.

Discussion article for the meetup : Ugh Fields

Stories for exponential growth

1 VipulNaik 16 April 2014 03:15PM

Disclaimer: This is a collection of some simple stories for exponential growth. I've tried to list the main ones, but I might well have missed some, and I welcome feedback.

The topic of whether and why growth trends are exponential has been discussed on LessWrong before. For instance, see the previous LessWrong posts Why are certain trends so precisely exponential? and Mathematical simplicity bias and exponential functions. The purpose of this post is to explore some general theoretical reasons for expecting exponential growth, and the assumptions that these models rely on. I'll look at economic growth, population dynamics, and technological growth.

TL;DR

  1. Exponential growth (or decay) arises from a situation where the change in level (or growth rate) is proportional to the level. This can be modeled by either a continuous or a discrete differential equation.
  2. Feedback based on proportionality is usually part of the story, but could occur directly for the measured variable or in a hidden variable that affects the measured variable.
  3. In a simplified balanced economic growth model, growth is exponential because the addition to capital stock in a given year is proportional to output in that year, depreciation rate is constant, and output next year is proportional to capital stock this year.
  4. In a simple population dynamics model, growth is exponential under the assumption that the average number of kids per person stays constant.
  5. An alternative story of exponential growth is that performance is determined by multiplying many quantities, and we can work to make proportional improvements in the quantities one after the other. This can explain roughly exponential growth but not close-to-precise exponential growth.
  6. Stories of intra-industry or inter-industry coordination can explain a more regular exponential growth pattern than one might otherwise expect.

#1: Exponential arises from change in level (or growth rate) being proportional to the level

Brief mathematical introduction for people who have a basic knowledge of calculus. Suppose we're trying to understand how a quantity x (this could be national GDP of a country, or the price of 1 GB of NAND flash, or any other indicator) changes as a function of time t. Exponential growth means that we can write:

x = C · a^t

where C > 0, a > 1 (exponential decay would mean a < 1). More conventionally, it is written in the form:

x = C · e^(kt)

where C > 0, k > 0 (exponential decay would mean k < 0). The two forms are related as follows: a = e^k.

The key feature of the exponential function is that for any t, the quotient x(t+1)/x(t) is a constant independent of t (the constant in question being a). In other words, the proportional gain is the same over all time periods.

Exponential growth arises as the solution to the (continuous, ordinary, first-order first-degree) differential equation:

dx/dt = kx

This says that the instantaneous rate of change is proportional to the current value.

We can also obtain exponential growth as the solution to the discrete differential equation:

Δx = (a - 1)x

where Δx denotes the difference x(t + 1) - x(t) (the discrete derivative of x with respect to t). What this says is that the discrete change in x is proportional to x.

To summarize, exponential growth arises as a solution to both continuous and discrete differential equations where the rate of change is proportional to the current level. The mathematical calculations work somewhat differently, but otherwise, the continuous and discrete situations are qualitatively similar for exponential growth.
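
A tiny numerical check that the discrete recurrence and the closed form agree, with arbitrary illustrative values of C and a:

    import math

    # Discrete growth: x(t+1) = a * x(t), starting from C at t = 0.
    C, a, T = 100.0, 1.05, 10
    x = C
    for _ in range(T):
        x *= a

    # The closed form x = C * a^t gives the same value at t = T.
    print(x, C * a ** T)
    print(math.isclose(x, C * a ** T))   # True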

#2: Feedback based on proportionality is usually part of the story, but the phenomenon could occur in a visible or hidden process

The simplest story for why a particular indicator grows exponentially is that the growth rate is determined directly in proportion with the value at a given point in time. One way of framing this is that there is feedback from the level of the indicator to the rate of change of the indicator. To get a good story for exponential growth, therefore, we need a good story for why the feedback should be in the form of direct proportionality, rather than some other functional form.

However, we can imagine a subtly different story of exponential growth. Namely, the indicator itself is not the root of the phenomenon at all, but simply a reflection of other hidden variables, and the phenomenon of exponential growth is happening at the level of these hidden variables. For instance, a visible indicator x might be determined as 0.82 y^2 for a hidden variable y, and it might be that the variable y is the one that experiences feedback from its level to its rate of change. I believe this is conceptually similar to (though not mathematically the same as) hidden Markov models.

One LessWrong comment offered this sort of explanation: perhaps the near-perfect exponential growth of US GDP, and its return to an earlier trend line after deviation during some years, suggests that population growth is the hidden variable that drives long-run trends in GDP. The question of whether economic growth should revert to an earlier trend line after a shock is a core question of macroeconomics with a huge but inconclusive literature; see Arnold Kling's blog post titled Trend vs. Random Walk.

#3: A bare-bones model of balanced economic growth (balanced growth version of Harrod-Domar model)

Let's begin with a very basic model of economic growth. This is not to be applied directly in the understanding of real-world economies. Rather, it's meant to give us a crude idea of where exponentiality comes from.

In this model, an economy produces a certain output Y in a given year (Y changes from year to year). The economy consumes part of the output, and saves the rest of it to add to its capital stock K. Suppose the following hold:

  1. The fraction of output produced that is converted to additional capital stock is constant from year to year (i.e., the propensity to save is constant).
  2. The (fractional) rate of depreciation of capital stock (i.e., the fraction of capital stock that is lost every year due to depreciation) is constant.
  3. The amount of output produced in a given year is proportional to the capital stock at the end of the previous year, with the constant of proportionality not changing across years.

We have two variables here, output and capital stock, linked by proportionality relationships between them and between their year-on-year changes. When we work out the algebra, we'll discover that both variables grow exponentially in tandem.
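
Here's a minimal simulation of this balanced growth story, with made-up parameter values (savings rate s, depreciation rate d, output-to-capital ratio v); it shows the year-on-year growth rate of output settling immediately at the constant s·v - d:

    # Balanced growth sketch with illustrative parameters.
    s, d, v = 0.3, 0.05, 0.4   # savings rate, depreciation rate, output/capital
    K = 100.0
    outputs = []
    for year in range(6):
        Y = v * K                  # output proportional to capital stock
        outputs.append(Y)
        K = K * (1 - d) + s * Y    # depreciate, then add saved output

    growth_rates = [outputs[i + 1] / outputs[i] - 1 for i in range(5)]
    print(growth_rates)            # each equals s*v - d = 0.07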

The above describes a balanced growth model, where the shape and nature of the economy do not change. It just keeps growing in size, with all the quantities growing together in the same proportion. Economies may initially be far from a desirable steady state, or may be stuck in a low-savings steady state. Also note that if the rate of depreciation of capital stock exceeds the rate at which new capital stock is added, the economy will decay rather than grow exponentially.

If you're interested in actual models of economic growth used in growth theory and development economics, read up on the Harrod-Domar model and its variants such as the Ramsey–Cass–Koopmans model, AK model, and Solow-Swan model. For questions surrounding asymptotic convergence, check out the Inada conditions.

#4: Population dynamics

The use of exponential models for population growth is justified under the assumption that the number of children per woman who survive to adulthood remains constant. Assume a 1:1 sex ratio, and assume that women have an average of 3 kids who survive to adulthood. In that case, with every generation, the population multiplies by a factor of 3/2 = 1.5. After n generations, the population would be (1.5)^n times the original population. This is of course a grossly oversimplified model, but it covers the rationale for exponential growth. In practice, the number of surviving children per woman varies over time due to a combination of fertility changes and changes in age-specific mortality rates.

The dynamics are even simpler to understand for bacteria in a controlled environment such as a petri dish. Bacteria are unicellular organisms and they reproduce by binary fission: a given bacterium splits into two new bacteria. As long as there are ample resources, a bacterium may split into two after an average interval of 1 hour. In that case, we expect the number of bacteria in the petri dish to double every hour.
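
A quick sketch of both toy models, with arbitrary starting populations:

    # Human population: 1:1 sex ratio, each woman averages `kids_per_woman`
    # surviving children, so the per-person factor per generation is kids / 2.
    def population(initial, kids_per_woman, generations):
        return initial * (kids_per_woman / 2) ** generations

    print(population(1000, 3, 5))   # 1000 * 1.5^5 = 7593.75

    # Bacteria doubling every hour in a resource-rich petri dish.
    print(100 * 2 ** 6)             # 100 bacteria after 6 hours -> 6400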

#5: A large number of factors that multiply together to determine the quantity

Here is a somewhat different story for exponential growth that a number of people have proposed independently. In a recent comment, Ben Kuhn wrote:

One story for exponential growth that I don't see you address (though I didn't read the whole post, so forgive me if I'm wrong) is the possibility of multiplicative costs. For example, perhaps genetic sequencing would be a good case study? There seem to be a lot of multiplicative factors there: amount of coverage, time to get one round of coverage, amount of DNA you need to get one round of coverage, ease of extracting/preparing DNA, error probability... With enough such multiplicative factors, you'll get exponential growth in megabases per dollar by applying the same amount of improvement to each factor sequentially (whereas if the factors were additive you'd get linear improvement).

Note that in order for this growth to come out as close to exponential, it's important that the marginal difficulty, or time, or cost, of addressing the factors is about the same. For instance, if the overall indicator we are interested in is a product pqrs, it may be that in a given year, we can zero in on one of the four factors and reduce that by 5%, but it doesn't matter which one.

A slightly more complicated story is that the choice of what factor we can work on at a given stage is constrained, but the best marginal choices at all stages are roughly as good in proportional terms. For instance, maybe, for our product pqrs, the best way to start is by reducing p by 5%. But once we are done with that, next year the best option is to reduce q by 5%. And then, once that's done, the lowest-hanging fruit is to reduce r by 5%. This differs subtly from the previous one in that we're forced from outside in the decision of what factor to work on at the current margin, but the proportional rate of progress still stays constant.
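
Here's a small sketch of the multiplicative story: cost is a product of four made-up factors, and each year we shave 5% off whichever factor is next in line. The product falls by the same 5% regardless of which factor we hit, so the decline is exponential:

    # Cost as a product of factors p, q, r, s (illustrative values).
    factors = [10.0, 4.0, 2.5, 8.0]

    def total_cost(fs):
        cost = 1.0
        for f in fs:
            cost *= f
        return cost

    history = [total_cost(factors)]
    for year in range(12):
        factors[year % 4] *= 0.95   # improve the factors in rotation
        history.append(total_cost(factors))

    ratios = [history[i + 1] / history[i] for i in range(12)]
    print(ratios)                   # every ratio is 0.95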

However, in the real world, it's highly unlikely that the proportional gains quite stay constant. I mean, if we can reduce p by 5% in the first year and q by 5% in the second year, what really gets in the way of reducing both together? Is it just a matter of throwing more money at the problem?

By the way, one example of rapid progress that does seem to closely hew to the multiplicative model is the progress made on linear programming algorithms. Linear programming involves a fair number of algorithms within algorithms. For instance, solving certain types of systems of linear equations is a major subroutine invoked in the most time-critical component of linear programming.

My overall conclusion is that multiplicative stories are good for explaining why growth is very roughly close to exponential, but they are not strong enough by themselves to explain a very precise exponential growth trend. However, when combined with stories about regularization, they could explain a trend that a priori seems unexpectedly close to a precise exponential.

#6: The story of coordination and regularization

Some people have argued that the reason Moore's law (and similar computing paradigms) has held for sufficiently long periods of history is explicit industry roadmaps such as the International Technology Roadmap for Semiconductors. I believe that a roadmap cannot bootstrap the explanation for growth being exponential. If roadmaps could dictate reality so completely, why didn't the roadmap decide on even faster exponential growth, or perhaps superexponential growth? No, the reason for exponential growth must come from some more fundamental factors.

But explicit or implicit roadmaps and industry expectations can explain why progress was so close to being precisely exponential. I offer one version of the story.

In a world where just one company is involved with research, manufacturing, and selling to the public, the company would try to invest according to what they expected consumer demand to be (see my earlier post for more on this). Since there aren't strong reasons to believe that consumer needs grow exponentially, nor are there good reasons to believe that progress at resolving successive barriers is close to precisely exponential, an exponential growth story here would be surprising.

Suppose now that the research and manufacturing processes are handled by different types of companies. Let's also suppose that there are many different companies competing at the research level and many different companies competing at the manufacturing level. The manufacturing companies need to make plans for how much to produce and how much raw material to keep handy for the next year, and these plans require having an idea of how far research will progress.

Since no individual manufacturer controls any individual researcher, and since the progress of individual research companies can be erratic, the best bet for manufacturers is to make plans based on estimates of how far researchers are expected to go, rather than on any individual research company's promise. And a reasonable way to make such an estimate is to have an industry-wide roadmap that serves a coordinating purpose. Manufacturers have an incentive to follow the roadmap, because deviating in either direction might result in them having factories that don't produce the right sort of stuff, or have too much or too little capacity. The research companies also have incentives to meet the targets, and in particular, to neither overshoot nor undershoot too much. The reasons for not undershooting are obvious: they don't want to be left behind. But why not overshoot? Since the manufacturers are basing their plans on the technology they expect,  a research company overshooting might result in technologies that aren't ready for implementation, so the advantage is illusory. On the other hand, the costs of overshooting (in terms of additional expenditures on research) are all too real.

Thus, the benefits of coordination between different parts of the "supply chain" (in this case, the ideas and the physical manufacturing) lead to greater regularization of the growth trend than one would expect otherwise. If there are reasons to believe that growth is roughly exponential (the multiplicative story could be one such reason) then this could lead to it being far more precisely exponential.

The above explanation is highly speculative and I don't have strong confidence in it.

PS on algorithm improvement

  • If the time taken for an algorithm is described as a sum of products, then only the factors of the summands that dominate in the big-oh sense matter. For simplicity, let's assume that the time taken is a sum of products that are all of the same order as one another.
  • To improve the time complexity of an algorithm by a given constant of proportionality, where the time taken is a sum of products that are of the same order of magnitude, one strategy is to improve each summand by that constant of proportionality. Alternatively, we could improve some summands by a lot more, in which case we'd have to determine the overall improvement as the appropriate weighted average.
  • To improve a particular summand by a particular constant of proportionality, we may improve any one factor of that summand by that constant of proportionality. Or, we may improve all factors of that summand by constants that together multiply to the desired constant of proportionality.

Open Thread April 16 - April 22, 2014

4 Tenoke 16 April 2014 07:05AM

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Different time horizons for forecasting

1 VipulNaik 16 April 2014 03:30AM

Disclaimer: This post contains some very preliminary thoughts on a topic that I believe would be of interest to some people here. There are probably better expositions on the subject that I haven't been able to find. If you know of such expositions, I'd appreciate being pointed to them.

There are qualitative differences between the types of forecasting that are feasible, or most suitable, for different time horizons. In this post, I discuss some of the possibilities for such time horizons and the forecasts that can be made for those.

The present (today)

Predicting the present doesn't involve prediction so much as it involves measurement. But that doesn't mean it's a slam dunk: one still needs to make a lot of measurements to come up with precise and accurate quantities. One cannot simply count the entire population of a region in one stroke. Doing so requires planning and a detailed infrastructure. And in many cases, it's not possible to measure perfectly, so we measure in part and then use theory (such as sampling theory) to extrapolate from there.

The very near future (tomorrow)

The very near future differs from the present in that it cannot be measured directly, but measuring it is often no more complicated than measuring the present. In a discrete model, it's the next step beyond the present. An example of a tomorrow prediction is: "what restaurants will be open in the city of Chicago tomorrow?" For any restaurant to be open tomorrow, it is most likely either already operating today, or has applied to open tomorrow. In either case, a good stock-taking of the situation today would give a clear idea of what's in store for tomorrow. Another example is when people make projections about employment or GDP based on asking people about their estimated workforce sizes or production levels in the near future.

Predictions about the near future involve a combination of the following:

  • assuming persistence from the present
  • asking people for their intentions and estimates
  • identifying and adjusting for any major sources of difference between today and tomorrow. In the restaurant case, an example of a major source of difference would be if "tomorrow" happened to be a major festival where restaurants customarily closed.

Who forecasts the very near future? As it turns out, a lot of people. I gave examples of economic indicator estimates based on surveys of representative samples of the economy. Also, I believe (I don't have an inside view here) that industry associations and trade journals function this way: they get data from all their members on their production plans, then they pool together the data and publish comprehensive information so that the industry as a whole is well-informed about production plans, and can think a step ahead. (SEMI might be an example).

The near but not very near future, or a few steps down the line

For the future that's a little farther out than tomorrow, simply assuming persistence or asking people isn't good enough. Persistence doesn't work because even though each day is highly correlated to the next, the correlation weakens as we separate the days out more and more. Asking people for their intentions doesn't work because people themselves are reacting to each other. For inanimate systems, different components of the system interact with each other.

This is probably the time horizon where some sort of formal model or computer simulation works best. For instance, weather models for the next 5 days or so perform somewhat better than the fallback options of persistence and climatology, and in the 5-10 day range they perform somewhat but not a lot better than climatology. Beyond 10 days, climatology generally wins.

Similarly, this sort of modeling might work well for estimating GDP changes over two or three quarters, because the model can account for how the changes in one quarter (the very near future) will have ripple effects for another quarter, and then another.

The problem with such models is that they quickly lose coherence. Small variations in initial assumptions, to a level that we cannot hope to measure precisely, start having huge potential ripple effects. Model uncertainty also gets in the way. The range of possibilities is so large that we might as well get to more general long-term models.

What is the value of making such predictions? The case of weather prediction is obvious: predicting extreme weather events saves lives, and even making more mundane predictions can help people plan their outdoor events and travel and can help transportation services better manage their services. Similar predictions in the economic or business realm can also help.

The organizations who specialize in this sort of prediction tend to be the same as the ones predicting the very near future, probably because they have all the data already, and so it's easiest for them to run the relevant models.

The medium-term future

This is the part of the future where general domain-specific phenomena might be useful. In the case of weather, the medium-term future is general climatology: how warm are summers, and how cold are winters? When does a place get rain?

Computer simulations have decohered, and formal models that are sufficiently realistic in the short term get too complicated. So what do we use? General domain-specific phenomena, including information about equilibrating and balancing influences and positive and negative feedback mechanisms. Trend extrapolation, in the (rare?) cases that it's justified. Reality checks based on considerations of the sizes and growth potentials of different industries and markets.

The medium-term future is the time horizon where:

  • New companies can be started
  • City-level transportation systems can be built
  • Companies can make large-scale capital investments in new product lines and begin reaping the profits from them
  • Government policies, such as overhauls to health care legislation or migration policy, can be implemented and their initial effects be seen

My very crude sense is that this is the highest-leverage area for improvements in forecasting capabilities at the current margin. It's far out enough that major preparatory, preventative, and corrective steps can be taken. It's near enough that the results can actually be seen and can be used to incentivize current decision makers. It's far enough that direct simulation or intricate models don't stay coherent, but near enough that intuitions derived from present conditions, combined with general domain-specific knowledge, continue to be broadly valid.

The long-term future

The dividing line between the medium-term and long-term future is unclear. One possible way of distinguishing between the two is that the medium-term future is heavily grounded in timelines. It's specifically interested in asking what will happen in a particular interval of time, or in when a particular milestone will be achieved. With the long-term future, on the other hand, timelines are too fuzzy to even be useful. Rather, we are interested simply in filling in the details of what it might look like. A discussion of a world that's 3 degrees Celsius warmer, of space travel, of a post-singularity world, or of a world that is solar-powered might fit this "long-term" moniker. Robin Hanson's discussion of long-term growth and the multiple modes of such growth also fits this "long-term" category.

With the long-term future, simply painting futuristic visions, informed by a broad understanding of theory to separate the plausible from the implausible, might be a better bet than reasoning outward from the present moment in time or from the "climatology" of the world today. Indeed, as I noted in my discussion of Megamistakes, there may well be a negative correlation between having a clear vision of the future in that sense and being able to make good timed predictions for the medium term.

With the long term future, are there, or should there be, incentives to be accurate? No. Rather, the incentives may be in the direction of painting plausible (even if improbable) future scenarios with the dual goal of preparing for them or influencing the probability of achieving them. This means dampening the probability of the catastrophic scenarios (even if they're low-probability to begin with) and increasing the probability of, perhaps even directly working towards, the good scenarios. On the good scenario side, a futurist with a rosy vision of the future might write a science fiction or speculative science book that, a generation or two later, inspires an entrepreneur, scientist, or engineer to go build one of those highly futuristic items.

Nick Beckstead's research on the overwhelming importance of shaping the far future makes the relevant philosophical arguments.

I could probably split up the long term further. I'm not sure what some natural ways of performing such a split might be, and I also don't think it's relevant for my purposes, because most long-term forecasts are hard to evaluate anyway.

PS: My post on the logarithmic timeline was a result of similar thinking, but the two posts ended up being on different topics. This post is about the qualitative differences between time horizons; that post is about having a standard to compare forecasts for different time intervals in the future.

Group Rationality Diary, April 16-30

4 therufs 16 April 2014 03:04AM

This is the public group instrumental rationality diary for April 16-30.

It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.

Previous diary: April 1-15

Rationality diaries archive

Using the logarithmic timeline to understand the future

4 VipulNaik 16 April 2014 02:00AM

Disclaimer: I think what I've said is sufficiently obvious and basic that I really doubt that it's original, but I can't easily find any other source that lays out the points I made here. If you are aware of such a source, please let me know in the comments here and I'll credit it. I'd also be happy to be pointed to any relevant literature. It's also possible that I'm overlooking some obvious rejoinders that render my claims wrong or irrelevant; if so, I appreciate criticism on that front.

The logarithmic timeline is a timeline where time is presented on a logarithmic scale. Note that this differs from the idea of plotting logarithms of quantities with respect to time (a common practice when understanding exponential growth). In those plots, the vertical axis (the dependent variable plotted as a function of time) is plotted logarithmically. With the logarithmic timeline, the time axis itself is plotted logarithmically. If we're plotting quantities as a function of time, then using a logarithmic timeline has an effect that's in many ways the opposite of the effect of using a logarithmic scale for the quantity being plotted.

Wikipedia has a page on the logarithmic timeline (see also this detailed logarithmic timeline of the universe and this timeline of the far future), but I haven't seen the topic discussed much in the context of forecasting precision and accuracy, so I thought I'd do a post on it (I'll list some relevant literature I found at the end of the post).

TL;DR

Here's an overview of the sections of the post:

  1. What the logarithmic timeline means for understanding forecasts, and how it differs from the linear timeline.
  2. Crudely, the logarithmic timeline is useful because uncertainties accumulate over time, with the amount of uncertainty accumulated being roughly proportional to how far out we are in the future.
  3. Mathematically, the logarithmic timeline is suitable for processes whose time evolution is functionally described in terms of the product of time with a parameter whose precise value we are uncertain about.
  4. The logarithmic timeline can also be important for the asymptotic analysis of more general functional forms, if the dominant term behaves in the manner described in #3.
  5. I don't know if the logarithmic timeline is correctly calibrated for comparing the value of particular levels of forecasting precision and accuracy.
  6. The logarithmic timeline is related to hyperbolic discounting.
  7. If using the logarithmic timeline, point estimates for how far out in time something will happen should be averaged using geometric means rather than arithmetic means. Similar averaging would need to be done for interval estimates or probability distribution estimates for the time variable.
  8. I don't know if empirical evidence bears out the intuition that forecast accuracy should be time-independent if we use the logarithmic timeline.

#1: What the logarithmic timeline means for understanding forecasts

First off, we are using the origin point for the logarithmic timeline as the present. There are other logarithmic timelines that are better suited for other purposes. Using the origin of the universe is better suited for physics. But when it comes to understanding forecasts based on our best knowledge of what has transpired so far, the present is the natural origin.

Let's first understand the implicit assumption embedded in the use of a linear timeline for understanding forecasts. With a linear timeline, a statement of the form "technological milestone x will happen in year 2017" has the same prima facie precision as a statement of the form "technological milestone y will happen in year 2057" despite the fact that the year 2017 is (as of the time of this writing) just 3 years in the future and the year 2057 is 43 years in the future. But a little reflection shows that this doesn't jibe with intuition: making predictions to single years 43 years in advance is more impressive than making predictions to single years a mere 3 years in advance. Similarly, saying that a particular technological innovation will happen between 2031 and 2035 involves making a more precise statement than saying that a particular technological innovation will happen between 2015 and 2019.

We want a timeline where the equivalent in the far future of a near-future year is an interval comprising more than one year. But there are many such choices of monotone functions. I believe that the logarithmic one is best. In other words, I'm advocating for a situation where you find "between 5 and 10 years from now" as precise as "between 14 and 28 years from now", i.e., it is the quotient of the endpoint to the startpoint (the multiplicative distance) that matters rather than the difference between them (the additive distance).

But why use the logarithm rather than some other monotone transformation? I proffer some reasons below.

#2: A crude explanation for the logarithmic timeline

If you're mathematically sophisticated, skip ahead straight to the math.

Here's a crude explanation. Suppose you're trying to estimate the time in which the cost per base pair of DNA sequencing drops to 1/8 of its current level. You have an estimate that it takes between 4 and 11 years to halve. So the natural thing to do is say "to get to 1/8, it has to go through three halvings. In the best case, that's 3 times 4 equals 12 years. In the worst case, that's 3 times 11 equals 33 years. So it will happen between 12 and 33 years from now."

Note that the length of the interval for getting to 1/8 is 33 - 12 = 21, three times the length of the interval for getting to half (11 - 4 = 7). But the ratio of the upper to the lower endpoint is the same in both cases (namely 11/4).
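
The same arithmetic in a few lines of code, using the interval bounds from the example above:

    # One halving takes between lo and hi years; chain k halvings.
    lo, hi = 4, 11
    for halvings in (1, 2, 3):
        start, end = halvings * lo, halvings * hi
        print(halvings, (start, end), end - start, end / start)
    # The additive width grows (7, 14, 21 years) but the ratio
    # end/start stays fixed at 11/4 = 2.75.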

None of the numbers above are significant; I chose them for the benefit of people who prefer worked numerical examples before, or instead of, delving into mathematical formalism.

Note also that while this particular example had an exponential process, we don't need the process to be exponential per se for the broad dynamics here to apply. We do need some mathematical conditions, but they aren't tied to the process being exponential (in fact, exponential versus linear isn't a robust distinction for this context because either can be turned to the other via a monotone transformation). I turn to the mathematical formalism next.

#3: The math: logarithmic timeline is natural for a fairly general functional form of evolution with time

Consider a quantity y whose variation with time t (with t = 0 marking the current time) is given by the general functional form:

y = f(kt)

where f is a monotone increasing function, and k is a parameter that we have some uncertainty about. Let's say we know that a < k < b for some known positive constants a and b. We now need to answer a question of the form "at what time will y reach a specific value y1?"

Since f is monotone increasing, it is invertible, so solving for t we obtain:

t = f^(-1)(y1)/k

There's uncertainty about the value of k. So t ranges between the possibilities f^(-1)(y1)/b and f^(-1)(y1)/a. In particular, if we divide the endpoint of the interval by the starting point, we get b/a, a quantity independent of the value of y1. Thus, the use of the logarithmic timeline is a robust choice.
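
A quick numerical check of this robustness claim, taking f to be the exponential function (so f^(-1) is the natural logarithm) and assuming illustrative bounds a and b on the rate k:

    import math

    a, b = 0.10, 0.25        # assumed bounds on the unknown parameter k
    f_inverse = math.log     # inverse of f(u) = e^u

    for y1 in (2.0, 10.0, 1000.0):
        t_min = f_inverse(y1) / b   # fastest case: largest k
        t_max = f_inverse(y1) / a   # slowest case: smallest k
        print(y1, t_max / t_min)    # always b/a = 2.5, independent of y1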

What sort of functional forms match the above description? Many. For instance:

  • A linear functional form y = kt + c where k is a positive constant and c is a constant. Note that even though there are two parameters here, the value of c is determined by evaluating at t = 0 knowing the present value, and is not a source of uncertainty.
  • An exponential functional form y = C · e^(kt) where C and k are positive constants. Note that even though there are two parameters here, the value of C is determined by evaluating at t = 0 knowing the present value, and is not a source of uncertainty.
  • A quadratic functional form y = (kt)^2 + c where k is a positive constant and c is a constant. Note that even though there are two parameters here, the value of c is determined by evaluating at t = 0 knowing the present value, and is not a source of uncertainty.

Of course, not every functional form is of this type. For instance, consider the functional form y = t^k. Here, the parameter is in the exponent and does not interact multiplicatively with time. Therefore, the logarithmic timeline does not work.

#4: Asymptotic significance of the logarithmic timeline

A functional form may involve a sum of multiple functions, each involving a different parameter. It does not precisely fit the framework above. However, for sufficiently large t, one piece of the functional form dominates, and if that piece has the form described above, everything works well. For instance, consider a functional form with two parameters:

y = e^(kt) + mt + c

Both k and m are parameters with known ranges (c is determined from them and the value at 0). For sufficiently large t, however, this looks close enough to y = e^(kt) that we can use that as an approximation and find that the logarithmic timeline works well enough. Thus, the logarithmic timeline could be asymptotically significant.

#5: Does the logarithmic timeline correctly measure the benefits of a particular level of forecasting precision?

We've given above a reason why the logarithmic timeline correctly measures precision from the perspective of forecasting ability. But what about the perspective of the value of forecasting? Does knowing that something will happen between 5 years and 10 years from now deliver the same amount of value as knowing that something will happen between 14 years and 28 years from now? Unfortunately, I don't have a clear way of thinking about this question, but I can think of plausible intuitions supporting the logarithmic timeline choice: the farther out in the future we are talking, the less valuable it is to know exact dates, and ratios just happen to capture that lower level of value correctly.

#6: Relation with hyperbolic discounting

Gunnar_Zarncke points out in a comment that the logarithmic timeline is related to hyperbolic discounting, a particular form of discounting the future that bears close empirical relation to how people view the future. Hyperbolic discounting gives differential weight 1/t to a time instant t in the future. This relates to the logarithmic timeline because d(ln t)/dt = 1/t. This could potentially be used to provide a rational basis for hyperbolic discounting, vindicating the rationality of human intuition.
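
Here is a rough numerical sketch of the connection (the horizon values and step count are arbitrary assumptions): under the hyperbolic weight 1/t, every interval of the form [t0, c*t0] receives the same total weight, ln(c), which is exactly the equal-spacing property of the logarithmic timeline.

    import math

    def total_weight(t_start, t_end, steps=100000):
        # Midpoint-rule integration of the hyperbolic weight 1/t over [t_start, t_end].
        dt = (t_end - t_start) / steps
        return sum(1.0 / (t_start + (i + 0.5) * dt) for i in range(steps)) * dt

    c = 2.0  # each doubling of the horizon...
    for t0 in [1.0, 4.0, 16.0]:
        # ...gets the same weight; both printed values are ln(2) = 0.693...
        print(t0, total_weight(t0, c * t0), math.log(c))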

A follow-up comment by Gunnar_Zarncke links to an earlier LessWrong comment of his that in turn links to research showing that people's subjective perception of time fits the logarithmic timeline model.

#7: Point estimates and geometric means

Another implication of the logarithmic timeline is that if we have a collection of different point estimates for points in time when a specific milestone will be attained, the appropriate method of averaging is the geometric mean rather than the arithmetic mean. The geometric mean is the averaging notion that corresponds to taking the arithmetic mean on the logarithmic scale.

For instance, if three people are asked for a project estimate, and they give estimates of 2 years, 8 years, and 32 years, then the geometric mean estimate is the cube root of 2 × 8 × 32, which turns out to be 8. The arithmetic mean estimate is (2 + 8 + 32)/3 = 14.
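
In code, the same worked example looks like this; the geometric mean is just the arithmetic mean computed on the log scale and then exponentiated back:

    import math

    estimates = [2.0, 8.0, 32.0]  # the three point estimates, in years

    arithmetic_mean = sum(estimates) / len(estimates)
    # Geometric mean: average the logs, then undo the log.
    geometric_mean = math.exp(sum(math.log(t) for t in estimates) / len(estimates))

    print(arithmetic_mean)  # 14.0
    print(geometric_mean)   # 8.0 (up to floating-point rounding)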

Note that, thanks to the AM-GM inequality, the geometric mean is never larger than the arithmetic mean, and they're equal only when all the quantities being averaged are equal to each other to begin with. This suggests that, if people tend to be optimistic about how quickly things will happen when they use arithmetic means, they'll appear even more optimistic when using geometric means. On the other hand, the logarithmic timeline might also result in the optimism not seeming so bad.

Similar geometric averaging would need to be done for interval estimates or probability distribution estimates for the time variable.

#8: Empirically, is forecast accuracy time-independent once we switch to the logarithmic timeline?

I consider this the most important question. Namely, as an empirical matter, are people about as good at figuring out whether something will happen between 5 and 10 years from now as they are at figuring out whether something will happen between 14 and 28 years from now?

I do believe that empirical evidence confirms what intuition knows: on the linear timeline, forecast accuracy decays. Thus, for instance, when people are asked for the precise year when something will happen, estimates for events farther out in the future come with wider margins of error. When people are asked to estimate GDP per capita values, estimates far out in the future are worse than near-term estimates. But how much worse are the long-term forecasts? Is the worsening in keeping with the logarithmic timeline story?

Note that if the general functional form I described above correctly describes a process, then the logarithmic timeline story is validated theoretically, but the empirical question is still open.

Most research I'm aware of just looks at estimates within specified intervals, such as "what will the GDP growth rate be in a given year?" I suspect an analysis of the data from these experiments might allow us to judge the hypothesis of constant accuracy on the logarithmic timeline, but I don't think just looking at their abstracts would settle the hypothesis. But I'd welcome suggestions on possible tests based on already existing data.
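
To sketch what one such test might look like (the forecast/actual pairs below are made up purely for illustration, not real data): if accuracy is constant on the logarithmic timeline, then log(actual/forecast) should have roughly the same spread for near-term and long-term forecasts.

    import math
    import statistics

    # Hypothetical (forecast, actual) pairs, in years from the forecast date.
    near_term = [(5, 6), (4, 3), (6, 7), (5, 4)]
    long_term = [(20, 28), (16, 12), (24, 30), (20, 15)]

    def log_error_spread(pairs):
        # Error on the logarithmic timeline: log of the actual/forecast ratio.
        errors = [math.log(actual / forecast) for forecast, actual in pairs]
        return statistics.stdev(errors)

    print(log_error_spread(near_term))  # under the hypothesis, these two
    print(log_error_spread(long_term))  # spreads should be comparable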

Note also that if existing research uses arithmetic means to aggregate estimates for "how far out in the future" something will happen, we'll have to get back to the source data and use geometric means instead.

There may be research on the subject of evaluating forecast accuracy using a logarithmic timeline (most research on the logarithmic timeline relates to the history of the universe and evolution, rather than the future of humanity or technology). I haven't been able to locate any, and I'd love it if people pointed me to some in the comments.

Potentially relevant literature: I skimmed the paper Forecasting the growth of complexity and change by Theodore Modis, Technological Forecasting and Social Change, Vol. 69, 2002 (377-404), available online (gated) here. I haven't been able to locate an ungated version. The paper uses a logarithmic timeline for the past, taking the present as the origin. A quick skim did not lead me to believe it overlapped with the points I made here. Incidentally, Modis has been critical of Ray Kurzweil's singularity forecast.

See also the discussion at the end of #6 (hyperbolic discounting) linking to the paper On the perception of time by F. Thomas Bruss and Ludger Ruschendorf.

Addendum: To clarify the relation between logarithmic timeline, logarithmic scales, linear functions, power functions, and exponential functions, the table below gives, in its cells, the type of function we'd end up graphing:

Growth rate of quantity with respect to time | Ordinary scale | Logarithmic timeline | Logarithmic scale for quantity, ordinary timeline | Logarithmic scale for both
Linear | Linear | Exponential | Logarithmic | Linear with slope 1
Power function | Power function | Exponential | Logarithmic | Linear
Exponential | Exponential | Double exponential | Linear | Exponential

The effect of effectiveness information on charitable giving

14 Unnamed 15 April 2014 04:43PM

A new working paper by economists Dean Karlan and Daniel Wood, The Effect of Effectiveness: Donor Response to Aid Effectiveness in a Direct Mail Fundraising Experiment.

The Abstract:

We test how donors respond to new information about a charity’s effectiveness. Freedom from Hunger implemented a test of its direct marketing solicitations, varying letters by whether they include a discussion of their program’s impact as measured by scientific research. The base script, used for both treatment and control, included a standard qualitative story about an individual beneficiary. Adding scientific impact information has no effect on whether someone donates, or how much, in the full sample. However, we find that amongst recent prior donors (those we posit more likely to open the mail and thus notice the treatment), large prior donors increase the likelihood of giving in response to information on aid effectiveness, whereas small prior donors decrease their giving. We motivate the analysis and experiment with a theoretical model that highlights two predictions. First, larger gift amounts, holding education and income constant, is a proxy for altruism giving (as it is associated with giving more to fewer charities) versus warm glow giving (giving less to more charities). Second, those motivated by altruism will respond positively to appeals based on evidence, whereas those motivated by warm glow may respond negatively to appeals based on evidence as it turns off the emotional trigger for giving, or highlights uncertainty in aid effectiveness.

In the experimental condition (for one of the two waves of mailings), the donors received a mailing with this information about the charity's effectiveness:

In order to know that our programs work for people like Rita, we look for more than anecdotal evidence. That is why we have coordinated with independent researchers [at Yale University] to conduct scientifically rigorous impact studies of our programs. In Peru they found that women who were offered our Credit with Education program had 16% higher profits in their businesses than those who were not, and they increased profits in bad months by 27%! This is particularly important because it means our program helped women generate more stable incomes throughout the year.

These independent researchers used a randomized evaluation, the methodology routinely used in medicine, to measure the impact of our programs on things like business growth, children's health, investment in education, and women's empowerment.

In the control condition, the mailing instead included this paragraph:

Many people would have met Rita and decided she was too poor to repay a loan. Five hungry children and a small plot of mango trees don’t count as collateral. But Freedom from Hunger knows that women like Rita are ready to end hunger in their own families and in their communities.

Meetup : Boston - Two Parables on Language and Philosophy

1 Vika 15 April 2014 12:10PM

Discussion article for the meetup : Boston - Two Parables on Language and Philosophy

WHEN: 20 April 2014 03:30:00PM (-0400)

WHERE: MIT, 25 Ames St, Cambridge, MA

Sam Rosen will continue his talk from March 23 with Parable 2 on Language and Philosophy, starting at 4pm.

Cambridge/Boston-area Less Wrong meetups start at 3:30pm, and have an alternating location:

  • 1st Sunday meetups are at Citadel in Porter Sq, at 98 Elm St, apt 1, Somerville.

  • 3rd Sunday meetups are in MIT's building 66 at 25 Ames St, room 156. Room number subject to change based on availability; signs will be posted with the actual room number.

(We also have last Wednesday meetups at Citadel at 7pm.)

Our default schedule is as follows:

—Phase 1: Arrival, greetings, unstructured conversation.

—Phase 2: The headline event. This starts promptly at 4pm, and lasts 30-60 minutes.

—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.

—Phase 4: Dinner.


My Heartbleed learning experience and alternative to poor quality Heartbleed instructions.

14 aisarka 15 April 2014 08:15AM

Because high-quality Heartbleed instructions were hard to find, perfectly good, intelligent rationalists either didn't do all that was needed, ending up with a false sense of security, or did things that increased their risk without realizing it and needed to take additional steps.  Part of the problem is that organizations who write for end users do not specialize in computer security, and vice versa, so many of the Heartbleed instructions for end users had issues, ranging from conflicting and confusing information to outright ridiculous hype.

As an IT person and a rationalist, I knew better than to jump to the proposing-solutions phase before researching [1].  Recognizing the need for well-thought-out Heartbleed instructions, I spent 10-15 hours sorting through the chaos to compile the more comprehensive instructions below.  I'm not a security expert, but as an IT person who has read about computer security out of a desire for professional improvement and out of curiosity, and who is familiar with various research issues, cognitive biases, logical fallacies, etc., I am not clueless either.

This is a major event.  Some sources are calling it one of the worst security problems ever to happen on the Internet [2].  It has been proven to be more than a theoretical risk (four people hacked the keys to the castle out of Cloudflare's challenge in just one day [3]).  It has been badly exploited (900 Canadian social insurance numbers were leaked today [4]).  And some evidence exists that it may have been used for spying for a long time (the EFF found evidence of someone spying on IRC conversations [5]).  So I think it's important to share my compilation of Heartbleed instructions, just so that a better list is out there.

More importantly, this disaster is a very rare rationality learning opportunity: reflecting on our behavior and comparing it with what we realize we should have done after becoming more informed may help us see patches of irrationality that could harm us during future disasters.  For that reason, I did some rationality checks on my own behavior by asking myself a set of questions, which I have of course included.

 

Heartbleed Research Challenges this Post Addresses:

  - There are apparent contradictions between sources about which sites were affected by Heartbleed, which sites have updated for Heartbleed, which sites need a password reset, and whether to change your passwords now or wait until the company has updated for Heartbleed.  For instance, Yahoo said Facebook was not vulnerable. [6] LastPass said Facebook was confirmed vulnerable and recommended a password update. [7]

  - Companies are putting out a lot of "fluffspeek"*, which makes it difficult to figure out which of your accounts have been affected, and which companies have updated their software.

  - Most sources *either* specialize in writing for end-users *or* are credible sources on computer security, not both.

  - Different articles have different sets of Heartbleed instructions.  None of the articles I saw contained every instruction.

  - A lot of what's out there is just ridiculous hype. [8]

 

Disclaimer

I am not a security specialist, nor am I certified in any security-related area.  I am an IT person who has randomly read a bunch of security literature over the last 15 years, but there *is* a definite quality difference between an IT person who has read security literature and a professional who is dedicated to security.  I can't give you any guarantees (though I'm not sure it's wise to accept that from the specialists either).  Another problem here is time.  I wanted to act ASAP.  With hackers on the loose, I do not think it wise to invest the time it would take me to create a Gwern-style masterpiece.  This isn't exactly slapped together, but I am working within time constraints, so it's not perfect.  If you have something important to protect, or have the money to spend, consult a security specialist.

 

Compilation of Heartbleed Instructions


  Beware fraudulent password reset emails and shiny Heartbleed fixes.

  With all the real password reset emails going around, there are a lot of scam artists out there hoping to sneak in some dupes.  A lot of people get confused.  It doesn't mean you're stupid.  If you clicked a nasty link, or even if you're not sure, call the company's fraud department immediately.  That's why they're there. [9]  Always be careful about anything that seems too good to be true, as the scam artists have also begun to advertise Heartbleed "fixes" as bait.


  If the site hasn't done an update, it's risky to change your password.

  Why: This may increase your risk.  If Heartbleed isn't fixed, any new password you type in could be stolen, and a lot of criminals are probably doing whatever they can to exploit Heartbleed right now since they just found out about it.  "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  If you use digital password storing, consider whether it is secure.

  Some digital password-storing software is way better than others.  I can't recommend one, but be careful which one you choose.  Also, check whichever one you use for Heartbleed.


  If you already changed your password, and then a site updates or says "change your password" do it again.

  Why change it twice?: If you changed it before the update, you were sending that new password over a connection with a nasty security flaw.  Consider that password "potentially stolen" and make a new one.  "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  If a company says "no need to change your password" do you really want to believe them?

  There's a perverse incentive for companies to tell you "everything is fine" when in fact it is not fine, because nobody wants to be seen as having bad security on their website.  Also, if someone did steal your password through this bug, it's not traceable to the bug.  Companies could conceivably claim "things are fine" without much accountability.  "Exploitation of this bug leaves no traces of anything abnormal happening to the logs." [11] I do not know whether, in practice, companies respond to similar perverse incentives, or if some unknown thing keeps them in check, but I have observed plenty of companies taking advantage of other perverse incentives.  Health care rescission, for instance, affected much more important things than data.


  When a site has done a Heartbleed update, *then* change your password.

  That's the time to do it. "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  Security Questions

  Nothing protected your mother's maiden name or the street you grew up on from Heartbleed any more than your passwords or other data.  A stolen security question can be a much bigger risk than a stolen password, especially if you used the same one on multiple different accounts.  When you change your password, also consider whether you should change your security questions.  Think about changing them to something hard to guess, unique to that account, and remember that you don't have to fill out your security questions with accurate information.  If you filled the questions out in the last two years, there's a risk that they were stolen, too.


  How do I know if a site updated?

 

  Method One:

    Qualys SSL Labs, an information security provider, created a free SSL Server Test.  Just plug in the domain name and Qualys will generate a report.  Yes, it checks the certificate, too.  (Very important.)

    Qualys Server Test

 

  Method Two:

    CERT, a major security flaw advisory publisher, listed some (not all!) of the sites that have updated.  If you want a list, you should use CERT's list, not other lists. 

    CERT's List

    Why CERT's list?  Hearing "not vulnerable" on some news website's list does not mean that any independent organization verified that the site was fine, nor that an independent organization even has the ability to verify that the site has been safe for the entire last two years.  If anyone can do that job, it would be CERT, but I am not aware of any tests of their abilities in that regard.  Also, there is no fluffspeek*.


  Method Three:

    Search the site itself for the word "Heartbleed" and read the articles that come up.  If the site had to do a Heartbleed update, change your password.  Here's the quick way to search a whole site in Google (do not add "www"):

    site:websitename.com Heartbleed


  If an important site hasn't updated yet:

  If you have sensitive data stored there, don't log into that site until it's fixed.  If you want to protect it, call them up and try to change your password by phone or lock the account down.  "Stick to reputable websites and services, as those sites are most likely to have addressed the vulnerability right away." [10]


  Check your routers, mobile phones, and other devices.

  Yes, really. [13] [14]


  If you have even the tiniest website:

  Don't think "There's nothing to steal on my website".  Spammers always want to get into your website.  Hackers make software that exploits bugs and can share or sell that software.  If a hacker shares a tool that exploits Heartbleed and your site is vulnerable, spammers will get the tool and could make a huge mess out of everything.  That can get you blacklisted and disrupt email, it can get you removed from Google search engine results, it can disrupt your online advertising ... it can be a mess.

  Get a security expert involved to look for all the places where Heartbleed may have caused a security risk on your site, preferably one who knows about all the different services that your website might be using.  "Services" meaning things like a vendor that you pay so your website can send bulk text messages for two-factor authentication, or a free service that lets users do "social sign on" to log into your site with an external service like Yahoo.  The possibilities for Heartbleed to cause problems on your website, through these kinds of services, are really pretty enormous.  Both paid services and free services could be affected.

  A sysadmin needs to check the server your site is on to figure out if it's got the Heartbleed bug and update it.

  Remember to check your various web providers like domain name registration services, web hosting company, etc.


Rationality Learning Opportunity (The Questions)

We won't get many opportunities to think about how we react in a disaster.  For obvious ethical reasons, we can't exactly create disasters in order to test ourselves.  I am taking the opportunity to reflect on my reactions and am sharing my method for doing this.  Here are some questions I asked myself which are designed to encourage reflection.  I admit to having made two mistakes at first: I did not apply rigorous skepticism to each news source right from the very first article I read, and I underestimated the full extent of what it would take to address the issue.  What saved me was noticing my confusion.

  When you first heard about Heartbleed, did you fail to react?  (Normalcy bias)

  When you first learned about the risk, what probability did you assign to being affected by it?  What probability do you assign now?  (Optimism bias)

  Were you surprised to find out that someone in your life did not know about Heartbleed, and regret not telling them when it had occurred to you to tell them?  (Bystander effect)

  What did you think it was going to take to address Heartbleed?  Did you underestimate what it would take to address it competently?  (Dunning-Kruger effect)

  After reading news sources on Heartbleed instructions, were you surprised later that some of them were wrong?

  How much time did you think it would take to address the issue?  Did it take longer?  (Planning fallacy)

  Did you ignore Heartbleed?  (Ostrich effect)


*Fluffspeek:

Companies, of course, want to present a respectable face to customers, so most of them are not just coming out and saying "We were affected by Heartbleed.  We have updated.  It's time to change your password now."  Instead, some have been writing fluff like:

  "We see no evidence that data was stolen."

  According to the company that found this bug, Heartbleed doesn't leave a trail in the logs. [15] If someone did steal your password, would there be evidence anyway?  Maybe some companies really were able to rule that out somehow.  Positivity bias, a type of confirmation bias, is an important possibility here.  Maybe, like many humans, these companies simply failed to "Look into the dark" [16] and think of alternate explanations for the evidence they're seeing (or not seeing, which can sometimes be evidence [17], but not useful evidence in this case).

  "We didn't bother to tell you whether we updated for Heartbleed, but it's always a good idea to change your password however often."

  Unless you know each website has updated for Heartbleed, there's a chance that you're going to go out and send your new passwords right through a bunch of websites' Heartbleed security holes as you're changing them.  Now that Heartbleed is big news, every hacker and script kiddie on planet earth probably knows about it, which means there are probably way more people trying to steal passwords through Heartbleed than before.  Which is the greater risk?  Entering a new password while the site is leaking passwords in a potentially hacker-infested environment, or leaving your potentially stolen password there until the site has updated?  Worse, if people *did not* change their password after the update because they already changed it *before* the update, they've got a false sense of security about the probability that their password was stolen.  Maybe some of these companies updated for Heartbleed before saying that.  Maybe the bug was completely non-applicable for them.  Regardless, I think end users deserve to know that updating their password before the Heartbleed update carries a risk.  Users need to be told whether an update has been applied.  As James Lynn wrote for Forbes, "Forcing customers to guess or test themselves is just negligent." [8]

"Fluffspeek" is a play on "leetspeek", a term used to describe bits of text full of numbers and symbols that is attributed to silly "hackers".  Some PR fluff may be a deliberate attempt to exploit others, similar in some ways to the manipulation techniques popular among black hat hackers, called social engineering.  Even when it's not deliberate, this kind of garbage is probably about as ugly to most people with half a brain as "I AM AN 31337 HACKER!!!1", so is still fitting.

 

References:

 1. http://lesswrong.com/lw/ka/hold_off_on_proposing_solutions/

 2. http://money.cnn.com/2014/04/09/technology/security/Heartbleed-bug/

 3. http://blog.cloudflare.com/the-results-of-the-cloudflare-challenge

 4. http://www.cra-arc.gc.ca/gncy/sttmnt2-eng.html

 5. https://www.eff.org/deeplinks/2014/04/wild-heart-were-intelligence-agencies-using-Heartbleed-november-2013

 6. http://finance.yahoo.com/blogs/breakout/Heartbleed-security-flaw--how-to-protect-yourself-172552932.html

 7. https://lastpass.com/Heartbleed/?h=facebook.com

 8. Forbes.com "Avoiding Heartbleed Hype, What To Do To Stay Safe" (I can't link to this for some reason but you can do a search.)

 9. http://www.net-security.org/secworld.php?id=16671

 10. http://www.cnbc.com/id/101569136

 11. http://Heartbleed.com/

 12. https://community.norton.com/t5/Norton-Protection-Blog/Heartbleed-Bug-What-You-Need-to-Know-and-Security-Tips/ba-p/1120128

 13. http://online.wsj.com/news/articles/SB10001424052702303873604579493963847851346

 14. Forbes.com "A Billion Smartphone Users May Be Affected by the Heartbleed Security Flaw" (I can't link to this for some reason but you can do a search.)

 15. http://Heartbleed.com/

 16. http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/

 17. http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/

Meetup : Moscow meet up

1 Yuu 15 April 2014 05:12AM

Discussion article for the meetup : Moscow meet up

WHEN: 20 April 2014 04:00:00AM (+0400)

WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16

We will have:

  • Unicorns: false but useful beliefs, report.

  • Boundaries of rationality, discussion.

  • Cognitive behavioural therapy as a framework for daily rationality, report.

  • How to avoid multitasking and main issues of semantics, report.

We gather in the Yandex office; you need the first revolving door under the archway. Here is an additional guide on how to get there: link. You can fill in this one-minute form (in Russian) to share your contact information.

We start at 16:00 and sometimes finish at night. Please note that we first gather near the entrance and then go inside together.


Unfriendly Natural Intelligence

7 Gunnar_Zarncke 15 April 2014 05:05AM

Related to: UFAI, Paperclip maximizer, Reason as memetic immune disorder

A discussion with Stefan (cheers, didn't get your email, please message me) during the European Community Weekend Berlin fleshed out an idea I had toyed around with for some time:

If a UFAI can wreak havoc by driving simple goals to extremes, then driving human desires to extremes should likewise cause problems. And we should already see this.

Actually we do. 

We know that just following our instincts on eating (sugar, fat) is unhealthy. We know that stimulating our pleasure centers more or less directly (drugs) is dangerous. We know that playing certain games can lead to comparable addiction. And the recognition of this has led to a large number of more or less fine-tuned anti-memes, e.g. dieting, early drug prevention, helplines. These memes, steering us away from such behaviors, were selected for because they provided aggregate benefits to the (members of) social (sub)systems they are present in.

Many of these memes have become so self-evident we don't recognize them as such. Some are essential parts of highly complex social systems. What is the general pattern? Did we catch all the critical cases? Are the existing memes well-suited to the task? How are they related? Many are probably deeply woven into our culture and traditions.

Did we miss any anti-memes? 

This last question really is at the core of this post. I think we lack some necessary memes to keep new exploitations of our desires in check. Some new exploitations result from our society a) having developed the capacity to exploit these desires and b) having the scientific knowledge to know how.


Earnings of economics majors: general considerations

4 JonahSinick 14 April 2014 10:23PM

Some liberal arts majors make more money than others, but by far the ones who make the most are economics majors. The 2013-2014 Payscale Salary Report gives the following figures; the second column is median starting salary and the third is median mid-career salary, in thousands of dollars:


Major | Median starting salary | Median mid-career salary
Economics | 50 | 96
Political Science | 41 | 77
Philosophy | 39 | 78
History | 39 | 71
English Literature | 40 | 71
Psychology | 36 | 60
Sociology | 37 | 55

This trend is robust, and I'll give more supporting data as an appendix at the end of the post.

The fact that economics majors make so much more is often taken to mean that majoring in economics raises future earnings. Is this true? In this post I'll discuss some general considerations relevant to determining this, and discuss the sort of data that one might try use to resolve the question. In future posts, I'll offer some such data, with analysis and discussion.

I'd welcome any other ideas for testing the hypotheses, as well as pushback on the conceptual framework, and/or alternative hypotheses.


Meetup : Melbourne Social Meetup (Note: change of location!)

0 Maelin 14 April 2014 01:52PM

Discussion article for the meetup : Melbourne Social Meetup (Note: change of location!)

WHEN: 18 April 2014 06:30:00PM (+1000)

WHERE: 2 Oranna Court, Glen Waverley, Victoria, Australia

PLEASE NOTE: CHANGE OF LOCATION

April's regular Social meetup is on this month, but our usual venue is no longer available, so we're going to experiment with the location. This month we are in Glen Waverley (see below for transport arrangements).

Social meetups are casual affairs where we chat and play games. We usually arrange some form of take-away for dinner for any who want to be part of it, but feel free to bring your own dinner if you'd prefer. The official start time is 18:30 but you won't upset anything if you turn up later on.

For this month, we are in Glen Waverley. If you're coming by public transport, catch a train on the Glen Waverley line and get off at Glen Waverley, then you can call either me (Richard, 0421-231-789) or Scott (0432-862-932) and we'll do a quick run to the station. Otherwise, there's plenty of parking nearby.

Hope to see you there! :)


[Requesting advice] Problems with optimizing my life as a high school student

12 Optimal 14 April 2014 01:07PM

I am writing this because I believe I need advice and direction from people who can understand my problems. This is my first post on Less Wrong, and I am new to practicing serious writing/rationality in general, so please alert me if I have made any glaring mistakes in this text or in my decisions/beliefs. I will begin by describing myself and my situation.

(This article turned out a lot longer than I thought it would, and it might be hard to follow as a result. I urge you to skim through it once, reading the first sentence of each paragraph, before reading it in full.)

I am a 16 year old male currently enrolled in an online high school that will remain nameless. My story will be very familiar for most of you: I want to help ensure that the invention of self-improving AI will benefit humanity (and myself, particularly), and I am devoting my entire life to this single goal. This is only possible because I am in a highly favorable position, having a safe home, loving family, secure financial support, internet access, and a tremendous amount of unrestrained free time.

My free time is the result of my relatively undemanding online school plus my unrestrictive parents. To give you an idea of how significant it is: for several days, I could do nothing but play video games and look at porn. And I mean nothing: I could rush right through my online lessons, avoid all exercise and sunlight, stay up until 4AM, and have (unhealthy) food brought to my room. Nobody would stop me from maintaining such self-destructive habits. I could go on doing those things for years. And that is exactly what I did, starting when I was age 11 and ending when I was age 15.

For most of the past year, I have been dedicated to overhauling my life, eliminating 'negative' (self-destructive, shortsighted, unproductive) habits and introducing more positive (healthy, considerate of the future, productive) ones. I did this, of course, because I learned about the profound implications of the technological singularity. I decided that I needed to be a healthy, knowledgeable, and productive person to maximize my chances of being able to experience the joys of future technologies. I'm sure that many of you can identify with that sentiment, although I doubt that anyone could have been lazier than me.

The past year was easily the most important year of my life, and will likely remain so for quite a while. As you may have guessed, it was also the most difficult time of my life. The first 5-6 months were particularly painful, mostly because of my severe addiction to internet porn. During that time, I was putting most of my effort into eliminating negative habits. I still added many positive habits, the most prominent being programming, reading (fiction only) offline, exercise, healthy eating, and meditation. Many of my habits fluctuated; I experimented a lot. There was some constant change, however, in the most important habits: average time spent on the computer for entertainment gradually decreased, while time spent on programming and reading increased in turn. 

I would say that I succeeded at overhauling my life. Unfortunately, because my sole goal was 'reduce negative time, increase positive time', my 'positive' time is not nearly as positive as it could be. Sometimes I find myself staring at a programming e-book for an hour or more and learning nothing. Despite its relative ease, schoolwork often causes me to become stressed quickly. I had been practicing mindfulness meditation for 20-40 minutes a day, but I recently reduced and then removed that habit because it almost never helped me. Reading, exercise, and healthy eating were the only habits that always stuck with me no matter how badly I felt.

The most essential habit I built was the habit of tracking my habits. That is, I created a spreadsheet in OpenOffice to keep track of the time I spent on various activities every day. This was a very good thing to do: it motivated me when I was struggling to control my habits, and it now allows me to view my overall progress. These statistics are very helpful in getting a picture of my life and of my habits, so I will provide an abridged/condensed version of the entire spreadsheet collection. For each month, the average time I spent daily on each activity is shown. Numbers in bold indicate highly inaccurate measurements, taken from months wherein I mostly abstained from activity tracking.

(imgur version if it does not display properly)

'Reading offline' means either nonfiction or fiction (it was mostly fiction.) 'Schoolwork' often meant programming assignments. Video games count as leisure computer use. For most of 2013 I only did game programming; this was before I realized that 'AI programming' was more important than 'any programming'. Before recently, I was adding leisure computer use time much too gratuitously: I erroneously categorized it as 'any time spent on computer not covered by other activities'. The statistics for most of 2013 are slightly flawed as a result. All of the recorded daily activity times probably had a margin of error of around 15%. Also, the monthly averages are not good indicators of how I scheduled my activities; in December, for example, I did not play video games for 15-20 minutes every day (having more spaced out longer sessions instead), but my art practice was always 30-80 minutes a day.

Some patterns/trends here are obvious (programming), while others are more random (schoolwork). Programming and reading are obviously the dominant activities in my life. Until late 2013, I only read fiction. For better or worse, I recently realized that reading fiction and practicing art are, from a productivity/time-management perspective, equivalent to playing video games and watching television. I had abstained from activity tracking for most of Jan-Mar as an experiment, but I estimate that I was reading fiction for at least 3 hours every day during most of that period (Kkat is to blame.) This is only slightly odd, because around new years I was starting to focus on maximizing daily programming time, bringing the average up to over 3 hours. If you were wondering just how demanding my online school can be, the 44-min average recorded (over about a week) in January should give you an idea.

As I said before: I have been increasing the time I spend on positive activities, but the activities are not nearly as positive as they could be. I've tried practicing mindfulness many times, in various forms, to increase my productivity and happiness, but I could never consistently get it to work well. I know that quality > quantity here, and that I should study/work mindfully and efficiently instead of simply pouring time into the activity.

I used to put just enough time into productive activities to achieve the set 'daily minimum time' (different for all activities, it was always 40-80 for programming and 15-30 for art) and be satisfied. I don't see it that way now; no matter how much time I put into a productive activity, I cannot partake in an 'unproductive' activity without thinking "this time could be used in a more future-benefiting way". This is a big problem, because I am making my leisure time less leisurely and, by pouring time into the productive activities, making them less productive and more stressful. I am also aware of the fact that my present happiness only matters because it increases my productivity/general capability and therefore my chances of experiencing some kind of 'happy singularity'. This makes fun time even more difficult, because I am thinking that I could instead perform my productive activities in a more fun/mindful way, reducing the need for unproductive fun activities.

I recently found an article here that describes, almost exactly, this problem of mine. Reading that nearly blew my mind because I had never explicitly realized the problem before. I quote:

So I'm really not recommending that you try this mindhack. But if you already have spikes of guilt after bouts of escapism, or if you house an arrogant disdain for wasting your time on TV shows, here are a few mantras you can latch on to to help yourself develop a solid hatred of fun (I warn you that these are calibrated for a 14 year old mind and may be somewhat stale):

  • When skiing, partying, or generally having a good time, try remembering that this is exactly the type of thing people should have an opportunity to do after we stop everyone from dying.
  • When doing something transient like watching TV or playing video games, reflect upon how it's not building any skills that are going to make the world a better place, nor really having a lasting impact on the world.
  • Notice that if the world is to be saved then it really does need to be you who saves it, because everybody else is busy skiing, partying, reading fantasy, or dying in third world countries.

(Warning: the following sentences contain opinions.) The worst part is that this seems to be the right thing to do. There is a decent possibility that infinite happiness (or at least, happiness much greater than what could be experienced in a traditional human lifetime) can be experienced via friendly ASI; we should work towards achieving that instead of prioritizing any temporary happiness. But present happiness increases present productivity, so a sort of happiness/productivity balance needs to be struck. Kaj_Sotala, in the comments of the previously linked post, provides a strong argument against hating fun:

The main mechanism here seems to be that guilt not only blocks the relaxation, it also creates negative associations around the productive things - the productivity becomes that nasty uncomfortable reason why you don't get to do fun things, and you flinch away from even thinking about the productive tasks, since thinking about them makes you feel more guilty about not already doing them. Which in turn blocks you from developing a natural motivation to do them.

This feeling is so strong for me because nearly all of my productivity is based on guilt. Especially in the first six months of my productive transformation, I was training myself to feel very guilty when performing negative activities or when failing to perform positive ones. A lot of the time, I only did productive things because I knew I would feel bad if I did otherwise. There was no other way, really; at the time my negative habits were so pronounced that extreme action was required. But my most negative habits are defeated now, and because of my guilt-inducing strategy I cannot find a balance between happiness and productivity. Based on the above quote, the important thing is to make productive activities have a positive mental association. They have negative associations mostly because they are tiring, frustrating, or fruitless, or because they stop you from performing more fun activities.

One apparent solution is to perform all productive tasks mindfully/leisurely and give up unproductive fun activities completely (the most logical choice if human akrasia is not considered.) The other solution is to perform productive tasks mindfully, and have structured, guilt-free periods of leisure time. Based on others' comments here, the second solution is more practical, but I still have a hard time accepting unproductivity and enjoying productivity. My habit of activity tracking makes this worse; I can literally see the 'lost' minutes when I choose to partake in a leisure-time activity.

In the past few weeks, I have been partaking in less leisure time than ever before. I have only played video games because other people drag me into them and I am too uncertain to resist, and I always use my designated 'leisure computer use' time in the most 'fun-efficient' way possible (this has been the case for several months). That means avoiding mind-numbing activities like browsing reddit or 4chan, instead choosing to experience more soulful things that I have always held dear, like music, art, and certain other fantasies. But even then, I feel that I could be doing something more beneficial.

Here is where I need advice and other opinions: how much structured leisure time should I allocate, to achieve the optimal happiness/productivity balance? Would it be practical to attempt to give up structured 'fun time' completely, optimizing productive activities to be more mindful and leisurely? (See activity tracker: I would be able to give up all leisure time, but I would find it much harder to optimize productive time.) How much structured 'fun time' do you think established or upcoming AI researchers regularly allocate, and how does this affect their happiness/productivity balance?

I have established two of my problems: I cannot enjoy fun things and I am not a very good autodidact. I'm not only bad at studying individual topics: I often do not study consistently, glossing over sections or bouncing between books/exercises. I've proven that I definitely learn best by doing, but it's most often hard to find things to do, especially when dealing with more theoretical topics. I'm also never entirely sure of what topics I should be studying. For example: should I read books and take courses about machine learning, or wait until I finish statistics? Should I become competent at competition programming/algorithms before studying cognitive science, or will competition programming skills not even help me at all? Should I not even be asking the above questions, instead just doing everything at once? It's those kinds of questions without answers that make me think that I really don't know what I'm doing, and that college can't come soon enough.

My second request for advice is this: what would you recommend for me to do, to improve my studying habits in the face of uncertainty? How can I choose and maintain a good 'course sequence'? How should I make designated studying time less stressful and more efficient?  Also, based on the averages I provided, should I adjust how much time I am spending on different activities?

And so my main points are concluded. Like I said, I'm not very experienced in rationality, writing, or serious conversation with intelligent people, so I apologize if anything I just said seems erroneous. I do hope that my (perceived) issues can be at least partially resolved as a result of writing this.

I'm not done here, though: I have a few other concerns, these ones about high school and college. My current online school is a favorable learning environment: it is flexible, not overwhelmingly difficult or trivially easy, and easy to exploit when it is sensible to do so. My online schooling provides me with an exceptional degree of freedom; I would never go back to a physical school and give it all up to a broken system. I recently found out about Stanford University Online High School, however, and this challenged my opinion of my current school. My third concern is whether or not I should (attempt to) switch schools. I have good reasons supporting either choice, and I am unsure. I urge you to visit that link to learn about the school if you have not done so already.

Allow me to point out the most important difference: compared to Stanford OHS lessons, my current lessons seem dull and tedious. Stanford OHS lessons are more based on intellectually stimulating and personally engaging activities, in contrast to the more straightforward memorization tasks of (most of) my current school's lessons. At least, this seems to be the case, based on my (probably biased) observations and predictions. I'm not condemning my current school; they are actually trying to get more intellectually stimulating and personally engaging features in, but I can't seem to benefit from any of it. I am about to load up on AP courses, however, which may end up providing more beneficial and engaging work (or just more difficult memorization tasks). Also, enrolling in Stanford OHS would greatly reduce my free time and freedoms when dealing with school, and I might dislike the required video-conferences.

There are other, more defined problems with the Stanford OHS approach. For one, I would need to rush to apply: I would have to take the SAT in less than a month, much earlier than I had originally planned (we've contacted Stanford OHS already; they said they will allow me to apply after May 1 if I am taking the SAT on May 3). As a result, I may earn an unsatisfactory score on the SAT (consider the average scores here). Apparently, they also require recommendations in applications (not very easy to acquire when you're in online school). Despite those things, I believe I would have a good chance of being accepted, taking into consideration all of my other favorable traits aside from SAT scores or recommendations.

I might be more favored by top colleges if I graduated from the Stanford OHS as opposed to my current school. On the other hand, my capability to self-educate outside of the system will be a hook for colleges, especially if I can complete MOOCs and read college-level textbooks, so perhaps I should maximize free time by staying with my current school. Back on the first hand, I have proven myself to be an inefficient self-educator, so a more structured approach may work better. Either way, after graduating, I am going to apply to the some of the most prominent computer-science programs (no, I'm not going only by that one list). Carnegie Mellon would be my first choice, mostly because of its proximity to home.

And so my last set of questions is formed: Should I attempt to enroll in Stanford OHS? If not, should I indeed be focusing mostly on studying AI-related topics and working on software projects? Either way: assuming I have a >3.7 GPA, >700 SAT scores, and relevant AP courses/tests completed, would I have a decent chance of being accepted to one of the high-ranking computer science colleges?

Well, that will be all for today. If this were any other internet community, I would be very surprised if anyone read the whole thing. Even if I don't receive any helpful answers, I at least gained some writing skill points.

 

 
