In response to Why CFAR's Mission?
Comment author: alyssavance 31 December 2015 01:11:21PM 11 points

I mostly agree with the post, but I think it'd be very helpful to add specific examples of epistemic problems that CFAR students have solved, both "practice" problems and "real" problems. Eg., we know that math skills are trainable. If Bob learns to do math, along the way he'll solve lots of specific math problems, like "x^2 + 3x - 2 = 0, solve for x". When he's built up some skill, he'll start helping professors solve real math problems, ones where the answers aren't known yet. Eventually, if he's dedicated enough, Bob might solve really important problems and become a math professor himself.

Training epistemic skills (or "world-modeling skills", "reaching true beliefs skills", "sanity skills", etc.) should go the same way. At the beginning, a student solves practice epistemic problems, like the ones Tetlock uses in the Good Judgement Project. When they get skilled enough, they can start trying to solve real epistemic problems. Eventually, after enough practice, they might have big new insights about the global economy, and make billions at a global macro fund (or some such, lots of possibilities of course).

To use another analogy, suppose Carol teaches people how to build bridges. Carol knows a lot about why bridges are important, what the parts of a bridge are, why iron bridges are stronger than wood bridges, and so on. But we'd also expect that Carol's students have built models of bridges with sticks and stuff, and (ideally) that some students became civil engineers and built real bridges. Similarly, if one teaches how to model the world and find truth, it's very good to have examples of specific models built and truths found - both "practice" ones (that are already known, or not that important) and ideally "real" ones (important and haven't been discovered before).

Comment author: alyssavance 18 December 2015 03:19:17AM 18 points

Hey! Thanks for writing all of this up. A few questions, in no particular order:

  • The CFAR fundraiser page says that CFAR "search[es] through hundreds of hours of potential curricula, and test[s] them on smart, caring, motivated individuals to find the techniques that people actually end up finding useful in the weeks, months and years after our workshops." Could you give a few examples of curricula that worked well, and curricula that worked less well? What kind of testing methodology was used to evaluate the results, and in what ways is that methodology better (or worse) than methods used by academic psychologists?

  • One can imagine a scale for the effectiveness of training programs. Say, 0 points is a program where you play Minesweeper all day, and 100 points is a program that could take randomly chosen people and make them as skilled as Einstein, Bismarck, or von Neumann. Where would CFAR rank its workshops on this scale, and how much improvement does CFAR feel like there has been from year to year? Where on this scale would CFAR place other training programs, such as MIT grad school, Landmark Forum, or popular self-help/productivity books like Getting Things Done or How to Win Friends and Influence People? (One could also choose different scale endpoints, if mine seem suboptimal.)

  • While discussing goals for 2015, you note that "We created a metric for strategic usefulness, solidly hitting the first goal; we started tracking that metric, solidly hitting the second goal." What does the metric for strategic usefulness look like, and how has CFAR's score on the metric changed from 2012 through now? What would a failure scenario (ie. where CFAR did not achieve this goal) have looked like, and how likely do you think that failure scenario was?

  • CFAR places a lot of emphasis on "epistemic rationality", or the process of discovering truth. What important truths have been discovered by CFAR staff or alumni, which would probably not have been discovered without CFAR, and which were not previously known by any of the staff/alumni (or by popular media outlets)? (If the truths discovered are sensitive, I can post a GPG public key, although I think it would be better to openly publish them if that's practical.)

  • You say that "As our understanding of the art grew, it became clear to us that “figure out true things”, “be effective”, and “do-gooding” weren’t separate things per se, but aspects of a core thing." Could you be more specific about what this cashes out to in concrete terms; ie. what the world would look like if this were true, and what the world would look like if this were false? How strong is the empirical evidence that we live in the first world, and not the second? Historically, adjusted for things we probably can't change (like eg. IQ and genetics), how strong have the correlations been between truth-seeking people like Einstein, effective people like Deng Xiaoping, and do-gooding people like Norman Borlaug?

  • How many CFAR alumni have been accepted into Y Combinator, either as part of a for-profit or a non-profit team, after attending a CFAR workshop?

Comment author: Nick_Beckstead 27 May 2013 07:47:10PM 1 point

You wrote:

For example, any modification to the English language, the American political system, the New York Subway or the Islamic religion will almost certainly be moot in five thousand years, just as changes to Old Kingdom Egypt are moot to us now.

I disagree, especially with the religion example. Religions partially involve values and I think values are a plausible area for path-dependence. And I'm not the only one who has the opposite intuition. Here is Robin Hanson:

S – Standards – We can become so invested in the conventions, interfaces, and standards we use to coordinate our activities that we each can’t afford to individually switch to more efficient standards, and we also can’t manage to coordinate to switch together. Conceivably, the genetic code, base ten math, ASCII, English language and units, Java, or the Windows operating system might last for trillions of years.

You wrote:

The only exception would be if the changes to post-human society are self-reinforcing, like a tyrannical constitution which is enforced by unbeatable strong nanotech for eternity. However, by Bostrom's definition, such a self-reinforcing black hole would be an existential risk.

Not all permanent suboptimal states are existential catastrophes, only ones that "drastically" curtail the potential for desirable future development.

You wrote:

Are there any examples of changes to post-human society which a) cannot ever be altered by that society, even when alteration is a good idea, b) represent a significant utility loss, even compared to total extinction, c) are not themselves total or near-total extinction (and are thus not existential risks), and d) we have an ability to predictably effect at least on par with our ability to predictably prevent x-risk? I can't think of any, and this post doesn't provide any examples.

It sounds like you are asking me for promising highly targeted strategies for addressing specific trajectory changes in the distant future. One of the claims in this post is that this is not the best way to create smaller trajectory changes. I said:

For example, it may be reasonable to try to assess, in detail, questions like, “What are the largest specific existential risks?” and, “What are the most effective ways of reducing those specific risks?” In contrast, it seems less promising to try to make specific guesses about how we might create smaller positive trajectory changes because there are so many possibilities and many trajectory changes do not have significance that is predictable in advance....Because of this, promising ways to create positive trajectory changes in the world may be more broad than the most promising ways of trying to reduce existential risk specifically. Improving education, improving parenting, improving science, improving our political system, spreading humanitarian values, or otherwise improving our collective wisdom as stewards of the future could, I believe, create many small, unpredictable positive trajectory changes.

For specific examples of changes that I believe could have very broad impact and lead to small, unpredictable positive trajectory changes, I would offer political advocacy of various kinds (immigration liberalization seems promising to me right now), spreading effective altruism, and supporting meta-research.

Comment author: alyssavance 27 May 2013 08:45:49PM 9 points

Religions partially involve values and I think values are a plausible area for path-dependence.

Please explain the influence that, eg., the theological writings of Peter Abelard, described as "the keenest thinker and boldest theologian of the 12th Century", had on modern-day values that might reasonably have been predictable in advance during his time. And that was only eight hundred years ago, only ten human lifetimes. We're talking about timescales of thousands or millions or billions of current human lifetimes.

Conceivably, the genetic code, base ten math, ASCII, English language and units, Java, or the Windows operating system might last for trillions of years.

This claim is prima facie preposterous, and Robin presents no arguments for it. Indeed, it is so farcically absurd that it substantially lowers my prior on the accuracy of all his statements, and the fact that you would present it with no evidence except a blunt appeal to authority lowers my prior on your statements as well. To see why, consider, eg., this set of claims about standards lasting two thousand years (a tiny fraction of a comparative eyeblink), and why even that is highly questionable. Or this essay about programming languages a mere hundred years from now, assuming no x-risk and no strong-AI and no nanotech.

For specific examples of changes that I believe could have very broad impact and lead to small, unpredictable positive trajectory changes, I would offer political advocacy of various kinds (immigration liberalization seems promising to me right now), spreading effective altruism, and supporting meta-research.

Do you have any numbers on those? Bostrom's calculations obviously aren't exact, but we can usually get key numbers (eg. # of lives that can be saved with X amount of human/social capital, dedicated to Y x-risk reduction strategy) pinned down to within an order of magnitude or two. You haven't specified any numbers at all for the size of "small, unpredictable positive trajectory changes" in comparison to x-risk, or the cost-effectiveness of different strategies for pursuing them. Indeed, it is unclear how one could come up with such numbers even in theory, since the mechanisms behind such changes causing long-run improved outcomes remain unspecified. Making today's society a nicer place to live is likely worthwhile for all kinds of reasons, but expecting it to have direct influence on the future of a billion years seems absurd. Ancient Minoans from merely 3,500 years ago apparently lived very nicely, by the standards of their day. What predictable impacts did this have on us?

Furthermore, pointing to "political advocacy" as the first thing on the to-do list seems highly suspicious as a signal of bad reasoning somewhere, sorta like learning that your new business partner has offices only in Nigeria. Humans are biased to make everything seem like it's about modern-day politics, even when it's obviously irrelevant, and Cthulhu knows it would be difficult finding any predictable effects of eg. Old Kingdom Egypt dynastic struggles on life now. Political advocacy is also very unlikely to be a low-hanging-fruit area, as huge amounts of human and social capital already go into it, and so the effect of a marginal contribution by any of us is tiny.

Comment author: alyssavance 27 May 2013 07:11:48PM 12 points

The main reason to focus on existential risk generally, and human extinction in particular, is that anything else about posthuman society can be modified by the posthumans (who will be far smarter and more knowledgeable than us) if desired, while extinction can obviously never be undone. For example, any modification to the English language, the American political system, the New York Subway or the Islamic religion will almost certainly be moot in five thousand years, just as changes to Old Kingdom Egypt are moot to us now.

The only exception would be if the changes to post-human society are self-reinforcing, like a tyrannical constitution which is enforced by unbeatable strong nanotech for eternity. However, by Bostrom's definition, such a self-reinforcing black hole would be an existential risk.

Are there any examples of changes to post-human society which a) cannot ever be altered by that society, even when alteration is a good idea, b) represent a significant utility loss, even compared to total extinction, c) are not themselves total or near-total extinction (and are thus not existential risks), and d) we have an ability to predictably effect at least on par with our ability to predictably prevent x-risk? I can't think of any, and this post doesn't provide any examples.

In response to Why AI may not foom
Comment author: alyssavance 23 March 2013 11:58:57PM 1 point

A handful of the many, many problems here:

  • It would be trivial for even a Watson-level AI, specialized to the task, to hack into pretty much every existing computer system; almost all software is full of holes and is routinely hacked by bacterium-complexity viruses

  • "The world's AI researchers" aren't remotely close to a single entity working towards a single goal; a human (appropriately trained) is much more like that than Apple, which is much more like than than the US government, which is much more like that than a nebulous cluster of people who sometimes kinda know each other

  • Human abilities and AI abilities are not "equivalent"; even if their medians are the same, AIs will be much stronger in some areas (eg. arithmetic, to pick an obvious one); AIs have no particular need for our level of visual modeling or face recognition, but will have other strengths, both obvious and not

  • There is already a huge body of literature, formal and informal, on when humans use System 1 vs. System 2 reasoning

  • A huge amount of progress has been made in compilers, in terms of designing languages that implement powerful features in reasonable amounts of computing time; just try taking any modern Python or Ruby or C++ program and porting it to Altair BASIC

  • Large sections of the economy are already being monopolized by AI (Google is the most obvious example)

I'm not going to bother going farther, as in previous conversations you haven't updated your position at all (http://lesswrong.com/lw/i9/the_importance_of_saying_oops/) regardless of how much evidence I've given you.

Comment author: wedrifid 05 March 2013 08:46:20PM 16 points

Once MetaMed has been paid for and done a literature search on a given item, will that information only be communicated to the individual who hired them, or will it be made more widely available?

A related question: Assuming that the information remains private (as seems to be the most viable business model) will the company attempt to place restrictions on what the clients may do with the information? That is, is the client free to publish it?

Comment author: alyssavance 05 March 2013 08:59:33PM 33 points

Clients are free to publish whatever they like, but we are very strict about patient confidentiality, and do not release any patient information without express written consent.

What Is Optimal Philanthropy?

24 alyssavance 12 July 2012 12:17AM

Much has been written about the idea of optimal philanthropy. Yet, it seems like optimal philanthropy isn't a single claim. Instead, it's a collection of related, but quite distinct, claims that have all been bundled together, much like the Singularity.

Here's the website of GiveWell, and here's the main video introduction for 80,000 Hours, two of the major optimal philanthropy sites. I'll try to break them down into their component claims (written in bold), and also give my views on each of the claims. Some of them are explicitly stated, but others are more implicit, so I definitely welcome feedback if optimal philanthropists feel they disagree with some of the claims as stated.

1. We should evaluate charities according to how efficient they are, along some common metric - for example, number of lives saved per dollar, or existential risk reduction per dollar. We should then encourage charities to be more efficient, and selectively donate to (or otherwise help) the most efficient ones.

This one I support wholeheartedly. It's the main message of GiveWell, and though I have disagreements with their methodology, the basic idea (of marginal utility evaluation) is one that must happen more often. People are way too prone, by default, to donate to the Society for Curing Rare Diseases in Cute Puppies. Much has been written about this in Purchase Fuzzies And Utilons Separately, and other Less Wrong posts.

2. In order to do the most good for our fellow humans, we should start/work for/donate to/otherwise become involved in charitable organizations.

Of course, this is widely believed outside the optimal philanthropy movement. But I think this belief is inherent in many optimal philanthropy claims, and it ought to be examined more critically. It's plausible, but if one looks at the total good done over the last thousand years, the vast majority comes from science and various businesses, not popular causes like (back then) "tithe to the church" or (now, from the 80,000 Hours video) "campaigning against climate change". (Examples: electricity, air travel, enough food, trains, air conditioning...) However, it's also true that much more total effort has been put into for-profit organizations than non-profits. Which one is more efficient per dollar, I don't know, but it's a question worth examining rather than ignoring by default.

3. We should design careers around being able to donate the largest possible amount.

This one I see as highly damaging. Human psychology is such that, in order for a movement to get long-term, voluntary participation by highly capable people, stuff needs to be fun. Less Wrong itself, and the New York Less Wrong meetup group, are two obvious examples. "The more fun we have, the more people will want to join us."

Of course, the primary purpose of a community or activity doesn't have to be fun. Eg., Google doesn't exist for its employees to have fun. But working for Google still is fun, and if it weren't, Google would soon start losing people, become less productive, and ultimately go bankrupt. (Disclosure: I am a former Google intern.)

Writing a donation check can be very useful, but it isn't fun - it violates all the principles of Fun Theory. To go through the list, it isn't novel, doesn't involve tackling new challenges, doesn't engage the senses, doesn't get better over time (if we assume things work well, the marginal utility of dollars donated should go down, not up), doesn't involve long-term personal consequences, doesn't involve freedom of action, doesn't involve personal control over politics (assuming that one isn't personally involved in the charity, which is generally assumed), etc. etc. etc. (I'm referring to the actual act of writing the check here - for earning the money in the first place, see the next claim.)

Not everything in life is fun, nor can it be, at least pre-Singularity. Taking out the garbage isn't fun, but I do it anyway. However, trying to design lives around things that are inherently un-fun will probably lead to bad outcomes.

4. People can donate the largest amount through a traditional "high-earning career", like investment banking.

This one involves, to some extent, the classic American confusion between social class and income. One might think of "lawyer" as a high-earning career, since it's an upper middle class career; you need a graduate degree and dress up in suits. However, lawyer is actually a terrible career from a money-making perspective, and a law degree usually leaves people worse off financially (details here). Investment bankers themselves don't make that much money, except at the top levels (details here, and see here for general analysis of why gross pay isn't money in the bank).

In fact, all things being equal, one would expect a negative correlation between how prestigious a career is and how much money it makes. Prestige is, to some extent, a substitute for money - a musician might happily play for nothing, because being a musician is cool. "Where there's muck, there's brass" - for info on people who made millions doing boring stuff, see the excellent books The Millionaire Next Door and How To Get Rich.

But, even supposing that a "high-earning career" actually pays a lot (eg., partner at a Big Law firm), standard "career tracks" have serious disadvantages, like working insane hours doing unpleasant stuff. They sap what I call human capital and social capital - human capital is your skills, capabilities, and the value you can provide to an organization, while social capital is your network of friends and people who want to work with you. Human capital and social capital are the two critical things one needs to do anything, including world saving; they shouldn't be spent lightly.

5. People are morally responsible for the opportunity costs of their actions.

This is somewhat tricky/ambiguous, so I've deliberately made the wording vague, but the best example I've found is Peter Singer's argument (analyzed here). Singer compares philanthropy to a Trolley Problem. A child is lying on a set of train tracks, and a train is fast approaching. You're driving a luxury car, and if you drive the car onto the tracks, the train will hit the car instead, saving the child. What should you do?

In standard morality, the right thing to do is save the child, even if it means destroying your really expensive car. Indeed, we might socially shame someone who didn't. According to Singer, this means that we should be willing to donate any amount of money less than the price of an expensive car to charity, if it meant saving a life. Not donating to charity would be the same as letting the train run over the kid - murder through inaction.

I haven't figured out in detail what the real moral framework should be, but this argument doesn't work. For one thing, it produces atrocious incentives. Suppose you have a nice, cushy software job, and donate 10% of your income to charity, even though you could easily afford 20%. You work really hard, and a year later, you get another job for twice as much money. If not donating surplus money is morally equivalent to causing whatever bad outcome the donation would prevent, you are now twice as guilty, since the amount you aren't donating (20% vs. 10%) is twice as large. This is despite the fact that the total amount of good done is also twice as large. Why punish an improvement?
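
(To put rough numbers on that incentive problem, here's a quick sketch; the salaries and percentages are made-up assumptions for illustration, not figures from Singer or anyone else.)

```python
# Illustrative only: under the "opportunity cost = guilt" framing, your
# blameworthiness is the gap between what you could donate and what you do.
for salary in (100_000, 200_000):
    donated = 0.10 * salary       # what you actually give
    could_donate = 0.20 * salary  # what you could comfortably give
    gap = could_donate - donated
    print(f"salary {salary}: donated {donated:.0f}, un-donated gap {gap:.0f}")
# Doubling the salary doubles the good done (10k -> 20k donated), but it
# also doubles the gap (10k -> 20k), so the framing punishes the improvement.
```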

Another huge problem is the creation of unbounded obligations. I suspect a lot of thinking is inherently binary - you've either graduated college or you haven't, either paid back the loan or you haven't, either obeyed the rules or you haven't. With this line of argument, there's literally no point at which one can sit back and say, "I've fulfilled my duty to charity - there's nothing more to do". There's (short of FAI) always another child to save. One can never say, "I've met the goal", or even "I've gotten a third of the way to the goal", since the goal of solving all the world's problems is so huge. But if all states of the world - whether they be donating 0%, 10%, or 20% of income - result in 0% total goal fulfillment, then they're all equivalent, at least in some sense. A moral framework should make the good outcome and bad outcome as distinct as possible, not the same.

6. More people interested in doing good should become professional philanthropists.

This one I totally agree with, which might seem odd, given how closely related it is to #3 and #4. However, I think there are two important differences. A professional philanthropist is, typically, someone whose full-time job it is to figure out how to give away their money. But almost always, it's someone who already has lots of money. Historically, there isn't much precedent for people taking high-paying jobs and donating most of their salary... but there's lots of precedent for getting rich first, in whatever field, and then working full-time on donating.

The other difference is that professional philanthropists don't optimize for donating the maximum amount. They see donating as good, but they also see it as a good to be traded off against other goods, like having lots of nice stuff and social respect. Optimizing for more than one thing allows one to have a lot more Fun, as I suspect Bill Gates and Warren Buffett do.

This really does seem to be better than conventional routes of do-gooding. When I was in college, a huge number of people did stuff like fly to Africa to dig wells. This isn't just inefficient - it actually does net harm, since the cost of utilizing unskilled labor usually outweighs the benefits of such labor. Surely we can do better.

 

Advice On Getting A Software Job

22 alyssavance 09 July 2012 06:52PM

(Note to LWers: This post was written for a general audience at my blog, but I think it's particularly applicable to Less Wrong, as many here are already interested in programming. Programming is also an important route into two of the main paths to get rich, entrepreneurship and angel investing. Many of the leading donors to the Singularity Institute are professional programmers.)

You’ve already graduated college and found a job, but aren’t making the money you’d like. Or you live in the middle of nowhere. (Or, your job just sucks.) You’re pretty smart, and want to do something about this. What should you do?

One option is working as a computer programmer. Programming has a lot going for it: people tend to enjoy it, software companies have great perks, the work is usually in a laid-back atmosphere, and of course there’s no manual labor. Programming salaries generally range from high five figures (just out of college) to mid six figures (for senior people, and quants at Wall Street banks). This assumes you live in a major city, so be sure to factor that into cost-of-living math. (If you aren’t in a major city, seriously consider moving – most of the best jobs are there.)

Before you apply, you’ll need to learn how to program. To get started, there are lots of books on introductory programming – just search for “Introduction to C”, “Introduction to Python”, “Introduction to Haskell” and stuff like that. It’s good to know at least one language well, and also have experience with a few others, preferably ones that differ in important ways. Once you’ve learned the basics, there are lots of problems online to practice on. If you’re into math, Project Euler has a big, well-curated collection of them. You’ll also want to know your way around Linux, since it’s the main operating system of web servers; try installing it, and using it as your main OS for a few months.
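
(For a sense of what "practice problems" look like at the start, here's a minimal Python sketch in the spirit of the first Project Euler problem; the point is less the answer than getting used to writing, running, and checking small programs.)

```python
# Project Euler #1 style exercise: sum the natural numbers below 1000
# that are multiples of 3 or 5.
def sum_of_multiples(limit=1000):
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

if __name__ == "__main__":
    print(sum_of_multiples())  # 233168
```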

To actually get a programming job, you’ll mainly need to demonstrate a) programming experience, and b) knowledge of computer science. For the first one, the most important thing is to build lots of real software that people use, since this teaches some vital practical skills you won’t learn in books or college classes. Working on open source projects is an excellent way to build experience – check out the list of Debian packages for stuff people are currently doing. You should also get an account on Github, browse some of the projects, and pay attention to the issues; notice how programmers talk about bugs and ideas for fixing them. It’s sort of like being an artist – you want to have a portfolio of past work to show employers.

As you learn to program, you’ll discover that a majority of programming is debugging – discovering errors, and figuring out how to fix them. When you get an error (and you will get errors constantly), Google, Google, Google. Especially when working with open source software, just about any concrete problem you encounter has been dealt with before. If there’s anything you want to learn to do, or an error message you don’t understand, Googling should be a reflex.

For the second part, a lot of interview questions ask about computer science concepts like linked lists, sorting, standard search algorithms, and so on. To learn these, you can usually read a standard freshman comp sci textbook all the way through, and do a few of each chapter’s problems. It might be helpful to write out code on paper, as practice for job interviews where you might not have a computer.
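
(For a rough sense of the level these questions are pitched at, here's a short Python sketch of two interview staples, reversing a singly linked list and binary search; nothing beyond the language itself is assumed.)

```python
# Two classic interview exercises, sketched in plain Python.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse_linked_list(head):
    """Reverse a singly linked list in place; return the new head."""
    prev = None
    while head is not None:
        head.next, prev, head = prev, head, head.next
    return prev

def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```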

For most smart people, learning programming won’t be terribly difficult… but it does require sitting down at a desk every day, and putting in the hours. If you aren’t disciplined enough to work without a boss or a formal structure, you’ll have to figure out how to solve that problem first. Try working on problems you care about, ones that inspire you. It can be really helpful to work with a more experienced programmer, especially in person – they’ll review your work, correct your mistakes, and (more importantly) it’ll make you feel like you have to get things done on time.

Practicing programming isn’t like studying for a test – anything that requires flash cards or lots of memorization is likely a waste of time. “The best way to learn to program is by doing it.” Don’t memorize; build. If you have any task you find boring, annoying, or repetitive, see if you can write a program to do it instead. (This may require using other people’s software. Eg. if a website makes you type in the same number ten thousand times, Selenium is your best friend.)
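
(For instance, the "type the same number ten thousand times" case might be scripted roughly as below. This is a sketch assuming Selenium's Python bindings, version 4 or later; the URL, field name, and value are hypothetical placeholders, not a real site.)

```python
# Hypothetical example: auto-fill a tedious web form 10,000 times.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # requires Firefox and geckodriver installed
try:
    for _ in range(10_000):
        driver.get("https://example.com/tedious-form")   # placeholder URL
        field = driver.find_element(By.NAME, "number")   # placeholder field name
        field.send_keys("42")
        field.submit()  # submits the enclosing form
finally:
    driver.quit()
```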

A college degree isn’t necessary, but it can be very useful, since a lot of larger companies will only hire people with degrees (especially if you lack previous experience). If you don’t have a degree, try applying to smaller companies and startups, which have more flexible hiring procedures. Majoring in computer science and having a high GPA may help, but it won’t get you a job by itself, and a lot of places don’t care much. A lot of job postings will say things like “X years experience in languages A, B and C required” – ignore these, apply anyway. (Famously, one HR department once posted “ten years of experience required” for a language invented eight years ago.)

Some words of caution: Don’t work for anyone who offers you a full-time job without a decent salary (generally, at least $50,000) because you’ll be “paid in equity”. This is almost certainly a scam. If you want to get experience or learn skills, and someone is working on a cool project but doesn’t have money to pay you, do it on a volunteer basis so you won’t be tempted to see it as a job.

Also, don’t quit your current job to be a programmer, unless you either a) have lots of professional programming experience, or b) have some other way to pay rent. Generally, finding programming jobs in a major city is fast and easy (1-2 months), but a lot of people overestimate their skills, and you don’t want to run out of cash while discovering you aren’t as good as you hoped. It isn’t an overnight project; getting basic competence will take months, and true skill takes years.

Lastly, like most fields, don’t be afraid to apply to a zillion different jobs, and network, network, network. If you’ve been working with any other programmers for a while, ask (once, politely) if they know where you can get a job; they’ll likely have some ideas. Also, you might want to check out the Who’s Hiring thread on Hacker News. Go forth, and build the future!

(Disclaimer: This is based on my and my friends’ personal experiences. It isn’t applicable to everyone, and the reader must be responsible for the consequences of any decisions they make.)

This post was co-authored with Mike Blume, who went from having little programming knowledge as a physics grad student to being a software engineer at Loggly.

Further resources: How To Become A Hacker, Get That Job At Google

Negative and Positive Selection

71 alyssavance 06 July 2012 01:34AM

(Originally posted to my blog, The Rationalist Conspiracy; cross-posted here on request of Lukeprog.)

You’re the captain of a team, and you want to select really good players. How do you do it?

One way is through what I call positive selection. You devise a test – say, who can run the fastest – and pick the people who do best. If you want to be really strict, like if you’re selecting for the Olympics, you only pick the top fraction of a percent. If you’re a player, and you want to get selected, you have to train to do better on the test.

The opposite method is negative selection. Instead of one test to pick out winners, you design many tests to pick out losers. You test, say, who can’t run very well when it’s hot out, and get rid of them. Then you test who can’t run very well when it’s cold out, and get rid of them. Then you test running in the rain, and get rid of the losers there. And so on and so forth. When you’re strict with negative selection, you have lots and lots of tests, so that it’s very hard for any one person to pass through all the filters.

I think a big part of where American society’s gone wrong over the last hundred years is the ubiquitous use of negative selection over positive selection. (Athletics is one of the only exceptions. It’s apparently so important that people really care about performance – as opposed to, say, in medicine, where we exclude brilliant doctors if they don’t have the stamina to work ninety hours a week.) A single test can always be flawed; for example, IQ tests and SATs have many flaws. However, with negative selection, how badly you do is determined by the failure rate of every test combined. If you have twenty tests, and even one of them is so flawed it excludes good players, then your team will suck.
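
(To make the compounding concrete: even if each filter wrongly rejects good candidates only rarely, twenty stacked filters reject a large share of them. A quick sketch, with error rates that are illustrative assumptions:)

```python
# Probability that a genuinely good candidate survives n independent
# filters, if each one wrongly rejects them with probability p.
def survival_rate(n_filters, false_reject_rate):
    return (1 - false_reject_rate) ** n_filters

for p in (0.02, 0.10):
    print(p, round(survival_rate(20, p), 2))
# p = 0.02 -> ~0.67 of good candidates survive 20 filters
# p = 0.10 -> ~0.12 survive
```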

Elite college admissions is an example of a negative selection test. There’s no one way you can do really, really well, and thereby be admitted to Harvard. Instead, you have to pass a bunch of different selection filters: Are your SATs good enough? Are your grades good enough? Is your essay good enough? Are your extracurriculars good enough? Are your recommendations good enough? Failure on any one step usually means not getting admitted. And as competition has intensified, colleges have added more and more filters, like the supplemental applications top schools now require (in addition to the Common Application). It wasn’t always this way – Harvard used to admit primarily based on an entrance exam – until they discovered this let too many Jews in (no, seriously). More recently, the negative selection has been intensified by eliminating the SAT’s high ceiling.

Academia is another example of negative selection. To get tenure, first you have to get into a top PhD program. Then you have to graduate. Then you have to get a good recommendation from your advisor. Then you have to get a good postdoc. Then you have to get another good postdoc. Then you have to get a good assistant professorship. Then you have to get approved by the tenure committee. For the most part, if even one of those steps goes wrong – if you went to a second-tier PhD program, say – there’s no way to recover. Once you’re off the “track”, you’re off, and there’s no getting back on. It’s fail once, fail forever.

Grades are another example – A is a good grade, but there’s no excellent grade. There’s no grade that you only get if you’re in the top 0.1%. Hence, getting a really good GPA doesn’t mean excelling, so much as it means never failing. If you’re in high school and are taking six classes, if you fail one, your GPA is now 3.3 or less, regardless of how good you are otherwise.

In any field, at the top end, you tend to get a lot of variance. (Insert tales of the mad artist and mad mathematician.) Negative selection suppresses variance, by eliminating many of the dimensions on which people vary. Students at Yale are, for the most part, all strikingly similar – same socioeconomic class, same interests, same pursuits, same life goals, even the same style of dress. A lot of people tend to assume performance follows a bell curve, but in some cases, it’s more like a Pareto distribution: the top people do hundreds or thousands of times better than average. Hence, if you eliminate the small fraction of people at the very top, your performance is hosed. Fortunately for VC funds, the startup world is still positive selection.

Less obviously, a world with lots of negative selection might be a nasty one to live in. If you think of yourself as trying to eliminate bad, rather than encourage good, you start operating on the purity vs. contamination moral axis. Any tiny amount of bad, anywhere, must be gotten rid of, and that can lead to all sorts of nastiness. “When you are a Guardian of the Truth, all you can do is try to stave off the inevitable slide into entropy by zapping anything that departs from the Truth.  If there’s some way to pump against entropy, generate new true beliefs along with a little waste heat, that same pump can keep the truth alive without secret police.”

New York Less Wrong: Expansion Plans

13 alyssavance 01 July 2012 01:20AM
Last week, the New York Less Wrong group hosted the Summer Solstice Megameetup. Everyone had a huge amount of fun, and we met a lot of new people who didn't normally come to meetups. I've recently moved to New York, and we all enjoyed the solstice events so much that I'd like to host more of them. 
So, in addition to the regular Tuesday evening New York meetups, I'm going to start hosting Saturday afternoon meetups - it seems a lot of people are free on the weekends, but can't come to events during the work week. We also plan to host larger parties once a month or so, to just have fun, talk, and blow off steam. Parties are also useful as a Schelling point for those who are interested, but live far away or otherwise can't come every week. 
