What Is Optimal Philanthropy?

24 alyssavance 12 July 2012 12:17AM

Much has been written about the idea of optimal philanthropy. Yet, it seems like optimal philanthropy isn't a single claim. Instead, it's a collection of related, but quite distinct, claims that have all been bundled together, much like the Singularity.

Here's the website of GiveWell, and here's the main video introduction for 80,000 Hours, two of the major optimal philanthropy sites. I'll try to break them down into their component claims (numbered below), and also give my views on each of the claims. Some of them are explicitly stated, but others are more implicit, so I definitely welcome feedback if optimal philanthropists feel they disagree with some of the claims as stated.

1. We should evaluate charities according to how efficient they are, along some common metric - for example, number of lives saved per dollar, or existential risk reduction per dollar. We should then encourage charities to be more efficient, and selectively donate to (or otherwise help) the most efficient ones.

This one I support wholeheartedly. It's the main message of GiveWell, and though I have disagreements with their methodology, the basic idea (of marginal utility evaluation) is one that ought to be applied far more often. People are way too prone, by default, to donate to the Society for Curing Rare Diseases in Cute Puppies. Much has been written about this in Purchase Fuzzies And Utilons Separately, and other Less Wrong posts.

2. In order to do the most good for our fellow humans, we should start/work for/donate to/otherwise become involved in charitable organizations.

Of course, this is widely believed outside the optimal philanthropy movement. But I think this belief is inherent in many optimal philanthropy claims, and it ought to be examined more critically. It's plausible, but if one looks at the total good done over the last thousand years, the vast majority comes from science and various businesses, not popular causes like (back then) "tithe to the church" or (now, from the 80,000 Hours video) "campaigning against climate change". (Examples: electricity, air travel, enough food, trains, air conditioning... ) However, it's also true that much more total effort has been put into for-profit organizations than non-profits. Which one is more efficient per dollar, I don't know, but it's a question worth examining, rather than just ignoring it by default.

3. We should design careers around being able to donate the largest possible amount.

This one I see as highly damaging. Human psychology is such that, in order for a movement to get long-term, voluntary participation by highly capable people, stuff needs to be fun. Less Wrong itself, and the New York Less Wrong meetup group, are two obvious examples. "The more fun we have, the more people will want to join us."

Of course, the primary purpose of a community or activity doesn't have to be fun. Eg., Google doesn't exist for its employees to have fun. But working for Google still is fun, and if it weren't, Google would soon start losing people, become less productive, and ultimately go bankrupt. (Disclosure: I am a former Google intern.)

Writing a donation check can be very useful, but it isn't fun - it violates all the principles of Fun Theory. To go through the list, it isn't novel, doesn't involve tackling new challenges, doesn't engage the senses, doesn't get better over time (if we assume things work well, the marginal utility of dollars donated should go down, not up), doesn't involve long-term personal consequences, doesn't involve freedom of action, doesn't involve personal control over politics (assuming that one isn't personally involved in the charity, which is generally assumed), etc. etc. etc. (I'm referring to the actual act of writing the check here - for earning the money in the first place, see the next claim.)

Not everything in life is fun, nor can it be, at least pre-Singularity. Taking out the garbage isn't fun, but I do it anyway. However, trying to design lives around things that are inherently un-fun will probably lead to bad outcomes.

4. People can donate the largest amount through a traditional "high-earning career", like investment banking.

This one involves, to some extent, the classic American confusion between social class and income. One might think of "lawyer" as a high-earning career, since it's an upper middle class career; you need a graduate degree and dress up in suits. However, lawyer is actually a terrible career from a money-making perspective, and a law degree usually leaves people worse off financially (details here). Investment bankers themselves don't make that much money, except at the top levels (details here, and see here for general analysis of why gross pay isn't money in the bank).

In fact, all things being equal, one would expect a negative correlation between how prestigious a career is and how much money it makes. Prestige is, to some extent, a substitute for money - a musician might happily play for nothing, because being a musician is cool. "Where there's muck, there's brass" - for info on people who made millions doing boring stuff, see the excellent books The Millionaire Next Door and How To Get Rich.

But, even supposing that a "high-earning career" actually pays a lot (eg., partner at a Big Law firm), standard "career tracks" have serious disadvantages, like working insane hours doing unpleasant stuff. They sap what I call human capital and social capital - human capital is your skills, capabilities, and the value you can provide to an organization, while social capital is your network of friends and people who want to work with you. Human capital and social capital are the two critical things one needs to do anything, including world saving; they shouldn't be spent lightly.

5. People are morally responsible for the opportunity costs of their actions.

This is somewhat tricky/ambiguous, so I've deliberately made the wording vague, but the best example I've found is Peter Singer's argument (analyzed here). Singer compares philanthropy to a Trolley Problem. There's a set of train tracks with a child lying on them, and a train is fast approaching. You're driving a luxury car, and if you park the car on the tracks, the train will hit the car instead of the child. What should you do?

In standard morality, the right thing to do is save the child, even if it means destroying your really expensive car. Indeed, we might socially shame someone who didn't. According to Singer, this means that we should be willing to donate any amount of money less than the price of an expensive car to charity, if it meant saving a life. Not donating to charity would be the same as letting the train run over the kid - murder through inaction.

I haven't figured out in detail what the real moral framework should be, but this argument doesn't work. For one thing, it produces atrocious incentives. Suppose you have a nice, cushy software job, and donate 10% of your income to charity, even though you could easily afford 20%. You work really hard, and a year later, you get another job for twice as much money. If not donating surplus money is morally equivalent to causing whatever bad outcome the donation would prevent, you are now twice as guilty, since the amount you aren't donating (20% vs. 10%) is twice as large. This is despite the fact that the total amount of good done is also twice as large. Why punish an improvement?

Another huge problem is the creation of unbounded obligations. I suspect a lot of thinking is inherently binary - you've either graduated college or you haven't, either paid back the loan or you haven't, either obeyed the rules or you haven't. With this line of argument, there's literally no point at which one can sit back and say, "I've fulfilled my duty to charity - there's nothing more to do". There's (short of FAI) always another child to save. One can never say, "I've met the goal", or even "I've gotten a third of the way to the goal", since the goal of solving all the world's problems is so huge. But if all states of the world - whether they be donating 0%, 10%, or 20% of income - result in 0% total goal fulfillment, then they're all equivalent, at least in some sense. A moral framework should make the good outcome and bad outcome as distinct as possible, not the same.

6. More people interested in doing good should become professional philanthropists.

This one I totally agree with, which might seem odd, given how closely related it is to #3 and #4. However, I think there are two important differences. A professional philanthropist is, typically, someone whose full-time job it is to figure out how to give away their money. But almost always, it's someone who already has lots of money. Historically, there isn't much precedent for people taking high-paying jobs and donating most of their salary... but there's lots of precedent for getting rich first, in whatever field, and then working full-time on donating.

The other difference is that professional philanthropists don't optimize for donating the maximum amount. They see donating as good, but they also see it as a good to be traded off against other goods, like having lots of nice stuff and social respect. Optimizing for more than one thing allows one to have a lot more Fun, as I suspect Bill Gates and Warren Buffett do.

This really does seem to be better than conventional routes of do-gooding. When I was in college, a huge number of people did stuff like fly to Africa to dig wells. This isn't just inefficient - it actually does net harm, since the cost of utilizing unskilled labor usually outweighs the benefits of such labor. Surely we can do better.

 

Advice On Getting A Software Job

22 alyssavance 09 July 2012 06:52PM

(Note to LWers: This post was written for a general audience at my blog, but I think it's particularly applicable to Less Wrong, as many here are already interested in programming. Programming is also an important route into two of the main paths to get rich, entrepreneurship and angel investing. Many of the leading donors to the Singularity Institute are professional programmers.)

You’ve already graduated college and found a job, but aren’t making the money you’d like. Or you live in the middle of nowhere. (Or, your job just sucks.) You’re pretty smart, and want to do something about this. What should you do?

One option is working as a computer programmer. Programming has a lot going for it: people tend to enjoy it, software companies have great perks, the work is usually in a laid-back atmosphere, and of course there’s no manual labor. Programming salaries generally range from high five figures (just out of college) to mid six figures (for senior people, and quants at Wall Street banks). This assumes you live in a major city, so be sure to factor that into cost-of-living math. (If you aren’t in a major city, seriously consider moving – most of the best jobs are there.)

Before you apply, you’ll need to learn how to program. To get started, there are lots of books on introductory programming – just search for “Introduction to C”, “Introduction to Python”, “Introduction to Haskell” and stuff like that. It’s good to know at least one language well, and also have experience with a few others, preferably ones that differ in important ways. Once you’ve learned the basics, there are lots of problems online to practice on. If you’re into math, Project Euler has a big, well-curated collection of them. You’ll also want to know your way around Linux, since it’s the main operating system of web servers; try installing it, and using it as your main OS for a few months.
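As a taste of the kind of practice problem Project Euler offers, here is a minimal Python solution to its first (and most widely known) problem: sum all the multiples of 3 or 5 below 1000. The point isn't the answer, it's the habit of turning a problem statement into working code.

```python
# Project Euler, Problem 1: find the sum of all natural numbers
# below 1000 that are multiples of 3 or 5.
def sum_of_multiples(limit: int) -> int:
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(sum_of_multiples(1000))  # -> 233168
```

Once the brute-force version works, try finding a constant-time solution using the formula for arithmetic series; that contrast between "works" and "works elegantly" is most of what the practice teaches.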

To actually get a programming job, you’ll mainly need to demonstrate a) programming experience, and b) knowledge of computer science. For the first one, the most important thing is to build lots of real software that people use, since this teaches some vital practical skills you won’t learn in books or college classes. Working on open source projects is an excellent way to build experience – check out the list of Debian packages for stuff people are currently doing. You should also get an account on Github, browse some of the projects, and pay attention to the issues; notice how programmers talk about bugs and ideas for fixing them. It’s sort of like being an artist – you want to have a portfolio of past work to show employers.

As you learn to program, you’ll discover that a majority of programming is debugging – discovering errors, and figuring out how to fix them. When you get an error (and you will get errors constantly), Google, Google, Google. Especially when working with open source software, just about any concrete problem you encounter has been dealt with before. If there’s anything you want to learn to do, or an error message you don’t understand, Googling should be a reflex.

For the second part, a lot of interview questions ask about computer science concepts like linked lists, sorting, standard search algorithms, and so on. To learn these, you can usually read a standard freshman comp sci textbook all the way through, and do a few of each chapter’s problems. It might be helpful to write out code on paper, as practice for job interviews where you might not have a computer.
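For instance, a perennial interview question is reversing a singly linked list. A minimal Python sketch, the sort of thing worth being able to write out on paper:

```python
# A bare-bones singly linked list node.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a linked list in place; return the new head."""
    prev = None
    while head is not None:
        nxt = head.next    # save the rest of the list
        head.next = prev   # point the current node backwards
        prev = head        # prev advances to the current node
        head = nxt         # move on to the saved rest
    return prev
```

An interviewer will usually follow up with edge cases (empty list, single node) and complexity (O(n) time, O(1) space), so practice stating those out loud too.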

For most smart people, learning programming won’t be terribly difficult… but it does require sitting down at a desk every day, and putting in the hours. If you aren’t disciplined enough to work without a boss or a formal structure, you’ll have to figure out how to solve that problem first. Try working on problems you care about, ones that inspire you. It can be really helpful to work with a more experienced programmer, especially in person – they’ll review your work, correct your mistakes, and (more importantly) it’ll make you feel like you have to get things done on time.

Practicing programming isn’t like studying for a test – anything that requires flash cards or lots of memorization is likely a waste of time. “The best way to learn to program is by doing it.” Don’t memorize; build. If you have any task you find boring, annoying, or repetitive, see if you can write a program to do it instead. (This may require using other people’s software. Eg. if a website makes you type in the same number ten thousand times, Selenium is your best friend.)
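As a hypothetical illustration of the "don't type it, script it" habit (the filename and the repeated value here are made up): rather than entering the same number ten thousand times, generate the input once with a few lines of Python.

```python
# Generate a file containing the same value on ten thousand lines,
# instead of typing it by hand.
def repeated_lines(value: str, count: int) -> str:
    return "\n".join([value] * count)

with open("bulk_input.txt", "w") as f:
    f.write(repeated_lines("12345", 10_000))
```

Even trivial scripts like this pay for themselves quickly, and they build the reflex of reaching for code whenever a task feels repetitive.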

A college degree isn’t necessary, but it can be very useful, since a lot of larger companies will only hire people with degrees (especially if you lack previous experience). If you don’t have a degree, try applying to smaller companies and startups, which have more flexible hiring procedures. Majoring in computer science and having a high GPA may help, but it won’t get you a job by itself, and a lot of places don’t care much. A lot of job postings will say things like “X years experience in languages A, B and C required” – ignore these, apply anyway. (Famously, one HR department once posted “ten years of experience required” for a language invented eight years ago.)

Some words of caution: Don’t work for anyone who offers you a full-time job without a decent salary (generally, at least $50,000) because you’ll be “paid in equity”. This is almost certainly a scam. If you want to get experience or learn skills, and someone is working on a cool project but doesn’t have money to pay you, do it on a volunteer basis so you won’t be tempted to see it as a job.

Also, don’t quit your current job to be a programmer, unless you either a) have lots of professional programming experience, or b) have some other way to pay rent. Generally, finding programming jobs in a major city is fast and easy (1-2 months), but a lot of people overestimate their skills, and you don’t want to run out of cash while discovering you aren’t as good as you hoped. It isn’t an overnight project; getting basic competence will take months, and true skill takes years.

Lastly, like most fields, don’t be afraid to apply to a zillion different jobs, and network, network, network. If you’ve been working with any other programmers for a while, ask (once, politely) if they know where you can get a job; they’ll likely have some ideas. Also, you might want to check out the Who’s Hiring thread on Hacker News. Go forth, and build the future!

(Disclaimer: This is based on my and my friends’ personal experiences. It isn’t applicable to everyone, and the reader must be responsible for the consequences of any decisions they make.)

This post was co-authored with Mike Blume, who went from having little programming knowledge as a physics grad student to being a software engineer at Loggly.

Further resources: How To Become A Hacker, Get That Job At Google

Negative and Positive Selection

71 alyssavance 06 July 2012 01:34AM

(Originally posted to my blog, The Rationalist Conspiracy; cross-posted here on request of Lukeprog.)

You’re the captain of a team, and you want to select really good players. How do you do it?

One way is through what I call positive selection. You devise a test – say, who can run the fastest – and pick the people who do best. If you want to be really strict, like if you’re selecting for the Olympics, you only pick the top fraction of a percent. If you’re a player, and you want to get selected, you have to train to do better on the test.

The opposite method is negative selection. Instead of one test to pick out winners, you design many tests to pick out losers. You test, say, who can’t run very well when it’s hot out, and get rid of them. Then you test who can’t run very well when it’s cold out, and get rid of them. Then you test running in the rain, and get rid of the losers there. And so on and so forth. When you’re strict with negative selection, you have lots and lots of tests, so that it’s very hard for any one person to pass through all the filters.

I think a big part of where American society’s gone wrong over the last hundred years is the ubiquitous use of negative selection over positive selection. (Athletics is one of the only exceptions. It’s apparently so important that people really care about performance – as opposed to, say, in medicine, where we exclude brilliant doctors if they don’t have the stamina to work ninety hours a week.) A single test can always be flawed; for example, IQ tests and SATs have many flaws. However, with negative selection, how badly you do is determined by the failure rate of every test combined. If you have twenty tests, and even one of them is so flawed it excludes good players, then your team will suck.
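To make the "failure rate of every test combined" point concrete, here's a back-of-the-envelope calculation, assuming twenty independent filters that each wrongly reject a good candidate 5% of the time (the numbers are illustrative, not from any real admissions data):

```python
# If each of 20 independent filters wrongly rejects a good candidate
# 5% of the time, what fraction of good candidates survive them all?
def survival_probability(per_test_pass: float, num_tests: int) -> float:
    return per_test_pass ** num_tests

p = survival_probability(0.95, 20)
print(f"{p:.3f}")  # roughly 0.358
```

Even with each filter 95% reliable, nearly two-thirds of good candidates get screened out somewhere along the chain; the more filters you stack, the worse it gets.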

Elite college admissions is an example of a negative selection test. There’s no one way you can do really, really well, and thereby be admitted to Harvard. Instead, you have to pass a bunch of different selection filters: Are your SATs good enough? Are your grades good enough? Is your essay good enough? Are your extracurriculars good enough? Are your recommendations good enough? Failure on any one step usually means not getting admitted. And as competition has intensified, colleges have added more and more filters, like the supplemental applications top schools now require (in addition to the Common Application). It wasn’t always this way – Harvard used to admit primarily based on an entrance exam – until they discovered this let too many Jews in (no, seriously). More recently, the negative selection has been intensified by eliminating the SAT’s high ceiling.

Academia is another example of negative selection. To get tenure, first you have to get into a top PhD program. Then you have to graduate. Then you have to get a good recommendation from your advisor. Then you have to get a good postdoc. Then you have to get another good postdoc. Then you have to get a good assistant professorship. Then you have to get approved by the tenure committee. For the most part, if even one of those steps goes wrong – if you went to a second-tier PhD program, say – there’s no way to recover. Once you’re off the “track”, you’re off, and there’s no getting back on. It’s fail once, fail forever.

Grades are another example – A is a good grade, but there’s no excellent grade. There’s no grade that you only get if you’re in the top 0.1%. Hence, getting a really good GPA doesn’t mean excelling, so much as it means never failing. If you’re in high school and are taking six classes, if you fail one, your GPA is now 3.3 or less, regardless of how good you are otherwise.

In any field, at the top end, you tend to get a lot of variance. (Insert tales of the mad artist and mad mathematician.) Negative selection suppresses variance, by eliminating many of the dimensions on which people vary. Students at Yale are, for the most part, all strikingly similar – same socioeconomic class, same interests, same pursuits, same life goals, even the same style of dress. A lot of people tend to assume performance follows a bell curve, but in some cases, it’s more like a Pareto distribution: the top people do hundreds or thousands of times better than average. Hence, if you eliminate the small fraction of people at the very top, your performance is hosed. Fortunately for VC funds, the startup world is still positive selection.

Less obviously, a world with lots of negative selection might be a nasty one to live in. If you think of yourself as trying to eliminate bad, rather than encourage good, you start operating on the purity vs. contamination moral axis. Any tiny amount of bad, anywhere, must be gotten rid of, and that can lead to all sorts of nastiness. “When you are a Guardian of the Truth, all you can do is try to stave off the inevitable slide into entropy by zapping anything that departs from the Truth.  If there’s some way to pump against entropy, generate new true beliefs along with a little waste heat, that same pump can keep the truth alive without secret police.”

New York Less Wrong: Expansion Plans

13 alyssavance 01 July 2012 01:20AM
Last week, the New York Less Wrong group hosted the Summer Solstice Megameetup. Everyone had a huge amount of fun, and we met a lot of new people who didn't normally come to meetups. I've recently moved to New York, and we all enjoyed the solstice events so much that I'd like to host more of them. 
So, in addition to the regular Tuesday evening New York meetups, I'm going to start hosting Saturday afternoon meetups - it seems a lot of people are free on the weekends, but can't come to events during the work week. We also plan to host larger parties once a month or so, to just have fun, talk, and blow off steam. Parties are also useful as a Schelling point for those who are interested, but live far away or otherwise can't come every week. 

Meetup : A Game of Nomic

2 alyssavance 29 June 2012 12:49AM

Discussion article for the meetup: A Game of Nomic

WHEN: 21 July 2012 03:00:00PM (-0400)

WHERE: Midtown Manhattan, New York, NY 10010

Hi everyone. I'll be holding a Saturday meetup at my apartment to play Nomic, not this Saturday, but next Saturday (July 7th, nine days from now). For those not familiar with Nomic, it's a game where playing the game is about changing the rules of the game. The last time we played with the NYC rationalist group was super-awesome, and I'm looking forward to doing it again.

Meetup will start at 3 PM, and will be followed by pizza or other forms of dinner (depending on interest).

NOTE: Due to conflict with other events, this has been moved to Saturday, July 21st.


What Would You Like To Read? A Quick Poll

0 alyssavance 21 June 2012 12:38AM

In our discussion of academic papers, Lukeprog argued that lots of smart people preferred to read ideas in academic paper format. Based on my observations, I mostly disagree. But that's just anecdotal evidence. Let's use Science!

Suppose someone at the Singularity Institute thought up a cool new idea: it could be about rationality, Friendly AI, decision theory, making money, or any of the other topics we discuss here on LW. Explaining it takes about ten pages, and it's nontechnical enough that it can be explained to a general audience of non-mathematicians. Which of the following explanations would you be most likely to actually sit down and read through?

  • A post on Less Wrong or another friendly blog
  • A book chapter, available both on Kindle and in physical book form
  • A mailing list post, made available through a public archive
  • An academic paper, downloadable over the Internet as a PDF
  • A static HTML page on the Singularity Institute's website
  • A page on a Singularity Institute or Less Wrong wiki
  • A speech, downloadable as an audio file
  • A PowerPoint or other presentation format

EDIT: To state the obvious, this poll will be biased in favor of blog postings, since it's on a blog. However, I still think it'll provide data that's much better than anecdotal guessing. I've emailed a few rationalist mailing lists to try and counteract this effect.

Why Academic Papers Are A Terrible Discussion Forum

25 alyssavance 20 June 2012 06:15PM

Over the past few months, the Singularity Institute has published many papers on topics related to Friendly AI. It's wonderful that these ideas are getting written up, and it's virtually always better to do something suboptimal than to do nothing. However, I will make the case below that academic papers are a terrible way to discuss Friendly AI, and other ideas in that region of thought space. We need something better.

I won't try to argue that papers aren't worth publishing. There are many reasons to publish papers - prestige in certain communities and promises to grant agencies, for instance - and I haven't looked at them all in detail. However, I think there is a conclusive case that as a discussion forum - a way for ideas to be read by other people, evaluated, spread, criticized, and built on - academic papers fail. Why?

 

1. The time lag is huge; it's measured in months, or even years.

Ideas structured like the Less Wrong Sequences, with large inferential distances between beginning and ending, have huge webs of interdependencies: to read A you have to read B, which means you need to read C, which requires D and E, and on and on and on. Ideas build on each other. Einstein built on Maxwell, who built on Faraday, who built on Newton, who built on Kepler, who built on Galileo and Copernicus.

For this to happen, ideas need to get out there - whether orally or in writing - so others can build on them. The publication cycle for ideas is like the release cycle for software. It determines how quickly you can get feedback, fix mistakes, and then use whatever you've already built to help make the next thing. Most academic papers take months to write up, and then once written up, take more months to publish. Compare that to Less Wrong articles or blog posts, where you can write an essay, get comments within a few hours, and then write up a reply or follow-up the next day.

Of course, some of that extra time lag is that big formal documents are sometimes needed for discussion, and big formal documents take a while. But academic papers aren't just limited by writing and reviewing time - they still fundamentally operate on the schedule of the seventeenth-century Transactions of the Royal Society. When Holden published his critique of the Singularity Institute on Less Wrong, a big formal document, Eliezer could reply with another big formal document in about three weeks.

 

2. Most academic publications are inaccessible outside universities.

This problem is familiar to anyone who's done research outside a university: the ubiquitous journal paywall. People complain about how the New York Times and Wall Street Journal have paywalls, but at least you can pay for those if you really want to. It isn't practical for almost anyone doing research to pay for the articles they need out-of-pocket, since journals commonly charge $30 or more per article, and any serious research project involves dozens or even hundreds of articles. Sure, there are ways to get around the system, and you can try to publish (and get everyone else in your field to publish) in open-access journals, but why introduce a trivial inconvenience?


3. Virtually no one reads most academic publications.

This obviously goes together with point #2, but even within universities, it's rare for papers, dissertations or even books to be read outside a very narrow community. Most people don't regularly read journals outside their field, let alone outside their department. Academic papers are hard to get statistics on, but eg., I was a math major in undergrad, and I can't even understand the titles of most new math papers. More broadly, the print run of most academic books is very small, only a few hundred or so. The average Less Wrong post gets more views than that.

 

4. It's very unusual to make successful philosophical arguments in paper form.

When doing research for Personalized Medicine, I often read papers to discover the results of some experiment. Someone gave drug X to people with disease Y. What were the results? How many were cured? How many had side effects? What were the costs and benefits? All useful information.

However, most recent Singularity Institute papers are neither empirical ("we did experiment X, these are the results") nor mathematical ("if you assume A, B, and C, then D and E follow"). Rather, they are philosophical, like Paul Graham's essays. I honestly can't think of a single instance where I was convinced of an informal, philosophical argument through an academic paper. Books, magazines, blog posts - sure, but papers just don't seem to be a thing.

 

5. Papers don't have prestige outside a narrow subset of society.

Several other arguments here - the time lag, for instance - also apply to books. However, society in general recognizes that writing a book is a noteworthy achievement, especially if it sells well. A successful author, even if not compensated well, is treated a little like a celebrity: media interviews, fan clubs, crazy people writing him letters in green ink, etc. (This is probably related to them not being paid well: in the labor market, payment in social status probably substitutes to a high degree for payment in money, as we see with actors and musicians.)

There's nothing comparable for academic papers. No one ever writes a really successful paper, and then goes on The Daily Show, or gets written up in the New York Times, or gets harassed by crowds of screaming fangirls. (There are a few exceptions, like medicine, but philosophy and computer science are not among them.) Eg., a lot of people are familiar with Ioannidis's paper, Why most published research findings are false. However, he also wrote another paper, a few years earlier, titled Replication validity of genetic association studies. This paper actually has more citations - over 1300 at last count. But not only have we not heard of it, no one else outside the field has either. (Try Googling it, and you'll see what I mean.)

 

6. Getting people to read papers is difficult.

Most intellectual people regularly read books, blogs, newspapers, magazines, and other common forms of memetic transmission. However, it's much less common for people to read papers, so when you hand someone a paper and say "here's a crazy idea, and why you should believe it", they have few existing habits for engaging with it. Papers are, intentionally, written for an audience of specialists rather than a general interest group, which reduces both the tendency and ability of non-specialists to read them when asked (and also violates the "Explainers shoot high - aim low" rule).

 

7. Academia selects for conformity.

The whole point of tenure is to avoid selecting for conformity - if you have tenure, the theory goes, you can work on whatever you want, without fear of being fired or otherwise punished. However, only a small (and shrinking) number of academics have tenure. In order to make sure fools didn't get tenure, it turns out academia resorted to lots and lots of negative selection. The famous letter by chemistry professor Erick Carreira illustrates some of what the selection pressure is like, similar to medicine or investment banking: there's a single, narrow "track", and people who deviate at any point are pruned. Lee Smolin has written about this phenomenon in string theory, in his famous book The Trouble with Physics.

Things may change in the future, but as it stands now, many ideas like the Singularity are non-conformist, well outside the mainstream. They aren't likely to go very far in an environment where deviations from the norm are seen negatively.

 

8. The current community isn't academic in origin.

This isn't an airtight argument, because it's heuristic - "things which worked well before will probably work again". However, heuristic arguments still have a lot of validity. One of the key purposes of a discussion forum, like Less Wrong or the SL4 list that was, is to get new people with bright ideas interested in the topics under discussion. Academia's track record of getting new people interested isn't that great - of the current Singularity Institute directors and staff, only one (Anna Salamon) has an academic background, and she dropped out of her PhD program to work for SIAI. What has been successful, so far, at bringing new people into our community? I haven't analyzed it in depth, but whatever the answer is, the priors are that it will work well again.

 

9. Our ideas aren't academic in origin.

Similarly to #8, this is a "heuristic argument" rather than an airtight proof. But I still think it's important to note that our current ideas about Friendly AI - any given AI will probably destroy the world, mathematical proof is needed to prevent that, human value is complicated and hard to get right, and so on - were not developed through papers, but through in-person and mailing list discussions (primarily). I'm also not aware of any ideas which came into our community through papers. Even science fiction has a better track record - eg. some of our key concepts originated in Vinge's True Names and Other Dangers. What formats have previously worked well for discussing ideas?

 

10. Papers have a tradition of violating the bottom line rule.

In a classic paper, one starts with the conclusion in the abstract, and then builds up an argument for it in the paper itself. Paul Graham has a fascinating essay on this form of writing, and how it came to be - it ultimately derives from the legal tradition, where one takes a position (guilty or innocent), and then defends it. However, this style of writing violates the bottom line rule. Once the bottom line is written down, it is already either right or wrong, no matter what clever arguments you come up with in support of it. This doesn't make papers wrong, of course, but it does tend to create a fitness environment where truth isn't selected for, just as Alabama creates a fitness environment where startups aren't selected for.

 

11. Academic moderation is both very strict and badly run.

All forums need some sort of moderation to avoid degenerating. However, academic moderation is very strict by normal standards - in a lot of journals, only a small fraction of submissions get approved. In addition, academic moderation has a large random element, and is just not very good overall; many quality papers get rejected, and many obvious errors slip through.

As if that wasn't enough, most journals are single-blind rather than double-blind. You don't know who the moderators are, but they know who you are, raising the potential for all kinds of obvious unfairness. The most common kind of bias is one that hurts us unusually badly: people from prestigious universities are given a huge leg up, compared to people outside the system.

(This article has been cross-posted to my blog, The Rationalist Conspiracy.)

 

EDIT #1: As Lukeprog notes in the comments, academic papers are not our main discussion forum for FAI ideas. In practice, the main forum is still in-person conversations. However, in-person conversations have critical limitations too, albeit more obvious ones. Some crucial limits are the small number of people who can participate at any one time; the lack of any external record that can be looked up later; the lack of any way to "broadcast" key findings to a larger audience (you can shout, but that's not terribly effective); and the lack of lots of time to think, since each participant in the conversation can't really wait three hours before replying.

EDIT #2: To give a specific example of an alternative forum for FAI discussion, I think the proposal for an AI Risk wiki would solve most of the problems listed here.

Ideas for rationalist meetup topics

15 alyssavance 12 January 2012 05:04AM

(From Zvi Mowshowitz, leader of the New York group, and based on his experience)


Category 1: Discussions - The Base
A: Less Wrong topics. Usually recent posts. Often big hits.
B: Rationalist group planning and organization. Meta-topics. Dealing with group issues on occasion.
C: Spreading Rationality. Discussion of various approaches.
D: Contraband Topics. Discussion of things that I won't include here, but you can guess.
E: How To Do X, or how to do X well. Charity, meeting people, improving skills, relationships, etc.

Category 2: Presentations - Almost Always a Winner
F: Sequences. Andrew Rettek did these for public meetups, went over well I think.
G: Personal Projects: Variance, Geoff Anders's Psychology.
H: General Knowledge: CBT, Starting a Business, Basic Python.
I: Advanced Rationality: Attempted once, successful despite poor execution. Should be explored more.

Category 3: Game Nights
J: Proven Good Games: Illuminati, Poker, Citadels, 7 Wonders, Nomic.
K: Advanced Games: Much demand for general gaming, but not really that rationalist. Proven winners in this group: Vegas Showdown, Power Struggle, Baltimore & Ohio, Tichu, Through the Ages. I can keep going, many good choices. AVOID: Settlers of Catan (personal opinion) and Fluxx (cause you can do better, honestly).
U: Outside Games. Ultimate Frisbee is good.

Category 4: Other Celebrations
L: Karaoke
M: Baraccuda (bring something awesome you love)
N: Contraband Activities (enough said)
O: General Party In or Out / Pot Luck

Category 5: Special Guests
P: Famous Rationalists! If you get Eliezer or Robin or Vassar, you need no other topic. Probably a few others who get there on their own as well, or any generally famous guest.

Category 6: Self-Improvement
Q: Go out and do something to improve social skills.
R: Discuss or map out goals and plans. Set goals. Can be combined with other tasks.
S: Run experiments! For science!
T: Study Hall. Also folds into discussion of self-improvement topics.

Quantified Health Prize Deadline Extended

3 alyssavance 05 January 2012 09:28AM

(Original Post: Announcing the Quantified Health Prize)

I've recently been hired by Personalized Medicine, a new research company trying to bring Less Wrongian rationality to the medical world. We're giving away a $5000 prize for well-researched, well-reasoned presentations that answer the following question: In what quantities should adults (ages 20-60) take the important dietary minerals, and what are the costs and benefits of various amounts?

Entries are now due by January 15th, 2012. This is an update from the original date of December 31st, 2011. However, we will not change this deadline again, and it will be strictly enforced. If you submit your entry on January 16 at 12:01 AM Pacific time, we will not read it.

Why enter the contest? If you have an excellent entry, even if you don’t win the grand prize, you can still win one of four additional cash prizes, you’ll be under consideration for a job as a researcher with our company Personalized Medicine, and you’ll get a leg up in the larger contest we plan to run after this one. You also get to help people get better nutrition and stay healthier.

More info about the contest, and instructions for submitting entries, can be found at the contest website at http://www.medicineispersonal.com/contest/home. Good luck!

How To Spin What Is Spun

0 alyssavance 04 September 2011 04:47AM

In which is described some of the "spin" tricks that an author might use, so that people might defend against them, or use them for themselves when they may.

As a startup founder, as in many other walks of life, it is sometimes necessary to convince people of things. Capital needs to be raised from reluctant investors; deals need to be done with bureaucrats, who would really rather be playing Minesweeper; prices need to be negotiated, reporters persuaded, workers hired and loans secured. A common technique to achieve this is to write material that is, not false exactly, but slanted in a certain direction; written with the aim of getting the reader to agree with a particular point of view.

There are a thousand-and-one tricks one might use to accomplish this. A successful founder might learn, implicitly, to use them well; but, lacking knowledge of the underlying principles of cognitive science, would probably find them hard to explain to others. I am by no means an expert at either, but it seems like these sorts of tricks really ought to be explained, for they have truly become ubiquitous in today's marketing-driven society. So, here it goes.

