2016 LessWrong Diaspora Survey Results

32 ingres 14 May 2016 05:38PM

Foreword:

As we wrap up the 2016 survey, I'd like to start by thanking everybody who took
the time to fill it out. This year we had 3083 respondents, more than twice the
number we had last year. (Source: http://lesswrong.com/lw/lhg/2014_survey_results/)
This seems consistent with the hypothesis that the LW community hasn't declined
in population so much as migrated into different communities. As this was the
*diaspora* survey, I expected more responses than usual, but twice as many was
far beyond my expectations.

Before we move on to the survey results, I feel obligated to put a few affairs
in order in regards to what should be done next time. The copyright situation
for the survey was ambiguous this year, and to prevent that from happening again
I'm pleased to announce that this year's survey questions will be released jointly
by me and Scott Alexander as Creative Commons licensed content. We haven't
finalized the details of this yet so expect it sometime this month.

I would also be remiss not to mention the large amount of feedback we received
on the survey, some of which led to actionable recommendations I'm going to
preserve here for whoever does it next:

- Put free response form at the very end to suggest improvements/complain.

- Fix the metaethics question in general; people felt lots of options were missing.

- Clean up the definitions of political affiliations in the short politics section.
  In particular, 'Communist' has an overly aggressive/negative definition.

- Possibly completely overhaul short politics section.

- Everywhere that a non-answer is taken as an answer should be changed so that
  a non-answer means what it ought to: no answer or opinion. "Absence of a signal
  should never be used as a signal." - Julian Bigelow, 1947

- Give a definition for the singularity on the question asking when you think it
  will occur.

- Ask if people are *currently* suffering from depression. Possibly add more
  probing questions on depression in general since the rates are so extraordinarily
  high.

- Include a link to what cisgender means on the gender question.

- Specify if the income question is before or after taxes.

- Add charity questions about time donated.

- Add "ineligible to vote" option to the voting question.

- Adding some way for those who are pregnant to indicate it on the number of
  children question would be nice. It might be onerous however so don't feel
  obligated. (Remember that it's more important to have a smooth survey than it
  is to catch every edge case.)

And read this thread: http://lesswrong.com/lw/nfk/lesswrong_2016_survey/,
it's full of suggestions, corrections and criticism.

Without further ado,

Basic Results:

2016 LessWrong Diaspora Survey Questions (PDF Format)

2016 LessWrong Diaspora Survey Results (PDF Format, Missing 23 Responses)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Included)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Excluded)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Included, 13 Responses Filtered, Percentages)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Excluded, 13 Responses Filtered, Percentages)

2016 LessWrong Diaspora Survey Results Complete (HTML Format, Null Entries Excluded)

Our report system is currently on the fritz and isn't calculating numeric questions. If I'd known this earlier I'd have prepared the results for said questions ahead of time. Instead they'll be coming out later today or tomorrow. (EDIT: These results are now in the text format survey results.)

 

Philosophy and Community Issues At LessWrong's Peak (Write Ins)

Peak Philosophy Issues Write Ins (Part One)

Peak Philosophy Issues Write Ins (Part Two)

Peak Community Issues Write Ins (Part One)

Peak Community Issues Write Ins (Part Two)


Philosophy and Community Issues Now (Write Ins)

Philosophy Issues Now Write Ins (Part One)

Philosophy Issues Now Write Ins (Part Two)

Community Issues Now Write Ins (Part One)

Community Issues Now Write Ins (Part Two)

 

Rejoin Conditions

Rejoin Condition Write Ins (Part One)

Rejoin Condition Write Ins (Part Two)

Rejoin Condition Write Ins (Part Three)

Rejoin Condition Write Ins (Part Four)

Rejoin Condition Write Ins (Part Five)

 

CC-Licensed Machine Readable Survey and Public Data

2016 LessWrong Diaspora Survey Structure (License)

2016 LessWrong Diaspora Survey Public Dataset

(Note for people looking to work with the dataset: My survey analysis code repository includes a sqlite converter, examples, and more coming soon. It's a great way to get up and running with the dataset really quickly.)
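For those who haven't worked with SQLite before, the workflow the converter enables looks roughly like this. The table and column names below are illustrative assumptions, not taken from the actual repository; an in-memory database stands in for the converted survey file:

```python
import sqlite3

# Stand-in for the converted survey database (real table/column names
# will differ -- check the repository's examples).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (id INTEGER, age REAL)")
conn.executemany("INSERT INTO responses VALUES (?, ?)",
                 [(1, 27.0), (2, 31.0), (3, None)])

# Typical query: count non-null answers to a numeric question.
(n_answered,) = conn.execute("SELECT COUNT(age) FROM responses").fetchone()
assert n_answered == 2  # COUNT(col) skips NULL entries
```

The same pattern works against the real converted file by swapping `":memory:"` for its path.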

In depth analysis:

Analysis Posts

Part One: Meta and Demographics

Part Two: LessWrong Use, Successorship, Diaspora

Part Three: Mental Health, Basilisk, Blogs and Media

Part Four: Politics, Calibration & Probability, Futurology, Charity & Effective Altruism

Aggregated Data

Effective Altruism and Charitable Giving Analysis

Mental Health Stats By Diaspora Community (Including self dxers)

How Diaspora Communities Compare On Mental Health Stats (I suspect these charts are subtly broken somehow, will investigate later)

Improved Mental Health Charts By Obormot (Using public survey data)

Improved Mental Health Charts By Anonymous (Using full survey data)

Political Opinions By Political Affiliation

Political Opinions By Political Affiliation Charts (By anonymous)

Blogs And Media Demographic Clusters

Blogs And Media Demographic Clusters (HTML Format, Impossible Answers Excluded)

Calibration Question And Brier Score Analysis

More coming soon!

Survey Analysis Code

Some notes:

1. FortForecast on the communities section, Bayesed And Confused on the blogs section, and Synthesis on the stories section were all 'troll' answers designed to catch people who just put down everything. (Somebody noted that the three 'FortForecast' users had the entire DSM split up between them; that's why.)

2. Lots of people asked me for a list of all those cool blogs and stories and communities on the survey, they're included in the survey questions PDF above.

Public TODO:

1. Add more in-depth analysis, and fix the analyses that suddenly broke at the last minute or that I suspect were always broken.

2. Add a compatibility mode that converts the current question codes to the older ones, for third-party analyses that rely on them.

If anybody would like to help with these, write to jd@fortforecast.com

Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity

18 peter_hurford 01 December 2015 01:56AM

This year's EA Survey is now ready to be shared! This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.

If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.

Take the EA Survey

Please share the survey with others who might be interested using this link rather than the one above: http://bit.ly/1OqsVWo

Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance

4 Ander 13 January 2015 08:02PM

LessWrong is where I learned about Bitcoin, several years ago, and my greatest regret is that I did not investigate it sooner, and that people here did not yell at me louder that it was important and worth taking a look at.  In that spirit, I will do so now.

 

First of all, several caveats:

* You should not go blindly buying anything that you do not understand.  If you don't know about Bitcoin, you should start by reading about its history, read Satoshi's whitepaper, etc.  I will assume that the rest of the readers who continue reading this have a decent idea of what Bitcoin is.

* Under absolutely no circumstances should you invest money into Bitcoin that you cannot afford to lose.  "Risk money" only!  That means that if you were to lose 100% of your money, it would not particularly damage your life.  Do not spend money that you will need within the next several years, or ever.  You might in fact want to mentally write off the entire thing as a 100% loss from the start, if that helps.

* Even more strongly, under absolutely no circumstances whatsoever should you borrow money in order to buy Bitcoins, whether by using margin, credit card loans, your student loan, etc.  This is much like taking out a loan, going to a casino, and betting it all on black on the roulette wheel.  You would either get very lucky or potentially ruin your life.  It's not worth it; this is reality, and there are no laws of the universe preventing you from losing.

* This post is not "investment advice".

* I own Bitcoins, which makes me biased.  You should update to reflect that I am going to present a pro-Bitcoin case.

 

So why is this potentially a time to buy Bitcoins?  One could think of markets like a pendulum, where price swings from one extreme to another over time, with a very high price corresponding to over-enthusiasm, and a very low price corresponding to despair.  As Warren Buffett (retelling Benjamin Graham's parable) put it, Mr. Market is like a manic depressive.  One day he walks into your office exuberant and offers to buy your stocks at a high price.  Another day he is depressed and will sell them for a fraction of that.

The root cause of this phenomenon is confirmation bias.  When things are going well, and the fundamentals of a stock or commodity are strong, the price is driven up, and this results in a positive feedback loop.  Investors receive confirmation of their belief that things are going well from the price increase, confirming their bias.  The process repeats and builds upon itself during a bull market, until it reaches a point of euphoria, in which bad news is completely ignored or disbelieved.

The same process happens in reverse during a price decline, or bear market.  Investors take the falling price as feedback that things are bad, and good news is ignored and disbelieved.  Both of these processes run away for a while until they reach enough of an extreme that the "smart money" (the most well-informed and intelligent agents in the system) realizes that the process has gone too far and switches sides.

 

Bitcoin at this point is certainly somewhere in the despair side of the pendulum.  I don't want to imply in any way that it is not possible for it to go lower.  Picking a bottom is probably the most difficult thing to do in markets, especially before it happens, and everyone who has claimed that Bitcoin was at a bottom for the past year has been repeatedly proven wrong.  (In fact, I feel a tremendous amount of fear in sticking my neck out to create this post, well aware that I could look like a complete idiot weeks or months or years from now and utterly destroy my reputation, yet I will continue anyway).

 

First of all, let's look at the fundamentals of Bitcoin.  On one hand, things are going well.

 

Use of Bitcoin (network effect):

One measurement of Bitcoin's value is the strength of its network effect.  By Metcalfe's law, the value of a network is proportional to the square of the number of nodes in the network.

http://en.wikipedia.org/wiki/Metcalfe%27s_law

Over the long term, Bitcoin's price has generally followed this law (though with wild swings to both the upside and downside as the pendulum swings). 

In terms of network effect, Bitcoin is doing well.
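As a toy illustration of what Metcalfe's law claims (the numbers are invented, not Bitcoin data):

```python
def metcalfe_value(n_users: int, k: float = 1.0) -> float:
    """Metcalfe's law: network value proportional to the square of
    the number of participants (k is an arbitrary scale constant)."""
    return k * n_users ** 2

# Doubling the user base quadruples the predicted value:
assert metcalfe_value(2_000_000) / metcalfe_value(1_000_000) == 4.0
```

This super-linear scaling is why network-effect arguments treat adoption numbers, rather than price, as the fundamental to watch.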

 

Bitcoin transactions are hitting all time highs:  (28 day average of number of transactions).

https://blockchain.info/charts/n-transactions-excluding-popular?timespan=2year&showDataPoints=false&daysAverageString=28&show_header=true&scale=0&address=

 

Number of Bitcoin addresses are hitting all time highs:

https://blockchain.info/charts/n-unique-addresses?timespan=2year&showDataPoints=false&daysAverageString=28&show_header=true&scale=0&address=

 

Merchant adoption continues to hit new highs:

BitPay/Coinbase continue to report 10% monthly growth in the number of merchants that accept Bitcoin.

Prominent companies that began accepting Bitcoin in the past year include: Dell, Overstock, Paypal, Microsoft, etc.

 

On the other hand, due to the sustained price decline, many Bitcoin businesses that started up in the past two years with venture capital funding have shut down.  This is more an effect of the price decline than a cause, however.  The past month especially has seen a number of bearish news stories: BitPay laying off employees, the exchanges Vault of Satoshi and CEX.io deciding to shut down, and the exchange Bitstamp being hacked and taken offline for three days (though it is back up without having lost customer funds).

 

The cost to mine a Bitcoin is commonly seen as one indicator of price.   Note that the cost to mine a Bitcoin does not directly determine the *value* or usefulness of a Bitcoin.   I do not believe in the labor theory of value: http://en.wikipedia.org/wiki/Labor_theory_of_value

However, there is a stabilizing effect in commodities, in which over time, the price of an item will often converge towards the cost to produce it due to market forces. 

 

If a Bitcoin is being priced at a value much greater than the cost (in mining equipment and electricity) to create it, people will invest in mining equipment.  This results in increased 'difficulty' of mining and drives down the amount of Bitcoin that you can create with a particular piece of mining equipment.  (The number of Bitcoins created per unit of time is fixed, so the more mining equipment that exists, the less Bitcoin each miner will get.)

If Bitcoin is being priced at a value below the cost to create it, people will stop investing in mining equipment.  This may be a signal that the price is getting too low, and could rise.

 

Historically, the one period of time where Bitcoin was priced significantly below the cost to produce it was in late 2011.  It was noted on LessWrong.  The price has not currently fallen to quite the same extent as it did back then (which may indicate that it has further to fall), however the current price relative to the mining cost indicates we are very much in the bearish side of the pendulum.

 

It is difficult to calculate an exact cost to mine a Bitcoin, because this depends on the exact hardware used, your cost of electricity, and a prediction of the future difficulty adjustments that will occur.  However, we can make estimates with websites such as http://www.vnbitcoin.org/bitcoincalculator.php

According to this site, no currently available Bitcoin miner will ever give you back as much money as it cost, factoring in the hardware cost and electricity cost.  Upcoming, more efficient miners which have not yet been released are estimated to pay off in about a year, and only if difficulty grows extremely slowly.

 

There are two important breakpoints when discussing Bitcoin mining profitability.  The first is the point at which your return is enough that it pays for both the electricity and the hardware.  The second is the point at which you make more than your electricity costs, but cannot recover the hardware cost.

 

For example, let's say Alice pays $1000 for Bitcoin mining equipment.  Every day, this mining equipment can return $10 worth of Bitcoin, but it costs $5 of electricity to run.  Her gain for the day is $5, and it would take 200 days at this rate before the mining equipment paid for itself.  Once she has made the decision to purchase the mining equipment, the money spent on the miner is a sunk cost.  The money spent on electricity is not a sunk cost; she continues to have the decision every day of whether or not to run her mining equipment.  The optimal decision is to continue to run the miner as long as it returns more than the electricity cost.

Over time, the payout she will receive from this hardware will decline, as the difficulty of mining Bitcoin increases.  Eventually, her payout will decline below the electricity cost, and she should shut the miner down.  At this point, if her total gain from running the equipment was higher than the hardware cost, it was a good investment.  If it did not recoup its cost, then it was worse than simply spending the money buying Bitcoin on an exchange in the first place.
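Alice's situation can be sketched as a small simulation. The 1%-per-day payout decay below is an invented assumption standing in for difficulty growth, not a real figure:

```python
def mining_outcome(hardware_cost, daily_revenue, daily_electricity, decay):
    """Run the rig while its daily payout exceeds the electricity cost
    (the hardware is a sunk cost), then report whether the rig ever
    paid for itself.  Payout decays each day as difficulty rises."""
    total_profit = 0.0
    payout = daily_revenue
    while payout > daily_electricity:
        total_profit += payout - daily_electricity
        payout *= 1 - decay  # rising difficulty shrinks the payout
    return total_profit, total_profit >= hardware_cost

# Alice's numbers with an assumed 1%/day difficulty-driven decay:
profit, paid_off = mining_outcome(1000.0, 10.0, 5.0, 0.01)
# The rig earns some profit over electricity but never recoups the
# $1000 of hardware -- buying coins outright would have been better.
assert profit > 0 and not paid_off
```

Under this assumed decay rate the miner is shut down after roughly ten weeks, well short of the 200 profitable days Alice would need at the initial rate.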

 

This process creates a feedback into the market price of Bitcoins.  Imagine that Bitcoin investors have two choices: either they can buy Bitcoins (the commodity which has already been produced by others), or they can buy miners and produce Bitcoins for themselves.  If the Bitcoin price falls sufficiently that mining equipment will not recover its costs over time, investment money that would have gone into miners instead goes into Bitcoin, helping to support the price.  As you can see from mining cost calculators, we have passed this point already.  (In fact, we passed it months ago.)

 

The second breakpoint is when the Bitcoin price falls so low that it falls below the electricity cost of running mining equipment.  We have passed this point for many of the less efficient ways to mine.  For example, Cointerra recently shut down its cloud mining pool because it was losing money.  We have not yet passed this point for more recent and efficient miners, but we are getting fairly close to it. Crossing this point has occurred once in Bitcoin's history, in late 2011 when the price bottomed out near $2, before giving birth to the massive bull run of 2012-2013 in which the price rose by a factor of 500.

 

Market Sentiment: 

I was not active in Bitcoin back in 2011, so I cannot compare the present time to the sentiment at the November 2011 bottom.  However, sentiment currently is the worst that I have seen by a significant margin. Again, this does not mean that things could not get much, much worse before they get better!  After all, sentiment has been growing worse for months now as the price declines, and everyone who predicted that it was as bad as it could get and the price could not possibly go below $X has been wrong.  We are in a feedback loop which is strongly pumping bearishness into all market participants, and that feedback loop can continue and has continued for quite a while.

 

A look at market indicators tells us that Bitcoin is very, very oversold, almost historically oversold.  Again, this does not mean that it could not get worse before it gets better. 

 

As I write this, the price of Bitcoin is $230.  For perspective, this is down over 80% from the all time high of $1163 in November 2013.  It is still higher than the roughly $100 level it spent most of mid 2013 at.

* The average price of each Bitcoin at the time it last moved on the blockchain (a rough proxy for the market's aggregate cost basis) is $314.

https://www.reddit.com/r/BitcoinMarkets/comments/2ez90b/and_the_average_bitcoin_cost_basis_is/

The current price is a multiple of .73 of this price.  This is very low historically, but not the lowest it has ever been.  The lowest was about .39 in late 2011.
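The arithmetic behind the drawdown and cost-basis figures quoted above (the $314 average-cost number comes from the linked reddit post):

```python
all_time_high = 1163.0   # November 2013 peak, from the text above
price_now = 230.0        # price at time of writing
avg_cost_basis = 314.0   # average price at last on-chain move

drawdown = 1 - price_now / all_time_high
multiple = price_now / avg_cost_basis

assert drawdown > 0.80             # "down over 80% from the all time high"
assert round(multiple, 2) == 0.73  # the .73 multiple cited above
```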

 

* Short interest (the number of Bitcoins that were borrowed and sold, and must be rebought later) hit all time highs this week, according to data on the exchange Bitfinex, at more than 25000 Bitcoins sold short:

http://www.bfxdata.com/swaphistory/totals.php

 

* Weekly RSI (relative strength index), an indicator which tells if a stock or commodity is 'overbought' or 'oversold' relative to its history, just hit its lowest value ever.
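For reference, a minimal sketch of how RSI is conventionally computed (Wilder's smoothing over 14 periods, the standard construction; this is generic code, not the specific weekly Bitcoin data the claim refers to):

```python
def rsi(prices, period=14):
    """Relative Strength Index via Wilder's smoothing.  Values near 0
    read as deeply 'oversold', values near 100 as 'overbought'."""
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    gains = [max(d, 0.0) for d in deltas]
    losses = [max(-d, 0.0) for d in deltas]
    # Seed the averages with a simple mean over the first period...
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    # ...then smooth each subsequent delta into the running averages.
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# A relentless decline pins RSI at its floor:
assert rsi(list(range(30, 0, -1))) == 0.0
```

An all-time-low weekly RSI therefore means the recent weekly declines dominate the gains to a degree never seen before in the price history.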

 

Many indicators are telling us that Bitcoin is at or near historical levels in terms of the depth of this bear market.  In percentage terms, the price decline is surpassed only by the November 2011 low.  In terms of length, the current decline is more than twice as long as the previous longest bear market.

 

To summarize: At the present time, the market is pricing in a significant probability that Bitcoin is dying.

But there are some indicators (such as # of transactions) which say it is not dying.  Maybe it continues down into oblivion, and the remaining fundamentals which looked bullish turn downwards and never recover.  Remember that this is reality, and anything can happen, and nothing will save you.

 

 

Given all of this, we now have a choice.  People have often compared Bitcoin to making a bet in which you have a 50% chance of losing everything, and a 50% chance of making multiples (far more than 2x) of what you started with. 

There are times when the payout on that bet is much lower, when everyone is euphoric and has been convinced by the positive feedback loop that they will win.  And there are times when the payout on that bet is much higher, when everyone else is extremely fearful and is convinced it will not pay off. 

 

This is a time to be good rationalists, and investigate a possible opportunity, comparing the present situation to historical examples, and making an informed decision.   Either Bitcoin has begun the process of dying, and this decline will continue in stages until it hits zero (or some incredibly low value that is essentially the same for our purposes), or it will live.  Based on the new all time high being hit in number of transactions, and ways to spend Bitcoin, I think there is at least a reasonable chance it will live.  Enough of a chance that it is worth taking some money that you can 100% afford to lose, and making a bet.  A rational gamble that there is a decent probability that it will survive, at a time when a large number of others are betting that it will fail.

 

And then once you do that, try your hardest to mentally write it off as a complete loss, like you had blown the money on a vacation or a consumer good, and now it is gone, and then wait a long time.

 

 

Bayes Academy: Development report 1

47 Kaj_Sotala 19 November 2014 10:35PM

Some of you may remember me proposing a game idea that went by the name of The Fundamental Question. Some of you may also remember me talking a lot about developing an educational game about Bayesian Networks for my MSc thesis, but not actually showing you much in the way of results.

Insert the usual excuses here. But thanks to SSRIs and mytomatoes.com and all kinds of other stuff, I'm now finally on track towards actually accomplishing something. Here's a report on a very early prototype.

This game has basically two goals: to teach its players something about Bayesian networks and probabilistic reasoning, and to be fun. (And third, to let me graduate by giving me material for my Master's thesis.)

We start with the main character stating that she is nervous. Hitting any key, the player proceeds through a number of lines of internal monologue:

I am nervous.

I’m standing at the gates of the Academy, the school where my brother Opin was studying when he disappeared. When we asked the school to investigate, they were oddly reluctant, and told us to drop the issue.

The police were more helpful at first, until they got in contact with the school. Then they actually started threatening us, and told us that we would get thrown in prison if we didn’t forget about Opin.

That was three years ago. Ever since it happened, I’ve been studying hard to make sure that I could join the Academy once I was old enough, to find out what exactly happened to Opin. The answer lies somewhere inside the Academy gates, I’m sure of it.

Now I’m finally 16, and facing the Academy entrance exams. I have to do everything I can to pass them, and I have to keep my relation to Opin a secret, too. 

???: “Hey there.”

Eep! Someone is talking to me! Is he another applicant, or a staff member? Wait, let me think… I’m guessing that applicant would look a lot younger than staff members! So, to find that out… I should look at him!

[You are trying to figure out whether the voice you heard is a staff member or another applicant. While you can't directly observe his staff-nature, you believe that he'll look young if he's an applicant, and like an adult if he's a staff member. You can look at him, and therefore reveal his staff-nature, by right-clicking on the node representing his appearance.]

Here is our very first Bayesian Network! Well, it's not really much of a network: I'm starting with the simplest possible case in order to provide an easy start for the player. We have one node that cannot be observed ("Student", its hidden nature represented by showing it in greyscale), and an observable node ("Young-looking") whose truth value is equal to that of the Student node. All nodes are binary random variables, either true or false. 

According to our current model of the world, "Student" has a 50% chance of being true, so it's half-colored in white (representing the probability of it being true) and half-colored in black (representing the probability of it being false). "Young-looking" inherits its probability directly. The player can get a bit of information about the two nodes by left-clicking on them.

The game also offers alternate color schemes for colorblind people who may have difficulties distinguishing red and green.

Now we want to examine the person who spoke to us. Let's look at him, by right-clicking on the "Young-looking" node.

Not too many options here, because we're just getting started. Let's click on "Look at him", and find out that he is indeed young, and thus a student.

This was the simplest type of minigame offered within the game. You are given a set of hidden nodes whose values you're tasked with discovering by choosing which observable nodes to observe. Here the player had no way to fail, but later on, the minigames will involve a time limit and too many observable nodes to inspect within that time limit. It then becomes crucial to understand how probability flows within a Bayesian network, and which nodes will actually let you know the values of the hidden nodes.
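The probability flow in this first network is just Bayes' rule on two binary nodes. A minimal sketch, assuming the deterministic link described above (node names follow the post; the code itself is illustrative, not the game's actual implementation):

```python
def posterior_student(p_student, p_young_if_student, p_young_if_staff,
                      observed_young):
    """P(Student | observed value of Young-looking), by Bayes' rule."""
    if observed_young:
        num = p_young_if_student * p_student
        den = num + p_young_if_staff * (1 - p_student)
    else:
        num = (1 - p_young_if_student) * p_student
        den = num + (1 - p_young_if_staff) * (1 - p_student)
    return num / den

# With the deterministic link from the game (young if and only if
# student), observing that he looks young settles the hidden node:
assert posterior_student(0.5, 1.0, 0.0, True) == 1.0
assert posterior_student(0.5, 1.0, 0.0, False) == 0.0
```

With noisier links (say, staff members sometimes looking young too) the same function returns an intermediate posterior, which is exactly the situation the later minigames exploit.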

The story continues!

Short for an adult, face has boyish look, teenagerish clothes... yeah, he looks young!

He's a student!

...I feel like I’m overthinking things now.

...he’s looking at me.

I’m guessing he’s either waiting for me to respond, or there’s something to see behind me, and he’s actually looking past me. If there isn’t anything behind me, then I know that he must be waiting for me to respond.

Maybe there's a monster behind me, and he's paralyzed with fear! I should check that possibility before it eats me!

[You want to find out whether the boy is waiting for your reply or staring at a monster behind you. You know that he's looking at you, and your model of the world suggests that he will only look in your direction if he's waiting for you to reply, or if there's a monster behind you. So if there's no monster behind you, you know that he's waiting for you to reply!]

Slightly more complicated network, but still, there's only one option here. Oops, apparently the "Looks at you" node says it's an observable variable that you can right-click to observe, despite the fact that it's already been observed. I need to fix that.

Anyway, right-clicking on "Attacking monster" brings up a "Look behind you" option, which we'll choose.

You see nothing there. Besides trees, that is.

Boy: “Um, are you okay?”

“Yeah, sorry. I just… you were looking in my direction, and I wasn’t sure of whether you were expecting me to reply, or whether there was a monster behind me.”

He blinks.

Boy: “You thought that there was a reasonable chance for a monster to be behind you?”

I’m embarrassed to admit it, but I’m not really sure of what the probability of a monster having snuck up behind me really should have been.

My studies have entirely focused on getting into this school, and Monsterology isn’t one of the subjects on the entrance exam!

I just went with a 50-50 chance since I didn’t know any better.

Boy: “Okay, look. Monsterology is my favorite subject. Monsters avoid the Academy, since it’s surrounded by a mystical protective field. There’s no chance of them getting even near! 0 percent chance.”

“Oh. Okay.”

[Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 0%.]

Then stuff happens and they go stand in line for the entrance exam or something. I haven't written this part. Anyway, then things get more exciting, for a wild monster appears!

Stuff happens

AAAAAAH! A MONSTER BEHIND ME!

Huh, the monster is carrying a sword.

Well, I may not have studied Monsterology, but I sure did study fencing!

[You draw your sword. Seeing this, the monster rushes at you.]

He looks like he's going to strike. But is it really a strike, or is it a feint?

If it's a strike, I want to block and counter-attack. But if it's a feint, that leaves him vulnerable to my attack.

I have to choose wisely. If I make the wrong choice, I may be dead.

What did my master say? If the opponent has at least two of dancing legs, an accelerating midbody, and ferocious eyes, then it's an attack!

Otherwise it's a feint! Quick, I need to read his body language before it's too late!

Now get to the second type of minigame! Here, you again need to discover the values of some number of hidden variables within a time limit, but here it is in order to find out the consequences of your decision. In this one, the consequence is simple - either you live or you die. I'll let the screenshot and tutorial text speak for themselves:

[Now for some actual decision-making! The node in the middle represents the monster's intention to attack (or to feint, if it's false). Again, you cannot directly observe his intention, but on the top row, there are things about his body language that signal his intention. If at least two of them are true, then he intends to attack.]

[Your possible actions are on the bottom row. If he intends to attack, then you want to block, and if he intends to feint, you want to attack. You need to inspect his body language and then choose an action based on his intentions. But hurry up! Your third decision must be an action, or he'll slice you in two!]

In reality, the top three variables are not really independent of each other. We want to make sure that the player can always win this battle despite only having three actions. That's two actions for inspecting variables, and one action for actually making a decision. So this battle is rigged: either the top three variables are all true, or they're all false.

...actually, now that I think of it, the order of the variables is wrong. Logically, the body language should be caused by the intention to attack, and not vice versa, so the arrows should point from the intention to body language. I'll need to change that. I got these mixed up because the prototypical exemplar of a decision minigame is one where you need to predict someone's reaction from their personality traits, and there the personality traits do cause the reaction. Anyway, I want to get this post written before I go to bed, so I won't change that now.

Right-clicking "Dancing legs", we now see two options besides "Never mind"!

We can find out the dancingness of the enemy's legs by thinking about our own legs - we are well-trained, so our legs are instinctively mirroring our opponent's actions to prevent them from getting an advantage over us - or by just instinctively feeling where they are, without the need to think about them! Feeling them would allow us to observe this node without spending an action.

Unfortunately, feeling them has "Fencing 2" as a prerequisite skill, and we don't have that. Nor could we have it at this point in the game. The option is just there to let the player know that there are skills to be gained in this game, and to make them look forward to the moment when they can actually gain that skill. As well as giving them an idea of how the skill can be used.

Anyway, we take a moment to think of our legs, and even though our opponent gets closer to us in that time, we realize that our legs are dancing! So his legs must be dancing as well!

With our insider knowledge, we now know that he's attacking, and we could pick "Block" right away. But let's play this through. The network has automatically recalculated the probabilities to reflect our increased knowledge, and is now predicting a 75% chance for our enemy to be attacking, and for "Blocking" to thus be the right decision to make.
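The recalculation the network performs here is ordinary Bayesian conditioning on the intention node. The conditional probabilities below are hypothetical, picked only so that they reproduce the game's 75% figure:

```python
def posterior_attack(p_attack, p_legs_given_attack, p_legs_given_feint):
    """P(attack | legs are dancing), by Bayes' rule."""
    joint_attack = p_attack * p_legs_given_attack
    joint_feint = (1 - p_attack) * p_legs_given_feint
    return joint_attack / (joint_attack + joint_feint)

# Even prior, and dancing legs are three times as likely given an attack.
print(posterior_attack(0.5, 0.75, 0.25))  # 0.75
```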

Next we decide to find out what his eyes say, by matching our gaze with his. Again, there would be a special option that would cost us no time - this time around, one enabled by Empathy 1 - but we again don't have that option.

Except that his gaze is so ferocious that we are forced to look away! While we are momentarily distracted, he closes the distance, ready to make his move. But now we know what to do... block!

Success!

Now the only thing that remains to do is to ask our new-found friend for an explanation.

"You told me there was a 0% chance of a monster near the academy!"

Boy: “Ehh… yeah. I guess I misremembered. I only read like half of our course book anyway, it was really boring.”

“Didn’t you say that Monsterology was your favorite subject?”

Boy: “Hey, that only means that all the other subjects were even more boring!”

“. . .”

I guess I shouldn’t put too much faith on what he says.

[Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 50%.]

[Your model of the world has been updated! You have a new conditional probability variable: 'True Given That The Boy Says It's True', 25%]
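As a toy sketch, the player's world model after this scene might be stored like so (the data layout and function names are my invention; only the two numbers come from the game's log messages):

```python
# The character's model of the world: priors over variables, plus
# learned conditional probabilities such as the boy's reliability.
world_model = {
    "priors": {"Monster Near The Academy": 0.0},
    "conditionals": {},
}

def update_prior(model, variable, p):
    model["priors"][variable] = p

def learn_conditional(model, variable, p):
    model["conditionals"][variable] = p

update_prior(world_model, "Monster Near The Academy", 0.50)
learn_conditional(world_model, "True Given That The Boy Says It's True", 0.25)
```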

And that's all for now. Now that the basic building blocks are in place, future progress ought to be much faster.

Notes:

As you might have noticed, my "graphics" suck. A few of my friends have promised to draw art, but besides that, the whole generic Java look could go. This is where I was originally planning to put in the sentence "and if you're a Java graphics whiz and want to help fix that, the current source code is conveniently available at GitHub", but then getting things to this point took longer than I expected and I didn't have the time to actually figure out how the whole Eclipse-GitHub integration works. I'll get to that soon. Github link here!

I also want to make the nodes more informative - right now they only show their marginal probability. Ideally, clicking on them would expand them to a representation where you could visually see what components their probability is composed of. I've got some scribbled sketches of what this should look like for various node types, but none of that is implemented yet.

I expect some of you to also note that the actual Bayes theorem hasn't shown up yet, at least in no form resembling the classic mammography problem. (It is used implicitly in the network belief updates, though.) That's intentional - there will be a third minigame involving that form of the theorem, but somehow it felt more natural to start this way, to give the player a rough feeling of how probability flows through Bayesian networks. Admittedly I'm not sure of how well that's happening so far, but hopefully more minigames should help the player figure it out better.
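For comparison, the classic mammography form of the theorem - which the planned third minigame would presumably cover - takes only a couple of lines, using the standard illustrative numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate):

```python
def bayes(prior, p_positive_given_true, p_positive_given_false):
    """P(hypothesis | positive test), by Bayes' theorem."""
    true_positive = prior * p_positive_given_true
    false_positive = (1 - prior) * p_positive_given_false
    return true_positive / (true_positive + false_positive)

# 1% of patients have the condition; the test catches 80% of real cases
# but also flags 9.6% of healthy patients.
print(round(bayes(0.01, 0.80, 0.096), 3))  # 0.078
```

Even after a positive result the hypothesis is still unlikely - exactly the sort of counterintuitive probability flow the networks above are meant to make visible.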

What's next? Once the main character (who needs a name) manages to get into the Academy, there will be a lot of social scheming, and many mysteries to solve in order for her to find out just what did happen to her brother... also, I don't mind people suggesting things, such as what could happen next, and what kinds of network configurations the character might face in different minigames.

(Also, everything that you've seen might get thrown out and rewritten if I decide it's no good. Let me know what you think of the stuff so far!)

2014 Survey of Effective Altruists

27 tog 05 May 2014 02:32AM

I'm pleased to announce the first annual survey of effective altruists. This is a short survey of around 40 questions (generally multiple choice) that several collaborators and I have put a great deal of work into, and we would be very grateful if you took it. I'll offer $250 of my own money to one participant.

Take the survey at http://survey.effectivealtruismhub.com/

The survey should yield some interesting results such as EAs' political and religious views, what actions they take, and the causes they favour and donate to. It will also enable useful applications which will be launched immediately afterwards, such as a map of EAs with contact details and a cause-neutral register of planned donations or pledges which can be verified each year. I'll also provide an open platform for followup surveys and other actions people can take. If you'd like to suggest questions, email me or comment.

Anonymised results will be shared publicly and not belong to any individual or organisation. The most robust privacy practices will be followed, with clear opt-ins and opt-outs.

I'd like to thank Jacy Anthis, Ben Landau-Taylor, David Moss and Peter Hurford for their help.

Other surveys' results, and predictions for this one

Other surveys have had intriguing results. For example, Joey Savoie and Xio Kikauka interviewed 42 often highly active EAs over Skype, and found that they generally had left-leaning parents, donated on average 10%, and were altruistic before becoming EAs. The time they spent on EA activities was correlated with the percentage they donated (0.4), the time their parents spent volunteering (0.3), and the percentage of their friends who were EAs (0.3).

80,000 Hours also released a questionnaire and, while this was mainly focused on their impact, it yielded a list of which careers people plan to pursue: 16% for academia,  9% for both finance and software engineering, and 8% for both medicine and non-profits.  

I'd be curious to hear people's predictions as to what the results of this survey will be. You might enjoy reading or sharing them here. For my part, I'd imagine we have few conservatives or even libertarians, are over 70% male, and have directed most of our donations to poverty charities.

Video: What is Harry Potter and the Methods of Rationality

5 Eneasz 30 September 2013 03:28AM

So a friend of mine took over running MALcon in Denver this year. She asked me to do a presentation on Harry Potter and the Methods of Rationality. I said ok and put together the following little talk. It's about 25 minutes. I tried to cover what rationality is, why it makes fiction cool, and what HPMoR is, for the non-initiated. It was my first time doing public speaking, and I was nervous and, ok, borderline terrified. I hope I didn't screw anything up too badly. I recorded the presentation and I'm putting it up for critique. There are several chunks that, in retrospect, I think should have been placed differently in the talk; they didn't flow well. I need more eye-contact, fewer notes, and overall just a LOT more practice doing public speaking. Any suggestions are welcome.

Video on YouTube

 

The text I was reading from is below, although I deviated from it a bit, of course.
(bolding was to draw my eye, not for emphasis)

 

Hello. You’re all here for the Harry Potter and the Methods of Rationality thing? OK. This is my first time doing public speaking so please make allowances for my noob mistakes. Also, for the same reason, please hold your questions until the end. You may wish to write down any that come to you so you don’t forget.

To start - I’m going to assume everyone here knows what fanfic is. Harry Potter and the Methods of Rationality is fanfic written by decision theorist Eliezer Yudkowsky and it’s one of the most popular Harry Potter fanfics online. It’s the most reviewed and followed HP work on fanfiction.net and it’s received praise from award-winning authors. I don’t really have much to do with it, I’m mainly just a fan. I am a big enough fan that I record and produce the audio-book version of it, though, so I was asked to do this presentation. Plus I live a few blocks away. So that’s why I’m here.


You’re here because you all want to know what the big deal is.

Part of the big deal is that it’s a really good story, but there’s lots of good fanfic out there, and it generally doesn’t get its own panel. The thing about Harry Potter and the Methods of Rationality – which I’ll just be calling Methods for short – is that it captures the heart of the rationality movement. Maybe you’ve heard of this “rationality” thing, maybe not, but it’s a growing movement among certain types of geeks. And when a geek subculture latches on to some fictional work and says “OMG, this is US!” it’s usually pretty damn good in its own right. My Little Pony wouldn’t have the fandom it does if the show itself wasn’t great. So the Rationality part of Harry Potter and the Methods of Rationality is pretty important to the whole.

Therefore before we get into the fic itself I’ll briefly touch on – what is Rationality, and why does it make a cool story?


Rationality is the study of general methods for good decision-making, especially when the decision is hard to get right. Of knowing what errors in thinking are common so we can avoid them. Of realizing when we are confused, or when we’re motivated by bad instincts. If you want to make good decisions you must not fool yourself, and you are the easiest person to fool.


This makes a story interesting because it’s exciting to watch someone placed in high-stress situations - where making a good decision is the difference between life and death, and a good decision is hard to find - and see them frantically navigate through that mess.


Now, to make good decisions we also need true beliefs about the world around us. Rationalists assume that we can know true things about the real world. (I know that seems like a really obvious assumption, but you’d be surprised.) However, our beliefs about reality are imperfect. The model we have of reality in here doesn’t exactly match up with what’s really out there. The map of the world we have in our brain isn’t entirely accurate. In some places it’s completely wrong. What we need is a way to verify what we think we know and discover true things about the real world… a way of separating fact from delusion. If only someone would come up with a way to do that...


Waitaminnit, you’re saying to me - they did, it’s called the scientific method! I’m pretty sure I don’t have to tell you all how awesome science is - We all love science yeah? GO SCIENCE! So naturally Rationality incorporates the scientific method. The truth about how reality works has to be part of any effective decision-making process. And anyone who’s read good science fiction knows how great a story that struggle to find the truth can be. The search to find out what’s going on, and why. The discovery of an underlying principle of how the universe works, and the power that comes from harnessing that knowledge.


Harry Potter may be set in a typical fantasy setting, but The Methods of Rationality is a Science Fiction story.


Now, sometimes discovering a truth is not enough. Sometimes it doesn’t match with what you knew before, with assumptions and habits that guided your actions. We don’t think through every little thing we do in our day-to-day lives, we rely mainly on our reflexive biases and habits. You couldn’t cross a room if you had to think through and plan each step. So rationality isn’t just about finding out what is true about the world - if that’s all you wanted, you have the Scientific Method. Rationality is also about updating your implicit beliefs to more accurately match what you’ve discovered is true.


And it turns out that’s not so easy. It’s especially hard when what we discover conflicts with our hard-wired instincts. Our bodies and our instincts have evolved to grab all the calories and resources we can find, pass on our genes, and die. And so while we may consciously know that that bag of potato chips is bad for us, we still eat it, cuz it tastes good. We may consciously understand that the roller coaster is completely safe, engineered so that you’d have to really work hard at getting hurt, we are still terrified when we go over that initial drop. If you really want to effect a change in your behavior it isn’t enough to simply “know” something. You have to feel it. And to do that you usually have to play dirty. There’s an old Keanu Reeves movie where he plays a hacker, and at the climax he’s told he needs info that’s hidden in his own brain. He has to HACK HIS OWN BRAIN. DUN DUN DUN! We have to do the same thing, a lot. Rationality gives you the tools to hack yourself.


And that’s another great aspect of Rationality stories. Many great stories are about man vs man or man vs nature, and some of the best stories are about man vs himself, man vs his own flaws. Thing is, most people don’t have the weapons to wrestle with themselves effectively - most authors don’t even know those weapons exist. So the traditional “wrestle with oneself” is a grunting bare-knuckled back-alley fight. Which is fun, like that great brawl in They Live. But a story that incorporates rationality upgrades this to a duel between cyber-ninjas with laser swords. It’s freakin’ cool and you don’t get to see that in most books, so it’s a hell of a show!


So, that’s rationality, and that’s how it makes stories awesome and unusual. But - why Harry Potter? After all, it was probably that name recognition that brought you here, and not the term “Methods of Rationality”.


To start with, the Potterverse is very fertile soil for fanfiction. There’s a reason some settings have only a trickle of fanfic, while others explode with it. Some settings really lend themselves to further exploration by fans. These settings provide a rich history in a living world that goes beyond just the characters in the story. There are allusions to events that have happened before or are happening outside the scope of the book - the rise of Grindelwald, the first wizarding war against Voldemort, the whole first generation backstory. And since in Harry Potter the action takes place in the modern day in and around our muggle world, there are a lot of practical implications that can provide speculation and plot-hooks for days on end. The more rich and complex the setting is, the greater the potential it has for fanfiction to explore and grow from it, and Harry Potter has a very rich world.


Of course there are many worlds ripe for fanfiction - why Harry Potter specifically? MLP, Twilight, and Star Trek all have thriving fanfic scenes. Probably the biggest reason can be summed up in the title of the second chapter - Everything I Believe Is False.


I mentioned at the start that Yudkowsky is a decision theorist. A lot of sci-fi writers have a background in the sciences, and they explore “what-if” scenarios from their field in their fiction. Let’s say you want to write a sci-fi piece that revolves around decision theory. To make it really captivating you want a character who already knows how to use these skills. Training is ok, and the many years of training that a ninja or a demon-slayer goes through can be interesting, but the real action is when they are near the peak of their mastery and they have to face down the Big Bad villain in a fight to the death. Most of the time the training is alluded to in flashbacks, or covered in a montage. It’s just not that fun.


To facilitate this, there is one major change between this fanfic and canon. In Methods of Rationality Petunia marries a kind university professor instead of an ignorant jerk, and he teaches Harry about the Scientific Method and gives him the full set of Enlightenment skills and ideals, so the story can jump right into the action.


Now, to really test a character’s skills and resolve you thrust them into a completely novel situation - one they didn’t prepare for and never dreamed they’d be tested in, but which still relies on those skills. Which in the case of decision theory would mean revealing to the character that everything they thought they knew was false. They have been lied to all their lives, and the world doesn’t really work the way they thought it did. Now they have to re-examine everything they thought they knew, test every assumption they had. Is this thing I believe a true fact about reality, or was it part of the conspiracy to keep me ignorant? What beliefs can I keep and what must I change? Of those, which beliefs do I have to dump entirely, and which can I simply modify a bit? How do I internalize this new knowledge so that I act unconsciously on what I’ve discovered, rather than defaulting to old habits?


If you ask people to name settings in pop culture where there is such a radical revelation, where the protagonist learns that the world is mostly a lie, the two most common answers you get are “Harry Potter” and “The Matrix”. The Matrix is really cool, but the characters aren’t as interesting - they don’t have parents or relatives or backstory. Neo, as his name alludes, is New and completely disconnected from the surrounding world, which works for the story they’re telling of the isolated loner, but that doesn’t make for very fertile fanfic soil. Also - the Matrix world doesn’t have magic. No one there is forced to say “I just saw a human turn into a cat, but she kept thinking using her human brain. What does this mean for what I thought I knew about brains?”


Plus Yudkowsky was a reader of Harry Potter fanfic, not Matrix fanfic, so it was natural to write in the same world he enjoyed reading.


I should probably get into the meat of the story itself.


Methods of Rationality takes place during Harry’s first year at Hogwarts. It starts with Harry getting his letter and initially follows the structure of the first book, with a trip to Diagon Alley, Platform 9 and ¾, the sorting, the conflict with Snape, even the Troll. But it does it all with a rationalist slant, which makes it a unique sort of story, and the differences between it and the original are really cool to watch. This slant results in some parodies of the original, like when it sorts Hermione into Ravenclaw because - as Harry comments - if Hermione Granger doesn’t qualify as Ravenclaw, there’s no reason for Ravenclaw House to exist. But the parodies aren’t mean-spirited - the author really likes the Potterverse. They’re just fun.


In terms of genre, it’s hard to classify Methods of Rationality into any one category, but large parts of it are comedy. If you watch anime and enjoy that sort of over-the-top, falling-on-your-face, winking-at-the-audience style of humor, you will love Methods of Rationality. It has TONS of that. It has Boy-Who-Lived Fangirls trying to get Harry Potter to fall in love with them. It has someone trying to summon Harry with an epic straight-out-of-Lovecraft Elder God summoning ritual which goes… not quite how they expected.


But it isn’t all comedy. Harry is attacked by a Dementor and re-lives seeing his parents murdered. He goes to Azkaban and meets a tortured Bellatrix Black. Like, literally being tortured. There’s blood debts and ransoms, and all the while Voldemort’s minions are trying to destroy him and kill his friends. So there’s drama and action and pathos as well as comedy. And it flows very nicely, Yudkowsky handles mood-switches extremely well, moving from comedy to drama to action and back to comedy with a skill that rivals professional authors.


Even though the story takes place only in Harry’s first year, it does draw in elements from the entire Potter timeline. There’s a time-turner. Remus Lupin, Rita Skeeter, and Mad-Eye Moody all make appearances. The three Deathly Hallows and the Peverell Brothers are a major plot point. Luna Lovegood doesn’t show up, since she’s too young to be at Hogwarts in the first year, but she is mentioned and several issues of The Quibbler show up.


I did mention the major change from canon - in canon, the relatives who raise Harry are evil and keep him locked up. That wouldn’t really work for this story, because Harry can’t be locked away from the muggle world; he has to have knowledge and expectations about it in order for them all to be shattered. But since almost all the action takes place at Hogwarts, the content of the story isn’t drastically altered by that. It’s mainly altered by the application of rationality.


The question sometimes comes up - what if I haven’t read the original Harry Potter books, or seen the movies? There are people who’ve heard the story is great and want to read it, but don’t have much desire to read the Potter books. I ain’t gonna lie - you won’t enjoy it quite as much. There are a lot of in-jokes that will go right over your head if you haven’t at least seen the movies. For example, the references to the Weasley pet rat will probably be confusing. But it’s not as bad as you might think, because there are A LOT of references in Methods of Rationality to tons of things outside the Potterverse. There are references to anime, old sci-fi books, internet memes… there are shout-outs to Star Wars and even to Gargoyles. So everyone will miss something. The in-jokes are great when you get them, and no big deal when you don’t, and if the in-jokes you don’t get happen to be Harry Potter in-jokes, that’s not a tragedy. To be honest, I hadn’t read the last two Potter books when I started on Methods of Rationality myself. And I loved it.


In the end, you don’t actually need to have read the Potter books to enjoy Methods of Rationality. Characters are still introduced in a coherent way, the plot is internally consistent, and the knowledge you need to understand and enjoy the story is presented in the text. So if you’re on the fence, go ahead and give it a try. You really don’t have to plow through seven books you aren’t excited about. But if you can find the time to watch at least the first movie, it does make it more enjoyable.


Some of you may have realized that there is a problem with giving Harry a major rationality upgrade. For a story to be exciting there must be a real conflict, not a one-sided beatdown. There’s a law of good fanfic that says “If you give Frodo a lightsaber, you must give Sauron the Death Star.” Fortunately this IS a good fanfic, and Voldemort gets a huge upgrade in intelligence and rationality. The way he wraps the entire Wizard World into knots, even seducing Harry, is epic. And Draco Malfoy gets an upgrade as well, and turns from an egotistical bully to a shrewd plotter. This makes for really good reading for those of us more interested in power grabs and back-stabbing than broomstick-based sports. Not that there’s anything wrong with that...


Personally, I find the plotting phenomenal. There is foreshadowing everywhere - things you’ll read that seem like throw-away jokes when you first encounter them, but that are clearly signs saying “This is what is going to happen next!” and that blow you away when you read through a second time. There are Chekhov’s guns laid out early on that aren’t fired until 50 chapters later (chapters aren’t that long). The way little plot points and comments are woven in and out, tying early tiny actions back to huge events much later, is stunning.


Obviously I’m a big fan.


There is one other thing about Harry Potter and the Methods of Rationality that makes it unusual. It’s not just a novel. It’s also a deliberate instructional mechanism. Humans learn things by story-telling. Imagining something is mentally analogous to remembering something that didn’t actually happen. Yudkowsky uses this intentionally to direct his audience into developing stronger rationality skills. Almost every chapter, or group of chapters, is specifically designed to teach a technique or skill of rationality. The technique to be taught is right there in the chapter title. Chapter 26: Noticing Confusion. Most of the time a character, often Harry, will at some point explicitly explain what the technique is or how it is to be used. The chapter will also contain at least one example of someone succeeding or failing in the use of the technique. Sometimes multiple examples. Sometimes multiple examples of both.


The really crazy thing is, you generally don’t notice. The writing is strong and the story really pulls you in, so it’s integrated seamlessly into the plot progression. It isn’t until I go back and read a chapter a second time, referring back to the chapter title and really keeping my eyes open for all examples of it, that I realize just how central that particular idea is in that chapter. It makes me wish all books were written like that.


And as a final bonus for anyone who likes to really dig deep into their novels, Yudkowsky’s stated that Methods is a puzzle that’s meant to be solvable. That all the clues are laid out within it, and a reader who really wants to can work it out before it’s revealed at the end. Toward that end there are a number of places online where people discuss Methods of Rationality and what they think is happening. There’s a thread on TVTropes, and an HPMoR sub-reddit, as well as just people blogging about it now and then. So if you’re into that sort of puzzle-solving, this is right up your alley. The final arc will be released later this year, so there’s still time to get in on the action.


OK, all that being said, this fanfic isn’t for everyone. There are some people who dislike how Harry talks to adults. Most of these people are parents. /shrug I’m not a parent, I don’t know. Some people just never get into the story, which is fine. The humor doesn’t appeal to everyone, and some of the dark parts are pretty dark. And I really wouldn’t recommend this to anyone who isn’t at least in their teens yet. The terminology and some of the more complex ideas are probably too daunting for younger readers. Also the story does touch on more adult subject matter a few times.


I’ll wrap up with some final info on where you can find this. The official home is at FanFiction.Net. You can go there and search for Harry Potter And The Methods of Rationality. Or just google Harry Potter And The Methods of Rationality. The cleanest site, with a table of contents and resource links and everything, is HPMOR.COM. That’s the site I use when I read it. There’s also the audio-book version, which is at HPMORpodcast.com. I run that one. And of course all of it is free.


*breath*

 

Alright, that’s my presentation, and I hope you’ve learned whatever you wanted to learn. Give it a shot and maybe you’ll love it as much as I do. I’ll now open the floor to questions.

What should a college student do to maximize future earnings for effective altruism?

16 D_Malik 27 August 2013 07:06PM

 

I'd like to solicit advice since I'm starting at Stanford this Fall and I'm interested in optimal philanthropy.

First off, what should I major in? I have experience in programming and math, so I'm thinking of majoring in CS, possibly with a second major or a minor in applied math. But switching costs are still extremely low at the moment, so I should consider other fields.

Some majors that could have higher lifetime earnings than straight CS:

  • Petroleum engineering. Would non-oil energy sources cause pay to drop over the next 40 years?
  • Actuarial math. If I understand correctly, actuaries had high pay because they were basically a cartel, artificially limiting the supply of certifications to a certain number each year. And I've heard that people that used to hire actuaries now hire cheaper equivalents, so pay could be less over the next 40 years.
  • Chemical engineering, nuclear engineering, electrical and electronics engineering, mechanical engineering, aerospace engineering.
  • Pre-med.
  • Quantitative finance.

Thoughts?

Stanford actually has salary data for 2011-2012 graduates by major. CS has the highest earnings, by quite a margin. The data is incomplete because few people responded and some groups were omitted for privacy, so we don't know what e.g. petroleum engineers or double majors earned.

Should I double-major? There are some earnings statistics here; to summarize, two majors in the same field don't help; a science major plus a humanities major has lower earnings than the science major alone; the greatest returns are achieved by pairing a math/science major with an engineering major, which increases earnings "up to 30%" above the math/science major alone. I'd guess these effects are largely not causation, but correlation: conscientiousness/ambition plausibly causes both double majors and higher earnings.

I could also get minors. I'm planning to very carefully look over the requirements for each major and minor, since there do seem to be some cheap gains. A math minor can be done in one quarter, for instance; a math major takes only a bit more than two quarters.

I have a table with the unit requirements of each combination of majors and minors. Most students take 15 units a quarter. Here are some major/minor combinations I could do:

  • If I take 18.8 units a quarter, I could double-major in CS and econ.
  • If I take 15.8 units a quarter, I could major in CS and minor in math and econ.
  • If I take 15.4 units a quarter, I could double-major in CS and math.

Cal Newport argues that this sort of thing is a bad idea because hard schedules do not actually impress employers more.

Would employers care about double majors in undergrad if I also get a graduate degree? I will do a master's degree or a PhD, partly because those make it a lot easier to emigrate to the US. (I'm from South Africa, which doesn't have much of a software industry.)

What other things could increase earnings?

  • Doing an internship every summer.
  • Networking. Stanford's statistics on how 2011-2012 graduates found jobs indicates that around 29% of them got jobs through networking.
  • Better social skills? I'm planning on taking some classes on public speaking, improv, etc.; what else should I do?
  • Some way of signalling leadership skills? Maybe I could try to get into a leadership position at a student club or something.
  • Honors programs, or doing research. Do employers care about this?
  • Following the advice of Stanford's Career Development Center, for instance about how to prepare for career fairs, using their internship network, making appointments with their career counselors, etc.
  • Studying abroad. I'm already studying abroad by going to Stanford, so this is probably less valuable for me than for most students, though it still seems likely to be worthwhile. Stanford has a Washington program involving internships and classes taught by policymakers, which might be worth doing. Both these would make it harder to do multiple majors and minors.

Many thanks for all advice given!

 

EDIT: I used a scoring rule to rank all combinations of majors and minors in CS, math, economics and MS&E (management science and engineering) according to practicality and estimated effect on earnings. Unit estimates include all breadth requirements etc., assuming I don't take stupid courses. Here are the top-ranked combinations; the top 10 all look pretty good:

| CS | Math | Econ | MS&E | Total Units | Units per quarter | Hours/day |
|-------|-------|-------|-------|-----|------|-----|
| minor | minor | MAJOR | minor | 198 | 16.5 | 7.1 |
| MAJOR | . | minor | minor | 207 | 17.3 | 7.4 |
| minor | . | MAJOR | minor | 189 | 15.8 | 6.8 |
| minor | . | MAJOR | MAJOR | 216 | 18.0 | 7.7 |
| MAJOR | minor | minor | minor | 216 | 18.0 | 7.7 |
| minor | MAJOR | minor | minor | 183 | 15.3 | 6.5 |
| MAJOR | . | . | MAJOR | 199 | 16.6 | 7.1 |
| minor | MAJOR | minor | MAJOR | 210 | 17.5 | 7.5 |
| minor | minor | minor | MAJOR | 180 | 15.0 | 6.4 |
| minor | MAJOR | MAJOR | . | 202 | 16.8 | 7.2 |
| MAJOR | minor | minor | . | 190 | 15.8 | 6.8 |
| MAJOR | minor | . | MAJOR | 208 | 17.3 | 7.4 |
| MAJOR | MAJOR | . | minor | 211 | 17.6 | 7.5 |
| . | minor | MAJOR | MAJOR | 192 | 16.0 | 6.9 |
| minor | minor | MAJOR | MAJOR | 225 | 18.8 | 8.0 |
| MAJOR | . | minor | MAJOR | 234 | 19.5 | 8.4 |
| minor | . | minor | MAJOR | 171 | 14.3 | 6.1 |
| . | MAJOR | MAJOR | minor | 195 | 16.3 | 7.0 |
| minor | MAJOR | MAJOR | minor | 228 | 19.0 | 8.1 |
| MAJOR | minor | . | minor | 181 | 15.1 | 6.5 |
| MAJOR | MAJOR | minor | . | 220 | 18.3 | 7.9 |
| MAJOR | . | MAJOR | . | 226 | 18.8 | 8.1 |
| MAJOR | . | minor | . | 181 | 15.1 | 6.5 |
| minor | MAJOR | . | MAJOR | 175 | 14.6 | 6.3 |
| MAJOR | MAJOR | . | . | 185 | 15.4 | 6.6 |
| minor | minor | MAJOR | . | 172 | 14.3 | 6.1 |
| . | . | MAJOR | MAJOR | 183 | 15.3 | 6.5 |
| MAJOR | minor | MAJOR | . | 235 | 19.6 | 8.4 |
| MAJOR | . | . | minor | 172 | 14.3 | 6.1 |
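The derived columns in the table appear to follow a simple rule of thumb. Here is a minimal sketch of that arithmetic, assuming a 12-quarter (four-year) program and roughly 3 hours of work per unit per week; both assumptions are mine and are not stated in the table itself:

```python
# Derived columns for the workload table (sketch).
# Assumed conversions (mine, not necessarily the author's exact method):
#   - 12 quarters in a four-year program
#   - ~3 hours of work per unit per week, spread over 7 days

QUARTERS = 12
HOURS_PER_UNIT_PER_WEEK = 3

def workload(total_units):
    """Return (units per quarter, hours per day) for a total unit count."""
    units_per_quarter = total_units / QUARTERS
    hours_per_day = units_per_quarter * HOURS_PER_UNIT_PER_WEEK / 7
    return round(units_per_quarter, 1), round(hours_per_day, 1)

# Reproduce a few rows of the table:
print(workload(198))  # top-ranked combination: (16.5, 7.1)
print(workload(235))  # heaviest combination listed: (19.6, 8.4)
```

Under these assumptions the derived columns match the table's figures exactly, which suggests the ranking is driven entirely by total units.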

Another option is to major or minor in M&CS (mathematical and computational sciences) instead of math or CS separately.

 

EDIT 2: Here is a graph of graduates' salaries by major. Y-axis is salary of 2011-2012 Stanford graduates. X-axis is degree: 1 is BA/BS, 2 is MA/MS, 3 is PhD; intermediate values are for groups containing two degree-levels. The sample size is tiny because only 30% of students responded, and some groups were omitted for privacy.

Arguments Against Speciesism

28 Lukas_Gloor 28 July 2013 06:24PM

There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over a year ago comes closest to what I have in mind, but I want to focus on some of the issues in more detail.

A while back, I read the following comment in a LessWrong discussion on uploads:

I do not at all understand this PETA-like obsession with ethical treatment of bits.

Aside from (carbon-based) humans, which other beings deserve moral consideration? Nonhuman animals? Intelligent aliens? Uploads? Nothing else?

This article is intended to shed light on these questions; it is however not the intent of this post to advocate a specific ethical framework. Instead, I'll try to show that some ethical principles held by a lot of people are inconsistent with some of their other attitudes -- an argument that doesn't rely on ethics being universal or objective. 

More precisely, I will develop the arguments behind anti-speciesism (and the rejection of analogous forms of discrimination, such as discrimination against uploads) to point out common inconsistencies in some people's values. This will also provide an illustrative example of how coherentist ethical reasoning can be applied to shared intuitions. If there are no shared intuitions, ethical discourse will likely be unfruitful, so it is likely that not everyone will draw the same conclusions from the arguments here. 

 

What Is Speciesism?

Speciesism, a term popularized (but not coined) by the philosopher Peter Singer, is meant to be analogous to sexism or racism. It refers to a discriminatory attitude against a being, where less ethical consideration (i.e., caring less about the being's welfare or interests) is given solely because of the "wrong" species membership. The "solely" here is crucial, and it is misunderstood often enough to warrant the redundant emphasis.

For instance, it is not speciesist to deny pigs the right to vote, just like it is not sexist to deny men the right to have an abortion performed on their body. Treating beings of different species differently is not speciesist if there are relevant criteria for doing so. 

Singer summarized his case against speciesism in this essay. The argument that does most of the work is often referred to as the argument from marginal cases. A perhaps less anthropocentric, more fitting name would be argument from species overlap, as some philosophers (e.g. Oscar Horta) have pointed out. 

The argument boils down to the question of choosing relevant criteria for moral concern. What properties do human beings possess that make us think it is wrong to torture them? Or to kill them? (Note that these are two different questions.) The argument from species overlap points out that all the typical or plausible suggestions for relevant criteria apply as much to dogs, pigs or chickens as they do to human infants or late-stage Alzheimer's patients. Therefore, giving less ethical consideration to the former would be based merely on species membership, which is just as arbitrary as choosing race or sex as the relevant criterion (further justification for that claim follows below).

Here are some examples of commonly suggested criteria. Those who wish may pause at this point and think about which criteria they consult when deciding whether it is wrong to inflict suffering on a being (and, separately, which are relevant to the wrongness of killing).

 

The suggestions are:

A: Capacity for moral reasoning

B: Being able to reciprocate

C: (Human-like) intelligence

D: Self-awareness

E: Future-related preferences; future plans

E': Preferences / interests (in general)

F: Sentience (capacity for suffering and happiness)

G: Life / biological complexity

H: What I care about / feel sympathy or loyalty towards

 

The argument from species overlap points out that not all humans are equal. The sentiment behind "all humans are equal" is not that they are literally equal, but that equal interests/capacities deserve equal consideration. None of the above criteria except (in some empirical cases) H imply that human infants or late stage demented people should be given more ethical consideration than cows, pigs or chickens.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real-life.

However, such factors can't apply to ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it. If we consider this implication to be unacceptable, then the same must apply for the situations nonhuman animals find themselves in on farms.

Side note: The question whether killing a given being is wrong, and if so, "why" and "how wrong exactly", is complex and outside the scope of this article. Instead of on killing, the focus will be on suffering, and by suffering I mean something like wanting to get out of one's current conscious state, or wanting to change some aspect about it. The empirical issue of which beings are capable of suffering is a different matter that I will (only briefly) discuss below. So in this context, giving a being moral consideration means that we don't want it to suffer, leaving open the question whether killing it painlessly is bad/neutral/good or prohibited/permissible/obligatory. 

The main conclusion so far is that if we care about all the suffering of members of the human species, and if we reject question-begging reasoning that could also be used to justify racism or other forms of discrimination, then we must also care fully about suffering happening in nonhuman animals. This would imply that x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads. (Though admittedly the latter wouldn't be anti-speciesist but rather anti-"substratist", or anti-"fleshist".)

The claim is that there is no way to block this conclusion without:

1. using reasoning that could analogically be used to justify racism or sexism
or
2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

I've tried and have asked others to try -- without success. 

 

Caring about suffering

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past. 

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above), to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point after all to care about all humans (or at least all humans outside their mother's womb). However, I don't see why absurd conclusions that will likely remain hypothetical would be significantly less bad than other absurd conclusions. Their mere possibility undermines the whole foundation one's decisional algorithm is grounded in. (Compare hypothetical problems for specific decision theories.) 

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefited. The case for treating these as criteria for the moral relevance of suffering (or happiness) is very weak, to say the least.

Maybe that's the speciesist's central confusion: the idea that the rationality/sapience of a being is somehow relevant to whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as though I would be less afraid of the torture, or care less about averting it!

Those who do consider biting the bullet should ask themselves whether they would have defended that view in all contexts, or whether they might be driven towards such a conclusion by a self-serving bias. There seems to be a strange and sudden increase in the frequency of people who are willing to claim that there is nothing intrinsically wrong with torturing babies when the subject is animal rights, or more specifically, the steak they intend to have for dinner.

It is an entirely different matter if people genuinely think that animals or human infants or late-stage demented people are not sentient. To be clear about what is meant by sentience: 

A sentient being is one for whom "it feels like something to be that being". 

I find it highly implausible that only self-aware or "sapient" beings are sentient, but if true, this would constitute a compelling reason against caring for at least most nonhuman animals, for the same reason that it would be pointless to care about pebbles for the pebbles' sake. If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist.

What irritates me, however, is that anyone advocating such a view should, it seems to me, still have to factor in a significant probability of being wrong, given that both philosophy of mind and the neuroscience that goes with it are hard and, as far as I'm aware, not quite settled yet. The issue matters because of the huge numbers of nonhuman animals at stake and because of the terrible conditions these beings live in. 

I rarely see this uncertainty acknowledged. If we imagine the torture-scenario outlined above, how confident would we really be that the torture "won't matter" if our own advanced cognitive capacities are temporarily suspended? 

 

Why species membership really is an absurd criterion

At the beginning of the article, I wrote that I'd get back to this for those not convinced. Some readers may still feel that there is something special about being a member of the human species. Some may be tempted to think of "species" as if it were a fundamental concept, a Platonic form.

The following likely isn't news to most of the LW audience, but it is worth spelling out anyway: there exists a continuum of "species" in thing-space, as well as on the actual evolutionary timescale. The species boundaries seem obvious only because the intermediates kept evolving or went extinct. And even if that were not the case, we could imagine it; the theoretical possibility is enough to make the philosophical case, even though psychologically, actualities are more convincing.

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd! There are several different definitions of "species" used in biology. A common criterion -- for sexually reproducing organisms anyway -- is whether groups of beings (of different sex) can have fertile offspring together. If so, they belong to the same species. 

That is a rather odd way of determining whether one cares about the suffering of some hominid creature in the line-up of ancestors: why should the ability to produce fertile offspring be relevant to whether some instance of suffering matters to us?

Moreover, is that really the terminal value of people who claim they only care about humans, or could it be that they would, upon reflection, revoke such statements?

And what about transhumanism? I remember that a couple of years ago, I thought I had found a decisive argument against human enhancement. I thought it would likely lead to speciation, and somehow the thought of that directly implied that posthumans would treat the remaining humans badly, and so the whole thing became immoral in my mind. Obviously this is absurd; there is nothing wrong with speciation per se, and if posthumans are anti-speciesist, then the remaining humans would have nothing to fear! But given the speciesism in today's society, it is all too understandable that people would be concerned about this. If we imagine the huge extent to which a posthuman, not to mention a strong AI, would be superior to current humans, isn't that a bit like comparing chickens to us?

A last possible objection I can think of: Suppose one held the belief that group averages are what matters, and that all members of the human species deserve equal protection because of the group average for a criterion that is considered relevant and that would, without the group average rule, deny moral consideration to some sentient humans. 

This defense too doesn't work. Aside from seeming suspiciously arbitrary, such a view would imply absurd conclusions. A thought experiment for illustration: A pig with a macro-mutation is born, she develops child-like intelligence and the ability to speak. Do we refuse to allow her to live unharmed -- or even let her go to school -- simply because she belongs to a group (defined presumably by snout shape, or DNA, or whatever the criteria for "pigness" are) with an average that is too low?

Or imagine you are the head of an architecture bureau and looking to hire a new aspiring architect. Is tossing out an application written by a brilliant woman going to increase the expected success of your firm, assuming that women are, on average, less skilled at spatial imagination than men? Surely not!

Moreover, taking group averages as our ethical criterion requires us to first define the relevant groups. Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others? 

 

Summary

Our speciesism is an anthropocentric bias without any reasonable foundation. It would be completely arbitrary to give special consideration to a being simply because of its species membership. Doing so would lead to a number of implications that most people clearly reject. A strong case can be made that suffering is bad in virtue of being suffering, regardless of where it happens. If the suffering or deaths of nonhuman animals deserve no ethical consideration, then human beings with the same relevant properties (of which all plausible ones seem to come down to having similar levels of awareness) deserve no intrinsic ethical consideration either, barring speciesism. 

Assuming that we would feel uncomfortable giving justifications or criteria for our scope of ethical concern that can analogously be used to defend racism or sexism, those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering. 

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments. 

Edit: As Carl Shulman has pointed out, discounting may also apply for "intensity of sentience", because it seems at least plausible that shrimps (for instance), if they are sentient, can experience less suffering than e.g. a whale. 

Why Eat Less Meat?

48 peter_hurford 23 July 2013 09:30PM

Previously, I wrote on LessWrong about the preliminary evidence in favor of using leaflets to promote veganism as a way of cost-effectively reducing suffering.  In response, there was a large discussion with 530+ comments.   In this discussion, I found that a lot of people wanted me to write about why I think nonhuman animals deserve our concern anyway.

Therefore, I wrote this essay in an attempt to defend the view that if one cares about suffering, one should also care about nonhuman animals, since (1) they are capable of suffering, (2) they do suffer quite a lot, and (3) we can prevent their suffering.  I hope that we can have a sober, non-mind-killing discussion about this topic, since it's possibly quite important.

 

Introduction

For the past two years, the only place I ate meat was at home with my family.  As of October 2012, I've finally stopped eating meat altogether and can't see a reason why I would want to go back.  This kind of diet is commonly classified as "vegetarianism": one refrains from eating the flesh of all animals, including fish, but still consumes animal products like eggs and milk (though I try to avoid eggs as best I can).

Why might I want to do this?  And why might I see it as a serious issue?  It's because I'm very concerned about the reality of suffering done to our "food animals" in the process of making them into meat, because I see vegetarianism as a way to reduce this suffering by stopping the harmful process, and because vegetarianism has not been hard at all for me to accomplish.

 

Animals Can Suffer

Back in the 1600s, René Descartes thought nonhuman animals were soulless automatons that could respond to their environment and react to stimuli, but could not feel anything; humans were the only species that was truly conscious. Descartes hit on an important point: since feelings are completely internal to the animal doing the feeling, it is impossible to demonstrate that anyone is truly conscious.

However, when it comes to humans, we don’t let that stop us from assuming other people feel pain. When we jab a person with a needle, no matter who they are, where they come from, or what they look like, they share a rather universal reaction of what we consider to be evidence of pain. We also extend this to our pets — we make great strides to avoid harming kittens, puppies, or other companion animals, and no one would want to kick a puppy or light a kitten on fire just because their consciousness cannot be directly observed. That’s why we even go as far as having laws against animal cruelty.

The animals we eat are no different. Pigs, chickens, cows, and fish all have incredibly analogous responses to stimuli that we would normally agree cause pain to humans and pets.  Jab a pig with a needle, kick a chicken, or light a cow on fire, and they will react aversively like any cat, dog, horse, or human.

 

The Science

But we don't need to rely on just our intuition -- instead, we can look at the science.  Animal scientists Temple Grandin and Mark Deesing conclude that "[o]ur review of the literature on frontal cortex development enables us to conclude that all mammals, including rats, have a sufficiently developed prefrontal cortex to suffer from pain".  An interview of seven different scientists concludes that animals can suffer.

Dr. Jane Goodall, famous for having studied animals, writes in her introduction to The Inner World of Farm Animals that "farm animals feel pleasure and sadness, excitement and resentment, depression, fear, and pain. They are far more aware and intelligent than we ever imagined…they are individuals in their own right."  Farm Sanctuary, an animal welfare organization, has a good overview documenting this research on animal emotion.

Lastly, among much other evidence, in the "Cambridge Declaration on Consciousness", a prominent international group of cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists states:

Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors.  Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also  possess these neurological substrates.

 

Factory Farming Causes Considerable Suffering

However, the fact that animals can suffer is just one piece of the picture; we next have to establish that animals do suffer as a result of people eating meat.  Honestly, this is easier shown than told -- an extremely harrowing and shocking 11-minute video about the cruelty is available.  Watching that video is perhaps the easiest way to see the suffering of nonhuman animals firsthand in these "factory farms".

In making the case clear, Vegan Outreach writes "Many people believe that animals raised for food must be treated well because sick or dead animals would be of no use to agribusiness. This is not true."

They then go on to document, with sources, how virtually all birds raised for food are from factory farms where "resulting ammonia levels [from densely populated sheds and accumulated waste] commonly cause painful burns to the birds' skin, eyes, and respiratory tracts" and how hens "become immobilized and die of asphyxiation or dehydration", having been "[p]acked in cages (usually less than half a square foot of floor space per bird)".  In fact, 137 million chickens suffer to death each year before they can even make it to slaughter -- more than the number of animals killed for fur, in shelters and in laboratories combined!

Farm Sanctuary also provides an excellent overview of the cruelty of factory farming, writing "Animals on factory farms are regarded as commodities to be exploited for profit. They undergo painful mutilations and are bred to grow unnaturally fast and large for the purpose of maximizing meat, egg, and milk production for the food industry."

It seems clear that factory farming practices are truly deplorable, and certainly are not worth the benefit of eating a slightly tastier meal.  In "An Animal's Place", Michael Pollan writes:

To visit a modern CAFO (Confined Animal Feeding Operation) is to enter a world that, for all its technological sophistication, is still designed according to Cartesian principles: animals are machines incapable of feeling pain. Since no thinking person can possibly believe this any more, industrial animal agriculture depends on a suspension of disbelief on the part of the people who operate it and a willingness to avert your eyes on the part of everyone else.

 

Vegetarianism Can Make a Difference

Many people see the staggering amount of suffering in factory farms, and if they don't aim to dismiss it outright will say that there's no way they can make a difference by changing their eating habits.  However, this is certainly not the case!

 

How Many Would Be Saved?

Drawing from the 2010 Livestock Slaughter Animal Summary and the Poultry Slaughter Animal Summary, 9.1 billion land animals are either grown in the US or imported (94% of which are chickens!), 1.6 billion are exported, and 631 million die before anyone can eat them, leaving 8.1 billion land animals for US consumption each year.

A naïve average would divide this total among the population of the US, which is 311 million, assigning 26 land animals for each person's annual consumption.  Thus, by being vegetarian, you are saving 26 land animals a year you would have otherwise eaten.  And this doesn't even count fish, which could be quite high given how many fish need to be grown just to be fed to bigger fish!

Yet, this is not quite true.  It's important to note that supply and demand aren't perfectly linear.  If you reduce your demand for meat, suppliers will react by lowering the price of meat a little, which lets more people buy it.  Since chickens dominate the meat market, we'll use the supply elasticity of chickens (0.22) and the demand elasticity of chickens (-0.52) to calculate the change in supply, which comes out to roughly 0.3.  Taking this multiplier, it's more accurate to say you're saving about 7.8 land animals a year, or more.  There are a lot of complex considerations in calculating elasticity, though, so we should treat this figure as having some uncertainty.
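The arithmetic behind both figures can be sketched in a few lines. Note that the post only states the resulting multiplier of ~0.3; the cumulative elasticity formula e_s / (e_s - e_d) is my assumption about how it was derived:

```python
# Sketch of the animals-saved estimate.
# Assumed formula: change in supply per unit change in demand is
# e_s / (e_s - e_d), the cumulative elasticity factor (my assumption;
# the post only states the ~0.3 multiplier).

land_animals_consumed = 8.1e9   # per year, US consumption
us_population = 311e6

naive_per_person = land_animals_consumed / us_population
print(round(naive_per_person))  # ~26 animals per person per year

supply_elasticity = 0.22
demand_elasticity = -0.52
multiplier = supply_elasticity / (supply_elasticity - demand_elasticity)
print(round(multiplier, 1))     # ~0.3

# Using the rounded figures, as the post does:
print(round(26 * 0.3, 1))       # 7.8 animals spared per year
```

The multiplier says that cutting your demand by 26 animals reduces actual production by only about 30% of that, because the resulting price drop induces some extra consumption by others.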

 

Collective Action

One might object that since meat is often bought in bulk, reducing one's meat consumption won't affect the amount of meat bought, and thus the suffering will still be the same, except with meat going to waste.  However, this ignores the effect of many different vegetarians acting together.

Imagine that your supermarket buys chicken wings in cases of 200.  It would thus take 200 people each buying one less wing for the supermarket to order one fewer case.  However, you have no idea whether you're vegetarian #1, vegetarian #56, or vegetarian #200, the one whose forgone purchase tips the supermarket into ordering 200 fewer wings.  You can thus estimate that by buying one less wing you have a 1-in-200 chance of reducing the order by 200 wings, which is equivalent, in expectation, to reducing the supply by one wing.  So the bulk-purchasing effect basically cancels out.  See here or here for more.
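The expected-value reasoning above can be made explicit. This is a minimal sketch under the stated (simplified) assumption that each forgone wing has a uniform 1-in-200 chance of being the one that tips the supermarket's order:

```python
# Expected-value sketch of the bulk-purchasing ("case of 200") argument.
# Assumption: your forgone wing has a 1-in-200 chance of being the
# tipping point that causes one fewer 200-wing case to be ordered.

case_size = 200
p_tipping = 1 / case_size        # chance you are the tipping purchaser
wings_per_case = 200             # reduction if you do tip the order

expected_reduction = p_tipping * wings_per_case
print(expected_reduction)  # 1.0 -- one wing in expectation
```

In expectation, each forgone wing removes exactly one wing from supply, just as if demand responded linearly; the case size drops out of the calculation entirely.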

Every time you buy factory farmed meat, you are creating demand for that product, essentially saying "Thank you, I liked what you are doing and want to encourage you to do it more".  By eating less meat, we can stop our support of this industry.

 

Vegetarianism Is Easier Than You Think

So nonhuman animals can suffer and do suffer in factory farms, and we can help stop this suffering by eating less meat.  I know people who get this far, but then stop and say that, as much as they would like to, there's no way they could be vegetarian because they like meat too much!  However, such enjoyment of meat shouldn't count for much compared to the massive suffering each animal undergoes just to be farmed -- imagine someone refusing to stop eating your pet just because they like the taste so much!

This is less of a problem than you might think, because being a vegetarian is really easy.  Most people only think about what they would have to give up and how good it tastes, and don't think about the tasty meat-free things they could eat instead.  When I first decided to be a vegetarian, I simply switched from tasty hamburgers to tasty veggie burgers, and there was no problem at all.

 

A Challenge

To those who say that vegetarianism is too hard, I’d like to simply challenge you to just try it for a few days. Feel free to give up afterward if you find it too hard. But I imagine that you should do just fine, find great replacements, and be able to save animals from suffering in the process.

If reducing suffering is one of your goals, there’s no reason why you must either be a die-hard meat eater or a die-hard vegetarian. Instead, feel free to explore some middle ground. You could be a vegetarian on weekdays but eat meat on weekends, or just try Meatless Mondays, or simply try to eat less meat. You could try to eat bigger animals like cows instead of fish or chicken, thus getting the same amount of meat with significantly less suffering.

-

(This was also cross-posted on my blog.)

Four Focus Areas of Effective Altruism

40 lukeprog 09 July 2013 12:59AM

It was a pleasure to see all major strands of the effective altruism movement gathered in one place at last week's Effective Altruism Summit.

Representatives from GiveWell, The Life You Can Save, 80,000 Hours, Giving What We Can, Effective Animal Altruism, Leverage Research, the Center for Applied Rationality, and the Machine Intelligence Research Institute either attended or gave presentations. My thanks to Leverage Research for organizing and hosting the event!

What do all these groups have in common? As Peter Singer said in his TED talk, effective altruism "combines both the heart and the head." The heart motivates us to be empathic and altruistic toward others, while the head can "make sure that what [we] do is effective and well-directed," so that altruists can do not just some good but as much good as possible.

Effective altruists (EAs) tend to:

  1. Be globally altruistic: EAs care about people equally, regardless of location. Typically, the most cost-effective altruistic cause won't happen to be in one's home country.
  2. Value consequences: EAs tend to value causes according to their consequences, whether those consequences are happiness, health, justice, fairness and/or other values.
  3. Try to do as much good as possible: EAs don't just want to do some good; they want to do (roughly) as much good as possible. As such, they hope to devote their altruistic resources (time, money, energy, attention) to unusually cost-effective causes. (This doesn't necessarily mean that EAs think "explicit" cost effectiveness calculations are the best method for figuring out which causes are likely to do the most good.)
  4. Think scientifically and quantitatively: EAs tend to be analytic, scientific, and quantitative when trying to figure out which causes actually do the most good.
  5. Be willing to make significant life changes to be more effectively altruistic: As a result of their efforts to be more effective in their altruism, EAs often (1) change which charities they support financially, (2) change careers, (3) spend significant chunks of time investigating which causes are most cost-effective according to their values, or (4) make other significant life changes.

Despite these similarities, EAs are a diverse bunch, and they focus their efforts on a variety of causes.

Below are four popular focus areas of effective altruism, ordered roughly by how large and visible they appear to be at the moment. Many EAs work on several of these focus areas at once, due to uncertainty about both facts and values.

Though labels and categories have their dangers, they can also enable chunking, which has benefits for memory, learning, and communication. There are many other ways we might categorize the efforts of today's EAs; this is only one categorization.
