Meetup : Fifth Buenos Aires LessWrong meetup

1 Pablo_Stafforini 04 August 2014 10:00PM

Discussion article for the meetup : Fifth Buenos Aires LessWrong meetup

WHEN: 09 August 2014 04:00:00PM (-0300)

WHERE: Roseti 1380, Ciudad de Buenos Aires, Argentina

This will be our fifth Buenos Aires LessWrong meetup.

On this occasion, Pablo will give a brief introductory presentation about effective altruism, followed by discussion. Optional reading material:

The meeting will take place at GarageLab.

To stay informed about future meetups, please subscribe to our Google group.

Hope to see you there!


[link] Why Psychologists' Food Fight Matters

28 Pablo_Stafforini 01 August 2014 07:52AM

Why Psychologists’ Food Fight Matters: “Important findings” haven’t been replicated, and science may have to change its ways. By Michelle N. Meyer and Christopher Chabris. Slate, July 31, 2014. [Via Steven Pinker, who adds on Twitter: "Lesson for sci journalists: Stop reporting single studies, no matter how sexy (these are probably false). Report lit reviews, meta-analyses."] Some excerpts:

Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. The issue attempts to replicate 27 “important findings in social psychology.” Replication—repeating an experiment as closely as possible to see whether you get the same results—is a cornerstone of the scientific method. Replication of experiments is vital not only because it can detect the rare cases of outright fraud, but also because it guards against uncritical acceptance of findings that were actually inadvertent false positives, helps researchers refine experimental techniques, and affirms the existence of new facts that scientific theories must be able to explain.

One of the articles in the special issue reported a failure to replicate a widely publicized 2008 study by Simone Schnall, now tenured at Cambridge University, and her colleagues. In the original study, two experiments measured the effects of people’s thoughts or feelings of cleanliness on the harshness of their moral judgments. In the first experiment, 40 undergraduates were asked to unscramble sentences, with one-half assigned words related to cleanliness (like pure or pristine) and one-half assigned neutral words. In the second experiment, 43 undergraduates watched the truly revolting bathroom scene from the movie Trainspotting, after which one-half were told to wash their hands while the other one-half were not. All subjects in both experiments were then asked to rate the moral wrongness of six hypothetical scenarios, such as falsifying one’s résumé and keeping money from a lost wallet. The researchers found that priming subjects to think about cleanliness had a “substantial” effect on moral judgment: The hand washers and those who unscrambled sentences related to cleanliness judged the scenarios to be less morally wrong than did the other subjects. The implication was that people who feel relatively pure themselves are—without realizing it—less troubled by others’ impurities. The paper was covered by ABC News, the Economist, and the Huffington Post, among other outlets, and has been cited nearly 200 times in the scientific literature.

However, the replicators—David Johnson, Felix Cheung, and Brent Donnellan (two graduate students and their adviser) of Michigan State University—found no such difference, despite testing about four times more subjects than the original studies. [...]

The editor in chief of Social Psychology later agreed to devote a follow-up print issue to responses by the original authors and rejoinders by the replicators, but as Schnall told Science, the entire process made her feel “like a criminal suspect who has no right to a defense and there is no way to win.” The Science article covering the special issue was titled “Replication Effort Provokes Praise—and ‘Bullying’ Charges.” Both there and in her blog post, Schnall said that her work had been “defamed,” endangering both her reputation and her ability to win grants. She feared that by the time her formal response was published, the conversation might have moved on, and her comments would get little attention.

How wrong she was. In countless tweets, Facebook comments, and blog posts, several social psychologists seized upon Schnall’s blog post as a cri de coeur against the rising influence of “replication bullies,” “false positive police,” and “data detectives.” For “speaking truth to power,” Schnall was compared to Rosa Parks. The “replication police” were described as “shameless little bullies,” “self-righteous, self-appointed sheriffs” engaged in a process “clearly not designed to find truth,” “second stringers” who were incapable of making novel contributions of their own to the literature, and—most succinctly—“assholes.” Meanwhile, other commenters stated or strongly implied that Schnall and other original authors whose work fails to replicate had used questionable research practices to achieve sexy, publishable findings. At one point, these insinuations were met with threats of legal action. [...]

Unfortunately, published replications have been distressingly rare in psychology. A 2012 survey of the top 100 psychology journals found that barely 1 percent of papers published since 1900 were purely attempts to reproduce previous findings. Some of the most prestigious journals have maintained explicit policies against replication efforts; for example, the Journal of Personality and Social Psychology published a paper purporting to support the existence of ESP-like “precognition,” but would not publish papers that failed to replicate that (or any other) discovery. Science publishes “technical comments” on its own articles, but only if they are submitted within three months of the original publication, which leaves little time to conduct and document a replication attempt.

The “replication crisis” is not at all unique to social psychology, to psychological science, or even to the social sciences. As Stanford epidemiologist John Ioannidis famously argued almost a decade ago, “Most research findings are false for most research designs and for most fields.” Failures to replicate and other major flaws in published research have since been noted throughout science, including in cancer research, research into the genetics of complex diseases like obesity and heart disease, stem cell research, and studies of the origins of the universe. Earlier this year, the National Institutes of Health stated: “The complex system for ensuring the reproducibility of biomedical research is failing and is in need of restructuring.”

Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view “positive” findings that announce a novel relationship or support a theoretical claim as more interesting than “negative” findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate. Since journal publications are valuable academic currency, researchers—especially those early in their careers—have strong incentives to conduct original work rather than to replicate the findings of others. Replication efforts that do happen but fail to find the expected effect are usually filed away rather than published. That makes the scientific record look more robust and complete than it is—a phenomenon known as the “file drawer problem.”

The emphasis on positive findings may also partly explain the fact that when original studies are subjected to replication, so many turn out to be false positives. The near-universal preference for counterintuitive, positive findings gives researchers an incentive to manipulate their methods or poke around in their data until a positive finding crops up, a common practice known as “p-hacking” because it can result in p-values, or measures of statistical significance, that make the results look stronger, and therefore more believable, than they really are. [...]
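The mechanics of p-hacking are easy to demonstrate with a toy simulation (this is an illustration of the general idea, not an analysis from the article; every number in it is made up). Both "experimental" groups are drawn from the same distribution, so any significant result is a false positive. Testing one pre-chosen outcome keeps the false-positive rate near the nominal 5 percent; measuring several outcomes and reporting only the best p-value inflates it several-fold:

```python
import math
import random
import statistics

random.seed(0)

def p_value(a, b):
    """Two-sided p for a difference in means via a normal approximation
    (adequate for a sketch; a real analysis would use a t-test)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(abs(z) / math.sqrt(2))

def null_study(n=30):
    """Both groups come from the SAME distribution: any 'effect' is a false positive."""
    return ([random.gauss(0, 1) for _ in range(n)],
            [random.gauss(0, 1) for _ in range(n)])

def hacked_p(n_outcomes=5):
    """Measure several unrelated outcomes, then report only the best p-value."""
    return min(p_value(*null_study()) for _ in range(n_outcomes))

trials = 2000
honest = sum(p_value(*null_study()) < 0.05 for _ in range(trials)) / trials
hacked = sum(hacked_p() < 0.05 for _ in range(trials)) / trials
print(f"false-positive rate, one pre-chosen outcome: {honest:.1%}")  # near the nominal 5%
print(f"false-positive rate, best of five outcomes:  {hacked:.1%}")  # roughly 1 - 0.95**5, about 23%
```

The same inflation arises from any flexible choice made after seeing the data: adding subjects until significance, trying different exclusion rules, or switching covariates. Preregistration works precisely by removing that flexibility.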

The recent special issue of Social Psychology was an unprecedented collective effort by social psychologists to [rectify this situation]—by altering researchers’ and journal editors’ incentives in order to check the robustness of some of the most talked-about findings in their own field. Any researcher who wanted to conduct a replication was invited to preregister: Before collecting any data from subjects, they would submit a proposal detailing precisely how they would repeat the original study and how they would analyze the data. Proposals would be reviewed by other researchers, including the authors of the original studies, and once approved, the study’s results would be published no matter what. Preregistration of the study and analysis procedures should deter p-hacking, guaranteed publication should counteract the file drawer effect, and a requirement of large sample sizes should make it easier to detect small but statistically meaningful effects.

The results were sobering. At least 10 of the 27 “important findings” in social psychology were not replicated at all. In the social priming area, only one of seven replications succeeded. [...]

One way to keep things in perspective is to remember that scientific truth is created by the accretion of results over time, not by the splash of a single study. A single failure-to-replicate doesn’t necessarily invalidate a previously reported effect, much less imply fraud on the part of the original researcher—or the replicator. Researchers are most likely to fail to reproduce an effect for mundane reasons, such as insufficiently large sample sizes, innocent errors in procedure or data analysis, and subtle factors about the experimental setting or the subjects tested that alter the effect in question in ways not previously realized.

Caution about single studies should go both ways, though. Too often, a single original study is treated—by the media and even by many in the scientific community—as if it definitively establishes an effect. Publications like Harvard Business Review and idea conferences like TED, both major sources of “thought leadership” for managers and policymakers all over the world, emit a steady stream of these “stats and curiosities.” Presumably, the HBR editors and TED organizers believe this information to be true and actionable. But most novel results should be initially regarded with some skepticism, because they too may have resulted from unreported or unnoticed methodological quirks or errors. Everyone involved should focus their attention on developing a shared evidence base that consists of robust empirical regularities—findings that replicate not just once but routinely—rather than of clever one-off curiosities. [...]

Scholars, especially scientists, are supposed to be skeptical about received wisdom, develop their views based solely on evidence, and remain open to updating those views in light of changing evidence. But as psychologists know better than anyone, scientists are hardly free of human motives that can influence their work, consciously or unconsciously. It’s easy for scholars to become professionally or even personally invested in a hypothesis or conclusion. These biases are addressed partly through the peer review process, and partly through the marketplace of ideas—by letting researchers go where their interest or skepticism takes them, encouraging their methods, data, and results to be made as transparent as possible, and promoting discussion of differing views. The clashes between researchers of different theoretical persuasions that result from these exchanges should of course remain civil; but the exchanges themselves are a perfectly healthy part of the scientific enterprise.

This is part of the reason why we cannot agree with a more recent proposal by Kahneman, who had previously urged social priming researchers to put their house in order. He contributed an essay to the special issue of Social Psychology in which he proposed a rule—to be enforced by reviewers of replication proposals and manuscripts—that authors “be guaranteed a significant role in replications of their work.” Kahneman proposed a specific process by which replicators should consult with original authors, and told Science that in the special issue, “the consultations did not reach the level of author involvement that I recommend.”

Collaboration between opposing sides would probably avoid some ruffled feathers, and in some cases it could be productive in resolving disputes. With respect to the current controversy, given the potential impact of an entire journal issue on the robustness of “important findings,” and the clear desirability of buy-in by a large portion of psychology researchers, it would have been better for everyone if the original authors’ comments had been published alongside the replication papers, rather than left to appear afterward. But consultation or collaboration is not something replicators owe to original researchers, and a rule to require it would not be particularly good science policy.

Replicators have no obligation to routinely involve original authors because those authors are not the owners of their methods or results. By publishing their results, original authors state that they have sufficient confidence in them that they should be included in the scientific record. That record belongs to everyone. Anyone should be free to run any experiment, regardless of who ran it first, and to publish the results, whatever they are. [...]

[S]ome critics of replication drives have been too quick to suggest that replicators lack the subtle expertise to reproduce the original experiments. One prominent social psychologist has even argued that tacit methodological skill is such a large factor in getting experiments to work that failed replications have no value at all (since one can never know if the replicators really knew what they were doing, or knew all the tricks of the trade that the original researchers did), a surprising claim that drew sarcastic responses. [See LW discussion.] [...]

Psychology has long been a punching bag for critics of “soft science,” but the field is actually leading the way in tackling a problem that is endemic throughout science. The replication issue of Social Psychology is just one example. The Association for Psychological Science is pushing for better reporting standards and more study of research practices, and at its annual meeting in May in San Francisco, several sessions on replication were filled to overflowing. International collaborations of psychologists working on replications, such as the Reproducibility Project and the Many Labs Replication Project (which was responsible for 13 of the 27 replications published in the special issue of Social Psychology) are springing up.

Even the most tradition-bound journals are starting to change. The Journal of Personality and Social Psychology—the same journal that, in 2011, refused to even consider replication studies—recently announced that although replications are “not a central part of its mission,” it’s reversing this policy. We wish that JPSP would see replications as part of its central mission and not relegate them, as it has, to an online-only ghetto, but this is a remarkably nimble change for a 50-year-old publication. Other top journals, most notable among them Perspectives on Psychological Science, are devoting space to systematic replications and other confirmatory research. The leading journal in behavior genetics, a field that has been plagued by unreplicable claims that particular genes are associated with particular behaviors, has gone even further: It now refuses to publish original findings that do not include evidence of replication.

A final salutary change is an overdue shift of emphasis among psychologists toward establishing the size of effects, as opposed to disputing whether or not they exist. The very notion of “failure” and “success” in empirical research is urgently in need of refinement. When applied thoughtfully, this dichotomy can be useful shorthand (and we’ve used it here). But there are degrees of replication between success and failure, and these degrees matter.

For example, suppose an initial study of an experimental drug for cardiovascular disease suggests that it reduces the risk of heart attack by 50 percent compared to a placebo pill. The most meaningful question for follow-up studies is not the binary one of whether the drug’s effect is 50 percent or not (did the first study replicate?), but the continuous one of precisely how much the drug reduces heart attack risk. In larger subsequent studies, this number will almost inevitably drop below 50 percent, but if it remains above 0 percent for study after study, then the best message should be that the drug is in fact effective, not that the initial results “failed to replicate.”
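The shrinkage described above, sometimes called the "winner's curse" of published effect sizes, can be sketched numerically. In the toy simulation below (illustrative numbers only, not taken from the article), a hypothetical drug has a modest true effect; small pilot studies count as "published" only when their estimate looks statistically significant, so the published estimates systematically overstate the effect, while a large follow-up recovers something close to the true value:

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2  # assumed true standardized effect of the hypothetical drug

def study(n):
    """Return the estimated effect from one study with n patients per arm."""
    drug    = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(drug) - statistics.mean(placebo)

# Small pilot studies: keep only the "significant-looking" ones, i.e. those
# whose estimate exceeds roughly two standard errors -- the ones that get published.
n_small = 25
se_small = math.sqrt(2 / n_small)
published = [e for e in (study(n_small) for _ in range(2000)) if e > 2 * se_small]

# A large, well-powered follow-up program.
big_followup = statistics.mean(study(2000) for _ in range(200))

print(f"true effect:                   {TRUE_EFFECT:.2f}")
print(f"mean published (small) effect: {statistics.mean(published):.2f}")  # inflated by selection
print(f"mean large-follow-up effect:   {big_followup:.2f}")                # close to the truth
```

Nothing here involves misconduct: the inflation comes purely from publishing only the studies whose noise happened to point the right way. That is why an estimate shrinking in larger follow-ups is the expected pattern, not evidence that the original effect was fake.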

[link] [poll] Future Progress in Artificial Intelligence

8 Pablo_Stafforini 09 July 2014 01:51PM

Vincent Müller and Nick Bostrom have just released a paper surveying the results of a poll of experts about future progress in artificial intelligence. The authors have also put up a companion site where visitors can take the poll and see the raw data. I just checked the site and so far only one individual has submitted a response. This provides an opportunity for testing the views of LW members against those of experts. So if you are willing to complete the questionnaire, please do so before reading the paper. (I have abstained from providing a link to the pdf to create a trivial inconvenience for those who cannot resist temptation. Once you take the poll, you can easily find the paper by conducting a Google search with the keywords: bostrom muller future progress artificial intelligence.)

[link] Nick Beckstead on improving disaster shelters to increase the chances of recovery from a global catastrophe

6 Pablo_Stafforini 19 February 2014 05:27PM

Nick Beckstead just published a post on disaster shelters over at the Effective Altruism Blog. Summary:

What is the problem? Civilization might not recover from some possible global catastrophes. Conceivably, people with access to disaster shelters or other refuges may be more likely to survive and help civilization recover. However, existing disaster shelters (sometimes built to ensure continuity of government operations and sometimes built to protect individuals), people working on submarines, largely uncontacted peoples, and people living in very remote locations may serve this function to some extent.

What are the possible interventions? Other interventions may also increase the chances that humanity would recover from a global catastrophe, but this review focuses on disaster shelters. Proposed methods of improving disaster shelter networks include stocking shelters with appropriately trained people and resources that would enable them to rebuild civilization in case of a near-extinction event, keeping some shelters constantly full of people, increasing food reserves, and building more shelters. A philanthropist could pay to improve existing shelter networks in the above ways, or they could advocate for private shelter builders or governments to make some of the improvements listed above.

Who else is working on it? Some governments maintain bunkers in order to maintain continuity of government and/or to protect their citizens. Some individuals purchase and maintain private disaster shelters.

Questions for further investigation: With the possible exception of a pandemic specifically engineered to kill all humans, I am aware of no scenario in which improved disaster shelters would plausibly enable a small group of people to survive a sudden near-extinction event. In the case of other catastrophes where a much larger number of people would survive, I would guess that improved refuges would play a relatively small role in helping humanity to recover because they would represent a small share of relevant people and resources. Many challenging questions about improving refuges remain, but I would prioritize investigating other issues at this point because refuges seem likely to be of limited value and alternative strategies (such as improving biosecurity and increasing the resilience of industrial and agricultural systems) seem more likely to effectively reduce the global catastrophic risks that improving refuges might plausibly address.

[link] Psychologists strike a blow for reproducibility

26 Pablo_Stafforini 28 November 2013 05:26AM

Link

A large international group set up to test the reliability of psychology experiments has successfully reproduced the results of 10 out of 13 past experiments. The consortium also found that two effects could not be reproduced.

To tackle this 'replicability crisis', 36 research groups formed the Many Labs Replication Project to repeat 13 psychological studies. The consortium combined tests from earlier experiments into a single questionnaire — meant to take 15 minutes to complete — and delivered it to 6,344 volunteers from 12 countries.

Project co-leader Brian Nosek, a psychologist at the Center for Open Science in Charlottesville, Virginia, finds the outcomes encouraging. “It demonstrates that there are important effects in our field that are replicable, and consistently so,” he says. “But that doesn’t mean that 10 out of every 13 effects will replicate.”

Kahneman agrees. The study “appears to be extremely well done and entirely convincing”, he says, “although it is surely too early to draw extreme conclusions about entire fields of research from this single effort”. Kahneman published an open letter in 2012 calling for a “daisy chain” of replications of studies on priming effects, in which subtle, subconscious cues can supposedly affect later behaviour.

Of the 13 effects under scrutiny in the latest investigation, one was only weakly supported, and two were not replicated at all. Both irreproducible effects involved social priming. In one of these, people had increased their endorsement of a current social system after being exposed to money. In the other, Americans had espoused more-conservative values after seeing a US flag.

Social psychologist Travis Carter of Colby College in Waterville, Maine, who led the original flag-priming study, says that he is disappointed but trusts Nosek’s team wholeheartedly, although he wants to review their data before commenting further. Behavioural scientist Eugene Caruso at the University of Chicago in Illinois, who led the original currency-priming study, says, “We should use this lack of replication to update our beliefs about the reliability and generalizability of this effect”, given the “vastly larger and more diverse sample” of the Many Labs project. Both researchers praised the initiative.

The plan for the Many Labs project was vetted by the original authors where possible, was documented openly, and was registered with the journal Social Psychology, and its methods were peer-reviewed before any experiments were done. The results have now been submitted to the journal and are available online. “That sort of openness should be the standard for all research,” says Daniel Simons of the University of Illinois at Urbana–Champaign, who is coordinating a similar collaborative attempt to verify a classic psychological effect not covered in the present study. “I hope this will become a standard approach in psychology.”

[Link] "A Long-run Perspective on Strategic Cause Selection and Philanthropy" by Nick Beckstead and Carl Shulman

5 Pablo_Stafforini 05 November 2013 06:27PM

A philanthropist who will remain anonymous recently asked us about what we would do if we didn’t face financial constraints. We gave a detailed answer that we thought we might as well share with others, who may also find our perspective interesting. We gave the answer largely in hope of creating some interest in our way of thinking about philanthropy and some of the causes that we find interesting for further investigation, and because we thought the answer would be fruitful for conversation.

Link

[link] "The Survival of Humanity"

1 Pablo_Stafforini 14 September 2013 03:19PM

The Survival of Humanity, by Lawrence Rifkin (September 13, 2013). Some excerpts:

An existential catastrophe would obliterate or severely limit the existence of all future humanity.

As defined by Nick Bostrom at Oxford University, an existential catastrophe is one which extinguishes Earth-originating intelligent life or permanently destroys a substantial part of its potential. As such it must be considered a harm of unfathomable magnitude, far beyond tragedy affecting those alive at the time. Because such risks jeopardize the entire future of humankind and conscious life, even relatively small probabilities, especially when seen statistically over a long period of time, may become significant in the extreme. It would follow that if such risks are non-trivial, the importance of existential catastrophes dramatically eclipses most of the social and political issues that commonly ignite our passions and tend to get our blood boiling today. [...]

One would think that if we are mobilized to fight for issues that affect a relatively small number of people, we would have an even stronger moral and social emotional motivation to prevent potential catastrophes that could kill or incapacitate the entire human population. But there are significant psychological barriers to overcome. People who would be emotionally crushed just hearing about a tortured child or animal may not register even the slightest emotional response when contemplating the idea that all human life may one day become extinct. As Eliezer Yudkowsky wrote, “The challenge of existential risks to rationality is that, the catastrophes being so huge, people snap into a different mode of thinking.” [...]

Here is a partial list of suggestions worthy of consideration. The idea here is not to advocate for some extreme survivalist or “Chicken Little” mentality, but rather to use reason, foresight, and judgment about how best to protect our future.

  • Create a larger worldwide stockpile of grains and other food reserves.
  • Support and prioritize global measures to detect, prevent, and halt emerging pandemic infectious diseases, such as the WHO’s The Global Outbreak Alert and Response Network.
  • Invest in technologies to discover and deflect large asteroids and comets on a deadly collision course with our planet.
  • Consider banning the synthesis and public publication of the genome sequences of deadly microorganisms such as smallpox and the 1918 influenza virus, thereby reducing the risks of bioterrorism or accidental release.
  • Maintain stores in multiple locations of wild plant species, seed banks, and gene banks to safeguard genetic diversity.
  • Invest in space station research. Because of the Sun’s ultimate expansion heating up the planet, Earth will become uninhabitable for humans in about 1-1.5 billion years (it will become uninhabitable for all life on Earth several billion years after that). This is, understandably, almost too long from now to contemplate. Nonetheless, our best (and possibly only) chance for survival in the very distant future may be to live in space or to colonize other planets or moons.
  • Create strains of agricultural species better able to withstand major environmental change and threats.
  • Continue to strive towards scientific accuracy in predicting climate change effects, and work towards renewable energy sources, sustainable use, technological solutions, and other measures to prevent potential climate catastrophes. Human-caused environmental changes that increase the risk of global pandemics deserve particular attention.
  • Develop appropriate oversight of new molecular manufacturing technologies.
  • Prioritize international cooperation to reduce nuclear proliferation, secure existing nuclear weapons, develop systems to minimize technological mishaps, and decrease the world’s nuclear armamentarium.
  • Maintain a well-chosen small number of people in a deep, well protected refugee sanctuary, with adequate supplies to last for years to buffer against human extinction from a range of causes. Genetically diverse international volunteers who live in such a bunker could be rotated, say, every two months. A similar Noah’s ark refuge could be established on a space station.
  • Work towards changing the social conditions that foster ideological absolutism.
  • Promote evidence-based thinking and reason at all levels of society.
  • Plan in detail to quickly produce and administer vaccines and other medical interventions during a pandemic.
The idea is not that we should do all these, but that the issue deserves our very highest consideration. 

[video] "Transhuman", featuring Sandberg and Bostrom

6 Pablo_Stafforini 16 June 2013 06:59PM

Transhuman, a 20-minute Dutch documentary about Anders Sandberg, is now available online.  From Sandberg's blog:

Here is the chance to see my beetle collection, my grubby kitchen, and an absolutely stunning combination between a supercomputer center and an Oxford library. Oh, and some discussion about transhumanism and the meaning of life too.

The documentary features Nick Bostrom at 9:23-10:32 and at 19:28-19:37. There are also cameo appearances by Eliezer Yudkowsky, Carl Shulman, Anna Salamon, Toby Ord, William MacAskill, David Pearce, and Stuart Armstrong.

[link] Join Wall Street. Save the World

23 Pablo_Stafforini 31 May 2013 04:49PM

A very interesting article on "earning to give", featuring LessWrong members Jeff Kaufman, Julia Wise, Holden Karnofsky, William MacAskill, and Toby Ord. Some excerpts:

Jason Trigg went into finance because he is after money — as much as he can earn.

The 25-year-old certainly had other career options. An MIT computer science graduate, he could be writing software for the next tech giant. Or he might have gone into academia in computing or applied math or even biology. He could literally be working to cure cancer.

Instead, he goes to work each morning for a high-frequency trading firm. It’s a hedge fund on steroids. He writes software that turns a lot of money into even more money. For his labors, he reaps an uptown salary — and over time his earning potential is unbounded. It’s all part of the plan.

Why this compulsion? It’s not for fast cars or fancy houses. Trigg makes money just to give it away. His logic is simple: The more he makes, the more good he can do. [...]

Two former analysts at the mega-hedge fund Bridgewater Associates have worked to change that. Holden Karnofsky and Elie Hassenfeld created GiveWell, a nonprofit that analyzes charities to help people decide where to give, rather than how much to give. They take into account, for instance, that a malaria donation can save a life, while a check sent to the New York City Ballet probably cannot. (Although it may produce a slightly better version of “Swan Lake.”) [...]

Take Jeff Kaufman. A Cambridge, Mass.-based developer at Google, Kaufman and his wife, Julia Wise, managed to live on $10,000 in 2012, they say. Together, they give away at least 45 percent of their income each year (the rest goes to savings and taxes). Kaufman and Wise meticulously document their spending on their blogs. In 2010, for example, they spent a measly $164.44 on groceries each month and gave themselves $38 apiece to spend each week on nonessentials (including all non-grocery meals). In 2012, they moved in with Jeff’s family, which saved even more money, they say. [...]

If GiveWell makes the empirical argument to the public, Giving What We Can makes the moral one.

Toby Ord, the founder, is an Australian philosopher teaching at Oxford. That’s hardly an accident. Oxford’s philosophy department is chock-full of consequentialists, or ethicists who think morality is about maximizing the good, however one defines “good.”

The group conducts charity evaluations and is a grass-roots network for those trying to live the consequentialist lifestyle. At least in Britain, the idea took off fast, and not just with avowed consequentialists and utilitarians.

The group has been profiled across Britain, in the Guardian, the Daily Mail and the BBC. The initial coverage focused on Ord’s promise in 2010 to give £1 million (or $1.5 million) to charity over his life, a tall order for an Oxford fellow making $50,000 a year. But somewhere along the line, Ord’s colleague and charity co-founder Will MacAskill hit upon an even catchier pitch. At the height of the Occupy movement in late 2011, he gave a talk at Oxford titled: “Want an ethical career? Become a banker.”

MacAskill, like Trigg, realized that percentages don’t matter. Absolutes do. Ord may be able to give $1.5 million over the course of his life, but Goldman Sachs chief executive Lloyd Blankfein made more than $15 million in 2012 alone. Before the crisis, Blankfein was clearing $50 million annually. And investment bankers don’t even get the biggest cut. Hedge fund manager John Paulson made $5 billion in 2010. Suppose Paulson were to keep his job, move to a studio in Hoboken, reduce his living expenses to $30,000 a year, and give the rest of the $5 billion away. He could save 3,000 times as many lives in a year as Ord could save in 80 years. So why not enter finance with the express goal of using earnings to save lives? [...]

It’s hard to imagine a 25-year-old Peter Singer envisioning that an article he published in Philosophy and Public Affairs would push people like Jason Trigg into the financial sector.

But the 66-year-old Singer of today welcomes the result. In between fending off religious opponents and helping lead the animal rights movement, he’s been doing a fair bit of giving advocacy himself. He has his own group, The Life You Can Save, spun off from his book of the same name, which also organizes at universities and works as an informal ally of Giving What We Can and 80,000 Hours.

And he embraces earning-to-give as among the most ethical career choices one can make, more moral than his own, even. “There is a relatively small group of philosophers who actually have a big influence,” he says from his home in Australia. “But otherwise, the marginal difference that you’re going to make as a professor of philosophy compared to somebody else is not all that great.”
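The "3,000 times" comparison in the excerpt above can be checked with simple arithmetic. A minimal sanity check, assuming (as the article does) that Ord gives roughly $1.5 million over his lifetime, that Paulson could give away his $5 billion minus about $30,000 in living expenses in a single year, and that the cost per life saved is constant:

```python
# Rough sanity check of the quoted "3,000 times" figure.
# All numbers are taken from the article; constant cost per life is assumed.
ord_lifetime_giving = 1.5e6            # ~$1.5M pledged over ~80 years
paulson_annual_giving = 5e9 - 30_000   # ~$5B minus ~$30k living expenses

ratio = paulson_annual_giving / ord_lifetime_giving
print(round(ratio))  # ~3333, i.e. on the order of 3,000
```

So the article's figure is, if anything, slightly conservative under its own assumptions.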

[Paper] On the 'Simulation Argument' and Selective Scepticism

11 Pablo_Stafforini 18 May 2013 06:31PM

Jonathan Birch recently published an interesting critique of Bostrom's simulation argument. Here's the abstract:

Nick Bostrom’s ‘Simulation Argument’ purports to show that, unless we are confident that advanced ‘posthuman’ civilizations are either extremely rare or extremely rarely interested in running simulations of their own ancestors, we should assign significant credence to the hypothesis that we are simulated. I argue that Bostrom does not succeed in grounding this constraint on credence. I first show that the Simulation Argument requires a curious form of selective scepticism, for it presupposes that we possess good evidence for claims about the physical limits of computation and yet lack good evidence for claims about our own physical constitution. I then show that two ways of modifying the argument so as to remove the need for this presupposition fail to preserve the original conclusion. Finally, I argue that, while there are unusual circumstances in which Bostrom’s selective scepticism might be reasonable, we do not currently find ourselves in such circumstances. There is no good reason to uphold the selective scepticism the Simulation Argument presupposes. There is thus no good reason to believe its conclusion.

The paper is behind a paywall, but I have uploaded it to my shared Dropbox folder, here.

EDIT: I emailed the author and am glad to see that he's decided to participate in the discussion below.
