
I'm the new moderator

83 NancyLebovitz 13 January 2015 11:21PM

Viliam Bur made the announcement in Main, but not everyone checks Main, so I'm repeating it here.

During the following months my time and attention will be heavily occupied by some personal stuff, so I will be unable to function as a LW moderator. The new LW moderator is... NancyLebovitz!

From today, please direct all your complaints and investigation requests to Nancy. Please, not everyone during the first week. That can be a bit frightening for a new moderator.

There are a few old requests I haven't completed yet. I will try to close everything during the following days, but if I don't finish by the end of January, I will forward the unfinished cases to Nancy, too.

Long live the new moderator!

Apptimize -- rationalist startup hiring engineers

64 nancyhua 12 January 2015 08:22PM

Apptimize is a 2-year-old startup closely connected with the rationalist community, one of the first founded by CFAR alumni.  We make “lean” possible for mobile apps -- our software lets mobile developers update or A/B test their apps in minutes, without submitting to the App Store. Our customers include big companies such as Nook and eBay, as well as Top 10 apps such as Flipagram. When companies evaluate our product against competitors, they’ve chosen us every time.


We work incredibly hard, and we’re striving to build the strongest engineering team in the Bay Area. If you’re a good developer, we have a lot to offer.


Team

  • Our team of 14 includes 7 MIT alumni, 3 ex-Googlers, 1 Wharton MBA, 1 CMU CS alum, 1 Stanford alum, 2 MIT Masters, 1 MIT Ph.D. candidate, and 1 “20 Under 20” Thiel Fellow. Our CEO was also just named to the Forbes “30 Under 30”

  • David Salamon, Anna Salamon’s brother, built much of our early product

  • Our CEO is Nancy Hua, while our Android lead is "20 under 20" Thiel Fellow James Koppel. They met after James spoke at the Singularity Summit

  • HP:MoR is required reading for the entire company

  • We evaluate candidates on curiosity even before evaluating them technically

  • Seriously, our team is badass. Just look

Self Improvement

  • You will have huge autonomy and ownership over your part of the product. You can set up new infrastructure and tools, expense business products and services, and even subcontract some of your tasks if you think it's a good idea

  • You will learn to be a more goal-driven agent, and understand the impact of everything you do on the rest of the business

  • Access to our library of over 50 books and audiobooks, and the freedom to purchase more

  • Everyone shares insights they’ve had every week

  • Self-improvement is so important to us that we only hire people committed to it. When we say that it’s a company value, we mean it

The Job

  • Our mobile engineers dive into the dark, undocumented corners of iOS and Android, while our backend crunches data from billions of requests per day

  • Engineers get giant monitors, a top-of-the-line MacBook Pro, and we’ll pay for whatever else is needed to get the job done

  • We don’t demand prior experience, but we do demand the fearlessness to jump outside your comfort zone and job description. That said, our website uses AngularJS, jQuery, and nginx, while our backend uses AWS, Java (the good parts), and PostgreSQL

  • We don’t have gratuitous perks, but we have what counts: Free snacks and catered meals, an excellent health and dental plan, and free membership to a gym across the street

  • Seriously, working here is awesome. As one engineer puts it, “we’re like a family bent on taking over the world”


If you’re interested, send some Bayesian evidence that you’re a good match to jobs@apptimize.com

Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial

52 ciphergoth 15 January 2015 04:33PM

We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity. 

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI-researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk's donation aims to support precisely this type of research: "Here are all these leading AI researchers saying that AI safety is important", says Elon Musk. "I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity." 

[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI-researchers Stuart Russell and Francesca Rossi. [...]

The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy  (a detailed list of examples can be found here [PDF]). "Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere", says FLI co-founder Viktoriya Krakovna. 

[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories. 

Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday January 15, 2015

Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"

47 ciphergoth 22 January 2015 08:21PM

Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?

Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.

"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21

'Dumb' AI observes and manipulates controllers

33 Stuart_Armstrong 13 January 2015 01:35PM

The argument that AIs provided with a reward channel will observe their controllers and learn to manipulate them is a valid one. Unfortunately, it's often framed in a way that feels counterintuitive or extreme, especially to AI designers. It typically starts with the standard reinforcement learning scenario, then posits that the AI becomes superintelligent and either manipulates the controller with super-social powers, or breaks out and gains control of its reward channel, killing or threatening its controllers.

And that is a fair argument. But conceptually, it leaps from a standard reinforcement learning scenario to a science-fiction-sounding scenario. It might help to have intermediate scenarios: to show that even lower-intelligence AIs might start exhibiting the same sort of behaviour, long before they get to superintelligence.

So consider the following scenario. Some complex, trainable AI is tasked with writing automated news stories for a student newspaper. It trawls the web and composes its stories, then gets reward and feedback from the editors. Assume there are two editors for this newspaper, and they work on alternate days. The two editors have somewhat different ideas as to what constitutes a good story, so their feedback differs. After a while, the AI finds that it gets higher reward by using a certain style on Mondays, Wednesdays and Fridays, and another style on Tuesdays and Thursdays - this is a simple consequence of its reward mechanism.

But the rota isn't perfect. Sometimes the Monday editor will edit a story so late on Monday that it's Tuesday, and sometimes the Tuesday editor will be up early and edit a story at the same time. This pattern is more common during the holidays, when the editors go to different time zones, often travel, and become more irregular in their editing.

So if the AI is complex and skilled enough, then, simply through feedback, it will start building up a picture of its editors. It will figure out when they are likely to stick to a schedule, and when they will be more irregular. It will figure out the difference between holidays and non-holidays. Given time, it may be able to track the editors' moods, and it will certainly pick up on any major change in their lives - such as romantic relationships and breakups, which will radically change whether and how it should present stories with a romantic focus.

It will also likely learn the correlation between stories and feedback - maybe presenting a story defined roughly as "positive" will increase subsequent reward for the rest of the day, on all stories. Or maybe this will only work on a certain editor, or only early in the term. Or only before lunch.

Thus the simple trainable AI with a particular focus - write automated news stories - will be trained, through feedback, to learn about its editors/controllers, to distinguish them, to get to know them, and, in effect, to manipulate them.
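None of this requires anything beyond ordinary reward maximization. As a toy illustration (my own sketch, not from the post), here is a minimal epsilon-greedy bandit whose only context is the day of the week; the editors' tastes and reward values are invented for the example, but editor-specific behaviour emerges all the same:

```python
import random
from collections import defaultdict

# Toy setup: two editors with different tastes, working on alternate days.
# The preferences and reward values are invented for illustration.
STYLES = ["dry", "breezy"]

def editor_feedback(day, style):
    # The Mon/Wed/Fri editor rewards "dry"; the Tue/Thu editor rewards "breezy".
    preferred = "dry" if day in ("Mon", "Wed", "Fri") else "breezy"
    return 1.0 if style == preferred else 0.0

values = defaultdict(float)   # running reward estimate per (day, style)
counts = defaultdict(int)

def pick_style(day, epsilon=0.1):
    # Epsilon-greedy: mostly exploit the best-known style for this day.
    if random.random() < epsilon:
        return random.choice(STYLES)
    return max(STYLES, key=lambda s: values[(day, s)])

for _ in range(10000):
    day = random.choice(["Mon", "Tue", "Wed", "Thu", "Fri"])
    style = pick_style(day)
    reward = editor_feedback(day, style)
    counts[(day, style)] += 1
    # Incremental mean update of the reward estimate.
    values[(day, style)] += (reward - values[(day, style)]) / counts[(day, style)]

for day in ("Mon", "Tue", "Wed", "Thu", "Fri"):
    print(day, {s: round(values[(day, s)], 2) for s in STYLES})
```

Nothing in the code refers to editors at all: the agent distinguishes them only because the day of the week predicts reward, which is exactly the point.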

This may be a useful "bridging example" between standard RL agents and superintelligent machines.

Overpaying for happiness?

31 cousin_it 01 January 2015 12:22PM

Happy New Year, everyone!

In the past few months I've been thinking several thoughts that all seem to point in the same direction:

1) People who live in developed Western countries usually make and spend much more money than people in poorer countries, but aren't that much happier. It feels like we're overpaying for happiness, spending too much money to get a single bit of enjoyment.

2) When you get enjoyment from something, the association between "that thing" and "pleasure" in your mind gets stronger, but at the same time it becomes less sensitive and requires more stimulus. For example, if you like sweet food, you can get into a cycle of eating more and more food that's sweeter and sweeter. But the guy next door, who's eating much less and periodically fasting to keep the association fresh, is actually getting more pleasure from food than you! The same thing happens when you learn to deeply appreciate certain kinds of art: the folks who enjoy "low" art are visibly having more fun.

3) People sometimes get unrealistic dreams and endlessly chase them, like trying to "make it big" in writing or sports, because they randomly got rewarded for it at an early age. I wrote a post about that.

I'm not offering any easy answers here. But it seems like too many people get locked in loops where they spend more and more effort to get less and less happiness. The most obvious examples are drug addiction and video gaming, but "one-itis" in dating, overeating, being a connoisseur of anything, and striving for popular success all follow the same pattern. You're just chasing after some Skinner-box thing that you think you "love", but it doesn't love you back.

Sooo... if you like eating, give yourself a break every once in a while? If you like comfort, maybe get a cold shower sometimes? Might be a good idea to make yourself the kind of person that can get happiness cheaply.

Sorry if this post is not up to LW standards, I typed it really quickly as it came to my mind.

CFAR fundraiser far from filled; 4 days remaining

29 AnnaSalamon 27 January 2015 07:26AM

We're 4 days from the end of our matching fundraiser, and still only about a third of the way to our target (and to the point where pledged funds would cease being matched).

If you'd like to support the growth of rationality in the world, do please consider donating, or asking me any questions you may have.  I'd love to talk.  I suspect funds donated to CFAR between now and Jan 31 are quite high-impact.

As a random bonus, I promise that if we meet the $120k matching challenge, I'll post at least two posts with some never-before-shared (on here) rationality techniques that we've been playing with around CFAR.

Research Priorities for Artificial Intelligence: An Open Letter

23 jimrandomh 11 January 2015 07:52PM

The Future of Life Institute has published their document Research priorities for robust and beneficial artificial intelligence and written an open letter for people to sign indicating their support.

Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.

 

Memes and Rational Decisions

23 inferential 09 January 2015 06:42AM

In 2004, Michael Vassar gave the following talk about how humans can reduce existential risk, titled Memes and Rational Decisions, to some transhumanists. It is well-written and gives actionable advice, much of which is unfamiliar to the contemporary Less Wrong zeitgeist.

Although transhumanism is not a religion, advocating as it does the critical analysis of any position, it does have certain characteristics which may lead to its identification as such by concerned skeptics. I am sure that everyone here has had to deal with this difficulty, and as it is a cause of perplexity for me I would appreciate it if anyone who has some suggested guidelines for interacting honestly with non-transhumanists would share them at the end of my presentation.

It seems likely to me that each of our minds contains either meme complexes or complex functional adaptations which have evolved to identify “religious” thoughts and to neutralize their impact on our behavior. Most brains respond to these memes by simply rejecting them. Others, however, neutralize such memes simply by not acting according to the conclusions that should be drawn from them. In almost any human environment prior to the 20th century this religious hypocrisy would have been a vital cognitive trait for every selectively fit human. People who took in religious ideas and took them too seriously would end up sacrificing their lives overly casually at best, and at worst would become celibate priests. Unfortunately, these memes are no more discriminating than the family members and friends who tend to become concerned for our sanity in response to their activity. Since we are generally infested with the same set of memes, we genuinely are liable to insanity, though not of the suspected sort.

A man who is shot by surprise is not particularly culpable for his failure to dodge or otherwise protect himself, though perhaps he should have signed up with Alcor. A hunter-gatherer who confronts an aggressive European with a rifle for the first time can also receive sympathy when he is slain by the magic wand that he never expected to actually work. By contrast, a modern Archimedes who ignores a Roman soldier’s request that he cease from his geometric scribbling is truly a madman. Most of the people of the world, unaware of molecular nanotechnology and of the potential power of recursively self-improving AI, are in a position roughly analogous to that of the first man. The business and political figures who dismiss eternal life and global destruction alike as plausible scenarios are in the position of the second man. By contrast, it is we transhumanists who are for the most part playing the part of Archimedes. With death, mediated by technologies we understand full well, staring us in the face, we continue our pleasant intellectual games. At best a few percent of us have adopted the demeanor of an earlier Archimedes and transferred our attention from our choice activities to other, still interesting endeavors which happen to be vital to our own survival. The rest are presumably acting as puppets of the memes which react to the prospect of immortality by isolating the associated meme-complex and suppressing its effects on actual activity.

OK, so most of us don't seem to be behaving in an optimal manner. What manner would be optimal? This ISN'T a religion, remember? I can't tell you that. At best I can suggest an outline of the sort of behavior that seems to me least likely to lead to this region of space becoming the center of a sphere of tiny smiley faces expanding at the speed of light.

The first thing that I can suggest is that you take rationality seriously. Recognize how far you have to go. Trust me; the fact that you can't rationally trust me without evidence is itself a demonstration that at least one of us isn't even a reasonable approximation of rational, as demonstrated by Robin Hanson and Tyler Cowen of George Mason University in their paper on rational truth-seekers. The fact is that humans don't appear capable of approaching perfect rationality to anything like the degree to which most of you probably believe you have approached it. Nobel Laureate Daniel Kahneman and Amos Tversky provided a particularly valuable set of insights into this fact with their classic book Judgment Under Uncertainty: Heuristics and Biases and in subsequent works. As a trivial example of the uncertainty that humans typically exhibit, try these tests. (Offer some tests from Judgment Under Uncertainty.)

I hope that I have made my point. Now let me point out some of the typical errors of transhumanists who have decided to act decisively to protect the world they care about from existential risks. After deciding to rationally defer most of the fun things that they would like to do for a few decades until the world is relatively safe, it is completely typical to either begin some quixotic quest to transform human behavior on a grand scale over the course of the next couple decades or to go raving blithering Cthulhu-worshiping mad and try to build an artificial intelligence. I will now try to discourage such activities.

One of the first rules of rationality is not to irrationally demand that others be rational. Demanding that someone make a difficult mental transformation has never once led them to make said transformation. People have a strong evolved desire to make other people accept their assertions and opinions. Before you let the thought cross your mind that a person is not trying to be rational, I would suggest that you consider the following. If you and your audience were both trying to be rational, you would be mutually convinced of EVERY position that the members of your audience had on EVERY subject, and vice versa. If this does not seem like a plausible outcome then one of you is not trying to be rational, and it is silly to expect a rational outcome from your discussion. By all means, if a particular person is in a position to be helpful, try to blunder past the fact of your probably mutual unwillingness to be rational; in a particular instance it is entirely possible that ordinary discussion will lead to the correct conclusion, though it will take hundreds of times longer than it would if the participants were able to abandon the desire to win an argument as a motivation separate from the desire to reach the correct conclusion. On the other hand, when dealing with a group of people, or with an abstract class of people, Don't Even Try to influence them with what you believe to be a well-reasoned argument. This has been scientifically shown not to work, and if you are going to try to simply will your wishes into being you may as well debate the nearest million carbon atoms into forming an assembler and be done with it, or perhaps convince your own brain to become transhumanly intelligent. Hey, it's your brain; if you can't convince it to do something contrary to its nature that you want it to do, is it likely that you can convince the brains of many other people to do something contrary to their natures that they don't want to do, just by generating a particular set of vocalizations?

My recommendation that you not make an AI is slightly more urgent. Attempting to transform the behavior of a substantial group of people via a reasoned argument is a silly and superstitious act, but it is still basically a harmless one. On the other hand, attempts by geniuses of ordinary physics-Nobel-laureate quality to program AI systems are not only astronomically unlikely to succeed, but in the shockingly unlikely event that they do succeed they are almost equally likely to leave nothing of value in this part of the universe. If you think you can do this safely despite my warning, here are a few things to consider:

  1. A large fraction of the greatest computer scientists and other information scientists in history have done work on AI, but so far none of them have begun to converge on even the outlines of a theory or succeeded in matching the behavioral complexity of an insect, despite the fantastic military applications of even dragonfly-equivalent autonomous weapons.
  2. Top philosophers, pivotal minds in the history of human thought, have consistently failed to converge on ethical policy.
  3. Isaac Asimov, history's most prolific writer and Mensa's honorary president, attempted to formulate a more modest set of ethical precepts for robots and instead produced the blatantly suicidal three laws (if you don't see why the three laws wouldn't work, I refer you to the Singularity Institute for Artificial Intelligence's campaign against the three laws).
  4. Science fiction authors as a class, a relatively bright crowd by human standards, have subsequently thrown more time into considering the question of machine ethics than they have any other philosophical issue other than time travel, yet have failed to develop anything more convincing than the three laws.
  5. AI ethics cannot be arrived at either through dialectic (critical speculation) or through the scientific method. The first method fails to distinguish between an idea that will actually work and the first idea you and your friends couldn't rapidly see big holes in, influenced as you were by your specific desire for a cool-sounding idea to be correct and your more general desire to actually realize your AI concept, saving the world and freeing you to devote your life to whatever you wish. The second method is crippled by the impossibility of testing a transhumanly intelligent AI (because it could by definition trick you into thinking it had passed the test) and by the irrelevance of testing an ethical system on an AI without transhuman intelligence. Ask yourself: how constrained would your actions be if you were forced to obey the code of Hammurabi but had no other ethical impulses at all? Now keep in mind that Hammurabi was actually FAR more like you than an AI will be. He shared almost all of your genes, your very high (by human standards) intellect, and the empathy that comes from an almost identical brain architecture, but his attempt at a set of rules for humans was a first try, just as your attempt at a set of rules for AIs would be.
  6. Actually, if you are thinking in terms of a set of rules AT ALL this implies that you are failing to appreciate both a programmer's control over an AI's cognition and an AI's alien nature. If you are thinking in terms of something more sophisticated, and bear in mind that apparently only one person has ever thought in terms of something more sophisticated so far, bear in mind that the first such "more sophisticated" theory was discovered on careful analysis to itself be inadequate, as was the second.

 

If you can't make people change, and you can't make an AI, what can you do to avoid being killed? As I said, I don't know. It's a good bet that money would help, as well as an unequivocal decision to make singularity strategy the focus of your life rather than a hobby. A good knowledge of cognitive psychology and of how people fail to be rational may enable you to better figure out what to do with your money, and may enable you to better coordinate your efforts with other serious and rational transhumanists without making serious mistakes. If you are willing to try, please let's keep in touch. Seriously, even if you discount your future at a very high rate, I think that you will find that living rationally and trying to save the world is much more fun and satisfying than the majority of stuff that even very smart people spend their time doing. It really, really beats pretending to do the same, yet even such pretending is, or once was, a very popular activity among top-notch transhumanists.

Aiming at true rationality will be very difficult in the short run, a period of time which humans who expect to live for less than a century are prone to consider the long run. It entails absolutely no social support from non-transhumanists, and precious little from transhumanists, most of whom will probably resent the implicit claim that they should be more rational. If you haven't already, it will also require you to put your everyday life in order and acquire the ability to interact positively with people of a less speculative character. You will get no VC or angel funding, terribly limited grant money, and in general no acknowledgement of any expertise you acquire. On the other hand, if you already have some worthwhile social relationships, you will be shocked by just how much these relationships improve when you dedicate yourself to shaping them rationally. The potential of mutual kindness, when even one partner really decides not to do anything to undermine it, shines absolutely beyond the dreams of self-help authors.

If you have not personally acquired a well-paying job, in the short term I recommend taking the actuarial tests. Actuarial positions, while somewhat boring, do provide practice in rationally analyzing data of a complexity that defies intuitive analysis or analytical automatonism. They also pay well, require no credentials other than tests in what should be mandatory material for anyone aiming at rationality, and have top job security in jobs that are easy to find and only require 40 hours per week of work. If you are competent with money, a few years in such a job should give you enough wealth to retire to some area with a low cost of living and analyze important questions. A few years more should provide the capital to fund your own research. If you are smart enough to build an AI's morality, it should be a breeze to burn through the 8 exams in a year, earn a six-figure income, and get returns on investment far better than Buffett does. On the other hand, doing that doesn't begin to suggest that you are smart enough to build an AI's morality. I'm not convinced that anything does.

Fortunately ordinary geniuses with practiced rationality can contribute a great deal to the task of saving the world. Even more fortunately, so long as they are rational they can co-operate very effectively even if they don't share an ethical system. Eternity is an intrinsically shared prize. On this task more than any other, the actual behavioral difference between an egoist, an altruist, or even a Kantian should fade to nothing. The hard part is actually being rational, which requires that you postpone the fun but currently irrelevant arguments until the pressing problem is solved, even perhaps with the full knowledge that you are actually probably giving them up entirely, as they may be about as interesting as watching moss grow post-singularity. Delaying gratification in this manner is not a unique difficulty faced by transhumanists. Anyone pursuing a long-term goal, such as a medical student or PhD candidate, does the same. The special difficulty that you will have to overcome is the difficulty of staying on track in the absence of social support or of appreciation of the problem, and the difficulty of overcoming your mind's anti-religion defenses, which will be screaming at you to cut out the fantasy and go live a normal life, with the normal empty set of beliefs about the future and its potential.

Another important difficulty to overcome is the desire for glory. It isn't important that the ideas that save the world be your ideas. What matters is that they be the right ideas. In ordinary life, the satisfaction that a person gains from winning an argument may usually be adequate compensation for walking away without having learned what they should have learned from the other side, but this is not the case when you elegantly prove to your opponent and yourself that the pie you are eating is not poisoned. Another glory-related concern is that of allowing science fiction to shape your expectations of the actual future. Yes, it may be fun and exciting to speculate on government conspiracies to suppress nanotech, but even if you are right, conspiracy theories don't have enough predictive power to test or to guide your actions. If you are wrong, you may well end up clinically paranoid. Conspiracy thrillers are pleasant silly fun. Go ahead and read them if you lack the ability to take the future seriously, but don't end up in an imaginary one; that is NOT fun.

Likewise, don't trust science fiction when it implies that you have decades or centuries left before the singularity. You might, but you don't know that; it all depends on who actually goes out and makes it happen. Above all, don't trust its depictions of the sequence in which technologies will develop or of the actual consequences of technologies that enhance intelligence. These are just some author's guesses. Worse still, they aren't even the author's best guesses; they are the result of a lop-sided compromise between the author's best guess and the set of technologies that best fit the story the author wants to tell. So you want to see Mars colonized before the singularity. That's common in science fiction, right? So it must be reasonably likely. Sorry, but that is not how a rational person estimates what is likely. Heuristics and Biases will introduce you to the representativeness heuristic: roughly speaking, the degree to which a scenario fits a preconceived mental archetype. People who haven't actively optimized their rationality typically use representativeness as their estimate of probability, because we are designed to do so automatically and so find it very easy. In the real world this doesn't work well. Pay attention to logical relationships instead.

Since I am attempting to approximate a rational person, I don't expect e-mails from any of you to show up in my in-box in a month or two requesting my cooperation on some sensible and realistic project for minimizing existential risk. I don't expect that, but I place a low certainty value on most of my expectations, especially regarding the actions of outlier humans. I may be wrong. Please prove me wrong. The opportunity to find that I am mistaken in my estimates of the probability of finding serious transhumanists is what motivated me to come all the way across the continent. I'm betting we all die in a flash due to the abuse of these technologies. Please help me to be wrong.

Compartmentalizing: Effective Altruism and Abortion

21 Dias 04 January 2015 11:48PM

Cross-posted on my blog and the effective altruism forum with some minor tweaks; apologies if some of the formatting hasn't copied across. The article was written with an EA audience in mind but it is essentially one about rationality and consequentialism.

Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily, this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on both sides of moral issues like the permissibility of abortion are significantly undermined or otherwise affected by EA considerations, especially moral uncertainty.

A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble. He later partially redacted it, and Anna has an excellent article on the subject, but at the very least decompartmentalizing is a very standard part of effective altruism.

Similarly, I think people selectively apply Effective Altruist (EA) principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person’s identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.

Consider your personal views. I’ve certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I’ve learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180 – and I think this is true of many people:

  • Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been passionately anti-aid?
  • Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?
  • Many think EA ideas argue against domestic causes – but did anyone come to this conclusion who had previously been a passionate nationalist?

Yet this is quite worrying. Given the power and scope of many EA ideas, it seems that they should lead to people changing their minds on issues where they had previously been very certain, and indeed emotionally involved.

Obviously we don’t need to apply EA principles to everything – we can probably continue to brush our teeth without need for much reflection. But we probably should apply them to issues with are seen as being very important: given the importance of the issues, any implications of EA ideas would probably be important implications.

Moral Uncertainty

In his PhD thesis, Will MacAskill argues that we should treat normative uncertainty in much the same way as ordinary positive uncertainty; we should assign credences (probabilities) to each theory, and then try to maximise the expected morality of our actions. He calls this idea ‘maximise expected choice-worthiness’, and if you’re into philosophy, I recommend reading the paper. As such, when deciding how to act we should give greater weight to the theories we consider more likely to be true, and also give more weight to theories that consider the issue to be of greater importance.
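In symbols (notation mine, following the thesis's framing): if C(T_i) is your credence in moral theory T_i, and CW_i(a) is the choice-worthiness that theory assigns to action a, the rule is to pick the action that maximises

```latex
\[
\mathrm{MEC}(a) \;=\; \sum_{i} C(T_i)\,\mathrm{CW}_i(a)
\]
```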

This is important because it means that a novel view does not have to be totally persuasive to demand our observance. Consider, for example, vegetarianism. Maybe you think there’s only a 10% chance that animal welfare is morally significant – you’re pretty sure they’re tasty for a reason. Yet if the consequences of eating meat are very bad in those 10% of cases (murder or torture, if the animal rights activists are correct), and the advantages are not very great in the other 90% (tasty, some nutritional advantages), we should not eat meat regardless. Taking into account the size of the issue at stake as well as probability of its being correct means paying more respect to ‘minority’ theories.

And this is more of an issue for EAs than for most people. Effective Altruism involves a group of novel moral premisses, like cosmopolitanism, the moral imperative for cost-effectiveness and the importance of the far future. Each of these implies that our decisions are in some way very important, so even if we assign them only a small credence, their plausibility implies radical revisions to our actions.

One issue that Will touches on in his thesis is the issue of whether fetuses morally count. In the same way that we have moral uncertainty as to whether animals, or people in the far future, count, so too we have moral uncertainty as to whether unborn children are morally significant. Yes, many people are confident they know the correct answer – but there are many such people on each side of the issue. Given the degree of disagreement on the issue, among philosophers, politicians and the general public, it seems like the perfect example of an issue where moral uncertainty should be taken into account – indeed Will uses it as a canonical example.

Consider the case of a pregnant woman, Sarah, wondering whether it is morally permissible to abort her child.[1] The alternative course of action she is considering is putting the child up for adoption. In accordance with the level of social and philosophical debate on the issue, she is uncertain as to whether aborting the fetus is morally permissible. If it’s morally permissible, it’s merely permissible – it’s not obligatory. She follows the example from Normative Uncertainty and constructs the following table:

|       | Abortion morally permissible | Abortion morally impermissible |
|-------|------------------------------|--------------------------------|
| Abort | Permissible                  | Impermissible                  |
| Adopt | Permissible                  | Permissible                    |

In the best case scenario, abortion has nothing to recommend it, as adoption is also permissible. In the worst case, abortion is actually impermissible, whereas adoption is permissible. As such, adoption dominates abortion.

However, Sarah might not consider this representation adequate. In particular, she thinks that now is not the best time to have a child, and would prefer to avoid it.[2] She has made plans which are inconsistent with being pregnant, and prefers not to give birth at the current time. So she amends the table to take these preferences into account.

|       | Abortion morally permissible | Abortion morally impermissible |
|-------|------------------------------|--------------------------------|
| Abort | Permissible, and preferred   | Impermissible                  |
| Adopt | Permissible                  | Permissible                    |

Now adoption no longer strictly dominates abortion, because she prefers abortion to adoption in the scenario where it is morally permissible. As such, she considers her credence: finding the pro-choice arguments slightly more persuasive than the pro-life ones, she assigns a 70% credence to abortion being morally permissible, and a 30% credence to its being morally impermissible.

Looking at the table with these numbers in mind, intuitively it seems that again it’s not worth the risk of abortion: a 70% chance of saving oneself inconvenience and temporary discomfort is not sufficient to justify a 30% chance of committing murder. But Sarah’s unsatisfied with this unscientific comparison: it doesn’t seem to have much of a theoretical basis, and she distrusts appeals to intuitions in cases like this. What is more, Sarah is something of a utilitarian; she doesn’t really believe in something being impermissible.

Fortunately, there’s a standard tool for making inter-personal welfare comparisons: QALYs. We can convert the previous table into QALYs, with the moral uncertainty now being expressed as uncertainty as to whether saving fetuses generates QALYs. If it does, then it generates a lot; supposing she’s at the end of her first trimester, if she doesn’t abort the baby it has a 98% chance of surviving to birth, at which point its life expectancy is 78.7 in the US, for 78.126 QALYs. This calculation assumes assigns no QALYs to the fetus’s 6 months of existence between now and birth. If fetuses are not worthy of ethical consideration, then it accounts for 0 QALYs.

We also need to assign QALYs to Sarah. For an upper bound, being pregnant is probably not much worse than having both your legs amputated without medication, which is 0.494 QALYs per year, so let’s conservatively say 0.494. She has an expected 6 months of pregnancy remaining, so we divide by 2 to get 0.247 QALYs. Women’s Health Magazine gives the odds of maternal death during childbirth at 0.03% for 2013; we’ll round up to 0.05% to take into account the risk of non-death injury. Women at 25 have a remaining life expectancy of around 58 years, so that’s 0.05% × 58 = 0.029 QALYs. In total that gives us an estimate of 0.247 + 0.029 = 0.276 QALYs. If the baby doesn’t survive to birth, however, some of these costs will not be incurred, so the truth is probably slightly lower than this. All in all, 0.276 QALYs seems like a reasonably conservative figure.

Obviously you could refine these numbers a lot (for example, years of old age are likely to be at lower quality of life, there are some medical risks to the mother from aborting a fetus, etc.) but they’re plausibly in the right ballpark. They would also change if we used inherent temporal discounting, but probably we shouldn’t.

|       | Fetus morally significant (30%) | Fetus not morally significant (70%) |
|-------|---------------------------------|-------------------------------------|
| Abort | −77.126                         | 0                                   |
| Adopt | −0.276                          | −0.276                              |

We can then take into account her moral uncertainty directly, and calculate the expected QALYs of each action:

  • If she aborts the fetus, our expected QALYs are 70% × 0 + 30% × (−77.126) = −23.138
  • If she carries the baby to term and puts it up for adoption, our expected QALYs are 70% × (−0.276) + 30% × (−0.276) = −0.276

Which again suggests that the moral thing to do is to not abort the baby. Indeed, the life expectancy is so long at birth that it quite easily dominates the calculation: Sarah would have to be extremely confident in rejecting the value of the fetus to justify aborting it. So, mindful of overconfidence bias, she decides to carry the child to term.
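Sarah's arithmetic is easy to check mechanically. Here is a minimal sketch (mine, not the post's) that reproduces the figures above; every number is one of the post's rough estimates, not an authoritative input:

```python
# Expected-QALY comparison for Sarah's decision, using the post's estimates.

baby_qalys = 0.98 * 78.7                    # 77.126: P(survive to birth) x life expectancy
pregnancy_cost = 0.494 / 2 + 0.0005 * 58    # 0.247 discomfort + 0.029 mortality risk = 0.276
p_fetus_counts = 0.30                       # Sarah's credence that fetuses matter morally

# Abortion loses the baby's QALYs in the worlds where the fetus counts;
# adoption costs the mother the remaining pregnancy in every world.
ev_abort = p_fetus_counts * -baby_qalys     # -23.138
ev_adopt = -pregnancy_cost                  # -0.276
print(ev_abort, ev_adopt)

# Break-even credence: abortion only comes out ahead when
# p * baby_qalys < pregnancy_cost, i.e. p < 0.276 / 77.126, about 0.36%.
print(pregnancy_cost / baby_qalys)
```

The break-even point makes the sensitivity analysis below easy to anticipate: Sarah would need well under a 1% credence in fetal moral significance before abortion became QALY-positive.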

Indeed, we can show just how confident in the lack of moral significance of the fetuses one would have to be to justify aborting one. Here is a sensitivity table, showing credence in moral significance of fetuses on the y axis, and the direct QALY cost of pregnancy on the x axis for a wide range of possible values. The direct QALY cost of pregnancy is obviously bounded above by its limited duration. As is immediately apparent, one has to be very confident in fetuses lacking moral significance, and pregnancy has to be very bad, before aborting a fetus becomes even slightly QALY-positive. For moderate values, it is extremely QALY-negative.

[Sensitivity table: credence in the moral significance of fetuses (y axis) against direct QALY cost of pregnancy (x axis).]

Other EA concepts and their applications to this issue

Of course, moral uncertainty is not the only EA principle that could have bearing on the issue, and given that the theme of this blogging carnival, and this post, is things we’re overlooking, it would be remiss not to give at least a broad overview of some of the others. Here, I don’t intend to judge how persuasive any given argument is – as we discussed above, this is a debate that has been going without settlement for thousands of years – but merely to show the ways that common EA arguments affect the plausibility of the different arguments. This is a section about the directionality of EA concerns, not on the overall magnitudes.

Not really people

One of the most important arguments for the permissibility of abortion is that fetuses are in some important sense ‘not really people’. In many ways this argument resembles the anti-animal-rights argument that animals are also ‘not really people’. We already covered above the way that considerations of moral uncertainty undermine both these arguments, but it’s also noteworthy that in general the two views seem to be mutually supporting (or mutually undermining, if both are false). Animal-rights advocates often appeal to the idea of an ‘expanding circle’ of moral concern. I’m skeptical of such an argument, but it seems clear that the larger your sphere, the more likely fetuses are to end up on the inside. The fact that, in the US at least, animal activists tend to be pro-abortion seems to be more of a historical accident than anything else. We could imagine alternative-universe political coalitions, where a “Defend the Weak; They’re morally valuable too” party faced off against an “Exploit the Weak; They just don’t count” party. In general, to the extent that EAs care about animal suffering (even insect suffering), EAs should tend to be concerned about the welfare of the unborn.

Not people yet

A slightly different common argument is that while fetuses will eventually be people, they’re not people yet. Since they’re not people right now, we don’t have to pay any attention to their rights or welfare right now. Indeed, many people make short-sighted decisions that implicitly assign very little value to the futures of people currently alive, or even to their own futures – through self-destructive drug habits, or simply failing to save for retirement. If we don’t assign much value to our own futures, it seems very sensible to disregard the futures of those not even born. And even if people who disregarded their own futures were simply negligent, we might still be concerned about things like the non-identity problem.

Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations. We put considerable resources into investigating how to help them, whether through addressing climate change or existential risks. And yet these people have far less of a claim to current personhood than fetuses, who at least have current physical form, even if it is diminutive. So again to the extent that EAs care about future welfare, EAs should tend to be concerned about the welfare of the unborn.

Replaceability

Another important EA idea is that of replaceability. Typically this arises in contexts of career choice, but there is a different application here. The QALYs associated with aborted children might not be so bad if the mother will go on to have another child instead. If she does, the net QALY loss is much lower than the gross QALY loss. Of course, the benefits of aborting the fetus are equivalently much smaller – if she has a child later on instead, she will have to bear the costs of pregnancy eventually anyway. This resembles concerns that maybe saving children in Africa doesn’t make much difference, because their parents adjust their subsequent fertility.

The plausibility behind this idea comes from the idea that, at least in the US, most families have a certain ideal number of children in mind, and basically achieve this goal. As such, missing an opportunity to have an early child simply results in having another later on.

If this were fully true, utilitarians might decide that abortion actually has no QALY impact at all – all it does is change the timing of events. On the other hand, fertility declines with age, so many couples planning to have a replacement child later may be unable to do so. Also, some people do not have ideal family size plans.

Additionally, this does not really seem to hold when the alternative is adoption; presumably a woman putting a child up for adoption does not consider it as part of her family, so her future childbearing would be unaffected. This argument might hold if raising the child yourself was the only alternative, but given that adoption services are available, it does not seem to go through.

Autonomy

Sometimes people argue for the permissibility of abortion through autonomy arguments. “It is my body”, such an argument would go, “therefore I may do whatever I want with it.” To a certain extent this argument is addressed by pointing out that one’s bodily rights presumably do not extend to killing others, so if the anti-abortion side is correct, or even has a non-trivial probability of being correct, autonomy would be insufficient. It seems that if the autonomy argument is to work, it must be because a different argument has established the non-personhood of fetuses – in which case the autonomy argument is redundant. Yet even putting this aside, this argument is less appealing to EAs than to non-EAs, because EAs often hold a distinctly non-libertarian account of personal ethics. We believe it is actually good to help people (and avoid hurting them), and perhaps that it is bad to avoid doing so. And many EAs are utilitarians, for whom helping/not-hurting is not merely laud-worthy but actually compulsory. EAs are generally not very impressed with Ayn Rand-style autonomy arguments for rejecting charity, so again EAs should tend to be unsympathetic to autonomy arguments for the permissibility of abortion.

Indeed, some EAs even think we should be legally obliged to act in good ways, whether through laws against factory farming or tax-funded foreign aid.

Deontology

An argument often used on the opposite side – that is, to oppose abortion – is that abortion is murder, and murder is simply always wrong. Whether because God commanded it or Kant derived it, we should place the utmost importance on never murdering. I’m not sure that any EA principle directly pulls against this, but nonetheless most EAs are consequentialists, who believe that all values can be compared. If aborting one child would save a million others, most EAs would probably endorse the abortion. So I think this is one case where a common EA view pulls in favor of the permissibility of abortion.

I didn’t ask for this

Another argument often used for the permissibility of abortion is that the situation is in some sense unfair. If one did not intend to become pregnant – perhaps even took precautions to avoid becoming so – but nonetheless ends up pregnant, one is in some way not responsible for the pregnancy. And since you’re not responsible for it, you have no obligations concerning it – so you may permissibly abort the fetus.

However, once again this runs counter to a major strand of EA thought. Most of us did not ask to be born in rich countries, or to be intelligent, or hardworking. Perhaps it was simply luck. Yet being in such a position nonetheless means we have certain opportunities and obligations. Specifically, we have the opportunity to use our wealth to significantly aid those less fortunate than ourselves in the developing world – and, many EAs would agree, the obligation. So EAs seem to reject the general idea that not intending a situation relieves one of the responsibilities of that situation.

Infanticide is okay too

A frequent argument against the permissibility of aborting fetuses is by analogy to infanticide. In general it is hard to produce a coherent criterion that permits the killing of babies before birth but forbids it after birth. For most people, this is a reasonably compelling objection: murdering innocent babies is clearly evil! Yet some EAs actually endorse infanticide. If you were one of those people, this particular argument would have little sway over you.

Moral Universalism

A common implicit premise in many moral discussions is that the same moral principles apply to everyone. When Sarah did her QALY calculation, she counted the baby’s QALYs as equally important to her own in the scenario where they counted at all. Similarly, both sides of the debate assume that whatever the answer is, it will apply fairly broadly. Perhaps permissibility varies by the age of the fetus – maybe ending when viability hits – but the same answer will apply to rich and poor, Christian and Jew, etc.

This is something some EAs might reject. Yes, saving the baby produces many more QALYs than Sarah loses through the pregnancy, and that would be the end of the story if Sarah were simply an ordinary person. But Sarah is an EA, and so has a much higher opportunity cost for her time. Becoming pregnant will undermine her career as an investment banker, the argument would go, which in turn prevents her from donating to AMF and saving a great many lives. Because of this, Sarah is in a special position – it is permissible for her, but it would not be permissible for someone who wasn’t saving many lives a year.

I think this is a pretty repugnant attitude in general, and a particularly objectionable instance of it, but I include it here for completeness.

May we discuss this?

Now that we’ve considered these arguments, it appears that applying general EA principles to the issue tends to make abortion look less morally permissible, though there were one or two exceptions. But there is also a second-order issue that we should perhaps address – is it permissible to discuss this issue at all?

Nothing to do with you

A frequently seen argument on this issue is to claim that the speaker has no right to opine on the issue. If it doesn’t personally affect you, you cannot discuss it – especially if you’re privileged. As many (a majority?) of EAs are male, and many of the women are not pregnant, this would dramatically curtail the ability of EAs to discuss abortion. This is not so much an argument for one side or the other of the issue as an argument for silence.

Leaving aside the inherent virtues and vices of this argument, it is not very suitable for EAs, because EAs have many, many opinions on topics that don’t directly affect them:

  • EAs have opinions on disease in Africa, yet most have never been to Africa, and never will
  • EAs have opinions on (non-human) animal suffering, yet most are not non-human animals
  • EAs have opinions on the far future, yet live in the present

Indeed, EAs seem more qualified to comment on abortion – as we all were once fetuses, and many of us will become pregnant. If taken seriously, this argument would call foul on virtually every EA activity! And this is no idle fantasy – there are certainly some people who think that Westerners cannot usefully contribute to solving African poverty.

Too controversial

We can safely say this is a somewhat controversial issue. Perhaps it is too controversial – maybe it is bad for the movement to discuss. One might accept the arguments above – that EA principles generally undermine the traditional reasons for thinking abortion is morally permissible – yet think we should not talk about it. The controversy might divide the community and undermine trust. Perhaps it might deter newcomers. I’m somewhat sympathetic to this argument – I take the virtue of silence seriously, though eventually my boyfriend persuaded me it was worth publishing.

Note that the controversial nature is evidence against abortion’s moral permissibility, due to moral uncertainty.

However, the EA movement is no stranger to controversy.

  • There is a semi-official EA position on immigration, which is about as controversial as abortion in the US at the moment, and the EA position is such an extreme position that essentially no mainstream politicians hold it.
  • There is a semi-official EA position on vegetarianism, which is pretty controversial too, as it involves implying that the majority of Americans are complicit in murder every day.

Not worthy of discussion

Finally, another objection to discussing this is that it simply isn’t an EA issue. There are many disagreements in the world, yet there is no need for an EA view on each. Conflict between the Lilliputians and Blefuscudians notwithstanding, there is no need for an EA perspective on which end of the egg to break first. And we should be especially careful of heated, emotional topics with less avenue to pull the rope sideways. As such, even if the object-level arguments given above are correct, we should simply decline to discuss it.

However, it seems that if abortion is a moral issue, it is a very large one. In the same way that the sheer number of QALYs lost makes abortion worse than adoption even if our credence in fetuses having moral significance is very low, the large number of abortions occurring each year makes the issue as a whole highly significant. In 2011, over 1 million babies were aborted in the US. I’ve seen a wide range of global estimates, from around 10 million to over 40 million. By contrast, the WHO estimates there are fewer than 1 million malaria deaths worldwide each year. Abortion deaths also cause a higher loss of QALYs due to the young age at which they occur. On the other hand, we should discount them for the uncertainty that they are morally significant. And perhaps there is an even larger, closely related moral issue. The size of the issue is not the only factor in estimating the cost-effectiveness of interventions, but it is the most easily estimable. On the other hand, I have little idea how many dollars of donations it takes to save a fetus – it seems like an excellent example of low-hanging fruit for research.

Conclusion

People frequently compartmentalize their beliefs, and avoid addressing the implications their beliefs have for one another. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important. In this post we examined the implications of common EA beliefs for the permissibility of abortion. Taking into account moral uncertainty makes aborting a fetus seem far less permissible, as the high counterfactual life expectancy of the baby tends to dominate other factors. Many other EA views are also significant to the issue, making various standard arguments on each side less plausible.

 


  1. There doesn’t seem to be any neutral language one can use here, so I’m just going to switch back and forth between ‘fetus’ and ‘child’ or ‘baby’ in a vain attempt at terminological neutrality. 
  2. I chose this reason because it is the most frequently cited main motivation for aborting a fetus according to the Guttmacher Institute. 

Low Hanging fruit for buying a better life

20 taryneast 06 January 2015 10:11AM

What can I purchase with $100 that will be the best thing I can buy to make my life better?

 

I've decided to budget some regular money to improving my life each month. I'd like to start with low hanging fruit for obvious reasons - but when I sat down to think of improvements, I found myself thinking of the same old things I'd already been planning to do anyway... and I'd like out of that rut.

Constraints/more info:

 

  1. be concrete. I know - "spend money on experiences" is a good idea - but which experiences are the best option to purchase *first*?
  2. "better" is deliberately left vague - choose how you would define it, so that I'm not constrained just by ways of "being better" that I'd have thought of myself.
  3. please assume that I have all my basic needs met (eg food, clothing, shelter) and that I have budgeted separately for things like investing for my financial future and for charity.
  4. apart from the above, assume nothing - especially don't try to tailor solutions to anything you might know and/or guess about me specifically, because I think this would be a useful resource for others who might have just begun.
  5. don't constrain yourself to exactly $100 - I could buy 2-3 things for that, or I could save up over a couple of months and buy something more expensive... I picked $100 because it's a round number and easy to imagine.
  6. it's ok to add "dumb" things - they can help spur great ideas, or just get rid of an elephant in the room.
  7. try thinking of your top-ten before reading any comments, in order not to bias your initial thinking. Then come back and add ten more once you've been inspired by what everyone else came up with.

 

Background:

This is a question I recently posed to my local Less Wrong group and we came up with a few good ideas, so I thought I'd share the discussion with the wider community and see what we can come up with. I'll add the list we came up with later on in the comments...

It'd be great to have a repository of low-hanging fruit for things that can be solved with (relatively affordable) amounts of money. I'd personally like to go through the list - look at candidates that sound like they'd be really useful to me and then make a prioritised list of what to work on first.

Recent AI safety work

20 paulfchristiano 30 December 2014 06:19PM

(Crossposted from ordinary ideas). 

I’ve recently been thinking about AI safety, and some of the writeups might be interesting to some LWers:

  1. Ideas for building useful agents without goals: approval-directed agents, approval-directed bootstrapping, and optimization and goals. I think this line of reasoning is very promising.
  2. A formalization of one piece of the AI safety challenge: the steering problem. I am eager to see more precise, high-level discussion of AI safety, and I think this article is a helpful step in that direction. Since articulating the steering problem I have become much more optimistic about versions of it being solved in the near term. This mostly means that the steering problem fails to capture the hardest parts of AI safety. But it’s still good news, and I think it may eventually cause some people to revise their understanding of AI safety.
  3. Some ideas for getting useful work out of self-interested agents, based on arguments: arguments and wagers, adversarial collaboration [older], and delegating to a mixed crowd. I think these are interesting ideas in an interesting area, but they have a ways to go until they could be useful.

I’m excited about a few possible next steps:

  1. Under the (highly improbable) assumption that various deep learning architectures could yield human-level performance, could they also predictably yield safe AI? I think we have a good chance of finding a solution---i.e. a design of plausibly safe AI, under roughly the same assumptions needed to get human-level AI---for some possible architectures. This would feel like a big step forward.
  2. For what capabilities can we solve the steering problem? I had originally assumed none, but I am now interested in trying to apply the ideas from the approval-directed agents post. From easiest to hardest, I think there are natural lines of attack using any of: natural language question answering, precise question answering, sequence prediction. It might even be possible using reinforcement learners (though this would involve different techniques).
  3. I am very interested in implementing effective debates, and am keen to test some unusual proposals. The connection to AI safety is more impressionistic, but in my mind these techniques are closely linked with approval-directed behavior.
  4. I’m currently writing up a concrete architecture for approval-directed agents, in order to facilitate clearer discussion about the idea. This is the kind of work that seems harder to do in advance, but at this point I think it’s mostly an exposition problem.

Immortality: A Practical Guide

19 G0W51 26 January 2015 04:17PM

Immortality: A Practical Guide

Introduction

This article is about how to increase one’s own chances of living forever or, failing that, living for a long time. To be clear, this guide defines death as the long-term loss of one’s consciousness and defines immortality as never-ending life. For those who would like less lengthy information on decreasing one’s risk of death, I recommend reading the sections “Can we become immortal,” “Should we try to become immortal,” and “Cryonics,” in this guide, along with the article Lifestyle Interventions to Increase Longevity.

This article does not discuss how to treat specific diseases you may have. It is not intended as a substitute for the medical advice of physicians. You should consult a physician with respect to any symptoms that may require diagnosis or medical attention. Additionally, I suggest considering using MetaMed to receive customized, albeit perhaps very expensive, information on your specific conditions, if you have any.

When reading about the effect sizes in scientific studies, keep in mind that many scientific studies report false-positives and are biased,101 though I have tried to minimize this by maximizing the quality of the studies used. Meta-analyses and scientific reviews seem to typically be of higher quality than other study types, but are still subject to biases.114

Corrections, criticisms, and suggestions for new topics are greatly appreciated. I’ve tried to write this article tersely, so feedback on doing so would be especially appreciated. Apologies if the article’s font type, size, and color aren’t standard on Less Wrong; I made it in Google Docs without being aware of Less Wrong’s standard, and it would take too much work to change the style of the entire article.

 

Contents

  1. Can we become immortal?

  2. Should we try to become immortal?

  3. Relative importance of the different topics

  4. Food

    1. What to eat and drink

    2. When to eat and drink

    3. How much to eat

    4. How much to drink

  5. Exercise

  6. Carcinogens

    1. Chemicals

    2. Infections

    3. Radiation

  7. Emotions and feelings

    1. Positive emotions and feelings

    2. Psychological distress

    3. Stress

    4. Anger and hostility

  8. Social and personality factors

    1. Social status

    2. Giving to others

    3. Social relationships

    4. Conscientiousness

  9. Infectious diseases

    1. Dental health

  10. Sleep

  11. Drugs

  12. Blood donation

  13. Sitting

  14. Sleep apnea

  15. Snoring

  16. Exams

  17. Genomics

  18. Aging

  19. External causes of death

    1. Transport accidents

    2. Assault

    3. Intentional self harm

    4. Poisoning

    5. Accidental drowning

    6. Inanimate mechanical forces

    7. Falls

    8. Smoke, fire, and heat

    9. Other accidental threats to breathing

    10. Electric current

    11. Forces of nature

  20. Medical care

  21. Cryonics

  22. Money

  23. Future advancements

  24. References

 

Can we become immortal?

In order to potentially live forever, one never needs to make it impossible to die; one instead just needs to have one’s life expectancy increase faster than time passes, a concept known as the longevity escape velocity.61 For example, if one had a 10% chance of dying in their first century of life, but their chance of death decreased by 90% at the end of each century, then one’s chance of ever dying would be 0.1 + 0.1² + 0.1³ + … = 0.111… ≈ 11.11%. When applied to risk of death from aging, this is akin to one’s remaining life expectancy after jumping off a cliff while being affected by gravity and jet propulsion, with gravity being akin to aging and jet propulsion being akin to anti-aging (rejuvenation) therapies, as shown below.
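
For the numerically inclined, here is a minimal sketch of that geometric series in Python; the 10% starting hazard and 90%-per-century decline are the illustrative numbers from the paragraph above:

```python
# Chance of ever dying if the per-century chance of death starts at 10%
# and falls by 90% each subsequent century: 0.1 + 0.1^2 + 0.1^3 + ...
p_ever_dying = sum(0.1 ** century for century in range(1, 200))
print(f"Chance of ever dying: {p_ever_dying:.4%}")  # ~11.1111%, i.e. 1/9
```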

The numbers in the above figure denote plausible ages of individuals when the first rejuvenation therapies arrive. A 30% increase in healthy lifespan would give the users of first-generation rejuvenation therapies 20 years to benefit from second-generation rejuvenation therapies, which could give an additional 30% increase in life span, ad infinitum.61

As for causes of death, many deaths are strongly age-related. The proportion of deaths that are caused by aging in the industrial world approaches 90%.53 Thus, I suppose postponing aging would drastically increase life expectancy.

As for efforts against aging, the SENS Research Foundation and Science for Life Extension are charitable foundations working to cure aging.54, 55 Additionally, Calico, a Google-backed company, and AbbVie, a large pharmaceutical company, have each committed $250 million to cure aging.56

I speculate that one could additionally decrease risk of death by becoming a cyborg, as mechanical bodies seem easier to maintain than biological ones, though I’ve found no articles discussing this.

Similar to becoming a cyborg, another potential method of decreasing one’s risk of death is mind uploading, which is, roughly speaking, the transfer of most or all of one’s mental contents into a computer.62 However, there are some concerns about the transfer creating a copy of one’s consciousness, rather than preserving the same consciousness. This issue is made very apparent if the mind-uploading process leaves the original mind intact, making it seem unlikely that one’s consciousness was transferred to the new body.63 Eliezer Yudkowsky doesn’t seem to believe this is an issue, though I haven't found a citation for this.

With regard to consciousness, it seems that most individuals believe that the consciousness in one’s body is the “same” consciousness as the one that was in one’s body in the past and will be in it in the future. However, I know of no evidence for this. If one’s consciousness isn’t the same as the one that will be in one’s body in the future, and one defines death as one’s consciousness permanently ending, then I suppose one can’t prevent death for any time at all. Surprisingly, I’ve found no articles discussing this possibility.

Although curing aging, becoming a cyborg, and mind uploading may prevent death from disease, they still seem to leave one vulnerable to accidents, murder, suicide, and existential catastrophes. I speculate that these problems could be solved by giving an artificial superintelligence the ability to take control of one’s body in order to prevent such deaths from occurring. Of course, this possibility is currently unavailable.

Another potential cause of death is the Sun expanding, which could render Earth uninhabitable in roughly one billion years. Death from this could be prevented by colonizing other planets in the solar system, although eventually the sun would render the rest of the solar system uninhabitable. After this, one could potentially inhabit other stars; it is expected that stars will remain for roughly 10 quintillion years, although some theories predict that the universe will be destroyed in a mere 20 billion years. To continue surviving, one could potentially go to other universes.64 Additionally, there are ideas for space-time crystals that could process information even after heat death (i.e. the “end of the universe”),65 so perhaps one could make oneself composed of the space-time crystals via mind uploading or another technique. There could also be other methods of surviving the conventional end of the universe, and life could potentially have 10 quintillion years to find them.

Yet another potential cause of death is living in a computer simulation that is ended. Living in a computer simulation actually seems not to be very improbable. Nick Bostrom argues that:

...at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

The argument for this is here.100

If one does die, one could potentially be revived. Cryonics, discussed later in this article, may help with this. Additionally, I suppose one could possibly be revived if future intelligences continually create new conscious individuals and eventually create one that has one’s “own” consciousness, though consciousness remains a mystery, so this may not be plausible, and I’ve found no articles discussing this possibility. If the probability of one’s consciousness being revived per unit time does not approach or equal zero as time approaches infinity, then I suppose one is bound to become conscious again, though this scenario may be unlikely. Again, I’ve found no articles discussing this possibility.

As already discussed, in order to live forever, one must either be revived after dying or prevent every cause of death: the consciousness in one’s body not being the same as the one that will be in one’s body in the future, accidents, aging, the sun dying, the universe dying, being in a simulation and having it end, and other, unknown, causes. Keep in mind that adding extra details that aren’t guaranteed to be true can only make events less probable, and that people often don’t account for this.66 A spreadsheet for estimating one’s chance of living forever is here.
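
As a minimal sketch of the kind of calculation such a spreadsheet performs, one can treat the chance of living forever as the product of surviving each hazard; the probabilities below are made-up placeholders, not estimates from this article:

```python
# Chance of living forever as the product of surviving each hazard.
# All probabilities are illustrative placeholders, not real estimates.
hazards = {
    "dying of aging before rejuvenation therapies arrive": 0.5,
    "accidents, murder, and suicide": 0.2,
    "existential catastrophe": 0.2,
    "sun/universe dying, simulation ending, unknown causes": 0.3,
}
p_live_forever = 1.0
for name, p_death in hazards.items():
    p_live_forever *= 1 - p_death  # one must survive every hazard
print(f"Chance of living forever: {p_live_forever:.1%}")  # 22.4% here
```

Note how multiplying in each extra hazard can only shrink the final probability – the same point made above about adding extra details.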

 

Should we try to become immortal?

Before deciding whether one should try to become immortal, I suggest learning about the cognitive biases scope insensitivity, hyperbolic discounting, and bias blind spot if you don’t currently know about them. Also, keep in mind that one study found that simply informing people of a cognitive bias made them no less likely to fall prey to it. A study also found that people only partially adjusted for cognitive biases after being told that informing people of a cognitive bias made them no less likely to fall prey to it.67

Many articles arguing against immortality can be found via a quick Google search, including this, this, this, and this. This article, along with its comments, discusses counter-arguments to many of these arguments. The Fable of the Dragon Tyrant provides an argument for curing aging, which can be extended into an argument against mortality as a whole. I suggest reading it.

One can also evaluate the utility of immortality via decision theory. Assuming individuals receive some finite, non-decreasing, above-zero amount of utility per unit time, living forever would give infinitely more utility than living for a finite amount of time. Under these assumptions, in order to maximize utility, one should be willing to accept any finite cost to become immortal. However, the situation is complicated by the potential of becoming immortal and receiving a finite positive utility unintentionally, in which case one would receive infinite expected utility regardless of whether one tried to become immortal. Additionally, if one has the chance of receiving both infinitely high and infinitely low utility, one’s expected utility would be undefined. Infinite utilities are discussed in “Infinite Ethics” by Nick Bostrom.
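
A minimal sketch of this reasoning using IEEE floating-point arithmetic, where the probability used is an arbitrary illustration:

```python
# Any positive chance of infinite utility yields infinite expected utility,
# and mixing chances of +inf and -inf utility leaves it undefined (NaN).
p = 0.001  # arbitrary positive probability
print(p * float("inf"))                      # inf
print(p * float("inf") + p * float("-inf"))  # nan, i.e. undefined
```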

For those interested in decreasing existential risk, living for a very long time, albeit not necessarily forever, may give one more opportunity to do so. This idea can be generalized to many goals one has in life.

On whether one can influence one’s chances of becoming immortal, studies have shown that only roughly 20-30% of longevity in humans is accounted for by genetic factors.68 There are multiple actions one can take to increase one’s chances of living forever; these are what the rest of this article is about. Keep in mind that you should consider continuing to read this article even if you don’t want to try to become immortal, as the article provides information on living longer, even if not forever, as well.

 

Relative importance of the different topics

The figure below gives the relative frequencies of preventable causes of death.1

Some causes of death are excluded from the graph, but are still large causes of death. Most notably, 440,000 deaths in the US, roughly one sixth of total deaths in the US, are estimated to be from preventable medical errors in hospitals.2

Risk calculators for cardiovascular disease are here and here. Though they seem very simplistic, they may be worth looking at and can probably be completed quickly.

Here are the frequencies of causes of death in the US in 2010, based on another classification:

  • Heart disease: 596,577

  • Cancer: 576,691

  • Chronic lower respiratory diseases: 142,943

  • Stroke (cerebrovascular diseases): 128,932

  • Accidents (unintentional injuries): 126,438

  • Alzheimer's disease: 84,974

  • Diabetes: 73,831

  • Influenza and Pneumonia: 53,826

  • Nephritis, nephrotic syndrome, and nephrosis: 45,591

  • Intentional self-harm (suicide): 39,518

113

 

Food

What to eat and drink

Keep in mind that the relationship between health and the consumption of a type of substance isn’t necessarily linear. That is, some substances are beneficial in small amounts but harmful in large amounts, while others are beneficial in both small and large amounts, but consuming large amounts is no more beneficial than consuming small amounts.

 

Recommendations from The Nutrition Source

The Nutrition Source is part of the Harvard School of Public Health.

Its recommendations:

  • Make ½ of your “plate” consist of a variety of fruits and a variety of vegetables, excluding potatoes, due to potatoes’ negative effect on blood sugar. The Harvard School of Public Health doesn’t seem to specify if this is based on calories or volume. It also doesn’t explain what it means by plate, but presumably ½ of one’s plate means ½ of the solid food consumed.

  • Make ¼ of your plate consist of whole grains.

  • Make ¼ of your plate consist of high-protein foods.

  • Limit red meat consumption.

  • Avoid processed meats.

  • Consume monounsaturated and polyunsaturated fats in moderation; they are healthy.

  • Avoid partially hydrogenated oils, which contain trans fats, which are unhealthy.

  • Limit milk and dairy products to one to two servings per day.

  • Limit juice to one small glass per day.

  • It is important to eat seafood one or two times per week, particularly fatty (dark meat) fish that are richer in EPA and DHA.

  • Limit diet drink consumption or consume in moderation.

  • Avoid sugary drinks like soda, sports drinks, and energy drinks.3

 

Fat

The bottom line is that saturated fats and especially trans fats are unhealthy, while unsaturated fats are healthy, and the omega-3 and omega-6 types of unsaturated fatty acids are essential. The proportion of calories from fat in one’s diet isn’t really linked with disease.

Saturated fat is unhealthy. It’s generally a good idea to minimize saturated fat consumption. The latest Dietary Guidelines for Americans recommends consuming no more than 10% of calories from saturated fat, but the American Heart Association recommends consuming no more than 7% of calories from saturated fat. However, don’t decrease nut, oil, and fish consumption to minimize saturated fat consumption. Foods that contain large amounts of saturated fat include red meat, butter, cheese, and ice cream.

Trans fats are especially unhealthy. For every 2% increase of calories from trans-fat, risk of coronary heart disease increases by 23%. The Federal Institute for Medicine states that there are no known requirements for trans fats for bodily functions, so their consumption should be minimized. Partially hydrogenated oils contain trans fats, and foods that contain trans fats are often processed foods. In the US, products can claim to have zero grams of trans fat if they have no more than 0.5 grams of trans fat. Products with no more than 0.5 grams of trans fat that still have non-negligible amounts of trans fat will probably have the ingredients “partially hydrogenated vegetable oils” or “vegetable shortening” in their ingredient list.

Unsaturated fats have beneficial effects, including improving cholesterol levels, easing inflammation, and stabilizing heart rhythms. The American Heart Association has set 8-10% of calories as a target for polyunsaturated fat consumption, though eating more polyunsaturated fat, around 15% of daily calories, in place of saturated fat may further lower heart disease risk. Consuming unsaturated fats instead of saturated fat also prevents insulin resistance, a precursor to diabetes. Monounsaturated fats and polyunsaturated fats are types of unsaturated fats.

Omega-3 fatty acids (omega-3 fats) are a type of unsaturated fat. There are two main types: marine omega-3s and alpha-linolenic acid (ALA). Omega-3 fatty acids, especially marine omega-3s, are healthy. Though one can make most needed types of fats from other fats or substances consumed, omega-3 fat is an essential fat, meaning it is an important type of fat that cannot be made in the body, so it must come from food. Most Americans don’t get enough omega-3 fats.

Marine omega-3s are primarily found in fish, especially fatty (dark meat) fish. A comprehensive review found that eating roughly two grams per week of omega-3s from fish, equal to about one or two servings of fatty fish per week, decreased risk of death from heart disease by more than one-third. Though fish contain mercury, this is insignificant compared to the positive health effects of their consumption (for the consumer, not the fish). However, it does benefit one’s health to consult local advisories to determine how much local freshwater fish to consume.

ALA may be an essential nutrient, and increased ALA consumption may be beneficial. ALA is found in vegetable oils, nuts (especially walnuts), flax seeds, flaxseed oil, leafy vegetables, and some animal fat, especially those from grass-fed animals. ALA is primarily used as energy, but a very small amount of it is converted into marine omega-3s. ALA is the most common omega-3 in western diets.

Most Americans consume much more omega-6 fatty acids (omega-6 fats) than omega-3 fats. Omega-6 fat is an essential nutrient and its consumption is healthy. Some sources of it include corn and soybean oils. The Nutrition Source states that the theory that omega-3 fats are healthier than omega-6 fats isn’t supported by evidence. However, in an image from the Nutrition Source, seafood omega-6 fats were ranked as healthier than plant omega-6 fats, which were ranked as healthier than monounsaturated fats, although such a ranking was, to the best of my knowledge, never stated in the text.3

 

Carbohydrates

There seem to be two main determinants of a carbohydrate source’s effect on health: nutrition content and effect on blood sugar. The bottom line is that consuming whole grains and other less-processed grains and decreasing refined grain consumption improves health. Additionally, moderately low-carbohydrate diets can increase heart health as long as protein and fat come from healthy sources, though the type of carbohydrate is at least as important as the amount of carbohydrates in a diet.

Glycemic index is a measure of how much a food increases blood sugar levels. Consuming carbohydrates that cause blood-sugar spikes can increase risk of heart disease and diabetes at least as much as consuming too much saturated fat does. Some factors that increase the glycemic index of foods include:

  • Being a refined grain as opposed to a whole grain.

  • Being finely ground, which is why consuming whole grains in their whole form, such as rice, can be healthier than consuming them as bread.

  • Having less fiber.

  • Being more ripe, in the case of fruits and vegetables.

  • Having a lower fat content, as meals with fat are converted more slowly into sugar.

Vegetables (excluding potatoes), fruits, whole grains, and beans, are healthier than other carbohydrates. Potatoes have a negative effect on blood sugar, due to their high glycemic index. Information on glycemic index and the index of various foods is here.

Whole grains also contain essential minerals such as magnesium, selenium, and copper, which may protect against some cancers. Refining grains takes away 50% of the grains’ B vitamins, 90% of vitamin E, and virtually all fiber. Sugary drinks usually have little nutritional value.

Identifying whole grains as foods that have at least one gram of fiber for every ten grams of carbohydrate is a more effective measure of healthfulness than identifying a whole grain as the first ingredient, any whole grain as the first ingredient without added sugars in the first 3 ingredients, the word “whole” before any grain ingredient, or the whole grain stamp.3

 

Protein

Proteins are broken down to form amino acids, which are needed for health. Though the body can make some amino acids by modifying others, some must come from food; these are called essential amino acids. The Institute of Medicine recommends that adults get a minimum of 0.8 grams of protein per kilogram of body weight per day, and sets the range of acceptable protein intake to 10-35% of calories per day. The US recommended daily allowance for protein is 46 grams per day for women over 18 and 56 grams per day for men over 18.
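
As a quick sketch of that minimum in code (the body weight below is an assumed example, not a figure from the article):

```python
# Institute of Medicine minimum: 0.8 g of protein per kg of body weight per day.
weight_kg = 70  # assumed example body weight
min_protein_g_per_day = 0.8 * weight_kg
print(f"Minimum protein: {min_protein_g_per_day:.0f} g/day")  # 56 g/day at 70 kg
```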

Animal products tend to give all essential amino acids, but other sources lack some essential amino acids. Thus, vegetarians need to consume a variety of sources of amino acids each day to get all needed types. Fish, chicken, beans, and nuts are healthy protein sources.3

 

Fiber

There are two types of fiber: soluble fiber and insoluble fiber. Both have important health benefits, so one should eat a variety of foods to get both.94 The best sources of fiber are whole grains, fresh fruits and vegetables, legumes, and nuts.3

 

Micronutrients

There are many micronutrients in food; getting enough of them is important. Most healthy individuals can get sufficient micronutrients by consuming a wide variety of healthy foods, such as fruits, vegetables, whole grains, legumes, and lean meats and fish. However, supplementation may be necessary for some. Information about supplements is here.110

Concerning supplementation, potassium, iodine, and lithium supplementation are recommended in the first-place entry in the Quantified Health Prize, a contest on determining good mineral intake levels. However, others suggest that potassium supplementation isn’t necessarily beneficial, as shown here. I’m somewhat skeptical that the supplements are beneficial, as I have not found other sources recommending their supplementation. The suggested supplementation levels are in the entry.

Note that food processing typically decreases micronutrient levels, as described here. In general, it seems cooking, draining, and drying foods decrease nutrient levels sizably, potentially taking half of the nutrients away, while freezing and reheating take away relatively few nutrients.111

One micronutrient worth discussing is sodium. Some sodium is needed for health, but most Americans consume more sodium than needed. However, recommendations on ideal sodium levels vary. The US government recommends limiting sodium consumption to 2,300mg/day (one teaspoon). The American Heart Association recommends limiting sodium consumption to 1,500mg/day (⅔ of a teaspoon), especially for those who are over 50, have high or elevated blood pressure, have diabetes, or are African Americans.3 However, as RomeoStevens pointed out, the Institute of Medicine found that there's inconclusive evidence that decreasing sodium consumption below 2,300mg/day affects mortality,115 and some meta-analyses have suggested that there is a U-shaped relationship between sodium and mortality.116, 117

Vitamin D is another micronutrient that’s important for health. It can be obtained from food or made in the body after sun exposure. Most people who live farther north than San Francisco or don’t go outside for at least fifteen minutes when it’s sunny are vitamin D deficient. Vitamin D deficiency increases the risk of many chronic diseases, including heart disease, infectious diseases, and some cancers. However, there is controversy about optimal vitamin D intake. The Institute of Medicine recommends getting 600 to 4000 IU/day, though it acknowledged that there was no good evidence of harm at 4000 IU/day. The Nutrition Source states that these recommendations are too low and fail to account for new evidence. The Nutrition Source states that for most people, supplements are the best source of vitamin D, but most multivitamins have too little vitamin D in them. The Nutrition Source recommends considering and talking to a doctor about taking an additional multivitamin if you take less than 1000 IU of vitamin D, and especially if you have little sun exposure.3

 

Blood pressure

Information on blood pressure is here in the section titled “Blood Pressure.”

 

Cholesterol and triglycerides

Information on optimal amounts of cholesterol and triglycerides are here.

 

The biggest influences on cholesterol are fats and carbohydrates in one’s diet, and cholesterol consumption generally has a far weaker influence. However, some people’s cholesterol levels rise and fall very quickly with the amount of cholesterol consumed. For them, decreasing cholesterol consumption from food can have a considerable effect on cholesterol levels. Trial and error is currently the only way of determining if one’s cholesterol levels rise and fall very quickly with the amount of cholesterol consumed.

 

Antioxidants

Despite their initial hype, randomized controlled trials have offered little support for the benefit of single antioxidants, though studies are inconclusive.3

 

Dietary reference intakes

For the numerically inclined, the Dietary Reference Intake provides quantitative guidelines on good nutrient consumption amounts for many nutrients, though it may be harder to use for some, due to its quantitative nature.

 

Drinks

The Nutrition Source and SFGate state that water is the best drink,3, 112 though I don’t know why it’s considered healthier than drinks such as tea.

Unsweetened tea decreases the risk of many diseases, likely largely due to polyphenols, a class of antioxidants, in it. Despite antioxidants typically having little evidence of benefit, I suppose polyphenols are relatively beneficial. All teas have roughly the same levels of polyphenols except decaffeinated tea,3 which has fewer polyphenols.96 Research suggests that proteins and possibly fat in milk decrease the antioxidant capacity of tea.

It’s considered safe to drink up to six cups of coffee per day. Unsweetened coffee is healthy and may decrease some disease risks, though coffee may slightly increase blood pressure. Some people may want to consider avoiding coffee or switching to decaf, especially women who are pregnant or people who have a hard time controlling their blood pressure or blood sugar. The Nutrition Source states that it’s best to brew coffee with a paper filter, to remove a substance that increases LDL cholesterol, despite consumed cholesterol typically having a very small effect on the body’s cholesterol level.

Alcohol increases risk of diseases for some people and decreases it for others. Heavy alcohol consumption is a major cause of preventable death in most countries. For some groups of people, especially pregnant people, people recovering from alcohol addiction, and people with liver disease, alcohol causes greater health risks and should be avoided. The likelihood of becoming addicted to alcohol can be genetically determined. Moderate drinking, generally defined as no more than one or two drinks per day for men, can increase colon and breast cancer risk, but these effects are offset by decreased heart disease and diabetes risk, especially in middle age, where heart disease begins to account for an increasingly large proportion of deaths. However, alcohol consumption won’t decrease cardiovascular disease risk much for those who are thin, physically active, don’t smoke, eat a healthy diet, and have no family history of heart disease. Some research suggests that red wine, particularly when consumed after a meal, has more cardiovascular benefits than beers or spirits, but alcohol choice still has little effect on disease risk. In one study, moderate drinkers were 30-35% less likely to have heart attacks than non-drinkers, and men who drank daily had lower heart attack risk than those who drank once or twice per week.

There’s no need to drink more than one or two glasses of milk per day. Less milk is fine if calcium is obtained from other sources.

The health effects of artificially sweetened drinks are largely unknown. Oddly, they may also cause weight gain. It’s best to limit consuming them if one drinks them at all.

Sugary drinks can cause weight gain, as they aren’t as filling as solid food and have high sugar. They also increase the risk of diabetes, heart disease, and other diseases. Fruit juice has more calories and less fiber than whole fruit and is reportedly no better than soft drinks.3

 

Solid food

Fruits and vegetables are an important part of a healthy diet. Eating a variety of them is as important as eating many of them.3 Fish and nut consumption is also very healthy.98

Processed meat, on the other hand, is shockingly bad.98 A meta-analysis found that processed meat consumption is associated with a 42% increased risk of coronary heart disease (relative risk per 50g serving per day; 95% confidence interval: 1.07 - 1.89) and a 19% increased risk of diabetes.97 Despite this, a bit of red meat consumption has been found to be beneficial.98 Consumption of well-done, fried, or barbecued meat has been associated with certain cancers, presumably due to carcinogens made in the meat from being cooked, though this link isn’t definitive. The amount of carcinogens increases with increased cooking temperature (especially above 300ºF), increased cooking time, charring, or being exposed to smoke.99

Eating less than one egg per day doesn’t increase heart disease risk in healthy individuals and can be part of a healthy diet.3

Organic foods have lower levels of pesticides than inorganic foods, though the residues of most organic and inorganic products don’t exceed government safety thresholds. Washing fresh fruits and vegetables is recommended, as it removes bacteria and some, though not all, pesticide residues. Organic foods probably aren’t more nutritious than non-organic foods.103

 

When to eat and drink

A randomized controlled trial found an increase in blood sugar variation for subjects who skipped breakfast.6 Increasing meal frequency and decreasing meal size appears to have some metabolic advantages, and doesn’t appear to have metabolic disadvantages,7 though note that this is an old source, from 1994. However, Mayo Clinic states that fasting for 1-2 days per week may increase heart health.32 Perhaps it is optimal for health to fast, but to have high meal frequency when not fasting.

 

How much to eat

One’s weight change is determined by the balance of calories consumed and calories burnt. The Centers for Disease Control and Prevention (CDC) has guidelines for healthy weights and information on how to lose weight.
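
As a minimal sketch of this calorie balance, using the common rule of thumb of roughly 7,700 kcal per kilogram of body fat (an assumption on my part, not a figure from this article):

```python
# Rough weight change from a sustained daily calorie surplus.
KCAL_PER_KG_FAT = 7700       # common rule-of-thumb conversion (assumed)
surplus_kcal_per_day = 200   # assumed example surplus
kg_gained_per_year = surplus_kcal_per_day * 365 / KCAL_PER_KG_FAT
print(f"~{kg_gained_per_year:.1f} kg gained per year")  # ~9.5 kg
```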

Some advocate restricting calorie consumption to a greater extent, which is known as calorie restriction. It’s unknown whether calorie restriction increases lifespan in humans, but data indicate that moderate calorie restriction with adequate nutrition decreases risk of obesity, type 2 diabetes, inflammation, hypertension, and cardiovascular disease, and decreases metabolic risk factors associated with cancer.4 The CR Society has information on getting started on calorie restriction.

 

How much to drink

Generally, drinking enough to rarely feel thirsty and to have colorless or light yellow urine is usually sufficient. It’s also possible to drink too much water. In general, drinking too much water is rare in healthy adults who eat an average American diet, although endurance athletes are at a higher risk.10

 

Exercise

A meta-analysis found the data in the following graphs for people aged over 40.8

A weekly total of roughly five hours of vigorous exercise has been identified by several studies as the safe upper limit for life expectancy. It may be beneficial to take one or two days off from vigorous exercise per week and to limit chronic vigorous exercise to <= 60 min/day.9 Based on the above, my best guess for the optimal amount of exercise for longevity is roughly 30 MET-hr/wk. Calisthenics burn 6-10 METs,11 so an example exercise routine to get this amount of exercise is doing calisthenics 38 minutes per day, 6 days/wk. Guides on how to exercise are available, e.g. this one.
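
Here is a quick check of that arithmetic; the 8 MET figure is an assumed mid-range value within the cited 6-10 MET range:

```python
# MET-hours per week from the example calisthenics routine above.
mets = 8                 # assumed mid-range intensity for calisthenics
minutes_per_day = 38
days_per_week = 6
met_hr_per_week = mets * (minutes_per_day / 60) * days_per_week
print(f"{met_hr_per_week:.1f} MET-hr/wk")  # ~30.4, near the ~30 target
```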

 

Carcinogens

Carcinogens are cancer-causing substances. Since cancer causes death, decreasing exposure to carcinogens presumably decreases one’s risk of death. Some foods are also carcinogenic, as discussed in the “Food” section.

 

Chemicals

Tobacco use is the greatest avoidable risk factor for cancer worldwide, causing roughly 22% of cancer deaths. Additionally, second hand smoke has been proven to cause lung cancer in nonsmoking adults.

Alcohol use is a risk factor for many types of cancer. The risk of cancer increases with the amount of alcohol consumed, and substantially increases if one is also a heavy smoker. The attributable fraction of cancer from alcohol use varies depending on gender, due to differences in consumption level. E.g. 22% of mouth and oropharynx cancer is attributable to alcohol in men, but only 9% is attributable to alcohol in women.

Environmental air pollution accounts for 1-4% of cancer.84 Diesel exhaust is one type of carcinogenic air pollution. Those with the highest exposure to diesel exhaust are exposed to it occupationally. As for residential exposure, diesel exhaust is highest in homes near roads where traffic is heaviest. Limiting time spent near large sources of diesel exhaust decreases exposure. Benzene, another carcinogen, is found in gasoline and vehicle exhaust, but exposure to it can also be caused by being in areas with unventilated fumes from gasoline, glues, solvents, paints, and art supplies. Exposure can occur via inhalation or skin contact.86

Some occupations expose workers to occupational carcinogens.84 A list of some of these occupations is here; all of them involve manual labor, except for hospital-related jobs.87

 

Infections

Infections are responsible for 6% of cancer deaths in developed nations.84 Many of the infections are spread via sexual contact and sharing needles and some can be vaccinated against.85

 

Radiation

Ionizing radiation is carcinogenic to humans. Residential exposure to radon gas, which is the largest source of radon exposure for most people, is estimated to cause 3-14% of lung cancers.84 Being exposed to radon and cigarette smoke together increases one’s cancer risk much more than either does separately. There is much variation in radon levels depending on where one lives, and radon is usually higher inside buildings, especially on levels closer to the ground, such as basements. The EPA recommends taking action to reduce radon levels if they are greater than or equal to 4.0 pCi/L. Radon levels can be reduced by a qualified contractor. Reducing radon levels without proper training and equipment can increase instead of decrease them.88

Some medical tests can also increase exposure to radiation. The EPA estimates that exposure to 10 mSv from a medical imaging test increases risk of cancer by roughly 0.05%. To decrease exposure to radiation from medical imaging tests, one can ask if there are ways to shield the parts of one’s body that aren’t being tested, and make sure the doctor performing the test is qualified.89
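
Under a linear no-threshold assumption, that estimate scales as sketched below; the extrapolation to other doses is mine, not the EPA’s, and the example doses are assumed for illustration:

```python
# Added cancer risk per dose, linearly extrapolated from the EPA's
# ~0.05% per 10 mSv estimate above (a simplifying assumption).
risk_per_msv = 0.0005 / 10
for dose_msv in (2, 10, 50):  # assumed example imaging doses
    print(f"{dose_msv} mSv -> ~{dose_msv * risk_per_msv:.3%} added cancer risk")
```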

 

Small doses of ionizing radiation increase risk by a very small amount. Most studies haven’t detected increased cancer risk in people exposed to low levels of ionizing radiation. For example, people living at higher altitudes don’t have noticeably higher cancer rates than other people. In general, cancer risk from radiation increases as the dose of radiation increases, and there is thought to be no safe level of exposure. Ultraviolet radiation is a type of radiation that can be ionizing. Sunlight is the main source of ultraviolet radiation.84

Factors that increase one’s exposure to ultraviolet radiation when outside include:

  • Time of day. Almost ⅓ of UV radiation hits the surface between 11AM and 1PM, and ¾ hits the surface between 9AM and 5PM.

  • Time of year. UV radiation is greater during summer. This factor is less significant near the equator.

  • Altitude. High elevation causes more UV radiation to penetrate the atmosphere.

  • Clouds. Sometimes clouds decrease levels of UV radiation because they block UV radiation from the sun. Other times, they increase exposure because they reflect UV radiation.

  • Reflection off surfaces, such as water, sand, snow, and grass increases UV radiation.

  • Ozone density, because ozone stops some UV radiation from reaching the surface.

Some tips to decrease exposure to UV radiation:

  • Stay in the shade. This is one of the best ways to limit exposure to UV radiation in sunlight.

  • Cover yourself with clothing.

  • Wear sunglasses.

  • Use sunscreen on exposed skin.90

 

Tanning beds are also a source of ultraviolet radiation. Using tanning booths can increase one’s chance of getting skin melanoma by at least 75%.91

 

Vitamin D3 is also produced from ultraviolet radiation, although the American Society for Clinical Nutrition states that vitamin D is readily available from supplements and that the controversy about reducing ultraviolet radiation exposure was fueled by the tanning industry.92

 

There could be some risk of cell phone use being associated with cancer, but the evidence is not strong enough to be considered causal and needs to be investigated further.93

 

Emotions and feelings

Positive emotions and feelings

A review suggested that positive emotions and feelings decreased mortality. Proposed mechanisms include positive emotions and feelings being associated with better health practices such as improved sleep quality, increased exercise, and increased dietary zinc consumption, as well as lower levels of some stress hormones. It has also been hypothesized to be associated with other health-relevant hormones, various aspects of immune function, and closer and more social contacts.33 Less Wrong has a good article on how to be happy.

 

Psychological distress

A meta-analysis was conducted on psychological stress. To measure psychological stress, it used the GHQ-12 score, which measured symptoms of anxiety, depression, social dysfunction, and loss of confidence. The scores range from 0 to 12, with 0 being asymptomatic, 1-3 being subclinically symptomatic, 4-6 being symptomatic, and 7-12 being highly symptomatic. It found the results shown in the following graphs.

[Figure: hazard ratios of mortality by GHQ-12 score – http://www.bmj.com/content/bmj/345/bmj.e4933/F3.large.jpg]

This association was essentially unchanged after controlling for a range of covariates including occupational social class, alcohol intake, and smoking. However, reverse causality may still partly explain the association.30

 

Stress

A study found that individuals with moderate and high stress levels as opposed to low stress had hazard ratios (HRs) of mortality of 1.43 and 1.49, respectively.27 A meta-analysis found that high perceived stress as opposed to low perceived stress had a coronary heart disease relative risk (RR) of 1.27. The mean age of participants in the studies used in the meta-analysis varied from 44 to 72.5 years and was significantly and positively associated with effect size. It explained 46% of the variance in effect sizes between the studies used in the meta-analysis.28

A cross-sectional study (which is a relatively weak study design) not in the aforementioned meta-analysis used 28,753 subjects to study the effect on mortality of the amount of stress and the perception of whether stress is harmful or not. It found that neither of these factors predicted mortality independently, but that taken together, they did have a statistically significant effect. Subjects who reported much stress and that stress has a large effect on health had a HR of 1.43 (95% CI: 1.2, 1.7). Reverse causality may partially explain this, though, as those who have had negative health impacts from stress may have been more likely to report that stress influences health.83

 

Anger and hostility

A meta-analysis found that after fully controlling for behavioral covariates such as smoking, physical activity or body mass index, and socioeconomic status, anger and hostility were not associated with coronary heart disease (CHD), though the results are inconclusive.34

 

Social and personality factors

Social status

A review suggested that social status is linked to health via gender, race, ethnicity, education levels, socioeconomic differences, family background, and old age.46

 

Giving to others

An observational study found that stressful life events were not a predictor of mortality for those who engaged in unpaid helping behavior directed towards friends, neighbors, or relatives who did not live with them. This association may be due to giving to others causing one to have a sense of mattering, opportunities for generativity, improved social well-being, the emotional state of compassion, and the physiology of the caregiving behavioral system.35

 

Social relationships

A large meta-analysis found that the odds ratio of mortality of having weak social relationships is 1.5 (95% confidence interval (CI): 1.42 to 1.59). However, this effect may be a conservative estimate. Many of the studies used in the meta-analysis used single item measures of social relations, but the size of the association was greatest in studies that used more complex measurements. Additionally, some of the studies in the meta-analysis adjusted for risk factors that may be mediators of social relationships’ effect on mortality (e.g. behavior, diet, and exercise). Many of the studies in the meta-analysis also ignored the quality of social relationships, but research suggests that negative social relationships are linked to increased mortality. Thus, the effect of social relationships on mortality could be even greater than the study found.

Concerning causation, social relationships are linked to better health practices and psychological processes, such as stress and depression, which influence health outcomes on their own. However, the meta-analysis also states that social relationships exert an independent effect. Some studies show that social support is linked to better immune system functioning and to immune-mediated inflammatory processes.36

 

Conscientiousness

A cohort study with 468 deaths found that each 1 standard deviation decrease in conscientiousness was associated with the HR being multiplied by 1.07 (95% CI: 0.98 – 1.17), though it gave no mechanism for the association.39 Although it adjusted for several variables (e.g. socioeconomic status, smoking, and drinking), it didn’t adjust for drug use, risky driving, risky sex, suicide, and violence, which were all found by a meta-analysis to have statistically significant associations with conscientiousness.40 Overall, conscientiousness doesn’t seem to have a significant effect on mortality.

 

Infectious diseases

Mayo Clinic has a good article on preventing infectious disease.

 

Dental health

A cohort study of 5611 adults found that compared to men with 26-32 teeth, men with 16-25 teeth had an HR of 1.03 (95% CI: 0.91-1.17), men with 1-15 teeth had an HR of 1.21 (95% CI: 1.05-1.40) and men with 0 teeth had an HR of 1.18 (95% CI: 1.00-1.39).

In the study, men who never brushed their teeth at night had a HR of 1.34 (95% CI: 1.14-1.57) relative to those who did every night. Among subjects who brushed at night, HR was similar between those who did and didn’t brush daily in the morning or day. The HR for men who brushed in the morning every day but not at night every day was 1.19 (95% CI: 0.99-1.43).

In the study, men who never used dental floss had an HR of 1.27 (95% CI: 1.11-1.46) and those who sometimes used it had an HR of 1.14 (95% CI: 1.00-1.30) compared to men who used it every day. Among subjects who brushed their teeth at night daily, not flossing was associated with a significantly increased HR.

Use of toothpicks didn’t significantly decrease HR and mouthwash had no effect.

The study had a list of other studies on the effect of dental health on mortality. It seems that almost all of them found a negative correlation between dental health and risk of mortality, although the study didn’t state its methodology for selecting the studies to show. I did a crude review of other literature by only looking at abstracts and found that five studies found that poor dental health increased risk of mortality and one found it didn’t.

Regarding possible mechanisms, the study says that toothpaste helps prevent dental caries and that dental floss is the most effective means of removing interdental plaque and decreasing interdental gingival inflammation.38

 

Sleep

It seems that getting too little or too much sleep likely increases one’s risk of mortality, but it’s hard to tell exactly how much is too much and how little is too little.

 

One review found that the association between amount of sleep and mortality is inconsistent in studies and that what association does exist may be due to reverse-causality.41 However, a meta-analysis found that the RR associated with short sleep duration (variously defined as sleeping from < 8 hrs/night to < 6 hrs/night) was 1.10 (95% CI: 1.06-1.15). It also found that the RR associated with long sleep duration (variously defined as sleeping for > 8 hrs/night to > 10 hrs per night) compared with medium sleep duration (variously defined as sleeping for 7-7.9 hrs/night to 9-9.9 hrs/night) was 1.23 (95% CI: 1.17 - 1.30).42

 

The National Heart, Lung, and Blood Institute and Mayo Clinic recommend adults get 7-8 hours of sleep per night, although they also say sleep needs vary from person to person. They give no method of determining optimal sleep for an individual. Additionally, they don’t say whether their recommendations are for optimal longevity, optimal productivity, something else, or a combination of factors.43 The Harvard Medical School implies that one’s optimal amount of sleep is enough sleep to not need an alarm to wake up, though it didn’t specify the criteria for determining optimality either.45

 

Drugs

None of the drugs I’ve looked into have a beneficial effect for people without a particular disease or risk factor. Notes on them are here.

 

Blood donation

A quasi-randomized experiment with a validity near that of a randomized trial suggested that blood donation doesn’t significantly decrease risk of coronary heart disease (CHD). Observational studies have shown lower CHD incidence among donors, although the authors of the former experiment suspect that bias played a role in this. The authors believe that their findings cast serious doubt on the theory that blood donation decreases CHD risk.29

 

Sitting

After adjusting for amount of physical activity, a meta-analysis estimated that for every one-hour increment of sitting in the intervals 0-3, >3-7, and >7 h/day of total sitting time, the hazard ratios of mortality were 1.00 (95% CI: 0.98-1.03), 1.02 (95% CI: 0.99-1.05), and 1.05 (95% CI: 1.02-1.08), respectively. It proposed no mechanism for sitting time having this effect,37 so it might have been due to confounding variables it didn’t control for.

 

Sleep apnea

Sleep apnea is an independent risk factor for mortality and cardiovascular disease.26 Symptoms and other information on sleep apnea are here.

 

Snoring

A meta-analysis found that self-reported habitual snoring had a small but statistically significant association with stroke and coronary heart disease, but not with cardiovascular disease and all-cause mortality [HR 0.98 (95% CI: 0.78-1.23)]. Whether the risk is due to obstructive sleep apnea is controversial. Only the abstract can be viewed for free, so I’m just basing this off the abstract.31

 

Exams

The organization Susan G. Komen, citing a meta-analysis that used randomized controlled trials, doesn’t recommend breast self exams as a screening tool for breast cancer, as it hasn’t been shown to decrease cancer death. However, it still stated that it is important to be familiar with one’s breasts’ appearance and how they normally feel.49 According to the Memorial Sloan Kettering Cancer Center, no study has been able to show a statistically significant decrease in breast cancer deaths from breast self-exams.50 The National Cancer Institute states that breast self-examinations haven’t been shown to decrease breast cancer mortality, but does increase biopsies of benign breast lesions.51

The American Cancer Society doesn’t recommend testicular self-exams for all men, as they haven’t been studied enough to determine if they decrease mortality. However, it states that men with risk factors for testicular cancer (e.g. an undescended testicle, previous testicular cancer, or a family member who previously had testicular cancer) should consider self-exams and discuss them with a doctor. The American Cancer Society also recommends having testicular exams as part of routine cancer-related check-ups.52

 

Genomics

Genomics is the study of genes in one’s genome, and may help increase health by using knowledge of one’s genes to have personalized treatment. However, it hasn’t proved to be useful for most; recommendations rarely change after knowledge from genomic testing. Still, genomics has much future potential.102

 

Aging

As I said in the section “Can we become immortal,” the proportion of deaths caused by aging in the industrial world approaches 90%,53 but some organizations and companies are working on curing aging.54, 55, 56

One could support these organizations in an effort to hasten the development of anti-aging therapies, although I doubt an individual would have a noticeable impact on one’s own chance of death unless one is very wealthy. That said, I have little knowledge of investments, but I suppose investing in companies working on curing aging may be beneficial: if they succeed, they may offer an enormous return on investment, and if they fail, one would probably die, so losing one’s money may not be as bad. Calico currently isn’t a public stock, though.

 

External causes of death

Unless otherwise specified, graphs in this section are on data collected from American citizens ages 15-24, as based off the Less Wrong census results, this seems to be the most probable demographic that will read this. For this demographic, external causes cause 76% of deaths. Note that although this is true, one is much more likely to die when older than when aged 15-24, and older individuals are much more likely to die from disease than from external causes of death. Thus, I think it’s more important when young to decrease risk of disease than external causes of death. The graph below shows the percentage of total deaths from external causes caused by various causes.

[Graph: percentage of deaths from external causes, by cause.]21

 

Transport accidents

Below are the relative death rates of specified means of transportation for people in general:

[Graph: relative death rates of various means of transportation.]71

Much information about preventing death from car crashes is here; further information is here, here, here, and here.

 

Assault

Lifehacker's “Basic Self-Defense Moves Anyone Can Do (and Everyone Should Know)” gives a basic introduction to self-defense.

 

Intentional self harm

Intentional self-harm such as suicide presumably increases one's risk of death.47 Mayo Clinic has a guide on preventing suicide; I recommend looking at it if you are considering killing yourself. Additionally, if you are considering killing yourself, I suggest reviewing the potential rewards of achieving immortality from the section “Should we try to become immortal.”

 

Poisoning

What to do if a poisoning occurs

CDC recommends staying calm, dialing 1-800-222-1222, and having this information ready:

  • Your age and weight.

  • If available, the container of the poison.

  • The time of the poison exposure.

  • The address where the poisoning occurred.

It also recommends staying on the phone and following the instructions of the emergency operator or poison control center.18

 

Types of poisons

Below is a graph of the risk of death per type of poison.

[Graph: risk of death per type of poison.]21

Some types of poisons:

  • Medicine overdoses.

  • Some household chemicals.

  • Recreational drug overdoses.

  • Carbon monoxide.

  • Metals such as lead and mercury.

  • Plants12 and mushrooms.14

  • Presumably some animals.

  • Some fumes, gases, and vapors.15

 

Recreational drugs

Using recreational drugs increases risk of death.

 

Medicine overdoses and household chemicals

CDC has tips for these here.

 

Carbon monoxide

CDC and Mayo Clinic have tips for this here and here.

 

Lead

Lead poisoning causes 0.2% of deaths worldwide and 0.0% of deaths in developed countries.22 Children under the age of 6 are at higher risk of lead poisoning.24 Thus, for those who aren’t children, learning more about preventing lead poisoning seems like more effort than it’s worth. No completely safe blood lead level has been identified.23

 

Mercury

MedlinePlus has an article on mercury poisoning here.

 

Accidental drowning

Information on preventing accidental drowning from CDC is here and here.

 

Inanimate mechanical forces

Over half of deaths from inanimate mechanical forces for Americans aged 15-24 are from firearms. Many of the other deaths are from explosions, machinery, and getting hit by objects. I suppose using common sense, precaution, and standard safety procedures when dealing with such things is one’s best defense.

 

Falls

Again, I suppose common sense and precaution are one's best defense. Additionally, alcohol and substance abuse are risk factors for falling.72

 

Smoke, fire and heat

Owning smoke alarms halves one’s risk of dying in a home fire.73 Again, common sense when dealing with fires and items potentially causing fires (e.g. electrical wires and devices) seems effective.

 

Other accidental threats to breathing

Deaths from other accidental threats to breathing are largely caused by strangling or choking on food or gastric contents, and occasionally by being in a cave-in or trapped in a low-oxygen environment.21 Choking can be caused by eating quickly or laughing while eating.74 If you are choking:

  • Forcefully cough. Lean as far forwards as you can and hold onto something that is firmly anchored, if possible. Breathe out and then take a deep breath in and cough; this may eject the foreign object.

  • Attract someone’s attention for help.75

 

Additionally, choking can be caused by vomiting while unconscious, which can be caused by being very drunk.76 I suggest lying in the recovery position if you think you may vomit while unconscious, so as to decrease the chance of choking on vomit.77 Don’t forget to use common sense.

 

Electric current

Electric shock is usually caused by contact with poorly insulated wires or ungrounded electrical equipment, using electrical devices while in water, or lightning.78 Roughly ⅓ of deaths from electricity are caused by exposure to electric transmission lines.21

 

Forces of nature

Deaths from forces of nature (for Americans ages 15-24), in descending order of number of deaths caused, are: exposure to cold, exposure to heat, lightning, avalanches or other earth movements, cataclysmic storms, and floods.21 Here are some tips to prevent these deaths:

  • When traveling in cold weather, carry emergency supplies in your car and tell someone where you’re heading.79

  • Stay hydrated during hot weather.80

  • Safe locations during lightning include substantial buildings and hard-topped vehicles. Small sheds, rain shelters, and open vehicles are not safe.

  • Wait until there are no thunderstorm clouds in the area before going to a location that isn’t lightning safe.81

 

Medical care

Since medical care is tasked with treating diseases, receiving medical care when one has an illness presumably decreases one's risk of death. Yet while medical care may be essential when one is ill, a review estimated that preventable medical errors contribute to roughly 440,000 deaths per year in the US, roughly one-sixth of total US deaths. It gave a lower limit of 210,000 deaths per year.

The frequency of deaths from preventable medical errors varied across the studies used in the review; a hospital shown to put much effort into improving patient safety had a lower proportion of deaths from preventable medical errors than the others did.57 Thus, I suppose it would be beneficial to go to hospitals known for their dedication to patient safety. Several rankings of hospital safety are available on the internet, such as this one. Information on how to help prevent medical errors is found here and under the “What Consumers Can Do” section here. One rare medical error is having surgery done on the wrong body part; The New York Times gives tips for preventing this here.

Additionally, I suppose it may be good to live relatively close to a hospital so as to be able to quickly reach it in emergencies, though I’ve found no sources stating this.

A common form of medical care is the general health check. A comprehensive Cochrane review with 182,880 subjects concluded that general health checks are probably not beneficial.107 A meta-analysis found that general health checks are associated with small but statistically significant benefits in factors related to mortality, such as blood pressure and body mass index; however, it found no significant association with mortality itself.109 The New York Times acknowledged that health checks are probably not beneficial and gave some explanation of why general health checks are nonetheless still common.108 However, CDC and MedlinePlus recommend getting routine general health checks; they cited no studies to support their claims.104, 106 When I contacted CDC about this, it responded, “Regular health exams and tests can help find problems before they start. They also can help find problems early, when your chances for treatment and cure are better. By getting the right health services, screenings, and treatments, you are taking steps that help your chances for living a longer, healthier life,” a claim that doesn't seem supported by the evidence. It also stated, “Although CDC understands you are concerned, the agency does not comment on information from unofficial or non-CDC sources.” I never heard back from MedlinePlus.

 

Cryonics

Cryonics is the freezing of legally dead humans with the purpose of preserving their bodies so they can be brought back to life in the future, once technology makes that possible. Human tissue has been cryopreserved and then brought back to life, although this has never been done with a whole human.59 The price of cryonics ranges from $28,000 to $200,000 or more.60 More information on cryonics is on the LessWrong Wiki.

 

Money

Cryonics, medical care, safe housing, and basic needs all cost money, and rejuvenation therapy may also be very expensive. It therefore seems valuable to have a reasonable amount of money and income.

 

Future advancements

Keeping updated on further advancements in technology seems like a good idea, as not doing so would prevent one from making use of future technologies. Keeping updated on advancements on curing aging seems especially important, due to the massive number of casualties it inflicts and the current work being done to stop it. Updates on mind-uploading seem important as well. I don’t know of any very efficient method of keeping updated on new advancements, but periodically googling for articles about curing aging or Calico and searching for new scientific articles on topics in this guide seems reasonable. As knb suggested, it seems beneficial to periodically check on Fight Aging, a website advocating anti-aging therapies. I’ll try to do this and update this guide with any new relevant information I find.

There is much uncertainty ahead, but if we’re clever enough, we just might make it through alive.

 

References

 

  1. Actual Causes of Death in the United States, 2000.
  2. A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.
  3. All pages in The Nutrition Source, a part of the Harvard School of Public Health.
  4. Will calorie restriction work on humans? 
  5. The pages Getting Started, Tests and Biomarkers, and Risks from The CR Society.
  6. The causal role of breakfast in energy balance and health: a randomized controlled trial in lean adults.
  7. Low Glycemic Index: Lente Carbohydrates and Physiological Effects of altered food frequency. Published in 1994. 
  8. Leisure Time Physical Activity of Moderate to Vigorous Intensity and Mortality: A Large Pooled Cohort Analysis.
  9. Exercising for Health and Longevity vs Peak Performance: Different Regimens for Different Goals.
  10. Water: How much should you drink every day? 
  11. MET-hour equivalents of various physical activities.
  12. Poisoning. NLM.
  13. Carcinogen. Dictionary.com.
  14. Types of Poisons. New York Poison Center.
  15. The Most Common Poisons for Children and Adults. National Capital Poison Center.
  16. Known and Probable Human Carcinogens. American Cancer Society.
  17. Nutritional Effects of Food Processing. Nutritiondata.com.
  18. Tips to Prevent Poisonings. CDC.
  19. Carbon monoxide poisoning. Mayo Clinic.
  20. Carbon Monoxide Poisoning. CDC. 
  21. CDCWONDER. Query Criteria taken from all genders, all states, all races, all levels of urbanization, all weekdays, dates 1999 – 2010, ages 15 – 24. 
  22. Global health risks: mortality and burden of disease attributable to selected major risks.
  23. National Biomonitoring Program Factsheet. CDC.
  24. Lead poisoning. Mayo Clinic.
  25. Mercury. Medline Plus.
  26. Snoring Is Not Associated With All-Cause Mortality, Incident Cardiovascular Disease, or Stroke in the Busselton Health Study.
  27. Do Stress Trajectories Predict Mortality in Older Men? Longitudinal Findings from the VA Normative Aging Study.
  28. Meta-analysis of Perceived Stress and its Association with Incident Coronary Heart Disease.
  29. Iron and cardiac ischemia: a natural, quasi-random experiment comparing eligible with disqualified blood donors.
  30. Association between psychological distress and mortality: individual participant pooled analysis of 10 prospective cohort studies.
  31. Self-reported habitual snoring and risk of cardiovascular disease and all-cause mortality.
  32. Is it true that occasionally following a fasting diet can reduce my risk of heart disease? 
  33. Positive Affect and Health.
  34. The Association of Anger and Hostility with Future Coronary Heart Disease: A Meta-Analytic Review of Prospective Evidence.
  35. Giving to Others and the Association Between Stress and Mortality.
  36. Social Relationships and Mortality Risk: A Meta-analytic Review.
  37. Daily Sitting Time and All-Cause Mortality: A Meta-Analysis.
  38. Dental Health Behaviors, Dentition, and Mortality in the Elderly: The Leisure World Cohort Study.
  39. Low Conscientiousness and Risk of All-Cause, Cardiovascular and Cancer Mortality over 17 Years: Whitehall II Cohort Study.
  40. Conscientiousness and Health-Related Behaviors: A Meta-Analysis of the Leading Behavioral Contributors to Mortality.
  41. Sleep duration and all-cause mortality: a critical review of measurement and associations.
  42. Sleep duration and mortality: a systematic review and meta-analysis.
  43. How Much Sleep Is Enough? National Heart, Lung, and Blood Institute.
  44. How many hours of sleep are enough for good health? Mayo Clinic.
  45. Assess Your Sleep Needs. Harvard Medical School.
  46. A Life-Span Developmental Perspective on Social Status and Health.
  47. Suicide. Merriam-Webster. 
  48. Can testosterone therapy promote youth and vitality? Mayo Clinic.
  49. Breast Self-Exam. Susan G. Komen.
  50. Screening Guidelines. The Memorial Sloan Kettering Cancer Center.
  51. Breast Cancer Screening Overview. The National Cancer Institute.
  52. Testicular self-exam. The American Cancer Society.
  53. Life Span Extension Research and Public Debate: Societal Considerations
  54. SENS Research Foundation: About.
  55. Science for Life Extension Homepage.
  56. Google's project to 'cure death,' Calico, announces $1.5 billion research center. The Verge.
  57. A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.
  58. When Surgeons Cut the Wrong Body Part. The New York Times.
  59. Cold facts about cryonics. The Guardian. 
  60. The cryonics organization founded by the "Father of Cryonics," Robert C.W. Ettinger. Cryonics Institute. 
  61. Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now
  62. International Journal of Machine Consciousness Introduction.
  63. The Philosophy of ‘Her.’ The New York Times.
  64. How to Survive the End of the Universe. Discover Magazine.
  65. A Space-Time Crystal to Outlive the Universe. Universe Today.
  66. Conjunction Fallacy. Less Wrong.
  67. Cognitive Biases Potentially Affecting Judgment of Global Risks.
  68. Genetic influence on human lifespan and longevity.
  69. First Drug Shown to Extend Life Span in Mammals. MIT Technology Review.
  70. Sirolimus (Oral Route). Mayo Clinic.
  71. Micromorts. Understanding Uncertainty.
  72. Falls. WHO.
  73. Smoke alarm outreach materials.  US Fire Administration.
  74. What causes choking? 17 possible conditions. Healthline.
  75. Choking. Better Health Channel.
  76. Aspiration pneumonia. HealthCentral.
  77. First aid - Recovery position. NHS Choices.
  78. Electric Shock. HowStuffWorks.
  79. Hypothermia prevention. Mayo Clinic.
  80. Extreme Heat: A Prevention Guide to Promote Your Personal Health and Safety. CDC.
  81. Understanding the Lightning Threat: Minimizing Your Risk. National Weather Service.
  82. The Case Against QuikClot. The survival mom.
  83. Does the Perception that Stress Affects Health Matter? The Association with Health and Mortality.
  84. Cancer Prevention. WHO.
  85. Infections That Can Lead to Cancer. American Cancer Society.
  86. Pollution. American Cancer Society.
  87. Occupations or Occupational Groups Associated with Carcinogen Exposures. Canadian Centre for Occupational Health and Safety. 
  88. Radon. American Cancer Society.
  89. Medical radiation. American Cancer Society.
  90. Ultraviolet (UV) Radiation. American Cancer Society.
  91. An Unhealthy Glow. American Cancer Society.
  92. Sun exposure and vitamin D sufficiency.  
  93. Cell Phones and Cancer Risk. National Cancer Institute.
  94. Nutrition for Everyone. CDC.
  95. How Can I Tell If My Body is Missing Key Nutrients? Oprah.com.
  96. Decaffeination, Green Tea and Benefits. Teas etc.
  97. Red and Processed Meat Consumption and Risk of Incident Coronary Heart Disease, Stroke, and Diabetes Mellitus.
  98. Lifestyle interventions to increase longevity.
  99. Chemicals in Meat Cooked at High Temperatures and Cancer Risk. National Cancer Institute.
  100. Are You Living in a Simulation? 
  101. How reliable are scientific studies?
  102. Genomics: What You Should Know. Forbes.
  103. Organic foods: Are they safer? More nutritious? Mayo Clinic.
  104. Health screening - men - ages 18 to 39. MedlinePlus. 
  105. Why do I need medical checkups. Banner Health.
  106. Regular Check-Ups are Important. CDC.
  107. General health checks in adults for reducing morbidity and mortality from disease (Review).
  108. Let’s (Not) Get Physicals.
  109. Effectiveness of general practice-based health checks: a systematic review and meta-analysis.
  110. Supplements: Nutrition in a Pill? Mayo Clinic.
  111. Nutritional Effects of Food Processing. SelfNutritionData.
  112. What Is the Healthiest Drink? SFGate.
  113. Leading Causes of Death. CDC.
  114. Bias Detection in Meta-analysis. Statistical Help.
  115. The summary of Sodium Intake in Populations: Assessment of Evidence. Institute of Medicine.
  116. Compared With Usual Sodium Intake, Low- and Excessive-Sodium Diets Are Associated With Increased Mortality: A Meta-analysis.
  117. The Cochrane Review of Sodium and Health.

Who are your favorite "hidden rationalists"?

18 aarongertler 11 January 2015 06:26AM

Quick summary: "Hidden rationalists" are what I call authors who espouse rationalist principles, and probably think of themselves as rational people, but don't always write on "traditional" Less Wrong-ish topics and probably haven't heard of Less Wrong.

I've noticed that a lot of my rationalist friends seem to read the same ten blogs, and while it's great to have a core set of favorite authors, it's also nice to stretch out a bit and see how everyday rationalists are doing cool stuff in their own fields of expertise. I've found many people who push my rationalist buttons in fields of interest to me (journalism, fitness, etc.), and I'm sure other LWers have their own people in their own fields.

So I'm setting up this post as a place to link to/summarize the work of your favorite hidden rationalists. Be liberal with your suggestions!

Another way to phrase this: Who are the people/sources who give you the same feelings you get when you read your favorite LW posts, but who many of us probably haven't heard of?

 

Here's my list, to kick things off:

 

  • Peter Sandman, professional risk communication consultant. Often writes alongside Jody Lanard. Specialties: Effective communication, dealing with irrational people in a kind and efficient way, carefully weighing risks and benefits. My favorite recent post of his deals with empathy for Ebola victims and is a major, Slate Star Codex-esque tour de force. His "guestbook comments" page is better than his collection of web articles, but both are quite good.
  • Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I've seen. His big thing is "superslow training", where you perform short and extremely intense workouts (video here). I've been moving in this direction for about 18 months now, and I've been able to cut my workout time approximately in half without losing strength. May not work for everyone, but reminds me of Leverage Research's sleep experiments; if it happens to work for you, you gain a heck of a lot of time. I also love the way he emphasizes the utility of strength training for all ages/genders -- very different from what you'd see on a lot of weightlifting sites.
  • Philosophers' Mail. A website maintained by applied philosophers at the School of Life, which reminds me of a hippy-dippy European version of CFAR (in a good way). Not much science, but a lot of clever musings on the ways that philosophy can help us live, and some excellent summaries of philosophers who are hard to read in the original. (Their piece on Vermeer is a personal favorite, as is this essay on Simon Cowell.) The site recently stopped posting new material, but the School of Life now collects similar work through The Book of Life.

Finally, I'll mention something many more people are probably aware of: I Am A, where people with interesting lives and experiences answer questions about those things. Few sites are better for broadening one's horizons; lots of concentrated honesty. Plus, the chance to update on beliefs you didn't even know you had.



Once more: Who are the people/sources who give you the same feeling you get when you read your favorite LW posts, but who many of us probably haven't heard of?

 

New, Brief Popular-Level Introduction to AI Risks and Superintelligence

16 LyleN 23 January 2015 03:43PM

The very popular blog Wait But Why has published the first part of a two-part explanation/summary of AI risks and superintelligence, and it looks like the second part will be focused on Friendly AI. I found it very clear, reasonably thorough and appropriately urgent without signaling paranoia or fringe-ness. It may be a good article to share with interested friends.

[Link] Neural networks trained on expert Go games have just made a major leap

15 ESRogs 02 January 2015 03:48PM

From the arXiv:

Move Evaluation in Go Using Deep Convolutional Neural Networks

Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver

The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move.

This approach looks like it could be combined with MCTS. Here's their conclusion:

In this work, we showed that large deep convolutional neural networks can predict the next move made by Go experts with an accuracy that exceeds previous methods by a large margin, approximately matching human performance. Furthermore, this predictive accuracy translates into much stronger move evaluation and playing strength than has previously been possible. Without any search, the network is able to outperform traditional search based programs such as GnuGo, and compete with state-of-the-art MCTS programs such as Pachi and Fuego.

In Figure 2 we present a sample game played by the 12-layer CNN (with no search) versus Fuego (searching 100K rollouts per move) which was won by the neural network player. It is clear that the neural network has implicitly understood many sophisticated aspects of Go, including good shape (patterns that maximise long term effectiveness of stones), Fuseki (opening sequences), Joseki (corner patterns), Tesuji (tactical patterns), Ko fights (intricate tactical battles involving repeated recapture of the same stones), territory (ownership of points), and influence (long-term potential for territory). It is remarkable that a single, unified, straightforward architecture can master these elements of the game to such a degree, and without any explicit lookahead.

On the other hand, we note that the network still has weaknesses: notably it sometimes fails to understand the global picture, behaving as if the life and death status of large groups has been incorrectly assessed. Interestingly, it is precisely these global aspects of the game for which Monte-Carlo search excels, suggesting that these two techniques may be largely complementary. We have provided a preliminary proof-of-concept that MCTS and deep neural networks may be combined effectively. It appears that we now have two core elements that scale effectively with increased computational resource: scalable planning, using Monte-Carlo search; and scalable evaluation functions, using deep neural networks. In the future, as parallel computation units such as GPUs continue to increase in performance, we believe that this trajectory of research will lead to considerably stronger programs than are currently possible.
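For readers who want to see the shape of such a setup, here is a minimal, hypothetical sketch in PyTorch of supervised move prediction. It is a toy, not the paper's architecture: the 3 input feature planes, the width of 64, and the flat 0-360 move labels are simplifying assumptions of mine; the authors' network is considerably larger and uses richer board features.

```python
import torch
import torch.nn as nn

class GoMoveNet(nn.Module):
    """A 12-layer CNN mapping board feature planes to per-point move logits."""
    def __init__(self, planes=3, width=64, depth=12):
        super().__init__()
        layers = [nn.Conv2d(planes, width, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU()]
        layers.append(nn.Conv2d(width, 1, kernel_size=1))  # one logit per point
        self.net = nn.Sequential(*layers)

    def forward(self, boards):               # boards: (N, planes, 19, 19)
        return self.net(boards).flatten(1)   # logits: (N, 361)

net = GoMoveNet()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One supervised step on a dummy batch standing in for expert positions.
boards = torch.randn(8, 3, 19, 19)           # 8 encoded positions
expert_moves = torch.randint(0, 361, (8,))   # expert move as a flat index
loss = loss_fn(net(boards), expert_moves)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```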

H/T: Ken Regan

Edit -- see also: Teaching Deep Convolutional Neural Networks to Play Go (also published to the arXiv in December 2014), and Why Neural Networks Look Set to Thrash the Best Human Go Players for the First Time (MIT Technology Review article)

[Link] An argument on colds

14 Konkvistador 18 January 2015 07:16PM

Source.

It's illegal to work around food when showing symptoms of contagious diseases. Why not the same for everyone else? Each person who gets a cold infects one other person on average. We could probably cut infection rates and the frequency of colds in half if sick people didn't come in to work.

And if we want better biosecurity, why not also require people to be able to reschedule flights if a doctor certifies they have a contagious disease?

Due to the 'externalities', the case seems very compelling.

Moving my commentary to a separate comment, so as to disambiguate votes on my commentary and the original argument.

Negative polyamory outcomes?

14 atorm 05 January 2015 12:25PM

Related article: Polyhacking

Note: This article was posted earlier for less than a day but accidentally deleted.

 

Although polyamory isn't one of the "official" topics of LW interest (human cognition, AI, probability, etc...), this is the only community I'm part of where I expect a sufficiently high number of members to have experience with it to give useful feedback. 

 

If you go looking for advice or articles about polyamory on the internet, you mostly get stuff written by polyamorists who are happy with their decisions. Is this selection bias? Where are the people whose relationships (or social lives, or anything) got damaged or ruined by experimenting with Consensual Non-Monogamy?

 

I'm posting this hoping for feedback, negative AND positive, on experiences with polyamory. I considered putting this in an Open Thread, but it occurred to me that many other LW readers might be interested in whether polyamory has drawbacks they need to be aware of. If you have experience with CNM (including first-hand witnessing, which has the added bonus of not requiring you to out yourself while still participating in the dialogue), please comment with your overall impression and as much detail as you would like to include (I am also putting my experiences there rather than in this post). If you've seen multiple poly relationships, multiple comments would make tallying slightly easier. I will try to upvote people who feed me data, a la LW surveys. If there are sufficient comments, I will periodically go through them and post a rough ratio of good to bad experiences at the bottom of this article.

PSA: The Username account is available for use by any who wish to remain anonymous. The password is left as an exercise for the reader. Hat tip... Username.

Brain-centredness and mind uploading

14 gedymin 02 January 2015 12:23PM

The naïve way of understanding mind uploading is "we take the connectome of a brain, including synaptic connection weights and characters, and emulate it in a computer". However, people want their personalities to be uploaded, not just brains. That is more than just replicating the functionality of their brains in silico.

This nuance has led to some misunderstandings, for example, to experts wondering [1] why on Earth anyone would think that brain-centredness [2] (the idea that brains are "sufficient" in some vague sense) is a necessary prerequisite for successful whole brain emulation. Of course, brain-centredness is not required for brain uploading to be technically successful; the question is whether it is sufficient for mind uploading in the sense that people actually care about.

 

The first obvious extension that may be required is the chemical environment of the brain. Here are some examples:

  • Are you familiar with someone whose personality is radically (and often predictably) altered under the influence of alcohol or drugs? This is not the exception but the rule: most people are affected this way, just to a smaller extent. Only the transiency of the effects allows us to label them as simple mood changes.
  • I have observed that my personal level of neuroticism varies depending on the pharmaceutical drugs I'm using. Nootropics make me more nervous, while anti-hypertension drugs have the reverse effect.
  • The levels of hormones in the blood act as long-term influences on personality. Some neurotransmitters are themselves slow-acting, for example nitric oxide [3].
  • Artificially enhanced levels of serotonin in the brain cause it to "adapt" to this environment; this is how some antidepressants (namely, SSRIs) work [4].

Whole Brain Emulation - A Roadmap includes a short section about the "Body chemical environment" and concludes that for "WBE, the body chemistry model, while involved, would be relatively simple", unless protein interactions have to be modelled.

The technical aspect notwithstanding, what are the practical and moral implications? I think that here's not only a problem, but also an opportunity. Why keep the accidental chemistry we have developed in our lifetimes, one that presumably has little relation to what we would really like to be - if we could? Imagine that it is possible to create carefully improved and tailored versions of the neurotransmitter "soup" in the brain. There are new possibilities here for personal growth in ways that have not been possible before. These ways are completely orthogonal to the intelligence enhancement opportunities commonly associated with uploading.

The question of personal identity is more difficult, and there appears to be a grey zone here. A fictional example comes to mind: the protagonist of Planescape: Torment. Is he the same person in each of his incarnations?

 

The second extension required to upload our personalities in the fullest sense might be the peripheral nervous system. Most of us think it's the brain that's responsible for emotions, but this is a simplified picture. Here are some hints why:

  • The James-Lange theory of emotions, from the 19th century, proposed that we experience emotion in response to physiological changes in our body. For example, we feel sad because we cry, rather than cry because we are sad [5]. While the modern understanding of emotions is significantly different, these ideas have not completely gone away from either academic research [5] or everyday life. For example, to calm down, we are advised to take deep and slow breaths. Paraplegics and quadriplegics with severe spinal cord injuries typically experience less intense emotions than other people [6].
  • Endoscopic thoracic sympathectomy (ETS) is a surgical procedure in which a portion of the sympathetic nerve trunk in the thoracic region is destroyed [7]. It is typically used against excessive hand sweating. However, "a large study of psychiatric patients treated with this surgery [also] showed significant reductions in fear, alertness and arousal [..] A severe possible consequence of thoracic sympathectomy is corposcindosis (split-body syndrome) [..] In 2003 ETS was banned in Sweden due to overwhelming complaints by disabled patients." The complaints include not having been able to lead emotional lives as full as before the operation.
  • The enteric nervous system in the gut "governs the function of the gastrointestinal system" [8]. I'm not sure how solid the research is, but there are a lot of articles on the Web that mention the importance of this system to our mood and well-being [9]. Serotonin is "the happiness neurotransmitter", and "in fact 95 percent of the body's serotonin is found in the bowels", as is 50 percent of the body's dopamine [8]. "Gut bacteria may influence thoughts and behaviour" [10] by using the serotonin mechanism. Also, "Irritable bowel syndrome is associated with psychiatric illness" [10].

 

In short, different chemistry in the brain changes what we are, as does the peripheral nervous system. To upload someone in the fullest sense, his/her chemistry and PNS also have to be uploaded.

[1] Randal Koene on whole brain emulation

[2] Anders Sandberg, Nick Bostrom, Future of Humanity Institute, Whole Brain Emulation - A Roadmap.

[3] Bradley Voytek's (Ph.D. neuroscience) Quora answer to Will human consciousness ever be transferrable?

[4] Selective serotonin reuptake inhibitors

[5] Bear et al. Neuroscience: Exploring the Brain, 3rd edition. Page 564.

[6] Michael W. Eysenck - Perspectives On Psychology - Page 100 - Google Books Result

[7] Endoscopic thoracic sympathectomy

[8] Enteric nervous system

[9] Scientific American, 2010. Think Twice: How the Gut's "Second Brain" Influences Mood and Well-Being

[10] The Guardian, 2012. Microbes manipulate your mind

Slides online from "The Future of AI: Opportunities and Challenges"

13 ciphergoth 16 January 2015 11:17AM

In the first weekend of this year, the Future of Life Institute hosted a landmark conference in Puerto Rico: "The Future of AI: Opportunities and Challenges". The conference was unusual in that it was not made public until it was over, and the discussions were under the Chatham House Rule. The slides from the conference are now available. The list of attendees includes a great many famous names as well as lots of names familiar to those of us on Less Wrong: Elon Musk, Sam Harris, Margaret Boden, Thomas Dietterich, all three DeepMind founders, and many more.

This is shaping up to be another extraordinary year for AI risk concerns going mainstream!

How Islamic terrorists reduced terrorism in the US

13 PhilGoetz 11 January 2015 05:19AM

Yesterday I was using the Global Terrorism Database to check some surprisingly low figures on what percentage of terrorist acts are committed by Muslims. (Short answer: worldwide since 2000, about 80%, rather than the 0.4-6% given in various sources.) But I found some odd patterns in the data for the United States. Look at this chart of terrorist acts in the US which meet GTD criteria I-III and are listed as "unambiguous":



There were over 200 bombings in the US in 1970 alone, by all sorts of political groups (the Puerto Rican Liberation Front, the Jewish Defense League, the Weathermen, the Black Panthers, anti-Castro groups, white supremacists, etc., etc.) There was essentially no religious terrorism; that came in the 80s and 90s. But let's zoom in on 1978 onward, after the crazy period we inaccurately call "the sixties". First, a count of Islamic terrorist acts worldwide:

Islamic terrorist acts worldwide
This is incomplete, because the database contains over 400 Islamic terrorist groups but only lets me select 300 groups at a time. (Al Qaeda is one of the groups not included here.) Also, this doesn't list any acts committed without direct supervision from a recognized terrorist group, nor acts whose perpetrators were not identified (about 77% of the database, estimated from a sample of 100, with the vast majority of those unknowns in Muslim countries). But we can see there's an increase after 2000.

Now let's look at terrorist acts of all kinds in the US:

Terrorist acts in the US, 1970-2013

We see a dramatic drop in terrorist acts in the US after 2000. Sampling them, I found that except for less than a handful of white supremacists, there are only 3 types of terrorists still active in the US: Nutcases, animal liberation activists, and Muslims. If we exclude cases of property damage (which has never terrified me), it's basically just nutcases and Muslims.

Going by body count, it may still be an increase, because even if you exclude 9/11, just a handful of Muslim attacks still accounted for 50% of US fatalities in terrorist attacks from 2000 through 2013. But counting incidents, by 2005 there were about 1/3 as many per year as just before 2000. From 2000 to 2013 there were only 6 violent terrorist attacks in the US by non-Islamic terrorist groups that were not directed solely at property damage, resulting in 2 fatalities over those 14 years. Violent non-Islamic organized terrorism in the US has been effectively eliminated.

Some of this reduction is because we've massively expanded our counter-terrorism agencies. But if that were the explanation, given that homeland security doesn't stop all of the Islamic attacks they're focused on, surely we would see more than 6 attacks by other groups in 14 years.

Much of the reduction might be for non-obvious reasons, like whatever happened around 1980. But I think the most-obvious hypothesis is that Islamic terrorists gave terrorism a bad name. In the sixties, terrorism was almost cool. You could conceivably get laid by blowing up an Army recruiting center. Now, though, there's such a stigma associated with terrorism that even the Ku Klux Klan doesn't want to be associated with it. Islamists made terrorism un-American. In doing so, they reduced the total incidence of terrorism in America. Talk about unintended consequences.



On a completely different note, I couldn't help but notice one other glaring thing in the US data: terrorist acts attributed to "Individual" (a lone terrorist not part of an organization). I checked 200 cases from other countries and did not find one case tagged "Individual". But half of all attributed cases in the US from 2000-2013 are tagged "Individual". The lone gunman thing, where someone flips out and shoots up a Navy base, or bombs a government building because of a conspiracy theory, is distinctively American.

Perhaps Americans really are more enterprising than people of other nations. Perhaps other countries can't do the detective work to attribute acts to individuals. Perhaps their rate of non-lone wolf terrorism is so high that the lone wolf terrorists disappear in the data. Perhaps we're more accepting of "defending our freedom" as an excuse for shooting people. Perhaps psychotic delusions of being oppressed don't thrive well in countries that have plenty of highly-visible oppression. But perhaps Americans really do have a staggeringly-higher rate of mental illness than everyone else in the world. (Yes, suspicious study is suspicious, but... it is possible.)

2015 Repository Reruns - Boring Advice Repository

13 TrE 08 January 2015 06:00PM

 

This is the first post of the 2015 repository rerun, which appears to be a good idea. The motivation for this rerun is that while the 12 repositories (go look them up, they're awesome!) exist and people might look them up, few new comments are posted there. In effect, there might be useful stuff that should go in those repositories, but is never posted due to low expected value and no feedback. With the rerun, attention is shifted to one topic per month. This might allow us to have a lively discussion on the topic at hand and gather new content for the repository.

continue reading »

Non-obvious skills with highly measurable progress?

13 robot-dreams 03 January 2015 12:23AM

A lot of my significant personal improvement happened as a result of highly measurable progress and tight feedback loops.  For example:

  • Project Euler
  • Go (the game has a very accurate ranking system)
  • Strength training

However, these are somewhat obvious examples, and I feel like it would be a waste not to push such a useful improvement mechanism as far as possible.

What are some non-obvious examples of skills with highly measurable progress and tight feedback loops?

Exams and Overfitting

12 robot-dreams 06 January 2015 07:35PM

When I hear something like "What's going to be on the exam?", part of me gets indignant.  WHAT?!?!  You're defeating the whole point of the exam!  You're committing the Deadly Sin of Overfitting!

Let me step back and explain my view of exams.

When I take a class, my goal is to learn the material.  Exams are a way to answer the question, "How well did I learn the material?"[1].  But exams are only a few hours long, so it's unfeasible to have questions on all of the material.  To deal with this time constraint, an exam takes a random sample of the material and gives me a "statistical" rather than "perfect" answer to the question, "How well did I learn the material?"

If I know in advance what topics will be covered on the exam, and if I then prepare for the exam by learning only those topics, then I am screwing up this whole process.  By doing very well on the exam, I get the information, "Congratulations!  You learned the material covered on the exam very well."  But who knows how well I learned the material covered in class as a whole?  This is a textbook case of overfitting.
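To put the sampling analogy in concrete terms, here is a toy simulation of mine (the topic counts are made up): the exam samples 10 of 100 topics, and a student who learns only the leaked sample aces the exam while the score says nothing about the other 90 topics.

```python
# Toy model: an exam is a random sample of the course's topics.
import random

random.seed(0)
TOPICS = range(100)                  # the course covers 100 topics
exam = random.sample(TOPICS, 10)     # the exam samples 10 of them

def exam_score(known_topics):
    return sum(t in known_topics for t in exam) / len(exam)

honest_student = set(random.sample(TOPICS, 40))  # truly learned 40% of topics
overfit_student = set(exam)                      # learned only the leaked topics

print(exam_score(honest_student))   # about 0.4 in expectation: tracks mastery
print(exam_score(overfit_student))  # 1.0: reveals nothing about the rest
```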

To be clear, I don't necessarily lose respect for someone who asks, "What's going to be on the exam?".  I understand that different people have different priorities[2], and that's fine by me.  But if you're taking a class because you truly want to learn the material, in spite of any sacrifices that you might have to make to do so[3], then I'd like to encourage you not to "study for the test".  I'd like to encourage you not to overfit.


[1] When I say "learned", I mean in the "Feynman" sense, not in the "teacher's password" sense.  I believe that a necessary (but not sufficient) condition for an exam to check for this kind of learning is to have problems that I've never seen before.

[2] Someone might care much more about getting into medical school than, say, mastering classical mechanics.  I respect that choice, and I acknowledge that someone might be in a system where getting a good grade in physics is required for getting into medical school, even though mastering classical mechanics isn't required for becoming a good doctor.

[3] There were a few terms when I felt like I did a really good job of learning the material (conveniently, I also got really good grades during these terms).  But for these terms, one (or both) of the following would happen:

  • I would take a huge hit in social status, because I was taking barely more than the minimum courseload.  At my university, there was a lot of social pressure to always take the maximum courseload (or petition to exceed the maximum courseload), and still participate in lots of extracurricular activities.
  • My girlfriend at the time would break up with me because of all the time I was spending on my coursework (and not with her).

 

Prediction Markets are Confounded - Implications for the feasibility of Futarchy

11 Anders_H 26 January 2015 10:39PM

(tl;dr:  In this post, I show that prediction markets estimate non-causal probabilities, and can therefore not be used for decision making by rational agents following causal decision theory.  I provide an example of a simple situation where such confounding leads to a society which has implemented futarchy making an incorrect decision)

 

It is October 2016, and the US Presidential Elections are nearing. The most powerful nation on earth is about to make a momentous decision about whether being the brother of a former president is a more impressive qualification than being the wife of a former president. However, one additional criterion has recently become relevant in light of current affairs:   Kim Jong-Un, Great Leader of the Glorious Nation of North Korea, is making noise about his deep hatred for Hillary Clinton. He also occasionally discusses the possibility of nuking a major US city. The US electorate, desperate to avoid being nuked, have come up with an ingenious plan: They set up a prediction market to determine whether electing Hillary will impact the probability of a nuclear attack. 

The following rules are stipulated: There are four possible outcomes: "Hillary elected and US nuked", "Hillary elected and US not nuked", "Jeb elected and US nuked", "Jeb elected and US not nuked". Participants in the market can buy and sell contracts for each of those outcomes; the contract which corresponds to the actual outcome will expire at $100, and all other contracts will expire at $0.

Simultaneously, in a country far, far away, a rebellion is brewing against the Great Leader. The potential challenger not only appears to have no problem with Hillary; he also seems like a reasonable guy who would be unlikely to use nuclear weapons. It is generally believed that the challenger will take power with probability 3/7, and will be exposed and tortured in a forced labor camp for the rest of his miserable life with probability 4/7. Let us stipulate that this information is known to all participants - I am adding this clause in order to demonstrate that this argument does not rely on unknown information or information asymmetry.

A mysterious but trustworthy agent named "Laplace's Demon" has recently appeared and informed everyone that, to a first approximation, the world is currently in one of seven possible quantum states. The Demon, being a perfect Bayesian reasoner with Solomonoff priors, has determined that each of these states should be assigned probability 1/7. Knowledge of which state we are in will perfectly predict the future, with one important exception: it is possible for the US electorate to "intervene" by changing whether Clinton or Bush is elected. This will cause a ripple effect into all future events that depend on which candidate is elected President, but will otherwise change nothing.

The Demon swears up and down that the choice about whether Hillary or Jeb is elected has absolutely no impact in any of the seven possible quantum states. However, because the prediction market has already been set up and there are powerful people with vested interests, it is decided to run the market anyway.

Roughly, the Demon tells you that the world is in one of the following seven states:

 

State | Kim overthrown | Election winner (if no intervention) | US Nuked if Hillary elected | US Nuked if Jeb elected | US Nuked
1     | No             | Hillary                              | Yes                         | Yes                     | Yes
2     | No             | Hillary                              | No                          | No                      | No
3     | No             | Jeb                                  | Yes                         | Yes                     | Yes
4     | No             | Jeb                                  | No                          | No                      | No
5     | Yes            | Hillary                              | No                          | No                      | No
6     | Yes            | Jeb                                  | No                          | No                      | No
7     | Yes            | Jeb                                  | No                          | No                      | No


Let us use this table to define some probabilities: if one intervenes to make Hillary win the election, the probability of the US being nuked is 2/7 (this is seen from the fourth column). If one intervenes to make Jeb win, the probability of the US being nuked is also 2/7 (this is seen from the fifth column). In the language of causal inference, these probabilities are Pr[Nuked | Do(Elect Clinton)] and Pr[Nuked | Do(Elect Bush)]. The fact that these two quantities are equal confirms the Demon's claim that the choice of President has no effect on the outcome. An agent operating under causal decision theory will use this information to correctly conclude that he has no preference about whether to elect Hillary or Jeb.

However, if one were to condition on who actually was elected, we get different numbers: conditional on being in a state where Hillary is elected, the probability of the US being nuked is 1/3, whereas conditional on being in a state where Jeb is elected, the probability of being nuked is 1/4. Mathematically, these probabilities are Pr[Nuked | Clinton Elected] and Pr[Nuked | Bush Elected]. An agent operating under evidential decision theory will use this information to conclude that he should vote for Bush. Because evidential decision theory is wrong, he will fail to optimize for the outcome he is interested in.
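These numbers can be checked mechanically. Below is a small sketch of mine (the tuple encoding of the states is just a convenience for illustration, not part of the original post) that recomputes the interventional and conditional probabilities directly from the Demon's table.

```python
from fractions import Fraction

# Each state: (winner if no intervention, nuked if Hillary forced, nuked if Jeb forced)
states = [
    ("Hillary", True,  True),   # state 1
    ("Hillary", False, False),  # state 2
    ("Jeb",     True,  True),   # state 3
    ("Jeb",     False, False),  # state 4
    ("Hillary", False, False),  # state 5
    ("Jeb",     False, False),  # state 6
    ("Jeb",     False, False),  # state 7
]

def prob(outcomes):
    return Fraction(sum(outcomes), len(outcomes))

# Interventional probabilities: force the winner, read the matching column.
print(prob([h for _, h, _ in states]))  # Pr[Nuked | Do(Elect Clinton)] = 2/7
print(prob([j for _, _, j in states]))  # Pr[Nuked | Do(Elect Bush)]    = 2/7

# Observational probabilities: condition on who wins without intervention.
print(prob([h for w, h, _ in states if w == "Hillary"]))  # Pr[Nuked | Clinton elected] = 1/3
print(prob([j for w, _, j in states if w == "Jeb"]))      # Pr[Nuked | Bush elected]    = 1/4
```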

Now, let us ask ourselves which probabilities our prediction market will converge to, i.e. which probabilities participants in the market have an incentive to provide their best estimate of. We defined our contract as "Hillary is elected and the US is nuked". The probability of this occurring is 1/7; if we normalize by dividing by the marginal probability that Hillary is elected (3/7), we get 1/3, which is equal to Pr[Nuked | Clinton Elected]. In other words, the prediction market estimates the wrong quantities.

Essentially, what happens is structurally the same phenomenon as confounding in epidemiologic studies:  There was a common cause of Hillary being elected and the US being nuked.  This common cause - whether Kim Jong-Un was still Great Leader of North Korea - led to a correlation between the election of Hillary and the outcome, but that correlation is purely non-causal and not relevant to a rational decision maker. 

The obvious next question is whether there exists a way to save futarchy, i.e. any way to give traders an incentive to pay a price that reflects their beliefs about Pr[Nuked | Do(Elect Clinton)] instead of Pr[Nuked | Clinton Elected]. We discussed this question at the Less Wrong meetup in Boston a couple of months ago. The only way we agreed would definitely solve the problem is the following procedure:

 

  1. The governing body makes an absolute pre-commitment that no matter what happens, the next President will be determined solely on the basis of the prediction market 
  2. The following contracts are listed: “The US is nuked if Hillary is elected” and “The US is nuked if Jeb is elected”
  3. At the pre-specified date, the markets are closed and the President is chosen based on the estimated probabilities
  4. If Hillary is chosen,  the contract on Jeb cannot be settled, and all bets are reversed.  
  5. The Hillary contract is expired when it is known whether Kim Jong-Un presses the button. 

 

This procedure will get the correct results in theory, but it has the following practical problems: it allows maximizing only one outcome metric (because one cannot precommit to choose the President based on criteria that could potentially be inconsistent with each other). Moreover, it requires the reversal of trades, which will be problematic if people who won money on the Jeb contract have withdrawn their winnings from the exchange.

The only other option I can think of for obtaining causal information from a prediction market is to "control for confounding". If, for instance, the only confounder is whether Kim Jong-Un is overthrown, we can control for it by using do-calculus to show that Pr[Nuked | Do(Elect Clinton)] = Pr[Nuked | Clinton elected, Kim overthrown] × Pr[Kim overthrown] + Pr[Nuked | Clinton elected, Kim not overthrown] × Pr[Kim not overthrown]. All of these quantities can be estimated from separate prediction markets.
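As a sanity check, the following continues the sketch above (reusing its `states` list and `Fraction` import); with Kim overthrown in states 5-7 as the table shows, the adjustment formula does recover the interventional 2/7.

```python
# Reuses `states` and `Fraction` from the sketch above.
overthrown = [False, False, False, False, True, True, True]  # states 1..7

def pr_nuked_given(winner, kim_overthrown):
    sel = [h for (w, h, _), o in zip(states, overthrown)
           if w == winner and o == kim_overthrown]
    return Fraction(sum(sel), len(sel))

p_overthrown = Fraction(sum(overthrown), len(overthrown))  # 3/7
adjusted = (pr_nuked_given("Hillary", True) * p_overthrown
            + pr_nuked_given("Hillary", False) * (1 - p_overthrown))
print(adjusted)  # 2/7: the adjustment recovers Pr[Nuked | Do(Elect Clinton)]
```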

 However, this is problematic for several reasons:

 

  1. There will be an exponential explosion in the number of required prediction markets, and each of them will ask participants to bet on complicated conditional probabilities that have no obvious causal interpretation. 
  2. There may be disagreement on what the confounders are, which will lead to contested contract interpretations.
  3. The expert consensus on what the important confounders are may change during the lifetime of the contract, which would require the entire thing to be relisted.

For practical reasons, therefore, this approach does not seem feasible.

 

I’d like a discussion on the following questions:  Are there any other ways to list a contract that gives market participants an incentive to aggregate information on  causal quantities? If not, is futarchy doomed?

(Thanks to the Less Wrong meetup in Boston and particularly Jimrandomh for clarifying my thinking on this issue)

(I reserve the right to make substantial updates to this text in response to any feedback in the comments)

[LINK] Yudkowsky's Abridged Guide to Intelligent Characters

11 katydee 30 December 2014 06:37AM

Some of you have likely seen this already, but for those of you who haven't, Eliezer recently finished a series of Tumblr posts on writing intelligent characters in fiction. It can be found at http://yudkowsky.tumblr.com/writing and is IMO worth a read.

Purchasing research effectively open thread

10 John_Maxwell_IV 21 January 2015 12:24PM

Many of the biggest historical success stories in philanthropy have come in the form of funding for academic research.  This suggests that the topic of how to purchase such research well should be of interest to effective altruists.  Less Wrong survey results indicate that a nontrivial fraction of LW has firsthand experience with the academic research environment.  Inspired by the recent Elon Musk donation announcement, this is a thread for discussion of effectively using money to enable important, useful research.  Feel free to brainstorm your own questions and ideas before reading what's written in the thread.

... And Everyone Loses Their Minds

10 Ritalin 16 January 2015 11:38PM

Chris Nolan's Joker is a very clever guy, almost Monroesque in his ability to identify hypocrisy and inconsistency. One of his most interesting scenes in the film has him point out how people estimate horrible things differently depending on whether they're part of what's "normal", what's "expected", rather than on how inherently horrifying they are, or how many people are involved.

Soon people extrapolated this observation to other such apparent inconsistencies in human judgment, where a behaviour that once was acceptable, with a simple tweak or change in context, becomes the subject of a much more serious reaction.

I think there's rationalist merit in giving these inconsistencies a serious look. I intuit that there's some sort of underlying pattern to them, something that makes psychological sense, in the roundabout way that most irrational things do. I think that much good could come out of figuring out what that root cause is, and how to predict this effect and manage it.

Phenomena that come to mind are, for instance, from an Effective Altruism point of view, the expenses incurred in counter-terrorism (including some wars that were very expensive in treasure and lives) and the number of lives those expenses save, compared with the number of lives that could be saved by spending the same amount on improving road safety, increasing public healthcare spending where it would do the most good, building better lightning rods (in the USA you're four times more likely to be struck by lightning than by terrorists), or legalizing drugs.

What do y'all think? Why do people have their priorities all jumbled-up? How can we predict these effects? How can we work around them?

An example and discussion of extension neglect

10 emr 16 January 2015 06:10AM

I recently used an automatic tracker to learn how I was spending my time online. I learned that my perceptions were systematically biased: I spend less time than I thought on purely non-productive sites, and far more time on sites that are quasi-productive.

For example, I felt that I was spending too much time reading the news, but I learned that I spend hardly any time doing so. I didn't feel that I was spending much time reading Hacker News, but I was spending a huge amount of time there!

Is this a specific case of a more general error?

A general framing: "Paying too much attention to the grouping whose items have the most extreme quality, when the value of focusing on this grouping is eclipsed by the value of focusing on a larger grouping of less extreme items".

So in this case, once I had formed the desire to be more productive, I overestimated how much potential productive time I could gain by focusing on those sites that I felt were maximally non-productive, and underestimated the potential of focusing on marginally more productive sites.

In pseudo-technical terms: We think about items in groups. But then we think of the total value of a group as being closer to average_value than to average_value * size_of_group.
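To make this concrete, here is a toy illustration; the sites and numbers are invented purely to show the shape of the mistake.

```python
# Invented numbers: the "worst-feeling" group wastes less total time than
# a larger group of milder items, because totals scale with group size.
sites = {
    # site: (minutes judged "wasted" per visit, visits per week)
    "news":        (10, 3),   # feels maximally unproductive, rarely visited
    "hacker_news": (4, 60),   # feels mildly unproductive, visited constantly
}
for name, (minutes_per_visit, visits) in sites.items():
    print(name, minutes_per_visit * visits, "minutes/week")
# news: 30, hacker_news: 240 -- average_value * size_of_group dominates
```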

This falls under the category of Extension Neglect, which includes errors caused by ignoring the size of a set. Other patterns in this category are:

  • Base rate neglect: Inferring the category of an item as if all categories were the same size.
  • The peak-end rule: Giving the value of the ordered group as a function of max_value and end_value.
  • Not knowing how set size interacts with randomness.

For the error given above, some specific examples might be:

  • Health: Focusing too much on eating dessert at your favorite restaurant; and not enough on eating pizza three times a week.
  • Love: Fights and romantic moments; daily interaction.
  • Stress: Public speaking; commuting
  • Ethics: Improbable dilemmas; reducing suffering (or doing anything externally visible)
  • Crime: Serial killers; domestic violence

 

Identity crafting

10 robot-dreams 31 December 2014 06:34PM

I spend a LOT of time on what I'll call "identity crafting".  It's probably my most insidious procrastination tactic--far worse than, say, Facebook or Reddit.

What do I mean by "identity crafting"?  Here are some examples:

  • Brainstorming areas of my life where I want to improve (e.g. social skills, sleep habits)
  • Searching for new hobbies to start (e.g. snowboarding, guitar)
  • Making a "character sheet" for myself, complete with a huge list of "badass skills" that I'd want to learn (e.g. martial arts, lock picking)
  • Creating and revising my "four-year plan", i.e. schedule of university courses (at my university I had a lot of flexibility in which courses to take each term)
  • Finding books that I "ought to read" (bonus points if the list includes "The Art of Computer Programming") and movies that I "ought to watch"
  • Looking up variants on "renaissance man" (e.g. "Four Arts of the Chinese Scholar"), and imagining how I could become one

In other words, "identity crafting" is some combination of making lists and daydreaming.  And since the vast majority of the "identities" that I "craft" never become reality, I should really say that "identity crafting" is some combination of making lists and self-aggrandizing delusion.

What's so bad about this?  Besides the obvious waste of time, this gives me a false sense of accomplishment and productivity--I often feel as though the "identity" that I "crafted" were already real, and I often feel as though I've already done enough for the day (week, month, year).  Thus in the short term, this is a great way to ensure that I don't do any "actual work", and in the long term, this is a great way to become a poser with an epically inflated opinion of myself.

So... does anyone else do this?

The Rubber Hand Illusion and Preaching to the Unconverted

10 Gram_Stone 29 December 2014 12:56PM

Related Posts: The Apologist and the Revolutionary

 

It seems that the CFAR workshops so far have been dedicated to people who have preconceptions pretty close in ideaspace to the sorts of ideas proposed on LW and by the institutions related to it. This is not a criticism; it's easier to start out this way: as has been said, in a different context and perhaps not in so many words, we should focus on precision before tractability. We're not going to learn a thing about the effectiveness of rationality training from people who won't even listen to what we have to say. Nevertheless, there will come a day when these efforts must be expanded to people who don't already view us as high in social status, so we still have to solve the problem of people being more concerned with both our and their social status than with listening to what we have to say. I propose that the solution is to divorce the consideration of social status from the argument.

 

There is a lot of talk of cognitive biases on LW, and for good reason, but ultimately what we are trying to teach people is that they are prone to misinterpreting reality, and cognitive biases are only one component of this. One of the problems with trying to teach people about biases is that people feel personally responsible for being biased; many people have a conception of thinking as an 'active' process, so they feel as though it reflects upon their character. On the other hand, many people conceive of perception as a 'passive' process; no one feels personally responsible for what they perceive. So, I propose that we circumvent this fear of character assassination by demonstrating how people can misinterpret reality through perception. Enter: the rubber hand illusion.

 

In case you're unfamiliar with this illusion: to demonstrate the rubber hand illusion, a subject sits at a table with a rubber hand placed in front of them, oriented relative to their body as a natural hand would be, and a partition is placed between the rubber hand and their 'real' hand so that the 'real' hand is hidden from view. The experimenter then 'stimulates' both hands simultaneously at random intervals (usually by stroking each hand with a paintbrush). Then the experimenter overextends the tip of a finger on each hand, the rubber hand's by about 90 degrees and the 'real' hand's by about 20 degrees (it's not really overextension, and it wouldn't cause pain outside of the experiment's conditions). Measurements of skin conductance response indicate that subjects anticipate pain when this is done, and a very small selection of subjects even report actually experiencing pain. Also (just for kicks), when subjects are asked about the degree to which they believe their 'real' finger was bent, they overestimate by an average of about 20 degrees.


As Dr. Vilayanur Ramachandran has demonstrated, the rubber hand illusion isn't the most general example of this sort of illusion: the human mind can even anticipate pain from injury to the surface of a table. In fact, there is evidence that the human mind's evaluation of what is and is not part of its body isn't even dependent upon distance: Dr. Ramachandran has also demonstrated this with rubber hands attached to unnaturally long rubber arms.


I think that there are also three beneficial side effects to this exercise. (1) We are trying to convince people that Bayesian inference is a useful way to form beliefs, and this illusion demonstrates that every human mind already unconsciously uses Bayesian inference all of the time (namely, to infer what is and isn't its body). To further demonstrate the part about Bayesian inference, I would suggest that subjects also subsequently be shown how the illusion does not occur when the rubber hand is perpendicular to the 'real' hand or when the 'stimulations' aren't simultaneous. (2) After the fact, the demonstration grants social status to the demonstrator in the eyes of the subject: "This person showed me something that I consider extremely significant and that I didn't know about, therefore, they must be important." (3) Inconsistencies in perception instill feelings of self-doubt and incredulity, which makes it easier to change one's mind.
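To make the Bayesian reading in (1) concrete, here is a minimal sketch with illustrative numbers of my own (not parameters from the experimental literature): each synchronous stroke is evidence for the hypothesis "that hand is part of my body", because synchrony is far more likely under that hypothesis than under coincidence.

```python
# Toy Bayesian update for body ownership. All numbers are illustrative.
# H = "the rubber hand is part of my body".
prior = 0.01                  # a rubber hand on a table starts out implausible
p_sync_given_H     = 0.95     # my own hand: felt and seen touch co-occur
p_sync_given_not_H = 0.05     # unrelated object: such coincidences are rare

posterior = prior
for _ in range(10):           # ten synchronous stroke observations
    numerator = p_sync_given_H * posterior
    posterior = numerator / (numerator + p_sync_given_not_H * (1 - posterior))
print(posterior)              # ~1.0: the ownership illusion takes hold

# With asynchronous stroking the likelihood ratio flips, and the posterior
# stays near zero, matching the asynchronous and perpendicular controls.
```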

 

Addendum: This post has been substantially edited, both for brevity and on the basis of mistakes mentioned in the comments, such that some of the comments now appear nonsensical. Here is a draft that I found on my desktop which as far as I can tell is identical to the original post: http://pastebin.com/BL81VQVp


Donate to Keep Charity Science Running

9 peter_hurford 27 January 2015 02:45AM

Charity Science is looking for $35,000 to fund our 2015 operations. We fundraise for GiveWell-recommended charities, and over 2014 we moved over $150,000 to them that wouldn’t have been given otherwise: that’s $9 for every $1 we spent. We can’t do this work without your support, so please consider making a donation to us - however small, it will be appreciated. Donate now and you’ll also be matched by Matt Wage.

The donations pages below list other reasons to donate to us, which include:

  • Our costs are extremely low: the $35,000 CAD pays for three to four full-time staff.
  • We experiment with many different forms of fundraising and record detailed information on how these experiments go, so funding us lets the whole EA community learn about their prospects.
  • We carefully track how much money each experiment raises, subtract money which would have been given anyway, and shut down experiments that don’t work.
  • Our fundraising still has many opportunities to continue to scale as we try new ideas we haven’t tested yet.

There’s much more information, including our full budget and what we’d do if we raised over $35,000, in the linked document, and we’d be happy to answer any questions. Thank you in advance for your consideration.

Donate in American dollars 

Donate in British pounds 

Donate in Canadian dollars

The guardian article on longevity research [link]

8 ike 11 January 2015 07:02PM

Comments on "When Bayesian Inference Shatters"?

8 Crystalist 07 January 2015 10:56PM

I recently ran across this post, which gives a lighter discussion of a recent paper on Bayesian inference ("On the Brittleness of Bayesian Inference"). I don't understand it, but I'd like to, and it seems like the sort of paper other people here might enjoy discussing.

I am not a statistician, and this summary is based on the blog post (I haven't had time to read the paper yet), so please discount my summary accordingly: It looks like the paper focuses on the effects of priors and underlying models on the posterior distribution. Given a continuous distribution (or a discrete approximation of one) to be estimated from finite observations (of sufficiently high precision), and finite priors, the range of posterior estimates is the same as the range of the distribution to be estimated. Given models that are arbitrarily close in the total variation metric (I'm not familiar with this metric, but the impression I had was that, for finite accuracy, they produce the same observations with arbitrarily similar probability), you can have posterior estimates that are arbitrarily distant (within the range of the distribution to be estimated) given the same information. My impression is that implicitly relying on arbitrary precision of a prior can give updates that are diametrically opposed to the ones you'd get with different, but arbitrarily similar, priors.
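Here is a toy demonstration of the mechanism as I understand it (my own construction, far cruder than the paper's): two priors within about 1e-9 of each other in total variation give diametrically opposed posterior means, because a high-precision observation is only explained by regions where each prior puts a vanishing sliver of mass.

```python
import numpy as np

# Two priors over theta in {-1, 0, +1}: almost all mass at 0, plus a
# sliver at +1 (prior A) or at -1 (prior B). Numbers are illustrative.
theta   = np.array([-1.0, 0.0, 1.0])
prior_a = np.array([0.0, 1 - 1e-9, 1e-9])
prior_b = np.array([1e-9, 1 - 1e-9, 0.0])

# A very precise observation that is essentially impossible under theta = 0
# and equally likely under theta = -1 and theta = +1.
likelihood = np.array([1.0, 1e-15, 1.0])

def posterior_mean(prior):
    w = prior * likelihood
    return float((theta * w).sum() / w.sum())

print(posterior_mean(prior_a))  # ~ +1
print(posterior_mean(prior_b))  # ~ -1
```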

 

First, of course, I want to know whether my summary is accurate, misses the point, or is wrong.

Second, I'd be interested in hearing discussions of the paper in general and whether it might have any immediate impact on practical applications.

Some other areas of discussion that would be of interest to me: I'm not entirely sure what 'sufficiently high precision' would be. I also have only a vague idea of the circumstances where you'd be implicitly relying on the arbitrary precision of a prior. And I'm just generally interested in hearing what people more experienced/intelligent than I am might have to say here.

Stupid Questions January 2015

8 Gondolinian 01 January 2015 02:30AM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Superintelligence 19: Post-transition formation of a singleton

7 KatjaGrace 20 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the nineteenth section in the reading guide: Post-transition formation of a singleton. This corresponds to the last part of Chapter 11.

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily the part being cited for the specific claim).

Reading: “Post-transition formation of a singleton?” from Chapter 11


Summary

  1. Even if the world remains multipolar through a transition to machine intelligence, a singleton might emerge later, for instance during a transition to a more extreme technology. (p176-7)
  2. If everything is faster after the first transition, a second transition may be more or less likely to produce a singleton. (p177)
  3. Emulations may give rise to 'superorganisms': clans of emulations who care wholly about their group. These would have an advantage because they could avoid agency problems, and make various uses of the ability to delete members. (p178-80) 
  4. Improvements in surveillance resulting from machine intelligence might allow better coordination; however, machine intelligence will also make concealment easier, and it is unclear which force will be stronger. (p180-1)
  5. Machine minds may be able to make clearer precommitments than humans, changing the nature of bargaining somewhat. Maybe this would produce a singleton. (p183-4)

Another view

Many of the ideas around superorganisms come from Carl Shulman's paper, Whole Brain Emulation and the Evolution of Superorganisms. Robin Hanson critiques it:

...It seems to me that Shulman actually offers two somewhat different arguments, 1) an abstract argument that future evolution generically leads to superorganisms, because their costs are generally less than their benefits, and 2) a more concrete argument, that emulations in particular have especially low costs and high benefits...

...On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.

This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.

In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.

On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best at doing an average of tasks, may be much worse at each task than the best em for that task.

Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.

On the concrete coordination gains that Shulman sees from superorganism ems, most of these gains seem cheaply achievable via simple long-standard human coordination mechanisms: property rights, contracts, and trade. Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.

With ems there is the added advantage that em copies can agree to the “terms” of their life deals before they are created. An em would agree that it starts life with certain resources, and that life will end when it can no longer pay to live. Yes there would be some selection for humans and ems who peacefully accept such deals, but probably much less than needed to get loyal devotion to and shared values with a superorganism.

Yes, with high value sharing ems might be less tempted to steal from other copies of themselves to survive. But this hardly implies that such ems no longer need property rights enforced. They’d need property rights to prevent theft by copies of other ems, including being enslaved by them. Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.

Shulman seems to argue both that superorganisms are a natural endpoint of evolution, and that ems are especially supportive of superorganisms. But at most he has shown that ems organizations may be at a somewhat larger scale, not that they would reach civilization-encompassing scales. In general, creatures who share values can indeed coordinate better, but perhaps not by much, and it can be costly to achieve and maintain shared values. I see no coordinate-by-values free lunch...

Notes

1. The natural endpoint

Bostrom says that a singleton is a natural conclusion of the long-term trend toward larger scales of political integration (p176). It seems helpful here to be more precise about what we mean by singleton. Something like a world government does seem to be a natural conclusion to long-term trends. However, this seems different from the kind of singleton I took Bostrom to be talking about previously. A world government would by default only make a certain class of decisions, for instance about global-level policies. There has been a long-term trend for the largest political units to become larger; however, there have always been smaller units as well, making different classes of decisions, down to the individual. I'm not sure how to measure the mass of decisions made by different parties, but it seems like individuals may be making more decisions more freely than ever, and the large political units have less ability than they once did to act against the will of the population. So the long-term trend doesn't seem to point to an overpowering ruler of everything.

2. How value-aligned would emulated copies of the same person be?

Bostrom doesn't say exactly how 'emulations that were wholly altruistic toward their copy-siblings' would emerge. It seems to be some combination of natural 'altruism' toward oneself and selection for people who react to copies of themselves with extreme altruism (confirmed by a longer interesting discussion in Shulman's paper). How easily one might select for such people depends on how humans generally react to being copied. In particular, whether they treat a copy like part of themselves, or merely like a very similar acquaintance.

The answer to this doesn't seem obvious. Copies seem likely to agree strongly on questions of global values, such as whether the world should be more capitalistic, or whether it is admirable to work in technology. However I expect many—perhaps most—failures of coordination come from differences in selfish values—e.g. I want me to have money, and you want you to have money. And if you copy a person, it seems fairly likely to me the copies will both still want the money themselves, more or less.

From other examples of similar people—identical twins, family, people and their future selves—it seems people are unusually altruistic to similar people, but still very far from 'wholly altruistic'. Emulation siblings would be much more similar than identical twins, but who knows how far that would move their altruism?

Shulman points out that many people hold views about personal identity that would imply that copies share identity to some extent. The translation between philosophical views and actual motivations is not always complete however.

3. Contemporary family clans

Family-run firms are a place to get some information about the trade-off between reducing agency problems and having access to a wide range of potential employees. From a brief perusal of the internet, it seems ambiguous whether they do better. One could try to separate out the factors that help them do better or worse.

4. How big a problem is disloyalty?

I wondered how big a problem insider disloyalty really was for companies and other organizations. Would it really be worth all this loyalty testing? I can't find much about it quickly, but 59% of respondents to a survey apparently said they had some kind of problems with insiders. The same report suggests that a bunch of costly initiatives such as intensive psychological testing are currently on the table to address the problem. Also apparently it's enough of a problem for someone to be trying to solve it with mind-reading, though that probably doesn't say much.

5. AI already contributing to the surveillance-secrecy arms race

Artificial intelligence will help with surveillance sooner and more broadly than just in observing people's motives; see, e.g., here and here.

6. SMBC is also pondering these topics this week



In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. What are the present and historical barriers to coordination, between people and organizations? How much have these been lowered so far? How much difference has it made to the scale of organizations, and to productivity? How much further should we expect these barriers to be lessened as a result of machine intelligence?
  2. Investigate the implications of machine intelligence for surveillance and secrecy in more depth.
  3. Are multipolar scenarios safer than singleton scenarios? Muehlhauser suggests directions.
  4. Explore ideas for safety in a singleton scenario via temporarily multipolar AI. e.g. uploading FAI researchers (See Salamon & Shulman, “Whole Brain Emulation, as a platform for creating safe AGI.”)
  5. Which kinds of multipolar scenarios would be more likely to resolve into a singleton, and how quickly?
  6. Can we get whole brain emulation without producing neuromorphic AGI slightly earlier or shortly afterward? See section 3.2 of Eckersley & Sandberg (2013).
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the 'value loading problem'. To prepare, read “The value-loading problem” through “Motivational scaffolding” from Chapter 12. The discussion will go live at 6pm Pacific time next Monday, 26 January. Sign up to be notified here.

What topics are appropriate for LessWrong?

7 tog 12 January 2015 06:58PM

For example, what would be inappropriately off-topic to post about in LessWrong Discussion?

I couldn't find an answer in the FAQ. (Perhaps it'd be worth adding one.) The closest I could find was this:

What is Less Wrong?

Less Wrong is an online community for discussion of rationality. Topics of interest include decision theory, philosophy, self-improvement, cognitive science, psychology, artificial intelligence, game theory, metamathematics, logic, evolutionary psychology, economics, and the far future.

However "rationality" can be interpreted broadly enough that rational discussion of anything would count, and my experience reading LW is compatible with this interpretation being applied by posters. Indeed my experience seems to suggest that practically everything is on topic; political discussion of certain sorts is frowned upon, but not due to being off topic. People often post about things far removed from the topics of interest. And some of these topics are very broad: it seems that a lot of material about self-improvement is acceptable, for instance.

Some recent evidence against the Big Bang

7 JStewart 07 January 2015 05:06AM

I am submitting this on behalf of MazeHatter, who originally posted it here in the most recent open thread. Go there to upvote if you like this submission.

Begin MazeHatter:

I grew up thinking that the Big Bang was the beginning of it all. In 2013 and 2014, a good number of observations threw some of our basic assumptions about the theory into question. There were anomalies observed in the CMB, previously ignored, now confirmed by Planck:

Another is an asymmetry in the average temperatures on opposite hemispheres of the sky. This runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look.

Furthermore, a cold spot extends over a patch of sky that is much larger than expected.

The asymmetry and the cold spot had already been hinted at with Planck’s predecessor, NASA’s WMAP mission, but were largely ignored because of lingering doubts about their cosmic origin.

“The fact that Planck has made such a significant detection of these anomalies erases any doubts about their reality; it can no longer be said that they are artefacts of the measurements. They are real and we have to look for a credible explanation,” says Paolo Natoli of the University of Ferrara, Italy.

... One way to explain the anomalies is to propose that the Universe is in fact not the same in all directions on a larger scale than we can observe. ...

“Our ultimate goal would be to construct a new model that predicts the anomalies and links them together. But these are early days; so far, we don’t know whether this is possible and what type of new physics might be needed. And that’s exciting,” says Professor Efstathiou.

http://www.esa.int/Our_Activities/Space_Science/Planck/Planck_reveals_an_almost_perfect_Universe

We are also getting a better look at galaxies at greater distances, expecting them all to be young galaxies, and finding they are not:

The finding raises new questions about how these galaxies formed so rapidly and why they stopped forming stars so early. It is an enigma that these galaxies seem to come out of nowhere.

http://carnegiescience.edu/news/some_galaxies_early_universe_grew_quickly

http://mq.edu.au/newsroom/2014/03/11/granny-galaxies-discovered-in-the-early-universe/

The newly classified galaxies are striking in that they look a lot like those in today's universe, with disks, bars and spiral arms. But theorists predict that these should have taken another 2 billion years to begin to form, so things seem to have been settling down a lot earlier than expected.

B. D. Simmons et al. Galaxy Zoo: CANDELS Barred Disks and Bar Fractions. Monthly Notices of the Royal Astronomical Society, 2014 DOI: 10.1093/mnras/stu1817

http://www.sciencedaily.com/releases/2014/10/141030101241.htm

The findings cast doubt on current models of galaxy formation, which struggle to explain how these remote and young galaxies grew so big so fast.

http://www.nasa.gov/jpl/spitzer/splash-project-dives-deep-for-galaxies/#.VBxS4o938jg

It also seems we don't have to look so far away to find evidence that galaxy formation is inconsistent with the Big Bang timeline.

If the modern galaxy formation theory were right, these dwarf galaxies simply wouldn't exist.

Merritt and study lead Marcel Pawlowski consider themselves part of a small-but-growing group of experts questioning the wisdom of current astronomical models.

"When you have a clear contradiction like this, you ought to focus on it," Merritt said. "This is how progress in science is made."

http://www.natureworldnews.com/articles/7528/20140611/galaxy-formation-theories-undermined-dwarf-galaxies.htm

http://arxiv.org/abs/1406.1799

Another observation is that lithium abundances are way too low for the theory in other places, not just here:

A star cluster some 80,000 light-years from Earth looks mysteriously deficient in the element lithium, just like nearby stars, astronomers reported on Wednesday.

That curious deficiency suggests that astrophysicists either don't fully understand the big bang, they suggest, or else don't fully understand the way that stars work.

http://news.nationalgeographic.com/news/2014/09/140910-space-lithium-m54-star-cluster-science/

It also seems that structure is continually being discovered on scales larger than the Big Bang is thought to account for:

"The first odd thing we noticed was that some of the quasars' rotation axes were aligned with each other -- despite the fact that these quasars are separated by billions of light-years," said Hutsemékers. The team then went further and looked to see if the rotation axes were linked, not just to each other, but also to the structure of the Universe on large scales at that time.

"The alignments in the new data, on scales even bigger than current predictions from simulations, may be a hint that there is a missing ingredient in our current models of the cosmos," concludes Dominique Sluse.

http://www.sciencedaily.com/releases/2014/11/141119084506.htm

D. Hutsemékers, L. Braibant, V. Pelgrims, D. Sluse. Alignment of quasar polarizations with large-scale structures. Astronomy & Astrophysics, 2014

Dr Clowes said: "While it is difficult to fathom the scale of this LQG, we can say quite definitely it is the largest structure ever seen in the entire universe. This is hugely exciting -- not least because it runs counter to our current understanding of the scale of the universe.

http://www.sciencedaily.com/releases/2013/01/130111092539.htm

These observations have been made just recently. It seems that in the 1980s, when I was first introduced to the Big Bang as a child, the experts in the field already knew there were problems with it, and devised inflation as a solution. And today, the validity of that solution is being called into question by those same experts:

In light of these arguments, the oft-cited claim that cosmological data have verified the central predictions of inflationary theory is misleading, at best. What one can say is that data have confirmed predictions of the naive inflationary theory as we understood it before 1983, but this theory is not inflationary cosmology as understood today. The naive theory supposes that inflation leads to a predictable outcome governed by the laws of classical physics. The truth is that quantum physics rules inflation, and anything that can happen will happen. And if inflationary theory makes no firm predictions, what is its point?

http://www.physics.princeton.edu/~steinh/0411036.pdf

What are the odds that 2015 will be more like 2014, when we (again) found larger and older galaxies at greater distances, rather than like 1983?

Why do you really believe what you believe regarding controversial subjects?

7 iarwain1 04 January 2015 02:32PM

For every controversial subject I've heard of, there are always numerous very smart experts on either side. So I'm curious how it is that rational non-experts come to believe one side or the other.

So, what are your meta-arguments for going with one side or the other for any given controversial subject on which you have an opinion?

  • Have you researched both sides so thoroughly that you consider yourself equal to or better than the opposing experts? If so, to what do you attribute the mistakes of your counterparts? Have you carefully considered the possibility that you are the one who's mistaken?
  • Do you think that one side is more biased than the other? Why?
  • Do you think that one side is more expert than the other? Why?
  • Do you rely on the majority of experts? (I haven't worked out for myself if going with a majority makes sense, so if you have arguments for / against this meta-argument then please elaborate.)
  • Do you think that there are powerful arguments that simply haven't been addressed by the other side? To what do you attribute the fact that these arguments haven't been addressed?
  • Do you have other heuristics or meta-arguments for going with one side or the other?
  • Do you just remain more or less an agnostic on every controversial subject?
  • Or do you perhaps admit that ultimately your beliefs are at least partially founded on non-rational reasons?
  • Do you think that this whole discussion is misguided? If so, why?
I know I don't have to list controversial subjects, but here are some to perhaps stimulate some thinking: Politics, religion (theism OR atheism), dangers from AI / x-risks, Bayesianism vs. alternatives, ethics & metaethics, pretty much everything in philosophy (at least that's what it often seems like!), social justice issues, policy proposals of all types. (If you have a particular controversy for which you'd like people to list their meta-arguments, just mention it in the comments.)



ETA: For myself, I generally try not to have an opinion on almost any controversial issue. So for example, on the recent LW survey I deliberately left most of the controversial issue questions blank. On the few issues that I do have some opinion, it's generally because I attribute a higher likelihood of bias to one side or the other, and/or I judge one side or the other to be greater experts, and/or there's a very large majority on one side, and/or there are powerful arguments on one side plus I have a good explanation for why the other side hasn't addressed those arguments. And even after all that I usually don't assign a high confidence to my judgement.

There's an interesting question that might follow from this approach: Other than curiosity, what is the use of researching a given subject if I'll never really be an expert and ultimately I'm going to need to fall back on the above types of meta-arguments? However, I've found that actually researching the subject is useful for a number of reasons:
  • Often after research it turns out that there are a surprising number of important points on which both sides actually agree.
  • It often turns out that one or both sides are not as confident about their positions as it might initially seem.
  • Often there are a number of sub-issues for which some of the above meta-arguments apply even if they might not apply to the broader issues. For example, perhaps there is a vast majority of experts who agree on a certain sub-issue even while debating the broader subject.
  • Occasionally the arguments ultimately boil down to things that fall outside the domain of rational debate.
  • Sometimes on the surface it may seem that someone is an expert, but on further research it turns out that they are relying on arguments outside their field of expertise. For example, many studies are faulty due to subtle statistics issues. The authors may be expert scientists / researchers, but subtle statistics falls outside their domain of expertise.
  • Occasionally I've come up with an argument (usually a domain-specific meta-argument of some sort) that I'm pretty sure even the experts on the other side would agree with, and for which I can give a good argument why they haven't addressed this particular argument before. Of course, I need to take my own arguments with a large grain of "I'm not really an expert on this" salt. But I've also in the past simply contacted one of the experts and asked him what he thought of my argument - and he agreed. In that particular instance the expert didn't change his mind, but the reason he gave for not changing his mind made me strongly suspect him of bias.
  • For a few issues, especially some of the really small sub-issues, it's actually not all that hard to become an expert. You take out a few books from your local university library, read the latest half dozen articles published on the topic, and that's about it. Of course, even after you're an expert you should still probably take the outside view and ask why you think your expert opinion is better than the other guy's. But it's still something, and perhaps you'll even be able to contribute to the field in a meaningful way and change some others' opinions. At the very least you'll likely be in a better position to judge other experts' biases and levels of expertise.
Relevant: posts listed at the bottom of the LW wiki page on disagreement

Problems and Solutions in Infinite Ethics

7 Xodarap 04 January 2015 02:06PM

(Crossposted from the EA forum.)

Summary: The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists; for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness. This would imply that all forms of altruism are equally ineffective.

Like everything in life, the canonical reference in philosophy about this problem was written by Nick Bostrom. However, I found that an area of economics known as "sustainable development" has actually made much further progress on this subject than the philosophy world. In this post I go over some of what I consider to be the most interesting results.

NB: This assumes a lot of mathematical literacy and familiarity with the subject matter, and hence isn't targeted to a general audience. Most people will probably prefer to read my other posts:


1. Summary of the most interesting results

  1. There’s no ethical system which incorporates all the things we might want.
  2. Even if we have pretty minimal requirements, satisfactory ethical systems might exist, but we can’t prove their existence, much less actually construct them.
  3. Discounted utilitarianism, whereby we value people less just because they are further away in time, is actually a pretty reasonable thing despite philosophers considering it ridiculous.
    1. (I consider this to be the first reasonable argument for locavorism I've ever heard)

2. Definitions

In general, we consider a population to consist of an infinite utility vector (u_0, u_1, …) where u_i is the aggregate utility of the generation alive at time i. Utility is a bounded real number (the fact that economists assume utility to be bounded confused me for a long time!). Our goal is to find a preference ordering over the set of all utility vectors which is in some sense “reasonable”. While philosophers have understood for a long time that finding such an ordering is difficult, I will present several theorems which show that it is in fact impossible.

Due to a lack of latex support I’m going to give English-language definitions and results instead of math-ey ones; interested people should look at the papers themselves anyway.

3. Impossibility Results

3.1 Definitions

  • Strong Pareto: if you can make a generation better off, and none worse off, you should.
  • Weak Pareto: if you can make every generation better off, you should.
  • Intergenerational equity: utility vectors are unchanged in value by any permutation of their components.
    • There is an important distinction here between allowing a finite number of elements to be permuted and an infinite number; I will refer to the former as “finite intergenerational equity” and the latter as just “intergenerational equity”
  • Ethical relation: one which obeys both weak Pareto and intergenerational equity
  • Social welfare function: an order-preserving function from the set of populations (utility vectors) to the real numbers

3.2 Diamond-Basu-Mitra Impossibility Result [1]

  1. There is no social welfare function which obeys Strong Pareto and finite intergenerational equity. This means that any sort of utilitarianism won’t work, unless we look outside the real numbers.

3.3 Zame's impossibility result [2]

  1. If an ordering obeys intergenerational equity over [0,1]^N, then almost always we can’t tell which of two populations is better 
    1. (i.e. the set of populations {X,Y: neither X<Y nor X>Y} has outer measure one)
  2. The existence of an ethical preference relation on [0,1]^N is independent of ZF plus the axiom of choice

4. Possibility Results

We’ve just shown that it’s impossible to construct or even prove the existence of any useful ethical system. But not all hope is lost!

The important idea here is that of a “subrelation”: < is a subrelation of <’ if x<y implies x<’y.

Our arguments will work like this:

Suppose we could extend utilitarianism to the infinite case. (We don't, of course, know that we can extend utilitarianism to the infinite case. But suppose we could.) Then A, B and C must follow.

Technically: suppose utilitarianism is a subrelation of <. Then < must have properties A, B and C.

Everything in this section comes from (3), which is a great review of the literature.

4.1 Definition

  • Utilitarianism: we extend the standard total utilitarianism ordering to infinite populations in the following way: suppose there is some time T after which every generation in X is at least as well off as the corresponding generation in Y, and that the total utility in X before T is at least as great as the total utility in Y before T. Then X is at least as good as Y. (A small code sketch of this partial ordering follows this list.)
    • Note that this is not a complete ordering! In fact, as per Zame’s result above, the set of populations it can meaningfully speak about has measure zero.
  • Partial translation scale invariance: suppose after some time T, X and Y become the same. Then we can add any arbitrary utility vector A to both X and Y without changing the ordering. (I.e. X > Y ⇔ X+A > Y+A)
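Here is the promised sketch of the partial ordering above, as I read it (my own formalization; a program can only check the "every generation after T" clause up to a finite horizon, so this is an approximation, not the real quantifier over all generations).

```python
# Sketch of the extended utilitarian partial order. Streams are functions
# from generation index to bounded utility. Checked only up to `horizon`.
def utilitarian_geq(x, y, horizon=10_000):
    """Try to certify x >= y: find a T with sum of x before T >= sum of y
    before T, and x(i) >= y(i) for every i >= T (up to the horizon)."""
    head_x = head_y = 0.0
    for T in range(horizon):
        if head_x >= head_y and all(x(i) >= y(i) for i in range(T, horizon)):
            return True
        head_x += x(T)
        head_y += y(T)
    return None  # the criterion is silent: the ordering is incomplete

x = lambda i: 1.0                       # everyone at utility 1
y = lambda i: 0.5 if i < 100 else 1.0   # first 100 generations worse off
print(utilitarian_geq(x, y))            # True: x never falls behind y
```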

4.2 Theorem

  1. Utilitarianism is a subrelation of > if and only if > satisfies strong Pareto, finite intergenerational equity and partial translation scale invariance.
    1. This means that if we want to extend utilitarianism to the infinite case, we can’t use a social welfare function, as per the above Basu-Mitra result

4.3 Definition

  • Overtaking utilitarianism: suppose there is some point T after which the total utility of the first N generations in X is always greater than the total utility of the first N generations in Y (given N > T). Then X is better than Y. (A companion sketch follows this list.)
    • Note that utilitarianism is a subrelation of overtaking utilitarianism
  • Weak limiting preference: suppose that for any time T, X truncated at time T is better than Y truncated at time T. Then X is better than Y.
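A companion sketch for overtaking utilitarianism, under the same caveat that a program can only check finitely many partial sums (my own formalization, illustrative only):

```python
# Sketch of overtaking utilitarianism: x beats y if, from some point T on,
# every partial sum of x exceeds the corresponding partial sum of y.
def overtaking_gt(x, y, horizon=10_000):
    partial_x = partial_y = 0.0
    lead_since = None              # earliest N after which x's sums always lead
    for N in range(1, horizon + 1):
        partial_x += x(N - 1)
        partial_y += y(N - 1)
        if partial_x > partial_y:
            lead_since = N if lead_since is None else lead_since
        else:
            lead_since = None      # the lead was lost; look for a later T
    return lead_since is not None  # True if x led from some point onward

x = lambda i: 1.0
y = lambda i: 2.0 if i < 10 else 0.5   # strong start, weak tail
print(overtaking_gt(x, y))             # True: x's partial sums overtake y's
```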

4.4 Theorem

  1. Overtaking utilitarianism is a subrelation of < if and only if < satisfies strong Pareto, finite intergenerational equity, partial translation scale invariance, and weak limiting preference.

4.5 Definition

  • Discounted utilitarianism: the utility of a population is the sum of its components, discounted by how far away in time they are
  • Separability:
    • Separable present: if you can improve the first T generations without affecting the rest, you should
    • Separable future: if you can improve everything after the first T generations without affecting the rest, you should
  • Stationarity: preferences are time invariant
  • Weak sensitivity: for any utility vector, we can modify its first generation somehow to make it better

4.6 Theorem

  1. The only continuous, monotonic relation which obeys weak sensitivity, stationarity, and separability is discounted utilitarianism

4.7 Definition

  • Dictatorship of the present: there’s some time T after which changing the utility of generations doesn’t matter

4.8 Theorem

  1. Discounted utilitarianism results in a dictatorship of the present. (Remember that each generation’s utility is assumed to be bounded!)
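A quick numeric illustration of why boundedness forces this (my own arithmetic, with made-up numbers): with discount factor beta < 1 and every |u_i| <= M, the generations from T onward contribute at most beta^T * M / (1 - beta) to the discounted sum, which shrinks below any fixed early-generation improvement as T grows.

```python
# Why bounded utilities give a dictatorship of the present (illustrative).
beta, M = 0.99, 1.0            # discount factor, utility bound |u_i| <= M

def tail_bound(T):
    """Largest possible discounted contribution of generations T, T+1, ..."""
    return beta**T * M / (1 - beta)

print(tail_bound(0))           # 100.0: worst case for the whole stream
print(tail_bound(5000))        # ~1.5e-20: past T = 5000, nothing can matter
# So there is a finite T past which no change to future generations can
# outweigh even a tiny improvement to the generations before T.
```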

4.9 Definition

  • Sustainable preference: a continuous ordering which doesn’t have a dictatorship of the present but follows strong Pareto and separability.

4.10 Theorem

  1. The only ordering which is sustainable is to take discounted utilitarianism and add an “asymptotic” part which ensures that infinitely long changes in utility matter. (Of course, finite changes in utility still won't matter.)

5. Conclusion

I hope I've convinced you that there's a "there" there: infinite ethics is something that people can make progress on, and it seems that most of the progress is being made in the field of sustainable development.

Fun fact: the author of the last theorem (the one which defined "sustainable") was one of the lead economists on the Kyoto protocol. Who says infinite ethics is impractical?

6. References

  1. Basu, Kaushik, and Tapan Mitra. "Aggregating infinite utility streams with intergenerational equity: the impossibility of being Paretian." Econometrica 71.5 (2003): 1557-1563. http://folk.uio.no/gasheim/zB%26M2003.pdf
  2. Zame, William R. "Can intergenerational equity be operationalized?." (2007).  https://tspace.library.utoronto.ca/bitstream/1807/9745/1/1204.pdf
  3. Asheim, Geir B. "Intergenerational equity." Annu. Rev. Econ. 2.1 (2010): 197-222. http://folk.uio.no/gasheim/A-ARE10.pdf
