Viliam Bur made the announcement in Main, but not everyone checks Main, so I'm repeating it here.
During the following months my time and attention will be heavily occupied by some personal stuff, so I will be unable to function as a LW moderator. The new LW moderator is... NancyLebovitz!
From today, please direct all your complaints and investigation requests to Nancy. Please, not everyone during the first week; that can be a bit frightening for a new moderator.
There are a few old requests I haven't completed yet. I will try to close everything during the following days, but anything still unfinished by the end of January will be forwarded to Nancy, too.
Long live the new moderator!
Apptimize is a 2-year-old startup closely connected with the rationalist community, one of the first founded by CFAR alumni. We make “lean” development possible for mobile apps -- our software lets mobile developers update or A/B test their apps in minutes, without submitting to the App Store. Our customers include big companies such as Nook and eBay, as well as Top 10 apps such as Flipagram. When companies have evaluated our product against competitors’, they’ve chosen us every time.
We work incredibly hard, and we’re striving to build the strongest engineering team in the Bay Area. If you’re a good developer, we have a lot to offer.
Our team of 14 includes 7 MIT alumni, 3 ex-Googlers, 1 Wharton MBA, 1 CMU CS alum, 1 Stanford alum, 2 MIT master’s graduates, 1 MIT Ph.D. candidate, and 1 “20 Under 20” Thiel Fellow. Our CEO was also just named to the Forbes “30 Under 30” list.
- David Salamon, Anna Salamon’s brother, built much of our early product.
- HP:MoR is required reading for the entire company.
- We evaluate candidates on curiosity even before evaluating them technically.

Seriously, our team is badass. Just look!
- You will have huge autonomy and ownership over your part of the product. You can set up new infrastructure and tools, expense business products and services, and even subcontract some of your tasks if you think it's a good idea.
- You will learn to be a more goal-driven agent, and understand the impact of everything you do on the rest of the business.
- Access to our library of over 50 books and audiobooks, and the freedom to purchase more.
- Everyone shares the insights they’ve had every week.
- Self-improvement is so important to us that we only hire people committed to it. When we say that it’s a company value, we mean it.
- Our mobile engineers dive into the dark, undocumented corners of iOS and Android, while our backend crunches data from billions of requests per day.
- Engineers get giant monitors, a top-of-the-line MacBook Pro, and we’ll pay for whatever else is needed to get the job done.
- We don’t demand prior experience, but we do demand the fearlessness to jump outside your comfort zone and job description. That said, our website uses AngularJS, jQuery, and nginx, while our backend uses AWS, Java (the good parts), and PostgreSQL.
- We don’t have gratuitous perks, but we have what counts: free snacks and catered meals, an excellent health and dental plan, and free membership to a gym across the street.

Seriously, working here is awesome. As one engineer puts it, “we’re like a family bent on taking over the world.”
If you’re interested, send some Bayesian evidence that you’re a good match to email@example.com
We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.
There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI-researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk's donation aims to support precisely this type of research: "Here are all these leading AI researchers saying that AI safety is important", says Elon Musk. "I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity."
[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI-researchers Stuart Russell and Francesca Rossi. [...]
The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here [PDF]). "Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere", says FLI co-founder Viktoriya Krakovna.
[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories.
Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.
"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21
The Sequences are being released as an eBook, titled Rationality: From AI to Zombies, on March 12.
We went with the name "Rationality: From AI to Zombies" (based on shminux's suggestion) to make it clearer to people — who might otherwise be expecting a self-help book, or an academic text — that the style and contents of the Sequences are rather unusual. We want to filter for readers who have a wide-ranging interest in (/ tolerance for) weird intellectual topics. Alternative options tended to obscure what the book is about, or obscure its breadth / eclecticism.
The book's contents
Around 340 of Eliezer's essays from 2009 and earlier will be included, collected into twenty-six sections ("sequences"), compiled into six books:
- Map and Territory: sequences on the Bayesian conceptions of rationality, belief, evidence, and explanation.
- How to Actually Change Your Mind: sequences on confirmation bias and motivated reasoning.
- The Machine in the Ghost: sequences on optimization processes, cognition, and concepts.
- Mere Reality: sequences on science and the physical world.
- Mere Goodness: sequences on human values.
- Becoming Stronger: sequences on self-improvement and group rationality.
The six books will be released as a single sprawling eBook, making it easy to hop back and forth between different parts of the book. The whole book will be about 1,800 pages long. However, we'll also be releasing the same content as a series of six print books (and as six audio books) at a future date.
The Sequences have been tidied up in a number of small ways, but the content is mostly unchanged. The largest change is to how the content is organized. Some important Overcoming Bias and Less Wrong posts that were never officially sorted into sequences have now been added — 58 additions in all, forming four entirely new sequences (and also supplementing some existing sequences). Other posts have been removed — 105 in total. The following old sequences will be the most heavily affected:
- Map and Territory and Mysterious Answers to Mysterious Questions are being merged, expanded, and reassembled into a new set of introductory sequences, with more focus placed on cognitive biases. The name 'Map and Territory' will be re-applied to this entire collection of sequences, constituting the first book.
- Quantum Physics and Metaethics are being heavily reordered and heavily shortened.
- Most of Fun Theory and Ethical Injunctions is being left out. Taking their place will be two new sequences on ethics, plus the modified version of Metaethics.
I'll provide more details on these changes when the eBook is out.
Unlike the print and audiobook versions, the eBook version of Rationality: From AI to Zombies will be entirely free. If you'd prefer to purchase it through the Kindle Store and download it directly to your Kindle, it will also be available on Amazon for $4.99.
To make the content more accessible, the eBook will include introductions I've written for this purpose. It will also include a link to a glossary on the LessWrongWiki, which I'll be recruiting LessWrongers to help populate with explanations of references and jargon from the Sequences.
I'll post an announcement to Main as soon as the eBook is available. See you then!
First post here, and I'm disagreeing with something in the main sequences. Hubris acknowledged, here's what I've been thinking about. It comes from the post "Are your enemies innately evil?":
On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?
Realistically, most people don't construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy's story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you'll end up flat wrong about what actually goes on in the Enemy's mind.
If I'm misreading this, please correct me, but the way I am reading this is:
1) People do not construct their stories so that they are the villains,
2) the idea that Al Qaeda is motivated by a hatred of American freedom is false.
Reading the Al Qaeda document released after the attacks, titled Why We Are Fighting You, you find the following:
What are we calling you to, and what do we want from you?
1. The first thing that we are calling you to is Islam.
A. The religion of tawhid; of freedom from associating partners with Allah Most High, and rejection of such blasphemy; of complete love for Him, the Exalted; of complete submission to His sharia; and of the discarding of all the opinions, orders, theories, and religions that contradict the religion He sent down to His Prophet Muhammad. Islam is the religion of all the prophets and makes no distinction between them.
It is to this religion that we call you …
2. The second thing we call you to is to stop your oppression, lies, immorality and debauchery that has spread among you.
A. We call you to be a people of manners, principles, honor and purity; to reject the immoral acts of fornication, homosexuality, intoxicants, gambling and usury.
We call you to all of this that you may be freed from the deceptive lies that you are a great nation, which your leaders spread among you in order to conceal from you the despicable state that you have obtained.
B. It is saddening to tell you that you are the worst civilization witnessed in the history of mankind:
i. You are the nation who, rather than ruling through the sharia of Allah, chooses to invent your own laws as you will and desire. You separate religion from your policies, contradicting the pure nature that affirms absolute authority to the Lord your Creator….
ii. You are the nation that permits usury…
iii. You are a nation that permits the production, spread, and use of intoxicants. You also permit drugs, and only forbid the trade of them, even though your nation is the largest consumer of them.
iv. You are a nation that permits acts of immorality, and you consider them to be pillars of personal freedom.
"Freedom" is of course one of those words. It's easy enough to imagine an SS officer saying indignantly: "Of course we are fighting for freedom! For our people to be free of Jewish domination, free from the contamination of lesser races, free from the sham of democracy..."
If we substitute the symbol with the substance though, what we mean by freedom - "people to be left more or less alone, to follow whichever religion they want or none, to speak their minds, to try to shape society's laws so they serve the people" - then Al Qaeda is absolutely inspired by a hatred of freedom. They wouldn't call it "freedom", mind you, they'd call it "decadence" or "blasphemy" or "shirk" - but the substance is what we call "freedom".
Returning to the syllogism at the top, it seems to me that there is an unstated premise. The conclusion "Al Qaeda cannot possibly hate America for its freedom, because everyone sees himself as the hero of his own story" only follows if you assume that what is heroic, what is good, is substantially the same for all humans, for a liberal Westerner and an Islamic fanatic alike.
(For Americans: by "liberal" here I mean the classical sense, which includes just about everyone you are likely to meet, read, or vote for. US conservatives say they are defending the American Revolution, which was broadly in line with liberal principles. Slavery excepted, of course, but since US conservatives don't support slavery, my point stands.)
When you state the premise baldly like that, you can see the problem. There's no contradiction in thinking that Muslim fanatics think of themselves as heroic precisely for being opposed to freedom, because they see their heroism as trying to extend the rule of Allah - Shariah - across the world.
Now to the point - we all know the phrase "thinking outside the box". I submit that if you can recognize the box, you've already opened it. Real bias isn't when you have a point of view you're defending, but when you cannot imagine that another point of view seriously exists.
That phrasing carries a bit of negative baggage, as if this were just a matter of pigheaded closed-mindedness. Try thinking about it another way. Would you say to someone with dyscalculia, "You can't get your head around the basics of calculus? You're just being so closed-minded!"? No, that's obviously nuts. We know that different people's minds work in different ways, and that some people can see things others cannot.
Orwell once wrote about British intellectuals' inability to "get" fascism, in particular in his essay on H.G. Wells. He wrote that the only people who really understood the nature and menace of fascism were either those who had felt the lash on their backs, or those who had a touch of the fascist mindset themselves. I suggest that some people simply cannot imagine, cannot really believe, the enormous power of faith: of the idea of serving and fighting and dying for your god and His prophet. It is a kind of thinking that is simply alien to many.
Perhaps this is resisted because people think that "Being able to think like a fascist makes you a bit of a fascist". That's not really true in any way that matters - Orwell was one of the greatest anti-fascist writers of his time, and fought against it in Spain.
So: if you can see the box you are in, you can open it, and indeed have already half-opened it. And if you are really in the box, you can't see the box. How, then, can you tell whether you are in a box you can't see, as opposed to not being in a box at all?
The best answer I've been able to come up with is not to think of "box or no box" but rather "open or closed box". We all work from a worldview, simply because we need some knowledge to get further knowledge. If you know you come at an issue from a certain angle, you can always check yourself. You're in a box, but boxes can be useful, and you have the option to go get some stuff from outside the box.
The second strategy is to read people in other boxes. I like steelmanning; it's an important intellectual exercise. But it shouldn't preclude finding actual Men of Steel: people passionately committed to another point of view, another box, and taking a look at what they have to say.
Now you might say: "But that's steelmanning!" Not quite. Steelmanning is "the art of addressing the best form of the other person’s argument, even if it’s not the one they presented." That may, in some circumstances, lead you to make the mistake of assuming that what you think is the best argument for a position is the same as what the other guy thinks is the best argument for his position. That's especially important if you are addressing a belief held by a large group of people.
Again, this isn't to run down steelmanning - the practice is sadly limited, and anyone who attempts it has gained a big advantage in figuring out how the world is. It's just a reminder that the steelman you make may not be quite as strong as the steelman that is out to get you.
[EDIT: Link included to the document that I did not know was available online before now]
If you'd like to support the growth of rationality in the world, do please consider donating, or asking me about any questions/etc. you may have. I'd love to talk. I suspect funds donated to CFAR between now and Jan 31 are quite high-impact.
As a random bonus, I promise that if we meet the $120k matching challenge, I'll post at least two posts with some never-before-shared (on here) rationality techniques that we've been playing with around CFAR.
For a site extremely focused on fixing bad thinking patterns, I've noticed a bizarre lack of discussion of mental illness here. Considering the high correlation between intelligence and mental illness, you'd think it would be a bigger topic.
I personally suffer from Generalized Anxiety Disorder and a very tame panic disorder. Most of this is focused on financial and academic things, but I will also get panicky about social interaction, responsibilities, and things that happened in the past that seriously shouldn't bother me. I have an almost amusing response to anxiety that is basically my brain panicking and telling me to go hide under my desk.
I know lukeprog and Alicorn managed to fight off a good deal of their issues in this area and wrote up how, but I don't think enough has been done. They mostly dealt with depression. What about rational schizophrenics and phobics and bipolar people? It's difficult to find anxiety advice that goes beyond "do yoga while watching the sunrise!" Pop psych isn't very helpful. I think LessWrong could be. What's mental illness but a wrongness in the head?
Mental illness seems to hurt intelligent people more than your typical biases do, honestly. Hiding under my desk is even less useful than, say, appealing to authority during an argument. At least the latter has the potential to be useful. I know my anxiety is limiting me, starting cycles of avoidance, and so much more. And my mental illness isn't even that bad! Trying to be rational and successful while schizophrenic sounds like a Sisyphean nightmare.
I'm not fighting my difficulties nearly well enough to feel qualified to author my own posts. Hearing from people who are managing is more likely to help. If nothing else, maybe a Rational Support Group would be a lot of fun.
When I criticize, I'm a genius. I can go through a book of highly-referenced scientific articles and find errors in each of them. Boy, I feel smart. How are these famous people so dumb?
But when I write, I suddenly become stupid. I sometimes spend half a day writing something and then realize at the end, or worse, after posting, that what it says simplifies to something trivial, or that I've made several unsupported assumptions, or claimed things I didn't really know were true. Or I post something, then have to go back every ten minutes to fix some point that I realize is not quite right, sometimes to the point where the whole thing falls apart.
If someone writes an article or expresses an idea that you find mistakes in, that doesn't make you smarter than that person. If you create an equally-ambitious article or idea that no one else finds mistakes in, then you can start congratulating yourself.
Recently I talked with a guy from Grant Street Group. They make, among other things, software with which local governments can auction their bonds on the Internet.
By making the auction process more transparent and easier to participate in, they enable local governments which need to sell bonds (to build a high school, for instance) to sell those bonds at, say, 7% interest instead of 8%. (At least, that's what he said.)
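To make the stakes concrete, here's a quick back-of-the-envelope calculation. The bond size and term below are hypothetical numbers I've chosen for illustration; the source only gives the 8%-to-7% rate difference:

```python
# Back-of-the-envelope: what a 1-point rate drop (8% -> 7%) from a more
# competitive bond auction saves a local government. Principal and term
# are assumed for illustration, not taken from the source.

def annual_interest(principal, rate_pct):
    """Simple annual interest cost in dollars (no compounding)."""
    return principal * rate_pct / 100

principal = 10_000_000   # hypothetical: cost of that new high school
term_years = 20          # hypothetical bond term

savings_per_year = annual_interest(principal, 8) - annual_interest(principal, 7)
total_savings = savings_per_year * term_years

print(f"Savings per year: ${savings_per_year:,.0f}")
print(f"Savings over the term: ${total_savings:,.0f}")
```

On these assumed numbers, better auction mechanics keep $100,000 a year in the local budget, $2,000,000 over the life of the bond, without cutting a single program.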
They have similar software for auctioning liens on property taxes, which also helps local governments raise more money by bringing more buyers to each auction, and probably helps the buyers reduce their risks by giving them more information.
This is a big deal. I think it's potentially more important than any budget argument that's been on the front pages since the 1960s. Yet I only heard of it by chance.
People would rather argue about reducing the budget by eliminating waste, or cutting subsidies to people who don't deserve it, or changing our ideological priorities. Nobody wants to talk about auction mechanics. But fixing the auction mechanics is the easy win. It's so easy that nobody's interested in it. It doesn't buy us fuzzies or let us signal our affiliations. To an individual activist, it's hardly worth doing.
Transcribed from maxikov's posted videos.
Verbal filler removed for clarity.
Audience Laughter denoted with [L], Applause with [A]
Eliezer: So, any questions? Do we have a microphone for the audience?
Guy Offscreen: We don't have a microphone for the audience, do we?
Some Other Guy: We have this furry thing, wait, no that's not hooked up. Never mind.
Eliezer: Alright, come on over to the microphone.
Guy with 'Berkeley Lab' shirt: So, this question is sort of on behalf of the HPMOR subreddit. You say you don't give red herrings, but like... He's making faces at me like... [L] You say you don't give red herrings, but while he's sitting in the stands during the Quidditch game, thinking of who he can bring along, he stares at Cedric Diggory, and he's like, "He would be useful to have at my side!", and then he never shows up. Why was there not a Cedric Diggory?
Eliezer: The true Cedrics Diggory are inside all of our hearts. [L] And in the mirror. [L] And in Harry's glasses. [L] And, well, I mean the notion is, you're going to look at that and think, "Hey, he's going to bring along Cedric Diggory as a spare wand, and he's gonna die! Right?" And then, Lesath Lestrange shows up and it's supposed to be humorous, or something. I guess I can't do humor. [L]
Guy Dressed as a Witch: Does Quirrell's attitude towards reckless muggle scientists have anything to do with your attitude towards AI researchers that aren't you? [L]
Eliezer: That is unfair. There are at least a dozen safety conscious AI researchers on the face of the earth. [L] At least one of them is respected. [L] With that said, I mean if you have a version of Voldemort who is smart and seems to be going around killing muggleborns, and sort of pretty generally down on muggles... Like, why would anyone go around killing muggleborns? I mean, there's more than one rationalization you could apply to this situation, but the sort of obvious one is that you disapprove of their conduct with nuclear weapons. From Tom Riddle's perspective that is.
I do think I sort of try to never have leakage from that thing I spend all day talking about into a place it really didn't belong, and there's a saying that goes 'A fanatic is someone who cannot change his mind, and will not change the subject.' And I'm like ok, so if I'm not going to change my mind, I'll at least endeavor to be able to change the subject. [L] Like, towards the very end of the story we are getting into the realm where sort of the convergent attitude that any sort of carefully reasoning person will take towards global catastrophic risks, and the realization that you are in fact a complete crap rationalist, and you're going to have to start over and actually try this time. These things are sort of reflective of the story outside the story, but apart from 'there is only one king upon a chessboard', and 'I need to raise the level of my game or fail', and perhaps, one little thing that was said about the mirror of VEC, as some people called it.
Aside from those things, I would say that I was treating it more as convergent evolution rather than any sort of attempted parable, or Professor Quirrell speaking for me. He usually doesn't... [L] I wish more people would realize that... [L] I mean, you know the... How can I put this exactly. There are these people who are sort of to the right side of the political spectrum, and occasionally they tell me that they wish I'd just let Professor Quirrell take over my brain and run my body. They are literally Republicans for You-Know-Who. And there you have it, basically. Next question! ... No more questions, OK. [L] I see that no one has any questions left; oh, there you are.
Fidgety Guy: One of the chapters you posted was the final exam chapter, where you had everybody brainstorm solutions to the predicament Harry was in. Did you have any favorite alternate solution besides the one that made it into the book?
Eliezer: So, not to give away the intended solution for anyone who hasn't reached that chapter yet (though really you're just going to have the living daylights spoiled out of you; there's no way to avoid that). The most brilliant solution I had not thought of at all was for Harry to precommit to transfigure something that would cause a large explosion visible from the Quidditch stands, which had observed no such explosion. Unless help sent via Time-Turner showed up at that point, this would ensure that the simplest timeline was not the one where he never reached the Time-Turner, and that some self-consistent set of events would occur which caused him not to carry through on his precommitment. I suspect that I might have ruled that this wouldn't work, because the Unbreakable Vow would prevent Harry from actually doing it; it might, in effect, count as trying to destroy that timeline, or filter it, and thereby count as trying to destroy the world, or at least risk destroying it. But it was brilliant! [L] I was staring at the computer screen going, "I can't believe how brilliant these people are!" "That's not something I usually hear you say," Brienne said. "I'm not usually watching hundreds of people's collective intelligence coming up with solutions way better than anything I thought of!" I replied.
And the most fun lateral-thinking solution was to pull Quirinus Quirrell's body over using transfigured carbon nanotubes and some padding, call 'Up!', and ride away on his broomstick bones. [L] That is definitely going in 'Omake Files #5: Collective Intelligence'! Next question!
Guy Wearing Black: So in the chapter with the mirror, there was a point at which Dumbledore had said something like, "I am on this side of the mirror and I always have been." That was never explained that I could tell. I'm wondering if you could clarify that.
Eliezer: It is a reference to the fanfic 'Seventh Horcrux', which *totally* ripped off HPMOR despite being written slightly earlier than it... [L] I was slapping my forehead pretty hard when that happened. It contains the line "Perhaps Albus Dumbledore really was inside the mirror all along." Sort of arc words, as it were. I also figured that there was simply some bilocation effect, using one of the advanced settings of the mirror, that Dumbledore was using so that the trap would always be springable, as opposed to him having to know at what time Tom Riddle would appear before the mirror and be trapped. Next!
Black Guy: So, how did Moody and the rest of them retrieve the items Dumbledore threw in the mirror of VEC?
Eliezer: Dumbledore threw them outside the mirror's range, so that they were not sealed in the corresponding real world when the duplicate of Dumbledore inside the mirror was sealed. So wherever Dumbledore was at the time, probably investigating Nicolas Flamel's house, he suddenly popped away, and the Line of Merlin Unbroken and the Elder Wand just fell to the floor from where he had been.
Asian Guy: In 'Something to Protect: Severus Snape', you wrote that he laughed. And I was really curious: what exactly does Severus Snape sound like when he laughs? [L]
Person in Audience: Perform for us!
Eliezer: He He He. [L]
Girl in Audience: Do it again now, everybody together!
Audience: He He He. [L]
Guy in Blue Shirt: So I was curious about the motivation behind making Sirius evil again and having Peter be a good guy again, and their relationship. What was the motivation?
Eliezer: In character or out of character?
Guy in Blue Shirt: Well, yes. [L]
Eliezer: All right, well, in character Peter can be pretty attractive when he wants to be, and Sirius was a teenager. Or, you were asking about the alignment shift part?
Guy in Blue Shirt: Yeah, the alignment and their relationship.
Eliezer: So, in the alignment, I'm just ruling it always was that way. The whole Sirius Black thing is a puzzle, is the way I'm looking at it. And the canon solution to that puzzle is perfectly fine for a children's book, which I say once again requires a higher level of skill than a grown-up book, but just did not make sense in context. So I was just looking at the puzzle and being like, ok, so what can be the actual solution to this puzzle? And also, a further important factor, this had to happen. There's a whole lot of fanfictions out there of Harry Potter. More than half a million, and that was years ago. And 'Methods of Rationality' is fundamentally set in the universe of Harry Potter fanfiction, more than canon. And in many many of these fanfictions someone goes back in time to redo the seven years, and they know that Scabbers is secretly Peter Pettigrew, and there's a scene where they stun Scabbers the rat and take him over to Dumbledore, and Head Auror, and the Minister of Magic and get them to check out this rat over here, and uncover Peter Pettigrew. And in all the times I had read that scene, at least a dozen times literally, it was never once played out the way it would in real life, where that is just a rat, and you're crazy. [L] And that was the sort of basic seed of, "Ok, we're going to play this straight, the sort of loonier conspiracies are false, but there is still a grain of conspiracy truth to it." And then I introduced the whole accounting of what happened with Sirius Black in the same chapter where Hermione just happens to mention that there's a Metamorphmagus in Hufflepuff, and exactly one person posted to the reviews in chapter 28, based on the clue that the Metamorphmagus had been mentioned in the same chapter, "Aha! I present you the tale of Peter Pettigrew, the unfortunate Metamorphmagus." [L] See! You could've solved it, you could've solved it, but you didn't! Someone solved it, you did not solve that. Next Question!
Guy in White: First, [pulls out wand] Avada Kedavra. How do you feel about your security? [L] Second, have you considered the next time you need a large group of very smart people to really work on a hard problem, presenting it to them in fiction?
Eliezer: So, of course I always keep my Patronus Charm going inside of me. [Aww/L] And if that fails, I do have my amulet that triggers my emergency kitten shield. [L] And indeed one of the more attractive things I'm considering potentially doing for the next major project is 'Precisely Bound Djinn and their Behavior'. The theme of which is you have these people who can summon djinn, or command the djinn effect, and you can sort of negotiate with them in the language of djinn and they will always interpret your wish in the worst way possible, or you can give them mathematically precise orders, which they can apparently carry out using unlimited computing power, which obviously ends the world in fairly short order, causing our protagonist to be caught in a groundhog day loop as they try over and over again to both maybe arrange for conditions outside to be such that they can get some research done for longer than a few months before the world ends again, and also try to figure out what to tell their djinn. And, you know, I figure that if anyone can give me an unboundedly computable specification of a value aligned advanced agent, the story ends, the characters win, hopefully that person gets a large monetary prize if I can swing it, the world is safer, and I can go on to my next fiction-writing project, which will be the one with the boundedly specified [L] value aligned advanced agents. [A]
Guy with Purple Tie: So, what is the source of magic?
Eliezer: Alright, so, there was a bit of literary miscommunication in HPMOR. I tried as hard as I could to signal that unraveling the true nature of magic and everything that inheres in it is actually this kind of large project that they were not going to complete during Harry's first year of Hogwarts. [L] You know, 35 years, even if someone is helping you, is a reasonable amount of time for a project like that to take. And if it's something really difficult, like AIs, you might need more than two people even. [L] At least if you want the value aligned version. Anyway, where was I?
So I think that fundamentally, the only way to come up with a non-nitwit explanation of magic is to start from the non-nitwit explanation, and then generate the laws of magic, so that when you reveal the answer behind the mystery, everything actually fits with it. You may have noticed this kind of philosophy showing up elsewhere in the literary theory of HPMOR at various points where it turns out that things fit with things you have already seen. But with magic, ultimately the source material was not designed as a hard science fiction story. The magic that we start with as a phenomenon is not designed to be solvable, and what did happen was that the characters thought of experiments, and I in my role of the universe thought of the answer to it, and if they had ever reached the point where there was only one explanation left, then the magic would have had rules, and they would have been arrived at in a fairly organic way that I could have felt good about; not as a sudden, "Aha! I gotcha! I revealed this thing that you had no way of guessing."
Now I could speculate. And I even tried to write a little section where Harry runs into Dumbledore's writings that Dumbledore left behind, where Dumbledore writes some of his own speculation, but there was no good place to put that into the final chapter. But maybe I'll later be able... The final edits were kind of rushed honestly, sleep deprivation, 3am. But maybe in the second edit or something I'll be able to put that paragraph, that set of paragraphs in there. In Dumbledore's office, Dumbledore has speculated. He's mostly just taking the best of some of the other writers that he's read. That, look at the size of the universe, that seems to be mundane. Dumbledore was around during World War 2, he does know that muggles have telescopes. He has talked with muggle scientists a bit and those muggle scientists seem very confident that all the universe they can see looks like it's mundane. And Dumbledore wondered, why is there this sort of small magical section, and this much larger mundane section, or this much larger muggle section? And that seemed to Dumbledore to suggest that as a certain other magical philosopher had written, If you consider the question, what is the underlying nature of reality, is it that it was mundane to begin with, and then magic arises from mundanity, or is the universe magic to begin with, and then mundanity has been imposed above it? Now mundanity by itself will clearly never give rise to magic, yet magic permits mundanity to be imposed, and so, this other magical philosopher wrote, therefore he thinks that the universe is magical to begin with and the mundane sections are imposed above the magic. 
And Dumbledore himself had speculated, having been acquainted with the line of Merlin for much of his life, that just as the Interdict of Merlin was imposed to restrict the spread and the number of people who had sufficiently powerful magic, perhaps the mundane world itself is an attempt to bring order to something that was on the verge of falling apart in Atlantis, or in whatever came before Atlantis. Perhaps the thing that happened with the Interdict of Merlin has happened over and over again. People trying to impose law upon reality, and that law having flaws, and the flaws being more and more exploited until they reach a point of power that threatens to destroy the world, and the most adept wielders of that power try to once again impose mundanity.
And I will also observe, although Dumbledore had no way of figuring this out, and I think Harry might not have figured it out yet because he doesn't yet know about chromosomal crossover, that if there is no wizard gene, but rather a muggle gene, and the muggle gene sometimes gets hit by cosmic rays and ceases to function, thereby producing a non-muggle allele, then some of the muggle vs. wizard alleles in the wizard population that got there from muggleborns will be repairable via chromosomal crossover, thus sometimes causing two wizards to give birth to a squib. Furthermore, this will happen more frequently in wizards who have recent muggleborn ancestry. I wonder if Lucius told Draco that when Draco told him about Harry's theory of genetics. Anyway, this concludes my strictly personal speculations. It's not in the text, so it's not real unless it's in the text somewhere. 'Opinion of God', not 'Word of God'. But this concludes my personal speculations on the origin of magic, and the nature of the "wizard gene". [A]
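The crossover argument above can be illustrated with a toy model. Everything here is an illustrative assumption, not anything from the text: the gene length, the mutation sites, and the reading that a heterozygous child is a squib.

```python
import random

random.seed(1)

# Toy model: the "muggle gene" is a run of sites that must all be
# intact for the gene to function. A wizard allele is a broken copy.
GENE_LEN = 10

def broken_allele(pos):
    # A gene copy with a loss-of-function mutation at one site.
    allele = [1] * GENE_LEN
    allele[pos] = 0
    return allele

def functional(allele):
    # The gene works only if every site is intact.
    return all(allele)

def gamete(chrom_a, chrom_b):
    # Single-point chromosomal crossover between the two homologous copies.
    p = random.randrange(1, GENE_LEN)
    return random.choice([chrom_a[:p] + chrom_b[p:], chrom_b[:p] + chrom_a[p:]])

# A wizard carrying two copies broken at *different* sites (one of them
# inherited via a muggleborn line). Both copies broken -> still a wizard.
chrom_a = broken_allele(2)
chrom_b = broken_allele(7)

# Crossover between the two mutation sites can reassemble a working
# muggle allele; paired with a broken allele from the other wizard
# parent, that yields a heterozygous child - a squib.
trials = 10000
repaired = sum(functional(gamete(chrom_a, chrom_b)) for _ in range(trials)) / trials
print(round(repaired, 3))  # fraction of gametes carrying a repaired allele
```

Note that repair is only possible because the two copies are broken at different sites, which is exactly why the effect should be more frequent in wizards with recent muggleborn ancestry.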
Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion
(I hope that is the least click-baity title ever.)
Political topics elicit lower quality participation, holding the set of participants fixed. This is the thesis of "politics is the mind-killer".
Here's a separate effect: Political topics attract mind-killed participants. This can happen even when the initial participants are not mind-killed by the topic.
Since outreach is important, this could be a good thing. Raise the sanity waterline! But the sea of people eager to enter political discussions is vast, and the epistemic problems can run deep. Of course not everyone needs to come perfectly pre-aligned with community norms, but any community will be limited in how robustly it can handle an influx of participants expecting a different set of norms. If you look at other forums, it seems to take very little overt contemporary political discussion before the whole place is swamped, and politics becomes endemic. As appealing as "LW, but with slightly more contemporary politics" sounds, it's probably not even an option. You have "LW, with politics in every thread", and "LW, with as little politics as we can manage".
That said, most of the problems are avoided by just not saying anything that pattern-matches too easily to current political issues. From what I can tell, LW has always had tons of meta-political content, which doesn't seem to cause problems, as well as standard political points presented in unusual ways, and contrarian political opinions that are too marginal to raise concern. Frankly, if you have a "no politics" norm, people will still talk about politics, but to a limited degree. But if you don't even half-heartedly (or even hypocritically) discourage politics, then an open-entry site that accepts general topics will risk spiraling too far in a political direction.
As an aside, I'm not apolitical. Although some people advance a more sweeping dismissal of the importance or utility of political debate, this isn't required to justify restricting politics in certain contexts. The sort of argument I've sketched (I don't want LW to be swamped by the worse sorts of people who can be attracted to political debate) is enough. There's no hypocrisy in not wanting politics on LW, but accepting political talk (and the warts it entails) elsewhere. Off the top of my head, Yvain is one LW affiliate who now largely writes about more politically charged topics on their own blog (SlateStarCodex), and there are some other progressive blogs in that direction. There are libertarians and right-leaning (reactionary? NRx-lbgt?) connections. I would love a grand unification as much as anyone (of course, provided we all realize that I've been right all along), but please let's not tell the generals to bring their armies here for the negotiations.
A recent survey showed that the LessWrong discussion forums mostly attract readers who are predominantly either atheists or agnostics, and who lean towards the left or far left in politics. As one of the main goals of LessWrong is overcoming bias, I would like to bring up a topic which I think has a high probability of challenging some biases held by at least some members of the community. It's easy to fight against biases when the biases belong to your opponents, but much harder when you yourself might be the one with biases. It's also easy to cherry-pick arguments which prove your beliefs and ignore those which would disprove them. It's also common in such discussions that the side calling itself rationalist makes exactly the same mistakes it accuses its opponents of making. Far too often have I seen people (sometimes even Yudkowsky himself) who are very good rationalists but can quickly become irrational and commit several fallacies when arguing about history or religion. This most commonly manifests when we take the dumbest and most fundamentalist young Earth creationists as an example, win easily against them, then claim that we have disproved all arguments ever made by any theist. No, this article will not be about whether God exists or not, or whether any real-world religion is fundamentally right or wrong. I strongly discourage any discussion of these two topics.
This article has two main purposes:
1. To show an interesting example where the scientific method can lead to wrong conclusions
2. To overcome a certain specific bias, namely, that the pre-modern Catholic Church was opposed to the concept of the Earth orbiting the Sun with the deliberate purpose of hindering scientific progress and keeping the world in ignorance. I hope this will also prove to be an interesting challenge for your rationality, because it is easy to fight against bias in others, but not so easy to fight against bias in yourselves.
The basis of my claims is that I have read the book written by Galilei himself, and I'm very interested (not as a professional, but well-read) in early modern history, especially that of the 16th-17th centuries.
Geocentrism versus Heliocentrism
I assume every educated person knows the name of Galileo Galilei. I won't waste space on the site and the readers' time presenting a full biography of his life; there are plenty of online resources where you can find more than enough biographic information about him.
What is interesting about him is how many people have severe misconceptions about him. Far too often he is celebrated as the one sane man in an era of ignorance, the sole propagator of science and rationality when the powers of that era suppressed any scientific thought and ridiculed everyone who tried to challenge the accepted theories about the physical world. Some even go as far as claiming that people believed the Earth was flat. Although the flat Earth theory was not actually held by anyone at the time, it's true that the heliocentric view of the Solar System (the Earth revolving around the Sun) was not yet accepted.
However, the claim that the Church was suppressing evidence about heliocentrism "to maintain its power over the ignorant masses" can be disproved easily:
- The common people didn't go to school where they could have learned about it, and those commoners who did go to school just learned to read and write, not much more, so they couldn't have cared less about what orbits what. This differs from 20th-21st century fundamentalists who want to teach young Earth creationism in schools - back in the 17th century, there were no classes where either the geocentric or heliocentric views could have been taught to the masses.
- Heliocentrism was not discovered by Galilei. It was first proposed by Nicolaus Copernicus almost 100 years before Galilei. Copernicus never had any trouble with the Inquisition. His theories didn't gain wide acceptance, but he and his followers weren't persecuted either.
- Galilei was only sentenced to house arrest, and mostly for insulting the pope and doing other unwise things. The political climate in 17th century Italy was quite messy, and Galilei made quite a few unfortunate choices regarding his alliances. Actually, Galilei was the one who brought religion into the debate: his opponents were citing Aristotle, not the Bible, in their arguments. Galilei, however, wanted to reinterpret Scripture based on his (unproven) beliefs, and insisted that he should have the authority to push his own views about how people interpret the Bible. Of course this pissed quite a few people off, and his case was not helped by publicly calling the pope an idiot.
- For a long time Galilei was a good friend of the pope, while holding heliocentric views. So were a couple of other astronomers. The heliocentrism-geocentrism debates were common among astronomers of the day, and were not hindered, but even encouraged by the pope.
- The heliocentrism-geocentrism debate was never an atheism-theism debate. The heliocentrists were committed theists, just like the defenders of geocentrism. The Church didn't suppress science, but actually funded the research of most scientists.
- The defenders of geocentrism didn't use the Bible as a basis for their claims. They used Aristotle and, for the time, good scientific reasoning. The heliocentrists were much more prone to use the "God did it" argument when they couldn't defend the gaps in their proofs.
The birth of heliocentrism
By the 16th century, astronomers had plotted the movements of the most important celestial bodies in the sky. Observing the motion of the Sun, the Moon and the stars, it seemed obvious that the Earth is motionless and everything orbits around it. This model (called geocentrism) had only one minor flaw: the planets would sometimes make a loop in their motion, "moving backwards". Modeling these motions required a lot of very complicated formulas. Thus, by virtue of Occam's razor, a theory was born which could better explain the motion of the planets: what if the Earth and everything else orbited around the Sun? However, this new theory (heliocentrism) had a lot of issues, because while it could explain the looping motion of the planets, there were a lot of things which it either couldn't explain, or which the geocentric model explained much better.
The proofs, advantages and disadvantages
The heliocentric view had only a single advantage over the geocentric one: it could describe the motion of the planets with a much simpler formula.
However, it had a number of severe problems:
- Gravity. Why do the objects have weight, and why are they all pulled towards the center of the Earth? Why don't objects fall off the Earth on the other side of the planet? Remember, Newton wasn't even born yet! The geocentric view had a very simple explanation, dating back to Aristotle: it is the nature of all objects that they strive towards the center of the world, and the center of the spherical Earth is the center of the world. The heliocentric theory couldn't counter this argument.
- Stellar parallax. If the Earth is not stationary, then the relative position of the stars should change as the Earth orbits the Sun. No such change was observable by the instruments of that time. Only in the first half of the 19th century did we succeed in measuring it, and only then was the movement of the Earth around the Sun finally proven.
- Galilei tried to use the tides as a proof. The geocentrists argued that the tides are caused by the Moon, even though they didn't know by what mechanism, but Galilei said that it's just a coincidence, and the tides are not caused by the Moon: just as if we put a barrel of water onto a cart, the water would be still if the cart was stationary and would slosh around if the cart was pulled by a horse, so are the tides caused by the water sloshing around as the Earth moves. If you read Galilei's book, you will discover quite a number of such silly arguments, and you'll see that Galilei was anything but a rationalist. Instead of changing his views in the face of overwhelming proof, he used all possible fallacies to push his view through.
Actually, the most interesting author on this topic was Riccioli. If you study his writings you will get definite proof that the heliocentrism-geocentrism debate was handled with scientific accuracy and rationality, and was not a religious debate at all. He defended geocentrism, and presented 126 arguments on the topic (49 for heliocentrism, 77 against); only two of them (both for heliocentrism) had any religious connotations, and he presented valid responses to both. This means that he, as a rationalist, presented both sides of the debate in a neutral way, and used reasoning instead of appeal to authority or faith in all cases. Actually, this was what the pope expected of Galilei, and such a book was what he commissioned from Galilei. Galilei instead wrote a book where he caricatured the pope as a strawman, and instead of presenting arguments for and against both world-views in a neutral way, he wrote a book which can be called anything but scientific.
By the way, Riccioli was a Catholic priest. And a scientist. And, it seems to me, also a rationalist. Studying the works of people like him, you might want to change your mind if you perceive a conflict between science and religion, which is part of today's public consciousness only because of a small number of very loud religious fundamentalists, helped by some committed atheists trying to suggest that all theists are like them.
Finally, I would like to copy a short summary about this book:
In 1651 the Italian astronomer Giovanni Battista Riccioli published within his Almagestum Novum, a massive 1500 page treatise on astronomy, a discussion of 126 arguments for and against the Copernican hypothesis (49 for, 77 against). A synopsis of each argument is presented here, with discussion and analysis. Seen through Riccioli's 126 arguments, the debate over the Copernican hypothesis appears dynamic and indeed similar to more modern scientific debates. Both sides present good arguments as point and counter-point. Religious arguments play a minor role in the debate; careful, reproducible experiments a major role. To Riccioli, the anti-Copernican arguments carry the greater weight, on the basis of a few key arguments against which the Copernicans have no good response. These include arguments based on telescopic observations of stars, and on the apparent absence of what today would be called "Coriolis Effect" phenomena; both have been overlooked by the historical record (which paints a picture of the 126 arguments that little resembles them). Given the available scientific knowledge in 1651, a geo-heliocentric hypothesis clearly had real strength, but Riccioli presents it as merely the "least absurd" available model - perhaps comparable to the Standard Model in particle physics today - and not as a fully coherent theory. Riccioli's work sheds light on a fascinating piece of the history of astronomy, and highlights the competence of scientists of his time.
The full article can be found at this link. I recommend it to everyone interested in the topic. It shows that the geocentrists of that time had real scientific proofs and real experiments supporting their theories, and for most of them the heliocentrists had no meaningful answers.
- I'm not a Catholic, so I have no reason to defend the historic Catholic church due to "justifying my insecurities" - a very common accusation against someone perceived to be defending theists in a predominantly atheist discussion forum.
- Any discussion about any perceived proofs for or against the existence of God would be off-topic here. I know it's tempting to show off your best proofs against your carefully constructed straw-men yet again, but this is just not the place for it, as it would detract from the main purpose of this article, as summarized in its introduction.
- English is not my native language. Nevertheless, I hope that what I wrote was clear enough to be understandable. If there is any part of my article which you find ambiguous, feel free to ask.
I have great hopes and expectations that the LessWrong community is suitable for discussing such ideas. I have experience presenting these ideas on other, predominantly atheist internet communities, and the reaction was most often outright flaming, a hurricane of unexplained downvotes, and prejudicial ad hominem attacks based on the affiliations they assumed I subscribed to. It is common for people to decide whether they believe a claim or not based solely on whether the claim suits their ideological affiliations. The best quality of rationalists, however, should be the ability to change their views when confronted with overwhelming proof, instead of trying to come up with more and more convoluted explanations. In the time I've spent in the LessWrong community, I have come to respect that the people here can argue in a civil manner, listening to the arguments of others instead of discarding them outright.
There seems to be a lot of enthusiasm around LessWrong meetups, so I thought something like this might be interesting too. There is no need to register - just add your marker and keep an eye out for someone living near you.
Here's the link: https://www.zeemaps.com/map?group=1323143
I posted this on an Open Thread first. Below are some observations based on the previous discussion:
When creating a new marker you will be given a special URL you can use to edit it later. If you lose it, you can create a new one and ask me to delete the old marker. Try not to lose it though.
If someone you tried to contact is unreachable, notify me and I'll delete the marker in order to keep the map tidy. Also, try to keep your own marker updated.
It was suggested that it would be a good idea to circulate the map around survey time. I'll try to remind everyone to update their markers around that time. Any major changes (e.g. changing admin, switching services, remaking the map to eliminate dead markers) will also happen then.
The map data can be exported by anyone, so there's no need to start over if I disappear or whatever.
Edit: Please, you have to make it possible to contact you. If you choose to use a name that doesn't match your LW account, you have to add an email address or equivalent. If you don't do that, it is assumed that the name on the marker is your username here, but if it isn't you are essentially unreachable and will be removed.
The argument that AIs provided with a reward channel will observe their controllers and learn to manipulate them is a valid one. Unfortunately, it's often framed in a way that feels counterintuitive or extreme, especially to AI designers. It typically starts with the standard reinforcement learning scenario, then posits that the AI becomes superintelligent and either manipulates the controller with super-social powers, or breaks out and gains control of its reward channel, killing or threatening its controllers.
And that is a fair argument. But conceptually, it leaps from a standard reinforcement learning scenario, to a science-fiction-sounding scenario. It might help to have intermediate scenarios: to show that even lower intelligence AIs might start exhibiting the same sort of behaviour, long before it gets to superintelligence.
So consider the following scenario. Some complex, trainable AI is tasked with writing automated news stories for a student newspaper. It trawls the web and composes its stories, then gets reward and feedback from the editors. Assume there are two editors for this newspaper, and they work on alternate days. The two editors have somewhat different ideas as to what constitutes a good story, so their feedback differs. After a while, the AI finds that it gets higher reward by using a certain style on Mondays, Wednesdays and Fridays, and another style on Tuesdays and Thursdays - this is a simple consequence of its reward mechanism.
But the rota isn't perfect. Sometimes the Monday editor will edit a story so late on Monday that it's Tuesday, and sometimes the Tuesday editor will be up early and edit a story at the same time. This pattern is more common during the holidays, when the editors go to different time zones, often travel, and become more irregular in their editing.
So if the AI is complex and skilled enough, then, simply through feedback, it will start building up a picture of its editors. It will figure out when they are likely to stick to a schedule, and when they will be more irregular. It will figure out the difference between holidays and non-holidays. Given time, it may be able to track the editors' moods, and it will certainly pick up on any major change in their lives - such as romantic relationships and breakups, which will radically change whether and how it should present stories with a romantic focus.
It will also likely learn the correlation between stories and feedback - maybe presenting a story defined roughly as "positive" will increase subsequent reward for the rest of the day, on all stories. Or maybe this will only work on a certain editor, or only early in the term. Or only before lunch.
Thus the simple trainable AI with a particular focus - write automated news stories - will be trained, through feedback, to learn about its editors/controllers, to distinguish them, to get to know them, and, in effect, to manipulate them.
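The dynamic described above can be sketched as a toy simulation. Everything here is invented for illustration: two hypothetical editors with fixed weekday style preferences, and a simple epsilon-greedy bandit standing in for the "complex, trainable AI".

```python
import random

random.seed(0)

# Two hypothetical editors: the Mon/Wed/Fri editor prefers style "A",
# the Tue/Thu editor prefers style "B". Days are 0=Mon .. 4=Fri.
def editor_reward(day, style):
    preferred = "A" if day in (0, 2, 4) else "B"
    return 1.0 if style == preferred else 0.0

# Epsilon-greedy bandit with one arm per (day, style) pair.
values = {(d, s): 0.0 for d in range(5) for s in "AB"}
counts = {(d, s): 0 for d in range(5) for s in "AB"}

for step in range(2000):
    day = step % 5
    if random.random() < 0.1:                       # explore
        style = random.choice("AB")
    else:                                           # exploit best known style
        style = max("AB", key=lambda s: values[(day, s)])
    r = editor_reward(day, style)
    counts[(day, style)] += 1
    # Incremental mean update of the estimated reward.
    values[(day, style)] += (r - values[(day, style)]) / counts[(day, style)]

# The agent has implicitly modelled its editors' schedule.
learned = {d: max("AB", key=lambda s: values[(d, s)]) for d in range(5)}
print(learned)
```

The agent never has a concept of "editor"; it simply learns which style is rewarded on which day, which is exactly the kind of implicit controller-modelling the post describes.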
This may be a useful "bridging example" between standard RL agents and the superintelligent machines.
Immortality: A Practical Guide
This article is about how to increase one’s own chances of living forever or, failing that, living for a long time. To be clear, this guide defines death as the long-term loss of one’s consciousness and defines immortality as never-ending life. For those who would like less lengthy information on decreasing one’s risk of death, I recommend reading the sections “Can we become immortal,” “Should we try to become immortal,” and “Cryonics,” in this guide, along with the article Lifestyle Interventions to Increase Longevity.
This article does not discuss how to treat specific diseases you may have. It is not intended as a substitute for the medical advice of physicians. You should consult a physician with respect to any symptoms that may require diagnosis or medical attention. Additionally, I suggest considering using MetaMed to receive customized, albeit perhaps very expensive, information on your specific conditions, if you have any.
When reading about the effect sizes in scientific studies, keep in mind that many scientific studies report false-positives and are biased,101 though I have tried to minimize this by maximizing the quality of the studies used. Meta-analyses and scientific reviews seem to typically be of higher quality than other study types, but are still subject to biases.114
Corrections, criticisms, and suggestions for new topics are greatly appreciated. I’ve tried to write this article tersely, so feedback on that would be especially appreciated. Apologies if the article’s font type, size and color aren’t standard on Less Wrong; I made it in Google Docs without being aware of Less Wrong’s standard, and it would take too much work to change the style of the entire article.
Can we become immortal?
Should we try to become immortal?
Relative importance of the different topics
What to eat and drink
When to eat and drink
How much to eat
How much to drink
Emotions and feelings
Positive emotions and feelings
Anger and hostility
Social and personality factors
Giving to others
External causes of death
Intentional self harm
Inanimate mechanical forces
Smoke, fire, and heat
Other accidental threats to breathing
Forces of nature
Can we become immortal?
In order to potentially live forever, one never needs to make it impossible to die; one instead just needs to have one’s life expectancy increase faster than time passes, a concept known as the longevity escape velocity.61 For example, if one had a 10% chance of dying in their first century of life, but their chance of death decreased by 90% at the end of each century, then one’s chance of ever dying would be 0.1 + 0.1² + 0.1³ + … = 0.111… ≈ 11.11%. When applied to risk of death from aging, this is akin to one’s remaining life expectancy after jumping off a cliff while being affected by gravity and jet propulsion, with gravity being akin to aging and jet propulsion being akin to anti-aging (rejuvenation) therapies, as shown below.
The numbers in the above figure denote plausible ages of individuals when the first rejuvenation therapies arrive. A 30% increase in healthy lifespan would give the users of first-generation rejuvenation therapies 20 years to benefit from second-generation rejuvenation therapies, which could give an additional 30% increase in life span, ad infinitum.61
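The geometric series behind the 11.11% figure can be checked numerically. This is a trivial sketch; 50 terms is far more than enough for the tail to vanish.

```python
# Chance of ever dying when the per-century risk of death starts at 10%
# and falls by 90% each subsequent century: 0.1 + 0.1^2 + 0.1^3 + ...
risk_of_ever_dying = sum(0.1 ** n for n in range(1, 51))
print(round(risk_of_ever_dying * 100, 2))  # 11.11 (percent)
```

The sum converges to 1/9, i.e. roughly an 11.11% lifetime risk despite infinitely many centuries of exposure, which is the point of longevity escape velocity.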
As for causes of death, many deaths are strongly age-related. The proportion of deaths that are caused by aging in the industrial world approaches 90%.53 Thus, I suppose postponing aging would drastically increase life expectancy.
As for efforts against aging, the SENS Research Foundation and Science for Life Extension are charitable foundations working to cure aging.54, 55 Additionally, Calico, a Google-backed company, and AbbVie, a large pharmaceutical company, have each committed $250 million in funding to cure aging.56
I speculate that one could additionally decrease risk of death by becoming a cyborg, as mechanical bodies seem easier to maintain than biological ones, though I’ve found no articles discussing this.
Similar to becoming a cyborg, another potential method of decreasing one’s risk of death is mind uploading, which is, roughly speaking, the transfer of most or all of one’s mental contents into a computer.62 However, there are some concerns about the transfer creating a copy of one’s consciousness, rather than preserving the same consciousness. This issue is made very apparent if the mind-uploading process leaves the original mind intact, making it seem unlikely that one’s consciousness was transferred to the new body.63 Eliezer Yudkowsky doesn’t seem to believe this is an issue, though I haven't found a citation for this.
With regard to consciousness, it seems that most individuals believe that the consciousness in one’s body is the “same” consciousness as the one that was in one’s body in the past and will be in it in the future. However, I know of no evidence for this. If one’s consciousness isn’t the same as the one that will be in one’s body in the future, and one defines death as one’s consciousness permanently ending, then I suppose one can’t prevent death for any time at all. Surprisingly, I’ve found no articles discussing this possibility.
Although curing aging, becoming a cyborg, and mind uploading may prevent death from disease, they still seem to leave oneself vulnerable to accidents, murder, suicide, and existential catastrophes. I speculate that these problems could be solved by giving an artificial superintelligence the ability to take control of one’s body in order to prevent such deaths from occurring. Of course, this possibility is currently unavailable.
Another potential cause of death is the Sun expanding, which could render Earth uninhabitable in roughly one billion years. Death from this could be prevented by colonizing other planets in the solar system, although eventually the sun would render the rest of the solar system uninhabitable. After this, one could potentially inhabit other stars; it is expected that stars will remain for roughly 10 quintillion years, although some theories predict that the universe will be destroyed in a mere 20 billion years. To continue surviving, one could potentially go to other universes.64 Additionally, there are ideas for space-time crystals that could process information even after heat death (i.e. the “end of the universe”),65 so perhaps one could make oneself composed of the space-time crystals via mind uploading or another technique. There could also be other methods of surviving the conventional end of the universe, and life could potentially have 10 quintillion years to find them.
Yet another potential cause of death is living in a computer simulation that is ended. Living in a computer simulation actually seems not to be very improbable. Nick Bostrom argues that:
...at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
The argument for this is here.100
If one does die, one could potentially be revived. Cryonics, discussed later in this article, may help with this. Additionally, I suppose one could possibly be revived if future intelligences continually create new conscious individuals and eventually create one with one’s “own” consciousness, though consciousness remains a mystery, so this may not be plausible. If the probability of one’s consciousness being revived per unit time does not approach or equal zero as time approaches infinity, then I suppose one is bound to become conscious again, though this scenario may be unlikely. I’ve found no articles discussing either possibility.
As already discussed, in order to live forever, one must either be revived after dying or prevent death from all causes: the consciousness in one’s body not being the same as the one that will be in it in the future, accidents, aging, the sun dying, the universe dying, being in a simulation that ends, and other, unknown, causes. Keep in mind that adding extra details that aren’t guaranteed to be true can only make events less probable, and that people often don’t account for this.66 A spreadsheet for estimating one’s chance of living forever is here.
Should we try to become immortal?
Before deciding whether one should try to become immortal, I suggest learning about the cognitive biases scope insensitivity, hyperbolic discounting, and the bias blind spot if you don’t currently know about them. Also, keep in mind that one study found that simply informing people of a cognitive bias made them no less likely to fall prey to it. A study also found that people only partially adjusted for cognitive biases after being told that informing people of a cognitive bias made them no less likely to fall prey to it.67
Many articles arguing against immortality can be found via a quick Google search, including this, this, this, and this. This article, along with its comments, discusses counter-arguments to many of these arguments. The Fable of the Dragon-Tyrant provides an argument for curing aging, which can be extended into an argument against mortality as a whole. I suggest reading it.
One can also evaluate the utility of immortality via decision theory. Assuming individuals receive a finite amount of utility per unit time that is never less than some above-zero constant, living forever would give infinitely more utility than living for a finite amount of time. Under these assumptions, to maximize utility one should be willing to accept any finite cost to become immortal. However, the situation is complicated by the possibility of becoming immortal and receiving infinite positive utility unintentionally, in which case one would receive infinite expected utility regardless of whether one tried to become immortal. Additionally, if one has a chance of receiving both infinitely high and infinitely low utility, one’s expected utility would be undefined. Infinite utilities are discussed in “Infinite Ethics” by Nick Bostrom.
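The expected-utility argument can be illustrated with a toy calculation; the success probability, horizon, and cost below are hypothetical, and "forever" is approximated by a very long finite horizon so the numbers stay finite:

```python
# Toy expected-utility model: utility accrues at 1 util per year lived.
def expected_utility(p_immortal, horizon_years, normal_lifespan=80, cost=0):
    # With probability p_immortal one lives to the horizon; otherwise one
    # lives a normal lifespan.  The (finite) cost of the attempt is subtracted.
    return p_immortal * horizon_years + (1 - p_immortal) * normal_lifespan - cost

# Even a 0.1% success chance dominates a large fixed cost once the
# horizon is long enough:
print(expected_utility(0.001, 10**9, cost=10**5))  # ~900,080 utils
print(expected_utility(0.0, 10**9))                # 80.0 utils
```

As the horizon grows without bound, any fixed finite cost is eventually outweighed for any nonzero success probability, which is the crux of the argument in the paragraph above.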
For those interested in decreasing existential risk, living for a very long time, albeit not necessarily forever, may give one more opportunity to do so. This idea can be generalized to many goals one has in life.
On whether one can influence one’s chances of becoming immortal, studies have shown that only roughly 20-30% of longevity in humans is accounted for by genetic factors.68 There are multiple actions one can take to increase one’s chances of living forever; these are what the rest of this article is about. Keep in mind that you should consider continuing reading this article even if you don’t want to try to become immortal, as the article also provides information on living longer, even if not forever.
Relative importance of the different topics
The figure below gives the relative frequencies of preventable causes of death.
Some causes of death are excluded from the graph but are still large causes of death. Most notably, an estimated 440,000 deaths in the US, roughly one sixth of total US deaths, are from preventable medical errors in hospitals.2
Here are the frequencies of causes of death in the US in 2010, based on another classification:
Heart disease: 596,577
Chronic lower respiratory diseases: 142,943
Stroke (cerebrovascular diseases): 128,932
Accidents (unintentional injuries): 126,438
Alzheimer's disease: 84,974
Influenza and Pneumonia: 53,826
Nephritis, nephrotic syndrome, and nephrosis: 45,591
Intentional self-harm (suicide): 39,518
What to eat and drink
Keep in mind that the relationship between health and the consumption of a type of substance isn’t necessarily linear: some substances are beneficial in small amounts but harmful in large amounts, while others are beneficial in both small and large amounts, though consuming large amounts is no more beneficial than consuming small amounts.
Recommendations from The Nutrition Source
The Nutrition Source is part of the Harvard School of Public Health.
Make ½ of your “plate” consist of a variety of fruits and a variety of vegetables, excluding potatoes, due to potatoes’ negative effect on blood sugar. The Harvard School of Public Health doesn’t seem to specify whether this is based on calories or volume. It also doesn’t explain what it means by plate, but presumably ½ of one’s plate means ½ of the solid food consumed.
Make ¼ of your plate consist of whole grains.
Make ¼ of your plate consist of high-protein foods.
Limit red meat consumption.
Avoid processed meats.
Consume monounsaturated and polyunsaturated fats in moderation; they are healthy.
Avoid partially hydrogenated oils, which contain trans fats, which are unhealthy.
Limit milk and dairy products to one to two servings per day.
Limit juice to one small glass per day.
It is important to eat seafood one or two times per week, particularly fatty (dark meat) fish that are richer in EPA and DHA.
Limit diet drink consumption or consume in moderation.
Avoid sugary drinks like soda, sports drinks, and energy drinks.3
The bottom line is that saturated fats and especially trans fats are unhealthy, while unsaturated fats are healthy, and the unsaturated omega-3 and omega-6 fatty acids are essential. The proportion of calories from fat in one’s diet isn’t really linked with disease.
Saturated fat is unhealthy. It’s generally a good idea to minimize saturated fat consumption. The latest Dietary Guidelines for Americans recommend consuming no more than 10% of calories from saturated fat, but the American Heart Association recommends consuming no more than 7% of calories from saturated fat. However, don’t decrease nut, oil, and fish consumption to minimize saturated fat consumption. Foods that contain large amounts of saturated fat include red meat, butter, cheese, and ice cream.
Trans fats are especially unhealthy. For every 2% increase of calories from trans fat, risk of coronary heart disease increases by 23%. The Institute of Medicine states that there are no known requirements for trans fats for bodily functions, so their consumption should be minimized. Partially hydrogenated oils contain trans fats, and foods that contain trans fats are often processed foods. In the US, products can claim to have zero grams of trans fat if they have no more than 0.5 grams of trans fat. Products with no more than 0.5 grams of trans fat that still have non-negligible amounts of trans fat will probably have the ingredients “partially hydrogenated vegetable oils” or “vegetable shortening” in their ingredient list.
Unsaturated fats have beneficial effects, including improving cholesterol levels, easing inflammation, and stabilizing heart rhythms. The American Heart Association has set 8-10% of calories as a target for polyunsaturated fat consumption, though eating more polyunsaturated fat, around 15% of daily calories, in place of saturated fat may further lower heart disease risk. Consuming unsaturated fats instead of saturated fat also prevents insulin resistance, a precursor to diabetes. Monounsaturated fats and polyunsaturated fats are types of unsaturated fats.
Omega-3 fatty acids (omega-3 fats) are a type of unsaturated fat. There are two main types: marine omega-3s and alpha-linolenic acid (ALA). Omega-3 fatty acids, especially marine omega-3s, are healthy. Though one can make most needed types of fats from other fats or substances consumed, omega-3 fat is an essential fat, meaning it is an important type of fat that cannot be made in the body, so it must come from food. Most Americans don’t get enough omega-3 fats.
Marine omega-3s are primarily found in fish, especially fatty (dark meat) fish. A comprehensive review found that eating roughly two grams per week of omega-3s from fish, equal to about one or two servings of fatty fish per week, decreased risk of death from heart disease by more than one-third. Though fish contain mercury, this is insignificant compared to the positive health effects of their consumption (for the consumer, not the fish). However, it does benefit one’s health to consult local advisories to determine how much local freshwater fish to consume.
ALA may be an essential nutrient, and increased ALA consumption may be beneficial. ALA is found in vegetable oils, nuts (especially walnuts), flax seeds, flaxseed oil, leafy vegetables, and some animal fat, especially those from grass-fed animals. ALA is primarily used as energy, but a very small amount of it is converted into marine omega-3s. ALA is the most common omega-3 in western diets.
Most Americans consume much more omega-6 fatty acids (omega-6 fats) than omega-3 fats. Omega-6 fat is an essential nutrient and its consumption is healthy. Some sources of it include corn and soybean oils. The Nutrition Source stated that the theory that omega-3 fats are healthier than omega-6 fats isn’t supported by evidence. However, in an image from the Nutrition Source, seafood omega-6 fats were ranked as healthier than plant omega-6 fats, which were ranked as healthier than monounsaturated fats, although such a ranking was, to the best of my knowledge, never stated in the text.3
There seem to be two main determinants of a carbohydrate source’s effect on health: nutrient content and effect on blood sugar. The bottom line is that consuming whole grains and other less processed grains, and decreasing refined grain consumption, improves health. Additionally, moderately low carbohydrate diets can increase heart health as long as protein and fat come from healthy sources, though the type of carbohydrate is at least as important as the amount of carbohydrates in a diet.
Glycemic index is a measure of how much a food increases blood sugar levels. Consuming carbohydrates that cause blood-sugar spikes can increase risk of heart disease and diabetes at least as much as consuming too much saturated fat does. Some factors that increase the glycemic index of foods include:
Being a refined grain as opposed to a whole grain.
Being finely ground, which is why consuming whole grains in their whole form, such as rice, can be healthier than consuming them as bread.
Having less fiber.
Being more ripe, in the case of fruits and vegetables.
Having a lower fat content, as meals with fat are converted more slowly into sugar.
Vegetables (excluding potatoes), fruits, whole grains, and beans, are healthier than other carbohydrates. Potatoes have a negative effect on blood sugar, due to their high glycemic index. Information on glycemic index and the index of various foods is here.
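A related measure, glycemic load, weights a food's glycemic index by the amount of carbohydrate actually in a serving; the watermelon figures below are approximate and purely illustrative:

```python
# Glycemic load (GL) = glycemic index (GI) x grams of carbohydrate / 100.
def glycemic_load(glycemic_index, carb_grams):
    return glycemic_index * carb_grams / 100

# Watermelon: high GI (~76) but only ~11 g of carbohydrate per serving,
# so its glycemic load per serving is fairly low.
print(glycemic_load(76, 11))  # 8.36
```

This is why a high glycemic index alone doesn't make a food unhealthy: the blood-sugar impact also depends on how much carbohydrate a typical serving contains.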
Whole grains also contain essential minerals such as magnesium, selenium, and copper, which may protect against some cancers. Refining grains takes away 50% of the grains’ B vitamins, 90% of vitamin E, and virtually all fiber. Sugary drinks usually have little nutritional value.
Identifying whole-grain foods as those with at least one gram of fiber for every ten grams of carbohydrate is a more effective measure of healthfulness than looking for a whole grain as the first ingredient, any whole grain as the first ingredient without added sugars in the first three ingredients, the word “whole” before any grain ingredient, or the whole grain stamp.3
Proteins are broken down to form amino acids, which are needed for health. Though the body can make some amino acids by modifying others, some must come from food; these are called essential amino acids. The Institute of Medicine recommends that adults get a minimum of 0.8 grams of protein per kilogram of body weight per day, and sets the range of acceptable protein intake to 10-35% of calories per day. The US recommended daily allowance for protein is 46 grams per day for women over 18 and 56 grams per day for men over 18.
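The guideline arithmetic is straightforward; the reference body weights below (roughly 57.5 kg for women and 70 kg for men) are my assumption for how the 0.8 g/kg minimum lines up with the 46 g and 56 g RDAs:

```python
# Institute of Medicine minimum: 0.8 g of protein per kg of body weight per day.
def min_protein_grams(weight_kg, grams_per_kg=0.8):
    return weight_kg * grams_per_kg

print(round(min_protein_grams(57.5), 1))  # 46.0 g/day (RDA for adult women)
print(round(min_protein_grams(70.0), 1))  # 56.0 g/day (RDA for adult men)
```

Heavier individuals should scale the minimum up accordingly, since the guideline is per kilogram of body weight, not a flat amount.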
Animal products tend to give all essential amino acids, but other sources lack some essential amino acids. Thus, vegetarians need to consume a variety of sources of amino acids each day to get all needed types. Fish, chicken, beans, and nuts are healthy protein sources.3
There are two types of fiber: soluble fiber and insoluble fiber. Both have important health benefits, so one should eat a variety of foods to get both.94 The best sources of fiber are whole grains, fresh fruits and vegetables, legumes, and nuts.3
There are many micronutrients in food; getting enough of them is important. Most healthy individuals can get sufficient micronutrients by consuming a wide variety of healthy foods, such as fruits, vegetables, whole grains, legumes, and lean meats and fish. However, supplementation may be necessary for some. Information about supplements is here.110
Concerning supplementation, potassium, iodine, and lithium supplementation are recommended in the first-place entry in the Quantified Health Prize, a contest on determining good mineral intake levels. However, others suggest that potassium supplementation isn’t necessarily beneficial, as shown here. I’m somewhat skeptical that these supplements are beneficial, as I have not found other sources recommending them. The suggested supplementation levels are in the entry.
Note that food processing typically decreases micronutrient levels, as described here. In general, it seems that cooking, draining, and drying foods decreases nutrient levels sizably, potentially taking away half of the nutrients, while freezing and reheating take away relatively few nutrients.111
One micronutrient worth discussing is sodium. Some sodium is needed for health, but most Americans consume more sodium than needed. However, recommendations on ideal sodium levels vary. The US government recommends limiting sodium consumption to 2,300mg/day (one teaspoon of salt). The American Heart Association recommends limiting sodium consumption to 1,500mg/day (⅔ of a teaspoon), especially for those who are over 50, have high or elevated blood pressure, have diabetes, or are African American.3 However, as RomeoStevens pointed out, the Institute of Medicine found inconclusive evidence that decreasing sodium consumption below 2,300mg/day affects mortality,115 and some meta-analyses have suggested that there is a U-shaped relationship between sodium and mortality.116, 117
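The "2,300 mg of sodium ≈ one teaspoon of salt" equivalence follows from table salt being about 39% sodium by mass and a teaspoon of salt weighing roughly 5.9 g; both figures are rounded approximations:

```python
# Table salt (NaCl) is about 39.3% sodium by mass.
SODIUM_FRACTION = 0.393

def sodium_mg_from_salt_grams(salt_g):
    return salt_g * SODIUM_FRACTION * 1000  # grams of salt -> mg of sodium

print(round(sodium_mg_from_salt_grams(5.9)))  # ~2319 mg, i.e. one teaspoon
```

Note that nutrition labels report sodium, not salt, so converting in the other direction (mg sodium to g salt) requires dividing by the same fraction.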
Vitamin D is another micronutrient that’s important for health. It can be obtained from food or made in the body after sun exposure. Most people who live farther north than San Francisco or don’t go outside at least fifteen minutes when it’s sunny are vitamin D deficient. Vitamin D deficiency increases the risk of many chronic diseases, including heart disease, infectious diseases, and some cancers. However, there is controversy about optimal vitamin D intake. The Institute of Medicine recommends getting 600 to 4,000 IU/day, though it acknowledged that there was no good evidence of harm at 4,000 IU/day. The Nutrition Source states that these recommendations are too low and fail to account for new evidence. The Nutrition Source states that for most people, supplements are the best source of vitamin D, but most multivitamins have too little vitamin D in them. The Nutrition Source recommends considering, and talking to a doctor about, taking an additional multivitamin if you take less than 1,000 IU of vitamin D, especially if you have little sun exposure.3
Information on blood pressure is here in the section titled “Blood Pressure.”
Cholesterol and triglycerides
Information on optimal amounts of cholesterol and triglycerides are here.
The biggest influences on cholesterol are fats and carbohydrates in one’s diet, and cholesterol consumption generally has a far weaker influence. However, some people’s cholesterol levels rise and fall very quickly with the amount of cholesterol consumed. For them, decreasing cholesterol consumption from food can have a considerable effect on cholesterol levels. Trial and error is currently the only way of determining whether one’s cholesterol levels rise and fall very quickly with the amount of cholesterol consumed.
Despite their initial hype, randomized controlled trials have offered little support for the benefit of single antioxidants, though studies are inconclusive.3
Dietary reference intakes
For the numerically inclined, the Dietary Reference Intake provides quantitative guidelines on good nutrient consumption amounts for many nutrients, though it may be harder to use for some, due to its quantitative nature.
The Nutrition Source and SFGate state that water is the best drink,3, 112 though I don’t know why it’s considered healthier than drinks such as tea.
Unsweetened tea decreases the risk of many diseases, likely largely due to the polyphenols, a type of antioxidant, in it. Despite antioxidants typically having little evidence of benefit, I suppose polyphenols are relatively beneficial. All teas have roughly the same levels of polyphenols except decaffeinated tea,3 which has fewer polyphenols.96 Research suggests that proteins, and possibly fat, in milk decrease the antioxidant capacity of tea.
It’s considered safe to drink up to six cups of coffee per day. Unsweetened coffee is healthy and may decrease some disease risks, though coffee may slightly increase blood pressure. Some people may want to consider avoiding coffee or switching to decaf, especially women who are pregnant or people who have a hard time controlling their blood pressure or blood sugar. The Nutrition Source states that it’s best to brew coffee with a paper filter to remove a substance that increases LDL cholesterol, despite consumed cholesterol typically having a very small effect on the body’s cholesterol level.
Alcohol increases risk of diseases for some people3 and decreases it for others.3, 119 Heavy alcohol consumption is a major cause of preventable death in most countries. For some groups of people, especially pregnant people, people recovering from alcohol addiction, and people with liver disease, alcohol causes greater health risks and should be avoided. The likelihood of becoming addicted to alcohol can be genetically determined. Moderate drinking, generally defined as no more than one or two drinks per day for men, can increase colon and breast cancer risk, but these effects are offset by decreased heart disease and diabetes risk, especially in middle age, when heart disease begins to account for an increasingly large proportion of deaths. However, alcohol consumption won’t decrease cardiovascular disease risk much for those who are thin, physically active, don’t smoke, eat a healthy diet, and have no family history of heart disease. Some research suggests that red wine, particularly when consumed after a meal, has more cardiovascular benefits than beer or spirits, but alcohol choice still has little effect on disease risk. In one study, moderate drinkers were 30-35% less likely to have heart attacks than non-drinkers, and men who drank daily had lower heart attack risk than those who drank once or twice per week.
There’s no need to drink more than one or two glasses of milk per day. Less milk is fine if calcium is obtained from other sources.
The health effects of artificially sweetened drinks are largely unknown. Oddly, they may also cause weight gain. It’s best to limit consuming them if one drinks them at all.
Sugary drinks can cause weight gain, as they aren’t as filling as solid food and have high sugar. They also increase the risk of diabetes, heart disease, and other diseases. Fruit juice has more calories and less fiber than whole fruit and is reportedly no better than soft drinks.3
Fruits and vegetables are an important part of a healthy diet. Eating a variety of them is as important as eating many of them.3 Fish and nut consumption is also very healthy.98
Processed meat, on the other hand, is shockingly bad.98 A meta-analysis found that processed meat consumption is associated with a 42% increased risk of coronary heart disease (relative risk per 50g serving per day; 95% confidence interval: 1.07 - 1.89) and a 19% increased risk of diabetes.97 Despite this, a bit of red meat consumption has been found to be beneficial.98 Consumption of well-done, fried, or barbecued meat has been associated with certain cancers, presumably due to carcinogens formed in the meat during cooking, though this link isn’t definitive. The amount of carcinogens increases with increased cooking temperature (especially above 300ºF), increased cooking time, charring, or exposure to smoke.99
Eating less than one egg per day doesn’t increase heart disease risk in healthy individuals and can be part of a healthy diet.3
Organic foods have lower levels of pesticides than non-organic foods, though the residues of most organic and non-organic products don’t exceed government safety thresholds. Washing fresh fruits and vegetables is recommended, as it removes bacteria and some, though not all, pesticide residues. Organic foods probably aren’t more nutritious than non-organic foods.103
When to eat and drink
A randomized controlled trial found an increase in blood sugar variation for subjects who skipped breakfast.6 Increasing meal frequency and decreasing meal size appears to have some metabolic advantages, and doesn’t appear to have metabolic disadvantages7 (note: this source is old, from 1994). However, Mayo Clinic states that fasting for 1-2 days per week may increase heart health.32 Perhaps it is optimal for health to fast, but to have high meal frequency when not fasting.
How much to eat
One gains weight when calories consumed exceed calories burnt, roughly in proportion to the surplus, and loses weight when the reverse is true. The Centers for Disease Control and Prevention (CDC) has guidelines for healthy weights and information on how to lose weight.
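As a rough sketch of the calories-in-versus-calories-out idea, using the common (and only approximate) rule of thumb that a surplus of about 3,500 kcal corresponds to one pound of body fat:

```python
KCAL_PER_POUND_FAT = 3500  # widely used approximation, not exact

def pounds_gained(kcal_consumed, kcal_burnt):
    # Weight change tracks the energy surplus (or deficit, if negative).
    return (kcal_consumed - kcal_burnt) / KCAL_PER_POUND_FAT

# A 500 kcal/day surplus sustained for a week:
print(pounds_gained(7 * 2500, 7 * 2000))  # 1.0 pound
```

In practice the body adapts its energy expenditure over time, so this linear model overstates long-run weight change; it is only a first-order estimate.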
Some advocate restricting calorie intake to a greater extent, which is known as calorie restriction. It’s unknown whether calorie restriction increases lifespan in humans, but moderate calorie restriction with adequate nutrition decreases risk of obesity, type 2 diabetes, inflammation, hypertension, cardiovascular disease, and metabolic risk factors associated with cancer, and is the most effective way of consistently increasing lifespan in a variety of organisms. The CR Society has information on getting started on calorie restriction.4
How much to drink
Drinking enough to rarely feel thirsty and to have colorless or light yellow urine is usually sufficient. It’s also possible to drink too much water, though this is rare in healthy adults who eat an average American diet; endurance athletes are at higher risk.10
A meta-analysis found the data in the following graphs for people aged over 40.
A weekly total of roughly five hours of vigorous exercise has been identified by several studies as the safe upper limit for life expectancy. It may be beneficial to take one or two days off from vigorous exercise per week and to limit chronic vigorous exercise to <= 60 min/day.9 Based on the above, my best guess for the optimal amount of exercise for longevity is roughly 30 MET-hr/wk. Calisthenics burn 6-10 METs,11 so an example routine to get this amount of exercise is doing calisthenics 38 minutes per day, 6 days per week. Guides on how to exercise are available, e.g. this one.
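The MET arithmetic behind that example routine can be checked directly; I assume calisthenics at 8 METs, the midpoint of the 6-10 range given above:

```python
# MET-hours per week = intensity (METs) x hours per day x days per week.
def met_hours_per_week(mets, minutes_per_day, days_per_week):
    return mets * (minutes_per_day / 60) * days_per_week

# 38 min/day of ~8-MET calisthenics, 6 days/wk:
print(round(met_hours_per_week(8, 38, 6), 1))  # 30.4, near the 30 MET-hr/wk target
```

Any activity/duration combination hitting roughly the same MET-hour total should be equivalent under this metric, e.g. fewer days of a higher-intensity activity.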
Carcinogens are cancer-causing substances. Since cancer causes death, decreasing exposure to carcinogens presumably decreases one’s risk of death. Some foods are also carcinogenic, as discussed in the “Food” section.
Tobacco use is the greatest avoidable risk factor for cancer worldwide, causing roughly 22% of cancer deaths. Additionally, second hand smoke has been proven to cause lung cancer in nonsmoking adults.
Alcohol use is a risk factor for many types of cancer. The risk of cancer increases with the amount of alcohol consumed, and substantially increases if one is also a heavy smoker. The attributable fraction of cancer from alcohol use varies by gender, due to differences in consumption level. E.g. 22% of mouth and oropharynx cancer is attributable to alcohol in men, but only 9% in women.
Environmental air pollution accounts for 1-4% of cancer.84 Diesel exhaust is one type of carcinogenic air pollution. Those with the highest exposure to diesel exhaust are exposed to it occupationally. As for residential exposure, diesel exhaust is highest in homes near roads where traffic is heaviest. Limiting time spent near large sources of diesel exhaust decreases exposure. Benzene, another carcinogen, is found in gasoline and vehicle exhaust, but exposure to it can also be caused by being in areas with unventilated fumes from gasoline, glues, solvents, paints, and art supplies. Exposure can occur through inhalation or skin contact.86
Some occupations expose workers to occupational carcinogens.84 A list of some of these occupations is here; all involve manual labor, except for hospital-related jobs.87
Infections are responsible for 6% of cancer deaths in developed nations.84 Many of the infections are spread via sexual contact and sharing needles and some can be vaccinated against.85
Ionizing radiation is carcinogenic to humans. Residential exposure to radon gas, the largest source of radon exposure for most people, is estimated to cause 3-14% of lung cancers.84 Being exposed to radon and cigarette smoke together increases one’s cancer risk much more than either does separately. Radon levels vary greatly depending on where one lives, and radon is usually higher inside buildings, especially on levels closer to the ground, such as basements. The EPA recommends taking action to reduce radon levels if they are greater than or equal to 4.0 pCi/L. Radon levels can be reduced by a qualified contractor; attempting to reduce them without proper training and equipment can increase instead of decrease them.88
Some medical tests can also increase exposure to radiation. The EPA estimates that exposure to 10 mSv from a medical imaging test increases risk of cancer by roughly 0.05%. To decrease exposure to radiation from medical imaging tests, one can ask whether there are ways to shield the parts of one’s body that aren’t being tested and make sure the doctor performing the test is qualified.89
Small doses of ionizing radiation increase risk by a very small amount. Most studies haven’t detected increased cancer risk in people exposed to low levels of ionizing radiation. For example, people living at higher altitudes don’t have noticeably higher cancer rates than other people. In general, cancer risk from radiation increases as the dose of radiation increases, and there is thought to be no safe level of exposure. Ultraviolet radiation is a type of radiation that can be ionizing. Sunlight is the main source of ultraviolet radiation.84
Factors that increase one’s exposure to ultraviolet radiation when outside include:
Time of day. Almost ⅓ of UV radiation hits the surface between 11AM and 1PM, and ¾ hit the surface between 9AM and 5PM.
Time of year. UV radiation is greater during summer. This factor is less significant near the equator.
Altitude. High elevation causes more UV radiation to penetrate the atmosphere.
Clouds. Sometimes clouds decrease levels of UV radiation because they block UV radiation from the sun. Other times, they increase exposure because they reflect UV radiation.
Reflection off surfaces, such as water, sand, snow, and grass increases UV radiation.
Ozone density, because ozone stops some UV radiation from reaching the surface.
Some tips to decrease exposure to UV radiation:
Stay in the shade. This is one of the best ways to limit exposure to UV radiation in sunlight.
Cover yourself with clothing.
Use sunscreen on exposed skin.90
Tanning beds are also a source of ultraviolet radiation. Using tanning booths can increase one’s chance of getting skin melanoma by at least 75%.91
Vitamin D3 is also produced from ultraviolet radiation, although the American Society for Clinical Nutrition states that vitamin D is readily available from supplements and that the controversy about reducing ultraviolet radiation exposure was fueled by the tanning industry.92
There could be some risk of cell phone use being associated with cancer, but the evidence is not strong enough to be considered causal and needs to be investigated further.93, 118
Emotions and feelings
Positive emotions and feelings
A review suggested that positive emotions and feelings decrease mortality. Proposed mechanisms include positive emotions and feelings being associated with better health practices, such as improved sleep quality, increased exercise, and increased dietary zinc consumption, as well as with lower levels of some stress hormones. Positive emotion has also been hypothesized to be associated with other health-relevant hormones, various aspects of immune function, and closer and more numerous social contacts.33 Less Wrong has a good article on how to be happy.
A meta-analysis was conducted on psychological stress. To measure psychological stress, it used the GHQ-12 score, which measured symptoms of anxiety, depression, social dysfunction, and loss of confidence. The scores range from 0 to 12, with 0 being asymptomatic, 1-3 being subclinically symptomatic, 4-6 being symptomatic, and 7-12 being highly symptomatic. It found the results shown in the following graphs.
This association was essentially unchanged after controlling for a range of covariates including occupational social class, alcohol intake, and smoking. However, reverse causality may still partly explain the association.30
A study found that individuals with moderate and high stress levels, as opposed to low stress, had hazard ratios (HRs) of mortality of 1.43 and 1.49, respectively.27 A meta-analysis found that high perceived stress, as opposed to low perceived stress, had a coronary heart disease relative risk (RR) of 1.27. The mean age of participants in the studies used in the meta-analysis varied from 44 to 72.5 years and was significantly and positively associated with effect size, explaining 46% of the variance in effect sizes between the studies.28
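For intuition about what hazard ratios of this size mean in absolute terms: under a proportional-hazards assumption, a hazard ratio maps a baseline risk p over some fixed period to 1 - (1 - p)^HR. The sketch below is my own illustration, not a calculation from the cited studies; the 5% baseline risk is a made-up number chosen for the example.

```python
def risk_at_hazard_ratio(baseline_risk, hr):
    """Convert a baseline risk over a fixed period into the risk implied
    by a hazard ratio, assuming proportional hazards: S1(t) = S0(t)**HR."""
    return 1 - (1 - baseline_risk) ** hr

# Hypothetical example: a 5% baseline mortality risk over some period,
# under the HR of 1.43 reported for moderate stress, becomes about 7.1%.
print(round(risk_at_hazard_ratio(0.05, 1.43), 3))  # 0.071
```

The same conversion shows why an identical hazard ratio matters far more for older people, whose baseline risk p is higher to begin with.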
A cross-sectional study (which is a relatively weak study design) not in the aforementioned meta-analysis used 28,753 subjects to study the effect on mortality of the amount of stress and the perception of whether stress is harmful or not. It found that neither of these factors predicted mortality independently, but that taken together, they did have a statistically significant effect. Subjects who reported both much stress and a belief that stress has a large effect on health had an HR of 1.43 (95% CI: 1.2, 1.7). Reverse causality may partially explain this, though, as those who have had negative health impacts from stress may have been more likely to report that stress influences health.83
Anger and hostility
A meta-analysis found that after fully controlling for behavioral covariates such as smoking, physical activity or body mass index, and socioeconomic status, anger and hostility were not associated with coronary heart disease (CHD), though the results are inconclusive.34
Social and personality factors
A review suggested that social status is linked to health via gender, race, ethnicity, education levels, socioeconomic differences, family background, and old age.46
Giving to others
An observational study found that stressful life events were not a predictor of mortality for those who engaged in unpaid helping behavior directed toward friends, neighbors, or relatives who did not live with them. This protective association may be due to giving to others creating a sense of mattering, opportunities for generativity, improved social well-being, the emotional state of compassion, and the physiology of the caregiving behavioral system.35
A large meta-analysis found that the odds ratio of mortality of having weak social relationships is 1.5 (95% confidence interval (CI): 1.42 to 1.59). However, this estimate may be conservative. Many of the studies used in the meta-analysis used single-item measures of social relations, but the size of the association was greatest in studies that used more complex measurements. Additionally, some of the studies in the meta-analysis adjusted for risk factors that may be mediators of social relationships’ effect on mortality (e.g. behavior, diet, and exercise). Many of the studies in the meta-analysis also ignored the quality of social relationships, but research suggests that negative social relationships are linked to increased mortality. Thus, the effect of social relationships on mortality could be even greater than the study found.
Concerning causation, social relationships are linked to better health practices and psychological processes, such as stress and depression, which influence health outcomes on their own. However, the meta-analysis also states that social relationships exert an independent effect. Some studies show that social support is linked to better immune system functioning and to immune-mediated inflammatory processes.36
A cohort study with 468 deaths found that each 1 standard deviation decrease in conscientiousness was associated with the HR being multiplied by 1.07 (95% CI: 0.98 – 1.17), though it gave no mechanism for the association.39 Although it adjusted for several variables (e.g. socioeconomic status, smoking, and drinking), it didn’t adjust for drug use, risky driving, risky sex, suicide, and violence, which were all found by a meta-analysis to have statistically significant associations with conscientiousness.40 Overall, conscientiousness doesn’t seem to me to have a significant effect on mortality.
Mayo clinic has a good article on preventing infectious disease.
A cohort study of 5611 adults found that compared to men with 26-32 teeth, men with 16-25 teeth had an HR of 1.03 (95% CI: 0.91-1.17), men with 1-15 teeth had an HR of 1.21 (95% CI: 1.05-1.40) and men with 0 teeth had an HR of 1.18 (95% CI: 1.00-1.39).
In the study, men who never brushed their teeth at night had a HR of 1.34 (95% CI: 1.14-1.57) relative to those who did every night. Among subjects who brushed at night, HR was similar between those who did and didn’t brush daily in the morning or day. The HR for men who brushed in the morning every day but not at night every day was 1.19 (95% CI: 0.99-1.43).
In the study, men who never used dental floss had an HR of 1.27 (95% CI: 1.11-1.46) and those who sometimes used it had an HR of 1.14 (95% CI: 1.00-1.30) compared to men who used it every day. Among subjects who brushed their teeth at night daily, not flossing was associated with a significantly increased HR.
Use of toothpicks didn’t significantly decrease HR and mouthwash had no effect.
The study had a list of other studies on the effect of dental health on mortality. It seems to me that almost all of them found a negative correlation between dental health and risk of mortality, although the study didn’t state its methodology for selecting the studies shown. I did a crude review of other literature by looking only at abstracts and found five studies concluding that poor dental health increased risk of mortality and one concluding it didn’t.
Regarding possible mechanisms, the study says that toothpaste helps prevent dental caries and that dental floss is the most effective means of removing interdental plaque and decreasing interdental gingival inflammation.38
It seems that getting too little or too much sleep likely increases one’s risk of mortality, but it’s hard to tell exactly how much is too much and how little is too little.
One review found that the association between amount of sleep and mortality is inconsistent in studies and that what association does exist may be due to reverse-causality.41 However, a meta-analysis found that the RR associated with short sleep duration (variously defined as sleeping from < 8 hrs/night to < 6 hrs/night) was 1.10 (95% CI: 1.06-1.15). It also found that the RR associated with long sleep duration (variously defined as sleeping for > 8 hrs/night to > 10 hrs per night) compared with medium sleep duration (variously defined as sleeping for 7-7.9 hrs/night to 9-9.9 hrs/night) was 1.23 (95% CI: 1.17 - 1.30).42
The National Heart, Lung, and Blood Institute and the Mayo Clinic recommend adults get 7-8 hours of sleep per night, although they also say sleep needs vary from person to person. They give no method of determining optimal sleep for an individual, and they don’t say whether their recommendations are for optimal longevity, optimal productivity, something else, or a combination of factors.43 The Harvard Medical School implies that one’s optimal amount of sleep is enough sleep to not need an alarm to wake up, though it didn’t specify the criteria for determining optimality either.45
None of the drugs I’ve looked into have a beneficial effect for people without a particular disease or risk factor. Notes on them are here.
A quasi-randomized experiment, with validity presumably near that of a randomized trial, suggested that blood donation didn’t significantly decrease risk of coronary heart disease (CHD). Observational studies have shown much lower CHD incidence among donors, although the authors of the experiment suspect that bias and reverse causation played a role in this.29 That said, a review reportedly found that reverse causation accounted for only 30% of the effect of blood donation, though I haven’t been able to find the review. RomeoStevens suggests that the potential benefits of blood donation are high enough and the costs low enough that blood donation is worth doing.120
After adjusting for amount of physical activity, a meta-analysis estimated that for every one-hour increment of sitting within the intervals 0-3, >3-7, and >7 h/day of total sitting time, the hazard ratios of mortality were 1.00 (95% CI: 0.98-1.03), 1.02 (95% CI: 0.99-1.05), and 1.05 (95% CI: 1.02-1.08), respectively. It proposed no mechanism for sitting time having this effect,37 so it might have been due to confounding variables it didn’t control for.
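If one assumes these per-hour hazard ratios compound multiplicatively across the intervals (my assumption about how the dose-response estimates combine, not a calculation from the paper), the overall hazard ratio implied by a given daily sitting time can be sketched as:

```python
def sitting_hr(hours):
    """Multiply the per-hour hazard ratios for each sitting-time interval:
    1.00/h for hours 0-3, 1.02/h for hours >3-7, 1.05/h for hours >7."""
    hr = 1.0
    hr *= 1.00 ** min(hours, 3)              # first 3 hours: no change
    hr *= 1.02 ** max(0, min(hours, 7) - 3)  # hours >3 up to 7
    hr *= 1.05 ** max(0, hours - 7)          # hours beyond 7
    return hr

print(round(sitting_hr(10), 2))  # 10 h/day of sitting → 1.25
```

On this reading, 10 hours of daily sitting carries a roughly 25% higher mortality hazard than 3 hours or fewer, holding physical activity constant.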
Sleep apnea is an independent risk factor for mortality and cardiovascular disease.26 Symptoms and other information on sleep apnea are here.
A meta-analysis found that self-reported habitual snoring had a small but statistically significant association with stroke and coronary heart disease, but not with cardiovascular disease and all-cause mortality [HR 0.98 (95% CI: 0.78-1.23)]. Whether the risk is due to obstructive sleep apnea is controversial. Only the abstract can be viewed for free, so I’m basing this solely on the abstract.31
The organization Susan G. Komen, citing a meta-analysis that used randomized controlled trials, doesn’t recommend breast self exams as a screening tool for breast cancer, as it hasn’t been shown to decrease cancer death. However, it still stated that it is important to be familiar with one’s breasts’ appearance and how they normally feel.49 According to the Memorial Sloan Kettering Cancer Center, no study has been able to show a statistically significant decrease in breast cancer deaths from breast self-exams.50 The National Cancer Institute states that breast self-examinations haven’t been shown to decrease breast cancer mortality, but does increase biopsies of benign breast lesions.51
The American Cancer Society doesn’t recommend testicular self-exams for all men, as they haven’t been studied enough to determine whether they decrease mortality. However, it states that men with risk factors for testicular cancer (e.g. an undescended testicle, previous testicular cancer, or a family member who has had testicular cancer) should consider self-exams and discuss them with a doctor. The American Cancer Society also recommends testicular exams as part of routine cancer-related check-ups.52
Genomics is the study of the genes in one’s genome, and may help increase health by using knowledge of one’s genes to personalize treatment. However, it hasn’t proved useful for most people; recommendations rarely change after genomic testing. Still, genomics has much future potential.102
As I said in the section “Can we become immortal,” the proportion of deaths caused by aging in the industrialized world approaches 90%,53 but some organizations and companies are working on curing it.54, 55, 56
One could support these organizations in an effort to hasten the development of anti-aging therapies, although I doubt an individual would have a noticeable impact on one’s own chance of death unless one is very wealthy. That said, I have little knowledge of investing, but investing in companies working on curing aging may be beneficial: if they succeed, they may offer an enormous return on investment, and if they fail, one would probably die anyway, so losing one’s money may not matter as much. Calico currently isn’t publicly traded, though.
External causes of death
Unless otherwise specified, graphs in this section are based on data collected from American citizens ages 15-24, since, based on the Less Wrong census results, this seems to be the most probable demographic reading this. For this demographic, external causes account for 76% of deaths. Note, however, that one is much more likely to die when older than when aged 15-24, and older individuals are much more likely to die from disease than from external causes. Thus, I think it’s more important when young to decrease risk of disease than risk of death from external causes. The graph below shows the percentage of total deaths from external causes attributable to various causes.
Below are the relative death rates of specified means of transportation for people in general:
Lifehacker's “Basic Self-Defense Moves Anyone Can Do (and Everyone Should Know)” gives a basic introduction to self-defense.
Intentional self harm
Intentional self harm such as suicide presumably increases one’s risk of death.47 Mayo Clinic has a guide on preventing suicide; I recommend looking at it if you are considering killing yourself. Additionally, if you are, I suggest reviewing the potential rewards of achieving immortality from the section “Should we try to become immortal.”
What to do if a poisoning occurs
CDC recommends staying calm, dialing 1-800-222-1222, and having this information ready:
Your age and weight.
If available, the container of the poison.
The time of the poison exposure.
The address where the poisoning occurred.
It also recommends staying on the phone and following the instructions of the emergency operator or poison control center.18
Types of poisons
Below is a graph of the risk of death per type of poison.
Some types of poisons:
Some household chemicals.
Recreational drug overdoses.
Metals such as lead and mercury.
Plants12 and mushrooms.14
Presumably some animals.
Some fumes, gases, and vapors.15
Using recreational drugs increases risk of death.
Medicine overdoses and household chemicals
CDC has tips for these here.
Lead poisoning causes 0.2% of deaths worldwide and 0.0% of deaths in developed countries.22 Children under the age of 6 are at higher risk of lead poisoning.24 Thus, for those who aren’t children, learning more about preventing lead poisoning seems like more effort than it’s worth. No completely safe blood lead level has been identified.23
MedlinePlus has an article on mercury poisoning here.
Inanimate mechanical forces
Over half of deaths from inanimate mechanical forces for Americans aged 15-24 are from firearms. Many of the other deaths are from explosions, machinery, and getting hit by objects. I suppose using common sense, precaution, and standard safety procedures when dealing with such things is one’s best defense.
Falls
Again, I suppose common sense and precaution are one’s best defense. Additionally, alcohol and substance abuse is a risk factor for falling.72
Smoke, fire and heat
Owning smoke alarms halves one’s risk of dying in a home fire.73 Again, common sense when dealing with fires and items potentially causing fires (e.g. electrical wires and devices) seems effective.
Other accidental threats to breathing
Deaths from other accidental threats to breathing are largely caused by strangling or choking on food or gastric contents, and occasionally by being in a cave-in or trapped in a low-oxygen environment.21 Choking can be caused by eating quickly or laughing while eating.74 If you are choking:
Forcefully cough. Lean as far forwards as you can and hold onto something that is firmly anchored, if possible. Breathe out and then take a deep breath in and cough; this may eject the foreign object.
Attract someone’s attention for help.75
Additionally, choking can be caused by vomiting while unconscious, which can be caused by being very drunk.76 I suggest lying in the recovery position if you think you may vomit while unconscious, so as to decrease the chance of choking on vomit.77 Don’t forget to use common sense.
Electric shock is usually caused by contact with poorly insulated wires or ungrounded electrical equipment, using electrical devices while in water, or lightning.78 Roughly ⅓ of deaths from electricity are caused by exposure to electric transmission lines.21
Forces of nature
Deaths from forces of nature (for Americans ages 15-24), in descending order of number of deaths caused, are: exposure to cold, exposure to heat, lightning, avalanches or other earth movements, cataclysmic storms, and floods.21 Here are some tips to prevent these deaths:
When traveling in cold weather, carry emergency supplies in your car and tell someone where you’re heading.79
Stay hydrated during hot weather.80
Safe locations from lightning include substantial buildings and hard-topped vehicles. Safe locations don’t include small sheds, rain shelters, and open vehicles.
Wait until there are no thunderstorm clouds in the area before going to a location that isn’t lightning safe.81
Since medical care is tasked with treating diseases, receiving medical care when one is ill presumably decreases risk of death. Though medical care may be essential when one has illnesses, a review estimated that preventable medical errors contribute to roughly 440,000 deaths per year in the US, roughly one-sixth of total deaths in the US. It gave a lower limit of 210,000 deaths per year.
The frequency of deaths from preventable medical errors varied across studies used in the review, with a hospital that was shown to put much effort into improving patient safety having a lower proportion of deaths from preventable medical errors than the others.57 Thus, I suppose it would be beneficial to go to hospitals that are known for their dedication to patient safety. There are several rankings of hospital safety available on the internet, such as this one. Information on how to help prevent medical errors is found here and under the “What Consumers Can Do” section here. One rare medical error is having a surgery done on the wrong body part. The New York Times gives tips for preventing this here.
Additionally, I suppose it may be good to live relatively close to a hospital so as to be able to quickly reach it in emergencies, though I’ve found no sources stating this.
A common form of medical care is the general health check. A comprehensive Cochrane review with 182,880 subjects concluded that general health checks are probably not beneficial.107 A meta-analysis found that general health checks are associated with small but statistically significant benefits in factors related to mortality, such as blood pressure and body mass index. However, it found no significant association with mortality.109 The New York Times acknowledged that health checks are probably not beneficial and gave some explanation of why general health checks are nonetheless still common.108 However, CDC and MedlinePlus recommend getting routine general health checks. They cited no studies to support their claims.104, 106 When I contacted CDC about this, it responded, “Regular health exams and tests can help find problems before they start. They also can help find problems early, when your chances for treatment and cure are better. By getting the right health services, screenings, and treatments, you are taking steps that help your chances for living a longer, healthier life,” a claim that doesn’t seem supported by evidence. It also stated, “Although CDC understands you are concerned, the agency does not comment on information from unofficial or non-CDC sources.” I never heard back from MedlinePlus.
Cryonics is the freezing of legally dead humans with the purpose of preserving their bodies so they can be brought back to life in the future once technology makes it possible. Human tissue has been cryopreserved and then brought back to life, although this has never been done with a whole human.59 The price of cryonics ranges from at least $28,000 to $200,000.60 More information on cryonics is on the LessWrong Wiki.
Cryonics, medical care, safe housing, and basic needs all take money. Rejuvenation therapy may also be very expensive. It seems valuable to have a reasonable amount of money and income.
Keeping updated on further advancements in technology seems like a good idea, as not doing so would prevent one from making use of future technologies. Keeping updated on advancements on curing aging seems especially important, due to the massive number of casualties it inflicts and the current work being done to stop it. Updates on mind-uploading seem important as well. I don’t know of any very efficient method of keeping updated on new advancements, but periodically googling for articles about curing aging or Calico and searching for new scientific articles on topics in this guide seems reasonable. As knb suggested, it seems beneficial to periodically check on Fight Aging, a website advocating anti-aging therapies. I’ll try to do this and update this guide with any new relevant information I find.
There is much uncertainty ahead, but if we’re clever enough, we just might make it through alive.
- Actual Causes of Death in the United States, 2000.
- A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.
- All pages in The Nutrition Source, a part of the Harvard School of Public Health.
- Will calorie restriction work on humans?
- The pages Getting Started, Tests and Biomarkers, and Risks from The CR Society.
- The causal role of breakfast in energy balance and health: a randomized controlled trial in lean adults.
- Low Glycemic Index: Lente Carbohydrates and Physiological Effects of altered food frequency. Published in 1994.
- Leisure Time Physical Activity of Moderate to Vigorous Intensity and Mortality: A Large Pooled Cohort Analysis.
- Exercising for Health and Longevity vs Peak Performance: Different Regimens for Different Goals.
- Water: How much should you drink every day?
- MET-hour equivalents of various physical activities.
- Poisoning. NLM.
- Carcinogen. Dictionary.com
- Types of Poisons. New York Poison Center
- The Most Common Poisons for Children and Adults. National Capital Poison Center.
- Known and Probable Human Carcinogens. American Cancer Society.
- Nutritional Effects of Food Processing. Nutritiondata.com.
- Tips to Prevent Poisonings. CDC.
- Carbon monoxide poisoning. Mayo Clinic.
- Carbon Monoxide Poisoning. CDC.
- CDCWONDER. Query Criteria taken from all genders, all states, all races, all levels of urbanization, all weekdays, dates 1999 – 2010, ages 15 – 24.
- Global health risks: mortality and burden of disease attributable to selected major risks.
- National Biomonitoring Program Factsheet. CDC.
- Lead poisoning. Mayo Clinic.
- Mercury. MedlinePlus.
- Snoring Is Not Associated With All-Cause Mortality, Incident Cardiovascular Disease, or Stroke in the Busselton Health Study.
- Do Stress Trajectories Predict Mortality in Older Men? Longitudinal Findings from the VA Normative Aging Study.
- Meta-analysis of Perceived Stress and its Association with Incident Coronary Heart Disease.
- Iron and cardiac ischemia: a natural, quasi-random experiment comparing eligible with disqualified blood donors.
- Association between psychological distress and mortality: individual participant pooled analysis of 10 prospective cohort studies.
- Self-reported habitual snoring and risk of cardiovascular disease and all-cause mortality.
- Is it true that occasionally following a fasting diet can reduce my risk of heart disease?
- Positive Affect and Health.
- The Association of Anger and Hostility with Future Coronary Heart Disease: A Meta-Analytic Review of Prospective Evidence.
- Giving to Others and the Association Between Stress and Mortality.
- Social Relationships and Mortality Risk: A Meta-analytic Review.
- Daily Sitting Time and All-Cause Mortality: A Meta-Analysis.
- Dental Health Behaviors, Dentition, and Mortality in the Elderly: The Leisure World Cohort Study.
- Low Conscientiousness and Risk of All-Cause, Cardiovascular and Cancer Mortality over 17 Years: Whitehall II Cohort Study.
- Conscientiousness and Health-Related Behaviors: A Meta-Analysis of the Leading Behavioral Contributors to Mortality.
- Sleep duration and all-cause mortality: a critical review of measurement and associations.
- Sleep duration and mortality: a systematic review and meta-analysis.
- How Much Sleep Is Enough? National Lung, Blood, and Heart Institute.
- How many hours of sleep are enough for good health? Mayo Clinic.
- Assess Your Sleep Needs. Harvard Medical School.
- A Life-Span Developmental Perspective on Social Status and Health.
- Suicide. Merriam-Webster.
- Can testosterone therapy promote youth and vitality? Mayo Clinic.
- Breast Self-Exam. Susan G. Komen.
- Screening Guidelines. The Memorial Sloan Kettering Cancer Center.
- Breast Cancer Screening Overview. The National Cancer Institute.
- Testicular self-exam. The American Cancer Society.
- Life Span Extension Research and Public Debate: Societal Considerations.
- SENS Research Foundation: About.
- Science for Life Extension Homepage.
- Google's project to 'cure death,' Calico, announces $1.5 billion research center. The Verge.
- A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.
- When Surgeons Cut the Wrong Body Part. The New York Times.
- Cold facts about cryonics. The Guardian.
- The cryonics organization founded by the "Father of Cryonics," Robert C.W. Ettinger. Cryonics Institute.
- Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now.
- International Journal of Machine Consciousness Introduction.
- The Philosophy of ‘Her.’ The New York Times.
- How to Survive the End of the Universe. Discover Magazine.
- A Space-Time Crystal to Outlive the Universe. Universe Today.
- Conjunction Fallacy. Less Wrong.
- Cognitive Biases Potentially Affecting Judgment of Global Risks.
- Genetic influence on human lifespan and longevity.
- First Drug Shown to Extend Life Span in Mammals. MIT Technology Review.
- Sirolimus (Oral Route). Mayo Clinic.
- Micromorts. Understanding Uncertainty.
- Falls. WHO.
- Smoke alarm outreach materials. US Fire Administration.
- What causes choking? 17 possible conditions. Healthline.
- Choking. Better Health Channel.
- Aspiration pneumonia. HealthCentral.
- First aid - Recovery position. NHS Choices.
- Electric Shock. HowStuffWorks.
- Hypothermia prevention. Mayo Clinic.
- Extreme Heat: A Prevention Guide to Promote Your Personal Health and Safety. CDC.
- Understanding the Lightning Threat: Minimizing Your Risk. National Weather Service.
- The Case Against QuikClot. The Survival Mom.
- Does the Perception that Stress Affects Health Matter? The Association with Health and Mortality.
- Cancer Prevention. WHO.
- Infections That Can Lead to Cancer. American Cancer Society.
- Pollution. American Cancer Society.
- Occupations or Occupational Groups Associated with Carcinogen Exposures. Canadian Centre for Occupational Health and Safety.
- Radon. American Cancer Society.
- Medical radiation. American Cancer Society.
- Ultraviolet (UV) Radiation. American Cancer Society.
- An Unhealthy Glow. American Cancer Society.
- Sun exposure and vitamin D sufficiency.
- Cell Phones and Cancer Risk. National Cancer Institute.
- Nutrition for Everyone. CDC.
- How Can I Tell If My Body is Missing Key Nutrients? Oprah.com.
- Decaffeination, Green Tea and Benefits. Teas etc.
- Red and Processed Meat Consumption and Risk of Incident Coronary Heart Disease, Stroke, and Diabetes Mellitus.
- Lifestyle interventions to increase longevity.
- Chemicals in Meat Cooked at High Temperatures and Cancer Risk. National Cancer Institute.
- Are You Living in a Simulation?
- How reliable are scientific studies?
- Genomics: What You Should Know. Forbes.
- Organic foods: Are they safer? More nutritious? Mayo Clinic.
- Health screening - men - ages 18 to 39. MedlinePlus.
- Why do I need medical checkups. Banner Health.
- Regular Check-Ups are Important. CDC.
- General health checks in adults for reducing morbidity and mortality for disease (Review).
- Let’s (Not) Get Physicals.
- Effectiveness of general practice-based health checks: a systematic review and meta-analysis.
- Supplements: Nutrition in a Pill? Mayo Clinic.
- Nutritional Effects of Food Processing. SelfNutritionData.
- What Is the Healthiest Drink? SFGate.
- Leading Causes of Death. CDC.
- Bias Detection in Meta-analysis. Statistical Help.
- The summary of Sodium Intake in Populations: Assessment of Evidence. Institute of Medicine.
- Compared With Usual Sodium Intake, Low and Excessive -Sodium Diets Are Associated With Increased Mortality: A Meta-analysis.
- The Cochrane Review of Sodium and Health.
- Is there any link between cellphones and cancer? Mayo Clinic.
- A glass of red wine a day keeps the doctor away. Yale-New Haven Hospital.
- Comment on Lifestyle Interventions to Increase Longevity. Less Wrong.
On Wednesday, 15 April 2015, just under a month out from this posting, I will hold the first session of an online reading group for the ebook Rationality: From AI to Zombies, a compilation of the LessWrong sequences by our own Eliezer Yudkowsky. I would like to model this on the very successful Superintelligence reading group led by
The reading group will 'meet' in a semi-monthly post on the LessWrong discussion forum. For each 'meeting' we will read one sequence from the Rationality book, which contains a total of 26 lettered sequences. A few of the sequences are unusually long, and these might be split into two sessions. If so, advance warning will be given.
In each posting I will briefly summarize the salient points of the essays comprising the sequence, link to the original articles and discussion when possible, attempt to find, link to, and quote one or more related materials or opposing viewpoints from outside the text, and present a half-dozen or so question prompts to get the conversation rolling. Discussion will take place in the comments. Others are encouraged to provide their own question prompts or unprompted commentary as well.
We welcome both newcomers and veterans on the topic. If you've never read the sequences, this is a great opportunity to do so. If you are an old timer from the Overcoming Bias days then this is a chance to share your wisdom and perhaps revisit the material with fresh eyes. All levels of time commitment are welcome.
If this sounds like something you want to participate in, then please grab a copy of the book and get started reading the preface, introduction, and the 10 essays / 42 pages which comprise Part A: Predictably Wrong. The first virtual meeting (forum post) covering this material will go live before 6pm Wednesday PDT (1am Thursday UTC), 15 April 2015. Successive meetings will start no later than 6pm PDT on the first and third Wednesdays of a month.
Following this schedule, it is expected that it will take just over a year to complete the entire book. If you prefer flexibility, come by any time! And if you are coming upon this post from the future, please feel free to leave your opinions as well. The discussion period never closes.
Topic for the first week is the preface by Eliezer Yudkowsky, the introduction by Rob Bensinger, and Part A: Predictably Wrong, a sequence covering rationality, the search for truth, and a handful of biases.
Happy New Year, everyone!
In the past few months I've been thinking several thoughts that all seem to point in the same direction:
1) People who live in developed Western countries usually make and spend much more money than people in poorer countries, but aren't that much happier. It feels like we're overpaying for happiness, spending too much money to get a single bit of enjoyment.
2) When you get enjoyment from something, the association between "that thing" and "pleasure" in your mind gets stronger, but at the same time it becomes less sensitive and requires more stimulus. For example, if you like sweet food, you can get into a cycle of eating more and more food that's sweeter and sweeter. But the guy next door, who's eating much less and periodically fasting to keep the association fresh, is actually getting more pleasure from food than you! The same thing happens when you learn to deeply appreciate certain kinds of art; the folks who enjoy "low" art are visibly having more fun.
3) People sometimes get unrealistic dreams and endlessly chase them, like trying to "make it big" in writing or sports, because they randomly got rewarded for it at an early age. I wrote a post about that.
I'm not offering any easy answers here. But it seems like too many people get locked in loops where they spend more and more effort to get less and less happiness. The most obvious examples are drug addiction and video gaming, but "one-itis" in dating, overeating, being a connoisseur of anything, and striving for popular success all follow the same pattern. You're just chasing after some Skinner-box thing that you think you "love", but it doesn't love you back.
Sooo... if you like eating, give yourself a break every once in a while? If you like comfort, maybe get a cold shower sometimes? Might be a good idea to make yourself the kind of person that can get happiness cheaply.
Sorry if this post is not up to LW standards, I typed it really quickly as it came to my mind.
Past and Present
Ten years ago teenager me was hopeful. And stupid.
The world neglected aging as a disease, and Aubrey had barely started spreading his memes, to the point that it was worth his while to let me work remotely to help the Methuselah Foundation. They had not yet received that initial $1,000,000 donation from an anonymous donor. The Methuselah Prize was running at less than $400,000, if I remember well. Still, I was a believer.
Now we live in the age of Larry Page's Calico, 100,000,000 dollars trying to tackle the problem, besides many other amazing initiatives, from the research paid for by Life Extension Foundation and Bill Faloon, to scholars in top universities like Steve Garan and Kenneth Hayworth fixing things from our models of aging to plastination techniques. Yet, I am much more skeptical now.
I am skeptical because I could not find a single individual who has already used a simple technique that could certainly save you many years of healthy life. I could not even find a single individual who looked into it and decided it wasn't worth it, or was too pricey, or something of that sort.
That technique is freezing some of your cells now.
Freezing cells is not a far-future hope; it is something that already exists and has been possible for decades. The reason you would want to freeze them, in case you haven't thought of it, is that they are getting older every day, so the ones you have now are the youngest ones you'll ever be able to use.
Using these cells to create new organs is not something that may only help you if medicine and technology continue progressing according to the law of accelerating returns for another 10 or 30 years. We already know how to make organs out of your cells. Right now. Some organs live longer, some shorter, but it can be done, for instance with bladders, and is being done.
Hope versus Reason
Now, you'd think that if there were an almost non-invasive technique already shown to work in humans that can preserve many years of your life and involves only a few trivial inconveniences (compared to changing your diet or exercising, for instance), the whole longevist/immortalist crowd would be lining up for it and keeping backup tissue samples all over the place.
Well, I've asked them. I've asked some of the adamant researchers, and I've asked the superwealthy; I've asked the cryonicists and supplement gorgers; I've asked those who work on this 8 hours a day, every day, and I've asked those who pay others to do so. I asked it mostly for selfish reasons. I saw the TED talks by Juan Enriquez and Anthony Atala and thought: hey look, clearly beneficial expected life-length increase, yay! Let me call someone who found this out before me - anyone, I'm probably the last one, silly me - and fix this.
I've asked them all, and I have nothing to show for it.
My takeaway lesson is: whatever it is that other people are doing to solve their own impending death, they are far from doing it rationally, and maybe most of the money and psychology involved in this whole business is about buying hope, not about staring into the void and finding out the best ways of dodging it. Maybe people are not in fact going to go all-in if the opportunity comes.
How to fix this?
Let me disclose first that I have no idea how to fix this problem. I don't mean the problem of getting all longevists to freeze their cells; I mean the problem of getting them to take information from the world of science and biomedicine and apply it to themselves. To become users of the technology they boast about. To behave rationally in a CFAR or even homo economicus sense.
I was hoping for a grandiose idea for this last paragraph, but it didn't come. I'll go with a quote from this emotional song sung by us during last year's Secular Solstice celebration.
"Today we are announcing that we will donate 10% of our advertising revenue receipts in 2014 to non-profits chosen by the reddit community. Whether it’s a large ad campaign or a $5 sponsored headline on reddit, we intend for all ad revenue this year to benefit not only reddit as a platform but also to support the goals and causes of the entire community."
Like many Less Wrong readers, I greatly enjoy Slate Star Codex; there's a large overlap in readership. However, the comments there are far worse, not worth reading for me. I think this is in part due to the lack of LW-style up and downvotes. Have there ever been discussion threads about SSC posts here on LW? What do people think of the idea of occasionally having them? Does Scott himself have any views on this, and would he be OK with it?
The latest from Scott:
I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"
In this thread some have also argued for not posting the most hot-button political writings.
Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice"
I've been making rounds on social media with the following message.
Great content on LessWrong isn't as frequent as it used to be, so not as many people read it as frequently. This makes sense. However, I read it at least once every two days for personal interest. So, I'm starting a LessWrong/Rationality Digest, which will be a summary of all posts or comments exceeding 20 upvotes within a week. It will be like a newsletter. Also, it's a good way for those new to LessWrong to learn cool things without having to slog through online cultural baggage. It will never be more than once weekly. If you're curious, here is a sample of what the Digest will be like.
Also, major blog posts or articles from related websites, such as Slate Star Codex and Overcoming Bias, or publications from MIRI, may be included occasionally. If you want on the list, send an email to:
lesswrongdigest *at* gmail *dot* com
Users of LessWrong itself have noticed this 'decline' in frequency of quality posts on LessWrong. It's not necessarily a bad thing, as much of the community has migrated to other places, such as Slate Star Codex, or even into meatspace with various organizations, meetups, and the like. In a sense, the rationalist community outgrew LessWrong as a suitable and ultimate nexus. Anyway, I thought you as well would be interested in a LessWrong Digest. If you or your friends:
- find articles in 'Main' too infrequent, and Discussion too filled with announcements, open threads, and housekeeping posts, to bother checking LessWrong regularly, or,
- are busy with other priorities, and trying to limit how distracted they are by LessWrong and other media,
the LessWrong Digest might work for you, and as a suggestion for your friends. I've fielded suggestions I transform this into a blog, Tumblr, or other format suitable for RSS Feed. Almost everyone is happy with email format right now, but if a few people express an interest in a blog or RSS format, I can make that happen too.
We have a recurring theme in the greater Less Wrong community that life should be more like a high fantasy novel. Maybe that is to be expected when a quarter of the community came here via Harry Potter fanfiction. We also have rationalist group houses named after fantasy locations, descriptions of community members in terms of character archetypes and PCs versus NPCs, and semi-serious development of the new atheist gods; feel free to contribute your favorites in the comments.
A failure mode common to high fantasy novels as well as politics is solving all our problems by defeating the villain. Actually, this is a common narrative structure for our entire storytelling species, and it works well as a narrative structure. The story needs conflict, so we pit a sympathetic protagonist against a compelling antagonist, and we reach a satisfying climax when the two come into direct conflict, good conquers evil, and we live happily ever after.
This isn't an article about whether your opponent really is a villain. Let's make the (large) assumption that you have legitimately identified a villain who is doing evil things. They certainly exist in the world. Defeating this villain is a legitimate goal.
And then what?
Defeating the villain is rarely enough. Building is harder than destroying, and it is very unlikely that something good will spontaneously fill the void when something evil is taken away. It is also insufficient to speak in vague generalities about the ideals to which the post-[whatever] society will adhere. How are you going to avoid the problems caused by whatever you are eliminating, and how are you going to successfully transition from evil to good?
In fantasy novels, this is rarely an issue. The story ends shortly after the climax, either with good ascending or time-skipping to a society made perfect off-camera. Sauron has been vanquished, the rightful king has been restored, cue epilogue(s). And then what? Has the Chosen One shown skill in diplomacy and economics, solving problems not involving swords? What was Aragorn's tax policy? Sauron managed to feed his armies from a wasteland; what kind of agricultural techniques do you have? And indeed, if the book/series needs a sequel, we find that a problem at least as bad as the original fills in the void.
Reality often follows that pattern. Marx explicitly had no plan for what happened after you smashed capitalism. Destroy the oppressors and then ... as it turns out, slightly different oppressors come in and generally kill a fair percentage of the population. It works in the other direction as well; the fall of Soviet communism led not to spontaneous capitalism but rather to kleptocracy and Vladimir Putin. For most of my lifetime, a major pillar of American foreign policy has seemed to be the overthrow of hostile dictators (end of plan). For example, Muammar Gaddafi was killed in 2011, and Libya has been in some state of unrest or civil war ever since. Maybe this is one where it would not be best to contribute our favorites in the comments.
This is not to say that you never get improvements that way. Aragorn can hardly be worse than Sauron. Regression to the mean perhaps suggests that you will get something less bad just by luck, as Putin seems clearly less bad than Stalin, although the transition to Stalin seems clearly worse than almost any other regime change in history. Some would say that causing civil wars in hostile countries is the goal rather than a failure of American foreign policy, which seems a darker sort of instrumental rationality.
Human flourishing is not the default state of affairs, temporarily suppressed by villainy. Spontaneous order is real, but it still needs institutions and social technology to support it.
Defeating the villain is a (possibly) necessary but (almost certainly) insufficient condition for bringing about good.
One thing I really like about this community is that projects tend to be conceived in the positive rather than the negative. Please keep developing your plans not only in terms of "this is a bad thing to be eliminated" but also "this is a better thing to be created" and "this is how I plan to get there."
The Future of Life Institute has published their document Research priorities for robust and beneficial artificial intelligence and written an open letter for people to sign indicating their support.
Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.
In 2004, Michael Vassar gave the following talk about how humans can reduce existential risk, titled Memes and Rational Decisions, to some transhumanists. It is well-written and gives actionable advice, much of which is unfamiliar to the contemporary Less Wrong zeitgeist.
Although transhumanism is not a religion, advocating as it does the critical analysis of any position, it does have certain characteristics which may lead to its identification as such by concerned skeptics. I am sure that everyone here has had to deal with this difficulty, and as it is a cause of perplexity for me I would appreciate it if anyone who has some suggested guidelines for interacting honestly with non-transhumanists would share them at the end of my presentation. It seems likely to me that each of our minds contains either meme complexes or complex functional adaptations which have evolved to identify “religious” thoughts and to neutralize their impact on our behavior. Most brains respond to these memes by simply rejecting them. Others, however, instead neutralize such memes simply by not acting according to the conclusions that should be drawn from them. In almost any human environment prior to the 20th century this religious hypocrisy would have been a vital cognitive trait for every selectively fit human. People who took in religious ideas and took them too seriously would end up sacrificing their lives overly casually at best, and at worst would become celibate priests. Unfortunately, these memes are no more discriminating than the family members and friends who tend to become concerned for our sanity in response to their activity. Since we are generally infested with the same set of memes, we genuinely are liable to insanity, though not of the suspected sort. A man who is shot by surprise is not particularly culpable for his failure to dodge or otherwise protect himself, though perhaps he should have signed up with Alcor. A hunter-gatherer who confronts an aggressive European with a rifle for the first time can also receive sympathy when he is slain by the magic wand that he never expected to actually work. By contrast, a modern Archimedes who ignores a Roman soldier's request that he cease from his geometric scribbling is truly a mad man.
Most people of the world, unaware of molecular nanotechnology and of the potential power of recursively self-improving AI, are in a position roughly analogous to that of the first man. The business and political figures who dismiss eternal life and global destruction alike as plausible scenarios are in the position of the second man. By contrast, it is we transhumanists who are for the most part playing the part of Archimedes. With death, mediated by technologies we understand full well, staring us in the face, we continue our pleasant intellectual games. At best a few percent of us have adopted the demeanor of an earlier Archimedes and transferred our attention from our choice activities to other, still interesting endeavors which happen to be vital to our own survival. The rest are presumably acting as puppets of the memes which react to the prospect of immortality by isolating the associated meme-complex and suppressing its effects on actual activity.
OK, so most of us don't seem to be behaving in an optimal manner. What manner would be optimal? This ISN'T a religion, remember? I can't tell you that. At best I can suggest an outline of the sort of behavior that seems to me least likely to lead to this region of space becoming the center of a sphere of tiny smiley faces expanding at the speed of light.
The first thing that I can suggest is that you take rationality seriously. Recognize how far you have to go. Trust me; the fact that you can't rationally trust me without evidence is itself a demonstration that at least one of us isn't even a reasonable approximation of rational, as demonstrated by Robin Hanson and Tyler Emerson of George Mason University in their paper on rational truth-seekers. The fact is that humans don't appear capable of approaching perfect rationality to anything like the degree to which most of you probably believe you have approached it. Nobel Laureate Daniel Kahneman and Amos Tversky provided a particularly valuable set of insights into this fact with their classic book Judgment Under Uncertainty: Heuristics and Biases and in subsequent works. As a trivial example of the uncertainty that humans typically exhibit, try these tests. (Offer some tests from Judgment Under Uncertainty)
I hope that I have made my point. Now let me point out some of the typical errors of transhumanists who have decided to act decisively to protect the world they care about from existential risks. After deciding to rationally defer most of the fun things that they would like to do for a few decades until the world is relatively safe, it is completely typical to either begin some quixotic quest to transform human behavior on a grand scale over the course of the next couple decades or to go raving blithering Cthulhu-worshiping mad and try to build an artificial intelligence. I will now try to discourage such activities.
One of the first rules of rationality is not to irrationally demand that others be rational. Demanding that someone make a difficult mental transformation has never once led them to make said transformation. People have a strong evolved desire to make other people accept their assertions and opinions. Before you let the thought cross your mind that a person is not trying to be rational, I would suggest that you consider the following. If you and your audience were both trying to be rational, you would be mutually convinced of EVERY position that the members of your audience had on EVERY subject and vice versa. If this does not seem like a plausible outcome then one of you is not trying to be rational, and it is silly to expect a rational outcome from your discussion. By all means, if a particular person is in a position to be helpful try to blunder past the fact of your probably mutual unwillingness to be rational; in a particular instance it is entirely possible that ordinary discussion will lead to the correct conclusion, though it will take hundreds of times longer than it would if the participants were able to abandon the desire to win an argument as a motivation separate from the desire to reach the correct conclusion. On the other hand, when dealing with a group of people, or with an abstract class of people, Don't Even Try to influence them with what you believe to be a well-reasoned argument. This has been scientifically shown not to work, and if you are going to try to simply will your wishes into being you may as well debate the nearest million carbon atoms into forming an assembler and be done with it, or perhaps convince your own brain to become transhumanly intelligent.
Hey, it's your brain; if you can't convince it to do something contrary to its nature that you want it to do, is it likely that you can convince the brains of many other people to do something contrary to their natures that they don't want to do, just by generating a particular set of vocalizations?
My recommendation that you not make an AI is slightly more urgent. Attempting to transform the behavior of a substantial group of people via a reasoned argument is a silly and superstitious act, but it is still basically a harmless one. On the other hand, attempts by ordinary physicist Nobel Laureate quality geniuses to program AI systems are not only astronomically unlikely to succeed, but in the shockingly unlikely event that they do succeed they are almost equally likely to leave nothing of value in this part of the universe. If you think you can do this safely despite my warning, here are a few things to consider:
- A large fraction of the greatest computer scientists and other information scientists in history have done work on AI, but so far none of them have begun to converge on even the outlines of a theory or succeeded in matching the behavioral complexity of an insect, despite the fantastic military applications of even dragonfly-equivalent autonomous weapons.
- Top philosophers, pivotal minds in the history of human thought, have consistently failed to converge on ethical policy.
- Isaac Asimov, history's most prolific writer and Mensa's honorary president, attempted to formulate a more modest set of ethical precepts for robots and instead produced the blatantly suicidal three laws (if you don't see why the three laws wouldn't work, I refer you to the Singularity Institute for Artificial Intelligence's campaign against the three laws).
- Science fiction authors as a class, a relatively bright crowd by human standards, have subsequently thrown more time into considering the question of machine ethics than they have any other philosophical issue other than time travel, yet have failed to develop anything more convincing than the three laws.
- AI ethics cannot be arrived at either through dialectic (critical speculation) or through the scientific method. The first method fails to distinguish between an idea that will actually work and the first idea you and your friends couldn't rapidly see big holes in, influenced as you were by your specific desire for a cool-sounding idea to be correct and your more general desire to actually realize your AI concept, saving the world and freeing you to devote your life to whatever you wish. The second method is crippled by the impossibility of testing a transhumanly intelligent AI (because it could by definition trick you into thinking it had passed the test) and by the irrelevance of testing an ethical system on an AI without transhuman intelligence. Ask yourself: how constrained would your actions be if you were forced to obey the code of Hammurabi but had no other ethical impulses at all? Now keep in mind that Hammurabi was actually FAR more like you than an AI will be. He shared almost all of your genes, your very high by human standards intellect, and the empathy that comes from an almost identical brain architecture, but his attempt at a set of rules for humans was a first try, just as your attempt at a set of rules for AIs would be.
- Actually, if you are thinking in terms of a set of rules AT ALL this implies that you are failing to appreciate both a programmer's control over an AI's cognition and an AI's alien nature. If you are thinking in terms of something more sophisticated, and bear in mind that apparently only one person has ever thought in terms of something more sophisticated so far, bear in mind that the first such "more sophisticated" theory was discovered on careful analysis to itself be inadequate, as was the second.
If you can't make people change, and you can't make an AI, what can you do to avoid being killed? As I said, I don't know. It's a good bet that money would help, as well as an unequivocal decision to make singularity strategy the focus of your life rather than a hobby. A good knowledge of cognitive psychology and of how people fail to be rational may enable you to better figure out what to do with your money, and may enable you to better co-operate your efforts with other serious and rational transhumanists without making serious mistakes. If you are willing to try, please let's keep in touch. Seriously, even if you discount your future at a very high rate, I think that you will find that living rationally and trying to save the world is much more fun and satisfying than the majority of stuff that even very smart people spend their time doing. It really really beats pretending to do the same, yet even such pretending is or once was a very popular activity among top-notch transhumanists.
Aiming at true rationality will be very difficult in the short run, a period of time which humans who expect to live for less than a century are prone to consider the long run. It entails absolutely no social support from non-transhumanists, and precious little from transhumanists, most of whom will probably resent the implicit claim that they should be more rational. If you haven't already, it will also require you to put your every-day life in order and acquire the ability to interact positively with people of a less speculative character. You will get no VC or angel funding, terribly limited grant money, and in general no acknowledgement of any expertise you acquire. On the other hand, if you already have some worthwhile social relationships, you will be shocked by just how much these relationships improve when you dedicate yourself to shaping them rationally. The potential of mutual kindness, when even one partner really decides not to do anything to undermine it, shines absolutely beyond the dreams of self-help authors.
If you have not personally acquired a well-paying job, in the short term I recommend taking the actuarial tests. Actuarial positions, while somewhat boring, do provide practice in rationally analyzing data of a complexity that denies intuitive analysis or analytical automatonism. They also pay well, require no credentials other than tests in what should be mandatory material for anyone aiming at rationality, and have top job security in jobs that are easy to find and only require 40 hours per week of work. If you are competent with money, a few years in such a job should give you enough wealth to retire to some area with a low cost of living and analyze important questions. A few years more should provide the capital to fund your own research. If you are smart enough to build an AI's morality it should be a breeze to burn through the 8 exams in a year, earn a six-figure income, and get returns on investment far better than Buffett does. On the other hand, doing that doesn't begin to suggest that you are smart enough to build an AI's morality. I'm not convinced that anything does.
Fortunately ordinary geniuses with practiced rationality can contribute a great deal to the task of saving the world. Even more fortunately, so long as they are rational they can co-operate very effectively even if they don't share an ethical system. Eternity is an intrinsically shared prize. On this task more than any other the actual behavioral difference between an egoist, altruist, or even a Kantian should fade to nothing in terms of its impact on actual behavior. The hard part is actually being rational, which requires that you postpone the fun but currently irrelevant arguments until the pressing problem is solved, even perhaps with the full knowledge that you are actually probably giving them up entirely, as they may be about as interesting as watching moss grow post-singularity. Delaying gratification in this manner is not a unique difficulty faced by transhumanists. Anyone pursuing a long-term goal, such as a medical student or PhD candidate, does the same. The special difficulty that you will have to overcome is the difficulty of staying on track in the absence of social support or of appreciation of the problem, and the difficulty of overcoming your mind's anti-religion defenses, which will be screaming at you to cut out the fantasy and go live a normal life, with the normal empty set of beliefs about the future and its potential.
Another important difficulty to overcome is the desire for glory. It isn't important that the ideas that save the world be your ideas. What matters is that they be the right ideas. In ordinary life, the satisfaction that a person gains from winning an argument may usually be adequate compensation for walking away without having learned what they should have learned from the other side, but this is not the case when you elegantly prove to your opponent and yourself that the pie you are eating is not poisoned. Another glory-related concern is that of allowing science fiction to shape your expectations of the actual future. Yes, it may be fun and exciting to speculate on government conspiracies to suppress nanotech, but even if you are right, conspiracy theories don't have enough predictive power to test or to guide your actions. If you are wrong, you may well end up clinically paranoid. Conspiracy thrillers are pleasant silly fun. Go ahead and read them if you lack the ability to take the future seriously, but don't end up in an imaginary one; that is NOT fun.
Likewise, don't trust science fiction when it implies that you have decades or centuries left before the singularity. You might, but you don't know that; it all depends on who actually goes out and makes it happen. Above all, don't trust its depictions of the sequence in which technologies will develop or of the actual consequences of technologies that enhance intelligence. These are just some author's guesses. Worse still, they aren't even the author's best guesses; they are the result of a lop-sided compromise between the author's best guess and the set of technologies that best fit the story the author wants to tell. So you want to see Mars colonized before the singularity. That's common in science fiction, right? So it must be reasonably likely. Sorry, but that is not how a rational person estimates what is likely. Heuristics and Biases will introduce you to the representativeness heuristic, roughly speaking the degree to which a scenario fits a preconceived mental archetype. People who haven't actively optimized their rationality typically use representativeness as their estimate of probability because we are designed to do so automatically, so we find it very easy to do. In the real world this doesn't work well. Pay attention to logical relationships instead.
Since I am attempting to approximate a rational person, I don't expect e-mails from any of you to show up in my in-box in a month or two requesting my cooperation on some sensible and realistic project for minimizing existential risk. I don't expect that, but I place a low certainty value on most of my expectations, especially regarding the actions of outlier humans. I may be wrong. Please prove me wrong. The opportunity to find that I am mistaken in my estimates of the probability of finding serious transhumanists is what motivated me to come all the way across the continent. I'm betting we all die in a flash due to the abuse of these technologies. Please help me to be wrong.
We are familiar with the thesis that Value is Fragile. This is why we are researching how to impart values to an AGI.
Embedded Minds are Fragile
Besides values, it may be worth remembering that human minds too are very fragile.
A little magnetic tampering with your amygdalas, and suddenly you are a wannabe serial killer. A small dose of LSD can get you to believe you can fly, or that the world will end in 4 hours. Remove part of your ventromedial prefrontal cortex, and suddenly you are so utilitarian that even Joshua Greene would call you a psycho.
It requires very little material change to substantially modify a human being's behavior. Same holds for other animals with embedded brains, crafted by evolution and made of squishy matter modulated by glands and molecular gates.
A Problem for Paul-Boxing and CEV?
One assumption underlying Paul-Boxing and CEV is that:
It is easier to specify and simulate a human-like mind than to impart values to an AGI by means of teaching it values directly via code or human language.
Usually we assume that because, as we know, value is fragile. But so are embedded minds. Very little tampering is required to profoundly transform people's moral intuitions. A large fraction of the inmate population in the US has frontal lobe or amygdala malfunctions.
Finding the simplest description of a human brain that, when simulated, continues to act as that human brain would act in the real world may turn out to be as fragile as, or even more fragile than, concept learning for AGIs.
As a follow-on to the recent thread on purchasing research effectively, I thought it'd make sense to post the request for proposals for projects to be funded by Musk's $10M donation. LessWrong's been a place for discussing long-term AI safety and research for quite some time, so I'd be happy to see some applications come out of LW members.
Here's the full Request for Proposals.
If you have questions, feel free to ask them in the comments or to contact me!
Here's the email FLI has been sending around:
Initial proposals (300–1000 words) due March 1, 2015
The Future of Life Institute, based in Cambridge, MA and headed by Max Tegmark (MIT), is seeking proposals for research projects aimed to maximize the future societal benefit of artificial intelligence while avoiding potential hazards. Projects may fall in the fields of computer science, AI, machine learning, public policy, law, ethics, economics, or education and outreach. This 2015 grants competition will award funds totaling $6M USD.
This funding call is limited to research that explicitly focuses not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial; for example, research could focus on making machine learning systems more interpretable, on making high-confidence assertions about AI systems' behavior, or on ensuring that autonomous systems fail gracefully. Funding priority will be given to research aimed at keeping AI robust and beneficial even if it comes to greatly supersede current capabilities, either by explicitly focusing on issues related to advanced future AI or by focusing on near-term problems, the solutions of which are likely to be important first steps toward long-term solutions.
Please do forward this email to any colleagues and mailing lists that you think would be appropriate.
Before applying, please read the complete RFP and list of example topics, which can be found online along with the application form:
As explained there, most of the funding is for $100K–$500K project grants, which will each support a small group of collaborators on a focused research project of up to three years' duration. For a list of suggested topics, see the complete RFP and the Research Priorities document. Initial proposals, which are intended to require merely a modest amount of preparation time, must be received on our website on or before March 1, 2015.
Initial proposals should include a brief project summary, a draft budget, the principal investigator’s CV, and co-investigators’ brief biographies. After initial proposals are reviewed, some projects will advance to the next round, completing a Full Proposal by May 17, 2015. Public award recommendations will be made on or about July 1, 2015, and successful proposals will begin receiving funding in September 2015.
References and further resources
 Complete request for proposals and application form: http://futureoflife.org/grants/large/initial
 Research Priorities document: http://futureoflife.org/static/data/documents/research_priorities.pdf
 An open letter from AI scientists on research priorities for robust and beneficial AI: http://futureoflife.org/misc/open_letter
 Initial funding announcement: http://futureoflife.org/misc/AI
Questions about Project Grants: firstname.lastname@example.org
Media inquiries: email@example.com
I'm excited to announce that the Future of Life Institute has just launched an existential risk news site!
The site will have regular articles on topics related to existential risk, written by journalists, and a community blog written by existential risk researchers from around the world as well as FLI volunteers. Enjoy!
The very popular blog Wait But Why has published the first part of a two-part explanation/summary of AI risks and superintelligence, and it looks like the second part will be focused on Friendly AI. I found it very clear, reasonably thorough and appropriately urgent without signaling paranoia or fringe-ness. It may be a good article to share with interested friends.
Update: Part 2 is now here.
Cross-posted on my blog and the effective altruism forum with some minor tweaks; apologies if some of the formatting hasn't copied across. The article was written with an EA audience in mind but it is essentially one about rationality and consequentialism.
Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on both sides of moral issues like the permissibility of abortion are significantly undermined or otherwise affected by EA considerations, especially moral uncertainty.
A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble. He later partially redacted it, and Anna has an excellent article on the subject, but at the very least decompartmentalizing is a very standard part of effective altruism.
Similarly, I think people selectively apply Effective Altruist (EA) principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person’s identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.
Consider your personal views. I’ve certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I’ve learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180 – and I think this is true of many people:
- Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been passionately anti-aid?
- Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?
- Many think EA ideas argue against domestic causes – but did anyone come to this conclusion who had previously been a passionate nationalist?
Yet this is quite worrying. Given the power and scope of many EA ideas, it seems that they should lead to people changing their minds on issues where they had previously been very certain, and indeed emotionally invested.
Obviously we don’t need to apply EA principles to everything – we can probably continue to brush our teeth without much reflection. But we probably should apply them to issues which are seen as being very important: given the importance of the issues, any implications of EA ideas would probably be important implications.
In his PhD thesis, Will MacAskill argues that we should treat normative uncertainty in much the same way as ordinary positive uncertainty; we should assign credences (probabilities) to each theory, and then try to maximise the expected morality of our actions. He calls this idea ‘maximise expected choice-worthiness’, and if you’re into philosophy, I recommend reading the paper. As such, when deciding how to act we should give greater weight to the theories we consider more likely to be true, and also give more weight to theories that consider the issue to be of greater importance.
This is important because it means that a novel view does not have to be totally persuasive to demand our observance. Consider, for example, vegetarianism. Maybe you think there’s only a 10% chance that animal welfare is morally significant – you’re pretty sure they’re tasty for a reason. Yet if the consequences of eating meat are very bad in those 10% of cases (murder or torture, if the animal rights activists are correct), and the advantages are not very great in the other 90% (tasty, some nutritional advantages), we should not eat meat regardless. Taking into account the size of the issue at stake as well as probability of its being correct means paying more respect to ‘minority’ theories.
And this is more of an issue for EAs than for most people. Effective Altruism involves a group of novel moral premises, like cosmopolitanism, the moral imperative for cost-effectiveness, and the importance of the far future. Each of these implies that our decisions are in some way very important, so even if we assign them only a small credence, their plausibility implies radical revisions to our actions.
One issue that Will touches on in his thesis is the issue of whether fetuses morally count. In the same way that we have moral uncertainty as to whether animals, or people in the far future, count, so too we have moral uncertainty as to whether unborn children are morally significant. Yes, many people are confident they know the correct answer – but there are many such people on each side of the issue. Given the degree of disagreement on the issue among philosophers, politicians and the general public, it seems like the perfect example of an issue where moral uncertainty should be taken into account – indeed Will uses it as a canonical example.
Consider the case of a pregnant woman, Sarah, wondering whether it is morally permissible to abort her child1. The alternative course of action she is considering is putting the child up for adoption. In accordance with the level of social and philosophical debate on the issue, she is uncertain as to whether aborting the fetus is morally permissible. If it’s morally permissible, it’s merely permissible – it’s not obligatory. She follows the example from Normative Uncertainty and constructs the following table.
In the best case scenario, abortion has nothing to recommend it, as adoption is also permissible. In the worst case, abortion is actually impermissible, whereas adoption is permissible. As such, adoption dominates abortion.
However, Sarah might not consider this representation as adequate. In particular, she thinks that now is not the best time to have a child, and would prefer to avoid it.2 She has made plans which are inconsistent with being pregnant, and prefers not to give birth at the current time. So she amends the table to take into account these preferences.
Now adoption no longer strictly dominates abortion, because she prefers abortion to adoption in the scenario where it is morally permissible. As such, she considers her credence: she considers the pro-choice arguments slightly more persuasive than the pro-life ones: she assigns a 70% credence to abortion being morally permissible, but only a 30% chance to its being morally impermissible.
Looking at the table with these numbers in mind, intuitively it seems that again it’s not worth the risk of abortion: a 70% chance of saving oneself inconvenience and temporary discomfort is not sufficient to justify a 30% chance of committing murder. But Sarah’s unsatisfied with this unscientific comparison: it doesn’t seem to have much of a theoretical basis, and she distrusts appeals to intuitions in cases like this. What is more, Sarah is something of a utilitarian; she doesn’t really believe in something being impermissible.
Fortunately, there’s a standard tool for making inter-personal welfare comparisons: QALYs. We can convert the previous table into QALYs, with the moral uncertainty now being expressed as uncertainty as to whether saving fetuses generates QALYs. If it does, then it generates a lot: supposing she’s at the end of her first trimester, if she doesn’t abort the baby it has a 98% chance of surviving to birth, at which point its life expectancy is 78.7 years in the US, for 0.98 × 78.7 = 77.126 QALYs. This calculation assigns no QALYs to the fetus’s 6 months of existence between now and birth. If fetuses are not worthy of ethical consideration, then it accounts for 0 QALYs.
We also need to assign QALYs to Sarah. For an upper bound, being pregnant is probably not much worse than having both your legs amputated without medication, which is rated at 0.494 QALYs per year, so let’s conservatively say 0.494. She has an expected 6 months of pregnancy remaining, so we divide by 2 to get 0.247 QALYs. Women’s Health Magazine gives the odds of maternal death during childbirth as 0.03% for 2013; we’ll round up to 0.05% to take into account the risk of non-death injury. Women at 25 have a remaining life expectancy of around 58 years, so that’s 0.05% × 58 = 0.029 QALYs. In total that gives us an estimate of 0.247 + 0.029 = 0.276 QALYs. If the baby doesn’t survive to birth, however, some of these costs will not be incurred, so the truth is probably slightly lower than this. All in all, 0.276 QALYs seems like a reasonably conservative figure.
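The arithmetic above can be checked directly; every input below is an assumed figure taken from the text, not an independent estimate:

```python
# Reproducing the post's QALY cost estimate for carrying the child to term.
# All inputs are the post's own assumptions.

pregnancy_disability_weight = 0.494   # per year, amputation analogy (upper bound)
months_of_pregnancy_left = 6
pregnancy_cost = pregnancy_disability_weight * months_of_pregnancy_left / 12

maternal_death_risk = 0.0005          # 0.05%, rounded up for non-death injury
remaining_life_expectancy = 58        # years, for a woman aged 25
death_cost = maternal_death_risk * remaining_life_expectancy

total_cost = pregnancy_cost + death_cost
print(round(pregnancy_cost, 3), round(death_cost, 3), round(total_cost, 3))
# 0.247 0.029 0.276
```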
Obviously you could refine these numbers a lot (for example, years of old age are likely to be at lower quality of life, there are some medical risks to the mother from aborting a fetus, etc.) but they’re plausibly in the right ballpark. They would also change if we used inherent temporal discounting, but probably we shouldn’t.
We can then take into account her moral uncertainty directly, and calculate the expected QALYs of each action:
- If she aborts the fetus, our expected QALYs are 70%×0 + 30%×(-77.126) = -23.138
- If she carries the baby to term and puts it up for adoption, our expected QALYs are 70%×(-0.276) + 30%×(-0.276) = -0.276
Which again suggests that the moral thing to do is to not abort the baby. Indeed, the life expectancy is so long at birth that it quite easily dominates the calculation: Sarah would have to be extremely confident in rejecting the value of the fetus to justify aborting it. So, mindful of overconfidence bias, she decides to carry the child to term.
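The expected-QALY comparison can be sketched the same way; the 98% survival figure, 78.7-year life expectancy, 30% credence, and 0.276 QALY cost are the post's assumptions:

```python
# Expected QALYs of each action under moral uncertainty.
p_fetus_counts = 0.30          # credence that fetuses are morally significant
fetus_qalys = 0.98 * 78.7      # counterfactual QALYs if they count (~77.126)
carrying_cost = 0.276          # QALY cost to Sarah of carrying to term

# Aborting forgoes the fetus's QALYs only in worlds where fetuses count.
ev_abort = p_fetus_counts * -fetus_qalys
# Adoption costs Sarah the same whether or not fetuses count.
ev_adopt = -carrying_cost

print(round(ev_abort, 3), ev_adopt)  # -23.138 -0.276
```

The two-order-of-magnitude gap between the outcomes is why the life-expectancy term dominates the calculation.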
Indeed, we can show just how confident in the lack of moral significance of the fetuses one would have to be to justify aborting one. Here is a sensitivity table, showing credence in moral significance of fetuses on the y axis, and the direct QALY cost of pregnancy on the x axis for a wide range of possible values. The direct QALY cost of pregnancy is obviously bounded above by its limited duration. As is immediately apparent, one has to be very confident in fetuses lacking moral significance, and pregnancy has to be very bad, before aborting a fetus becomes even slightly QALY-positive. For moderate values, it is extremely QALY-negative.
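The sensitivity table described above (which has not survived in this copy) can be regenerated as a small grid; the ranges of credences and pregnancy costs below are illustrative choices, not the original table's exact values:

```python
# Net expected QALYs of abortion relative to adoption:
#   net = (pregnancy cost avoided) - credence * (fetus's counterfactual QALYs)
# Positive: abortion comes out QALY-positive; negative: QALY-negative.
fetus_qalys = 0.98 * 78.7   # the post's figure, ~77.126

credences = [0.001, 0.01, 0.05, 0.10, 0.30]      # y axis
pregnancy_costs = [0.1, 0.276, 0.5, 1.0]         # x axis

for c in credences:
    row = [round(cost - c * fetus_qalys, 2) for cost in pregnancy_costs]
    print(f"credence {c:>5}: {row}")
```

Only the top-left corner of such a grid (credence well below 1%, or implausibly large pregnancy costs) comes out positive; everywhere else the result is strongly negative, matching the claim in the text.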
Other EA concepts and their applications to this issue
Of course, moral uncertainty is not the only EA principle that could have bearing on the issue, and given that the theme of this blogging carnival, and this post, is things we’re overlooking, it would be remiss not to give at least a broad overview of some of the others. Here, I don’t intend to judge how persuasive any given argument is – as we discussed above, this is a debate that has gone on without settlement for thousands of years – but merely to show the ways that common EA arguments affect the plausibility of the different arguments. This is a section about the directionality of EA concerns, not their overall magnitudes.
Not really people
One of the most important arguments for the permissibility of abortion is that fetuses are in some important sense ‘not really people’. In many ways this argument resembles the anti-animal-rights argument that animals are also ‘not really people’. We already covered above the way that considerations of moral uncertainty undermine both these arguments, but it’s also noteworthy that in general the two views seem mutually supporting (or mutually undermining, if both are false). Animal-rights advocates often appeal to the idea of an ‘expanding circle’ of moral concern. I’m skeptical of such an argument, but it seems clear that the larger your circle, the more likely fetuses are to end up on the inside. The fact that, in the US at least, animal activists tend to be pro-abortion seems to be more of a historical accident than anything else. We could imagine alternative-universe political coalitions, where a “Defend the Weak; they’re morally valuable too” party faced off against an “Exploit the Weak; they just don’t count” party. In general, to the extent that EAs care about animal suffering (even insect suffering), EAs should tend to be concerned about the welfare of the unborn.
Not people yet
A slightly different common argument is that while fetuses will eventually be people, they’re not people yet. Since they’re not people right now, we don’t have to pay any attention to their rights or welfare right now. Indeed, many people make short-sighted decisions that implicitly assign very little value to the futures of people currently alive, or even to their own futures – through self-destructive drug habits, or simply failing to save for retirement. If we don’t assign much value to our own futures, it seems very sensible to disregard the futures of those not even born. And even if people who disregarded their own futures were simply negligent, we might still be concerned about things like the non-identity problem.
Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations. We put considerable resources into investigating how to help them, whether through addressing climate change or existential risks. And yet these people have far less of a claim to current personhood than fetuses, who at least have current physical form, even if it is diminutive. So again to the extent that EAs care about future welfare, EAs should tend to be concerned about the welfare of the unborn.
Another important EA idea is that of replaceability. Typically this arises in contexts of career choice, but there is a different application here. The QALYs associated with aborted children might not be so bad if the mother will go on to have another child instead. If she does, the net QALY loss is much lower than the gross QALY loss. Of course, the benefits of aborting the fetus are equivalently much smaller – if she has a child later on instead, she will have to bear the costs of pregnancy eventually anyway. This resembles concerns that maybe saving children in Africa doesn’t make much difference, because their parents adjust their subsequent fertility.
The plausibility of this argument comes from the observation that, at least in the US, most families have a certain ideal number of children in mind, and basically achieve this goal. As such, missing an opportunity to have an early child simply results in having another later on.
If this were fully true, utilitarians might decide that abortion actually has no QALY impact at all – all it does is change the timing of events. On the other hand, fertility declines with age, so many couples planning to have a replacement child later may be unable to do so. Also, some people do not have ideal family size plans.
Additionally, this does not really seem to hold when the alternative is adoption; presumably a woman putting a child up for adoption does not consider it as part of her family, so her future childbearing would be unaffected. This argument might hold if raising the child yourself was the only alternative, but given that adoption services are available, it does not seem to go through.
Sometimes people argue for the permissibility of abortion through autonomy arguments. “It is my body”, such an argument would go, “therefore I may do whatever I want with it.” To a certain extent this argument is addressed by pointing out that one’s bodily rights presumably do not extend to killing others, so if the anti-abortion side is correct, or even has a non-trivial probability of being correct, autonomy would be insufficient. It seems that if the autonomy argument is to work, it must be because a different argument has established the non-personhood of fetuses – in which case the autonomy argument is redundant. Yet even putting this aside, this argument is less appealing to EAs than to non-EAs, because EAs often hold a distinctly non-libertarian account of personal ethics. We believe it is actually good to help people (and avoid hurting them), and perhaps that it is bad to avoid doing so. And many EAs are utilitarians, for whom helping/not-hurting is not merely laudable but actually compulsory. EAs are generally not very impressed with Ayn Rand-style autonomy arguments for rejecting charity, so again EAs should tend to be unsympathetic to autonomy arguments for the permissibility of abortion.
Indeed, some EAs even think we should be legally obliged to act in good ways, whether through laws against factory farming or tax-funded foreign aid.
An argument often used on the opposite side – that is, an argument used to oppose abortion, is that abortion is murder, and murder is simply always wrong. Whether because God commanded it or Kant derived it, we should place the utmost importance of never murdering. I’m not sure that any EA principle directly pulls against this, but nonetheless most EAs are consequentialists, who believe that all values can be compared. If aborting one child would save a million others, most EAs would probably endorse the abortion. So I think this is one case where a common EA view pulls in favor of the permissibility of abortion.
I didn’t ask for this
Another argument often used for the permissibility of abortion is that the situation is in some sense unfair. If you did not intend to become pregnant – perhaps even took precautions to avoid becoming so – but nonetheless end up pregnant, you’re in some way not responsible for the pregnancy. And since you’re not responsible for it, you have no obligations concerning it – so you may permissibly abort the fetus.
However, once again this runs counter to a major strand of EA thought. Most of us did not ask to be born in rich countries, or to be intelligent, or hardworking. Perhaps it was simply luck. Yet being in such a position nonetheless means we have certain opportunities and obligations. Specifically, we have the opportunity to use our wealth to significantly aid those less fortunate than ourselves in the developing world, and many EAs would say we have the obligation too. So EAs seem to reject the general idea that not intending a situation relieves one of the responsibilities of that situation.
Infanticide is okay too
A frequent argument against the permissibility of aborting fetuses is by analogy to infanticide. In general it is hard to produce a coherent criterion that permits the killing of babies before birth but forbids it after birth. For most people, this is a reasonably compelling objection: murdering innocent babies is clearly evil! Yet some EAs actually endorse infanticide. If you were one of those people, this particular argument would have little sway over you.
A common implicit premise in many moral discussions is that the same moral principles apply to everyone. When Sarah did her QALY calculation, she counted the baby’s QALYs as equally important to her own in the scenario where they counted at all. Similarly, both sides of the debate assume that whatever the answer is, it will apply fairly broadly. Perhaps permissibility varies by age of the fetus – maybe ending when viability hits – but the same answer will apply to rich and poor, Christian and Jew, etc.
This is something some EAs might reject. Yes, saving the baby produces many more QALYs than Sarah loses through the pregnancy, and that would be the end of the story if Sarah were simply an ordinary person. But Sarah is an EA, and so has a much higher opportunity cost for her time. Becoming pregnant will undermine her career as an investment banker, the argument would go, which in turn prevents her from donating to AMF and saving a great many lives. Because of this, Sarah is in a special position – it is permissible for her, but it would not be permissible for someone who wasn’t saving many lives a year.
I think this is a pretty repugnant attitude in general, and a particularly objectionable instance of it, but I include it here for completeness.
May we discuss this?
Now that we’ve considered these arguments, it appears that applying general EA principles to the issue tends to make abortion look less morally permissible, though there were one or two exceptions. But there is also a second-order issue that we should perhaps address – is it permissible to discuss this issue at all?
Nothing to do with you
A frequently seen argument on this issue is to claim that the speaker has no right to opine on the issue. If it doesn’t personally affect you, you cannot discuss it – especially if you’re privileged. As many (a majority?) of EAs are male, and of the women many are not pregnant, this would curtail dramatically the ability of EAs to discuss abortion. This is not so much an argument on one side or other of the issue as an argument for silence.
Leaving aside the inherent virtues and vices of this argument, it is not very suitable for EAs, because EAs have many, many opinions on topics that don’t directly affect them:
- EAs have opinions on disease in Africa, yet most have never been to Africa, and never will
- EAs have opinions on (non-human) animal suffering, yet most are not non-human animals
- EAs have opinions on the far future, yet live in the present
Indeed, EAs seem more qualified than most to comment on abortion – we all were once fetuses, and many of us will become pregnant. If taken seriously, this argument would call foul on virtually every EA activity! And this is no idle fantasy – there are certainly some people who think that Westerners cannot usefully contribute to solving African poverty.
We can safely say this is a somewhat controversial issue. Perhaps it is too controversial – maybe it is bad for the movement to discuss. One might accept the arguments above – that EA principles generally undermine the traditional reasons for thinking abortion is morally permissible – yet think we should not talk about it. The controversy might divide the community and undermine trust. Perhaps it might deter newcomers. I’m somewhat sympathetic to this argument – I take the virtue of silence seriously, though eventually my boyfriend persuaded me it was worth publishing.
Note that the controversial nature is evidence against abortion’s moral permissibility, due to moral uncertainty.
However, the EA movement is no stranger to controversy.
- There is a semi-official EA position on immigration, which is about as controversial as abortion in the US at the moment, and the EA position is such an extreme position that essentially no mainstream politicians hold it.
- There is a semi-official EA position on vegetarianism, which is pretty controversial too, as it involves implying that the majority of Americans are complicit in murder every day.
Not worthy of discussion
Finally, another objection to discussing this is that it simply isn’t an EA issue. There are many disagreements in the world, yet there is no need for an EA view on each. Conflict between the Lilliputians and Blefuscudians notwithstanding, there is no need for an EA perspective on which end of the egg to break first. And we should be especially careful of heated, emotional topics with less avenue to pull the rope sideways. As such, even if the object-level arguments given above were correct, we should simply decline to discuss the issue.
However, it seems that if abortion is a moral issue, it is a very large one. In the same way that the sheer number of QALYs lost makes abortion worse than adoption even if our credence in fetuses having moral significance is very low, the large number of abortions occurring each year makes the issue as a whole highly significant. In 2011, over 1 million babies were aborted in the US. I’ve seen a wide range of global estimates, from around 10 million to over 40 million. By contrast, the WHO estimates there are fewer than 1 million malaria deaths worldwide each year. Abortion deaths also cause a higher loss of QALYs due to the young age at which they occur. On the other hand, we should discount them for the uncertainty that they are morally significant. And perhaps there is an even larger closely related moral issue. The size of the issue is not the only factor in estimating the cost-effectiveness of interventions, but it is the most easily estimable. On the other hand, I have little idea how many dollars of donations it takes to save a fetus – it seems like an excellent example of low-hanging research fruit.
People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important. In this post we considered the implications of common EA beliefs for the permissibility of abortion. Taking into account moral uncertainty makes aborting a fetus seem far less permissible, as the high counterfactual life expectancy of the baby tends to dominate other factors. Many other EA views are also significant to the issue, making various standard arguments on each side less plausible.
- There doesn’t seem to be any neutral language one can use here, so I’m just going to switch back and forth between ‘fetus’ and ‘child’ or ‘baby’ in a vain attempt at terminological neutrality.
- I chose this reason because it is the most frequently cited main motivation for aborting a fetus according to the Guttmacher Institute.
As many people have noted, Less Wrong currently isn't receiving as much content as we would like. One way to think about expanding the content is to think about which areas of study deserve more articles written on them.
For example, I expect that sociology has a lot to say about many of our cultural assumptions. It is quite possible that 95% of it is either obvious or junk, but almost all fields have that 5% within them that could be valuable. Another area of study that might be interesting to consider is anthropology. Again this is a field that allows us to step outside of our cultural assumptions.
I don't know anything about media studies, but I imagine it has some worthwhile things to say about how the information that we hear is distorted.
What other fields would you like to see some discussion of on Less Wrong?
The book version of the Sequences is supposed to be published in the next month or two, if I understand correctly. I would really enjoy an online reading group to go through the book together.
Reasons for a reading group:
- It would give some of us the motivation to actually go through the Sequences finally.
- I have frequently had thoughts or questions on some articles in the Sequences, but I refrained from commenting because I assumed it would be covered in a later article or because I was too intimidated to ask a stupid question. A reading group would hopefully assume that many of the readers would be new to the Sequences, so asking a question or making a comment without knowing the later articles would not appear stupid.
- It may even bring back a bit of the blog-style excitement of the "old" LW ("I wonder what exciting new thoughts are going to be posted today?") that many have complained has been missing since the major contributors stopped posting.
What can I purchase with $100 that will be the best thing I can buy to make my life better?
I've decided to budget some regular money to improving my life each month. I'd like to start with low hanging fruit for obvious reasons - but when I sat down to think of improvements, I found myself thinking of the same old things I'd already been planning to do anyway... and I'd like out of that rut.
- be concrete. I know - "spend money on experiences" is a good idea - but what experiences are the best option to purchase *first*
- "better" is deliberately left vague - choose how you would define it, so that I'm not constrained just by ways of "being better" that I'd have thought of myself.
- please assume that I have all my basic needs met (eg food, clothing, shelter) and that I have budgeted separately for things like investing for my financial future and for charity.
- apart from the above, assume nothing - especially don't try to tailor solutions to anything you might know and/or guess about me specifically, because I think this could be a useful resource for others who are just starting out.
- don't constrain yourself to exactly $100 - I could buy 2-3 things for that, or I could save up over a couple of months and buy something more expensive... I picked $100 because it's a round number and easy to imagine.
- it's ok to add "dumb" things - they can help spur great ideas, or just get rid of an elephant in the room.
- try thinking of your top-ten before reading any comments, in order not to bias your initial thinking. Then come back and add ten more once you've been inspired by what everyone else came up with.
This is a question I recently posed to my local Less Wrong group and we came up with a few good ideas, so I thought I'd share the discussion with the wider community and see what we can come up with. I'll add the list we came up with later on in the comments...
It'd be great to have a repository of low-hanging fruit for things that can be solved with (relatively affordable) amounts of money. I'd personally like to go through the list - look at candidates that sound like they'd be really useful to me and then make a prioritised list of what to work on first.
(Crossposted from Ordinary Ideas.)
I’ve recently been thinking about AI safety, and some of the writeups might be interesting to some LWers:
- Ideas for building useful agents without goals: approval-directed agents, approval-directed bootstrapping, and optimization and goals. I think this line of reasoning is very promising.
- A formalization of one piece of the AI safety challenge: the steering problem. I am eager to see more precise, high-level discussion of AI safety, and I think this article is a helpful step in that direction. Since articulating the steering problem I have become much more optimistic about versions of it being solved in the near term. This mostly suggests that the steering problem fails to capture the hardest parts of AI safety. But it's still good news, and I think it may eventually cause some people to revise their understanding of AI safety.
- Some ideas for getting useful work out of self-interested agents: arguments and wagers, adversarial collaboration [older], and delegating to a mixed crowd. I think these are interesting ideas in an interesting area, but they have a ways to go before they could be useful.
I’m excited about a few possible next steps:
- Under the (highly improbable) assumption that various deep learning architectures could yield human-level performance, could they also predictably yield safe AI? I think we have a good chance of finding a solution---i.e. a design of plausibly safe AI, under roughly the same assumptions needed to get human-level AI---for some possible architectures. This would feel like a big step forward.
- For what capabilities can we solve the steering problem? I had originally assumed none, but I am now interested in trying to apply the ideas from the approval-directed agents post. From easiest to hardest, I think there are natural lines of attack using any of: natural language question answering, precise question answering, sequence prediction. It might even be possible using reinforcement learners (though this would involve different techniques).
- I am very interested in implementing effective debates, and am keen to test some unusual proposals. The connection to AI safety is more impressionistic, but in my mind these techniques are closely linked with approval-directed behavior.
- I’m currently writing up a concrete architecture for approval-directed agents, in order to facilitate clearer discussion of the idea. This is the kind of work that seems harder to do in advance, but at this point I think it’s mostly an exposition problem.
Vitalik Buterin has a new post about an interesting theoretical attack against Bitcoin. The idea relies on the assumption that the attacker can credibly commit to something quite crazy. The crazy thing is this: paying out 25.01 BTC to all the people who help him in his attack to steal 25 BTC from everyone, but only if the attack fails. This leads to a weird payoff matrix where the dominant strategy is to help him in the attack. The attack succeeds, and no payoff is made.
Of course, smart contracts make such crazy commitments perfectly possible, so this is a bit less theoretical than it sounds. But even as an abstract thought experiment about decision theories, it looks pretty interesting.
By the way, Vitalik Buterin is really on a roll. Just a week ago he had a thought-provoking blog post about how Decentralized Autonomous Organizations could possibly utilize a concept often discussed here: decision theory in a setup where agents can inspect each others' source code. It was shared on LW Discussion, but earned less exposure than I think it deserved.
EDIT 1: One smart commenter of the original post spotted that an isomorphic, extremely cool game was already proposed by billionaire Warren Buffett. Does this thing already have a name in game theory maybe?
EDIT 2: I wrote the game up in detail for some old-school game theorist friends:
The attacker orchestrates a game with 99 players. The attacker himself does not participate in the game.
Each of the players can either defect or cooperate, in the usual game-theoretic setup: they announce their decisions simultaneously, without side channels. We call the decision made by the majority of the players the "aggregate outcome". If the aggregate outcome is defection, we say that the attack succeeds. A player's payoff consists of two components:
1. If her decision coincides with the aggregate outcome, the player gets 10 utilons.
2. If the attack succeeds, the attacker takes 1 utilon from each of the 99 players, regardless of their individual decisions.
There are two equilibria (everyone cooperates and everyone defects), but the second payoff component breaks the symmetry: everyone prefers the cooperative equilibrium, so everyone will cooperate.
Now the attacker spices things up by making a credible commitment before the game. ("Credible" simply means that somehow he makes sure the promise cannot be broken. The classic way to achieve this is an escrow, but so-called smart contracts are emerging as a method for making fully unbreakable commitments.)
The attacker's commitment is quite counterintuitive: he promises that he will pay 11 utilons to each of the defecting players, but only if the attack fails.
Now the payoffs look like this:
- If the attack succeeds (the majority defects): defectors get 10 - 1 = 9, cooperators get 0 - 1 = -1.
- If the attack fails (the majority cooperates): defectors get 0 + 11 = 11, cooperators get 10.
Defection has become a dominant strategy. The clever thing, of course, is that if everyone defects, then the attacker reaches his goal without paying out anything.
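The payoff structure can be checked with a short script. This is just an illustration of the setup above; the function name and constants are mine, but the utilon numbers come straight from the write-up:

```python
# Toy model of the commitment attack described above, for one of the
# 99 players. A player matching the majority gets 10; if the attack
# succeeds the attacker takes 1 from everyone; if it fails, each
# defector is paid the promised bribe of 11.

MAJORITY_REWARD = 10   # for matching the aggregate outcome
ATTACK_LEVY = 1        # taken from every player if the attack succeeds
BRIBE = 11             # paid to each defector, but only if the attack fails

def payoff(my_move, majority_move):
    """Payoff for one player, given her move and the majority's move."""
    p = 0
    if my_move == majority_move:
        p += MAJORITY_REWARD
    if majority_move == "defect":      # attack succeeds: everyone pays the levy
        p -= ATTACK_LEVY
    elif my_move == "defect":          # attack fails: the bribe is paid out
        p += BRIBE
    return p

for majority in ("cooperate", "defect"):
    for move in ("cooperate", "defect"):
        print(f"majority={majority:9}  move={move:9} -> {payoff(move, majority)}")
```

Running it shows defection strictly dominating: 11 vs. 10 when the majority cooperates, and 9 vs. -1 when the majority defects.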
Quick summary: "Hidden rationalists" are what I call authors who espouse rationalist principles, and probably think of themselves as rational people, but don't always write on "traditional" Less Wrong-ish topics and probably haven't heard of Less Wrong.
I've noticed that a lot of my rationalist friends seem to read the same ten blogs, and while it's great to have a core set of favorite authors, it's also nice to stretch out a bit and see how everyday rationalists are doing cool stuff in their own fields of expertise. I've found many people who push my rationalist buttons in fields of interest to me (journalism, fitness, etc.), and I'm sure other LWers have their own people in their own fields.
So I'm setting up this post as a place to link to/summarize the work of your favorite hidden rationalists. Be liberal with your suggestions!
Another way to phrase this: Who are the people/sources who give you the same feelings you get when you read your favorite LW posts, but who many of us probably haven't heard of?
Here's my list, to kick things off:
- Peter Sandman, professional risk communication consultant. Often writes alongside Jody Lanard. Specialties: Effective communication, dealing with irrational people in a kind and efficient way, carefully weighing risks and benefits. My favorite recent post of his deals with empathy for Ebola victims and is a major, Slate Star Codex-esque tour de force. His "guestbook comments" page is better than his collection of web articles, but both are quite good.
- Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I've seen. His big thing is "superslow training", where you perform short and extremely intense workouts (video here). I've been moving in this direction for about 18 months now, and I've been able to cut my workout time approximately in half without losing strength. May not work for everyone, but reminds me of Leverage Research's sleep experiments; if it happens to work for you, you gain a heck of a lot of time. I also love the way he emphasizes the utility of strength training for all ages/genders -- very different from what you'd see on a lot of weightlifting sites.
- Philosophers' Mail. A website maintained by applied philosophers at the School of Life, which reminds me of a hippy-dippy European version of CFAR (in a good way). Not much science, but a lot of clever musings on the ways that philosophy can help us live, and some excellent summaries of philosophers who are hard to read in the original. (Their piece on Vermeer is a personal favorite, as is this essay on Simon Cowell.) This recently stopped posting new material, but the School of Life now collects similar work through The Book of Life.
It has long been known that algorithms out-perform human experts on a range of topics (here's a LW post on this by lukeprog). Why, then, is it that people continue to mistrust algorithms, in spite of their superiority, and instead cling to human advice? A recent paper by Dietvorst, Simmons and Massey suggests it is due to a cognitive bias which they call algorithm aversion. We judge less-than-perfect algorithms more harshly than less-than-perfect humans. They argue that since this aversion leads to poorer decisions, it is very costly, and that we therefore must find ways of combating it.
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
The results of five studies show that seeing algorithms err makes people less confident in them and less likely to choose them over an inferior human forecaster. This effect was evident in two distinct domains of judgment, including one in which the human forecasters produced nearly twice as much error as the algorithm. It arose regardless of whether the participant was choosing between the algorithm and her own forecasts or between the algorithm and the forecasts of a different participant. And it even arose among the (vast majority of) participants who saw the algorithm outperform the human forecaster.
The aversion to algorithms is costly, not only for the participants in our studies who lost money when they chose not to tie their bonuses to the algorithm, but for society at large. Many decisions require a forecast, and algorithms are almost always better forecasters than humans (Dawes, 1979; Grove et al., 2000; Meehl, 1954). The ubiquity of computers and the growth of the “Big Data” movement (Davenport & Harris, 2007) have encouraged the growth of algorithms but many remain resistant to using them. Our studies show that this resistance at least partially arises from greater intolerance for error from algorithms than from humans. People are more likely to abandon an algorithm than a human judge for making the same mistake. This is enormously problematic, as it is a barrier to adopting superior approaches to a wide range of important tasks. It means, for example, that people will more likely forgive an admissions committee than an admissions algorithm for making an error, even when, on average, the algorithm makes fewer such errors. In short, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms.
More optimistically, our findings do suggest that people will be much more willing to use algorithms when they do not see algorithms err, as will be the case when errors are unseen, the algorithm is unseen (as it often is for patients in doctors’ offices), or when predictions are nearly perfect. The 2012 U.S. presidential election season saw people embracing a perfectly performing algorithm. Nate Silver’s New York Times blog, Five Thirty Eight: Nate Silver’s Political Calculus, presented an algorithm for forecasting that election. Though the site had its critics before the votes were in— one Washington Post writer criticized Silver for “doing little more than weighting and aggregating state polls and combining them with various historical assumptions to project a future outcome with exaggerated, attention-grabbing exactitude” (Gerson, 2012, para. 2)—those critics were soon silenced: Silver’s model correctly predicted the presidential election results in all 50 states. Live on MSNBC, Rachel Maddow proclaimed, “You know who won the election tonight? Nate Silver,” (Noveck, 2012, para. 21), and headlines like “Nate Silver Gets a Big Boost From the Election” (Isidore, 2012) and “How Nate Silver Won the 2012 Presidential Election” (Clark, 2012) followed. Many journalists and popular bloggers declared Silver’s success a great boost for Big Data and statistical prediction (Honan, 2012; McDermott, 2012; Taylor, 2012; Tiku, 2012).
However, we worry that this is not such a generalizable victory. People may rally around an algorithm touted as perfect, but we doubt that this enthusiasm will generalize to algorithms that are shown to be less perfect, as they inevitably will be much of the time.
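The asymmetric-updating mechanism the authors describe can be illustrated with a deliberately crude toy model. Nothing below comes from the paper; the function and all numbers are invented for illustration:

```python
# Toy model (mine, not the paper's) of the asymmetry Dietvorst et al.
# describe: the same visible mistake costs an algorithm more credibility
# than it costs a human forecaster.

def final_confidence(initial, penalty_per_error, errors_seen):
    """Confidence left after watching a forecaster err `errors_seen` times."""
    return max(0.0, initial - penalty_per_error * errors_seen)

# The algorithm errs less often (2 vs. 3 visible mistakes), but each of
# its mistakes is judged twice as harshly.
algo_conf = final_confidence(initial=1.0, penalty_per_error=0.30, errors_seen=2)
human_conf = final_confidence(initial=1.0, penalty_per_error=0.15, errors_seen=3)

# The objectively better forecaster ends up less trusted.
print(f"algorithm: {algo_conf:.2f}, human: {human_conf:.2f}")
assert human_conf > algo_conf
```

With these made-up numbers the algorithm ends at confidence 0.40 and the human at 0.55, so the decision-maker ties her bonus to the inferior forecaster, which is the pattern the studies found.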
In American society, talking about money is taboo. It is ok to talk about how much money someone else made when they sold their company, or how much you would like to earn yearly if you got a raise, but in many other ways talking about money triggers embarrassment and social discomfort. One random example: no one dares suggest that bills should be paid according to wealth; instead, people quietly assume that each paying ~1/n is fair, which of course completely fails utilitarian standards.
Another interesting thing people don't talk about, but which would probably be useful to know, is money trigger-action patterns: rules that should fire whenever you have more money than X, for varying values of X.
A trivial example: when should you stop caring about pennies, or quarters? When should you start taking cabs or Ubers everywhere? These are minor cases, but there are more interesting questions that would benefit from a money trigger-action pattern.
An argument can be made, for instance, that one should invest in health insurance prior to cryonics, cryonics prior to painting a house, and recommended charities before expensive sound systems. But people never put numbers on those things.
When should you buy cryonics and the life insurance to fund it? When you own $1,000? $10,000? $1,000,000? Of course these thresholds vary from person to person and with currency, environment, age group, and family size. That is no reason to remain silent about them. Money is the unit of caring, but some people can care about many more things than others by virtue of having more money. Some things are worth caring about if and only if you have that many caring units to spare.
I'd like to see people talk about what one should care about after surpassing specific numeric thresholds of money, even though that seems to be an extremely taboo topic. It would be particularly revealing when someone who does not have a certain amount suggests a trigger-action pattern, and someone who does have that amount realizes that, indeed, they should purchase that thing. Some people would also be better calibrated about whether they need more or less money if they had thought about these thresholds beforehand.
Some suggested items for those who want to try numeric triggers: health insurance, cryonics, 10% donation to favorite cause, virtual assistant, personal assistant, car, house cleaner, masseuse, quitting your job, driver, boat, airplane, house, personal clinician, lawyer, body guard, etc...
...notice also that some of these are resource-satisfiable, but some may never be. It may always be more worthwhile to finance your anti-aging helper than your costume designer, so you'd hire the ten-millionth scientist to find out how to keep you young before considering hiring someone to design clothes specifically for you, perhaps because you don't like unique clothes. This is my feeling about boats: it feels like there are always other things that could be done with the money before owning a boat, though the outside view is that a lot of people with a lot of money buy boats.
Sean Carroll, physicist and proponent of Everettian Quantum Mechanics, has just posted a new article going over some of the common objections to EQM and why they are false. Of particular interest to us as rationalists:
Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. And the actual postulates of the theory are quite simple indeed:
- The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
- The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.
That is, as they say, it. Notice you don’t see anything about worlds in there. The worlds are there whether you like it or not, sitting in Hilbert space, waiting to see whether they become actualized in the course of the evolution. Notice, also, that these postulates are eminently testable — indeed, even falsifiable! And once you make them (and you accept an appropriate “past hypothesis,” just as in statistical mechanics, and are considering a sufficiently richly-interacting system), the worlds happen automatically.
Given that, you can see why the objection is dispiritingly wrong-headed. You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away. This distinction between what is postulated (which should be testable) and everything that is derived (which clearly need not be) seems pretty straightforward to me, but is a favorite thing for people to get confused about.
Very reminiscent of the quantum physics sequence here! I find that this distinction between number of entities and number of postulates is something that I need to remind people of all the time.
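For readers who want the two postulates in symbols, here is a standard rendering (my notation, not a quote from Carroll):

```latex
% Postulate 1: the state of the world is a vector in Hilbert space
\lvert \psi(t) \rangle \in \mathcal{H}

% Postulate 2: unitary evolution under the Schrodinger equation,
% for some particular Hamiltonian \hat{H}
i\hbar \, \frac{d}{dt} \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle

% "Worlds" are derived, not postulated: decoherence makes the terms of
% a superposition dynamically independent,
\lvert \psi \rangle \approx \sum_i c_i \, \lvert \mathrm{world}_i \rangle ,
\qquad
\langle \mathrm{world}_i \vert \mathrm{world}_j \rangle \approx \delta_{ij}
```

Nothing in the two postulates mentions worlds; the branch decomposition in the last line is a derived, approximate description that emerges under decoherence.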
META: This is my first post; if I have done anything wrong, or could have done something better, please tell me!