

I'm the new moderator

85 NancyLebovitz 13 January 2015 11:21PM

Viliam Bur made the announcement in Main, but not everyone checks Main, so I'm repeating it here.

During the coming months my time and attention will be heavily occupied by personal matters, so I will be unable to function as an LW moderator. The new LW moderator is... NancyLebovitz!

Starting today, please direct all complaints and investigation requests to Nancy. Please, not all of you during the first week; that can be a bit frightening for a new moderator.

There are a few old requests I haven't completed yet. I will try to close everything out over the next few days, but anything still unfinished at the end of January I will forward to Nancy, too.

Long live the new moderator!

Apptimize -- rationalist startup hiring engineers

65 nancyhua 12 January 2015 08:22PM

Apptimize is a two-year-old startup closely connected with the rationalist community, and one of the first founded by CFAR alumni.  We make “lean” possible for mobile apps -- our software lets mobile developers update or A/B test their apps in minutes, without submitting to the App Store. Our customers include big companies such as Nook and eBay, as well as Top 10 apps such as Flipagram. When companies have evaluated our product against competitors', they've chosen us every time.


We work incredibly hard, and we’re striving to build the strongest engineering team in the Bay Area. If you’re a good developer, we have a lot to offer.


Team

  • Our team of 14 includes 7 MIT alumni, 3 ex-Googlers, 1 Wharton MBA, 1 CMU CS alum, 1 Stanford alum, 2 MIT master's graduates, 1 MIT Ph.D. candidate, and 1 “20 Under 20” Thiel Fellow. Our CEO was also just named to the Forbes “30 Under 30”

  • David Salamon, Anna Salamon’s brother, built much of our early product

  • Our CEO is Nancy Hua, while our Android lead is “20 Under 20” Thiel Fellow James Koppel. They met after James spoke at the Singularity Summit

  • HP:MoR is required reading for the entire company

  • We evaluate candidates on curiosity even before evaluating them technically

  • Seriously, our team is badass. Just look

Self Improvement

  • You will have huge autonomy and ownership over your part of the product. You can set up new infrastructure and tools, expense business products and services, and even subcontract some of your tasks if you think it's a good idea

  • You will learn to be a more goal-driven agent, and understand the impact of everything you do on the rest of the business

  • Access to our library of over 50 books and audiobooks, and the freedom to purchase more

  • Everyone shares insights they’ve had every week

  • Self-improvement is so important to us that we only hire people committed to it. When we say that it’s a company value, we mean it

The Job

  • Our mobile engineers dive into the dark, undocumented corners of iOS and Android, while our backend crunches data from billions of requests per day

  • Engineers get giant monitors, a top-of-the-line MacBook Pro, and we’ll pay for whatever else is needed to get the job done

  • We don’t demand prior experience, but we do demand the fearlessness to jump outside your comfort zone and job description. That said, our website uses AngularJS, jQuery, and nginx, while our backend uses AWS, Java (the good parts), and PostgreSQL

  • We don’t have gratuitous perks, but we have what counts: Free snacks and catered meals, an excellent health and dental plan, and free membership to a gym across the street

  • Seriously, working here is awesome. As one engineer puts it, “we’re like a family bent on taking over the world”


If you’re interested, send some Bayesian evidence that you’re a good match to jobs@apptimize.com.

Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial

53 ciphergoth 15 January 2015 04:33PM

We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity. 

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI-researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk's donation aims to support precisely this type of research: "Here are all these leading AI researchers saying that AI safety is important", says Elon Musk. "I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity." 

[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI-researchers Stuart Russell and Francesca Rossi. [...]

The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy  (a detailed list of examples can be found here [PDF]). "Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere", says FLI co-founder Viktoriya Krakovna. 

[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories. 

Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday January 15, 2015

Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"

50 ciphergoth 22 January 2015 08:21PM

Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?

Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.

"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21

Could you be Prof Nick Bostrom's sidekick?

45 RobertWiblin 05 December 2014 01:09AM

If funding were available, the Centre for Effective Altruism would consider hiring someone to work closely with Prof Nick Bostrom to provide anything and everything he needs to be more productive. Bostrom is, of course, the Director of the Future of Humanity Institute at Oxford University, and the author of Superintelligence, the best guide yet to the possible risks posed by artificial intelligence.

Nobody has yet confirmed they will fund this role, but we are nevertheless interested in getting expressions of interest from suitable candidates.

The list of required characteristics is hefty, and the position would be a challenging one:

  • Willing to commit to the role for at least a year, and preferably several
  • Able to live and work in Oxford during this time
  • Conscientious and discreet
  • Trustworthy
  • Able to keep flexible hours (some days a lot of work, others not much)
  • Highly competent at almost everything in life (for example, organising travel, media appearances, choosing good products, and so on)
  • Will not screw up and look bad when dealing with external parties (e.g. media, event organisers, the university)
  • Has a good personality 'fit' with Bostrom
  • Willing to do some tasks that are not high-status
  • Willing to help Bostrom with both his professional and personal life (to free up his attention)
  • Can speak English well
  • Knowledge of rationality, philosophy and artificial intelligence would also be helpful, and would allow you to also do more work as a research assistant.

The research Bostrom can do is unique; to my knowledge, no one else has made such significant strides in clarifying the biggest risks facing humanity as a whole. As a result, helping increase Bostrom's output by, say, 20% would be a major contribution. This person's work would also help the rest of the Future of Humanity Institute run smoothly.

The role would offer significant skill development in operations, some skill development in communications and research, and the chance to build extensive relationships with the people and organisations working on existential risks.

If you would like to know more, or be added to the list of potential candidates, please email me: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org. Feel free to share this post around.

Note that we are also hiring for a bunch of other roles, with applications closing Friday the 12th December.

 

Harper's Magazine article on LW/MIRI/CFAR and Ethereum

44 gwern 12 December 2014 08:34PM

Cover title: “Power and paranoia in Silicon Valley”; article title: “Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley” (mirrors: 1, 2, 3), by Sam Frank; Harper’s Magazine, January 2015, pp. 26-36 (~8,500 words). The beginning and ending focus on Ethereum and Vitalik Buterin, so I'll excerpt the LW/MIRI/CFAR-focused middle:

…Blake Masters-the name was too perfect-had, obviously, dedicated himself to the command of self and universe. He did CrossFit and ate Bulletproof, a tech-world variant of the paleo diet. On his Tumblr’s About page, since rewritten, the anti-belief belief systems multiplied, hyperlinked to Wikipedia pages or to the confoundingly scholastic website Less Wrong: “Libertarian (and not convinced there’s irreconcilable fissure between deontological and consequentialist camps). Aspiring rationalist/Bayesian. Secularist/agnostic/ ignostic . . . Hayekian. As important as what we know is what we don’t. Admittedly eccentric.” Then: “Really, really excited to be in Silicon Valley right now, working on fascinating stuff with an amazing team.” I was startled that all these negative ideologies could be condensed so easily into a positive worldview. …I saw the utopianism latent in capitalism-that, as Bernard Mandeville had it three centuries ago, it is a system that manufactures public benefit from private vice. I started CrossFit and began tinkering with my diet. I browsed venal tech-trade publications, and tried and failed to read Less Wrong, which was written as if for aliens.

…I left the auditorium of Alice Tully Hall. Bleary beside the silver coffee urn in the nearly empty lobby, I was buttonholed by a man whose name tag read MICHAEL VASSAR, METAMED research. He wore a black-and-white paisley shirt and a jacket that was slightly too big for him. “What did you think of that talk?” he asked, without introducing himself. “Disorganized, wasn’t it?” A theory of everything followed. Heroes like Elon and Peter (did I have to ask? Musk and Thiel). The relative abilities of physicists and biologists, their standard deviations calculated out loud. How exactly Vassar would save the world. His left eyelid twitched, his full face winced with effort as he told me about his “personal war against the universe.” My brain hurt. I backed away and headed home. But Vassar had spoken like no one I had ever met, and after Kurzweil’s keynote the next morning, I sought him out. He continued as if uninterrupted. Among the acolytes of eternal life, Vassar was an eschatologist. “There are all of these different countdowns going on,” he said. “There’s the countdown to the broad postmodern memeplex undermining our civilization and causing everything to break down, there’s the countdown to the broad modernist memeplex destroying our environment or killing everyone in a nuclear war, and there’s the countdown to the modernist civilization learning to critique itself fully and creating an artificial intelligence that it can’t control. There are so many different - on different time-scales - ways in which the self-modifying intelligent processes that we are embedded in undermine themselves. I’m trying to figure out ways of disentangling all of that. . . .I’m not sure that what I’m trying to do is as hard as founding the Roman Empire or the Catholic Church or something. But it’s harder than people’s normal big-picture ambitions, like making a billion dollars.” Vassar was thirty-four, one year older than I was. He had gone to college at seventeen, and had worked as an actuary, as a teacher, in nanotech, and in the Peace Corps. He’d founded a music-licensing start-up called Sir Groovy. Early in 2012, he had stepped down as president of the Singularity Institute for Artificial Intelligence, now called the Machine Intelligence Research Institute (MIRI), which was created by an autodidact named Eliezer Yudkowsky, who also started Less Wrong. Vassar had left to found MetaMed, a personalized-medicine company, with Jaan Tallinn of Skype and Kazaa, $500,000 from Peter Thiel, and a staff that included young rationalists who had cut their teeth arguing on Yudkowsky’s website. The idea behind MetaMed was to apply rationality to medicine-“rationality” here defined as the ability to properly research, weight, and synthesize the flawed medical information that exists in the world. Prices ranged from $25,000 for a literature review to a few hundred thousand for a personalized study. “We can save lots and lots and lots of lives,” Vassar said (if mostly moneyed ones at first). “But it’s the signal-it’s the ‘Hey! Reason works!’-that matters. . . . It’s not really about medicine.” Our whole society was sick - root, branch, and memeplex - and rationality was the only cure. …I asked Vassar about his friend Yudkowsky. “He has worse aesthetics than I do,” he replied, “and is actually incomprehensibly smart.” We agreed to stay in touch.

One month later, I boarded a plane to San Francisco. I had spent the interim taking a second look at Less Wrong, trying to parse its lore and jargon: “scope insensitivity,” “ugh field,” “affective death spiral,” “typical mind fallacy,” “counterfactual mugging,” “Roko’s basilisk.” When I arrived at the MIRI offices in Berkeley, young men were sprawled on beanbags, surrounded by whiteboards half black with equations. I had come costumed in a Fermat’s Last Theorem T-shirt, a summary of the proof on the front and a bibliography on the back, printed for the number-theory camp I had attended at fifteen. Yudkowsky arrived late. He led me to an empty office where we sat down in mismatched chairs. He wore glasses, had a short, dark beard, and his heavy body seemed slightly alien to him. I asked what he was working on. “Should I assume that your shirt is an accurate reflection of your abilities,” he asked, “and start blabbing math at you?” Eight minutes of probability and game theory followed. Cogitating before me, he kept grimacing as if not quite in control of his face. “In the very long run, obviously, you want to solve all the problems associated with having a stable, self-improving, beneficial-slash-benevolent AI, and then you want to build one.” What happens if an artificial intelligence begins improving itself, changing its own source code, until it rapidly becomes - foom! is Yudkowsky’s preferred expression - orders of magnitude more intelligent than we are? A canonical thought experiment devised by Oxford philosopher Nick Bostrom in 2003 suggests that even a mundane, industrial sort of AI might kill us. Bostrom posited a “superintelligence whose top goal is the manufacturing of paper-clips.” For this AI, known fondly on Less Wrong as Clippy, self-improvement might entail rearranging the atoms in our bodies, and then in the universe - and so we, and everything else, end up as office supplies. Nothing so misanthropic as Skynet is required, only indifference to humanity. What is urgently needed, then, claims Yudkowsky, is an AI that shares our values and goals. This, in turn, requires a cadre of highly rational mathematicians, philosophers, and programmers to solve the problem of “friendly” AI - and, incidentally, the problem of a universal human ethics - before an indifferent, unfriendly AI escapes into the wild.

Among those who study artificial intelligence, there’s no consensus on either point: that an intelligence explosion is possible (rather than, for instance, a proliferation of weaker, more limited forms of AI) or that a heroic team of rationalists is the best defense in the event. That MIRI has as much support as it does (in 2012, the institute’s annual revenue broke $1 million for the first time) is a testament to Yudkowsky’s rhetorical ability as much as to any technical skill. Over the course of a decade, his writing, along with that of Bostrom and a handful of others, has impressed the dangers of unfriendly AI on a growing number of people in the tech world and beyond. In August, after reading Superintelligence, Bostrom’s new book, Elon Musk tweeted, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” In 2000, when Yudkowsky was twenty, he founded the Singularity Institute with the support of a few people he’d met at the Foresight Institute, a Palo Alto nanotech think tank. He had already written papers on “The Plan to Singularity” and “Coding a Transhuman AI,” and posted an autobiography on his website, since removed, called “Eliezer, the Person.” It recounted a breakdown of will when he was eleven and a half: “I can’t do anything. That’s the phrase I used then.” He dropped out before high school and taught himself a mess of evolutionary psychology and cognitive science. He began to “neuro-hack” himself, systematizing his introspection to evade his cognitive quirks. Yudkowsky believed he could hasten the singularity by twenty years, creating a superhuman intelligence and saving humankind in the process. He met Thiel at a Foresight Institute dinner in 2005 and invited him to speak at the first annual Singularity Summit. The institute’s paid staff grew. In 2006, Yudkowsky began writing a hydra-headed series of blog posts: science-fictionish parables, thought experiments, and explainers encompassing cognitive biases, self-improvement, and many-worlds quantum mechanics that funneled lay readers into his theory of friendly AI. Rationality workshops and Meetups began soon after. In 2009, the blog posts became what he called Sequences on a new website: Less Wrong. The next year, Yudkowsky began publishing Harry Potter and the Methods of Rationality at fanfiction.net. The Harry Potter category is the site’s most popular, with almost 700,000 stories; of these, HPMoR is the most reviewed and the second-most favorited. The last comment that the programmer and activist Aaron Swartz left on Reddit before his suicide in 2013 was on /r/hpmor. In Yudkowsky’s telling, Harry is not only a magician but also a scientist, and he needs just one school year to accomplish what takes canon-Harry seven. HPMoR is serialized in arcs, like a TV show, and runs to a few thousand pages when printed; the book is still unfinished. Yudkowsky and I were talking about literature, and Swartz, when a college student wandered in. Would Eliezer sign his copy of HPMoR? “But you have to, like, write something,” he said. “You have to write, ‘I am who I am.’ So, ‘I am who I am’ and then sign it.” “Alrighty,” Yudkowsky said, signed, continued. “Have you actually read Methods of Rationality at all?” he asked me. “I take it not.” (I’d been found out.) “I don’t know what sort of a deadline you’re on, but you might consider taking a look at that.” (I had taken a look, and hated the little I’d managed.) 
“It has a legendary nerd-sniping effect on some people, so be warned. That is, it causes you to read it for sixty hours straight.”

The nerd-sniping effect is real enough. Of the 1,636 people who responded to a 2013 survey of Less Wrong’s readers, one quarter had found the site thanks to HPMoR, and many more had read the book. Their average age was 27.4, their average IQ 138.2. Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, 54.7 % American, 89.3% atheist or agnostic. The catastrophes they thought most likely to wipe out at least 90% of humanity before the year 2100 were, in descending order, pandemic (bioengineered), environmental collapse, unfriendly AI, nuclear war, pandemic (natural), economic/political collapse, asteroid, nanotech/gray goo. Forty-two people, 2.6 %, called themselves futarchists, after an idea from Robin Hanson, an economist and Yudkowsky’s former coblogger, for reengineering democracy into a set of prediction markets in which speculators can bet on the best policies. Forty people called themselves reactionaries, a grab bag of former libertarians, ethno-nationalists, Social Darwinists, scientific racists, patriarchists, pickup artists, and atavistic “traditionalists,” who Internet-argue about antidemocratic futures, plumping variously for fascism or monarchism or corporatism or rule by an all-powerful, gold-seeking alien named Fnargl who will free the markets and stabilize everything else. At the bottom of each year’s list are suggestive statistical irrelevancies: “every optimizing system’s a dictator and i’m not sure which one i want in charge,” “Autocracy (important: myself as autocrat),” “Bayesian (aspiring) Rationalist. Technocratic. Human-centric Extropian Coherent Extrapolated Volition.” “Bayesian” refers to Bayes’s Theorem, a mathematical formula that describes uncertainty in probabilistic terms, telling you how much to update your beliefs when given new information. This is a formalization and calibration of the way we operate naturally, but “Bayesian” has a special status in the rationalist community because it’s the least imperfect way to think. “Extropy,” the antonym of “entropy,” is a decades-old doctrine of continuous human improvement, and “coherent extrapolated volition” is one of Yudkowsky’s pet concepts for friendly artificial intelligence. Rather than our having to solve moral philosophy in order to arrive at a complete human goal structure, C.E.V. would computationally simulate eons of moral progress, like some kind of Whiggish Pangloss machine. As Yudkowsky wrote in 2004, “In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together.” Yet can even a single human’s volition cohere or compute in this way, let alone humanity’s? We stood up to leave the room. Yudkowsky stopped me and said I might want to turn my recorder on again; he had a final thought. “We’re part of the continuation of the Enlightenment, the Old Enlightenment. This is the New Enlightenment,” he said. “Old project’s finished. We actually have science now, now we have the next part of the Enlightenment project.”

In 2013, the Singularity Institute changed its name to the Machine Intelligence Research Institute. Whereas MIRI aims to ensure human-friendly artificial intelligence, an associated program, the Center for Applied Rationality, helps humans optimize their own minds, in accordance with Bayes’s Theorem. The day after I met Yudkowsky, I returned to Berkeley for one of CFAR’s long-weekend workshops. The color scheme at the Rose Garden Inn was red and green, and everything was brocaded. The attendees were mostly in their twenties: mathematicians, software engineers, quants, a scientist studying soot, employees of Google and Facebook, an eighteen-year-old Thiel Fellow who’d been paid $100,000 to leave Boston College and start a company, professional atheists, a Mormon turned atheist, an atheist turned Catholic, an Objectivist who was photographed at the premiere of Atlas Shrugged II: The Strike. There were about three men for every woman. At the Friday-night meet and greet, I talked with Benja, a German who was studying math and behavioral biology at the University of Bristol, whom I had spotted at MIRI the day before. He was in his early thirties and quite tall, with bad posture and a ponytail past his shoulders. He wore socks with sandals, and worried a paper cup as we talked. Benja had felt death was terrible since he was a small child, and wanted his aging parents to sign up for cryonics, if he could figure out how to pay for it on a grad-student stipend. He was unsure about the risks from unfriendly AI - “There is a part of my brain,” he said, “that sort of goes, like, ‘This is crazy talk; that’s not going to happen’” - but the probabilities had persuaded him. He said there was only about a 30% chance that we could make it another century without an intelligence explosion. He was at CFAR to stop procrastinating. Julia Galef, CFAR’s president and cofounder, began a session on Saturday morning with the first of many brain-as-computer metaphors. We are “running rationality on human hardware,” she said, not supercomputers, so the goal was to become incrementally more self-reflective and Bayesian: not perfectly rational agents, but “agent-y.” The workshop’s classes lasted six or so hours a day; activities and conversations went well into the night. We got a condensed treatment of contemporary neuroscience that focused on hacking our brains’ various systems and modules, and attended sessions on habit training, urge propagation, and delegating to future selves. We heard a lot about Daniel Kahneman, the Nobel Prize-winning psychologist whose work on cognitive heuristics and biases demonstrated many of the ways we are irrational. Geoff Anders, the founder of Leverage Research, a “meta-level nonprofit” funded by Thiel, taught a class on goal factoring, a process of introspection that, after many tens of hours, maps out every one of your goals down to root-level motivations-the unchangeable “intrinsic goods,” around which you can rebuild your life. Goal factoring is an application of Connection Theory, Anders’s model of human psychology, which he developed as a Rutgers philosophy student disserting on Descartes, and Connection Theory is just the start of a universal renovation. Leverage Research has a master plan that, in the most recent public version, consists of nearly 300 steps. 
It begins from first principles and scales up from there: “Initiate a philosophical investigation of philosophical method”; “Discover a sufficiently good philosophical method”; have 2,000-plus “actively and stably benevolent people successfully seek enough power to be able to stably guide the world”; “People achieve their ultimate goals as far as possible without harming others”; “We have an optimal world”; “Done.” On Saturday night, Anders left the Rose Garden Inn early to supervise a polyphasic-sleep experiment that some Leverage staff members were conducting on themselves. It was a schedule called the Everyman 3, which compresses sleep into three twenty-minute REM naps each day and three hours at night for slow-wave. Anders was already polyphasic himself. Operating by the lights of his own best practices, goal-factored, coherent, and connected, he was able to work 105 hours a week on world optimization. For the rest of us, for me, these were distant aspirations. We were nerdy and unperfected. There was intense discussion at every free moment, and a genuine interest in new ideas, if especially in testable, verifiable ones. There was joy in meeting peers after years of isolation. CFAR was also insular, overhygienic, and witheringly focused on productivity. Almost everyone found politics to be tribal and viscerally upsetting. Discussions quickly turned back to philosophy and math. By Monday afternoon, things were wrapping up. Andrew Critch, a CFAR cofounder, gave a final speech in the lounge: “Remember how you got started on this path. Think about what was the time for you when you first asked yourself, ‘How do I work?’ and ‘How do I want to work?’ and ‘What can I do about that?’ . . . Think about how many people throughout history could have had that moment and not been able to do anything about it because they didn’t know the stuff we do now. I find this very upsetting to think about. It could have been really hard. A lot harder.” He was crying. “I kind of want to be grateful that we’re now, and we can share this knowledge and stand on the shoulders of giants like Daniel Kahneman . . . I just want to be grateful for that. . . . And because of those giants, the kinds of conversations we can have here now, with, like, psychology and, like, algorithms in the same paragraph, to me it feels like a new frontier. . . . Be explorers; take advantage of this vast new landscape that’s been opened up to us in this time and this place; and bear the torch of applied rationality like brave explorers. And then, like, keep in touch by email.” The workshop attendees put giant Post-its on the walls expressing the lessons they hoped to take with them. A blue one read RATIONALITY IS SYSTEMATIZED WINNING. Above it, in pink: THERE ARE OTHER PEOPLE WHO THINK LIKE ME. I AM NOT ALONE.

That night, there was a party. Alumni were invited. Networking was encouraged. Post-its proliferated; one, by the beer cooler, read SLIGHTLY ADDICTIVE. SLIGHTLY MIND-ALTERING. Another, a few feet to the right, over a double stack of bound copies of Harry Potter and the Methods of Rationality: VERY ADDICTIVE. VERY MIND-ALTERING. I talked to one of my roommates, a Google scientist who worked on neural nets. The CFAR workshop was just a whim to him, a tourist weekend. “They’re the nicest people you’d ever meet,” he said, but then he qualified the compliment. “Look around. If they were effective, rational people, would they be here? Something a little weird, no?” I walked outside for air. Michael Vassar, in a clinging red sweater, was talking to an actuary from Florida. They discussed timeless decision theory (approximately: intelligent agents should make decisions on the basis of the futures, or possible worlds, that they predict their decisions will create) and the simulation argument (essentially: we’re living in one), which Vassar traced to Schopenhauer. He recited lines from Kipling’s “If-” in no particular order and advised the actuary on how to change his life: Become a pro poker player with the $100k he had in the bank, then hit the Magic: The Gathering pro circuit; make more money; develop more rationality skills; launch the first Costco in Northern Europe. I asked Vassar what was happening at MetaMed. He told me that he was raising money, and was in discussions with a big HMO. He wanted to show up Peter Thiel for not investing more than $500,000. “I’m basically hoping that I can run the largest convertible-debt offering in the history of finance, and I think it’s kind of reasonable,” he said. “I like Peter. I just would like him to notice that he made a mistake . . . I imagine a hundred million or a billion will cause him to notice . . . I’d like to have a pi-billion-dollar valuation.” I wondered whether Vassar was drunk. He was about to drive one of his coworkers, a young woman named Alyssa, home, and he asked whether I would join them. I sat silently in the back of his musty BMW as they talked about potential investors and hires. Vassar almost ran a red light. After Alyssa got out, I rode shotgun, and we headed back to the hotel.

It was getting late. I asked him about the rationalist community. Were they really going to save the world? From what? “Imagine there is a set of skills,” he said. “There is a myth that they are possessed by the whole population, and there is a cynical myth that they’re possessed by 10% of the population. They’ve actually been wiped out in all but about one person in three thousand.” It is important, Vassar said, that his people, “the fragments of the world,” lead the way during “the fairly predictable, fairly total cultural transition that will predictably take place between 2020 and 2035 or so.” We pulled up outside the Rose Garden Inn. He continued: “You have these weird phenomena like Occupy where people are protesting with no goals, no theory of how the world is, around which they can structure a protest. Basically this incredibly, weirdly, thoroughly disempowered group of people will have to inherit the power of the world anyway, because sooner or later everyone older is going to be too old and too technologically obsolete and too bankrupt. The old institutions may largely break down or they may be handed over, but either way they can’t just freeze. These people are going to be in charge, and it would be helpful if they, as they come into their own, crystallize an identity that contains certain cultural strengths like argument and reason.” I didn’t argue with him, except to press, gently, on his particular form of elitism. His rationalism seemed so limited to me, so incomplete. “It is unfortunate,” he said, “that we are in a situation where our cultural heritage is possessed only by people who are extremely unappealing to most of the population.” That hadn’t been what I’d meant. I had meant rationalism as itself a failure of the imagination. “The current ecosystem is so totally fucked up,” Vassar said. “But if you have conversations here”-he gestured at the hotel-“people change their mind and learn and update and change their behaviors in response to the things they say and learn. That never happens anywhere else.” In a hallway of the Rose Garden Inn, a former high-frequency trader started arguing with Vassar and Anna Salamon, CFAR’s executive director, about whether people optimize for hedons or utilons or neither, about mountain climbers and other high-end masochists, about whether world happiness is currently net positive or negative, increasing or decreasing. Vassar was eating and drinking everything within reach. My recording ends with someone saying, “I just heard ‘hedons’ and then was going to ask whether anyone wants to get high,” and Vassar replying, “Ah, that’s a good point.” Other voices: “When in California . . .” “We are in California, yes.”

…Back on the East Coast, summer turned into fall, and I took another shot at reading Yudkowsky’s Harry Potter fanfic. It’s not what I would call a novel, exactly, rather an unending, self-satisfied parable about rationality and trans-humanism, with jokes.

…I flew back to San Francisco, and my friend Courtney and I drove to a cul-de-sac in Atherton, at the end of which sat the promised mansion. It had been repurposed as cohousing for children who were trying to build the future: start-up founders, singularitarians, a teenage venture capitalist. The woman who coined the term “open source” was there, along with a Less Wronger and Thiel Capital employee who had renamed himself Eden. The Day of the Idealist was a day for self-actualization and networking, like the CFAR workshop without the rigor. We were to set “mega goals” and pick a “core good” to build on in the coming year. Everyone was a capitalist; everyone was postpolitical. I squabbled with a young man in a Tesla jacket about anti-Google activism. No one has a right to housing, he said; programmers are the people who matter; the protesters’ antagonistic tactics had totally discredited them.

…Thiel and Vassar and Yudkowsky, for all their far-out rhetoric, take it on faith that corporate capitalism, unchecked just a little longer, will bring about this era of widespread abundance. Progress, Thiel thinks, is threatened mostly by the political power of what he calls the “unthinking demos.”


Pointer thanks to /u/Vulture.

CFAR fundraiser far from filled; 4 days remaining

42 AnnaSalamon 27 January 2015 07:26AM

We're 4 days from the end of our matching fundraiser, and still only about a third of the way to our target (and to the point where pledged funds would cease being matched).

If you'd like to support the growth of rationality in the world, please consider donating, or asking me any questions you may have.  I'd love to talk.  I suspect funds donated to CFAR between now and Jan 31 are quite high-impact.

As a random bonus, I promise that if we meet the $120k matching challenge, I'll post at least two posts with some never-before-shared (on here) rationality techniques that we've been playing with around CFAR.

Don't estimate your creative intelligence by your critical intelligence

37 PhilGoetz 05 February 2015 02:41AM

When I criticize, I'm a genius. I can go through a book of highly-referenced scientific articles and find errors in each of them. Boy, I feel smart. How are these famous people so dumb?

But when I write, I suddenly become stupid. I sometimes spend half a day writing something and then realize at the end, or worse, after posting, that what it says simplifies to something trivial, or that I've made several unsupported assumptions, or claimed things I didn't really know were true. Or I post something, then have to go back every ten minutes to fix some point that I realize is not quite right, sometimes to the point where the whole thing falls apart.

If someone writes an article or expresses an idea that you find mistakes in, that doesn't make you smarter than that person. If you create an equally-ambitious article or idea that no one else finds mistakes in, then you can start congratulating yourself.

Easy wins aren't news

35 PhilGoetz 19 February 2015 07:38PM

Recently I talked with a guy from Grant Street Group. They make, among other things, software with which local governments can auction their bonds on the Internet.

By making the auction process more transparent and easier to participate in, they enable local governments that need to sell bonds (to build a high school, for instance) to sell those bonds at, say, 7% interest instead of 8%. (At least, that's what he said.)
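As a back-of-the-envelope illustration of what that one percentage point is worth (the $20M issue size and 20-year term below are hypothetical numbers I've picked, not figures from Grant Street Group):

```python
# Hypothetical example: a $20M, 20-year school bond sold at 7% instead
# of 8%, modeled as a simple interest-only issue with principal repaid
# at maturity. All numbers are illustrative assumptions.
principal = 20_000_000
years = 20

for rate in (0.08, 0.07):
    annual = principal * rate
    print(f"{rate:.0%}: ${annual:,.0f}/year, ${annual * years:,.0f} total")

# The one-point difference is principal * 0.01 * years = $4,000,000 --
# real money for one high school, gained purely from auction mechanics.
```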

They have similar software for auctioning liens on property taxes, which also helps local governments raise more money by bringing more buyers to each auction, and probably helps the buyers reduce their risks by giving them more information.

This is a big deal. I think it's potentially more important than any budget argument that's been on the front pages since the 1960s. Yet I only heard of it by chance.

People would rather argue about reducing the budget by eliminating waste, or cutting subsidies to people who don't deserve it, or changing our ideological priorities. Nobody wants to talk about auction mechanics. But fixing the auction mechanics is the easy win. It's so easy that nobody's interested in it. It doesn't buy us fuzzies or let us signal our affiliations. To an individual activist, it's hardly worth doing.

The Galileo affair: who was on the side of rationality?

34 Val 15 February 2015 08:52PM

Introduction

A recent survey showed that the LessWrong discussion forums attract readers who are predominantly atheist or agnostic, and who lean towards the left or far left in politics. As one of the main goals of LessWrong is overcoming bias, I would like to raise a topic which I think has a high probability of challenging some biases held by at least some members of the community. It's easy to fight against biases when the biases belong to your opponents, but much harder when you yourself might be the one holding them. It's also easy to cherry-pick arguments which support your beliefs and to ignore those which would disprove them. And it's common in such discussions for the side calling itself rationalist to make exactly the same mistakes it accuses its opponents of making. Far too often I have seen people (sometimes even Yudkowsky himself) who are very good rationalists quickly become irrational and commit several fallacies when arguing about history or religion. This most commonly manifests when we take the dumbest and most fundamentalist young-Earth creationists as an example, win easily against them, and then claim to have disproved all arguments ever made by any theist. No, this article will not be about whether God exists, or whether any real-world religion is fundamentally right or wrong. I strongly discourage any discussion of those two topics.

This article has two main purposes:

1. To show an interesting example where the scientific method can lead to wrong conclusions

2. To overcome a certain specific bias: namely, the belief that the pre-modern Catholic Church opposed the concept of the Earth orbiting the Sun with the deliberate purpose of hindering scientific progress and keeping the world in ignorance. I hope this will also prove an interesting challenge for your rationality, because it is easy to fight against bias in others, but not so easy to fight against bias in yourself.

The basis of my claims is that I have read the book written by Galilei himself, and that I am well read (though not a professional) in early modern history, especially that of the 16th and 17th centuries.

 

Geocentrism versus Heliocentrism

I assume every educated person knows the name of Galileo Galilei. I won't waste the site's space and the readers' time presenting a full biography; there are plenty of online resources where you can find more than enough biographical information about him.

The controversy?

What is interesting about him is how many people have severe misconceptions about him. Far too often he is celebrated as the one sane man in an era of ignorance, the sole propagator of science and rationality at a time when the powers of the day suppressed any scientific thought and ridiculed everyone who tried to challenge the accepted theories about the physical world. Some even go as far as claiming that people then believed the Earth was flat. The flat-Earth theory in fact had no currency at all, but it is true that the heliocentric view of the Solar System (the Earth revolving around the Sun) was not yet accepted.

However, the claim that the Church was suppressing evidence about heliocentrism "to maintain its power over the ignorant masses" can be disproved easily:

- The common people didn't go to school where they could have learned about it, and those commoners who did attend school just learned to read and write, not much more, so they couldn't have cared less about what orbits what. This differs from 20th-21st-century fundamentalists who want to teach young-Earth creationism in schools; back in the 17th century, there were no classes in which either the geocentric or the heliocentric view could have been taught to the masses.

- Heliocentrism was not discovered by Galilei. It was first proposed by Nicolaus Copernicus almost 100 years before Galilei. Copernicus never had any trouble with the Inquisition. His theory didn't gain wide acceptance, but he and his followers weren't persecuted either.

- Galilei was only sentenced to house arrest, and mostly for insulting the pope and doing other unwise things. The political climate in 17th-century Italy was quite messy, and Galilei made quite a few unfortunate choices regarding his alliances. Actually, Galilei was the one who brought religion into the debate: his opponents were citing Aristotle, not the Bible, in their arguments. Galilei, however, wanted to reinterpret Scripture based on his (unproven) beliefs, and insisted that he should have the authority to push his own views about how people interpret the Bible. Of course this pissed quite a few people off, and his case was not helped by his publicly calling the pope an idiot.

- For a long time Galilei was a good friend of the pope, while holding heliocentric views, as were a couple of other astronomers. The heliocentrism-geocentrism debates were common among astronomers of the day, and were not hindered but even encouraged by the pope.

- The heliocentrism-geocentrism debate was never an atheism-theism debate. The heliocentrists were committed theists, just like the defenders of geocentrism. The Church didn't suppress science; it actually funded the research of most scientists.

- The defenders of geocentrism didn't use the Bible as the basis for their claims. They used Aristotle and, for the time, good scientific reasoning. The heliocentrists were much more prone to fall back on the "God did it" argument when they couldn't defend the gaps in their proofs.

 

The birth of heliocentrism

By the 16th century, astronomers had plotted the movements of the most important celestial bodies in the sky. Observing the motion of the Sun, the Moon and the stars, it would seem obvious that the Earth is motionless and everything orbits around it. This model (called geocentrism) had only one minor flaw: the planets would sometimes make a loop in their motion, "moving backwards". Modeling these motions required a lot of very complicated formulas. Thus, by virtue of Occam's razor, a theory was born which could better explain the motion of the planets: what if the Earth and everything else orbited around the Sun? However, this new theory (heliocentrism) had a lot of issues of its own, because while it could explain the looping motion of the planets, there were many things which it either couldn't explain at all, or which the geocentric model explained much better.
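As a sanity check on that claim, here is a minimal sketch (using modern orbital radii and periods, which of course nobody had in this form at the time) showing that a moving Earth produces the retrograde loops automatically, with no epicycles needed:

```python
import numpy as np

# Toy model: circular, coplanar orbits for Earth and Mars (modern
# values; an idealization for illustration only).
R_EARTH, R_MARS = 1.0, 1.524    # orbital radii in AU
T_EARTH, T_MARS = 1.0, 1.881    # orbital periods in years

t = np.linspace(0, 4.0, 4000)   # a few years of observations

def position(radius, period, t):
    """Heliocentric position on a circular orbit."""
    angle = 2 * np.pi * t / period
    return radius * np.cos(angle), radius * np.sin(angle)

ex, ey = position(R_EARTH, T_EARTH, t)
mx, my = position(R_MARS, T_MARS, t)

# Direction of Mars as seen from the moving Earth.
longitude = np.unwrap(np.arctan2(my - ey, mx - ex))

# Wherever the apparent longitude decreases, Mars "moves backwards".
retrograde = np.diff(longitude) < 0
print(f"Mars appears retrograde {retrograde.mean():.0%} of the time")
```

The loops appear whenever the faster-moving Earth overtakes the slower outer planet.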

 

The proofs, advantages and disadvantages

The heliocentric view had only a single advantage over the geocentric one: it could describe the motion of the planets with a much simpler formula.

However, it had a number of severe problems:

- Gravity. Why do objects have weight, and why are they all pulled towards the center of the Earth? Why don't objects fall off the Earth on the other side of the planet? Remember, Newton wasn't even born yet! The geocentric view had a very simple explanation, dating back to Aristotle: it is the nature of all objects to strive towards the center of the world, and the center of the spherical Earth is the center of the world. The heliocentric theory couldn't counter this argument.

- Stellar parallax. If the Earth is not stationary, then the relative positions of the stars should change as the Earth orbits the Sun. No such change was observable with the instruments of that time (see the order-of-magnitude check after this list). Only in the first half of the 19th century did we succeed in measuring stellar parallax, and only then was the movement of the Earth around the Sun finally proven.

- Galilei tried to use the tides as a proof. The geocentrists argued that the tides are caused by the Moon, even though they didn't know by what mechanism, but Galilei said that this was just a coincidence and the tides are not caused by the Moon: just as water in a barrel on a cart is still while the cart is stationary and sloshes around when the cart is pulled by a horse, so, he claimed, the tides are caused by water sloshing around as the Earth moves. If you read Galilei's book, you will discover quite a number of such silly arguments, and you'll see that Galilei was anything but a rationalist. Instead of changing his views in the face of overwhelming proofs, he used all possible fallacies to push his view through.
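To see why no parallax was observable, here is a rough order-of-magnitude check using modern values (which were, of course, unknown at the time):

```python
import math

# Parallax angle of the nearest star, roughly 1 AU / distance.
# Modern values; illustrative assumptions, not 17th-century data.
AU = 1.496e11                      # metres
LIGHT_YEAR = 9.461e15              # metres
d_nearest = 4.37 * LIGHT_YEAR      # Alpha Centauri system

parallax_rad = AU / d_nearest
parallax_arcsec = math.degrees(parallax_rad) * 3600
print(f"Expected parallax: {parallax_arcsec:.2f} arcseconds")  # ~0.75

# Tycho Brahe's best instruments resolved about one arcminute
# (60 arcseconds), so the predicted shift was ~80 times smaller than
# anything measurable -- "no parallax" was a sound observation.
```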

Actually, the most interesting author on this topic was Riccioli. If you study his writings you will find definite proof that the heliocentrism-geocentrism debate was handled with scientific accuracy and rationality, and that it was not a religious debate at all. He defended geocentrism and presented 126 arguments on the topic (49 for heliocentrism, 77 against), only two of which (both for heliocentrism) had any religious connotations, and he gave valid responses to both of them. This means that he, as a rationalist, presented both sides of the debate in a neutral way, and used reasoning instead of appeal to authority or faith in all cases. This was actually what the pope expected of Galilei, and such a book was what he commissioned from him. Galilei instead wrote a book in which he caricatured the pope as a strawman and, instead of presenting arguments for and against both world-views in a neutral way, produced a book which can be called anything but scientific.

By the way, Riccioli was a Catholic priest. And a scientist. And, it seems to me, also a rationalist. Studying the works of people like him, you might want to change your mind if you perceive a conflict between science and religion, a notion which is part of today's public consciousness only because of a small number of very loud religious fundamentalists, helped by some committed atheists trying to suggest that all theists are like them.

Finally, I would like to copy a short summary about this book:

Journal for the History of Astronomy, Vol. 43, No. 2, p. 215-226
In 1651 the Italian astronomer Giovanni Battista Riccioli published within his Almagestum Novum, a massive 1500 page treatise on astronomy, a discussion of 126 arguments for and against the Copernican hypothesis (49 for, 77 against). A synopsis of each argument is presented here, with discussion and analysis. Seen through Riccioli's 126 arguments, the debate over the Copernican hypothesis appears dynamic and indeed similar to more modern scientific debates. Both sides present good arguments as point and counter-point. Religious arguments play a minor role in the debate; careful, reproducible experiments a major role. To Riccioli, the anti-Copernican arguments carry the greater weight, on the basis of a few key arguments against which the Copernicans have no good response. These include arguments based on telescopic observations of stars, and on the apparent absence of what today would be called "Coriolis Effect" phenomena; both have been overlooked by the historical record (which paints a picture of the 126 arguments that little resembles them). Given the available scientific knowledge in 1651, a geo-heliocentric hypothesis clearly had real strength, but Riccioli presents it as merely the "least absurd" available model - perhaps comparable to the Standard Model in particle physics today - and not as a fully coherent theory. Riccioli's work sheds light on a fascinating piece of the history of astronomy, and highlights the competence of scientists of his time.

The full article can be found at this link. I recommend it to everyone interested in the topic. It shows that the geocentrists of that time had real scientific proofs and real experiments backing their theories, and that for most of them the heliocentrists had no meaningful answers.

 

Disclaimers:

- I'm not a Catholic, so I have no reason to defend the historic Catholic church due to "justifying my insecurities" - a very common accusation against someone perceived to be defending theists in a predominantly atheist discussion forum.

- Any discussion about any perceived proofs for or against the existence of God would be off-topic here. I know it's tempting to show off your best proofs against your carefully constructed straw-men yet again, but this is just not the place for it, as it would detract from the main purpose of this article, as summarized in its introduction.

- English is not my native language. Nevertheless, I hope that what I wrote was clear enough to be understandable. If any part of my article is ambiguous, feel free to ask.

I have great hopes and expectations that the LessWrong community is a suitable place to discuss such ideas. I have experience presenting these ideas on other, predominantly atheist internet communities, and most often the reaction was outright flaming, a hurricane of unexplained downvotes, and prejudiced ad hominem attacks based on the affiliations people assumed I subscribed to. It is common for people to decide whether they believe a claim or not based solely on whether the claim suits their ideological affiliations. The best quality of rationalists, however, should be the ability to change their views when confronted with overwhelming proof, instead of coming up with more and more convoluted explanations. In the time I have spent in the LessWrong community, I have come to respect that the people here can argue in a civil manner, listening to the arguments of others instead of discarding them outright.

 

'Dumb' AI observes and manipulates controllers

33 Stuart_Armstrong 13 January 2015 01:35PM

The argument that AIs provided with a reward channel will observe their controllers and learn to manipulate them is a valid one. Unfortunately, it's often framed in a way that feels counterintuitive or extreme, especially to AI designers. It typically starts with the standard reinforcement learning scenario, then posits that the AI becomes superintelligent and either manipulates the controller with super-social powers, or breaks out and gains control of its reward channel, killing or threatening its controllers.

And that is a fair argument. But conceptually, it leaps from a standard reinforcement learning scenario to a science-fiction-sounding one. It might help to have intermediate scenarios: to show that even lower-intelligence AIs might start exhibiting the same sort of behaviour, long before they reach superintelligence.

So consider the following scenario. Some complex, trainable AI is tasked with writing automated news stories for a student newspaper. It trawls the web and composes its stories, then gets reward and feedback from the editors. Assume there are two editors for this newspaper, and they work on alternate days. The two editors have somewhat different ideas as to what constitutes a good story, so their feedback differs. After a while, the AI finds that it gets higher reward by using a certain style on Monday, Wednesday and Friday, and another style on Tuesday and Thursday - this is a simple consequence of its reward mechanism.

But the rota isn't perfect. Sometimes the Monday editor will edit a story so late on Monday that it's already Tuesday, and sometimes the Tuesday editor will be up early and edit a story at that same hour. This pattern is more common during the holidays, when the editors go to different time zones, often travel, and become more irregular in their editing.

So if the AI is complex and capable enough, then, through simple feedback alone, it will start building up a picture of its editors. It will figure out when they are likely to stick to a schedule, and when they will be more irregular. It will figure out the difference between holidays and non-holidays. Given time, it may be able to track the editors' moods, and it will certainly pick up on any major change in their lives - such as romantic relationships and breakups, which will radically change whether and how it should present stories with a romantic focus.

It will also likely learn the correlation between stories and feedback - maybe presenting a story defined roughly as "positive" will increase subsequent reward for the rest of the day, on all stories. Or maybe this will only work on a certain editor, or only early in the term. Or only before lunch.

Thus the simple trainable AI with a particular focus - write automated news stories - will be trained, through feedback, to learn about its editors/controllers, to distinguish them, to get to know them, and, in effect, to manipulate them.
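As a toy illustration of how little machinery this requires, here is a minimal epsilon-greedy sketch of the scenario; every concrete detail (the editor names, the 10% rota slippage, the reward values) is invented for the example:

```python
import random
from collections import defaultdict

STYLES = ["style_A", "style_B"]

def editor_on_duty(day):
    """Mon/Wed/Fri editor vs Tue/Thu editor, with 10% rota slippage."""
    scheduled = "alice" if day % 7 in (0, 2, 4) else "bob"
    if random.random() < 0.1:
        return "bob" if scheduled == "alice" else "alice"
    return scheduled

def reward(editor, style):
    """Each editor quietly prefers a different style."""
    preferred = "style_A" if editor == "alice" else "style_B"
    return 1.0 if style == preferred else 0.2

q = defaultdict(float)   # value estimate per (weekday, style) pair
n = defaultdict(int)
EPSILON = 0.1            # exploration rate

for day in range(10_000):
    ctx = day % 7
    if random.random() < EPSILON:
        style = random.choice(STYLES)
    else:
        style = max(STYLES, key=lambda s: q[(ctx, s)])
    r = reward(editor_on_duty(day), style)
    n[(ctx, style)] += 1
    q[(ctx, style)] += (r - q[(ctx, style)]) / n[(ctx, style)]

for ctx in range(7):
    best = max(STYLES, key=lambda s: q[(ctx, s)])
    print(f"day {ctx}: submit {best}")
```

Nothing in the training loop mentions editors at all, yet the agent ends up with a per-editor policy, because the editors are the dominant pattern in its reward.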

This may be a useful "bridging example" between standard RL agents and the superintelligent machines.

Immortality: A Practical Guide

31 G0W51 26 January 2015 04:17PM

Immortality: A Practical Guide

Introduction

This article is about how to increase one’s own chances of living forever or, failing that, living for a long time. To be clear, this guide defines death as the long-term loss of one’s consciousness and defines immortality as never-ending life. For those who would like less lengthy information on decreasing one’s risk of death, I recommend reading the sections “Can we become immortal,” “Should we try to become immortal,” and “Cryonics,” in this guide, along with the article Lifestyle Interventions to Increase Longevity.

This article does not discuss how to treat specific diseases you may have. It is not intended as a substitute for the medical advice of physicians. You should consult a physician about any symptoms that may require diagnosis or medical attention. Additionally, if you have any specific conditions, I suggest considering MetaMed to receive customized, albeit perhaps very expensive, information on them.

When reading about the effect sizes in scientific studies, keep in mind that many scientific studies report false-positives and are biased,101 though I have tried to minimize this by maximizing the quality of the studies used. Meta-analyses and scientific reviews seem to typically be of higher quality than other study types, but are still subject to biases.114

Corrections, criticisms, and suggestions for new topics are greatly appreciated. I’ve tried to write this article tersely, so feedback on that would be especially appreciated. Apologies if the article’s font type, size, and color aren’t standard on Less Wrong; I made it in Google Docs without being aware of Less Wrong’s standard, and it would take too much work to change the style of the entire article.

 

Contents

  1. Can we become immortal?

  2. Should we try to become immortal?

  3. Relative importance of the different topics

  4. Food

    1. What to eat and drink

    2. When to eat and drink

    3. How much to eat

    4. How much to drink

  5. Exercise

  6. Carcinogens

    1. Chemicals

    2. Infections

    3. Radiation

  7. Emotions and feelings

    1. Positive emotions and feelings

    2. Psychological distress

    3. Stress

    4. Anger and hostility

  8. Social and personality factors

    1. Social status

    2. Giving to others

    3. Social relationships

    4. Conscientiousness

  9. Infectious diseases

    1. Dental health

  10. Sleep

  11. Drugs

  12. Blood donation

  13. Sitting

  14. Sleep apnea

  15. Snoring

  16. Exams

  17. Genomics

  18. Aging

  19. External causes of death

    1. Transport accidents

    2. Assault

    3. Intentional self harm

    4. Poisoning

    5. Accidental drowning

    6. Inanimate mechanical forces

    7. Falls

    8. Smoke, fire, and heat

    9. Other accidental threats to breathing

    10. Electric current

    11. Forces of nature

  20. Medical care

  21. Cryonics

  22. Money

  23. Future advancements

  24. References

 

Can we become immortal?

In order to potentially live forever, one never needs to make it impossible to die; one instead just needs to have one’s life expectancy increase faster than time passes, a concept known as the longevity escape velocity.61 For example, if one had a 10% chance of dying in their first century of life, but their chance of death decreased by 90% at the end of each century, then one’s chance of ever dying would be 0.1 + 0.1² + 0.1³ + … = 0.111… ≈ 11.1%. When applied to risk of death from aging, this is akin to one’s remaining life expectancy after jumping off a cliff while being affected by gravity and jet propulsion, with gravity being akin to aging and jet propulsion being akin to anti-aging (rejuvenation) therapies, as shown below.

The numbers in the above figure denote plausible ages of individuals when the first rejuvenation therapies arrive. A 30% increase in healthy lifespan would give the users of first-generation rejuvenation therapies 20 years to benefit from second-generation rejuvenation therapies, which could give an additional 30% increase in life span, ad infinitum.61
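As a quick check of the arithmetic above, here is a minimal sketch (my own, not from the article) that sums the per-century death probabilities:

```python
# A 10% chance of dying in the first century, with the per-century
# chance of death falling by 90% each century thereafter.
p_century, total = 0.1, 0.0
for _ in range(50):       # 50 centuries is plenty for convergence
    total += p_century
    p_century *= 0.1      # a 90% reduction each century
print(round(total, 6))    # 0.111111, i.e. about an 11.1% chance of ever dying
```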

As for causes of death, many deaths are strongly age-related. The proportion of deaths that are caused by aging in the industrial world approaches 90%.53 Thus, I suppose postponing aging would drastically increase life expectancy.

As for efforts against aging, the SENS Research Foundation and Science for Life Extension are charitable foundations working to cure aging.54, 55 Additionally, Calico, a Google-backed company, and AbbVie, a large pharmaceutical company, have each committed $250 million to curing aging.56

I speculate that one could additionally decrease risk of death by becoming a cyborg, as mechanical bodies seem easier to maintain than biological ones, though I’ve found no articles discussing this.

Similar to becoming a cyborg, another potential method of decreasing one’s risk of death is mind uploading, which is, roughly speaking, the transfer of most or all of one’s mental contents into a computer.62 However, there are some concerns about the transfer creating a copy of one’s consciousness, rather than being the same consciousness. This issue is made very apparent if the mind-uploading process leaves the original mind intact, making it seem unlikely that one’s consciousness was transferred to the new body.63 Eliezer Yudkowsky doesn’t seem to believe this is an issue, though I haven’t found a citation for this.

With regard to consciousness, it seems that most individuals believe that the consciousness in one’s body is the “same” consciousness as the one that was in one’s body in the past and will be in it in the future. However, I know of no evidence for this. If one’s consciousness isn’t the same as the one that will be in one’s body in the future, and one defines death as one’s consciousness permanently ending, then I suppose one can’t prevent death for any time at all. Surprisingly, I’ve found no articles discussing this possibility.

Although curing aging, becoming a cyborg, and mind uploading may prevent death from disease, they still seem to leave oneself vulnerable to accidents, murder, suicide, and existential catastrophes. I speculate that these problems could be solved by giving an artificial superintelligence the ability to take control of one’s body in order to prevent such deaths from occurring. Of course, this possibility is currently unavailable.

Another potential cause of death is the Sun expanding, which could render Earth uninhabitable in roughly one billion years. Death from this could be prevented by colonizing other planets in the solar system, although eventually the sun would render the rest of the solar system uninhabitable. After this, one could potentially inhabit other stars; it is expected that stars will remain for roughly 10 quintillion years, although some theories predict that the universe will be destroyed in a mere 20 billion years. To continue surviving, one could potentially go to other universes.64 Additionally, there are ideas for space-time crystals that could process information even after heat death (i.e. the “end of the universe”),65 so perhaps one could make oneself composed of the space-time crystals via mind uploading or another technique. There could also be other methods of surviving the conventional end of the universe, and life could potentially have 10 quintillion years to find them.

Yet another potential cause of death is living in a computer simulation that is ended. Living in a computer simulation actually seems not to be very improbable. Nick Bostrom argues that:

...at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

The argument for this is here.100

If one does die, one could potentially be revived. Cryonics, discussed later in this article, may help with this. Additionally, I suppose one could possibly be revived if future intelligences continually create new conscious individuals and eventually create one that has one’s “own” consciousness, though consciousness remains a mystery, so this may not be plausible, and I’ve found no articles discussing this possibility. If the probability of one’s consciousness being revived per unit time does not approach or equal zero as time approaches infinity, then I suppose one is bound to become conscious again, though this scenario may be unlikely. Again, I’ve found no articles discussing this possibility.

As already discussed, in order to live forever, one must either be revived after dying or prevent every cause of death: the consciousness in one’s body not being the same as the one that will be in one’s body in the future, accidents, aging, the sun dying, the universe dying, being in a simulation and having it end, and other, unknown, causes. Keep in mind that adding extra details that aren’t guaranteed to be true can only make events less probable, and that people often don’t account for this.66 A spreadsheet for estimating one’s chance of living forever is here.
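To illustrate, here is a toy version (my own construction, with made-up probabilities) of the kind of estimate such a spreadsheet makes: living forever requires clearing every hurdle, so the probabilities multiply, and each added requirement can only shrink the product.

```python
# Hypothetical hurdles and probabilities, purely for illustration.
hurdles = {
    "survive until rejuvenation therapies arrive": 0.5,
    "aging is actually cured": 0.3,
    "avoid accidents, murder, etc. indefinitely": 0.4,
    "no existential catastrophe": 0.5,
}

p = 1.0
for name, prob in hurdles.items():
    p *= prob
    print(f"after '{name}': {p:.3f}")
# The final product (0.030 here) can only fall as hurdles are added.
```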

 

Should we try to become immortal?

Before deciding whether one should try to become immortal, I suggest learning about the cognitive biases scope insensitivity, hyperbolic discounting, and bias blind spot if you don’t currently know about them. Also, keep in mind that one study found that simply informing people of a cognitive bias made them no less likely to fall prey to it. A study also found that people only partially adjusted for cognitive biases after being told that informing people of a cognitive bias made them no less likely to fall prey to it.67

Many articles arguing against immortality can be found via a quick Google search, including this, this, this, and this. This article, along with its comments, discusses counter-arguments to many of these arguments. The Fable of the Dragon Tyrant provides an argument for curing aging, which can be extended into an argument against mortality as a whole. I suggest reading it.

One can also evaluate the utility of immortality via decision theory. Assuming individuals receive a finite amount of utility per unit time that is never less than some above-zero constant, living forever would give infinitely more utility than living for a finite amount of time. Under these assumptions, in order to maximize utility, one should be willing to accept any finite cost to become immortal. However, the situation is complicated by the potential of becoming immortal and receiving an infinite positive utility unintentionally, in which case one would receive infinite expected utility regardless of whether one tried to become immortal. Additionally, if one has a chance of receiving both infinitely high and infinitely low utility, one’s expected utility would be undefined. Infinite utilities are discussed in “Infinite Ethics” by Nick Bostrom.
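In symbols (my own formalization of the paragraph above, not from the article): if per-period utility is bounded below by a positive constant, lifetime utility diverges for an infinite life but stays finite for any finite one, so any finite cost of pursuing immortality is worth paying under these assumptions:

```latex
% Assumes u_t \ge c > 0 for all periods t.
U_{\infty} = \sum_{t=1}^{\infty} u_t \;\ge\; \sum_{t=1}^{\infty} c = \infty,
\qquad
U_{T} = \sum_{t=1}^{T} u_t < \infty \quad \text{for any finite } T,
% so for any finite cost k: U_{\infty} - k = \infty > U_{T}.
```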

For those interested in decreasing existential risk, living for a very long time, albeit not necessarily forever, may give one more opportunity to do so. This idea can be generalized to many goals one has in life.

On whether one can influence one’s chances of becoming immortal, studies have shown that only roughly 20-30% of longevity in humans is accounted for by genetic factors.68 There are multiple actions one can take to increase one’s chances of living forever; these are what the rest of this article is about. Keep in mind that you should consider continuing to read this article even if you don’t want to try to become immortal, as the article provides information on living longer, even if not forever, as well.

 

Relative importance of the different topics

The figure below gives the relative frequencies of preventable causes of death.1

Some causes of death are excluded from the graph, but are still large causes of death. Most notably, 440,000 deaths in the US, roughly one-sixth of total deaths in the US, are estimated to be from preventable medical errors in hospitals.2

Risk calculators for cardiovascular disease are here and here. Though they seem very simplistic, they may be worth looking at and can probably be completed quickly.

Here are the frequencies of causes of death in the US in 2010, based on another classification:113

  • Heart disease: 596,577

  • Cancer: 576,691

  • Chronic lower respiratory diseases: 142,943

  • Stroke (cerebrovascular diseases): 128,932

  • Accidents (unintentional injuries): 126,438

  • Alzheimer's disease: 84,974

  • Diabetes: 73,831

  • Influenza and Pneumonia: 53,826

  • Nephritis, nephrotic syndrome, and nephrosis: 45,591

  • Intentional self-harm (suicide): 39,518


 

Food

What to eat and drink

Keep in mind that the relationship between health and the consumption of a given type of substance isn’t necessarily linear. That is, some substances are beneficial in small amounts but harmful in large amounts, while others are beneficial in both small and large amounts, with large amounts being no more beneficial than small amounts.

 

Recommendations from The Nutrition Source

The Nutrition Source is part of the Harvard School of Public Health.

Its recommendations:

  • Make ½ of your “plate” consist of a variety of fruits and a variety of vegetables, excluding potatoes, due to potatoes’ negative effect on blood sugar. The Harvard School of Public Health doesn’t specify whether this is based on calories or volume, nor what it means by plate, but presumably ½ of one’s plate means ½ of the solid food consumed.

  • Make ¼ of your plate consist of whole grains.

  • Make ¼ of your plate consist of high-protein foods.

  • Limit red meat consumption.

  • Avoid processed meats.

  • Consume monounsaturated and polyunsaturated fats in moderation; they are healthy.

  • Avoid partially hydrogenated oils, which contain trans fats, which are unhealthy.

  • Limit milk and dairy products to one to two servings per day.

  • Limit juice to one small glass per day.

  • It is important to eat seafood one or two times per week, particularly fatty (dark meat) fish that are richer in EPA and DHA.

  • Limit diet drink consumption or consume in moderation.

  • Avoid sugary drinks like soda, sports drinks, and energy drinks.3

 

Fat

The bottom line is that saturated fats and especially trans fats are unhealthy, while unsaturated fats are healthy and the unsaturated omega-3 and omega-6 fatty acids are essential. The proportion of calories from fat in one’s diet isn’t really linked with disease.

Saturated fat is unhealthy. It’s generally a good idea to minimize saturated fat consumption. The latest Dietary Guidelines for Americans recommend consuming no more than 10% of calories from saturated fat, but the American Heart Association recommends consuming no more than 7%. However, don’t decrease nut, oil, and fish consumption to minimize saturated fat consumption. Foods that contain large amounts of saturated fat include red meat, butter, cheese, and ice cream.

Trans fats are especially unhealthy. For every 2% increase of calories from trans fat, risk of coronary heart disease increases by 23%. The Institute of Medicine states that there are no known requirements for trans fats for bodily functions, so their consumption should be minimized. Partially hydrogenated oils contain trans fats, and foods that contain trans fats are often processed foods. In the US, products can claim to have zero grams of trans fat if they have no more than 0.5 grams of trans fat. Products with no more than 0.5 grams of trans fat that still have non-negligible amounts of it will probably list “partially hydrogenated vegetable oils” or “vegetable shortening” among their ingredients.

Unsaturated fats have beneficial effects, including improving cholesterol levels, easing inflammation, and stabilizing heart rhythms. The American Heart Association has set 8-10% of calories as a target for polyunsaturated fat consumption, though eating more polyunsaturated fat, around 15% of daily calories, in place of saturated fat may further lower heart disease risk. Consuming unsaturated fats instead of saturated fat also prevents insulin resistance, a precursor to diabetes. Monounsaturated fats and polyunsaturated fats are types of unsaturated fats.

Omega-3 fatty acids (omega-3 fats) are a type of unsaturated fat. There are two main types: marine omega-3s and alpha-linolenic acid (ALA). Omega-3 fatty acids, especially marine omega-3s, are healthy. Though one can make most needed types of fats from other fats or substances consumed, omega-3 fat is an essential fat, meaning it cannot be made in the body and must come from food. Most Americans don’t get enough omega-3 fats.

Marine omega-3s are primarily found in fish, especially fatty (dark meat) fish. A comprehensive review found that eating roughly two grams per week of omega-3s from fish, equal to about one or two servings of fatty fish per week, decreased risk of death from heart disease by more than one-third. Though fish contain mercury, this is insignificant compared to the positive health effects of their consumption (for the consumer, not the fish). However, it does benefit one’s health to consult local advisories to determine how much local freshwater fish to consume.

ALA may be an essential nutrient, and increased ALA consumption may be beneficial. ALA is found in vegetable oils, nuts (especially walnuts), flax seeds, flaxseed oil, leafy vegetables, and some animal fat, especially those from grass-fed animals. ALA is primarily used as energy, but a very small amount of it is converted into marine omega-3s. ALA is the most common omega-3 in western diets.

Most Americans consume much more omega-6 fatty acids (omega-6 fats) than omega-3 fats. Omega-6 fat is an essential nutrient and its consumption is healthy. Some sources of it include corn and soybean oils. The Nutrition Source stated that the theory that omega-3 fats are healthier than omega-6 fats isn’t supported by evidence. However, in an image from The Nutrition Source, seafood omega-6 fats were ranked as healthier than plant omega-6 fats, which were ranked as healthier than monounsaturated fats, although such a ranking was, to the best of my knowledge, never stated in the text.3

 

Carbohydrates

There seem to be two main determinants of a carbohydrate source’s effect on health: nutrition content and effect on blood sugar. The bottom line is that consuming whole grains and other less-processed grains, and decreasing refined-grain consumption, improves health. Additionally, moderately low-carbohydrate diets can increase heart health as long as protein and fat come from healthy sources, though the type of carbohydrate is at least as important as the amount of carbohydrate in a diet.

Glycemic index is a measure of how much a food increases blood sugar levels. Consuming carbohydrates that cause blood-sugar spikes can increase risk of heart disease and diabetes at least as much as consuming too much saturated fat does. Some factors that increase the glycemic index of foods include:

  • Being a refined grain as opposed to a whole grain.

  • Being finely ground, which is why consuming whole grains in their whole form, such as rice, can be healthier than consuming them as bread.

  • Having less fiber.

  • Being more ripe, in the case of fruits and vegetables.

  • Having a lower fat content, as meals with fat are converted more slowly into sugar.

Vegetables (excluding potatoes), fruits, whole grains, and beans, are healthier than other carbohydrates. Potatoes have a negative effect on blood sugar, due to their high glycemic index. Information on glycemic index and the index of various foods is here.

Whole grains also contain essential minerals such as magnesium, selenium, and copper, which may protect against some cancers. Refining grains takes away 50% of the grains’ B vitamins, 90% of vitamin E, and virtually all fiber. Sugary drinks usually have little nutritional value.

Identifying whole grains as foods that have at least one gram of fiber for every ten grams of carbohydrate is a more effective measure of healthfulness than checking for a whole grain as the first ingredient, any whole grain as the first ingredient without added sugars in the first three ingredients, the word “whole” before any grain ingredient, or the whole-grain stamp.3
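As a small illustration (mine, not The Nutrition Source's) of that fiber-to-carbohydrate heuristic:

```python
def looks_like_whole_grain(fiber_g: float, carbs_g: float) -> bool:
    # At least 1 g of fiber per 10 g of carbohydrate on the label.
    return carbs_g > 0 and fiber_g * 10 >= carbs_g

print(looks_like_whole_grain(fiber_g=3, carbs_g=23))  # True: plausibly whole grain
print(looks_like_whole_grain(fiber_g=1, carbs_g=25))  # False: likely refined
```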

 

Protein

Proteins are broken down to form amino acids, which are needed for health. Though the body can make some amino acids by modifying others, some must come from food; these are called essential amino acids. The Institute of Medicine recommends that adults get a minimum of 0.8 grams of protein per kilogram of body weight per day, and sets the range of acceptable protein intake to 10-35% of calories per day. The US recommended daily allowance for protein is 46 grams per day for women over 18 and 56 grams per day for men over 18.
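For example (my own arithmetic, using the Institute of Medicine's 0.8 g/kg floor quoted above and a hypothetical 70 kg adult):

```python
def min_protein_g_per_day(weight_kg: float) -> float:
    # The Institute of Medicine's minimum: 0.8 g protein per kg per day.
    return 0.8 * weight_kg

print(min_protein_g_per_day(70.0))  # 56.0 g/day for a 70 kg adult
```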

Animal products tend to provide all essential amino acids, but other sources lack some. Thus, vegetarians need to consume a variety of sources of amino acids each day to get all the needed types. Fish, chicken, beans, and nuts are healthy protein sources.3

 

Fiber

There are two types of fiber: soluble fiber and insoluble fiber. Both have important health benefits, so one should eat a variety of foods to get both.94 The best sources of fiber are whole grains, fresh fruits and vegetables, legumes, and nuts.3

 

Micronutrients

There are many micronutrients in food; getting enough of them is important. Most healthy individuals can get sufficient micronutrients by consuming a wide variety of healthy foods, such as fruits, vegetables, whole grains, legumes, and lean meats and fish. However, supplementation may be necessary for some. Information about supplements is here.110

Concerning supplementation, potassium, iodine, and lithium supplementation is recommended in the first-place entry in the Quantified Health Prize, a contest on determining good mineral intake levels. However, others suggest that potassium supplementation isn’t necessarily beneficial, as shown here. I’m somewhat skeptical that these supplements are beneficial, as I have not found other sources recommending their supplementation. The suggested supplementation levels are in the entry.

Note that food processing typically decreases micronutrient levels, as described here. In general, it seems cooking, draining, and drying foods decrease micronutrient levels sizably, potentially taking half of the nutrients away, while freezing and reheating take away relatively few nutrients.111

One micronutrient worth discussing is sodium. Some sodium is needed for health, but most Americans consume more sodium than needed. However, recommendations on ideal sodium levels vary. The US government recommends limiting sodium consumption to 2,300mg/day (one teaspoon). The American Heart Association recommends limiting sodium consumption to 1,500mg/day (⅔ of a teaspoon), especially for those who are over 50, have high or elevated blood pressure, have diabetes, or are African American.3 However, as RomeoStevens pointed out, the Institute of Medicine found that there’s inconclusive evidence that decreasing sodium consumption below 2,300mg/day affects mortality,115 and some meta-analyses have suggested that there is a U-shaped relationship between sodium and mortality.116, 117

Vitamin D is another micronutrient that’s important for health. It can be obtained from food or made in the body after sun exposure. Most people who live farther north than San Francisco or don’t go outside for at least fifteen minutes when it’s sunny are vitamin D deficient. Vitamin D deficiency increases the risk of many chronic diseases, including heart disease, infectious diseases, and some cancers. However, there is controversy about optimal vitamin D intake. The Institute of Medicine recommends getting 600 to 4,000 IU/day, though it acknowledged that there was no good evidence of harm at 4,000 IU/day. The Nutrition Source states that these recommendations are too low and fail to account for new evidence. The Nutrition Source states that for most people, supplements are the best source of vitamin D, but most multivitamins have too little vitamin D in them. The Nutrition Source recommends considering, and talking to a doctor about, taking an additional multivitamin if you take less than 1,000 IU of vitamin D, especially if you have little sun exposure.3

 

Blood pressure

Information on blood pressure is here in the section titled “Blood Pressure.”

 

Cholesterol and triglycerides

Information on optimal amounts of cholesterol and triglycerides are here.

 

The biggest influences on cholesterol are the fats and carbohydrates in one’s diet, and cholesterol consumption generally has a far weaker influence. However, some people’s cholesterol levels rise and fall very quickly with the amount of cholesterol consumed. For them, decreasing cholesterol consumption from food can have a considerable effect on cholesterol levels. Trial and error is currently the only way of determining whether one’s cholesterol levels rise and fall very quickly with the amount of cholesterol consumed.

 

Antioxidants

Despite their initial hype, randomized controlled trials have offered little support for the benefit of single antioxidants, though studies are inconclusive.3

 

Dietary reference intakes

For the numerically inclined, the Dietary Reference Intake provides quantitative guidelines on good nutrient consumption amounts for many nutrients, though it may be harder to use for some, due to its quantitative nature.

 

Drinks

The Nutrition Source and SFGate state that water is the best drink,3, 112 though I don’t know why it’s considered healthier than drinks such as tea.

Unsweetened tea decreases the risk of many diseases, likely largely due to polyphenols, an antioxidant, in it. Despite antioxidants typically having little evidence of benefit, I suppose polyphenols are relatively beneficial. All teas have roughly the same levels of polyphenols except decaffeinated tea,3 which has fewer polyphenols.96 Research suggests that proteins and possibly fat in milk decrease the antioxidant capacity of tea.

It’s considered safe to drink up to six cups of coffee per day. Unsweetened coffee is healthy and may decrease some disease risks, though coffee may slightly increase blood pressure. Some people may want to consider avoiding coffee or switching to decaf, especially women who are pregnant or people who have a hard time controlling their blood pressure or blood sugar. The Nutrition Source states that it’s best to brew coffee with a paper filter to remove a substance that increases LDL cholesterol, despite consumed cholesterol typically having a very small effect on the body’s cholesterol level.

Alcohol increases risk of diseases for some people3 and decreases it for others.3, 119 Heavy alcohol consumption is a major cause of preventable death in most countries. For some groups of people, especially pregnant people, people recovering from alcohol addiction, and people with liver disease, alcohol causes greater health risks and should be avoided. The likelihood of becoming addicted to alcohol can be genetically determined. Moderate drinking, generally defined as no more than one or two drinks per day for men, can increase colon and breast cancer risk, but these effects are offset by decreased heart disease and diabetes risk, especially in middle age, when heart disease begins to account for an increasingly large proportion of deaths. However, alcohol consumption won’t decrease cardiovascular disease risk much for those who are thin, physically active, don’t smoke, eat a healthy diet, and have no family history of heart disease. Some research suggests that red wine, particularly when consumed after a meal, has more cardiovascular benefits than beer or spirits, but alcohol choice still has little effect on disease risk. In one study, moderate drinkers were 30-35% less likely to have heart attacks than non-drinkers, and men who drank daily had lower heart attack risk than those who drank once or twice per week.

There’s no need to drink more than one or two glasses of milk per day. Less milk is fine if calcium is obtained from other sources.

The health effects of artificially sweetened drinks are largely unknown. Oddly, they may also cause weight gain. It’s best to limit consuming them if one drinks them at all.

Sugary drinks can cause weight gain, as they aren’t as filling as solid food and have high sugar content. They also increase the risk of diabetes, heart disease, and other diseases. Fruit juice has more calories and less fiber than whole fruit and is reportedly no better than soft drinks.3

 

Solid food

Fruits and vegetables are an important part of a healthy diet. Eating a variety of them is as important as eating many of them.3 Fish and nut consumption is also very healthy.98

Processed meat, on the other hand, is shockingly bad.98 A meta-analysis found that processed meat consumption is associated with a 42% increased risk of coronary heart disease (relative risk per 50g serving per day; 95% confidence interval: 1.07-1.89) and a 19% increased risk of diabetes.97 Despite this, a bit of red meat consumption has been found to be beneficial.98 Consumption of well-done, fried, or barbecued meat has been associated with certain cancers, presumably due to carcinogens made in the meat from being cooked, though this link isn’t definitive. The amount of carcinogens increases with increased cooking temperature (especially above 300ºF), increased cooking time, charring, or exposure to smoke.99

Eating less than one egg per day doesn’t increase heart disease risk in healthy individuals and can be part of a healthy diet.3

Organic foods have lower levels of pesticides than non-organic foods, though the residues of most organic and non-organic products don’t exceed government safety thresholds. Washing fresh fruits and vegetables is recommended, as it removes bacteria and some, though not all, pesticide residues. Organic foods probably aren’t more nutritious than non-organic foods.103

 

When to eat and drink

A randomized controlled trial found an increase in blood sugar variation for subjects who skipped breakfast.6 Increasing meal frequency and decreasing meal size appears to have some metabolic advantages, and doesn’t appear to have metabolic disadvantages, though note that this source is old, from 1994.7 However, Mayo Clinic states that fasting for 1-2 days per week may increase heart health.32 Perhaps it is optimal for health to fast, but to have high meal frequency when not fasting.

 

How much to eat

One’s weight change is determined by the number of calories consumed minus the number of calories burnt. The Centers for Disease Control and Prevention (CDC) has guidelines for healthy weights and information on how to lose weight.
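As a rough sketch of that calorie balance (my addition; the 3,500 kcal-per-pound figure is a common rule of thumb, not a CDC number, and real-world weight change is only approximately linear):

```python
KCAL_PER_POUND = 3500  # rough energy content of a pound of body fat

def weekly_weight_change_lb(kcal_in_per_day, kcal_out_per_day):
    # A sustained daily surplus (or deficit) accumulated over a week.
    return (kcal_in_per_day - kcal_out_per_day) * 7 / KCAL_PER_POUND

print(weekly_weight_change_lb(2500, 2200))  # 0.6 lb/week from a 300 kcal/day surplus
```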

Some advocate restricting calorie intake to a greater extent, which is known as calorie restriction. It’s unknown whether calorie restriction increases lifespan in humans, but moderate calorie restriction with adequate nutrition decreases risk of obesity, type 2 diabetes, inflammation, hypertension, cardiovascular disease, and metabolic risk factors associated with cancer, and is the most effective way of consistently increasing lifespan in a variety of organisms. The CR Society has information on getting started with calorie restriction.4

 

How much to drink

Generally, drinking enough to rarely feel thirsty and to have colorless or light yellow urine is sufficient. It’s also possible to drink too much water, though this is rare in healthy adults who eat an average American diet; endurance athletes are at higher risk.10

 

Exercise

A meta-analysis found the data in the following graphs for people aged over 40.8

A weekly total of roughly five hours of vigorous exercise has been identified by several studies as the safe upper limit for life expectancy. It may be beneficial to take one or two days off from vigorous exercise per week and to limit chronic vigorous exercise to no more than 60 min/day.9 Based on the above, my best guess for the optimal amount of exercise for longevity is roughly 30 MET-hr/wk. Calisthenics burn 6-10 METs,11 so an example exercise routine to get this amount of exercise is doing calisthenics 38 minutes per day, 6 days/wk, as the sketch below shows. Guides on how to exercise are available, e.g. this one.
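Here is the arithmetic behind that routine (the 30 MET-hr/wk target and 6-10 MET range come from the text; assuming the 8 MET midpoint is my choice):

```python
target_met_hr_per_week = 30
met_intensity = 8        # calisthenics: 6-10 METs; midpoint assumed
days_per_week = 6

hours_per_week = target_met_hr_per_week / met_intensity   # 3.75 h
minutes_per_day = hours_per_week * 60 / days_per_week     # 37.5 min
print(round(minutes_per_day))  # ~38 minutes/day, matching the article
```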

 

Carcinogens

Carcinogens are cancer-causing substances. Since cancer causes death, decreasing exposure to carcinogens presumably decreases one’s risk of death. Some foods are also carcinogenic, as discussed in the “Food” section.

 

Chemicals

Tobacco use is the greatest avoidable risk factor for cancer worldwide, causing roughly 22% of cancer deaths. Additionally, second hand smoke has been proven to cause lung cancer in nonsmoking adults.

Alcohol use is a risk factor for many types of cancer. The risk of cancer increases with the amount of alcohol consumed, and substantially increases if one is also a heavy smoker. The attributable fraction of cancer from alcohol use varies by gender, due to differences in consumption level. E.g. 22% of mouth and oropharynx cancer is attributable to alcohol in men, but only 9% in women.

Environmental air pollution accounts for 1-4% of cancer.84 Diesel exhaust is one type of carcinogenic air pollution. Those with the highest exposure to diesel exhaust are exposed to it occupationally. As for residential exposure, diesel exhaust is highest in homes near roads where traffic is heaviest. Limiting time spent near large sources of diesel exhaust decreases exposure. Benzene, another carcinogen, is found in gasoline and vehicle exhaust, but exposure can also be caused by being in areas with unventilated fumes from gasoline, glues, solvents, paints, and art supplies. Exposure can occur via inhalation or skin contact.86

Some occupations expose workers to occupational carcinogens.84 A list of some of these occupations is here; all of them involve manual labor, except for hospital-related jobs.87

 

Infections

Infections are responsible for 6% of cancer deaths in developed nations.84 Many of these infections are spread via sexual contact and needle sharing, and some can be vaccinated against.85

 

Radiation

Ionizing radiation is carcinogenic to humans. Residential exposure to radon gas, the largest source of radon exposure for most people, is estimated to cause 3-14% of lung cancers.84 Being exposed to radon and cigarette smoke together increases one’s cancer risk much more than either does separately. Radon levels vary greatly depending on where one lives, and radon is usually higher inside buildings, especially on levels closer to the ground, such as basements. The EPA recommends taking action to reduce radon levels if they are greater than or equal to 4.0 pCi/L. Radon levels can be reduced by a qualified contractor. Reducing radon levels without proper training and equipment can increase instead of decrease them.88

Some medical tests can also increase exposure to radiation. The EPA estimates that exposure to 10 mSv from a medical imaging test increases risk of cancer by roughly 0.05%. To decrease exposure to radiation from medical imaging tests, one can ask whether there are ways to shield the parts of one’s body that aren’t being tested, and make sure the doctor performing the test is qualified.89

 

Small doses of ionizing radiation increase risk by a very small amount. Most studies haven’t detected increased cancer risk in people exposed to low levels of ionizing radiation. For example, people living at higher altitudes don’t have noticeably higher cancer rates than other people. In general, cancer risk from radiation increases as the dose of radiation increases, and there is thought to be no safe level of exposure. Ultraviolet radiation is a type of radiation that can be ionizing. Sunlight is the main source of ultraviolet radiation.84

Factors that increase one’s exposure to ultraviolet radiation when outside include:

  • Time of day. Almost ⅓ of UV radiation hits the surface between 11AM and 1PM, and ¾ hits the surface between 9AM and 5PM.

  • Time of year. UV radiation is greater during summer. This factor is less significant near the equator.

  • Altitude. High elevation causes more UV radiation to penetrate the atmosphere.

  • Clouds. Sometimes clouds decrease levels of UV radiation because they block UV radiation from the sun. Other times, they increase exposure because they reflect UV radiation.

  • Reflection off surfaces, such as water, sand, snow, and grass increases UV radiation.

  • Ozone density, because ozone stops some UV radiation from reaching the surface.

Some tips to decrease exposure to UV radiation:

  • Stay in the shade. This is one of the best ways to limit exposure to UV radiation in sunlight.

  • Cover yourself with clothing.

  • Wear sunglasses.

  • Use sunscreen on exposed skin.90

 

Tanning beds are also a source of ultraviolet radiation. Using them can increase one’s chance of getting skin melanoma by at least 75%.91

 

Vitamin D3 is also produced from ultraviolet radiation, although the American Society for Clinical Nutrition states that vitamin D is readily available from supplements and that the controversy about reducing ultraviolet radiation exposure was fueled by the tanning industry.92

 

There could be some risk of cell phone use being associated with cancer, but the evidence is not strong enough to be considered causal and needs to be investigated further.93, 118

 

Emotions and feelings

Positive emotions and feelings

A review suggested that positive emotions and feelings decreased mortality. Proposed mechanisms include positive emotions and feelings being associated with better health practices such as improved sleep quality, increased exercise, and increased dietary zinc consumption, as well as lower levels of some stress hormones. It has also been hypothesized to be associated with other health-relevant hormones, various aspects of immune function, and closer and more social contacts.33 Less Wrong has a good article on how to be happy.

 

Psychological distress

A meta-analysis was conducted on psychological distress. To measure psychological distress, it used the GHQ-12 score, which measures symptoms of anxiety, depression, social dysfunction, and loss of confidence. The scores range from 0 to 12, with 0 being asymptomatic, 1-3 being subclinically symptomatic, 4-6 being symptomatic, and 7-12 being highly symptomatic. It found the results shown in the following graphs.

http://www.bmj.com/content/bmj/345/bmj.e4933/F3.large.jpg?width=800&height=600

This association was essentially unchanged after controlling for a range of covariates including occupational social class, alcohol intake, and smoking. However, reverse causality may still partly explain the association.30

 

Stress

A study found that individuals with moderate and high stress levels, as opposed to low stress levels, had hazard ratios (HRs) of mortality of 1.43 and 1.49, respectively.27 A meta-analysis found that high perceived stress, as opposed to low perceived stress, had a coronary heart disease relative risk (RR) of 1.27. The mean age of participants in the studies used in the meta-analysis varied from 44 to 72.5 years and was significantly and positively associated with effect size, explaining 46% of the variance in effect sizes between the studies.28

A cross-sectional study (which is a relatively weak study design) not in the aforementioned meta-analysis used 28,753 subjects to study the effect on mortality of the amount of stress and the perception of whether stress is harmful or not. It found that neither of these factors predicted mortality independently, but that taken together, they did have a statistically significant effect. Subjects who reported much stress and that stress has a large effect on health had an HR of 1.43 (95% CI: 1.2, 1.7). Reverse causality may partially explain this, though, as those who have had negative health impacts from stress may have been more likely to report that stress influences health.83

 

Anger and hostility

A meta-analysis found that after fully controlling for behavioral covariates, such as smoking, physical activity or body mass index, and socioeconomic status, anger and hostility were not associated with coronary heart disease (CHD), though the results are inconclusive.34

 

Social and personality factors

Social status

A review suggested that social status is linked to health via gender, race, ethnicity, education levels, socioeconomic differences, family background, and old age.46

 

Giving to others

An observational study found that stressful life events were not a predictor of mortality for those who engaged in unpaid helping behavior directed towards friends, neighbors, or relatives who did not live with them. This association may be due to giving to others causing one to have a sense of mattering, opportunities for generativity, improved social well-being, the emotional state of compassion, and the physiology of the caregiving behavioral system.35

 

Social relationships

A large meta-analysis found that the odds ratio of mortality for having weak social relationships is 1.5 (95% confidence interval (CI): 1.42 to 1.59). However, this may be a conservative estimate. Many of the studies used in the meta-analysis used single-item measures of social relations, but the size of the association was greatest in studies that used more complex measurements. Additionally, some of the studies in the meta-analysis adjusted for risk factors that may be mediators of social relationships’ effect on mortality (e.g. behavior, diet, and exercise). Many of the studies in the meta-analysis also ignored the quality of social relationships, but research suggests that negative social relationships are linked to increased mortality. Thus, the effect of social relationships on mortality could be even greater than the study found.

Concerning causation, social relationships are linked to better health practices and psychological processes, such as stress and depression, which influence health outcomes on their own. However, the meta-analysis also states that social relationships exert an independent effect. Some studies show that social support is linked to better immune system functioning and to immune-mediated inflammatory processes.36

 

Conscientiousness

A cohort study with 468 deaths found that each one standard deviation decrease in conscientiousness was associated with the HR being multiplied by 1.07 (95% CI: 0.98-1.17), though it gave no mechanism for the association.39 Although it adjusted for several variables (e.g. socioeconomic status, smoking, and drinking), it didn’t adjust for drug use, risky driving, risky sex, suicide, and violence, which were all found by a meta-analysis to have statistically significant associations with conscientiousness.40 Overall, conscientiousness doesn’t seem to have a significant effect on mortality.

 

Infectious diseases

Mayo clinic has a good article on preventing infectious disease.

 

Dental health

A cohort study of 5611 adults found that compared to men with 26-32 teeth, men with 16-25 teeth had an HR of 1.03 (95% CI: 0.91-1.17), men with 1-15 teeth had an HR of 1.21 (95% CI: 1.05-1.40) and men with 0 teeth had an HR of 1.18 (95% CI: 1.00-1.39).

In the study, men who never brushed their teeth at night had a HR of 1.34 (95% CI: 1.14-1.57) relative to those who did every night. Among subjects who brushed at night, HR was similar between those who did and didn’t brush daily in the morning or day. The HR for men who brushed in the morning every day but not at night every day was 1.19 (95% CI: 0.99-1.43).

In the study, men who never used dental floss had an HR of 1.27 (95% CI: 1.11-1.46) and those who sometimes used it had an HR of 1.14 (95% CI: 1.00-1.30), compared to men who used it every day. Among subjects who brushed their teeth at night daily, not flossing was associated with a significantly increased HR.

Use of toothpicks didn’t significantly decrease HR and mouthwash had no effect.

The study had a list of other studies on the effect of dental health on mortality. It seems that almost all of them found a negative correlation between dental health and risk of mortality, although the study didn’t state its methodology for selecting the studies shown. I did a crude review of other literature by only looking at abstracts and found that five studies concluded that poor dental health increased risk of mortality and one concluded it didn’t.

Regarding possible mechanisms, the study says that toothpaste helps prevent dental caries and that dental floss is the most effective means of removing interdental plaque and decreasing interdental gingival inflammation.38

 

Sleep

It seems that getting too little or too much sleep likely increases one’s risk of mortality, but it’s hard to tell exactly how much is too much and how little is too little.

 

One review found that the association between amount of sleep and mortality is inconsistent in studies and that what association does exist may be due to reverse-causality.41 However, a meta-analysis found that the RR associated with short sleep duration (variously defined as sleeping from < 8 hrs/night to < 6 hrs/night) was 1.10 (95% CI: 1.06-1.15). It also found that the RR associated with long sleep duration (variously defined as sleeping for > 8 hrs/night to > 10 hrs per night) compared with medium sleep duration (variously defined as sleeping for 7-7.9 hrs/night to 9-9.9 hrs/night) was 1.23 (95% CI: 1.17 - 1.30).42

 

The National Heart, Lung, and Blood Institute and Mayo Clinic recommend adults get 7-8 hours of sleep per night, although they also say sleep needs vary from person to person. They give no method of determining optimal sleep for an individual, and don’t say whether their recommendations are for optimal longevity, optimal productivity, something else, or a combination of factors.43 The Harvard Medical School implies that one’s optimal amount of sleep is enough sleep to not need an alarm to wake up, though it didn’t specify the criteria for determining optimality either.45

 

Drugs

None of the drugs I’ve looked into have a beneficial effect for people without a specific disease or risk factor. Notes on them are here.

 

Blood donation

A quasi-randomized experiment, with validity near that of a randomized trial, suggested that blood donation doesn’t significantly decrease risk of coronary heart disease (CHD). Observational studies have shown much lower CHD incidence among donors, although the authors of the experiment suspect that bias and reverse causation played a role in this.29 That said, a review found that reverse causation accounted for only 30% of the effect of blood donation, though I haven’t been able to find the review. RomeoStevens suggests that the potential benefits of blood donation are high enough and the costs low enough that blood donation is worth doing.120

 

Sitting

After adjusting for amount of physical activity, a meta-analysis estimated that for every one-hour increment of total sitting time in the intervals 0-3, >3-7, and >7 h/day, the hazard ratios of mortality were 1.00 (95% CI: 0.98-1.03), 1.02 (95% CI: 0.99-1.05), and 1.05 (95% CI: 1.02-1.08), respectively. It proposed no mechanism for sitting time having this effect,37 so the association might be due to confounding variables it didn’t control for.

 

Sleep apnea

Sleep apnea is an independent risk factor for mortality and cardiovascular disease.26 Symptoms and other information on sleep apnea are here.

 

Snoring

A meta-analysis found that self-reported habitual snoring had a small but statistically significant association with stroke and coronary heart disease, but not with cardiovascular disease and all-cause mortality [HR 0.98 (95% CI: 0.78-1.23)]. Whether the risk is due to obstructive sleep apnea is controversial. Only the abstract can be viewed for free, so I’m basing this on the abstract alone.31

 

Exams

The organization Susan G. Komen, citing a meta-analysis that used randomized controlled trials, doesn’t recommend breast self exams as a screening tool for breast cancer, as it hasn’t been shown to decrease cancer death. However, it still stated that it is important to be familiar with one’s breasts’ appearance and how they normally feel.49 According to the Memorial Sloan Kettering Cancer Center, no study has been able to show a statistically significant decrease in breast cancer deaths from breast self-exams.50 The National Cancer Institute states that breast self-examinations haven’t been shown to decrease breast cancer mortality, but does increase biopsies of benign breast lesions.51

The American Cancer Society doesn’t recommend testicular self-exams for all men, as they haven’t been studied enough to determine whether they decrease mortality. However, it states that men with risk factors for testicular cancer (e.g. an undescended testicle, previous testicular cancer, or a family member who previously had testicular cancer) should consider self-exams and discuss them with a doctor. The American Cancer Society also recommends testicular exams as part of routine cancer-related check-ups.52

 

Genomics

Genomics is the study of the genes in one’s genome, and may help increase health by using knowledge of one’s genes to personalize treatment. However, it hasn’t proved useful for most people; recommendations rarely change after genomic testing. Still, genomics has much future potential.102

 

Aging

As I said in the section “Can we become immortal,” the proportion of deaths that are caused by aging in the industrial world approaches 90%,53 but some organizations and companies are working on curing aging.54, 55, 56

One could support these organizations in an effort to hasten the development of anti-aging therapies, although I doubt an individual would have a noticeable impact on one’s own chance of death unless one is very wealthy. That said, I have little knowledge of investments, but I suppose investing in companies working on curing aging may be beneficial: if they succeed, they may offer an enormous return on investment, and if they fail, one would probably die anyway, so losing one’s money may not be as bad. Calico currently isn’t a public stock, though.

 

External causes of death

Unless otherwise specified, graphs in this section are on data collected from American citizens aged 15-24, as, based on the Less Wrong census results, this seems to be the most probable demographic that will read this. For this demographic, external causes cause 76% of deaths. Note that although this is true, one is much more likely to die when older than when aged 15-24, and older individuals are much more likely to die from disease than from external causes of death. Thus, I think it’s more important when young to decrease risk of disease than risk from external causes of death. The graph below shows the percentage of total deaths from external causes caused by various causes.21

 

Transport accidents

Below are the relative death rates of specified means of transportation for people in general:71

Much information about preventing death from car crashes is here; more is here, here, here, and here.

 

Assault

Lifehacker's “Basic Self-Defense Moves Anyone Can Do (and Everyone Should Know)” gives a basic introduction to self-defense.

 

Intentional self harm

Intentional self-harm, such as suicide, presumably increases one’s risk of death.47 Mayo Clinic has a guide on preventing suicide. I recommend looking at it if you are considering killing yourself. Additionally, if you are considering killing yourself, I suggest reviewing the potential rewards of achieving immortality from the section “Should we try to become immortal.”

 

Poisoning

What to do if a poisoning occurs

CDC recommends staying calm, dialing 1-800-222-1222, and having this information ready:

  • Your age and weight.

  • If available, the container of the poison.

  • The time of the poison exposure.

  • The address where the poisoning occurred.

It also recommends staying on the phone and following the instructions of the emergency operator or poison control center.18

 

Types of poisons

Below is a graph of the risk of death per type of poison.21

Some types of poisons:

  • Medicine overdoses.

  • Some household chemicals.

  • Recreational drug overdoses.

  • Carbon monoxide.

  • Metals such as lead and mercury.

  • Plants12 and mushrooms.14

  • Presumably some animals.

  • Some fumes, gases, and vapors.15

 

Recreational drugs

Using recreational drugs increases risk of death.

 

Medicine overdoses and household chemicals

CDC has tips for these here.

 

Carbon monoxide

CDC and Mayo Clinic have tips for this here and here.

 

Lead

Lead poisoning causes 0.2% of deaths worldwide and 0.0% of deaths in developed countries.22 Children under the age of 6 are at higher risk of lead poisoning.24 Thus, for those who aren’t children, learning more about preventing lead poisoning seems like more effort than it’s worth. No completely safe blood lead level has been identified.23

 

Mercury

MedlinePlus has an article on mercury poisoning here.

 

Accidental drowning

Information on preventing accidental drowning from CDC is here and here.

 

Inanimate mechanical forces

Over half of deaths from inanimate mechanical forces for Americans aged 15-24 are from firearms. Many of the other deaths are from explosions, machinery, and getting hit by objects. I suppose using common sense, precaution, and standard safety procedures when dealing with such things is one’s best defense.

 

Falls

Again, I suppose common sense and precaution is one’s best defense. Additionally, alcohol and substance abuse is a risk factor of falling.72

 

Smoke, fire, and heat

Owning smoke alarms halves one’s risk of dying in a home fire.73 Again, common sense when dealing with fires and items potentially causing fires (e.g. electrical wires and devices) seems effective.

 

Other accidental threats to breathing

Deaths from other accidental threats to breathing are largely caused by strangling or choking on food or gastric contents, and occasionally by being in a cave-in or trapped in a low-oxygen environment.21 Choking can be caused by eating quickly or laughing while eating.74 If you are choking:

  • Forcefully cough. Lean as far forwards as you can and hold onto something that is firmly anchored, if possible. Breathe out and then take a deep breath in and cough; this may eject the foreign object.

  • Attract someone’s attention for help.75

 

Additionally, choking can be caused by vomiting while unconscious, which can be caused by being very drunk.76 I suggest lying in the recovery position if you think you may vomit while unconscious, so as to decrease the chance of choking on vomit.77 Don’t forget to use common sense.

 

Electric current

Electric shock is usually caused by contact with poorly insulated wires or ungrounded electrical equipment, using electrical devices while in water, or lightning.78 Roughly ⅓ of deaths from electricity are caused by exposure to electric transmission lines.21

 

Forces of nature

Deaths from forces of nature (for Americans aged 15-24), in descending order of number of deaths caused, are: exposure to cold, exposure to heat, lightning, avalanches or other earth movements, cataclysmic storms, and floods.21 Here are some tips to prevent these deaths:

  • When traveling in cold weather, carry emergency supplies in your car and tell someone where you’re heading.79

  • Stay hydrated during hot weather.80

  • Safe locations from lightning include substantial buildings and hard-topped vehicles. Safe locations don’t include small sheds, rain shelters, and open vehicles.

  • Wait until there are no thunderstorm clouds in the area before going to a location that isn’t lightning safe.81

 

Medical care

Since medical care is tasked with treating diseases, receiving medical care when one has an illness presumably decreases risk of death. Though necessary medical care may be essential when one is ill, a review estimated that preventable medical errors contributed to roughly 440,000 deaths per year in the US, which is roughly one-sixth of total deaths in the US. It gave a lower limit of 210,000 deaths per year.

The frequency of deaths from preventable medical errors varied across the studies used in the review, with a hospital that was shown to put much effort into improving patient safety having a lower proportion of deaths from preventable medical errors than others.57 Thus, I suppose it would be beneficial to go to hospitals that are known for their dedication to patient safety. There are several rankings of hospital safety available on the internet, such as this one. Information on how to help prevent medical errors is found here and under the “What Consumers Can Do” section here. One rare medical error is having surgery done on the wrong body part. The New York Times gives tips for preventing this here.

Additionally, I suppose it may be good to live relatively close to a hospital so as to be able to quickly reach it in emergencies, though I’ve found no sources stating this.

A common form of medical care is the general health check. A comprehensive Cochrane review with 182,880 subjects concluded that general health checks are probably not beneficial.107 A meta-analysis found that general health checks are associated with small but statistically significant benefits in factors related to mortality, such as blood pressure and body mass index. However, it found no significant association with mortality itself.109 The New York Times acknowledged that health checks are probably not beneficial and gave some explanation of why general health checks are nonetheless still common.108 However, the CDC and MedlinePlus recommend getting routine general health checks. They cited no studies to support their claims.104, 106 When I contacted the CDC about it, it responded, “Regular health exams and tests can help find problems before they start. They also can help find problems early, when your chances for treatment and cure are better. By getting the right health services, screenings, and treatments, you are taking steps that help your chances for living a longer, healthier life,” a claim that doesn’t seem supported by evidence. It also stated, “Although CDC understands you are concerned, the agency does not comment on information from unofficial or non-CDC sources.” I never heard back from MedlinePlus.

 

Cryonics

Cryonics is the freezing of legally dead humans with the purpose of preserving their bodies so they can be brought back to life in the future once technology makes it possible. Human tissue has been cryopreserved and then brought back to life, although this has never been done with whole humans.59 The price of cryonics ranges from roughly $28,000 to $200,000.60 More information on cryonics is on the LessWrong Wiki.

 

Money

Cryonics, medical care, safe housing, and basic needs all take money. Rejuvenation therapy may also be very expensive. It seems valuable to have a reasonable amount of money and income.

 

Future advancements

Keeping updated on further advancements in technology seems like a good idea, as not doing so would prevent one from making use of future technologies. Keeping updated on advancements in curing aging seems especially important, due to the massive number of casualties aging inflicts and the current work being done to stop it. Updates on mind-uploading seem important as well. I don’t know of any very efficient method of keeping updated on new advancements, but periodically googling for articles about curing aging or Calico and searching for new scientific articles on topics in this guide seems reasonable. As knb suggested, it seems beneficial to periodically check Fight Aging, a website advocating anti-aging therapies. I’ll try to do this and update this guide with any new relevant information I find.

There is much uncertainty ahead, but if we’re clever enough, we just might make it through alive.

 

References

 

  1. Actual Causes of Death in the United States, 2000.
  2. A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.
  3. All pages in The Nutrition Source, a part of the Harvard School of Public Health.
  4. Will calorie restriction work on humans? 
  5. The pages Getting Started, Tests and Biomarkers, and Risks from The CR Society.
  6. The causal role of breakfast in energy balance and health: a randomized controlled trial in lean adults.
  7. Low Glycemic Index: Lente Carbohydrates and Physiological Effects of altered food frequency. Published in 1994. 
  8. Leisure Time Physical Activity of Moderate to Vigorous Intensity and Mortality: A Large Pooled Cohort Analysis.
  9. Exercising for Health and Longevity vs Peak Performance: Different Regimens for Different Goals.
  10. Water: How much should you drink every day? 
  11. MET-hour equivalents of various physical activities.
  12. Poisoning. NLM.
  13. Carcinogen. Dictionary.com.
  14. Types of Poisons. New York Poison Center.
  15. The Most Common Poisons for Children and Adults. National Capital Poison Center.
  16. Known and Probable Human Carcinogens. American Cancer Society.
  17. Nutritional Effects of Food Processing. Nutritiondata.com.
  18. Tips to Prevent Poisonings. CDC.
  19. Carbon monoxide poisoning. Mayo Clinic.
  20. Carbon Monoxide Poisoning. CDC.
  21. CDC WONDER. Query Criteria taken from all genders, all states, all races, all levels of urbanization, all weekdays, dates 1999 – 2010, ages 15 – 24.
  22. Global health risks: mortality and burden of disease attributable to selected major risks.
  23. National Biomonitoring Program Factsheet. CDC.
  24. Lead poisoning. Mayo Clinic.
  25. Mercury. Medline Plus.
  26. Snoring Is Not Associated With All-Cause Mortality, Incident Cardiovascular Disease, or Stroke in the Busselton Health Study.
  27. Do Stress Trajectories Predict Mortality in Older Men? Longitudinal Findings from the VA Normative Aging Study.
  28. Meta-analysis of Perceived Stress and its Association with Incident Coronary Heart Disease.
  29. Iron and cardiac ischemia: a natural, quasi-random experiment comparing eligible with disqualified blood donors.
  30. Association between psychological distress and mortality: individual participant pooled analysis of 10 prospective cohort studies.
  31. Self-reported habitual snoring and risk of cardiovascular disease and all-cause mortality.
  32. Is it true that occasionally following a fasting diet can reduce my risk of heart disease? 
  33. Positive Affect and Health.
  34. The Association of Anger and Hostility with Future Coronary Heart Disease: A Meta-Analytic Review of Prospective Evidence.
  35. Giving to Others and the Association Between Stress and Mortality.
  36. Social Relationships and Mortality Risk: A Meta-analytic Review.
  37. Daily Sitting Time and All-Cause Mortality: A Meta-Analysis.
  38. Dental Health Behaviors, Dentition, and Mortality in the Elderly: The Leisure World Cohort Study.
  39. Low Conscientiousness and Risk of All-Cause, Cardiovascular and Cancer Mortality over 17 Years: Whitehall II Cohort Study.
  40. Conscientiousness and Health-Related Behaviors: A Meta-Analysis of the Leading Behavioral Contributors to Mortality.
  41. Sleep duration and all-cause mortality: a critical review of measurement and associations.
  42. Sleep duration and mortality: a systematic review and meta-analysis.
  43. How Much Sleep Is Enough? National Heart, Lung, and Blood Institute.
  44. How many hours of sleep are enough for good health? Mayo Clinic.
  45. Assess Your Sleep Needs. Harvard Medical School.
  46. A Life-Span Developmental Perspective on Social Status and Health.
  47. Suicide. Merriam-Webster. 
  48. Can testosterone therapy promote youth and vitality? Mayo Clinic.
  49. Breast Self-Exam. Susan G. Komen.
  50. Screening Guidelines. The Memorial Sloan Kettering Cancer Center.
  51. Breast Cancer Screening Overview. The National Cancer Institute.
  52. Testicular self-exam. The American Cancer Society.
  53. Life Span Extension Research and Public Debate: Societal Considerations
  54. SENS Research Foundation: About.
  55. Science for Life Extension Homepage.
  56. Google's project to 'cure death,' Calico, announces $1.5 billion research center. The Verge.
  57. A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care.
  58. When Surgeons Cut the Wrong Body Part. The New York Times.
  59. Cold facts about cryonics. The Guardian. 
  60. The cryonics organization founded by the "Father of Cryonics," Robert C.W. Ettinger. Cryonics Institute. 
  61. Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now
  62. International Journal of Machine Consciousness Introduction.
  63. The Philosophy of ‘Her.’ The New York Times.
  64. How to Survive the End of the Universe. Discover Magazine.
  65. A Space-Time Crystal to Outlive the Universe. Universe Today.
  66. Conjunction Fallacy. Less Wrong.
  67. Cognitive Biases Potentially Affecting Judgment of Global Risks.
  68. Genetic influence on human lifespan and longevity.
  69. First Drug Shown to Extend Life Span in Mammals. MIT Technology Review.
  70. Sirolimus (Oral Route). Mayo Clinic.
  71. Micromorts. Understanding Uncertainty.
  72. Falls. WHO.
  73. Smoke alarm outreach materials.  US Fire Administration.
  74. What causes choking? 17 possible conditions. Healthline.
  75. Choking. Better Health Channel.
  76. Aspiration pneumonia. HealthCentral.
  77. First aid - Recovery position. NHS Choices.
  78. Electric Shock. HowStuffWorks.
  79. Hypothermia prevention. Mayo Clinic.
  80. Extreme Heat: A Prevention Guide to Promote Your Personal Health and Safety. CDC.
  81. Understanding the Lightning Threat: Minimizing Your Risk. National Weather Service.
  82. The Case Against QuikClot. The Survival Mom.
  83. Does the Perception that Stress Affects Health Matter? The Association with Health and Mortality.
  84. Cancer Prevention. WHO.
  85. Infections That Can Lead to Cancer. American Cancer Society.
  86. Pollution. American Cancer Society.
  87. Occupations or Occupational Groups Associated with Carcinogen Exposures. Canadian Centre for Occupational Health and Safety. 
  88. Radon. American Cancer Society.
  89. Medical radiation. American Cancer Society.
  90. Ultraviolet (UV) Radiation. American Cancer Society.
  91. An Unhealthy Glow. American Cancer Society.
  92. Sun exposure and vitamin D sufficiency.  
  93. Cell Phones and Cancer Risk. National Cancer Institute.
  94. Nutrition for Everyone. CDC.
  95. How Can I Tell If My Body is Missing Key Nutrients? Oprah.com.
  96. Decaffeination, Green Tea and Benefits. Teas etc.
  97. Red and Processed Meat Consumption and Risk of Incident Coronary Heart Disease, Stroke, and Diabetes Mellitus.
  98. Lifestyle interventions to increase longevity.
  99. Chemicals in Meat Cooked at High Temperatures and Cancer Risk. National Cancer Institute.
  100. Are You Living in a Simulation? 
  101. How reliable are scientific studies?
  102. Genomics: What You Should Know. Forbes.
  103. Organic foods: Are they safer? More nutritious? Mayo Clinic.
  104. Health screening - men - ages 18 to 39. MedlinePlus. 
  105. Why do I need medical checkups. Banner Health.
  106. Regular Check-Ups are Important. CDC.
  107. General health checks in adults for reducing morbidity and mortality from disease (Review).
  108. Let’s (Not) Get Physicals.
  109. Effectiveness of general practice-based health checks: a systematic review and meta-analysis.
  110. Supplements: Nutrition in a Pill? Mayo Clinic.
  111. Nutritional Effects of Food Processing. SelfNutritionData.
  112. What Is the Healthiest Drink? SFGate.
  113. Leading Causes of Death. CDC.
  114. Bias Detection in Meta-analysis. Statistical Help.
  115. The summary of Sodium Intake in Populations: Assessment of Evidence. Institute of Medicine.
  116. Compared With Usual Sodium Intake, Low- and Excessive-Sodium Diets Are Associated With Increased Mortality: A Meta-analysis.
  117. The Cochrane Review of Sodium and Health.
  118. Is there any link between cellphones and cancer? Mayo Clinic.
  119. A glass of red wine a day keeps the doctor away. Yale-New Haven Hospital.
  120. Comment on Lifestyle Interventions to Increase Longevity. Less Wrong.

Overpaying for happiness?

31 cousin_it 01 January 2015 12:22PM

Happy New Year, everyone!

In the past few months I've been thinking several thoughts that all seem to point in the same direction:

1) People who live in developed Western countries usually make and spend much more money than people in poorer countries, but aren't that much happier. It feels like we're overpaying for happiness, spending too much money to get a single bit of enjoyment.

2) When you get enjoyment from something, the association between "that thing" and "pleasure" in your mind gets stronger, but at the same time it becomes less sensitive and requires more stimulus. For example, if you like sweet food, you can get into a cycle of eating more and more food that's sweeter and sweeter. But the guy next door, who's eating much less and periodically fasting to keep the association fresh, is actually getting more pleasure from food than you! The same thing happens when you learn to deeply appreciate certain kinds of art: the folks who enjoy "low" art are visibly having more fun.

3) People sometimes get unrealistic dreams and endlessly chase them, like trying to "make it big" in writing or sports, because they randomly got rewarded for it at an early age. I wrote a post about that.

I'm not offering any easy answers here. But it seems like too many people get locked in loops where they spend more and more effort to get less and less happiness. The most obvious examples are drug addiction and video gaming, but also "one-itis" in dating, overeating, being a connoisseur of anything, striving for popular success, all these things follow the same pattern. You're just chasing after some Skinner-box thing that you think you "love", but it doesn't love you back.

Sooo... if you like eating, give yourself a break every once in a while? If you like comfort, maybe take a cold shower sometimes? Might be a good idea to make yourself the kind of person that can get happiness cheaply.

Sorry if this post is not up to LW standards; I typed it really quickly as it came to my mind.

An alarming fact about the anti-aging community

29 diegocaleiro 16 February 2015 05:49PM

Past and Present

Ten years ago teenager me was hopeful. And stupid.

The world neglected aging as a disease; Aubrey had barely started spreading memes, to the point that it was worth it for him to let me work remotely to help with the Methuselah Foundation. They had not even received that initial $1,000,000 donation from an anonymous donor. The Methuselah Prize was running for less than $400,000, if I remember well. Still, I was a believer.

Now we live in the age of Larry Page's Calico, $100,000,000 aimed at tackling the problem, besides many other amazing initiatives, from the research paid for by the Life Extension Foundation and Bill Faloon, to scholars in top universities like Steve Garan and Kenneth Hayworth fixing everything from our models of aging to plastination techniques. Yet, I am much more skeptical now.

Individual risk

I am skeptical because I could not find a single individual who has already used a simple technique that could certainly save you many years of healthy life. I could not even find a single individual who looked into it and decided it wasn't worth it, or was too pricey, or something of that sort.

That technique is freezing some of your cells now.

Freezing cells is not a far-future hope; it is something that already exists and has been possible for decades. The reason you would want to freeze them, in case you haven't thought of it, is that they are getting older every day, so the ones you have now are the youngest ones you'll ever be able to use.

Using these cells to create new organs is not something that may help you only if medicine and technology continue progressing according to the law of accelerating returns for another 10 or 30 years. We already know how to make organs out of your cells. Right now. Some organs live longer, some shorter, but it can be done - it has been done for bladders, for instance - and is being done.

Hope versus Reason

Now, you'd think that if there were an almost non-invasive technique, already shown to work in humans, that can preserve many years of your life and involves only a few trivial inconveniences - compared to changing diet or exercising, for instance - the whole longevist/immortalist crowd would be lining up for it and keeping backup tissue samples all over the place.

Well, I've asked them. I've asked some of the adamant researchers, and I've asked the superwealthy; I've asked the cryonicists and supplement gorgers; I've asked those who work on this 8 hours a day, every day, and I've asked those who pay others to do so. I asked mostly for selfish reasons. I saw the TED talks by Juan Enriquez and Anthony Atala and thought: hey look, clearly beneficial expected life-length increase, yay! Let me call someone who found this out before me - anyone, I'm probably the last one, silly me - and fix this.

I've asked them all, and I have nothing to show for it.

My takeaway lesson is: whatever it is that other people are doing to solve their own impending death, they are far from doing it rationally, and maybe most of the money and psychology involved in this whole business is about buying hope, not about staring into the void and finding out the best ways of dodging it. Maybe people are not in fact going to go all-in if the opportunity comes.

How to fix this?

Let me disclose first that I have no idea how to fix this problem. I don't mean the problem of getting all longevists to freeze their cells; I mean the problem of getting them to take information from the world of science and biomedicine and apply it to themselves. To become users of the technology they boast about. To behave rationally in a CFAR or even homo economicus sense.

I was hoping for a grandiose idea in this last paragraph, but it didn't come. I'll go with a quote from this emotional song sung by us during last year's Secular Solstice celebration:

Do you realize? that everyone, you know, someday will die...

And instead of sending all your goodbyes

Let them know you realize that life goes fast

It's hard to make the good things last

My experience of the recent CFAR workshop

29 Kaj_Sotala 27 November 2014 04:17PM

Originally posted at my blog.

---

I just got home from a four-day rationality workshop in England that was organized by the Center For Applied Rationality (CFAR). It covered a lot of content, but if I had to choose a single theme that united most of it, it was listening to your emotions.

That might sound like a weird focus for a rationality workshop, but cognitive science has shown that the intuitive and emotional part of the mind (”System 1”) is both in charge of most of our behavior, and also carries out a great deal of valuable information-processing of its own (it’s great at pattern-matching, for example). Much of the workshop material was aimed at helping people reach a greater harmony between their System 1 and their verbal, logical System 2. Many of people’s motivational troubles come from the goals of their two systems being somehow at odds with each other, and we were taught to have our two systems have a better dialogue with each other, harmonizing their desires and making it easier for information to cross from one system to the other and back.

To give a more concrete example, there was the technique of goal factoring. You take a behavior that you often do but aren’t sure why, or which you feel might be wasted time. Suppose that you spend a lot of time answering e-mails that aren’t actually very important. You start by asking yourself: what’s good about this activity, that makes me do it? Then you try to listen to your feelings in response to that question, and write down what you perceive. Maybe you conclude that it makes you feel productive, and it gives you a break from tasks that require more energy to do.

Next you look at the things that you came up with, and consider whether there’s a better way to accomplish them. There are two possible outcomes here. Either you conclude that the behavior is an important and valuable one after all, meaning that you can now be more motivated to do it. Alternatively, you find that there would be better ways of accomplishing all the goals that the behavior was aiming for. Maybe taking a walk would make for a better break, and answering more urgent e-mails would provide more value. If you were previously using two hours per day on the unimportant e-mails, possibly you could now achieve more in terms of both relaxation and actual productivity by spending an hour on a walk and an hour on the important e-mails.

At this point, you consider your new plan, and again ask yourself: does this feel right? Is this motivating? Are there any slight pangs of regret about giving up my old behavior? If you still don’t want to shift your behavior, chances are that you still have some motive for doing this thing that you have missed, and the feelings of productivity and relaxation aren’t quite enough to cover it. In that case, go back to the step of listing motives.

Or, if you feel happy and content about the new direction that you’ve chosen, victory!

Notice how this technique is all about moving information from one system to another. System 2 notices that you’re doing something but it isn’t sure why that is, so it asks System 1 for the reasons. System 1 answers, ”here’s what I’m trying to do for us, what do you think?” Then System 2 does what it’s best at, taking an analytic approach and possibly coming up with better ways of achieving the different motives. Then it gives that alternative approach back to System 1 and asks, would this work? Would this give us everything that we want? If System 1 says no, System 2 gets back to work, and the dialogue continues until both are happy.

Again, I emphasize the collaborative aspect between the two systems. They’re allies working for common goals, not enemies. Too many people tend towards one of two extremes: either thinking that their emotions are stupid and something to suppress, or completely disdaining the use of logical analysis. Both extremes miss out on the strengths of the system that is neglected, and make it unlikely for the person to get everything that they want.

As I was heading back from the workshop, I considered doing something that I noticed feeling uncomfortable about. Previous meditation experience had already made me more likely to just attend to the discomfort rather than trying to push it away, but inspired by the workshop, I went a bit further. I took the discomfort, considered what my System 1 might be trying to warn me about, and concluded that it might be better to err on the side of caution this time around. Finally – and this wasn’t a thing from the workshop, it was something I invited on the spot – I summoned a feeling of gratitude and thanked my System 1 for having been alert and giving me the information. That might have been a little overblown, since neither system should actually be sentient by itself, but it still felt like a good mindset to cultivate.

Although it was never mentioned in the workshop, what comes to mind is the concept of wu-wei from Chinese philosophy, a state of ”effortless doing” where all of your desires are perfectly aligned and everything comes naturally. In the ideal form, you never need to force yourself to do something you don’t want to do, or to expend willpower on an unpleasant task. Either you want to do something and do, or don’t want to do it, and don’t.

A large number of the workshop’s classes – goal factoring, aversion factoring and calibration, urge propagation, comfort zone expansion, inner simulation, making hard decisions, Hamming questions, againstness – were aimed at more or less this. Find out what System 1 wants, find out what System 2 wants, dialogue, aim for a harmonious state between the two. Then there were a smaller number of other classes that might be summarized as being about problem-solving in general.

The classes about the different techniques were interspersed with ”debugging sessions” of various kinds. In the beginning of the workshop, we listed different bugs in our lives – anything about our lives that we weren’t happy with, with the suggested example bugs being things like ”every time I talk to so-and-so I end up in an argument”, ”I think that I ‘should’ do something but don’t really want to”, and ”I’m working on my dissertation and everything is going fine – but when people ask me why I’m doing a PhD, I have a hard time remembering why I wanted to”. After we’d had a class or a few, we’d apply the techniques we’d learned to solving those bugs, either individually, in pairs, or small groups with a staff member or volunteer TA assisting us. Then a few more classes on techniques and more debugging, classes and debugging, and so on.

The debugging sessions were interesting. Often when you ask someone for help on something, they will answer with direct object-level suggestions – if your problem is that you’re underweight and you would like to gain some weight, try this or that. Here, the staff and TAs would eventually get to the object-level advice as well, but first they would ask – why don’t you want to be underweight? Okay, you say that you’re not completely sure, but based on the other things that you said, here’s a stupid and quite certainly wrong theory of what your underlying reasons for it might be – how does that theory feel? Okay, you said that it’s mostly on the right track, so now tell me what’s wrong with it? If you feel that gaining weight would make you more attractive, do you feel that this is the most effective way of achieving that?

Only after you and the facilitator had reached some kind of consensus on why you thought that something was a bug, and made sure that the problem you were discussing was actually the best way to address those reasons, would it be time for the more direct advice.

At first, I had felt that I didn’t have very many bugs to address, and that I had mostly gotten reasonable advice for them that I might try. But then the workshop continued, and there were more debugging sessions, and I had to keep coming up with bugs. And then, under the gentle poking of others, I started finding the underlying, deep-seated problems, and some things that had been motivating my actions for the last several months without me always fully realizing it. At the end, when I looked at my initial list of bugs that I’d come up with in the beginning, most of the first items on the list looked hopelessly shallow compared to the later ones.

Often in life you feel that your problems are silly, and that you are affected by small stupid things that ”shouldn’t” be a problem. There was none of that at the workshop: it was tacitly acknowledged that being unreasonably hindered by ”stupid” problems is just something that brains tend to do.  Valentine, one of the staff members, gave a powerful speech about ”alienated birthrights” – things that all human beings should be capable of engaging in and enjoying, but which have been taken from people because they have internalized beliefs and identities that say things like ”I cannot do that” or ”I am bad at that”. Things like singing, dancing, athletics, mathematics, romantic relationships, actually understanding the world, heroism, tackling challenging problems. To use his analogy, we might not be good at these things at first, and may have to grow into them and master them the way that a toddler grows to master her body. And like a toddler who’s taking her early steps, we may flail around and look silly when we first start doing them, but these are capacities that – barring any actual disabilities – are a part of our birthright as human beings, which anyone can ultimately learn to master.

Then there were the people, and the general atmosphere of the workshop. People were intelligent, open, and motivated to work on their problems, help each other, and grow as human beings. After a long, cognitively and emotionally exhausting day at the workshop, people would then shift to entertainment ranging from wrestling to telling funny stories of their lives to Magic: the Gathering. (The game of ”bunny” was an actual scheduled event on the official agenda.) And just plain talk with each other, in a supportive, non-judgemental atmosphere. It was the people and the atmosphere that made me the most reluctant to leave, and I miss them already.

Would I recommend CFAR’s workshops to others? Although my above description may sound rather gushingly positive, my answer still needs to be a qualified ”mmmaybe”. The full price tag is quite hefty, though financial aid is available and I personally got a very substantial scholarship, with the agreement that I would pay it at a later time when I could actually afford it.

Still, the biggest question is, will the changes from the workshop stick? I feel like I have gained a valuable new perspective on emotions, a number of useful techniques, made new friends, strengthened my belief that I can do the things that I really set my mind on, and refined the ways by which I think of the world and any problems that I might have – but aside for the new friends, all of that will be worthless if it fades away in a week. If it does, I would have to judge even my steeply discounted price as ”not worth it”. That said, the workshops do have a money-back guarantee if you’re unhappy with the results, so if it really feels like it wasn’t worth it, I can simply choose to not pay. And if all the new things do end up sticking, it might still turn out that it would have been worth paying even the full, non-discounted price.

CFAR does have a few ways by which they try to make the things stick. There will be Skype follow-ups with their staff, for talking about how things have been going since the workshop. There is a mailing list for workshop alumni, and the occasional events, though the physical events are very US-centric (and in particular, San Francisco Bay Area-centric).

The techniques that we were taught are still all more or less experimental, and are being constantly refined and revised according to people’s experiences. I have already been thinking of a new skill that I had been playing with for a while before the workshop, and which has a bit of that ”CFAR feel” – I will aim to have it written up soon and sent to the others, and maybe it will eventually make its way to the curriculum of a future workshop. That should help keep me engaged as well.

We shall see. Until then, as they say in CFAR – to victory!

Vote for MIRI to be donated a share of reddit's advertising revenue

28 asd 19 February 2015 10:07AM

http://www.reddit.com/donate?organization=582565917

 

"Today we are announcing that we will donate 10% of our advertising revenue receipts in 2014 to non-profits chosen by the reddit community. Whether it’s a large ad campaign or a $5 sponsored headline on reddit, we intend for all ad revenue this year to benefit not only reddit as a platform but also to support the goals and causes of the entire community."

If you can see the box, you can open the box

26 ThePrussian 26 February 2015 10:36AM

First post here, and I'm disagreeing with something in the main sequences.  Hubris acknowledged, here's what I've been thinking about.  It comes from the post "Are your enemies innately evil?":

On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America.  Now why do you suppose they might have done that?  Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?

Realistically, most people don't construct their life stories with themselves as the villains.  Everyone is the hero of their own story.  The Enemy's story, as seen by the Enemy, is not going to make the Enemy look bad.  If you try to construe motivations that would make the Enemy look bad, you'll end up flat wrong about what actually goes on in the Enemy's mind.

If I'm misreading this, please correct me, but the way I am reading this is:

1) People do not construct their stories so that they are the villains,

therefore

2) the idea that Al Qaeda is motivated by a hatred of American freedom is false.

Reading the Al Qaeda document released after the attacks, called Why We Are Fighting You, you find the following:

 

What are we calling you to, and what do we want from you?

1.  The first thing that we are calling you to is Islam.

A.  The religion of tawhid; of freedom from associating partners with Allah Most High, and rejection of such blasphemy; of complete love for Him, the Exalted; of complete submission to his sharia; and of the discarding of all the opinions, orders, theories, and religions that contradict the religion He sent down to His Prophet Muhammad.  Islam is the religion of all the prophets and makes no distinction between them. 

It is to this religion that we call you …

2.  The second thing we call you to is to stop your oppression, lies, immorality and debauchery that has spread among you.

A.  We call you to be a people of manners, principles, honor and purity; to reject the immoral acts of fornication, homosexuality, intoxicants, gambling and usury.

We call you to all of this that you may be freed from the deceptive lies that you are a great nation, which your leaders spread among you in order to conceal from you the despicable state that you have obtained.

B.  It is saddening to tell you that you are the worst civilization witnessed in the history of mankind:

i.  You are the nation who, rather than ruling through the sharia of Allah, chooses to invent your own laws as you will and desire.  You separate religion from your policies, contradicting the pure nature that affirms absolute authority to the Lord your Creator….

ii.  You are the nation that permits usury…

iii.   You are a nation that permits the production, spread, and use of intoxicants.  You also permit drugs, and only forbid the trade of them, even though your nation is the largest consumer of them.

iv.  You are a nation that permits acts of immorality, and you consider them to be pillars of personal freedom.  

"Freedom" is of course one of those words.  It's easy enough to imagine an SS officer saying indignantly: "Of course we are fighting for freedom!  For our people to be free of Jewish domination, free from the contamination of lesser races, free from the sham of democracy..."

If we substitute the symbol with the substance though, what we mean by freedom - "people to be left more or less alone, to follow whichever religion they want or none, to speak their minds, to try to shape society's laws so they serve the people" - then Al Qaeda is absolutely inspired by a hatred of freedom.  They wouldn't call it "freedom", mind you, they'd call it "decadence" or "blasphemy" or "shirk" - but the substance is what we call "freedom".

Returning to the syllogism at the top, it seems to me that there is an unstated premise.  The conclusion "Al Qaeda cannot possibly hate America for its freedom because everyone sees himself as the hero of his own story" only follows if you assume that what is heroic, what is good, is substantially the same for all humans, for a liberal Westerner and an Islamic fanatic.

(for Americans, by "liberal" here I mean the classical sense that includes just about everyone you are likely to meet, read or vote for.  US conservatives say they are defending the American revolution, which was broadly in line with liberal principles - slavery excepted, but since US conservatives don't support that, my point stands).

When you state the premise baldly like that, you can see the problem.  There's no contradiction in thinking that Muslim fanatics think of themselves as heroic precisely for being opposed to freedom, because they see their heroism as trying to extend the rule of Allah - Shariah - across the world.

Now to the point - we all know the phrase "thinking outside the box".  I submit that if you can recognize the box, you've already opened it.  Real bias isn't when you have a point of view you're defending, but when you cannot imagine that another point of view seriously exists.

That phrasing has a bit of negative baggage associated with it: the suggestion that this is just a matter of pigheaded close-mindedness.  Try thinking about it another way.  Would you say to someone with dyscalculia "You can't get your head around the basics of calculus?  You are just being so close-minded!"  No, that's obviously nuts.  We know that different people's minds work in different ways, that some people can see things others cannot. 

Orwell once wrote about British intellectuals' inability to "get" fascism, in particular in his essay on H.G. Wells.  He wrote that the only people who really understood the nature and menace of fascism were either those who had felt the lash on their backs, or those who had a touch of the fascist mindset themselves.  I suggest that some people just cannot imagine, cannot really believe, the enormous power of faith, of the idea of serving and fighting and dying for your god and His prophet.  It is a kind of thinking that is just alien to many.

Perhaps this is resisted because people think that "Being able to think like a fascist makes you a bit of a fascist".  That's not really true in any way that matters - Orwell was one of the greatest anti-fascist writers of his time, and fought against it in Spain. 

So - if you can see the box you are in, you can open it, and already have half-opened it.  And if you are really in the box, you can't see the box.  So, how can you tell if you are in a box that you can't see versus not being in a box?  

The best answer I've been able to come up with is not to think of "box or no box" but rather "open or closed box".  We all work from a worldview, simply because we need some knowledge to get further knowledge.  If you know you come at an issue from a certain angle, you can always check yourself.  You're in a box, but boxes can be useful, and you have the option to go get some stuff from outside the box.

The second is to read people in other boxes.  I like steelmanning, it's an important intellectual exercise, but it shouldn't preclude finding actual Men of Steel - that is, people passionately committed to another point of view, another box, and taking a look at what they have to say.  

Now you might say: "But that's steelmanning!"  Not quite.  Steelmanning is "the art of addressing the best form of the other person’s argument, even if it’s not the one they presented."  That may, in some circumstances, lead you to make the mistake of assuming that what you think is the best argument for a position is the same as what the other guy thinks is the best argument for his position.  That's especially important if you are addressing a belief held by a large group of people.

Again, this isn't to run down steelmanning - the practice is sadly limited, and anyone who attempts it has gained a big advantage in figuring out how the world is.  It's just a reminder that the steelman you make may not be quite as strong as the steelman that is out to get you.  

[EDIT: Link included to the document that I did not know was available online before now]

Why "Changing the World" is a Horrible Phrase

26 ozziegooen 25 December 2014 06:04AM

Steve Jobs famously convinced John Sculley from Pepsi to join Apple Computer with the line, “Do you want to sell sugared water for the rest of your life? Or do you want to come with me and change the world?”.  This sounds convincing until one thinks closely about it.

Steve Jobs was a famous salesman.   He was known for his selling ability, not his honesty.  His terminology here was interesting.  ‘Change the world’ is a phrase that both sounds important and is difficult to argue with.  Arguing about whether Apple was really ‘changing the world’ would have been pointless, because the phrase is so ambiguous that there would be little to discuss.  On paper, of course Apple is changing the world, but then of course any organization or any individual is also ‘changing’ the world.  A real discussion of whether Apple ‘changes the world’ would lead to a discussion of what ‘changing the world’ actually means, which would lead to obscure philosophy, steering the conversation away from the actual point.  

‘Changing the world’ is an effective marketing tool that’s useful for building the feeling of consensus. Steve Jobs used it heavily, as have endless numbers of businesses, conferences, nonprofits, and TV shows.  It’s used because it sounds good and is typically not questioned, so I’m here to question it.  I believe that the popularization of this phrase creates confused goals and perverse incentives in people who believe they are doing good things.

 

Problem 1: 'Changing the World' Leads to Television Value over Real Value

It leads nonprofit workers to passionately chase feeble things.  I’m amazed by the variety that I see in people who try to ‘change the world’. Some grow organic food, some research rocks, some play instruments. They do basically everything.  

Few people protest this variety.  There are millions of voices appealing to ‘change the world’ in a way that validates many radically diverse pursuits.  

TED, the modern symbol of the intellectual elite for many, is itself a grab bag of ways to ‘change the world’, without any sense of scale between pursuits.  People tell comedic stories, sing songs, discuss tales of personal adventures and so on.  In TED Talks, all presentations are shown side-by-side with the same lighting and display.  Yet in real life some projects produce orders of magnitude more output than others.

At 80,000 Hours, I read many applications for career consulting. I got the sense that there are many people out there trying to live their lives in order to eventually produce a TED talk.  To them, that is what ‘changing the world’ means.  These are often very smart and motivated people with very high opportunity costs.  

I would see an application that would express interest in either starting an orphanage in Uganda, creating a woman's movement in Ohio, or making a conservatory in Costa Rica.  It was clear that they were trying to ‘change the world’ in a very vague and TED-oriented way.

I believe that ‘Changing the World’ is promoted by TED, but internally acts mostly as a Schelling point.  Agreeing on the importance of ‘changing the world’ is a good way of coming to a consensus without having to decide on moral philosophy. ‘Changing the world’ is simply the lowest common denominator for what that community can agree upon.  This is a useful social tool, but an unfortunate side effect was that it inspired many others to follow this Schelling point itself.  Please don’t make the purpose of your life the lowest common denominator of a specific group of existing intellectuals. 

It leads businesses to gain employees and media attention without having to commit to anything.  I’m living in Silicon Valley, and ‘Change the World’ is an incredibly common phrase for new and old startups. Silicon Valley (the TV show) made fun of it, as does much of the media.  They should, but I think much of the time they miss the point; the problem here is not one where the companies are dishonest, but one where their honesty itself just doesn’t mean much.  Declaring that a company is ‘changing the world’ isn’t really declaring anything.  

Hiring conversations that begin and end with the motivation of ‘changing the world’ are like hiring conversations that begin and end with making ‘lots’ of money.  If one couldn’t compare salaries between different companies, they would likely select poorly for salary.  In terms of social benefit, most companies don’t attempt to quantify their costs and benefits on society except in very specific and positive ways for them.  “Google has enabled Haiti disaster recovery” for social proof sounds to me like saying “We paid this other person $12,000 in July 2010” for salary proof. It sounds nice, but facts selected by a salesperson are simply not complete.

 

Problem 2: ‘Changing the World’ Creates Black and White Thinking

The idea that one wants to ‘change the world’ implies that there is such a thing as ‘changing the world’ and such a thing as ‘not changing the world’.  It implies that there are ‘world changers’ and people who are not ‘world changers’. It implies that there is one group of ‘important people’ out there and then a lot of ‘useless’ others.

This directly supports the ‘Great Man’ theory, a 19th-century idea that history and future actions are led by a small number of ‘great men’.  There’s not a lot of academic research supporting this theory, but there’s a lot of attention to it, and it’s a lot of fun to pretend it’s true.  

But it’s not.  There is typically a lot of unglamorous work behind every successful project or organization. Behind every Steve Jobs are thousands of very intelligent and hard-working employees and millions of smart people who have created a larger ecosystem. If one only pays attention to Steve Jobs they will leave out most of the work. They will praise Steve Jobs far too highly and disregard the importance of unglamorous labor.

Typically much of the best work is also the most unglamorous.  Making WordPress websites, sorting facts into analysis, cold calling donors. Many of the best ideas for organizations may be very simple and may have been done before. However, for someone looking to get to TED conferences or become a superstar, it is very easy to overlook comparatively menial labor. This means that not only will it not get done, but that the people who do it feel worse about themselves.

So some people do important work and feel bad because it doesn’t meet the TED standard of ‘change the world’.  Others try ridiculously ambitious things outside their own capabilities, fail, and then give up.  Others don’t even try, because their perceived threshold is too high for them.  The very idea of a threshold and a ‘change or don’t change the world’ approach is simply false, and believing something that’s both false and fundamentally important is really bad.

In all likelihood, you will not make the next billion-dollar nonprofit. You will not make the next billion-dollar business. You will not become the next congressperson in your district. This does not mean that you have not done a good job, and it should not demoralize you in any way if you fail to do these things. 

Finally, I would like to ponder what happens once, or if, one decides they have changed the world. What now? Should one change it again?

It’s not obvious.  Many retire or settle down after feeling accomplished.  However, this is exactly when trying is the most important.  People with the best histories have the best potentials.  No matter how much a U.S. President may achieve, they still can achieve significantly more after the end of their terms.  There is no ‘enough’ line for human accomplishment.

Conclusion

In summary, the phrase ‘change the world’ provides no clear direction and encourages black-and-white thinking that distorts behavior and motivation.  However, I do believe that the phrase can act as a stepping stone towards a more concrete goal.  ‘Change the world’ can act as an idea that requires a philosophical continuation.  It’s a start for a goal, but it should be recognized that it’s far from a good ending.

Next time someone tells you about ‘changing the world’, ask them to follow through with telling you the specifics of what they mean.  Make sure that they understand that they need to go further in order to mean anything.  

And more importantly, do this for yourself.  Choose a specific axiomatic philosophy or set of philosophies and aim towards those.  Your ultimate goal in life is too important to be based on an empty marketing term.

Entropy and Temperature

26 spxtr 17 December 2014 08:04AM

Eliezer Yudkowsky previously wrote (6 years ago!) about the second law of thermodynamics. Many commenters were skeptical about the statement, "if you know the positions and momenta of every particle in a glass of water, it is at absolute zero temperature," because they don't know what temperature is. This is a common confusion.

Entropy

To specify the precise state of a classical system, you need to know its location in phase space. For a bunch of helium atoms whizzing around in a box, phase space is the position and momentum of each helium atom. For N atoms in the box, that means 6N numbers to completely specify the system.

Let's say you know the total energy of the gas, but nothing else. It will be the case that a fantastically huge number of points in phase space will be consistent with that energy.* In the absence of any more information it is correct to assign a uniform distribution to this region of phase space. The entropy of a uniform distribution is the logarithm of the number of points, so that's that. If you also know the volume, then the number of points in phase space consistent with both the energy and volume is necessarily smaller, so the entropy is smaller.

This might be confusing to chemists, since they memorized a formula for the entropy of an ideal gas, and it's ostensibly objective. Someone with perfect knowledge of the system will calculate the same number on the right side of that equation, but to them, that number isn't the entropy. It's the entropy of the gas if you know nothing more than energy, volume, and number of particles.

Temperature

The existence of temperature follows from the zeroth and second laws of thermodynamics: thermal equilibrium is transitive, and entropy is maximum in equilibrium. Temperature is then defined as the thermodynamic quantity that is shared by systems in equilibrium.

If two systems are in equilibrium then they cannot increase entropy by flowing energy from one to the other. That means that if we flow a tiny bit of energy from one to the other (δU1 = -δU2), the entropy change in the first must be the opposite of the entropy change of the second (δS1 = -δS2), so that the total entropy (S1 + S2) doesn't change. For systems in equilibrium, this leads to (∂S1/∂U1) = (∂S2/∂U2). Define 1/T = (∂S/∂U), and we are done.
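
A minimal numeric sketch (mine, not from the original post) may make this concrete. It assumes two systems with ideal-gas-like entropies S(U) = (3N/2) ln U, constants dropped; scanning over ways to split a fixed total energy, the split that maximizes total entropy is exactly the one where ∂S1/∂U1 = ∂S2/∂U2, i.e. where the temperatures match:

    import numpy as np

    # Two systems with ideal-gas-like entropy S(U) = (3N/2) ln U, constants dropped.
    N1, N2 = 100.0, 300.0   # particle numbers (arbitrary illustrative values)
    U_total = 1000.0        # fixed total energy, arbitrary units

    def S(U, N):
        return 1.5 * N * np.log(U)

    # Scan all ways of splitting U_total between the two systems.
    U1 = np.linspace(1.0, U_total - 1.0, 200001)
    best = U1[np.argmax(S(U1, N1) + S(U_total - U1, N2))]

    # For this model 1/T = dS/dU = (3N/2)/U; the two values agree at the maximum.
    print("entropy-maximizing split: U1 =", best)     # ~250 = N1/(N1+N2) * U_total
    print("1/T1 =", 1.5 * N1 / best)                  # ~0.6
    print("1/T2 =", 1.5 * N2 / (U_total - best))      # ~0.6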

Temperature is sometimes taught as, "a measure of the average kinetic energy of the particles," because for an ideal gas U/N = (3/2) kBT. This is wrong as a definition, for the same reason that the ideal gas entropy isn't the definition of entropy.

Probability is in the mind. Entropy is a function of probabilities, so entropy is in the mind. Temperature is a derivative of entropy, so temperature is in the mind.

Second Law Trickery

With perfect knowledge of a system, it is possible to extract all of its energy as work. EY states it clearly:

So (again ignoring quantum effects for the moment), if you know the states of all the molecules in a glass of hot water, it is cold in a genuinely thermodynamic sense: you can take electricity out of it and leave behind an ice cube.

Someone who doesn't know the state of the water will observe a violation of the second law. This is allowed. Let that sink in for a minute. Jaynes calls it second law trickery, and I can't explain it better than he does, so I won't try:

A physical system always has more macroscopic degrees of freedom beyond what we control or observe, and by manipulating them a trickster can always make us see an apparent violation of the second law.

Therefore the correct statement of the second law is not that an entropy decrease is impossible in principle, or even improbable; rather that it cannot be achieved reproducibly by manipulating the macrovariables {X1, ..., Xn} that we have chosen to define our macrostate. Any attempt to write a stronger law than this will put one at the mercy of a trickster, who can produce a violation of it.

But recognizing this should increase rather than decrease our confidence in the future of the second law, because it means that if an experimenter ever sees an apparent violation, then instead of issuing a sensational announcement, it will be more prudent to search for that unobserved degree of freedom. That is, the connection of entropy with information works both ways; seeing an apparent decrease of entropy signifies ignorance of what were the relevant macrovariables.

Homework

I've actually given you enough information on statistical mechanics to calculate an interesting system. Say you have N particles, each fixed in place to a lattice. Each particle can be in one of two states, with energies 0 and ε. Calculate and plot the entropy if you know the total energy: S(E), and then the energy as a function of temperature: E(T). This is essentially a combinatorics problem, and you may assume that N is large, so use Stirling's approximation. What you will discover should make sense using the correct definitions of entropy and temperature.
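
(Not from the original post: a minimal Python sketch to check your answer against, assuming units where kB = ε = 1, so that E equals the number n of excited particles, and using the Stirling form ln C(N, n) ≈ N ln N − n ln n − (N − n) ln(N − n).)

    import numpy as np
    import matplotlib.pyplot as plt

    N = 1e6  # lattice sites; large enough that Stirling's approximation is good

    def entropy(E):
        # E = n * epsilon with epsilon = 1, so n = E excited particles out of N.
        n = E
        return N * np.log(N) - n * np.log(n) - (N - n) * np.log(N - n)

    # S(E): entropy when you know the total energy.
    E = np.linspace(0.001 * N, 0.999 * N, 999)
    plt.plot(E / N, entropy(E) / N)
    plt.xlabel("E / (N epsilon)")
    plt.ylabel("S / (N k_B)")
    plt.show()

    # Inverting 1/T = dS/dE = ln((N - E)/E) gives E(T) = N / (1 + exp(1/T)).
    T = np.linspace(0.05, 10.0, 500)
    plt.plot(T, 1.0 / (1.0 + np.exp(1.0 / T)))
    plt.xlabel("k_B T / epsilon")
    plt.ylabel("E / (N epsilon)")
    plt.show()

The sanity check: S(E) peaks at E = Nε/2, so ∂S/∂E, and hence 1/T, is negative for E > Nε/2, and E(T) saturates at Nε/2 as T → ∞.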


*: How many combinations of 1023 numbers between 0 and 10 add up to 5×1023?

Has LessWrong Ever Backfired On You?

25 Evan_Gaensbauer 15 December 2014 05:44AM

Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I thought it would only be due diligence if I tried to track users on LessWrong who have received advice on this site that backfired. In other words, to avoid bias in the record, we might notice what LessWrong as a community is bad at giving advice about. So, I'm seeking feedback. If you have anecdotes or data about how a plan or advice directly from LessWrong backfired, failed, or didn't lead to satisfaction, please share below. 

Announcing LessWrong Digest

24 Evan_Gaensbauer 23 February 2015 10:41AM

I've been making rounds on social media with the following message.

Great content on LessWrong isn't as frequent as it used to be, so not as many people read it as frequently. This makes sense. However, I read it at least once every two days for personal interest. So, I'm starting a LessWrong/Rationality Digest, which will be a summary of all posts or comments exceeding 20 upvotes within a week. It will be like a newsletter. Also, it's a good way for those new to LessWrong to learn cool things without having to slog through online cultural baggage. It will never be more than once weekly. If you're curious here is a sample of what the Digest will be like.

https://docs.google.com/document/d/1e2mHi7W0H2toWPNooSq7QNjEhx_xa0LcLw_NZRfkPPk/edit

Also, major blog posts or articles from related websites, such as Slate Star Codex and Overcoming Bias, or publications from the MIRI, may be included occasionally. If you want on the list send an email to:

lesswrongdigest *at* gmail *dot* com

 

Users of LessWrong itself have noticed this 'decline' in frequency of quality posts on LessWrong. It's not necessarily a bad thing, as much of the community has migrated to other places, such as Slate Star Codex, or even into meatspace with various organizations, meetups, and the like. In a sense, the rationalist community outgrew LessWrong as a suitable and ultimate nexus. Anyway, I thought you as well would be interested in a LessWrong Digest. If you or your friends:

  • find articles in 'Main' too infrequent, and Discussion filled only with announcements, open threads, and housekeeping posts, to bother checking LessWrong regularly, or,
  • are busying themselves with other priorities, and are trying to limit how distracted they are by LessWrong and other media

the LessWrong Digest might work for you, and as a suggestion for your friends. I've fielded suggestions I transform this into a blog, Tumblr, or other format suitable for RSS Feed. Almost everyone is happy with email format right now, but if a few people express an interest in a blog or RSS format, I can make that happen too. 

 

Research Priorities for Artificial Intelligence: An Open Letter

23 jimrandomh 11 January 2015 07:52PM

The Future of Life Institute has published their document Research priorities for robust and beneficial artificial intelligence and written an open letter for people to sign indicating their support.

Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.

 

Memes and Rational Decisions

23 inferential 09 January 2015 06:42AM

In 2004, Michael Vassar gave the following talk about how humans can reduce existential risk, titled Memes and Rational Decisions, to some transhumanists. It is well-written and gives actionable advice, much of which is unfamiliar to the contemporary Less Wrong zeitgeist.

Although transhumanism is not a religion, advocating as it does the critical analysis of any position, it does have certain characteristics which may lead to its identification as such by concerned skeptics. I am sure that everyone here has had to deal with this difficulty, and as it is a cause of perplexity for me I would appreciate it if anyone who has suggested guidelines for interacting honestly with non-transhumanists would share them at the end of my presentation. It seems likely to me that each of our minds contains either meme complexes or complex functional adaptations which have evolved to identify “religious” thoughts and to neutralize their impact on our behavior. Most brains respond to these memes by simply rejecting them. Others, however, neutralize such memes simply by not acting according to the conclusions that should be drawn from them. In almost any human environment prior to the 20th century this religious hypocrisy would have been a vital cognitive trait for every selectively fit human. People who took in religious ideas and took them too seriously would end up sacrificing their lives overly casually at best, and at worst would become celibate priests. Unfortunately, these memes are no more discriminating than the family members and friends who tend to become concerned for our sanity in response to their activity. Since we are generally infested with the same set of memes, we genuinely are liable to insanity, though not of the suspected sort. A man who is shot by surprise is not particularly culpable for his failure to dodge or otherwise protect himself, though perhaps he should have signed up with Alcor. A hunter-gatherer who confronts an aggressive European with a rifle for the first time can also receive sympathy when he is slain by the magic wand that he never expected to actually work. By contrast, a modern Archimedes who ignores a Roman soldier’s request that he cease from his geometric scribbling is truly a madman. Most people of the world, unaware of molecular nanotechnology and of the potential power of recursively self-improving AI, are in a position roughly analogous to that of the first man. The business and political figures who dismiss eternal life and global destruction alike as plausible scenarios are in the position of the second man. By contrast, it is we transhumanists who are for the most part playing the part of Archimedes. With death, mediated by technologies we understand full well, staring us in the face, we continue our pleasant intellectual games. At best a few percent of us have adopted the demeanor of an earlier Archimedes and transferred our attention from our choice activities to other, still interesting endeavors which happen to be vital to our own survival. The rest are presumably acting as puppets of the memes which react to the prospect of immortality by isolating the associated meme-complex and suppressing its effects on actual activity.

OK, so most of us don't seem to be behaving in an optimal manner. What manner would be optimal? This ISN'T a religion, remember? I can't tell you that. At best I can suggest an outline of the sort of behavior that seems to me least likely to lead to this region of space becoming the center of a sphere of tiny smiley faces expanding at the speed of light.

The first thing that I can suggest is that you take rationality seriously. Recognize how far you have to go. Trust me; the fact that you can't rationally trust me without evidence is itself a demonstration that at least one of us isn't even a reasonable approximation of rational, as demonstrated by Robin Hanson and Tyler Cowen of George Mason University in their paper on rational truth-seekers. The fact is that humans don't appear capable of approaching perfect rationality to anything like the degree to which most of you probably believe you have approached it. Nobel Laureate Daniel Kahneman and Amos Tversky provided a particularly valuable set of insights into this fact with their classic book Judgment Under Uncertainty: Heuristics and Biases and in subsequent works. As a trivial example of the uncertainty that humans typically exhibit, try these tests. (Offer some tests from Judgment Under Uncertainty.)

I hope that I have made my point. Now let me point out some of the typical errors of transhumanists who have decided to act decisively to protect the world they care about from existential risks. After deciding to rationally defer most of the fun things that they would like to do for a few decades, until the world is relatively safe, they typically either begin some quixotic quest to transform human behavior on a grand scale over the course of the next couple of decades, or go raving blithering Cthulhu-worshiping mad and try to build an artificial intelligence. I will now try to discourage such activities.

One of the first rules of rationality is not to irrationally demand that others be rational. Demanding that someone make a difficult mental transformation has never once led them to make said transformation. People have a strong evolved desire to make other people accept their assertions and opinions. Before you let the thought cross your mind that a person is not trying to be rational, I would suggest that you consider the following. If you and your audience were both trying to be rational, you would be mutually convinced of EVERY position that the members of your audience had on EVERY subject and vice versa. If this does not seem like a plausible outcome then one of you is not trying to be rational, and it is silly to expect a rational outcome from your discussion. By all means, if a particular person is in a position to be helpful, try to blunder past the fact of your probably mutual unwillingness to be rational; in a particular instance it is entirely possible that ordinary discussion will lead to the correct conclusion, though it will take hundreds of times longer than it would if the participants were able to abandon the desire to win an argument as a motivation separate from the desire to reach the correct conclusion. On the other hand, when dealing with a group of people, or with an abstract class of people, Don't Even Try to influence them with what you believe to be a well-reasoned argument. This has been scientifically shown not to work, and if you are going to try to simply will your wishes into being you may as well debate the nearest million carbon atoms into forming an assembler and be done with it, or perhaps convince your own brain to become transhumanly intelligent. Hey, it's your brain; if you can't convince it to do something contrary to its nature that it wants to do, is it likely that you can convince the brains of many other people to do something contrary to their natures that they don't want to do, just by generating a particular set of vocalizations?

My recommendation that you not make an AI is slightly more urgent. Attempting to transform the behavior of a substantial group of people via a reasoned argument is a silly and superstitious act, but it is still basically a harmless one. On the other hand, attempts by geniuses of ordinary Nobel-Laureate-physicist quality to program AI systems are not only astronomically unlikely to succeed, but in the shockingly unlikely event that they do succeed they are almost equally likely to leave nothing of value in this part of the universe. If you think you can do this safely despite my warning, here are a few things to consider:

  1. A large fraction of the greatest computer scientists and other information scientists in history have done work on AI, but so far none of them have begun to converge on even the outlines of a theory or succeeded in matching the behavioral complexity of an insect, despite the fantastic military applications of even dragonfly-equivalent autonomous weapons.
  2. Top philosophers, pivotal minds in the history of human thought, have consistently failed to converge on ethical policy.
  3. Isaac Asimov, history's most prolific writer and Mensa's honorary president, attempted to formulate a more modest set of ethical precepts for robots and instead produced the blatantly suicidal three laws (if you don't see why the three laws wouldn't work, I refer you to the Singularity Institute for Artificial Intelligence's campaign against the three laws).
  4. Science fiction authors as a class, a relatively bright crowd by human standards, have subsequently thrown more time into considering the question of machine ethics than they have any other philosophical issue other than time travel, yet have failed to develop anything more convincing than the three laws.
  5. AI ethics cannot be arrived at either through dialectic (critical speculation) or through the scientific method. The first method fails to distinguish between an idea that will actually work and the first idea you and your friends couldn't rapidly see big holes in, influenced as you were by your specific desire for a cool-sounding idea to be correct and your more general desire to actually realize your AI concept, saving the world and freeing you to devote your life to whatever you wish. The second method is crippled by the impossibility of testing a transhumanly intelligent AI (because it could by definition trick you into thinking it had passed the test) and by the irrelevance of testing an ethical system on an AI without transhuman intelligence. Ask yourself: how constrained would your actions be if you were forced to obey the code of Hammurabi but had no other ethical impulses at all? Now keep in mind that Hammurabi was actually FAR more like you than an AI will be. He shared almost all of your genes, your very high by human standards intellect, and the empathy that comes from an almost identical brain architecture, but his attempt at a set of rules for humans was a first try, just as your attempt at a set of rules for AIs would be.
  6. Actually, if you are thinking in terms of a set of rules AT ALL, this implies that you are failing to appreciate both a programmer's control over an AI's cognition and an AI's alien nature. If you are thinking in terms of something more sophisticated (and apparently only one person has ever thought in terms of something more sophisticated so far), bear in mind that the first such "more sophisticated" theory was discovered on careful analysis to itself be inadequate, as was the second.

 

If you can't make people change, and you can't make an AI, what can you do to avoid being killed? As I said, I don't know. It's a good bet that money would help, as well as an unequivocal decision to make singularity strategy the focus of your life rather than a hobby. A good knowledge of cognitive psychology and of how people fail to be rational may enable you to better figure out what to do with your money, and may enable you to better co-ordinate your efforts with other serious and rational transhumanists without making serious mistakes. If you are willing to try, please let's keep in touch. Seriously, even if you discount your future at a very high rate, I think that you will find that living rationally and trying to save the world is much more fun and satisfying than the majority of stuff that even very smart people spend their time doing. It really really beats pretending to do the same, yet even such pretending is or once was a very popular activity among top-notch transhumanists.

Aiming at true rationality will be very difficult in the short run, a period of time which humans who expect to live for less than a century are prone to consider the long run. It entails absolutely no social support from non-transhumanists, and precious little from transhumanists, most of whom will probably resent the implicit claim that they should be more rational. If you haven't already, it will also require you to put your everyday life in order and acquire the ability to interact positively with people of a less speculative character. You will get no VC or angel funding, terribly limited grant money, and in general no acknowledgement of any expertise you acquire. On the other hand, if you already have some worthwhile social relationships, you will be shocked by just how much these relationships improve when you dedicate yourself to shaping them rationally. The potential of mutual kindness, when even one partner really decides not to do anything to undermine it, shines absolutely beyond the dreams of self-help authors.

If you have not personally acquired a well-paying job, in the short term I recommend taking the actuarial tests. Actuarial positions, while somewhat boring, do provide practice in rationally analyzing data of a complexity that denies intuitive analysis or analytical automatonism. They also pay well, require no credentials other than tests in what should be mandatory material for anyone aiming at rationality, and have top job security in jobs that are easy to find and only require 40 hours per week of work. If you are competent with money, a few years in such a job should give you enough wealth to retire to some area with a low cost of living and analyze important questions. A few years more should provide the capital to fund your own research. If you are smart enough to build an AI's morality it should be a breeze to burn through the 8 exams in a year, earn a six-figure income, and get returns on investment far better than Buffett does. On the other hand, doing that doesn't begin to suggest that you are smart enough to build an AI's morality. I'm not convinced that anything does.

Fortunately ordinary geniuses with practiced rationality can contribute a great deal to the task of saving the world. Even more fortunately, so long as they are rational they can co-operate very effectively even if they don't share an ethical system. Eternity is an intrinsically shared prize. On this task more than any other the actual behavioral difference between an egoist, altruist, or even a Kantian should fade to nothing in terms of its impact on actual behavior. The hard part is actually being rational, which requires that you postpone the fun but currently irrelevant arguments until the pressing problem is solved, even perhaps with the full knowledge that you are actually probably giving them up entirely, as they may be about as interesting as watching moss grow post-singularity. Delaying gratification in this manner is not a unique difficulty faced by transhumanists. Anyone pursuing a long-term goal, such as a medical student or PhD candidate, does the same. The special difficulty that you will have to overcome is the difficulty of staying on track in the absence of social support or of appreciation of the problem, and the difficulty of overcoming your mind's anti-religion defenses, which will be screaming at you to cut out the fantasy and go live a normal life, with the normal empty set of beliefs about the future and its potential.

Another important difficulty to overcome is the desire for glory. It isn't important that the ideas that save the world be your ideas. What matters is that they be the right ideas. In ordinary life, the satisfaction that a person gains from winning an argument may usually be adequate compensation for walking away without having learned what they should have learned from the other side, but this is not the case when you elegantly prove to your opponent and yourself that the pie you are eating is not poisoned. Another glory-related concern is that of allowing science fiction to shape your expectations of the actual future. Yes, it may be fun and exciting to speculate on government conspiracies to suppress nanotech, but even if you are right, conspiracy theories don't have enough predictive power to test or to guide your actions. If you are wrong, you may well end up clinically paranoid. Conspiracy thrillers are pleasant silly fun. Go ahead and read them if you lack the ability to take the future seriously, but don't end up in an imaginary one; that is NOT fun.

Likewise, don't trust science fiction when it implies that you have decades or centuries left before the singularity. You might, but you don't know that; it all depends on who actually goes out and makes it happen. Above all, don't trust its depictions of the sequence in which technologies will develop or of the actual consequences of technologies that enhance intelligence. These are just some author's guesses. Worse still, they aren't even the author's best guesses; they are the result of a lop-sided compromise between the author's best guess and the set of technologies that best fit the story the author wants to tell. So you want to see Mars colonized before the singularity. That's common in science fiction, right? So it must be reasonably likely. Sorry, but that is not how a rational person estimates what is likely. Heuristics and Biases will introduce you to the representativeness heuristic: roughly speaking, the degree to which a scenario fits a preconceived mental archetype. People who haven't actively optimized their rationality typically use representativeness as their estimate of probability, because we are designed to do so automatically and so find it very easy to do. In the real world this doesn't work well. Pay attention to logical relationships instead.

Since I am attempting to approximate a rational person, I don't expect e-mails from any of you to show up in my in-box in a month or two requesting my cooperation on some sensible and realistic project for minimizing existential risk. I don't expect that, but I place a low certainty value on most of my expectations, especially regarding the actions of outlier humans. I may be wrong. Please prove me wrong. The opportunity to find that I am mistaken in my estimates of the probability of finding serious transhumanists is what motivated me to come all the way across the continent. I'm betting we all die in a flash due to the abuse of these technologies. Please help me to be wrong.

Request for proposals for Musk/FLI grants

22 danieldewey 05 February 2015 05:04PM

As a follow-on to the recent thread on purchasing research effectively, I thought it'd make sense to post the request for proposals for projects to be funded by Musk's $10M donation. LessWrong's been a place for discussing long-term AI safety and research for quite some time, so I'd be happy to see some applications come out of LW members.

Here's the full Request for Proposals.

If you have questions, feel free to ask them in the comments or to contact me!

Here's the email FLI has been sending around:

Initial proposals (300–1000 words) due March 1, 2015

The Future of Life Institute, based in Cambridge, MA and headed by Max Tegmark (MIT), is seeking proposals for research projects aimed to maximize the future societal benefit of artificial intelligence while avoiding potential hazards. Projects may fall in the fields of computer science, AI, machine learning, public policy, law, ethics, economics, or education and outreach. This 2015 grants competition will award funds totaling $6M USD.

This funding call is limited to research that explicitly focuses not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial; for example, research could focus on making machine learning systems more interpretable, on making high-confidence assertions about AI systems' behavior, or on ensuring that autonomous systems fail gracefully. Funding priority will be given to research aimed at keeping AI robust and beneficial even if it comes to greatly supersede current capabilities, either by explicitly focusing on issues related to advanced future AI or by focusing on near-term problems, the solutions of which are likely to be important first steps toward long-term solutions.

Please do forward this email to any colleagues and mailing lists that you think would be appropriate.

Proposals

Before applying, please read the complete RFP and list of example topics, which can be found online along with the application form:

    http://futureoflife.org/grants/large/initial

As explained there, most of the funding is for $100K–$500K project grants, which will each support a small group of collaborators on a focused research project with up to three years duration. For a list of suggested topics, see the complete RFP [1] and the Research Priorities document [2]. Initial proposals, which are intended to require merely a modest amount of preparation time, must be received on our website [1] on or before March 1, 2015.

Initial proposals should include a brief project summary, a draft budget, the principal investigator’s CV, and co-investigators’ brief biographies. After initial proposals are reviewed, some projects will advance to the next round, completing a Full Proposal by May 17, 2015. Public award recommendations will be made on or about July 1, 2015, and successful proposals will begin receiving funding in September 2015.

References and further resources

[1] Complete request for proposals and application form: http://futureoflife.org/grants/large/initial

[2] Research Priorities document: http://futureoflife.org/static/data/documents/research_priorities.pdf

[3] An open letter from AI scientists on research priorities for robust and beneficial AI: http://futureoflife.org/misc/open_letter

[4] Initial funding announcement: http://futureoflife.org/misc/AI

Questions about Project Grants: dewey@futureoflife.org

Media inquiries: tegmark@mit.edu

Human Minds are Fragile

21 diegocaleiro 11 February 2015 06:40PM

We are familiar with the thesis that Value is Fragile. This is why we are researching how to impart values to an AGI.

Embedded Minds are Fragile

Besides values, it may be worth remembering that human minds too are very fragile.

A little magnetic tampering with your amygdalas, and suddenly you are a wannabe serial killer. A small dose of LSD can get you to believe you can fly, or that the world will end in 4 hours. Remove part of your ventromedial prefrontal cortex, and suddenly you are so utilitarian even Joshua Greene would call you a psycho.

It requires very little material change to substantially modify a human being's behavior. Same holds for other animals with embedded brains, crafted by evolution and made of squishy matter modulated by glands and molecular gates.

A Problem for Paul-Boxing and CEV?

One assumption underlying Paul-Boxing and CEV is that:

It is easier to specify and simulate a human-like mind than to impart values to an AGI by means of teaching it values directly via code or human language.

Usually we assume that because, as we know, value is fragile. But so are embedded minds. Very little tampering is required to profoundly transform people's moral intuitions. A large fraction of the inmate population in the US has frontal lobe or amygdala malfunctions.

Finding out the simplest description of a human brain that, when simulated, continues to act as that human brain would act in the real world may turn out to be as fragile as, or even more fragile than, concept learning for AGIs.

New, Brief Popular-Level Introduction to AI Risks and Superintelligence

21 LyleN 23 January 2015 03:43PM

The very popular blog Wait But Why has published the first part of a two-part explanation/summary of AI risks and superintelligence, and it looks like the second part will be focused on Friendly AI. I found it very clear, reasonably thorough and appropriately urgent without signaling paranoia or fringe-ness. It may be a good article to share with interested friends.

Update: Part 2 is now here.

Compartmentalizing: Effective Altruism and Abortion

21 Dias 04 January 2015 11:48PM

Cross-posted on my blog and the effective altruism forum with some minor tweaks; apologies if some of the formatting hasn't copied across. The article was written with an EA audience in mind but it is essentially one about rationality and consequentialism.

Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily, this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on both sides of moral issues like the permissibility of abortion are significantly undermined or otherwise affected by EA considerations, especially moral uncertainty.

A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble. He later partially redacted it, and Anna has an excellent article on the subject, but at the very least decompartmentalizing is a very standard part of effective altruism.

Similarly, I think people selectively apply Effective Altruist (EA) principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person’s identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.

Consider your personal views. I’ve certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I’ve learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180 – and I think this is true of many people:

  • Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been passionately anti-aid?
  • Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?
  • Many think EA ideas argue against domestic causes – but did anyone come to this conclusion who had previously been a passionate nationalist?

Yet this is quite worrying. Given the power and scope of many EA ideas, it seems that they should lead to people changing their minds on issues where they had been previously very certain, and indeed emotionally involved.

Obviously we don’t need to apply EA principles to everything – we can probably continue to brush our teeth without need for much reflection. But we probably should apply them to issues which are seen as being very important: given the importance of the issues, any implications of EA ideas would probably be important implications.

Moral Uncertainty

In his PhD thesis, Will MacAskill argues that we should treat normative uncertainty in much the same way as ordinary positive uncertainty; we should assign credences (probabilities) to each theory, and then try to maximise the expected morality of our actions. He calls this idea ‘maximise expected choice-worthiness’, and if you’re into philosophy, I recommend reading the paper. As such, when deciding how to act we should give greater weight to the theories we consider more likely to be true, and also give more weight to theories that consider the issue to be of greater importance.

This is important because it means that a novel view does not have to be totally persuasive to demand our observance. Consider, for example, vegetarianism. Maybe you think there’s only a 10% chance that animal welfare is morally significant – you’re pretty sure they’re tasty for a reason. Yet if the consequences of eating meat are very bad in those 10% of cases (murder or torture, if the animal rights activists are correct), and the advantages are not very great in the other 90% (tasty, some nutritional advantages), we should not eat meat regardless. Taking into account the size of the issue at stake as well as probability of its being correct means paying more respect to ‘minority’ theories.
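
To make the expected choice-worthiness calculation concrete, here is a minimal Python sketch of the vegetarianism example. The 10%/90% credences come from the text above; the utility magnitudes are invented purely for illustration.

```python
# A minimal sketch of 'maximise expected choice-worthiness' applied to
# the vegetarianism example. Credences are from the text; the utility
# magnitudes are illustrative assumptions, not figures from the post.

def expected_choiceworthiness(outcomes):
    """outcomes: (credence in a moral theory, value of the act under it) pairs."""
    return sum(credence * value for credence, value in outcomes)

# Eating meat: very bad if animal welfare counts (10% credence),
# mildly good (tasty, some nutrition) if it doesn't (90% credence).
eat_meat = expected_choiceworthiness([(0.10, -100.0), (0.90, 1.0)])

# Abstaining: a small inconvenience under either theory.
abstain = expected_choiceworthiness([(0.10, -0.5), (0.90, -0.5)])

print(eat_meat, abstain)  # ~ -9.1 vs -0.5: abstaining wins despite 10% credence
```

The point of the sketch is that the minority theory wins not because it is probable, but because the stakes under it are so much larger.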

And this is more of an issue for EAs than for most people. Effective Altruism involves a group of novel moral premises, like cosmopolitanism, the moral imperative for cost-effectiveness and the importance of the far future. Each of these implies that our decisions are in some way very important, so even if we assign them only a small credence, their plausibility implies radical revisions to our actions.

One issue that Will touches on in his thesis is the issue of whether fetuses morally count. In the same way that we have moral uncertainty as to whether animals, or people in the far future, count, so too we have moral uncertainty as to whether unborn children are morally significant. Yes, many people are confident they know the correct answer – but there are many of these on each side of the issue. Given the degree of disagreement on the issue, among philosophers, politicians and the general public, it seems like the perfect example of an issue where moral uncertainty should be taken into account – indeed Will uses it as a canonical example.

Consider the case of a pregnant woman, Sarah, wondering whether it is morally permissible to abort her child1. The alternative course of action she is considering is putting the child up for adoption. In accordance with the level of social and philosophical debate on the issue, she is uncertain as to whether aborting the fetus is morally permissible. If it’s morally permissible, it’s merely permissible – it’s not obligatory. She follows the example from Normative Uncertainty and constructs the following table:

[Image: abortion table 1]

In the best case scenario, abortion has nothing to recommend it, as adoption is also permissible. In the worst case, abortion is actually impermissible, whereas adoption is permissible. As such, adoption dominates abortion.

However, Sarah might not consider this representation adequate. In particular, she thinks that now is not the best time to have a child, and would prefer to avoid it.2 She has made plans which are inconsistent with being pregnant, and prefers not to give birth at the current time. So she amends the table to take into account these preferences.

[Image: abortion table 2]

Now adoption no longer strictly dominates abortion, because she prefers abortion to adoption in the scenario where it is morally permissible. As such, she considers her credences: she finds the pro-choice arguments slightly more persuasive than the pro-life ones, so she assigns a 70% credence to abortion being morally permissible, but only a 30% credence to its being morally impermissible.

Looking at the table with these numbers in mind, intuitively it seems that again it’s not worth the risk of abortion: a 70% chance of saving oneself inconvenience and temporary discomfort is not sufficient to justify a 30% chance of committing murder. But Sarah’s unsatisfied with this unscientific comparison: it doesn’t seem to have much of a theoretical basis, and she distrusts appeals to intuitions in cases like this. What is more, Sarah is something of a utilitarian; she doesn’t really believe in something being impermissible.

Fortunately, there’s a standard tool for making inter-personal welfare comparisons: QALYs. We can convert the previous table into QALYs, with the moral uncertainty now being expressed as uncertainty as to whether saving fetuses generates QALYs. If it does, then it generates a lot: supposing she’s at the end of her first trimester, if she doesn’t abort the baby it has a 98% chance of surviving to birth, at which point its life expectancy is 78.7 in the US, for 0.98 × 78.7 = 77.126 QALYs. This calculation assigns no QALYs to the fetus’s 6 months of existence between now and birth. If fetuses are not worthy of ethical consideration, then it accounts for 0 QALYs.

We also need to assign QALYs to Sarah. For an upper bound, being pregnant is probably not much worse than having both your legs amputated without medication, which has a disability weight of 0.494, so let’s conservatively use that figure. She has an expected 6 months of pregnancy remaining, so we divide by 2 to get 0.247 QALYs. Women’s Health Magazine gives the odds of maternal death during childbirth at 0.03% for 2013; we’ll round up to 0.05% to take into account the risk of non-fatal injury. Women at 25 have a remaining life expectancy of around 58 years, so that’s 0.05% × 58 = 0.029 QALYs. In total that gives us an estimate of 0.247 + 0.029 = 0.276 QALYs. If the baby doesn’t survive to birth, however, some of these costs will not be incurred, so the truth is probably slightly lower than this. All in all, 0.276 QALYs seems like a reasonably conservative figure.

Obviously you could refine these numbers a lot (for example, years of old age are likely to be at lower quality of life, there are some medical risks to the mother from aborting a fetus, etc.) but they’re plausibly in the right ballpark. They would also change if we used inherent temporal discounting, but probably we shouldn’t.

[Image: abortion table 3]

We can then take into account her moral uncertainty directly, and calculate the expected QALYs of each action:

  • If she aborts the fetus, our expected QALYs are 70% × 0 + 30% × (-77.126) = -23.138
  • If she carries the baby to term and puts it up for adoption, our expected QALYs are 70% × (-0.276) + 30% × (-0.276) = -0.276

Which again suggests that the moral thing to do is to not abort the baby. Indeed, the life expectancy is so long at birth that it quite easily dominates the calculation: Sarah would have to be extremely confident in rejecting the value of the fetus to justify aborting it. So, mindful of overconfidence bias, she decides to carry the child to term.
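
As a sanity check, here is a small Python sketch reproducing the arithmetic above, using the figures from the text (the variable names are mine):

```python
# A quick check of the expected-QALY arithmetic, using the text's figures.

credence_fetus_counts = 0.30

# QALYs at stake for the fetus, if fetuses count morally:
fetus_qalys = 0.98 * 78.7              # ~77.126

# Sarah's cost of carrying to term:
pregnancy_cost = 0.494 / 2             # six months at disability weight 0.494
mortality_cost = 0.0005 * 58           # 0.05% risk times ~58 remaining years
mother_cost = pregnancy_cost + mortality_cost   # ~0.276

expected_abort = credence_fetus_counts * -fetus_qalys   # fetus loss risked
expected_adopt = -mother_cost                           # cost borne either way

print(round(expected_abort, 3), round(expected_adopt, 3))  # -23.138 -0.276
```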

Indeed, we can show just how confident in the lack of moral significance of the fetuses one would have to be to justify aborting one. Here is a sensitivity table, showing credence in moral significance of fetuses on the y axis, and the direct QALY cost of pregnancy on the x axis for a wide range of possible values. The direct QALY cost of pregnancy is obviously bounded above by its limited duration. As is immediately apparent, one has to be very confident in fetuses lacking moral significance, and pregnancy has to be very bad, before aborting a fetus becomes even slightly QALY-positive. For moderate values, it is extremely QALY-negative.

[Image: abortion table 4]
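
Here is a sketch of how such a sensitivity table might be generated. The 77.126 figure comes from the earlier calculation; the particular grid of credences and pregnancy costs is an illustrative assumption:

```python
# Illustrative sensitivity grid: expected QALYs of aborting rather than
# adopting, for various credences that fetuses count and various direct
# QALY costs of pregnancy. The grid values are assumptions.

FETUS_QALYS = 77.126  # from the calculation above

credences = (0.001, 0.005, 0.01, 0.05, 0.1)
pregnancy_costs = (0.05, 0.1, 0.25, 0.5)  # bounded above by pregnancy's duration

print("credence " + "".join(f"{c:>8}" for c in pregnancy_costs))
for p in credences:
    # Aborting avoids the pregnancy cost but risks destroying the
    # fetus's expected QALYs with probability p.
    row = "".join(f"{(cost - p * FETUS_QALYS):8.3f}" for cost in pregnancy_costs)
    print(f"{p:<9}" + row)

# Only the very-low-credence, high-cost corner comes out positive,
# matching the conclusion in the text.
```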

Other EA concepts and their applications to this issue

Of course, moral uncertainty is not the only EA principle that could have bearing on the issue, and given that the theme of this blogging carnival, and this post, is things we’re overlooking, it would be remiss not to give at least a broad overview of some of the others. Here, I don’t intend to judge how persuasive any given argument is – as we discussed above, this is a debate that has been going without settlement for thousands of years – but merely to show the ways that common EA arguments affect the plausibility of the different arguments. This is a section about the directionality of EA concerns, not on the overall magnitudes.

Not really people

One of the most important arguments for the permissibility of abortion is that fetuses are in some important sense ‘not really people’. In many ways this argument resembles the anti-animal-rights argument that animals are also ‘not really people’. We already covered above the way that considerations of moral uncertainty undermine both these arguments, but it’s also noteworthy that in general it seems that the two views are mutually supporting (or mutually undermining, if both are false). Animal-rights advocates often appeal to the idea of an ‘expanding circle’ of moral concern. I’m skeptical of such an argument, but it seems clear that the larger your sphere, the more likely fetuses are to end up on the inside. The fact that, in the US at least, animal activists tend to be pro-abortion seems to be more of a historical accident than anything else. We could imagine alternative-universe political coalitions, where a “Defend the Weak; They’re morally valuable too” party faced off against an “Exploit the Weak; They just don’t count” party. In general, to the extent that EAs care about animal suffering (even insect suffering), EAs should tend to be concerned about the welfare of the unborn.

Not people yet

A slightly different common argument is that while fetuses will eventually be people, they’re not people yet. Since they’re not people right now, we don’t have to pay any attention to their rights or welfare right now. Indeed, many people make short-sighted decisions that implicitly assign very little value to the futures of people currently alive, or even to their own futures – through self-destructive drug habits, or simply failing to save for retirement. If we don’t assign much value to our own futures, it seems very sensible to disregard the futures of those not even born. And even if people who disregarded their own futures were simply negligent, we might still be concerned about things like the non-identity problem.

Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations. We put considerable resources into investigating how to help them, whether through addressing climate change or existential risks. And yet these people have far less of a claim to current personhood than fetuses, who at least have current physical form, even if it is diminutive. So again to the extent that EAs care about future welfare, EAs should tend to be concerned about the welfare of the unborn.

Replaceability

Another important EA idea is that of replaceability. Typically this arises in contexts of career choice, but there is a different application here. The QALYs associated with aborted children might not be so bad if the mother will go on to have another child instead. If she does, the net QALY loss is much lower than the gross QALY loss. Of course, the benefits of aborting the fetus are equivalently much smaller – if she has a child later on instead, she will have to bear the costs of pregnancy eventually anyway. This resembles concerns that maybe saving children in Africa doesn’t make much difference, because their parents adjust their subsequent fertility.

The plausibility behind this idea comes from the idea that, at least in the US, most families have a certain ideal number of children in mind, and basically achieve this goal. As such, missing an opportunity to have an early child simply results in having another later on.

If this were fully true, utilitarians might decide that abortion actually has no QALY impact at all – all it does is change the timing of events. On the other hand, fertility declines with age, so many couples planning to have a replacement child later may be unable to do so. Also, some people do not have ideal family size plans.

Additionally, this does not really seem to hold when the alternative is adoption; presumably a woman putting a child up for adoption does not consider it as part of her family, so her future childbearing would be unaffected. This argument might hold if raising the child yourself was the only alternative, but given that adoption services are available, it does not seem to go through.

Autonomy

Sometimes people argue for the permissibility of abortion through autonomy arguments. “It is my body”, such an argument would go, “therefore I may do whatever I want with it.” To a certain extent this argument is addressed by pointing out that one’s bodily rights presumably do not extend to killing others, so if the anti-abortion side are correct, or even have a non-trivial probability of being correct, autonomy would be insufficient. It seems that if the autonomy argument is to work, it must be because a different argument has established the non-personhood of fetuses – in which case the autonomy argument is redundant. Yet even putting this aside, this argument is less appealing to EAs than to non-EAs, because EAs often hold a distinctly non-libertarian account of personal ethics. We believe it is actually good to help people (and avoid hurting them), and perhaps that it is bad to avoid doing so. And many EAs are utilitarians, for whom helping/not-hurting is not merely laud-worthy but actually compulsory. EAs are generally not very impressed with Ayn Rand-style autonomy arguments for rejecting charity, so again EAs should tend to be unsympathetic to autonomy arguments for the permissibility of abortion.

Indeed, some EAs even think we should be legally obliged to act in good ways, whether through laws against factory farming or tax-funded foreign aid.

Deontology

An argument often used on the opposite side – that is, an argument used to oppose abortion – is that abortion is murder, and murder is simply always wrong. Whether because God commanded it or Kant derived it, we should place the utmost importance on never murdering. I’m not sure that any EA principle directly pulls against this, but nonetheless most EAs are consequentialists, who believe that all values can be compared. If aborting one child would save a million others, most EAs would probably endorse the abortion. So I think this is one case where a common EA view pulls in favor of the permissibility of abortion.

I didn’t ask for this

Another argument often used for the permissibility of abortion is that the situation is in some sense unfair. If you did not intend to become pregnant – perhaps even took precautions to avoid becoming so – but nonetheless end up pregnant, you’re in some way not responsible for becoming pregnant. And since you’re not responsible for it, you have no obligations concerning it – so you may permissibly abort the fetus.

However, once again this runs counter to a major strand of EA thought. Most of us did not ask to be born in rich countries, or to be intelligent, or hardworking. Perhaps it was simply luck. Yet being in such a position nonetheless means we have certain opportunities and obligations. Specifically, we have the opportunity to use our wealth to significantly aid those less fortunate than ourselves in the developing world, and many EAs would agree we have the obligation too. So EAs seem to reject the general idea that not intending a situation relieves one of the responsibilities of that situation.

Infanticide is okay too

A frequent argument against the permissibility of aborting fetuses is by analogy to infanticide. In general it is hard to produce a coherent criterion that permits the killing of babies before birth but forbids it after birth. For most people, this is a reasonably compelling objection: murdering innocent babies is clearly evil! Yet some EAs actually endorse infanticide. If you were one of those people, this particular argument would have little sway over you.

Moral Universalism

A common implicit premise in many moral discussions is that the same moral principles apply to everyone. When Sarah did her QALY calculation, she counted the baby’s QALYs as equally important to her own in the scenario where they counted at all. Similarly, both sides of the debate assume that whatever the answer is, it will apply fairly broadly. Perhaps permissibility varies by age of the fetus – maybe ending when viability hits – but the same answer will apply to rich and poor, Christian and Jew, etc.

This is something some EAs might reject. Yes, saving the baby produces many more QALYs than Sarah loses through the pregnancy, and that would be the end of the story if Sarah were simply an ordinary person. But Sarah is an EA, and so has a much higher opportunity cost for her time. Becoming pregnant will undermine her career as an investment banker, the argument would go, which in turn prevents her from donating to AMF and saving a great many lives. Because of this, Sarah is in a special position – it is permissible for her, but it would not be permissible for someone who wasn’t saving many lives a year.

I think this is a pretty repugnant attitude in general, and a particularly objectionable instance of it, but I include it here for completeness.

May we discuss this?

Now that we’ve considered these arguments, it appears that applying general EA principles to the issue tends to make abortion look less morally permissible, though there were one or two exceptions. But there is also a second-order issue that we should perhaps address – is it permissible to discuss this issue at all?

Nothing to do with you

A frequently seen argument on this issue is to claim that the speaker has no right to opine on the issue. If it doesn’t personally affect you, you cannot discuss it – especially if you’re privileged. As many (a majority?) of EAs are male, and of the women many are not pregnant, this would curtail dramatically the ability of EAs to discuss abortion. This is not so much an argument on one side or other of the issue as an argument for silence.

Leaving aside the inherent virtues and vices of this argument, it is not very suitable for EAs, because EAs have many, many opinions on topics that don’t directly affect them:

  • EAs have opinions on disease in Africa, yet most have never been to Africa, and never will
  • EAs have opinions on (non-human) animal suffering, yet most are not non-human animals
  • EAs have opinions on the far future, yet live in the present

Indeed, EAs seem more qualified to comment on abortion – as we all were once fetuses, and many of us will become pregnant. If taken seriously, this argument would call foul on virtually every EA activity! And this is no idle fantasy – there are certainly some people who think that Westerners cannot usefully contribute to solving African poverty.

Too controversial

We can safely say this is a somewhat controversial issue. Perhaps it is too controversial – maybe it is bad for the movement to discuss. One might accept the arguments above – that EA principles generally undermine the traditional reasons for thinking abortion is morally permissible – yet think we should not talk about it. The controversy might divide the community and undermine trust. Perhaps it might deter newcomers. I’m somewhat sympathetic to this argument – I take the virtue of silence seriously, though eventually my boyfriend persuaded me it was worth publishing.

Note that the controversial nature is evidence against abortion’s moral permissibility, due to moral uncertainty.

However, the EA movement is no stranger to controversy.

  • There is a semi-official EA position on immigration, which is about as controversial as abortion in the US at the moment, and the EA position is such an extreme position that essentially no mainstream politicians hold it.
  • There is a semi-official EA position on vegetarianism, which is pretty controversial too, as it involves implying that the majority of Americans are complicit in murder every day.

Not worthy of discussion

Finally, another objection to discussing this is that it simply isn’t an EA issue. There are many disagreements in the world, yet there is no need for an EA view on each. Conflict between the Lilliputians and Blefuscudians notwithstanding, there is no need for an EA perspective on which end of the egg to break first. And we should be especially careful of heated, emotional topics with less avenue to pull the rope sideways. As such, even if the object-level arguments given above are correct, we should simply decline to discuss it.

However, it seems that if abortion is a moral issue, it is a very large one. In the same way that the sheer number of QALYs lost makes abortion worse than adoption even if our credence in fetuses having moral significance is very low, the large number of abortions occurring each year makes the issue as a whole highly significant. In 2011, over 1 million babies were aborted in the US. I’ve seen a wide range of global estimates, from around 10 million to over 40 million. By contrast, the WHO estimates there are fewer than 1 million malaria deaths worldwide each year. Abortion deaths also cause a higher loss of QALYs due to the young age at which they occur. On the other hand, we should discount them for the uncertainty that they are morally significant. And perhaps there is an even larger closely related moral issue. The size of the issue is not the only factor in estimating the cost-effectiveness of interventions, but it is the most easily estimable. On the other hand, I have little idea how many dollars of donations it takes to save a fetus – it seems like an excellent example of low-hanging research fruit.

Conclusion

People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily, this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important. In this post we considered the implications of common EA beliefs for the permissibility of abortion. Taking into account moral uncertainty makes aborting a fetus seem far less permissible, as the high counterfactual life expectancy of the baby tends to dominate other factors. Many other EA views are also significant to the issue, making various standard arguments on each side less plausible.

 


  1. There doesn’t seem to be any neutral language one can use here, so I’m just going to switch back and forth between ‘fetus’ and ‘child’ or ‘baby’ in a vain attempt at terminological neutrality. 
  2. I chose this reason because it is the most frequently cited main motivation for aborting a fetus according to the Guttmacher Institute. 

Training Reflective Attention

21 BrienneStrohl 21 December 2014 12:53PM

Crossposted at Agenty Duck

And somewhere in the back of his mind was a small, small note of confusion, a sense of something wrong about that story; and it should have been a part of Harry's art to notice that tiny note, but he was distracted. For it is a sad rule that whenever you are most in need of your art as a rationalist, that is when you are most likely to forget it. —HPMOR, Ch. 3

A rationalist’s art is most distant when it is most needed. Why is that?

When I am very angry with my romantic partner, what I feel is anger. I don’t feel the futility of throwing a tantrum, or the availability of other options like honest communication, or freewriting, or taking a deep breath. My attention is so narrowly focused on the object of my anger that I’m likely not even aware that I’m angry, let alone that my anger might be blinding me to my art.

When her skills are most needed, a rationalist is lost in an unskillful state of mind. She doesn’t recognize that it’s happening, and she doesn’t remember that she has prepared for it by learning and practicing appropriate techniques.

I've designed an exercise that trains a skill I call reflective attention, and some call mindfulness. For me, it serves as an anchor in a stormy mind, or as a compass pointing always toward a mental state where my art is close at hand.

Noticing that I am lost in an unskillful state of mind is a separate skill. But when I do happen to notice—when I feel that small, small note of confusion—reflective attention helps me find my way back. Instead of churning out even more pointless things to yell at my partner, it allows me to say, “I am angry. I feel an impulse to yell. I notice my mind returning over and over to the memory that makes me more angry. I’m finding it hard to concentrate. I am distracted. I have a vague impression that I have prepared for this.” And awareness of that final thought allows me to ask, “What have I trained myself to do when I feel this way?”

The goal of the following exercise is to practice entering reflective attention.

It begins with an instruction to think of nothing. When you monitor yourself to make sure you’re not having any thoughts, your attention ends up directed toward the beginnings of thoughts. Since the contents of consciousness are always changing, maintaining focus on the beginnings of thoughts prevents you from engaging for an extended period with any particular thought. It prevents you from getting “lost in thought”, or keeping attention focused on a thought without awareness of doing so. The point is not actually to be successful at thinking nothing, as that is impossible while conscious, but to notice what happens when you try.

Keeping your focus on the constant changes in your stream of consciousness brings attention to your experience of awareness itself. Awareness of awareness is the anchor for attention. It lets you keep your bearings when you’d otherwise be carried away by a current of thought or emotion.

Once you’re so familiar with the feeling of reflection that creating it is a primitive action, you can forget the introductory part, and jump straight to reflective attention whenever it occurs to you to do so.


This will probably take around five minutes, but you can do it for much longer if you want to.

Notice what your mind is doing right now. One thing it’s doing is experiencing sensations of black and white as you read. What else are you experiencing? Are there words in your inner monologue? Are there emotions of any kind?

Spend about thirty seconds trying not to think anything. When thirty seconds is up, stop trying not to think, and read on.

.

.

.

What’s happening in your mind is constantly changing. Even when you were trying not to think, you probably noticed many times when the stillness would shift and some new thought would begin to emerge in conscious awareness.

Turn your attention to those changes. When a new thought emerges in consciousness, see if you can notice the exact moment when it happens, becoming aware of what it feels like for that particular change to take place.

If it helps at first, you can narrate your stream of consciousness in words: “Now I’m seeing the blue of the wall, now I’m hearing the sound of a car, now I’m feeling cold, now I’m curious what time it is…” You’ll probably find that you can’t narrate anywhere near quickly enough, in part because thoughts can happen in parallel, while speech is serial. Once narrating starts to become frustrating, stop slowing yourself down with words, and just silently observe your thoughts as they occur.

If you’re finding this overwhelming because there are too many thoughts, narrow your focus down to just your breathing, and try to precisely identify the experience of an exhale ending and an inhale beginning, of an inhale ending and an exhale beginning. Keep doing that until you feel comfortable with it, and then slowly expand your attention a little at a time: to other experiences associated with breathing, to non-breath-related bodily sensations, to non-tactile sensations from your environment, and finally to internal mental sensations like emotions.

If you notice an impulse to focus your attention on a particular thought, following it and engaging with it—perhaps you notice you feel hungry, and in response you begin to focus your attention on planning lunch—instead of letting that impulse take over your attention, recognize it as yet another change in the activity of your mind. If you’re narrating, say, “now I’m feeling an impulse to plan my lunch”, and keep your focus broad enough to catch the next thought when it arises. If you realize that you’ve already become lost in a particular thought, notice that realization itself as a new thought, and return to observing your stream of consciousness by noticing the next new thought that happens as well.

.

.

.

You might need to practice this many times before you get the hang of it. I suggest trying it for ten minutes to half an hour a day until you do.

Once you feel like you can recognize the sensation of reflective attention and enter that state of mind reliably given time, begin to train for speed. Instead of setting a timer for fifteen minutes or however long you want to practice, set it to go off every minute for the first half of your practice, spending one minute in reflective attention, and one minute out. (Don’t do this for all of your practice. You still need to practice maintenance.) When you can consistently arrive in reflective attention by the end of the minute, cut the intervals down to 45 seconds, then thirty, fifteen, and five.


In real life, the suspicion that you may be lost in an unskillful state of mind will be quiet and fleeting. “Quiet” means you’ll need to learn to snap your attention to the slightest hint of that feeling. For that, you’ll need to train “noticing”. “Fleeting” means you’ll need to be able to respond in less than five seconds. You’ll need to begin the process in less than one second, even if it takes a little longer to fully arrive in reflective attention. For that, training for speed is crucial.

Low Hanging fruit for buying a better life

20 taryneast 06 January 2015 10:11AM

What can I purchase with $100 that will be the best thing I can buy to make my life better?

 

I've decided to budget some regular money to improving my life each month. I'd like to start with low hanging fruit for obvious reasons - but when I sat down to think of improvements, I found myself thinking of the same old things I'd already been planning to do anyway... and I'd like out of that rut.

Constraints/more info:

 

  1. be concrete. I know - "spend money on experiences" is a good idea - but what experiences are the best option to purchase *first*?
  2. "better" is deliberately left vague - choose how you would define it, so that I'm not constrained just by ways of "being better" that I'd have thought of myself.
  3. please assume that I have all my basic needs met (eg food, clothing, shelter) and that I have budgeted separately for things like investing for my financial future and for charity.
  4. apart from the above, assume nothing - Especially don't try and tailor solutions to anything you might know and/or guess about me specifically, because I think this would be a useful resource for others who might have just begun.
  5. don't constrain yourself to exactly $100 - I could buy 2-3 things for that, or I could save up over a couple of months and buy something more expensive... I picked $100 because it's a round number and easy to imagine.
  6. it's ok to add "dumb" things - they can help spur great ideas, or just get rid of an elephant in the room.
  7. try thinking of your top-ten before reading any comments, in order not to bias your initial thinking. Then come back and add ten more once you've been inspired by what everyone else came up with.

 

Background:

This is a question I recently posed to my local Less Wrong group and we came up with a few good ideas, so I thought I'd share the discussion with the wider community and see what we can come up with. I'll add the list we came up with later on in the comments...

It'd be great to have a repository of low-hanging fruit for things that can be solved with (relatively affordable) amounts of money. I'd personally like to go through the list - look at candidates that sound like they'd be really useful to me and then make a prioritised list of what to work on first.

Recent AI safety work

20 paulfchristiano 30 December 2014 06:19PM

(Crossposted from ordinary ideas). 

I’ve recently been thinking about AI safety, and some of the writeups might be interesting to some LWers:

  1. Ideas for building useful agents without goals: approval-directed agents, approval-directed bootstrapping, and optimization and goals. I think this line of reasoning is very promising.
  2. A formalization of one piece of the AI safety challenge: the steering problem. I am eager to see more precise, high-level discussion of AI safety, and I think this article is a helpful step in that direction. Since articulating the steering problem I have become much more optimistic about versions of it being solved in the near term. This mostly means that the steering problem fails to capture the hardest parts of AI safety. But it’s still good news, and I think it may eventually cause some people to revise their understanding of AI safety.
  3. Some ideas for getting useful work out of self-interested agents, based on arguments: of arguments and wagers, adversarial collaboration [older], and delegating to a mixed crowd. I think these are interesting ideas in an interesting area, but they have a ways to go until they could be useful.

I’m excited about a few possible next steps:

  1. Under the (highly improbable) assumption that various deep learning architectures could yield human-level performance, could they also predictably yield safe AI? I think we have a good chance of finding a solution---i.e. a design of plausibly safe AI, under roughly the same assumptions needed to get human-level AI---for some possible architectures. This would feel like a big step forward.
  2. For what capabilities can we solve the steering problem? I had originally assumed none, but I am now interested in trying to apply the ideas from the approval-directed agents post. From easiest to hardest, I think there are natural lines of attack using any of: natural language question answering, precise question answering, sequence prediction. It might even be possible using reinforcement learners (though this would involve different techniques).
  3. I am very interested in implementing effective debates, and am keen to test some unusual proposals. The connection to AI safety is more impressionistic, but in my mind these techniques are closely linked with approval-directed behavior.
  4. I’m currently writing up a concrete architecture for approval-directed agents, in order to facilitate clearer discussion of the idea. This is the kind of work that seems harder to do in advance, but at this point I think it’s mostly an exposition problem.

[Link] Eric S. Raymond - Me and Less Wrong

20 philh 05 December 2014 11:44PM

http://esr.ibiblio.org/?p=6549

I’ve gotten questions from a couple of different quarters recently about my relationship to the rationalist community around Less Wrong and related blogs. The one sentence answer is that I consider myself a fellow-traveler and ally of that culture, but not really part of it nor particularly wishing to be.

The rest of this post is a slightly longer development of that answer.

[LINK] The P + epsilon Attack (Precommitment in cryptoeconomics)

18 DanielVarga 29 January 2015 02:02AM

Vitalik Buterin has a new post about an interesting theoretical attack against Bitcoin. The idea relies on the assumption that the attacker can credibly commit to something quite crazy. The crazy thing is this: paying out 25.01 BTC to all the people who help him in his attack to steal 25 BTC from everyone, but only if the attack fails. This leads to a weird payoff matrix in which the dominant strategy is to help him in the attack. So the attack succeeds, and no payout is made.

Of course, smart contracts make such crazy commitments perfectly possible, so this is a bit less theoretical than it sounds. But even as an abstract thought experiment about decision theories, it looks pretty interesting.

By the way, Vitalik Buterin is really on a roll. Just a week ago he had a thought-provoking blog post about how Decentralized Autonomous Organizations could possibly utilize a concept often discussed here: decision theory in a setup where agents can inspect each others' source code. It was shared on LW Discussion, but earned less exposure than I think it deserved.

EDIT 1: A smart commenter on the original post spotted that an isomorphic, extremely cool game was already proposed by billionaire Warren Buffett. Does this thing already have a name in game theory, maybe?

 

EDIT 2: I wrote the game up in detail for some old-school game theorist friends:

The attacker orchestrates a game with 99 players. The attacker himself does not participate in the game.

Rules:

Each of the players can either defect or cooperate, in the usual game-theoretic setup: they announce their decisions simultaneously, without side channels. We call the decision made by the majority of the players the "aggregate outcome". If the aggregate outcome is defection, we say that the attack succeeds. A player's payoff consists of two components:

1. If her decision coincides with the aggregate outcome, the player gets 10 utilons.

and simultaneously:

2. if the attack succeeds, the attacker takes 1 utilon from each of the 99 players, regardless of their individual decisions.

                | Cooperate  | Defect
Attack fails    |        10  | 0
Attack succeeds |        -1  | 9

There are two equilibria (everyone cooperates, everyone defects), but the second payoff component breaks the symmetry, and everyone will cooperate.

Now the attacker spices things up by making a credible commitment before the game. ("Credible" simply means that somehow they make sure that the promise cannot be broken. The classic way to achieve such things is an escrow, but so-called smart contracts are emerging as a method for making fully unbreakable commitments.)

The attacker's commitment is quite counterintuitive: he promises that he will pay 11 utilons to each of the defecting players, but only if the attack fails.

Now the payoff looks like this:

                | Cooperate  | Defect
Attack fails    |        10  | 11
Attack succeeds |        -1  | 9

Defection becomes a dominant strategy. The clever thing, of course, is that if everyone defects, then the attacker reaches his goal without paying out anything.
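To check the dominance claim mechanically, here is a minimal sketch in Python (my own illustration, not from Buterin's post) that recomputes both payoff tables from the two payoff components:

def payoff(defects, attack_succeeds, commitment=False):
    """Utilons for one player, given the aggregate outcome.

    defects: True if this player defects
    attack_succeeds: True if the majority defected
    commitment: True once the attacker has promised 11 utilons
                to every defector in case the attack fails
    """
    p = 10 if defects == attack_succeeds else 0   # component 1: match the majority
    if attack_succeeds:
        p -= 1                                    # component 2: attacker takes 1 utilon
    if commitment and defects and not attack_succeeds:
        p += 11                                   # the attacker's conditional promise
    return p

for commitment in (False, True):
    print("with commitment:" if commitment else "without commitment:")
    for succeeds in (False, True):
        print("  attack %-9s cooperate=%3d  defect=%3d" % (
            "succeeds:" if succeeds else "fails:",
            payoff(False, succeeds, commitment),
            payoff(True, succeeds, commitment)))

Running it reproduces the two tables above: before the commitment neither strategy dominates (each is best exactly when it matches the majority), while after the commitment defection is strictly better in both rows, which is what makes the attack free.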

Who are your favorite "hidden rationalists"?

18 aarongertler 11 January 2015 06:26AM

Quick summary: "Hidden rationalists" are what I call authors who espouse rationalist principles, and probably think of themselves as rational people, but don't always write on "traditional" Less Wrong-ish topics and probably haven't heard of Less Wrong.

I've noticed that a lot of my rationalist friends seem to read the same ten blogs, and while it's great to have a core set of favorite authors, it's also nice to stretch out a bit and see how everyday rationalists are doing cool stuff in their own fields of expertise. I've found many people who push my rationalist buttons in fields of interest to me (journalism, fitness, etc.), and I'm sure other LWers have their own people in their own fields.

So I'm setting up this post as a place to link to/summarize the work of your favorite hidden rationalists. Be liberal with your suggestions!

Another way to phrase this: Who are the people/sources who give you the same feelings you get when you read your favorite LW posts, but who many of us probably haven't heard of?

 

Here's my list, to kick things off:

 

  • Peter Sandman, professional risk communication consultant. Often writes alongside Jody Lanard. Specialties: Effective communication, dealing with irrational people in a kind and efficient way, carefully weighing risks and benefits. My favorite recent post of his deals with empathy for Ebola victims and is a major, Slate Star Codex-esque tour de force. His "guestbook comments" page is better than his collection of web articles, but both are quite good.
  • Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I've seen. His big thing is "superslow training", where you perform short and extremely intense workouts (video here). I've been moving in this direction for about 18 months now, and I've been able to cut my workout time approximately in half without losing strength. May not work for everyone, but reminds me of Leverage Research's sleep experiments; if it happens to work for you, you gain a heck of a lot of time. I also love the way he emphasizes the utility of strength training for all ages/genders -- very different from what you'd see on a lot of weightlifting sites.
  • Philosophers' Mail. A website maintained by applied philosophers at the School of Life, which reminds me of a hippy-dippy European version of CFAR (in a good way). Not much science, but a lot of clever musings on the ways that philosophy can help us live, and some excellent summaries of philosophers who are hard to read in the original. (Their piece on Vermeer is a personal favorite, as is this essay on Simon Cowell.) This site recently stopped posting new material, but the School of Life now collects similar work through The Book of Life.

Finally, I'll mention something many more people are probably aware of: I Am A, where people with interesting lives and experiences answer questions about those things. Few sites are better for broadening one's horizons; lots of concentrated honesty. Plus, the chance to update on beliefs you didn't even know you had.



Once more: Who are the people/sources who give you the same feeling you get when you read your favorite LW posts, but who many of us probably haven't heard of?

 

The new GiveWell recommendations are out: here's a summary of the charities

18 tog 01 December 2014 09:20PM

GiveWell have just announced their latest charity recommendations! What are everyone’s thoughts on them?

A summary: all of the old charities (GiveDirectly, SCI and Deworm the World) remain on the list. They're rejoined by AMF, as the room-for-more-funding issues that led to it being delisted have been resolved to GiveWell's satisfaction. Together these organisations form GiveWell's list of 'top charities', which is now joined by a list of other charities that they see as excellent but not quite in the top tier. The charities on this list are Development Media International, Living Goods, and two salt fortification programs (run by GAIN and ICCIDD).

As normal, GiveWell's site contains extremely detailed writeups on these organisations. Here are some shorter descriptions which I wrote for Charity Science's donations page and my tool for donating tax-efficiently, starting with the new entries:

GiveWell's newly-added charities

Boost health and cognitive development with salt fortification

The charities GAIN and ICCIDD run programs that fortify the salt that millions of poor people eat with iodine. There is strong evidence that this boosts their health and cognitive development; iodine deficiency causes pervasive mental impairment, as well as stillbirth and congenital abnormalities such as severe retardation. It can be done very cheaply on a mass scale, so it is highly cost-effective. GAIN is registered in the US and ICCIDD in Canada (although Canadians can give to either via Charity Science, which for complex reasons can help donors give tax-deductibly to other charities), allowing for especially efficient donations from these countries, and taxpayers from other countries can also often give to them tax-deductibly. For more information, read GiveWell's detailed reviews of GAIN and ICCIDD.

Educate millions in life-saving practices with Development Media International

Development Media International (DMI) produces radio and television broadcasts in developing countries that tell people about improved health practices that can save lives, especially those of young children. Examples of such practices include exclusive breastfeeding. DMI are conducting a randomized controlled trial of their program which has found promising indications of a large decrease in children's deaths. With more funds they would be able to reach millions of people, due to the unparalleled reach of broadcasting. For more information, read GiveWell's detailed review.

Bring badly-needed goods and health services to the poor with Living Goods

Living Goods is a non-profit which runs a network of people selling badly-needed health and household goods door-to-door in their communities in Uganda and Kenya, while providing free health advice. A randomized controlled trial suggested that this caused a 25% reduction in under-5 mortality, among other benefits. Products sold range from fortified foods and mosquito nets to cookstoves and contraceptives. Giving to Living Goods is an exciting opportunity to bring these badly needed goods and services to some of the poorest families in the world. For more information, read GiveWell's detailed review.

GiveWell's old and returning charities

Treat hundreds of people for parasitic worms

Deworm the World and the Schistosomiasis Control Initiative (SCI) treat parasitic worm infections such as schistosomiasis, which can cause urinary infections, anemia, and other nutritional problems. For more information, read GiveWell's detailed review, or the more accessible Charity Science summary. Deworm the World is registered in the USA and SCI in the UK, allowing for tax-efficient direct donations in those countries, and taxpayers from other countries can also often give to them efficiently.

Make unconditional cash transfers with GiveDirectly

GiveDirectly lets you empower people to purchase whatever they believe will help them most. Eleven randomized controlled trials have supported cash transfers’ impact, and there is strong evidence that recipients know their own situation best and generally invest in things which make them happier in the long term. For more information, read GiveWell's detailed review, or the more accessible Charity Science summary.

Save lives and prevent infections with the Against Malaria Foundation

Malaria causes about a million deaths and two hundred million infections a year. Thankfully a $6 bednet can stop mosquitoes from infecting children while they sleep, preventing this deadly disease. This intervention has exceptionally robust evidence behind it, with many randomized controlled trials suggesting that it is one of the most cost-effective ways to save lives. The Against Malaria Foundation (AMF) is an exceptional charity in every respect, and was GiveWell's top recommendation in 2012 and 2013. Not all bednet charities are created equal, and AMF outperforms the rest on every count. They can distribute nets more cheaply than most others, for just $6.13 US. They distribute long-lasting nets which don’t need retreating with insecticide. They are extremely transparent and monitor their own impact carefully, requiring photo verification from each net distribution. For more information, read GiveWell's detailed review, or the more accessible Charity Science summary.

How to donate

To find out which charities are tax-deductible in your country and get links to give to them tax-efficiently, you can use this interactive tool that I made. If you give this season, consider sharing the charities you choose on the EA Donation Registry. We can see which charities EAs pick, and which of the new ones prove popular!

Request: Sequences book reading group

17 iarwain1 22 February 2015 01:06AM

The book version of the Sequences is supposed to be published in the next month or two, if I understand correctly. I would really enjoy an online reading group to go through the book together.

Reasons for a reading group:

  • It would give some of us the motivation to actually go through the Sequences finally.
  • I have frequently had thoughts or questions on some articles in the Sequences, but I refrained from commenting because I assumed it would be covered in a later article or because I was too intimidated to ask a stupid question. A reading group would hopefully assume that many of the readers would be new to the Sequences, so asking a question or making a comment without knowing the later articles would not appear stupid.
  • It may even bring back a bit of the blog-style excitement of the "old" LW ("I wonder what exciting new thoughts are going to be posted today?") that many have complained has been missing since the major contributors stopped posting.
I would recommend one new post per day, going in order of the book. I recommend re-posting the entire article to LW, including any edits or additions that are new in the book. Obviously this would require permission from the copyright holder (who is that? is there even going to be a copyright at all?), but I'm hoping that'll be fine.

I'd also recommend trying to make the barriers to entry as low as possible. As noted above, this means allowing people to ask questions / make comments without being required to have already read the later articles. Also, I suggest that people not be required to read all the comments from the original article. If something has already been discussed or if you think a particular comment from the original discussion was very important, then just link to it or quote it.

Finally, I think it would be very useful if some of the more knowledgeable LW members could provide links and references to the corresponding "traditional" academic literature on each article.

Unfortunately, for various reasons I am unwilling to take responsibility for such a reading group. If you are willing to take on this responsibility, please post a comment to that effect below.

Thanks!

[LINK] Wait But Why - The AI Revolution Part 2

17 adamzerner 04 February 2015 04:02PM

Part 1 was previously posted and it seemed that people liked it, so I figured that I should post part 2: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

[LINK] The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics

16 tzachquiel 19 February 2015 06:06PM

Sean Carroll, physicist and proponent of Everettian Quantum Mechanics, has just posted a new article going over some of the common objections to EQM and why they are false. Of particular interest to us as rationalists:

Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. And the actual postulates of the theory are quite simple indeed:

  1. The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
  2. The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.

That is, as they say, it. Notice you don’t see anything about worlds in there. The worlds are there whether you like it or not, sitting in Hilbert space, waiting to see whether they become actualized in the course of the evolution. Notice, also, that these postulates are eminently testable — indeed, even falsifiable! And once you make them (and you accept an appropriate “past hypothesis,” just as in statistical mechanics, and are considering a sufficiently richly-interacting system), the worlds happen automatically.

Given that, you can see why the objection is dispiritingly wrong-headed. You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away. This distinction between what is postulated (which should be testable) and everything that is derived (which clearly need not be) seems pretty straightforward to me, but is a favorite thing for people to get confused about.

Very reminiscent of the quantum physics sequence here! I find that this distinction between number of entities and number of postulates is something that I need to remind people of all the time.
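For readers who want the second postulate in symbols: it is just the standard textbook Schrödinger equation for the state vector in Hilbert space (my addition for reference, not quoted from Carroll's post):

% Unitary evolution of the quantum state under some particular Hamiltonian H
\[
  i\hbar \, \frac{\partial}{\partial t} \lvert \psi(t) \rangle \;=\; \hat{H} \, \lvert \psi(t) \rangle
\]

Everything else - branching, worlds, observers - is supposed to fall out of this deterministic evolution plus the first postulate, which is exactly Carroll's point.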

 

 

META: This is my first post; if I have done anything wrong, or could have done something better, please tell me!

Wisdom for Smart Teens - my talk at SPARC 2014

16 Liron 09 February 2015 06:58PM

I recently had the privilege of a 1-hour speaking slot at SPARC, a yearly two-week camp for top high school math students.

Here's the video: Wisdom for Smart Teens

Instead of picking a single topic, I indulged in a bunch of mini-topics that I feel passionate about:

  1. Original Sight
  2. "Emperor has no clothes" moments
  3. Epistemology is cool
  4. Think quantitatively
  5. Be specific / use examples
  6. Organizations are inefficient
  7. How I use Bayesianism
  8. Be empathizable
  9. Communication
  10. Simplify
  11. Startups
  12. What you want
I think the LW crowd will get a kick out of it.

Money threshold Trigger Action Patterns

15 Neotenic 20 February 2015 04:56AM

In American society, talking about money is taboo. It is ok to talk about how much money someone else made when they sold their company, or how much money you would like to earn yearly if you got a raise, but in many other contexts, talking about money is likely to trigger some embarrassment in the brain and generate social discomfort. As one random example: no one dares suggest that bills should be paid according to wealth; instead, people quietly assume that fair is each paying ~1/n, which of course completely fails utilitarian standards.

Another interesting thing people don't talk about, but which would probably be useful to know, is money trigger action patterns: trigger action patterns that should fire whenever you have more money than X, for varying values of X.

A trivial example: when should you stop caring about pennies, or quarters? When should you start taking cabs or Ubers everywhere? These are minor examples, but there are more interesting questions that would benefit from a money trigger action pattern.

An argument can be made, for instance, that one should invest in health insurance prior to cryonics, cryonics prior to painting a house, and recommended charities before expensive sound systems. But people never put numbers on those things.

When should you buy cryonics and life insurance for it? When you own $1,000? $10,000? $1,000,000? Yes, of course, those thresholds vary from person to person, currency to currency, and with environment, age group, and family size. That is no reason to remain silent about them. Money is the unit of caring, but some people can care about many more things than others in virtue of having more money. Some things are worth caring about if and only if you have that many caring units to spare.

I'd like to see people talking about what one should care about after surpassing specific numeric thresholds of money, and that seems to be an extremely taboo topic. It would be particularly revealing when someone who does not have a certain amount suggests a trigger action pattern, and someone who does have that amount realizes that, indeed, they should purchase that thing. Some people would also calibrate better on whether they need more or less money if they had thought about these thresholds beforehand.

Some suggested items for those who want to try numeric triggers: health insurance, cryonics, 10% donation to favorite cause, virtual assistant, personal assistant, car, house cleaner, masseuse, quitting your job, driver, boat, airplane, house, personal clinician, lawyer, bodyguard, etc...

...notice also that some of these are resource-satisfiable, but some may not be. It may always be more worthwhile to finance your anti-aging helper than your costume designer, so you'd hire the 10-millionth scientist to find out how to keep you young before considering hiring someone to design clothes specifically for you, perhaps because you don't like unique clothes. This is my feeling about boats; it feels like there are always other things that can be done with money that precede having a boat, though the outside view is that a lot of people who own a lot of money buy boats.
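To make the proposal concrete, here is a minimal sketch in Python of what a written-down set of numeric triggers could look like; every threshold below is a placeholder I invented for illustration, not a recommendation:

# Money-threshold trigger action patterns.
# All dollar thresholds are invented placeholders, not advice.
TRIGGERS = [
    (10000,   "buy health insurance"),
    (50000,   "buy life insurance that funds cryonics"),
    (100000,  "set up a 10% donation to your favorite cause"),
    (500000,  "hire a house cleaner"),
    (5000000, "hire a personal assistant"),
]

def due_actions(net_worth):
    """Return every action whose threshold the given net worth has crossed."""
    return [action for threshold, action in TRIGGERS if net_worth >= threshold]

print(due_actions(120000))
# ['buy health insurance', 'buy life insurance that funds cryonics',
#  'set up a 10% donation to your favorite cause']

The value of writing the list down is that crossing a threshold becomes an unambiguous prompt to act, and everyone else gets a concrete number to argue with.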

Intrapersonal comparisons: you might be doing it wrong.

15 fowlertm 03 February 2015 09:34PM

Cross-posted

Nothing weighty or profound today, but I noticed a failure mode in myself which other people might plausibly suffer from so I thought I'd share it.

Basically, I noticed that sometimes when I discovered a more effective way of doing something -- say, going from conventional flashcards to Anki -- I found myself getting discouraged.

I realized that it was because each time I found such a technique, I automatically compared my current self to a version of me that had had access to the technique the whole time. Realizing that I wasn't as far along as I could've been resulted in a net loss of motivation. 

Now, I deliberately compare two future versions of myself, one armed with the technique I just discovered and one without. Seeing how much farther along I will be results in a net gain of motivation.

A variant of this exercise is taking any handicap you might have and wildly exaggerating it. I suffer from mild carpal tunnel syndrome (or something masquerading as it), which makes progress in programming slow. When I feel down about this fact I imagine how hard programming would be without hands.

Sometimes I go as far as to plan out what I might do if I woke up tomorrow with a burning desire to program and nothing past my wrists. Well, I'd probably figure out a way to code by voice and then practice mnemonics because I wouldn't be able to write anything down. Since these solutions exist I can implement one or both of them the moment my carpal tunnel gets bad enough.

With this realization comes a boost in motivation, knowing I can go a different direction if required.

View more: Next