
How I accidentally discovered the pill to enlightenment but I wouldn’t recommend it.

3 Elo 03 January 2018 12:37AM

Main post:

Brief teaser:

Eastern enlightenment is not what you think.  I mean, maybe it is.  But it’s probably not.  There’s a reason it’s so elusive, and there’s a reason that it hasn’t joined western science and the western world the way that curiosity and discovery have as a driving force.

This is the story of my mistake: accidentally discovering enlightenment.

February 2017

I was noticing some weird symptoms.  I felt cold.  Which was strange, because I have never been cold.  Nicknames include “fire” and “hot hands”; my history includes a lot of bad jokes about how I am definitely on fire.  I am known for visiting the snow in shorts and a t-shirt.  I had hit 70kg, the least body fat I have ever had in my life.  And that was the only explanation I had.  I asked a doctor about it, I did some reading – circulation problems.  But I don’t have circulation problems at the age of 25; I am fitter than I have ever been in my life.  I looked into hesperidin (orange peel) and ate a few whole oranges, peel included.  No change.  I looked into other blood pressure supplements, other capillary-modifying supplements, other ideas to investigate.  I decided I couldn’t be missing something, because there was nothing to be missing – I would have read it somewhere already.  So I settled for the obvious answer: being skinnier was making me colder.

Flashback to February 2016

This is where it all begins.  I move out of my parents’ house into an apartment with a girl I have been seeing for under 6 months.  I weigh around 80kg (that’s 12.5 stone, 176 pounds, or 2822 ounces for our imperial friends).  Life happens, and by March I am on my own.  I decide to start running.  Make myself a more desirable human.

I taught myself a lot about routines and habits and actually getting myself to run.  Running is hard.  Actually, running is easy.  Leaving the house is hard.  But I worked that out too.

For the rest of the post please visit:

[Link] 2018 AI Safety Literature Review and Charity Comparison

2 Larks 20 December 2017 10:04PM

[Link] Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

1 Kaj_Sotala 03 January 2018 02:39PM

[Paper]: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence

0 turchin 04 January 2018 02:28PM

There are two views on the best strategy among transhumanists and rationalists. The first holds that one must invest in life extension technologies; the second, that it is necessary to create an aligned AI that will solve all problems, including giving us immortality or even something better. In our article, we show that these two points of view do not contradict each other: the development of AI will be the main driver of increased life expectancy in the coming years, and as a result even currently living people can benefit from (and contribute to) a future superintelligence in several ways.

Firstly, the use of machine learning and narrow AI will enable the study of aging biomarkers and combinations of geroprotectors, producing an increase in life expectancy of several years. This means that tens of millions of people will live long enough to survive until the creation of the superintelligence (whenever it happens) and will be saved from death. In other words, the current application of narrow AI to life extension gives us a chance to reach “longevity escape velocity”, and the rapid growth of AI will be the main factor that, like the wind, helps to increase this velocity.
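The idea of “longevity escape velocity” can be made concrete with a toy model (my illustration, not from the paper): each calendar year, remaining life expectancy falls by one year as time passes, but medical progress adds some gain back. If the annual gain reaches one year per year, remaining life expectancy never shrinks and the person has “escaped”. A minimal sketch, with the function name and parameters chosen for illustration:

```python
def years_survived(initial_remaining: float, gain_per_year: float,
                   horizon: int = 200) -> int:
    """Toy model of longevity escape velocity.

    Each calendar year, remaining life expectancy loses 1 year (time
    passing) and gains `gain_per_year` (medical progress). Returns the
    number of years survived, capped at `horizon` if the person never
    runs out of remaining life expectancy (i.e. has "escaped").
    """
    remaining = initial_remaining
    for year in range(horizon):
        if remaining <= 0:
            return year
        remaining += gain_per_year - 1.0
    return horizon

# With no medical progress, 10 remaining years last exactly 10 years.
# With gains at or above 1 year per year, survival is unbounded
# (here capped at the 200-year horizon).
```

The point of the model is only that the threshold matters: gains of 0.9 years per year merely delay the end, while gains of 1.0 or more change the outcome qualitatively.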

Secondly, we can—here in the present—utilize some possibilities of the future superintelligence by collecting data for “digital immortality”. Based on these data, the future AI could reconstruct an exact model of our personality, and also solve the identity problem. At the same time, the collection of medical data about the body helps both now—it can train machine learning systems to predict diseases—and in the future, when it becomes part of digital immortality. By signing up for cryonics, we can also tap into the power of the future superintelligence, since without it a successful reading of information from the frozen brain is impossible.

Thirdly, there are some grounds for assuming that medical AI will be safer. It is clear that fooming could occur with any AI, but the development of medical AI will accelerate the development of brain-computer interfaces (BCIs) such as Neuralink, and this increases the chance that AI will appear not separately from humans but as a product of integration with a person. As a result, a human mind would remain part of the AI and direct its goal function from within. This is also Elon Musk’s vision, and he wants to commercialize Neuralink through the treatment of diseases. In addition, if we assume that the principle of orthogonality may have exceptions, then a medical AI aimed at curing humans is more likely to have benevolence as its terminal goal.

As a result, by developing AI for life extension we make AI safer and increase the number of people who will survive until the creation of superintelligence. Thus, there is no contradiction between the two main approaches to improving human life via new technologies.

Moreover, radical life extension with the help of AI requires concrete steps right now: collecting data for digital immortality, joining patient organizations to combat aging, and participating in clinical trials involving combinations of geroprotectors and computer analysis of biomarkers. We see our article as a motivational pitch that will encourage the reader to fight for personal and global radical life extension.

To substantiate these conclusions, we conducted an extensive analysis of existing start-ups and research directions in the field of AI applications for life extension, and identified the beginnings of many of these trends, already reflected in the specific business plans of companies.


Michael Batin, Alexey Turchin, Markov Sergey, Alice Zhila, David Denkenberger

“Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence”

Informatica 41 (2017) 401–417:

[Link] The Peculiar Difficulty of Social Science

0 Crux 04 January 2018 06:33AM

January 2018 Media Thread

0 ArisKatsaris 01 January 2018 02:11AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] Happiness Is a Chore

0 SquirrelInHell 20 December 2017 11:11AM