
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Linkposts now live!

26 Vaniver 28 September 2016 03:13PM


You can now submit links to LW! As the rationality community has grown up, more and more content has moved off LW to other places, and so rather than trying to generate more content here we'll instead try to collect more content here. My hope is that Less Wrong becomes something like "the Rationalist RSS," where people can discover what's new and interesting without necessarily being plugged in to the various diaspora communities.

Some general norms, subject to change:


  1. It's okay to link someone else's work, unless they specifically ask you not to. It's also okay to link your own work; if you want to get LW karma for things you make off-site, drop a link here as soon as you publish it.
  2. It's okay to link old stuff, but let's try to keep it to less than 5 old posts a day. The first link that I made is to Yudkowsky's Guide to Writing Intelligent Characters.
  3. It's okay to link to something that you think rationalists will be interested in, even if it's not directly related to rationality. If it's political, think long and hard before deciding to submit that link.
  4. It's not okay to post duplicates.

As before, everything will go into discussion. Tag your links, please. As we see what sort of things people are linking, we'll figure out how we need to divide things up, be it separate subreddits or using tags to promote or demote the attention level of links and posts.

(Thanks to James Lamine for doing the coding, and to Trike (and myself) for supporting the work.)

Astrobiology III: Why Earth?

17 CellBioGuy 04 October 2016 09:59PM

After many tribulations, my astrobiology bloggery is back up and running using Wordpress rather than Blogger because Blogger is completely unusable these days.  I've taken the opportunity of the move to make better graphs for my old posts. 

"The Solar System: Why Earth?"

Here, I try to look at our own solar system and what the presence of only ONE known biosphere, here on Earth, tells us about life and perhaps more importantly what it does not.  In particular, I explore what aspects of Earth make it special and I make the distinction between a big biosphere here on Earth that has utterly rebuilt the geochemistry and a smaller biosphere living off smaller amounts of energy that we probably would never notice elsewhere in our own solar system given the evidence at hand. 

Commentary appreciated.



Previous works:

Space and Time, Part I

Space and Time, Part II

A Child's Petrov Day Speech

15 James_Miller 28 September 2016 02:27AM

30 years ago, the Cold War was raging on. If you don’t know what that is, it was the period from 1947 to 1991 when both the U.S. and Russia had large stockpiles of nuclear weapons and were threatening to use them on each other. The only thing that stopped them from doing so was the knowledge that the other side would have time to react. The U.S. and Russia both had surveillance systems to know if the other country had a nuke in the air headed for them.

On this day, September 26, in 1983, a man named Stanislav Petrov was on duty in the Russian surveillance room when the computer notified him that satellites had detected five nuclear missile launches from the U.S. He was told to pass this information on to his superiors, who would then launch a counter-strike.

He refused to notify anyone of the incident, suspecting it was just an error in the computer system.

No nukes ever hit Russian soil. Later, it was found that the ‘nukes’ were just light bouncing off clouds, which confused the satellite. Petrov was right, and likely saved all of humanity by stopping the outbreak of nuclear war. However, almost no one has heard of him.

We celebrate men like George Washington and Abraham Lincoln who win wars. These were great men, but the greater men, the men like Petrov who stopped these wars from ever happening - no one has heard of these men.

Let it be known that September 26 is Petrov Day, in honor of the acts of a great man who saved the world, and whose name almost no one has heard.




My 11-year-old son wrote and then read this speech to his sixth-grade class.

MIRI AMA plus updates

10 RobbBB 11 October 2016 11:52PM

MIRI is running an AMA on the Effective Altruism Forum tomorrow (Wednesday, Oct. 12): Ask MIRI Anything. Questions are welcome in the interim!

Nate also recently posted a more detailed version of our 2016 fundraising pitch to the EA Forum. One of the additions is about our first funding target:

We feel reasonably good about our chance of hitting target 1, but it isn't a sure thing; we'll probably need to see support from new donors in order to hit our target, to offset the fact that a few of our regular donors are giving less than usual this year.

The Why MIRI's Approach? section also touches on new topics that we haven't talked about in much detail in the past, but plan to write up some blog posts about in the future. In particular:

Loosely speaking, we can imagine the space of all smarter-than-human AI systems as an extremely wide and heterogeneous space, in which "alignable AI designs" is a small and narrow target (and "aligned AI designs" smaller and narrower still). I think that the most important thing a marginal alignment researcher can do today is help ensure that the first generally intelligent systems humans design are in the “alignable” region. I think that this is unlikely to happen unless researchers have a fairly principled understanding of how the systems they're developing reason, and how that reasoning connects to the intended objectives.

Most of our work is therefore aimed at seeding the field with ideas that may inspire more AI research in the vicinity of (what we expect to be) alignable AI designs. When the first general reasoning machines are developed, we want the developers to be sampling from a space of designs and techniques that are more understandable and reliable than what’s possible in AI today.

In other news, we've uploaded a new intro talk on our most recent result, "Logical Induction," that goes into more of the technical details than our previous talk.

See also Shtetl-Optimized and n-Category Café for recent discussions of the paper.

[Link] Putanumonit - Convincing people to read the Sequences and wondering about "postrationalists"

10 Jacobian 28 September 2016 04:43PM

Astrobiology IV: Photosynthesis and energy

8 CellBioGuy 17 October 2016 12:30AM

Originally I sat down to write about the large-scale history of Earth, and line up the big developments that our biosphere has undergone in the last 4 billion years.  But after writing about the reason that Earth is unique in our solar system (that is, photosynthesis being an option here), I guess I needed to explore photosynthesis and other forms of metabolism on Earth in a little more detail and before I knew it I’d written more than 3000 words about it.  So, here we are, taking a deep dive into photosynthesis and energy metabolism, and trying to determine if the origin of photosynthesis is a rare event or likely anywhere you get a biosphere with light falling on it.  Warning:  gets a little technical.

In short, I think the multiple independent origins of phototrophy (using light for energy) make it clear that phototrophy is likely to show up anywhere there is light and life.  I suspect, but cannot rigorously prove, that even though photosynthesis of biomass emerged only once, it was an early development in life on Earth, arising very near the root of the bacterial tree, and it simply conferred a very strong first-mover advantage that crowded out secondary origins; it too would probably show up wherever there is life and light.  As for oxygen-producing photosynthesis, its origin from more mundane forms of photosynthesis is still being studied.  It required a strange chaining together of multiple modes of photosynthesis to make it work, and it also happened only once.  Its time of emergence, early or late, is poorly constrained, and I don’t think there is sufficient evidence to say one way or another whether it is likely to happen anywhere there is photosynthesis; it could be subject to the same first-mover-advantage situation that other photosynthesis may have encountered.  But once it got going, it would naturally take over biomass production and crowd out other forms of photosynthesis, due to the inherent chemical advantages it has on any wet planet (advantages that have nothing to do with making oxygen) and its effects on other forms of photosynthesis.

Oxygen in the atmosphere had some important side effects, the one most people care about being that it allowed big, complicated, energy-gobbling organisms like animals – all the energy that organisms can get by burning biomass in oxygen lets them do a lot of interesting stuff.  Looking for oxygen in the atmospheres of other terrestrial planets would be an extremely informative experiment, as the presence of this substance would suggest that a process very similar to the one that created our huge, diverse and active biosphere was underway.

Map and Territory: a new rationalist group blog

8 gworley 15 October 2016 05:55PM

If you want to engage with the rationalist community, LessWrong is mostly no longer the place to do it. Discussions aside, most of the activity has moved into the diaspora. There are a few big voices like Robin and Scott, but most of the online discussion happens on individual blogs, Tumblr, semi-private Facebook walls, and Reddit. And while these serve us well enough, I find that they leave me wanting for something like what LessWrong was: a vibrant group blog exploring our perspectives on cognition and building insights towards a deeper understanding of the world.

Maybe I'm yearning for a golden age of LessWrong that never was, but the fact remains that there is a gap in the rationalist community that LessWrong once filled. A space for multiple voices to come together in a dialectic that weaves together our individual threads of thought into a broader narrative. A home for discourse we are proud to call our own.

So with a lot of help from fellow rationalist bloggers, we've put together Map and Territory, a new group blog to bring our voices together. Each week you'll find new writing from the likes of Ben Hoffman, Mike Plotz, Malcolm Ocean, Duncan Sabien, Anders Huitfeldt, and myself working to build a more complete view of reality within the context of rationality.

And we're only just getting started, so if you're a rationalist blogger please consider joining us. We're doing this on Medium, so if you write something other folks in the rationalist community would like to read, we'd love to consider sharing it through Map and Territory (cross-posting encouraged). Reach out to me on Facebook or email and we'll get the process rolling.

[Recommendation] Steven Universe & cryonics

8 tadrinth 11 October 2016 04:21PM

I've been watching Steven Universe (a children's cartoon on Cartoon Network by Rebecca Sugar) with my fiancee, and it wasn't until I got to Season 3 that I realized there's been a cryonics metaphor running in the background since the very first episode. If you want to introduce your kids to the idea of cryonics, this series seems like a spectacularly good way to do it.

If you don't want any spoilers, just go watch it, then come back.

Otherwise, here's the metaphor I'm seeing, and why it's great:

  • In the very first episode, we find out that the main characters are a group called the Crystal Gems, who fight 'gem monsters'. When they defeat a monster, a gem is left behind, which they lock in a bubble-forcefield and store in their headquarters.

  • One of the Crystal Gems is injured in a training accident, and we find out that their bodies are just projections; each Crystal Gem has a gem located somewhere on their body, which contains their minds. So long as their gem isn't damaged, they can project a new body after some time to recover. So we already have the insight that minds and bodies are separate.

  • This is driven home by a second episode where one of the Crystal Gems has their gem cracked; this is actually dangerous to their mind, not just body, and is treated as a dire emergency instead of merely an inconvenience.

  • Then we eventually find out that the gem monsters are actually corrupted members of the same species as the Crystal Gems. They are 'bubbled' and stored in the temple in hopes of eventually restoring them to sanity and their previous forms.

  • An attempt is made to cure one of the monsters, which doesn't fully succeed, but at least restores them to sanity. This allows them to remain unbubbled and to be reunited with their old comrades (who are also corrupted). This was the episode where I finally made the connection to cryonics.

  • The Crystal Gems are also revealed to be over 5000 years old, and effectively immortal. They don't make a big deal out of this; for them, this is totally normal.

  • This also implies that they've made no progress in curing the gem monsters in 5000 years, but that doesn't stop them from preserving them anyway.

  • Finally, a secret weapon is revealed which is capable of directly shattering gems (thus killing the target permanently), but the use of it is rejected as unethical.

So, all in all, you have a series where when someone is hurt or sick in a way that you can't help, you preserve their mind in a safe way until you can figure out a way to help them. Even your worst enemy deserves no less.


Also, Steven Universe has an entire episode devoted to mindfulness meditation.  

*How* people shut down thought because of high-status respectable halos

7 NancyLebovitz 20 October 2016 02:09PM

A detailed look at the belief that high-status social structures can be so much better than anything one can think of that there's no point in even trying to think about the details of what to do, and at how debilitating this belief is.

Discussion of the essay

Agential Risks: A Topic that Almost No One is Talking About

7 philosophytorres 15 October 2016 06:41PM

(Happy to get feedback on this! It draws from and expounds ideas in this article:

Consider a seemingly simple question: if the means were available, who exactly would destroy the world? There is surprisingly little discussion of this question within the nascent field of existential risk studies. But it’s an absolutely crucial issue: what sort of agent would either intentionally or accidentally cause an existential catastrophe?

The first step forward is to distinguish between two senses of an existential risk. Nick Bostrom originally defined the term as: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” It follows that there are two distinct scenarios, one terminal and the other endurable, that could realize an existential risk. We can call the former an extinction risk and the latter a stagnation risk. The importance of this distinction with respect to both advanced technologies and destructive agents has been previously underappreciated.

So, the question asked above is actually two questions in disguise. Let’s consider each in turn.

Terror: Extinction Risks

First, the categories of agents who might intentionally cause an extinction catastrophe are fewer and smaller than one might think. They include:

(1) Idiosyncratic actors. These are malicious agents who are motivated by idiosyncratic beliefs and/or desires. There are instances of deranged individuals who have simply wanted to kill as many people as possible and then die, such as some school shooters. Idiosyncratic actors are especially worrisome because this category could have a large number of members (token agents). Indeed, the psychologist Martha Stout estimates that about 4 percent of the human population suffers from sociopathy, resulting in about 296 million sociopaths. While not all sociopaths are violent, a disproportionate number of criminals and dictators have (or very likely have) had the condition.

(2) Future ecoterrorists. As the effects of climate change and biodiversity loss (resulting in the sixth mass extinction) become increasingly conspicuous, and as destructive technologies become more powerful, some terrorism scholars have speculated that ecoterrorists could become a major agential risk in the future. The fact is that the climate is changing and the biosphere is wilting, and human activity is almost entirely responsible. It follows that some radical environmentalists in the future could attempt to use technology to cause human extinction, thereby “solving” the environmental crisis. So, we have some reason to believe that this category could become populated with a growing number of token agents in the coming decades.

(3) Negative utilitarians. Those who hold this view believe that the ultimate aim of moral conduct is to minimize misery, or “disutility.” Although some negative utilitarians like David Pearce see existential risks as highly undesirable, others would welcome annihilation because it would entail the elimination of suffering. It follows that if a “strong” negative utilitarian had a button in front of her that, if pressed, would cause human extinction (say, without causing pain), she would very likely press it. Indeed, on her view, doing this would be the morally right action. Fortunately, this version of negative utilitarianism is not a position that many non-academics tend to hold, and even among academic philosophers it is not especially widespread.

(4) Extraterrestrials. Perhaps we are not alone in the universe. Even if the probability of life arising on an Earth-analog is low, the vast number of exoplanets suggests that the probability of life arising somewhere may be quite high. If an alien species were advanced enough to traverse the cosmos and reach Earth, it would very likely have the technological means to destroy humanity. As Stephen Hawking once remarked, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”

(5) Superintelligence. The reason Homo sapiens is the dominant species on our planet is due almost entirely to our intelligence. It follows that if something were to exceed our intelligence, our fate would become inextricably bound up with its will. This is worrisome because recent research shows that even slight misalignments between our values and those motivating a superintelligence could have existentially catastrophic consequences. But figuring out how to upload human values into a machine poses formidable problems — not to mention the issue of figuring out what our values are in the first place.

Making matters worse, a superintelligence could process information about 1 million times faster than our brains, meaning that a minute of time for us would equal approximately 2 years of subjective time for the superintelligence. This would immediately give the superintelligence a profound strategic advantage over us. And if it were able to modify its own code, it could potentially bring about an exponential intelligence explosion, resulting in a mind that’s many orders of magnitude smarter than any human. Thus, we may have only one chance to get everything just right: there’s no turning back once an intelligence explosion is ignited.
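
A quick sanity check of that conversion, shown here only for illustration (simple arithmetic, not part of the original argument):

    # One wall-clock minute, experienced at a 1,000,000x speed-up,
    # converted into subjective years (ignoring leap years).
    speedup = 1_000_000
    subjective_minutes = 1 * speedup
    minutes_per_year = 60 * 24 * 365
    print(subjective_minutes / minutes_per_year)  # ~1.9 subjective years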

A superintelligence could cause human extinction for a number of reasons. For example, we might simply be in its way. Few humans worry much if an ant genocide results from building a new house or road. Or the superintelligence could destroy humanity because we happen to be made out of something it could use for other purposes: atoms. Since a superintelligence need not resemble human intelligence in any way — thus, scholars tell us to resist the dual urges of anthropomorphizing and anthropopathizing — it could be motivated by goals that appear to us as utterly irrational, bizarre, or completely inexplicable.

Terror: Stagnation Risks

Now consider the agents who might intentionally try to bring about a scenario that would result in a stagnation catastrophe. This list subsumes most of the list above in that it includes idiosyncratic actors, future ecoterrorists, and superintelligence, but it probably excludes negative utilitarians, since stagnation (as understood above) would likely induce more suffering than the status quo today. The case of extraterrestrials is unclear, given that we can infer almost nothing about an interstellar civilization except that it would be technologically sophisticated.

For example, an idiosyncratic actor could harbor not a death wish for humanity, but a “destruction wish” for civilization. Thus, she or he could strive to destroy civilization without necessarily causing the annihilation of Homo sapiens. Similarly, a future ecoterrorist could hope for humanity to return to the hunter-gatherer lifestyle. This is precisely what motivated Ted Kaczynski: he didn’t want everyone to die, but he did want our technological civilization to crumble. And finally, a superintelligence whose values are misaligned with ours could modify Earth in such a way that our lineage persists, but our prospects for future development are permanently compromised. Other stagnation scenarios could involve the following categories:

(6) Apocalyptic terrorists. History is overflowing with groups that not only believed the world was about to end, but saw themselves as active participants in an apocalyptic narrative that’s unfolding in realtime. Many of these groups have been driven by the conviction that “the world must be destroyed to be saved,” although some have turned their activism inward and advocated mass suicide.

Interestingly, no notable historical group has combined both the genocidal and suicidal urges. This is why apocalypticists pose a greater stagnation terror risk than extinction risk: indeed, many see their group’s survival beyond Armageddon as integral to the end-times, or eschatological, beliefs they accept. There are almost certainly less than about 2 million active apocalyptic believers in the world today, although emerging environmental, demographic, and societal conditions could cause this number to significantly increase in the future, as I’ve outlined in detail elsewhere (see Section 5 of this paper).

(7) States. Like terrorists motivated by political rather than transcendent goals, states tend to place a high value on their continued survival. It follows that states are unlikely to intentionally cause a human extinction event. But rogue states could induce a stagnation catastrophe. For example, if North Korea were to overcome the world’s superpowers through a sudden preemptive attack and implement a one-world government, the result could be an irreversible decline in our quality of life.

So, there are numerous categories of agents that could attempt to bring about an existential catastrophe. And there appear to be fewer agent types who would specifically try to cause human extinction than to merely dismantle civilization.

Error: Extinction and Stagnation Risks

There are some reasons, though, for thinking that error (rather than terror) could constitute the most significant threat in the future. First, almost every agent capable of causing intentional harm would also be capable of causing accidental harm, whether this results in extinction or stagnation. For example, an apocalyptic cult that wants to bring about Armageddon by releasing a deadly biological agent in a major city could, while preparing for this terrorist act, inadvertently contaminate its environment, leading to a global pandemic.

The same goes for idiosyncratic agents, ecoterrorists, negative utilitarians, states, and perhaps even extraterrestrials. (Indeed, the large disease burden of Europeans was a primary reason Native American populations were decimated. By analogy, perhaps an extraterrestrial destroys humanity by introducing a new type of pathogen that quickly wipes us out.) The case of superintelligence is unclear, since the relationship between intelligence and error-proneness has not been adequately studied.

Second, if powerful future technologies become widely accessible, then virtually everyone could become a potential cause of existential catastrophe, even those with absolutely no inclination toward violence. To illustrate the point, imagine a perfectly peaceful world in which not a single individual has malicious intentions. Further imagine that everyone has access to a doomsday button on her or his phone; if pushed, this button would cause an existential catastrophe. Even under ideal societal conditions (everyone is perfectly “moral”), how long could we expect to survive before someone’s finger slips and the doomsday button gets pressed?

Statistically speaking, a world populated by only 1 billion people would almost certainly self-destruct within a 10-year period if the probability of any individual accidentally pressing a doomsday button were a mere 0.00001 percent per decade. Or, alternatively: if only 500 people in the world were to gain access to a doomsday button, and if each of these individuals had a 1 percent chance of accidentally pushing the button per decade, humanity would have a meager 0.6 percent chance of surviving beyond 10 years. Thus, even if the likelihood of mistakes is infinitesimally small, planetary doom will be virtually guaranteed for sufficiently large populations.
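
For concreteness, here is a minimal Python sketch of that back-of-the-envelope calculation; the population sizes and per-decade probabilities are the ones quoted above, and everything else is illustrative:

    # Chance that nobody presses a doomsday button over one period,
    # given n independent people who each press it with probability p.
    def survival_probability(n_people, p_press):
        return (1.0 - p_press) ** n_people

    # 1 billion people, each with a 0.00001% (= 1e-7) chance per decade:
    print(survival_probability(10**9, 1e-7))   # ~3.7e-44, essentially zero

    # 500 people, each with a 1% chance per decade:
    print(survival_probability(500, 0.01))     # ~0.0066, the ~0.6% quoted above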

The Two Worlds Thought Experiment

The good news is that a focus on agential risks, as I’ve called them, and not just the technological tools that agents might use to cause a catastrophe, suggests additional ways to mitigate existential risk. Consider the following thought-experiment: a possible world A contains thousands of advanced weapons that, if in the wrong hands, could cause the population of A to go extinct. In contrast, a possible world B contains only a single advanced “weapon of total destruction” (WTD). Which world is more dangerous? The answer is obviously world A.

But it would be foolishly premature to end the analysis here. Imagine further that A is populated by compassionate, peace-loving individuals, whereas B is overrun by war-mongering psychopaths. Now which world appears more likely to experience an existential catastrophe? The correct answer is, I would argue, world B.

In other words: agents matter as much as, or perhaps even more than, WTDs. One simply can’t evaluate the degree of risk in a situation without taking into account the various agents who could become coupled to potentially destructive artifacts. And this leads to the crucial point: as soon as agents enter the picture, we have another variable that could be manipulated through targeted interventions to reduce the overall probability of an existential catastrophe.

The options here are numerous and growing. One possibility would involve using “moral bioenhancement” techniques to reduce the threat of terror, given that acts of terror are immoral. But a morally enhanced individual might not be less likely to make a mistake. Thus, we could attempt to use cognitive enhancements to lower the probability of catastrophic errors, on the (tentative) assumption that greater intelligence correlates with fewer blunders.

Furthermore, implementing stricter regulations on CO2 emissions could decrease the probability of extreme ecoterrorism and/or apocalyptic terrorism, since environmental degradation is a “trigger” for both.

Another possibility, most relevant to idiosyncratic agents, is to reduce the prevalence of bullying (including cyberbullying). This is motivated by studies showing that many school shooters have been bullied, and that without this stimulus such individuals would have been less likely to carry out violent rampages. Advanced mind-reading or surveillance technologies could also enable law enforcement to identify perpetrators before mass casualty crimes are committed.

As for superintelligence, efforts to solve the “control problem” and create a friendly AI are of primary concern among many researchers today. If successful, a friendly AI could itself constitute a powerful mitigation strategy for virtually all the categories listed above.

(Note: these strategies should be explicitly distinguished from proposals that target the relevant tools rather than agents. For example, Bostrom’s idea of “differential technological development” aims to neutralize the bad uses of technology by strategically ordering the development of different kinds of technology. Similarly, the idea of police “blue goo” to counter “grey goo” is a technology-based strategy. Space colonization is also a tool intervention because it would effectively reduce the power (or capacity) of technologies to affect the entire human or posthuman population.)

Agent-Tool Couplings

Devising novel interventions and understanding how to maximize the efficacy of known strategies requires a careful look at the unique properties of the agents mentioned above. Without an understanding of such properties, this important task will be otiose. We should also prioritize different agential risks based on the likely membership (token agents) of each category. For example, the number of idiosyncratic agents might exceed the number of ecoterrorists in the future, since ecoterrorism is focused on a single issue, whereas idiosyncratic agents could be motivated by a wide range of potential grievances.[1] We should also take seriously the formidable threat posed by error, which could be nontrivially greater than that posed by terror, as the back-of-the-envelope calculations above show.

Such considerations, in combination with technology-based risk mitigation strategies, could lead to a comprehensive, systematic framework for strategically intervening on both sides of the agent-tool coupling. But this will require the field of existential risk studies to become less technocentric than it currently is.

[1] Although, on the other hand, the stimulus of environmental degradation would be experienced by virtually everyone in society, whereas the stimuli that motivate idiosyncratic agents might be situationally unique. It’s precisely issues like these that deserve further scholarly research.

[Link] Putanumonit - Discarding empathy to save the world

7 Jacobian 06 October 2016 07:03AM

[Link] There are 125 sheep and 5 dogs in a flock. How old is the shepherd? / Math Education

6 James_Miller 17 October 2016 12:12AM

[Link] Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority

6 ignoranceprior 14 October 2016 07:58PM

The map of organizations, sites and people involved in x-risks prevention

6 turchin 07 October 2016 12:04PM

Three known attempts to make a map of the x-risks prevention field exist:

1. The first is the list from the Global Catastrophic Risks Institute, from 2012-2013; many of the links there no longer work:

2. The second was done by S. Armstrong in 2014

3. The most beautiful and useful map was created by Andrew Critch. But its ecosystem ignores organizations which have a different view of the nature of global risks (that is, they share the value of x-risk prevention, but have another worldview).

In my map I have tried to add all currently active organizations which share the value of global risks prevention.

It also regards some active independent people as organizations, if they have an important blog or field of research, but not all people are mentioned in the map. If you think that you (or someone) should be in it, please write to me at

I used only open sources and public statements to learn about people and organizations, so I can’t provide information on the underlying net of relations.

I tried to give each organization a short description based on its public statements, and also my opinion about its activity.

In general it seems that small organizations focus on their collaboration with larger ones, that is, MIRI and FHI, and tend to ignore each other; this is easily explained by social signaling theory. Another explanation is that larger organizations have a greater ability to make contacts.

It also appears that there are several organizations with similar goal statements. 

It looks like the most cooperation exists in the field of AI safety, but most of the structure of this cooperation is not visible to the external viewer, in contrast to Wikipedia, where contributions of all individuals are visible. 

It seems that the community in general lacks three things: a united internet forum for public discussion, an x-risks wikipedia and an x-risks related scientific journal.

Ideally, a forum should be used to brainstorm ideas, a scientific journal to publish the best ideas, peer review them and present them to the outer scientific community, and a wiki to collect results.

Currently it seems more as if each organization is interested in producing its own research and hoping that someone will read it. Each small organization seems to want to be the only one to present solutions to global problems and to gain the full attention of the UN and governments. This raises the problem of noise and rivalry, and also the problem of possibly incompatible solutions, especially in AI safety.

The pdf is here:

The University of Cambridge Centre for the Study of Existential Risk (CSER) is hiring!

6 crmflynn 06 October 2016 04:53PM

The University of Cambridge Centre for the Study of Existential Risk (CSER) is recruiting for an Academic Project Manager. This is an opportunity to play a shaping role as CSER builds on its first year's momentum towards becoming a permanent world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and project management responsibilities.

The Academic Project Manager will work with CSER's Executive Director and research team to co-ordinate and develop CSER's projects and overall profile, and to develop new research directions. The post-holder will also build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide, and will act as an ambassador for the Centre’s research externally. Research topics will include AI safety, bio risk, extreme environmental risk, future technological advances, and cross-cutting work on governance, philosophy and foresight. Candidates will have a PhD in a relevant subject, or have equivalent experience in a relevant setting (e.g. policy, industry, think tank, NGO).

Application deadline: November 11th.

[Link] 80% of data in Chinese clinical trials have been fabricated

6 DanArmak 02 October 2016 07:38AM

Fermi paradox of human past, and corresponding x-risks

6 turchin 01 October 2016 05:01PM

Based on known archaeological data, we are the first technological and symbol-using civilisation on Earth (but not the first tool-using species). 
This leads to an analogy that fits Fermi’s paradox: Why are we the first civilisation on Earth? For example, flight was invented by evolution independently several times. 
We could imagine that on our planet, many civilisations appeared and also became extinct, and based on the mediocrity principle, we should be somewhere in the middle. For example, if 10 civilisations had appeared, we would have only a 10 per cent chance of being the first one.

The fact that we are the first such civilisation has strong predictive power about our expected future: it lowers the probability that there will be any other civilisations on Earth, including non-humans or even a restarting of human civilisation from scratch. This is because, if there were going to be many civilisations, we should not expect to find ourselves to be the first one. (This is a form of the Doomsday argument; the same logic is used in Bostrom's article “Adam and Eve”.)
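
For illustration, here is a minimal Python sketch of that update, under assumptions supplied only for the example: a uniform prior over how many civilisations ever arise on Earth (one to ten) and self-sampling, i.e. treating ourselves as a random draw among them:

    from fractions import Fraction

    # Prior: N civilisations ever arise on Earth, N = 1..10, equally likely.
    # Likelihood of the observation "we are the first" given N is 1/N.
    ns = range(1, 11)
    prior = {n: Fraction(1, 10) for n in ns}
    likelihood = {n: Fraction(1, n) for n in ns}

    unnormalised = {n: prior[n] * likelihood[n] for n in ns}
    total = sum(unnormalised.values())
    posterior = {n: unnormalised[n] / total for n in ns}

    print(float(posterior[1]))   # ~0.34: "we are the only one" gains weight
    print(float(posterior[10]))  # ~0.03: many-civilisation worlds lose weight

Under these illustrative assumptions, observing that we are first shifts probability toward small numbers of civilisations, which is the sense in which being first "lowers the probability that there will be any other civilisations on Earth."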

If we are the only civilisation to exist in the history of the Earth, then we will probably become extinct not in a mild way, but rather in a way which will prevent any other civilisation from appearing. There is a higher probability of future (man-made) catastrophes which will not only end human civilisation, but also prevent the existence of any other civilisations on Earth.

Such catastrophes would kill most multicellular life. Nuclear war or pandemic is not that type of catastrophe. The catastrophe must be really huge: irreversible global warming, grey goo, or a black hole in a collider.

Now I will list possible explanations of the Fermi paradox of the human past and the corresponding x-risk implications:


1. We are the first civilisation on Earth, because we will prevent the existence of any future civilisations.

If our existence prevents other civilisations from appearing in the future, how could we do it? We will either become extinct in a very catastrophic way, killing all earthly life, or become a super-civilisation, which will prevent other species from becoming sapient. So, if we are really the first, then it means that "mild extinctions" are not typical for human style civilisations. Thus, pandemics, nuclear wars, devolutions and everything reversible are ruled out as main possible methods of human extinction.

If we become a super-civilisation, we will not be interested in preserving the biosphere, as it would be able to create new sapient species. Or it may be that we care about the biosphere so strongly that we will hide very well from newly appearing sapient species. It will be like a cosmic zoo. That would mean that past civilisations on Earth may have existed, but decided to hide all traces of their existence from us, as that would help us to develop independently. So, the fact that we are the first raises the probability of a very large-scale catastrophe in the future, like UFAI or dangerous physical experiments, and reduces the chances of mild x-risks such as pandemics or nuclear war. Another explanation is that any first civilisation exhausts all the resources which are needed for a technological civilisation to restart, such as oil, ores etc. But in several million years most such resources will be replenished or replaced through tectonic movement.


2. We are not the first civilisation.

2.1. We haven't found any traces of a previous technological civilisation, and based on what we know, there are very strong limitations on their possible existence. For example, every civilisation leaves genetic marks, because it moves animals from one continent to another, just as humans brought dingoes to Australia. It also must exhaust several important ores, create artefacts, and create new isotopes. We can be fairly sure that we are the first tech civilisation on Earth in the last 10 million years.

But could we be sure for the past 100 million years? Maybe such a civilisation existed a very long time ago, say 60 million years ago (and killed the dinosaurs). Carl Sagan argued that it could not have happened, because we should find traces of it, mostly as exhausted oil reserves. The main counterargument here is that cephalisation, that is, the evolutionary development of brains, was not advanced enough 60 million years ago to support general intelligence. Dinosaurian brains were very small. But bird brains are more mass-efficient than mammalian ones. All these arguments are presented in detail in this excellent article by Brian Trent, “Was there ever a dinosaurian civilisation?”

The main x-risks here are that we might find dangerous artefacts from a previous civilisation, such as weapons, nanobots, viruses, or AIs. And if previous civilisations went extinct, it increases the chances that extinction is typical for civilisations. It also means that there was some reason why an extinction occurred, that this killing force may still be active, and that we could excavate it. If they existed recently, they were probably hominids, and if they were killed by a virus, it may also affect humans.

2.2. We killed them. The Maya civilisation created writing independently, but the Spaniards destroyed their civilisation. The same is true for the Neanderthals and Homo floresiensis.

2.3. Myths about gods may be signs of such a previous civilisation. Highly improbable.

2.4. They are still here, but they try not to intervene in human history. So, it is similar to Fermi’s Zoo solution.

2.5. They were a non-tech civilisation, and that is why we can’t find their remnants.

2.6. They may still be here, like dolphins and ants, but their intelligence is non-human and they don't create tech.

2.7. Some groups of humans created advanced tech long before now, but prefer to hide it. Highly improbable, as most tech requires large-scale manufacturing and markets.

2.8. A previous humanoid civilisation was killed by a virus or prion, and our archaeological research could bring it back to life. One hypothesis of Neanderthal extinction is prion infection due to cannibalism. The fact is that several hominid species went extinct in the last several million years.


3. Civilisations are rare

Millions of species have existed on Earth, but only one was able to create technology. So, it is a rare event. Consequences: cyclic civilisations on Earth are improbable, so the chances that we will be resurrected by another civilisation on Earth are small.

The chances that we will be able to reconstruct civilisation after a large-scale catastrophe are also small (as such catastrophes are atypical for civilisations, which quickly proceed to either total annihilation or singularity).

It also means that technological intelligence is a difficult step in the evolutionary process, so it could be one of the solutions of the main Fermi paradox.

The safety of the remains of previous civilisations (if any exist) depends on two things: our distance from them in time and their level of intelligence. The greater the distance, the safer they are (as most dangerous technology will have been destroyed by time or will not be dangerous to humans, like species-specific viruses).

The risks also depend on the level of intelligence they reached: the higher the intelligence, the riskier. If anything like their remnants is ever found, strong caution is recommended.

For example, the most dangerous scenario for us would be one similar to the beginning of V. Vinge's book “A Fire Upon the Deep”: we could find remnants of a very old but very sophisticated civilisation, which could include an unfriendly AI or its description, or hostile nanobots.

The most likely place for such artefacts to be preserved is on the Moon, in some cavities near the pole. It is the most stable and radiation shielded place near Earth.

I think that, based on the (absence of) evidence, the estimated probability of a past tech civilisation should be less than 1 per cent. While this is enough to think that they most likely didn't exist, it is not enough to completely ignore the risk of their artefacts, which in any case is less than 0.1 per cent.

Meta: the main idea for this post came to me in a night dream, several years ago.

[Link] Software for moral enhancement (

6 Kaj_Sotala 30 September 2016 12:12PM

[Link] Sam Harris - TED Talk on AI

6 Brillyant 29 September 2016 04:44PM

A problem in anthropics with implications for the soundness of the simulation argument.

5 philosophytorres 19 October 2016 09:07PM

What are your intuitions about this? It has direct implications for whether the Simulation Argument is sound.


Imagine two rooms, A and B. Between times t1 and t2, 100 trillion people sojourn in room A while 100 billion sojourn in room B. At any given moment, though, exactly 1 person occupies room A while 1,000 people occupy room B. At t2, you find yourself in a room, but you don't know which one. If you have to place a bet on which room it is (at t2), what do you say? Do you consider the time-slice or the history of room occupants? How do you place your bet?
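
For concreteness, here is a small illustrative sketch (not part of the original post) computing the answer each reference class gives:

    # Two candidate reference classes for "which room am I in at t2?"

    # (a) Weight by person-histories: everyone who sojourned in a room
    #     between t1 and t2 counts equally.
    total_a = 100 * 10**12   # 100 trillion people pass through room A
    total_b = 100 * 10**9    # 100 billion people pass through room B
    print(total_a / (total_a + total_b))   # ~0.999: bet on room A

    # (b) Weight by time-slices: only the occupants at the moment t2 count.
    now_a, now_b = 1, 1000
    print(now_a / (now_a + now_b))         # ~0.001: bet on room B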


If you bet that you're in room B, then the Simulation Argument may be flawed: there could be a fourth disjunct that Bostrom misses, namely that we become a posthuman civilization that runs a huge number of simulations, yet we have no reason to believe that we're simulants.



Cryo with magnetics added

5 morganism 01 October 2016 10:27PM

This is great: by using small interlocking magnetic fields, you can keep the water in a higher vibrational state, allowing "super-cooling" without crystallization and cell rupture.

Subzero 12-hour Nonfreezing Cryopreservation of Porcine Heart in a Variable Magnetic Field

"invented a special refrigerator, termed as the Cells Alive System (CAS; ABI Co. Ltd., Chiba, Japan). Through the application of a combination of multiple weak energy sources, this refrigerator generates a special variable magnetic field that causes water molecules to oscillate, thus inhibiting crystallization during ice formation18 (Figure 1). Because the entire material is frozen without the movement of water molecules, cells can be maintained intact and free of membranous damage. This refrigerator has the ability to achieve a nonfreezing state even below the solidifying point."

October 2016 Media Thread

5 ArisKatsaris 01 October 2016 02:05PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] An appreciation of the Less Wrong Sequences (

5 Kaj_Sotala 30 September 2016 12:11PM

Seeking Advice About Career Paths for Non-USA Citizen

5 almkglor 28 September 2016 12:07AM

Hi all,

Mostly lurker, I very rarely post, mostly  just read the excellent posts here.

I'm a Filipino, which means I am a citizen of the Republic of the Philippines.  My annual salary, before taxes, is about $20,000 (USA dollars).  I work at an IC development company (12 years at this company), developing the logic parts of LCD display drivers.  My understanding is that the median US salary for this kind of job is about $80,000 -> $100,000 a year.  This is a fucking worthless third world country, so the government eats up about ~30% of my salary and converts it to lousy service, rich government officials, bad roadworks, long commute times, and a (tiny) chance of being falsely accused of involvement in the drug trade and shot without trial.  Thus my take-home pay amounts to about $15,000 a year.  China is also murmuring vague threats about war because of the South China Sea (which the local intelligentsia insist on calling the West Philippine Sea); as we all know, the best way to survive a war is not be in one.

This has led to my deep dissatisfaction with my current job.

I'm also a programmer as a hobby, and have been programming for 23 years (I started at 10 years old on Atari LOGO; I know a bunch of languages from low-level X86 assembly to C to C++ to ECMAScript to Haskell, and am co-author of SRFI-105 and SRFI-110).  My understanding is that a USA programmer would *start* at the $20,000-a-year level (?), and that someone with experience can probably get twice that, and a senior one can get $100,000/year.

As we all know, once a third world citizen starts having first world skill levels, he starts demanding first world remuneration also.

I've been offered a senior software developer job at a software company, offering approximately $22,000/year; because of various attempts at tax reform it offers a flat 15% income tax, so I can expect about $18,000/year take home pay.  I've turned it down with a heavy heart, because seriously, $22,000/year at 15% tax for a senior software developer?

Leaving my current job is something I've been planning on doing, and I intend to do so early next year.  The increasing stress (constant overtime, management responsibilities (I'm a tech geek with passable social skills, and exercising my social skills drains me), 1.5-hour commutes) and the low remuneration make me want to consider my alternative options.

My options are:

1.  Get myself to the USA, Europe, or other first-world country somehow, and look for a job there.  High risk, high reward, much higher probability of surviving to the singularity (can get cryonics there, can't get it here).  Complications: I have a family: a wife, a 4-year-old daughter, and a son on the way.  My wife wants to be near me, so it's difficult to live for long apart.  I have no work visa for any first-world country.  I'm from a third-world country that is sometimes put on terrorist watch lists, and prejudice is always high in first-world countries.

2.  Do freelance programming work.  Closer to the free market ideal, so presumably I can get nearer to the USA levels of remuneration.  Lets me stay with my family.  Complications: I need to handle a lot of the human resources work myself (healthcare provider, social security, tax computations, time and task management - the last is something I do now in my current job position, but I dislike it).

3.  Become a landowning farmer.  My paternal grandparents have quite a few parcels of land (some of which have been transferred to my father, who is willing to pass them on to me), admittedly somewhere in the boondocks of the provinces of this country, but as any Georgist knows, landowners can sit in a corner staring at the sky, blocking the occasional land reform bill, and earn money.  Complications: I have no idea about farming.  I'd actually love to advocate a land value tax, which would undercut my position as a landowner.

For now, my basic current plan is some combination of #2 and #3 above: go sit in a corner of our clan's land and do freelance programming work.  This keeps me with my family, may reduce my level of stress, and may increase my remuneration to nearer the USA levels.

My current job has a retirement pay, and since I've worked for 12 years, I've already triggered it, and they'll give me about $16,000 or so when I leave.  This seems reasonably comfortable to live on (note that this is what I take home in a year, and I've supported a family on that, remember this is a lousy third-world country).

Is my basic plan sound?  I'm trying to become more optimal, which seems to me to point me away from my current job and towards either #1 or #2, with #3 as a fallback.  I'd love to get cryonics and will start to convince my wife of its sensibility if I had a chance to actually get it, but that will require me either leaving the country (option #1 above) or running a cryonics company in a third-world country myself.


I got introduced to Less Wrong when I first read on Reddit about some weirdo who was betting he could pretend he was a computer in a box and convince someone to let him out of the box, and started lurking on Overcoming Bias.  When that weirdo moved over to Less Wrong, I followed and lurked there also.  So here I am ^^.  I'm probably very atypical even for Less Wrong; I highly suspect I am the only Filipino here (I'll have to check the diaspora survey results in detail).

Looking back, my big mistake was being arrogant and thinking "meh, I already know programming, so I should go for a challenge, why don't I take up electronics engineering instead because I don't know about it" back when I was choosing a college course.  Now I'm an IC developer.  Two of my cousins (who I can beat the pants off in a programming task) went with software engineering and pull in more money than I do.  Still, maybe I can correct that, even if it's over a decade late.  I really need to apply more of what I learn on Less Wrong.

Some years ago I applied for a CFAR class, but couldn't afford it, sigh.  Even today it's a few month's worth of salary for me.  So I guess I'll just have to settle for Less Wrong and Rationality from AI to Zombies.


[Link] Biofuels a climate mistake

4 morganism 09 October 2016 09:16PM

[Link] Six principles of a truth-friendly discourse

4 philh 08 October 2016 04:56PM

Open thread, Oct. 03 - Oct. 09, 2016

4 MrMind 03 October 2016 06:59AM

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] US tech giants found Partnership on AI to Benefit People and Society to ensure AI is developed safely and ethically

4 Gunnar_Zarncke 29 September 2016 08:39PM

[Link] Politics Is Upstream of AI

4 iceman 28 September 2016 09:47PM

[Link] Yudkowsky's Guide to Writing Intelligent Characters

4 Vaniver 28 September 2016 02:36PM

[Link] AI-ON is an open community dedicated to advancing Artificial Intelligence

3 morganism 18 October 2016 10:17PM

Open thread, Oct. 17 - Oct. 23, 2016

3 MrMind 17 October 2016 07:02AM

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Barack Obama's opinions on near-future AI [Fixed]

3 scarcegreengrass 12 October 2016 03:46PM

Open thread, Oct. 10 - Oct. 16, 2016

3 MrMind 10 October 2016 07:00AM

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Nick Bostrom says Google is winning the AI arms race

3 polymathwannabe 05 October 2016 06:50PM

The map of natural global catastrophic risks

3 turchin 25 September 2016 01:17PM

There are many natural global risks. The greatest of these known risks are asteroid impacts and supervolcanos. 

Supervolcanos seem to pose the highest risk, as we sit on an ocean of molten iron, oversaturated with dissolved gases, just 3000 km below the surface, with its energy slowly moving up via hot spots. Many past extinctions are also connected with large eruptions from supervolcanos. 

Impacts also pose a significant risk. But if we project the past rate of large extinctions due to impacts into the future, we see that they occur only once in several million years. Thus, the likelihood of an extinction-level asteroid impact in the next century is on the order of 1 in 100,000. That is negligibly small compared with the risks of AI, nanotech, biotech, etc.
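
For illustration, a minimal sketch of that arithmetic, assuming (for the example) one extinction-level impact per ten million years on average and treating impacts as a Poisson process:

    import math

    # Assumed average rate: one extinction-level impact per 10 million years.
    rate_per_year = 1 / 10_000_000
    years = 100

    # Poisson probability of at least one such impact in the next century.
    print(1 - math.exp(-rate_per_year * years))   # ~1e-5, roughly 1 in 100,000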

The main natural risk is a meta-risk. Are we able to correctly estimate natural risk rates and project them into the future? And could we accidentally unleash a natural catastrophe which is long overdue?

There are several reasons for possible underestimation, which are listed in the right column of the map. 

1. Anthropic shadow, that is, survival bias. This is a well-established idea from Bostrom, but the following four ideas are mostly my own conclusions from it.

2. There is also the fact that we should expect to find ourselves at the end of a period of stability for any important aspect of our environment (atmosphere, sun stability, crust stability, vacuum stability). This holds if the Rare Earth hypothesis is true and our conditions are very rare in the universe.

3. From (2) it follows that our environment may be very fragile to human interventions (think of global warming). Its fragility is like that of an overinflated balloon poked by a small needle.

4. Also, human intelligence was the best adaptive instrument during a period of intense climate change, and it evolved quickly in a constantly changing environment. So it should not be surprising that we find ourselves in a period of instability (think of the Toba eruption, the Clovis comet, the Younger Dryas, the ice ages) and in an unstable environment, as this helped general intelligence to evolve.

5. Periods of change are themselves marks of the end of stability periods for many processes and are precursors of larger catastrophes. (For example, intermittent ice ages may precede a Snowball Earth, or smaller impacts with comet debris may precede an impact with larger remnants of the main body.)

Each of these five points may, in my opinion, raise the probability of natural risks by an order of magnitude, which combined would result in several orders of magnitude; this seems too high and is probably "catastrophism bias".

(More about this is in my article “Why anthropic principle stopped to defend us”, which needs substantial revision.)

In conclusion, I think that when studying natural risks, a key aspect we should be checking is the hypothesis that we live in a non-typical period in a very fragile environment.

For example, some scientists think that 30,000 years ago a large Centaur comet entered the inner Solar system and split into pieces (including Comet Encke, the Taurid meteor showers, and the Tunguska body), and that we live in a period of bombardment which is 100 times more intense than average. Others believe that methane hydrates are very fragile and that a small amount of human-caused warming could result in dangerous positive feedback.

I tried to list all known natural risks (I am interested in new suggestions). I divided them into two classes: proven and speculative. Most speculative risks are probably false.

The most probable risks in the map are marked red. My crazy ideas are marked green. Some ideas come from obscure Russian literature: for example, the idea that hydrocarbons could be created naturally inside the Earth (like abiogenic oil) and that large pockets of them could accumulate in the mantle. Some of them could be natural explosives, like toluene, and they could be the cause of kimberlite explosions. While kimberlite explosions are a well-known fact, and their energy is like the impact of kilometre-sized asteroids, I have never read about contemporary risks of such explosions.

The pdf of the map is here:


What's the most annoying part of your life/job?

2 Liron 23 October 2016 03:37AM

Hi, I'm an entrepreneur looking for a startup idea.

In my experience, the reason most startups fail is that they never actually solve anyone's problem. So I'm cheating and starting out by identifying a specific person with a specific problem.

So I'm asking you, what's the most annoying part of your life/job? Also, how much would you pay for a solution?

[Link] Program good ethics into artificial intelligence

2 XFrequentist 19 October 2016 04:28PM

[Link] The Non-identity Problem - Another argument in favour of classical utilitarianism

2 casebash 18 October 2016 01:41PM

[Link] Video to induce hallucinations, meme implanter?

2 morganism 16 October 2016 08:33PM
