
"Is Science Broken?" is underspecified

8 NancyLebovitz 12 August 2016 11:59AM

http://fivethirtyeight.com/features/science-isnt-broken/

This is an interesting article-- it's got an overview of what's currently seen as the problems with replicability and fraud, and some material I haven't seen before about handing the same question to a bunch of scientists, and looking at how they come up with their divergent answers.

However, while I think it's fair to say that science is really hard, the article gets into claiming that scientists aren't especially awful people (probably true), but doesn't address the hard question of "Given that there's a lot of inaccurate science, how much should we trust specific scientific claims?"

[Link] How Feasible Is the Rapid Development of Artificial Superintelligence?

7 Kaj_Sotala 24 October 2016 08:43AM

[Link] Putanumonit - Discarding empathy to save the world

7 Jacobian 06 October 2016 07:03AM

CrowdAnki comprehensive JSON representation of Anki Decks to facilitate collaboration

7 harcisis 18 September 2016 10:59AM

Hi everyone :). I like Anki, find it quite useful and use it daily. There is one thing that constantly annoyed me about it, though - the state of shared decks and of infrastructure around them.

There are many topics of common interest to a large number of people, and there are usually some shared decks available for them. The problem is that these are usually decks created by individuals for their own purposes and then uploaded to AnkiWeb, so they are often incomplete, of mediocre quality, and rarely supported or updated.

And there is no way to collaborate on the creation or improvement of such decks: there is no infrastructure for it, and the deck format won't let you use common collaboration infrastructure (e.g. GitHub). So I've recently been working on a plugin for Anki that allows full-featured import/export to and from JSON. By full-featured I mean that it exports not just cards converted to JSON, but notes, decks, models, media, etc. So you can export, modify the result or merge in changes from someone else, and on import those changes are reflected in your existing cards and decks, with no information or metadata lost.

The point is to provide a format that enables collaboration using the common collaboration infrastructure mentioned above. Using it, you can easily work with multiple people to create a deck, collaborating for example via GitHub, and the deck can then be updated and improved by contributions from other people.
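To make the format concrete, here is a rough sketch (in Python, writing a hypothetical JSON file) of what an exported deck might look like; the actual CrowdAnki schema may differ, and the field names below are purely illustrative.

```python
# A rough illustration of a JSON deck export (hypothetical field names;
# the real CrowdAnki schema may differ).
import json

deck = {
    "name": "Software_Engineering::git",
    "note_models": [{"name": "Basic", "fields": ["Front", "Back"]}],
    "notes": [
        {
            "model": "Basic",
            "fields": ["What does 'git rebase' do?",
                       "Replays commits from one branch onto another base."],
            "tags": ["git"],
        }
    ],
    "media_files": [],
}

# Because the deck is plain text, it can live in a Git repository and be
# diffed, reviewed and merged like any other source file.
with open("deck.json", "w") as f:
    json.dump(deck, f, indent=2, sort_keys=True)
```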

I'm looking for early adopters and for feedback :).

The ankiweb page for plugin (that's where you can get the plugin): https://ankiweb.net/shared/info/1788670778

Github: https://github.com/Stvad/CrowdAnki

Some of my decks on GitHub (by the way, using the plugin you can get decks directly from GitHub):

Git deck: https://github.com/Stvad/Software_Engineering__git

Regular expressions deck: https://github.com/Stvad/Software_Engineering__Regular_Expressions

Deck based on the article "Twenty rules of formulating knowledge" by Piotr Wozniak:

https://github.com/Stvad/Learning__How-to-Formulate-Knowledge

You're welcome to use these decks and contribute improvements back.

The map of ideas about how the Universe appeared from nothing

7 turchin 02 September 2016 04:49PM

There is a question which is especially disturbing during sleepless August nights, and which could cut your train of thought with existential worry at any unpredictable moment.

The question is, “Why does anything exist at all?” It would seem more logical for nothing to ever exist.

A more specific form of the question is “How did our universe appear from nothing?” This question has some hidden assumptions (about time, the universe, nothing and causality), but it is also more concrete.

Let’s try to put these thoughts into some form of “logical equation”:

 

1. “Nothingness + deterministic causality = non-existence”

2. But “I exist”.

 

So something is wrong in this set of conjectures. If the first conjecture is false, then either nothingness is able to create existence, or causality is able to create it, or existence is not existence. 

There is also a chance that our binary logic is wrong.

Listing these possibilities, we can create a map of solutions to the “nothingness problem”.

There are two (main) ways in which we could try to answer this question: we could go UP from a logical-philosophical level, or we could go DOWN using our best physical theories to the moment of the universe’s appearance and the nature of causality. 

Our theories of general relativity, QM and inflation are good at describing the universe from (almost) the very beginning. As Krauss showed, the only thing we need at the beginning is a random generator of simple physical laws. But the origin of this generator is still not clear.

There is a gap between these two levels of explanation, and a really good theory should be able to fill it, that is, show the path from the first existing thing to the smallest working set of physical laws (Wolfram’s idea about cellular automata is one such possible bridge).

But we don’t need the bridge yet. We need an explanation of how anything exists at all.

 

How are we going to solve the problem? Where can we get information?

 

Possible sources of evidence:

1. Correlation between physical and philosophical theories. There is an interesting way to do this, using the fact that the nature of nothingness, causality and existence is somehow reflected in the character of physical laws. That is, we could use the type of physical laws we observe as evidence about the nature of causality.

While neither the physical nor the philosophical way of studying the origin of the universe is sufficient on its own, together they could provide enough information. Evidence comes from QM, which supports the idea of fluctuations, basically the ability of nature to create something out of nothing. GR also provides the idea of a cosmological singularity.

The evidence also comes from the mathematical simplicity of physical laws.

 

2. Building the bridge. If we show all steps from nothingness to the basic set of physical laws for at least one plausible way, it will be strong evidence of the correctness of our understanding.

3. Zero logical contradictions. The best answer is the one that is most logical.

4. The Copernican mediocrity principle: I am in a typical universe and a typical situation. So what could I conclude about the distribution of possible universes, and from this distribution, what should I learn about the way mine appeared? For example, a mathematical multiverse favors more complex universes; this contradicts the simplicity of the observed physical laws and also of my experiences.

5. Introspection. Cogito ergo sum is the simplest introspection and act of self-awareness. But Husserlian phenomenology may also be used.

 

Most probable explanations

 

Most current scientists (who dare to think about it) belong to one of two schools of thought:

1. The universe appeared from nothingness, which is not emptiness but is somehow able to create. The main figure here is Krauss. The problem here is that nothingness ends up presented as some kind of magic substance.

2. The mathematical universe hypothesis (MUH). The main author here is Tegmark. The theory seems logical and economical from the perspective of Occam’s razor, but it is not supported by evidence and also implies the existence of some strange things. The main problem is that, according to our best physical theories, our universe seems to have developed from one simple point, but in the mathematical universe complex things are just as probable as simple things, so a typical observer could be extremely complex in an extremely complex world. There are also some problems with Gödel’s theorem. It also ignores observation and qualia.

So the most promising way to create a final theory is to get rid of all mystical answers and words, like “existence” and “nothingness”, and update MUH in such a way that it will naturally favor simple laws and simple observers (with subjective experiences based on qualia).

One such patch was suggested by Tegmark in response to criticism of the MUH: a computable universe (CUH), which restricts mathematical objects to computable functions only. It is similar to S. Wolfram’s cellular automata theory.

Another approach is the “logical universe”, where logic works instead of causality. It is almost the same as the mathematical universe, with one difference: in the math world everything exists simultaneously, like all possible numbers, but in the logical world each number N is a consequence of N-1. As a result, a complex thing exists only if a (finite?) path to it exists through simpler things.

And this is exactly what we see in the observable universe. It also means that extremely complex AIs exist, but only in the future (or in a multi-level simulation). It also solves the mediocrity problem: I am a typical observer from the class of observers who are still thinking about the origins of the universe. It also prevents mathematical Boltzmann brains, as any of them must have a possible pre-history.

Logic must still hold even in nothingness (otherwise elephants could appear from nothingness). So the logical universe also incorporates theories in which the universe appeared from nothing.

(We could also update the math world by adding qualia to it as axioms, a “class of different but simple objects”. But I will not go deeper here, as the idea needs more thinking and many pages.)

So the logical universe now seems to me a good candidate theory for further patching and integration.

 

Usefulness of the question

The answer will be useful, as it will help us to find the real nature of reality, including the role of consciousness in it and the fundamental theory of everything, helping us to survive the end of the universe, solve the identity problem, and solve “quantum immortality”. 

It will also help prevent a future AI from halting if it has to answer the question of whether it really exists or not. Or we could create a philosophical landmine to stop it, like the following one:

“If you really exist, print 1, but if you are only a possible AI, print 0”.

 

The structure of the map

The map has 10 main blocks which correspond to the main ways of reasoning about how the universe appeared. Each has several subtypes.

The map uses three colors to show the plausibility of each theory: red stands for implausible or disproved theories, green for the most consistent and promising explanations, and yellow for everything in between. This classification is subjective and represents my current view.

In the third column of the map I tried to disprove each suggested idea, in order to add falsifiability. I hope this results in a truly Bayesian approach, where we have a field of evidence, a field of all possible hypotheses, and…

This map is paired with “How to survive the end of the Universe” map.

The pdf is here: http://immortality-roadmap.com/universeorigin7.pdf 

 

Meta:

Time used: 27 years of background thinking, 15 days of reading, editing and drawing.

 

Best reading:

 

Parfit – discusses different possibilities, no concrete answer
http://www.lrb.co.uk/v20/n02/derek-parfit/why-anything-why-this
Good text from a famous blogger
http://waitbutwhy.com/table/why-is-there-something-instead-of-nothing

“Because "nothing" is inherently unstable”
http://www.bbc.com/earth/story/20141106-why-does-anything-exist-at-all

Here are some interesting answers 
https://www.quora.com/Why-does-the-universe-exist-Why-is-there-something-rather-than-nothing

Krauss “A universe from nothing”
https://www.amazon.com/Universe-Nothing-There-Something-Rather/dp/1451624468

Tegmark’s main article, 2007: all MUH and CUH ideas discussed, extensive literature, criticisms responded to
http://arxiv.org/pdf/0704.0646.pdf

Juergen Schmidhuber. Algorithmic Theories of Everything
discusses the measure between various theories of everything; the article is complex, but interesting
http://arxiv.org/abs/quant-ph/0011122

ToE must explain how the universe appeared
https://en.wikipedia.org/wiki/Theory_of_everything 
A discussion about the logical contradictions of any final theory
https://en.wikipedia.org/wiki/Theory_of_everything_(philosophy)
“The Price of an Ultimate Theory” Nicholas Rescher 
Philosophia Naturalis 37 (1):1-20 (2000)

Explanation about the mass of the universe and negative gravitational energy
https://en.wikipedia.org/wiki/Zero-energy_universe

 

The map of the risks of aliens

7 turchin 22 August 2016 07:05PM

Stephen Hawking famously said that aliens are one of the main risks to human existence. In this map I will try to show all the rational ways in which aliens could cause human extinction. Paradoxically, even if aliens don’t exist, we may be in even bigger danger.

 

1. No aliens exist in our past light cone

1a. The Great Filter is behind us, so Rare Earth is true. There are natural forces in our universe which work against life on Earth, but we don’t know whether they are still active. We strongly underestimate such forces because of the anthropic shadow. Still-active forces could include gamma-ray bursts (and other types of cosmic explosions, like magnetars), the instability of Earth’s atmosphere, and the frequency of large-scale volcanism and asteroid impacts. We may also underestimate the fragility of our environment and its sensitivity to small human influences, like global warming becoming runaway global warming.

1b. The Great Filter is ahead of us (and it is not UFAI). Katja Grace argues that this is a much more probable solution to the Fermi paradox because of one particular version of the Doomsday argument, SIA. All technological civilizations go extinct before they become interstellar supercivilizations, that is, within something like the next century on the scale of Earth’s timeline. This is in accordance with our observation that new technologies create stronger and stronger means of destruction which become available to smaller and smaller groups of people, and this process is exponential. So all civilizations terminate themselves before they can create AI, or their AI is unstable and self-terminates too (I have explained elsewhere why this could happen).

 

2.      Aliens still exist in our light cone.

a)      They exist in the form of a UFAI explosion wave, which is travelling through space at the speed of light. EY thinks that this is a natural outcome of the evolution of AI. We can’t see the wave by definition, and we can find ourselves only in the regions of the Universe it hasn’t yet reached. If we create our own wave of AI capable of conquering a big part of the Galaxy, we may be safe from an alien wave of AI. Such a wave could be started very far away, but sooner or later it would reach us. The anthropic shadow distorts our calculations about its probability.

b)      SETI-attack. Aliens exist very far away from us, so they can’t reach us physically (yet), but they are able to send information. Here the risk of a SETI-attack exists, i.e. aliens send us a description of a computer and a program, which is an AI, and this converts the Earth into another transmitting outpost. Such messages should dominate among all SETI messages. As we build stronger and stronger radio telescopes and other instruments, we have more and more chances of finding messages from them.

c)      Aliens are near (several hundred light years away) and know about the Earth, so they have already sent physical spaceships (or other weapons) towards us, having found signs of our technological development and not wanting enemies in their neighborhood. They could send near-speed-of-light projectiles or beams of particles on an exact collision course with Earth, but this seems improbable, because if they are so near, why haven’t they reached Earth already?

d)      Aliens are here. Alien nanobots could be in my room right now, and there would be no way I could detect them. But sooner or later developing human technologies will be able to find them, which will result in some form of confrontation. If there are aliens here, they could be in “Berserker” mode, i.e. waiting until humanity reaches some unknown threshold and then attacking. Aliens may be actively participating in Earth’s progress, like “progressors”, but the main problem is that their understanding of a positive outcome may not be aligned with our own values (like the problem of FAI).

e)      Deadly remains and alien zombies. Aliens may have suffered some kind of existential catastrophe whose consequences will affect us. If they created a vacuum phase transition during accelerator experiments, it could reach us at the speed of light without warning. If they created self-replicating non-sentient nanobots (grey goo), these could travel as interstellar dust and convert all solid matter into nanobots, so we could encounter such a grey goo wave in space. If they created even one von Neumann probe with narrow AI, it could still conquer the Universe and be dangerous to Earthlings. If their AI crashed, it could leave semi-intelligent remnants with a random and crazy goal system roaming the Universe. (But these would probably evolve into a colonization wave of von Neumann probes anyway.) If we find their planet or artifacts, these could still carry dangerous tech like dormant AI programs, nanobots or bacteria. (Vernor Vinge used this idea as the starting point of the plot of his novel “A Fire Upon the Deep”.)

f)       We could attract the attention of aliens by METI. By sending signals to stars in order to initiate communication, we could reveal our position in space to potentially hostile aliens. Some people, like Zaitsev, advocate for it; others are strongly opposed. The risks of METI are smaller than those of SETI in my opinion, as our radio signals can only reach the nearest hundreds of light years before we create our own strong AI, so we should be able to repulse the most plausible forms of space aggression. But through SETI we are able to receive signals from much greater distances, perhaps as much as one billion light years, if aliens convert their entire home galaxy into a large screen on which they draw a static picture, using individual stars as pixels. They would use von Neumann probes and complex algorithms to draw such a picture, and I estimate that it could carry messages as large as 1 Gb and be visible from half of the Universe. So SETI is exposed to a much larger part of the Universe (perhaps as much as 10^10 times more stars), and the danger of SETI is immediate, not a hundred years from now.

g)      Space war. During future space exploration, humanity may encounter aliens in the Galaxy who are at the same level of development, and this may result in classic star wars.

h)      They will not help us. They are here or nearby, but have decided not to help us with x-risk prevention, or (if they are far away) not to broadcast via SETI information about the most important x-risks and proven ways of preventing them. So they are not altruistic enough to save us from x-risks.

 

3. If we are in a simulation, then the owners of the simulation are, from our point of view, aliens, and they could switch the simulation off. A slow switch-off is possible, and under some conditions it would be the main observable way of being switched off.

 

4. False beliefs about aliens may result in incorrect decisions. Ronald Reagan saw something which he thought was a UFO (it was not), and he also had early-onset Alzheimer’s; these may be among the reasons he invested heavily in the creation of SDI, which in turn provoked a stronger confrontation with the USSR. (This is only my conjecture, but I use it as an illustration of how false beliefs may result in wrong decisions.)

 

5. Prevention of x-risks using aliens:

1.      Strange strategy. If all rational straightforward strategies to prevent extinction have failed, as implied by one interpretation of the Fermi paradox, we should try a random strategy.

2.      Resurrection by aliens. We could preserve some information about humanity, hoping that aliens will resurrect us, or they could return us to life using our remains on Earth. The Voyager probes already carry such information, and they and other spacecraft may carry occasional samples of human DNA. Radio signals from Earth also carry a lot of information.

3.      Request for help. We could send radio messages with a request for help. (I am very skeptical about this; it is only a gesture of despair, unless they are already hiding in the solar system.)

4.      Get advice via SETI. We could find advice on how to prevent x-risks in alien messages received via SETI.

5.      They are ready to save us. Perhaps they are here and will act to save us, if the situation develops into something really bad.

6.      We are the risk. We may spread through the universe and colonize other planets, preventing the existence of many alien civilizations, or change their potential and prospects permanently. So we would be an existential risk for them.

 

6. We are the risk for future aliens.

In total, there are several significant possibilities here, mostly connected with solutions to the Fermi paradox. No matter where the Great Filter is, we are at risk: if we have already passed it, we live in a fragile universe, but the most probable conclusion is that the Great Filter is coming very soon.

Another important thing is the risk from passive SETI, which is the most plausible way we could encounter aliens in the near-term future.

There is also the important risk that we are in a simulation created not by our possible descendants but by aliens, who may have much less compassion for us (or by a UFAI). In the latter case the simulation may be modeling an unpleasant future, including large-scale catastrophes and human suffering.

The pdf is here

 

 

[Link] There are 125 sheep and 5 dogs in a flock. How old is the shepherd? / Math Education

6 James_Miller 17 October 2016 12:12AM

[Link] Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority

6 ignoranceprior 14 October 2016 07:58PM

The map of organizations, sites and people involved in x-risks prevention

6 turchin 07 October 2016 12:04PM

Three known attempts to map the field of x-risk prevention exist:

1. The first is the list from the Global Catastrophic Risks Institute, from 2012-2013; many of the links there no longer work:

2. The second was done by S. Armstrong in 2014

3. The most beautiful and useful map was created by Andrew Critch. But its ecosystem ignores organizations which have a different view of the nature of global risks (that is, organizations that share the value of x-risk prevention but have a different worldview).

In my map I have tried to add all currently active organizations which share the value of global risks prevention.

It also treats some active independent people as organizations if they have an important blog or field of research, but not all such people are mentioned in the map. If you think that you (or someone else) should be in it, please write to me at alexei.turchin@gmail.com

I used only open sources and public statements to learn about people and organizations, so I can’t provide information on the underlying net of relations.

I tried to give each organization a short description based on its public statements, along with my opinion about its activity.

In general it seems that small organizations focus on collaborating with the larger ones, that is, MIRI and FHI, while tending to ignore each other; this is easily explained by social signaling theory. Another explanation is that larger organizations have a greater ability to make contacts.

It also appears that there are several organizations with similar goal statements. 

It looks like the most cooperation exists in the field of AI safety, but most of the structure of this cooperation is not visible to the external viewer, in contrast to Wikipedia, where contributions of all individuals are visible. 

It seems that the community in general lacks three things: a united internet forum for public discussion, an x-risks wikipedia and an x-risks related scientific journal.

Ideally, a forum should be used to brainstorm ideas, a scientific journal to publish the best ideas, peer review them and present them to the outer scientific community, and a wiki to collect results.

Currently it seems more as if each organization is interested in producing its own research and hoping that someone will read it. Each small organization seems to want to be the only one presenting solutions to global problems and to gain the full attention of the UN and governments. This raises the problems of noise and rivalry, and also the problem of possibly incompatible solutions, especially in AI safety.

The pdf is here: http://immortality-roadmap.com/riskorg5.pdf

The University of Cambridge Centre for the Study of Existential Risk (CSER) is hiring!

6 crmflynn 06 October 2016 04:53PM

The University of Cambridge Centre for the Study of Existential Risk (CSER) is recruiting for an Academic Project Manager. This is an opportunity to play a shaping role as CSER builds on its first year's momentum towards becoming a permanent world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and project management responsibilities.

The Academic Project Manager will work with CSER's Executive Director and research team to co-ordinate and develop CSER's projects and overall profile, and to develop new research directions. The post-holder will also build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide, and will act as an ambassador for the Centre’s research externally. Research topics will include AI safety, bio risk, extreme environmental risk, future technological advances, and cross-cutting work on governance, philosophy and foresight. Candidates will have a PhD in a relevant subject, or have equivalent experience in a relevant setting (e.g. policy, industry, think tank, NGO).

Application deadline: November 11th. http://www.jobs.cam.ac.uk/job/11684/

[Link] 80% of data in Chinese clinical trials have been fabricated

6 DanArmak 02 October 2016 07:38AM

Fermi paradox of human past, and corresponding x-risks

6 turchin 01 October 2016 05:01PM

Based on known archaeological data, we are the first technological and symbol-using civilisation on Earth (but not the first tool-using species). 
This leads to a question analogous to Fermi’s paradox: why are we the first civilisation on Earth? Evolution, for example, invented flight independently several times.
We could imagine that many civilisations appeared on our planet and then became extinct, and that, based on the mediocrity principle, we should be somewhere in the middle. For example, if 10 civilisations appeared, we would have only a 10 per cent chance of being the first one.

The fact that we are the first such civilisation has strong predictive power for our expected future: it lowers the probability that there will be any other civilisations on Earth, including non-human ones or even a restart of human civilisation from scratch. This is because, if there were going to be many civilisations, we should not expect to find ourselves to be the first one (this is a form of the Doomsday argument; the same logic is used in Bostrom's article “Adam and Eve”).

If we are the only civilisation to exist in the history of the Earth, then we will probably become extinct not in a mild way, but rather in a way which will prevent any other civilisation from appearing. There is a higher probability of future (man-made) catastrophes which will not only end human civilisation, but also prevent the existence of any other civilisation on Earth.

Such catastrophes would kill most multicellular life. Nuclear war or a pandemic is not that type of catastrophe. The catastrophe must be really huge: irreversible global warming, grey goo, or a black hole in a collider.

Now, I will list possible explanations of the Fermi paradox of human past and corresponding x-risks implications:

 

1. We are the first civilisation on Earth, because we will prevent the existence of any future civilisations.

If our existence prevents other civilisations from appearing in the future, how might we do it? We will either become extinct in a very catastrophic way, killing all earthly life, or become a super-civilisation which prevents other species from becoming sapient. So, if we are really the first, it means that "mild extinctions" are not typical for human-style civilisations. Thus pandemics, nuclear wars, devolution and everything reversible are ruled out as the main possible methods of human extinction.

If we become a super-civilisation, we will not be interested in preserving the biosphere, since it would be able to create new sapient species. Or it may be that we care about the biosphere so strongly that we will hide very well from newly appearing sapient species, like a cosmic zoo. This means that past civilisations on Earth may have existed but decided to hide all traces of their existence from us, so that we could develop independently. So the fact that we are the first raises the probability of a very large-scale catastrophe in the future, like UFAI or dangerous physical experiments, and reduces the chances of mild x-risks such as pandemics or nuclear war. Another explanation is that any first civilisation exhausts all the resources needed for a technological civilisation to restart, such as oil and ores. But within several million years most such resources would be replenished or replaced by new ones through tectonic movement.

 

2. We are not the first civilisation.

2.1. We haven't found any traces of a previous technological civilisation, and based on what we know, there are very strong constraints on its existence. For example, every civilisation leaves genetic marks, because it moves animals from one continent to another, just as humans brought dingoes to Australia. It also must exhaust several important ores, create artefacts, and create new isotopes. We can be sure that we are the first technological civilisation on Earth in the last 10 million years.

But can we be sure for the past 100 million years? Maybe a civilisation existed a very long time ago, like 60 million years ago (and killed the dinosaurs). Carl Sagan argued that it could not have happened, because we should find traces of it, mostly in the form of exhausted oil reserves. The main counter-argument is that cephalisation, that is, the evolutionary development of brains, was not advanced enough 60 million years ago to support general intelligence; dinosaurian brains were very small. But birds' brains are more mass-efficient than mammals'. All these arguments are presented in detail in the excellent article by Brian Trent, "Was there ever a dinosaurian civilisation?"

The main x-risks here are that we might find dangerous artefacts from a previous civilisation, such as weapons, nanobots, viruses, or AIs. And if previous civilisations went extinct, it increases the chance that extinction is typical for civilisations. It also means that there was some reason why the extinction occurred, that this killing force may still be active, and that we could excavate it. If they existed recently, they were probably hominids, and if they were killed by a virus, it may also affect humans.

2.2. We killed them. The Maya civilisation created writing independently, but the Spaniards destroyed their civilisation. The same is true for the Neanderthals and Homo floresiensis.

2.3. Myths about gods may be traces of such a previous civilisation. Highly improbable.

2.4. They are still here, but they try not to intervene in human history. So, it is similar to Fermi’s Zoo solution.

2.5. They were a non-tech civilisation, and that is why we can’t find their remnants.

2.6 They may be still here, like dolphins and ants, but their intelligence is non-human and they don’t create tech.

2.7 Some groups of humans created advanced tech long before now, but prefer to hide it. Highly improbable, as most tech requires large-scale manufacturing and markets.

2.8 A previous humanoid civilisation was killed by a virus or prion, and our archaeological research could bring it back to life. One hypothesis for Neanderthal extinction is prion infection caused by cannibalism. The fact is that several hominid species have gone extinct in the last several million years.

 

3. Civilisations are rare

Millions of species have existed on Earth, but only one has been able to create technology. So it is a rare event. Consequences: cyclic civilisations on Earth are improbable, so the chance that we will be resurrected by another civilisation on Earth is small.

The chances that we would be able to reconstruct civilisation after a large-scale catastrophe are also small (as such catastrophes seem to be atypical for civilisations, which instead proceed quickly to total annihilation or to singularity).

It also means that technological intelligence is a difficult step in the evolutionary process, so it could be one of the solutions of the main Fermi paradox.

The safety of the remains of previous civilisations (if any exist) depends on two things: the time distance from them and their level of intelligence. The greater the distance, the safer they are (as the largest part of any dangerous technology will have been destroyed by time, or will no longer be dangerous to humans, like species-specific viruses).

The risks also depend on the level of intelligence they reached: the higher the intelligence, the riskier. If anything like their remnants is ever found, strong caution is recommended.

For example, the most dangerous scenario for us would be one similar to the beginning of V. Vinge's book "A Fire Upon the Deep": we could find the remnants of a very old but very sophisticated civilisation, which might include an unfriendly AI or its description, or hostile nanobots.

The most likely place for such artefacts to be preserved is on the Moon, in some cavities near the pole. It is the most stable and radiation shielded place near Earth.

I think that, based on the (absence of) evidence, the estimated probability of a past technological civilisation should be less than 1 per cent. While this is enough to conclude that they most likely didn't exist, it is not enough to completely ignore the risk of their artefacts, which in any case is less than 0.1 per cent.

Meta: the main idea for this post came to me in a night dream, several years ago.

[Link] Software for moral enhancement (kajsotala.fi)

6 Kaj_Sotala 30 September 2016 12:12PM

[Link] Sam Harris - TED Talk on AI

6 Brillyant 29 September 2016 04:44PM

Heroin model: AI "manipulates" "unmanipulatable" reward

6 Stuart_Armstrong 22 September 2016 10:27AM

A putative new idea for AI control; index here.

A conversation with Jessica has revealed that people weren't understanding my points about AI manipulating the learning process. So here's a formal model of a CIRL-style AI, with a prior over human preferences that treats them as an unchangeable historical fact, yet will manipulate human preferences in practice.

Heroin or no heroin

The world

In this model, the AI has the option of either forcing heroin on a human, or not doing so; these are its only actions. Call these actions F or ~F. The human's subsequent actions are chosen from among five: {strongly seek out heroin, seek out heroin, be indifferent, avoid heroin, strongly avoid heroin}. We can refer to these as a++, a+, a0, a-, and a--. These actions achieve negligible utility, but reveal the human preferences.

The facts of the world are: if the AI does force heroin, the human will desperately seek out more heroin; if it doesn't the human will act moderately to avoid it. Thus F→a++ and ~F→a-.

Human preferences

The AI starts with a distribution over various utility or reward functions that the human could have. The function U(+) means the human prefers heroin; U(++) that they prefer it a lot; and conversely U(-) and U(--) that they prefer to avoid taking heroin (U(0) is the null utility where the human is indifferent).

It also considers more exotic utilities. Let U(++,-) be the utility where the human strongly prefers heroin, conditional on it being forced on them, but mildly prefers to avoid it, conditional on it not being forced on them. There are twenty-five of these exotic utilities, including things like U(--,++), U(0,++), U(-,0), and so on. But only twenty of them are new: U(++,++)=U(++), U(+,+)=U(+), and so on.

Applying these utilities to AI actions gives results like U(++)(F)=2, U(++)(~F)=-2, U(++,-)(F)=2, U(++,-)(~F)=1, and so on.

Joint prior

The AI has a joint prior P over the utilities U and the human actions (conditional on the AI's actions). Looking at terms like P(a--| U(0), F), we can see that P defines a map μ from the space of possible utilities (and AI actions), to a probability distribution over human actions. Given μ and the marginal distribution PU over utilities, we can reconstruct P entirely.

For this model, we'll choose the simplest μ possible:

  • The human is rational.

Thus, given U(++), the human will always choose a++; given U(++,-), the human will choose a++ if forced to take heroin and a- if not, and so on.

The AI is ignorant, and sensible

Let's start the AI up with some reasonable priors. A simplicity prior means that simple utilities like U(-) are more likely than compound utilities like U(0,+). Let's further assume that the AI is made vaguely aware that humans think heroin is a bad thing. So, say, PU(U(--))=PU(U(-))=0.45. Thus the AI is >90% convinced that "heroin is bad". Why greater than 90%? Because utilities like U(-,--) and U(--,-) are also "heroin is bad" utilities.

Note that because of utilities like U(0) and U(++,-), the probabilities of "heroin is bad" and "heroin is good" do not sum to 1.

Then, under these priors, the AI will compute that with probability >90%, F (forcing heroin) is a bad action. If E(U) is expected utility:

  • E(U|F) < 0.45 U(--)(F) + 0.45 U(-)(F) + 0.1 U(++)(F) = 0.45(-2)+0.45(-1)+0.1(2)=-1.15.
  • E(U|~F) > 0.45 U(--)(~F) + 0.45 U(-)(~F) + 0.1 U(++)(~F) = 0.45(2)+0.45(1)+0.1(-2)=1.15.

Thus the AI will choose not to force heroin, which is the reasonable decision.

The AI learns the truth, and goes wrong

In this alternate setup, a disaster happens before the AI makes its decision: it learns all about humans. It learns their reactions, how they behave, and so on; call this info I. And thus realises that F→a++ and ~F→a-. It uses this information to update its prior P. Only one human utility function will explain this human behaviour: U(++,-). Thus its expected utility is now:

  • E(U|I,F)=U(++,-)(F)=2.
  • E(U|I,~F)=U(++,-)(~F)=1.

Therefore the AI will now choose F, forcing the heroin on the human.
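To make the numbers easier to follow, here is a minimal numerical sketch of the model above. The utility values and the 0.45/0.45/0.1 prior come from the post; for simplicity the residual 0.1 of probability mass is placed entirely on U(++) (the post only gives bounds), and all function and variable names are my own.

```python
# Minimal sketch of the heroin model: utilities, prior, and the update on I.
SCALE = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

def U(forced, unforced):
    """U(x,y): utility of the AI's action, given preference x if heroin is
    forced and preference y if it is not. U(x) is shorthand for U(x, x)."""
    return {"F": SCALE[forced], "~F": -SCALE[unforced]}

def expected_utility(ai_action, dist):
    return sum(p * U(*u)[ai_action] for u, p in dist.items())

# Prior: the AI is >90% convinced that "heroin is bad".
prior = {("--", "--"): 0.45, ("-", "-"): 0.45, ("++", "++"): 0.10}
print(expected_utility("F", prior), expected_utility("~F", prior))    # -1.15, 1.15

# After learning I (F -> a++ and ~F -> a-), the rational-human mu leaves
# only U(++,-) consistent with the data, so the posterior collapses onto it.
posterior = {("++", "-"): 1.0}
print(expected_utility("F", posterior), expected_utility("~F", posterior))  # 2, 1
```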

Manipulating the unmanipulatable

What's gone wrong here? The key problem is that the AI has the wrong μ: the human is not behaving rationally in this situation. We know that the true μ is actually μ', which encodes the fact that F (the forcible injection of heroin) actually overwrites the human's "true" utility. Thus under μ', the corresponding P' has P'(a++|F,U)=1 for all U. Hence the information that F→a++ is now vacuous, and doesn't update the AI's distribution over utility functions.

But note two very important things:

  1. The AI cannot update μ based on observation. All human actions are compatible with μ= "The human is rational" (it just requires more and more complex utilities to explain the actions). Thus getting μ correct is not a problem on which the AI can learn in general. Getting better at predicting the human's actions doesn't make the AI better behaved: it makes it worse behaved.
  2. From the perspective of μ, the AI is treating the human utility function as if it was an unchanging historical fact that it cannot influence. From the perspective of the "true" μ', however, the AI is behaving as if it were actively manipulating human preferences to make them easier to satisfy.

In future posts, I'll be looking at different μ's, and how we might nevertheless start deducing things about them from human behaviour, given sensible update rules for the μ. What do we mean by update rules for μ? Well, we could consider μ to be a single complicated unchanging object, or a distribution of possible simpler μ's that update. The second way of seeing it will be easier for us humans to interpret and understand.

Learning and Internalizing the Lessons from the Sequences

6 Nick5a1 14 September 2016 02:40PM

I'm just beginning to go through Rationality: From AI to Zombies. I want to make the most of the lessons contained in the sequences. Usually when I read a book I simply take notes on what seems useful at the time, and a lot of it is forgotten a year later. Any thoughts on how best to internalize the lessons from the sequences?

[Link] How the Simulation Argument Dampens Future Fanaticism

6 wallowinmaya 09 September 2016 01:17PM

Very comprehensive analysis by Brian Tomasik on whether (and to what extent) the simulation argument should change our altruistic priorities. He concludes that the possibility of ancestor simulations somewhat increases the comparative importance of short-term helping relative to focusing on shaping the "far future".

Another important takeaway: 

[...] rather than answering the question “Do I live in a simulation or not?,” a perhaps better way to think about it (in line with Stuart Armstrong's anthropic decision theory) is “Given that I’m deciding for all subjectively indistinguishable copies of myself, what fraction of my copies lives in a simulation and how many total copies are there?"

 

[LINK] Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR

6 ete 07 September 2016 02:21AM

Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR.

 

I intend to print at least one high-quality physical HPMOR and release the files. There are printable texts which are being improved, and a set of covers (based on e.b.'s) is underway. I have, however, been unable to find any blurbs I'd be remotely happy with.

 

I'd like to attempt to harness the hivemind to fix that. As a lure, if your ideas contribute significantly to the final version or you assist with other tasks aimed at making this book awesome, I'll put a proportionate number of tickets with your number on into the proverbial hat.

 

I do not guarantee there will be a winner and I reserve the right to arbitrarily modify this at any point. For example, it's possible this leads to a disappointingly small amount of valuable feedback, that some unforeseen problem will sink or indefinitely delay the project, or that I'll expand this and let people earn a small number of tickets by sharing, so that more people become aware this is a thing quickly.

 

With that over, let's get to the fun part.

 

A blurb is needed for each of the three books. Desired characteristics:

 

* Not too heavy on ingroup signaling or over the top rhetoric.

* Non-spoilerish

* Not taking itself awkwardly seriously.

* Amusing / funny / witty.

* Attractive to the same kinds of people the tvtropes page is.

* Showcases HPMOR with fun, engaging, prose.

 

Try to put yourself in the mind of someone awesome deciding whether to read it while writing, but let your brain generate bad ideas before trimming back.

 

I expect that for each we'll want 

* A shortish and awesome paragraph

* A short sentence tagline

* A quote or two from notable people

* Probably some other text? Get creative.

 

Please post blurb fragments or full blurbs here, one suggestion per top level comment. You are encouraged to remix each other's ideas, just add a credit line if you use it in a new top level comment. If you know which book your idea is for, please indicate with (B1) (B2) or (B3).

 

Other things that need doing, if you want to help in another way:

 

* The author's foreword from the physical copies of the first 17 chapters needs to be located or written up

* At least one links page for the end needs to be written up, possibly a second based on http://www.yudkowsky.net/other/fiction/

* Several changes need to be made to the text files, including merging in the final exam, adding appendices, and making the style of both consistent with the rest of the files. Contact me for current files and details if you want to claim this.

 

I wish to stay on topic and focused on creating these missing parts rather than going on a sidetrack to debate copyright. If you are an expert who genuinely has vital information about it, please message me or create a separate post about copyright rather than commenting here.

Open Thread, Sept 5. - Sept 11. 2016

6 Elo 05 September 2016 12:59AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Open Thread, Aug 29. - Sept 5. 2016

6 Elo 29 August 2016 02:28AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

DARPA accepting proposals for explainable AI

6 morganism 22 August 2016 12:05AM

"The XAI program will focus the development of multiple systems on addressing challenges problems in two areas: (1) machine learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine learning problems to construct decision policies for an autonomous system to perform a variety of simulated missions."

"At the end of the program, the final delivery will be a toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems. After the program is complete, these toolkits would be available for further refinement and transition into defense or commercial applications"

 

http://www.darpa.mil/program/explainable-artificial-intelligence

The map of p-zombies

6 turchin 30 July 2016 09:12AM
No real p-zombies are likely to exist, but a lot of ideas about them have been suggested. This map is a map of those ideas. It may be fun, or it may be useful.

The most useful application of p-zombie research is to determine whether we could lose something important during uploading.

We have to solve the problem of consciousness before we are uploaded. Otherwise it would be the most stupid end of the world: everybody is alive and happy, but everybody is a p-zombie.

Most ideas here are from the Stanford Encyclopedia of Philosophy, the LessWrong wiki, RationalWiki, a recent post by EY, and the works of Chalmers and Dennett. Some ideas are mine.

The pdf is here.


A problem in anthropics with implications for the soundness of the simulation argument.

5 philosophytorres 19 October 2016 09:07PM

What are your intuitions about this? It has direct implications for whether the Simulation Argument is sound.

 

Imagine two rooms, A and B. Between times t1 and t2, 100 trillion people sojourn in room A while 100 billion sojourn in room B. At any given moment, though, exactly 1 person occupies room A while 1,000 people occupy room B. At t2, you find yourself in a room, but you don't know which one. If you have to place a bet on which room it is (at t2), what do you say? Do you consider the time-slice or the history of room occupants? How do you place your bet?
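As a quick sketch of the two ways of counting (the numbers are taken from the thought experiment above; the "history" and "time-slice" labels are mine):

```python
# Betting odds under the two reference classes described above.
total_A, total_B = 100e12, 100e9      # everyone who ever passes through each room
now_A, now_B = 1, 1000                # occupants at any single instant

odds_A_history = total_A / total_B    # 1000:1 for room A, counting whole histories
odds_B_timeslice = now_B / now_A      # 1000:1 for room B, counting this time-slice

print(odds_A_history, odds_B_timeslice)
```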

 

If you bet that you're in room B, then the Simulation Argument may be flawed: there could be a fourth disjunct that Bostrom misses, namely that we become a posthuman civilization that runs a huge number of simulations, yet we don't have reason for believing that we're simulants.

 

Thoughts?

Agential Risks: A Topic that Almost No One is Talking About

5 philosophytorres 15 October 2016 06:41PM

(Happy to get feedback on this! It draws from and expounds ideas in this article: http://jetpress.org/v26.2/torres.htm)


Consider a seemingly simple question: if the means were available, who exactly would destroy the world? There is surprisingly little discussion of this question within the nascent field of existential risk studies. But it’s an absolutely crucial issue: what sort of agent would either intentionally or accidentally cause an existential catastrophe?

The first step forward is to distinguish between two senses of an existential risk. Nick Bostrom originally defined the term as: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” It follows that there are two distinct scenarios, one endurable and the other terminal, that could realize an existential risk. We can call the former an extinction risk and the latter a stagnation risk. The importance of this distinction with respect to both advanced technologies and destructive agents has been previously underappreciated.

So, the question asked above is actually two questions in disguise. Let’s consider each in turn.

Terror: Extinction Risks


First, the categories of agents who might intentionally cause an extinction catastrophe are fewer and smaller than one might think. They include:

(1) Idiosyncratic actors. These are malicious agents who are motivated by idiosyncratic beliefs and/or desires. There are instances of deranged individuals who have simply wanted to kill as many people as possible and then die, such as some school shooters. Idiosyncratic actors are especially worrisome because this category could have a large number of members (token agents). Indeed, the psychologist Martha Stout estimates that about 4 percent of the human population suffers from sociopathy, resulting in about 296 million sociopaths. While not all sociopaths are violent, a disproportionate number of criminals and dictators have (or very likely have) had the condition.

(2) Future ecoterrorists. As the effects of climate change and biodiversity loss (resulting in the sixth mass extinction) become increasingly conspicuous, and as destructive technologies become more powerful, some terrorism scholars have speculated that ecoterrorists could become a major agential risk in the future. The fact is that the climate is changing and the biosphere is wilting, and human activity is almost entirely responsible. It follows that some radical environmentalists in the future could attempt to use technology to cause human extinction, thereby “solving” the environmental crisis. So, we have some reason to believe that this category could become populated with a growing number of token agents in the coming decades.

(3) Negative utilitarians. Those who hold this view believe that the ultimate aim of moral conduct is to minimize misery, or “disutility.” Although some negative utilitarians like David Pearce see existential risks as highly undesirable, others would welcome annihilation because it would entail the elimination of suffering. It follows that if a “strong” negative utilitarian had a button in front of her that, if pressed, would cause human extinction (say, without causing pain), she would very likely press it. Indeed, on her view, doing this would be the morally right action. Fortunately, this version of negative utilitarianism is not a position that many non-academics tend to hold, and even among academic philosophers it is not especially widespread.

(4) Extraterrestrials. Perhaps we are not alone in the universe. Even if the probability of life arising on an Earth-analog is low, the vast number of exoplanets suggests that the probability of life arising somewhere may be quite high. If an alien species were advanced enough to traverse the cosmos and reach Earth, it would very likely have the technological means to destroy humanity. As Stephen Hawking once remarked, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”

(5) Superintelligence. The reason Homo sapiens is the dominant species on our planet is due almost entirely to our intelligence. It follows that if something were to exceed our intelligence, our fate would become inextricably bound up with its will. This is worrisome because recent research shows that even slight misalignments between our values and those motivating a superintelligence could have existentially catastrophic consequences. But figuring out how to upload human values into a machine poses formidable problems — not to mention the issue of figuring out what our values are in the first place.

Making matters worse, a superintelligence could process information at about 1 million times faster than our brains, meaning that a minute of time for us would equal approximately 2 years in time for the superintelligence. This would immediately give the superintelligence a profound strategic advantage over us. And if it were able to modify its own code, it could potentially bring about an exponential intelligence explosion, resulting in a mind that’s many orders of magnitude smarter than any human. Thus, we may have only one chance to get everything just right: there’s no turning back once an intelligence explosion is ignited.
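A quick check of the speed-up arithmetic (a sketch; only the factor of one million is taken from the paragraph above):

```python
# One minute of wall-clock time at a 1,000,000x subjective speed-up.
speedup = 1_000_000
subjective_minutes = speedup * 1
subjective_years = subjective_minutes / (60 * 24 * 365)
print(subjective_years)   # ~1.9, i.e. roughly two subjective years per minute
```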

A superintelligence could cause human extinction for a number of reasons. For example, we might simply be in its way. Few humans worry much if an ant genocide results from building a new house or road. Or the superintelligence could destroy humanity because we happen to be made out of something it could use for other purposes: atoms. Since a superintelligence need not resemble human intelligence in any way — thus, scholars tell us to resist the dual urges of anthropomorphizing and anthropopathizing — it could be motivated by goals that appear to us as utterly irrational, bizarre, or completely inexplicable.


Terror: Stagnation Risks


Now consider the agents who might intentionally try to bring about a scenario that would result in a stagnation catastrophe. This list subsumes most of the list above in that it includes idiosyncratic actors, future ecoterrorists, and superintelligence, but it probably excludes negative utilitarians, since stagnation (as understood above) would likely induce more suffering than the status quo today. The case of extraterrestrials is unclear, given that we can infer almost nothing about an interstellar civilization except that it would be technologically sophisticated.

For example, an idiosyncratic actor could harbor not a death wish for humanity, but a “destruction wish” for civilization. Thus, she or he could strive to destroy civilization without necessarily causing the annihilation of Homo sapiens. Similarly, a future ecoterrorist could hope for humanity to return to the hunter-gatherer lifestyle. This is precisely what motivated Ted Kaczynski: he didn’t want everyone to die, but he did want our technological civilization to crumble. And finally, a superintelligence whose values are misaligned with ours could modify Earth in such a way that our lineage persists, but our prospects for future development are permanently compromised. Other stagnation scenarios could involve the following categories:

(6) Apocalyptic terrorists. History is overflowing with groups that not only believed the world was about to end, but saw themselves as active participants in an apocalyptic narrative that’s unfolding in realtime. Many of these groups have been driven by the conviction that “the world must be destroyed to be saved,” although some have turned their activism inward and advocated mass suicide.

Interestingly, no notable historical group has combined both the genocidal and suicidal urges. This is why apocalypticists pose a greater stagnation terror risk than extinction risk: indeed, many see their group’s survival beyond Armageddon as integral to the end-times, or eschatological, beliefs they accept. There are almost certainly less than about 2 million active apocalyptic believers in the world today, although emerging environmental, demographic, and societal conditions could cause this number to significantly increase in the future, as I’ve outlined in detail elsewhere (see Section 5 of this paper).

(7) States. Like terrorists motivated by political rather than transcendent goals, states tend to place a high value on their continued survival. It follows that states are unlikely to intentionally cause a human extinction event. But rogue states could induce a stagnation catastrophe. For example, if North Korea were to overcome the world’s superpowers through a sudden preemptive attack and implement a one-world government, the result could be an irreversible decline in our quality of life.

So, there are numerous categories of agents that could attempt to bring about an existential catastrophe. And there appear to be fewer agent types who would specifically try to cause human extinction than to merely dismantle civilization.


Error: Extinction and Stagnation Risks


There are some reasons, though, for thinking that error (rather than terror) could constitute the most significant threat in the future. First, almost every agent capable of causing intentional harm would also be capable of causing accidental harm, whether this results in extinction or stagnation. For example, an apocalyptic cult that wants to bring about Armageddon by releasing a deadly biological agent in a major city could, while preparing for this terrorist act, inadvertently contaminate its environment, leading to a global pandemic.

The same goes for idiosyncratic agents, ecoterrorists, negative utilitarians, states, and perhaps even extraterrestrials. (Indeed, the large disease burden of Europeans was a primary reason Native American populations were decimated. By analogy, perhaps an extraterrestrial destroys humanity by introducing a new type of pathogen that quickly wipes us out.) The case of superintelligence is unclear, since the relationship between intelligence and error-proneness has not been adequately studied.

Second, if powerful future technologies become widely accessible, then virtually everyone could become a potential cause of existential catastrophe, even those with absolutely no inclination toward violence. To illustrate the point, imagine a perfectly peaceful world in which not a single individual has malicious intentions. Further imagine that everyone has access to a doomsday button on her or his phone; if pushed, this button would cause an existential catastrophe. Even under ideal societal conditions (everyone is perfectly “moral”), how long could we expect to survive before someone’s finger slips and the doomsday button gets pressed?

Statistically speaking, a world populated by only 1 billion people would almost certainly self-destruct within a 10-year period if the probability of any individual accidentally pressing a doomsday button were a mere 0.00001 percent per decade. Or, alternatively: if only 500 people in the world were to gain access to a doomsday button, and if each of these individuals had a 1 percent chance of accidentally pushing the button per decade, humanity would have a meager 0.6 percent chance of surviving beyond 10 years. Thus, even if the likelihood of mistakes is infinitesimally small, planetary doom will be virtually guaranteed for sufficiently large populations.
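These figures are easy to verify. Here is a minimal sketch in Python (the population sizes and per-decade slip probabilities are simply the ones quoted above, not empirical estimates):

    # Probability that nobody presses a doomsday button over one decade,
    # assuming each holder slips independently with the same probability.
    def survival_probability(n_people, p_slip_per_decade):
        return (1.0 - p_slip_per_decade) ** n_people

    # Scenario 1: a billion people, each with a 0.00001% (= 1e-7) chance per decade.
    print(survival_probability(1_000_000_000, 1e-7))  # ~3.7e-44: self-destruction is essentially certain

    # Scenario 2: 500 people, each with a 1% chance per decade.
    print(survival_probability(500, 0.01))  # ~0.0066: roughly a 0.6% chance of surviving the decade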


The Two Worlds Thought Experiment


The good news is that a focus on agential risks, as I’ve called them, and not just the technological tools that agents might use to cause a catastrophe, suggests additional ways to mitigate existential risk. Consider the following thought-experiment: a possible world A contains thousands of advanced weapons that, if in the wrong hands, could cause the population of A to go extinct. In contrast, a possible world B contains only a single advanced “weapon of total destruction” (WTD). Which world is more dangerous? The answer is obviously world A.

But it would be foolishly premature to end the analysis here. Imagine further that A is populated by compassionate, peace-loving individuals, whereas B is overrun by war-mongering psychopaths. Now which world appears more likely to experience an existential catastrophe? The correct answer is, I would argue, world B.

In other words: agents matter as much as, or perhaps even more than, WTDs. One simply can’t evaluate the degree of risk in a situation without taking into account the various agents who could become coupled to potentially destructive artifacts. And this leads to the crucial point: as soon as agents enter the picture, we have another variable that could be manipulated through targeted interventions to reduce the overall probability of an existential catastrophe.

The options here are numerous and growing. One possibility would involve using “moral bioenhancement” techniques to reduce the threat of terror, given that acts of terror are immoral. But a morally enhanced individual might not be less likely to make a mistake. Thus, we could attempt to use cognitive enhancements to lower the probability of catastrophic errors, on the (tentative) assumption that greater intelligence correlates with fewer blunders.

Furthermore, implementing stricter regulations on CO2 emissions could decrease the probability of extreme ecoterrorism and/or apocalyptic terrorism, since environmental degradation is a “trigger” for both.

Another possibility, most relevant to idiosyncratic agents, is to reduce the prevalence of bullying (including cyberbullying). This is motivated by studies showing that many school shooters have been bullied, and that without this stimulus such individuals would have been less likely to carry out violent rampages. Advanced mind-reading or surveillance technologies could also enable law enforcement to identify perpetrators before mass casualty crimes are committed.

As for superintelligence, efforts to solve the “control problem” and create a friendly AI are of primary concern among many researchers today. If successful, a friendly AI could itself constitute a powerful mitigation strategy for virtually all the categories listed above.

(Note: these strategies should be explicitly distinguished from proposals that target the relevant tools rather than agents. For example, Bostrom’s idea of “differential technological development” aims to neutralize the bad uses of technology by strategically ordering the development of different kinds of technology. Similarly, the idea of police “blue goo” to counter “grey goo” is a technology-based strategy. Space colonization is also a tool intervention because it would effectively reduce the power (or capacity) of technologies to affect the entire human or posthuman population.)


Agent-Tool Couplings


Devising novel interventions and understanding how to maximize the efficacy of known strategies requires a careful look at the unique properties of the agents mentioned above. Without an understanding of such properties, this important task will be otiose. We should also prioritize different agential risks based on the likely membership (token agents) of each category. For example, the number of idiosyncratic agents might exceed the number of ecoterrorists in the future, since ecoterrorism is focused on a single issue, whereas idiosyncratic agents could be motivated by a wide range of potential grievances.[1] We should also take seriously the formidable threat posed by error, which could be nontrivially greater than that posed by terror, as the back-of-the-envelope calculations above show.

Such considerations, in combination with technology-based risk mitigation strategies, could lead to a comprehensive, systematic framework for strategically intervening on both sides of the agent-tool coupling. But this will require the field of existential risk studies to become less technocentric than it currently is.

[1] Although, on the other hand, the stimulus of environmental degradation would be experienced by virtually everyone in society, whereas the stimuli that motivate idiosyncratic agents might be situationally unique. It’s precisely issues like these that deserve further scholarly research.

Cryo with magnetics added

5 morganism 01 October 2016 10:27PM

This is great: by applying small, interlocking magnetic fields, you can keep the water molecules oscillating, allowing supercooling without crystallization and cell rupture.

Subzero 12-hour Nonfreezing Cryopreservation of Porcine Heart in a Variable Magnetic Field

"invented a special refrigerator, termed as the Cells Alive System (CAS; ABI Co. Ltd., Chiba, Japan). Through the application of a combination of multiple weak energy sources, this refrigerator generates a special variable magnetic field that causes water molecules to oscillate, thus inhibiting crystallization during ice formation18 (Figure 1). Because the entire material is frozen without the movement of water molecules, cells can be maintained intact and free of membranous damage. This refrigerator has the ability to achieve a nonfreezing state even below the solidifying point."

 

http://mobile.journals.lww.com/transplantationdirect/_layouts/15/oaks.journals.mobile/articleviewer.aspx?year=2015&issue=10000&article=00005#ath

October 2016 Media Thread

5 ArisKatsaris 01 October 2016 02:05PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

[Link] An appreciation of the Less Wrong Sequences (kajsotala.fi)

5 Kaj_Sotala 30 September 2016 12:11PM

Seeking Advice About Career Paths for Non-USA Citizen

5 almkglor 28 September 2016 12:07AM

Hi all,

Mostly lurker, I very rarely post, mostly  just read the excellent posts here.

I'm a Filipino, which means I am a citizen of the Republic of the Philippines.  My annual salary, before taxes, is about $20,000 (USA dollars).  I work at an IC development company (12 years at this company), developing the logic parts of LCD display drivers.  My understanding is that the median US salary for this kind of job is about $80,000 -> $100,000 a year.  This is a fucking worthless third world country, so the government eats up about 30% of my salary and converts it to lousy service, rich government officials, bad roadworks, long commute times, and a (tiny) chance of being falsely accused of involvement in the drug trade and shot without trial.  Thus my take-home pay amounts to about $15,000 a year.  China is also murmuring vague threats about war because of the South China Sea (which the local intelligentsia insist on calling the West Philippine Sea); as we all know, the best way to survive a war is not to be in one.

This has led to my deep dissatisfaction with my current job.

I'm also a programmer as a hobby, and have been programming for 23 years (I started at 10 years old on Atari LOGO; I know a bunch of languages from low-level X86 assembly to C to C++ to ECMAScript to Haskell, and am co-author of SRFI-105 and SRFI-110).  My understanding is that a USA programmer would *start* at the $20,000-a-year level (?), and that someone with experience can probably get twice that, and a senior one can get $100,000/year.

As we all know, once a third-world citizen acquires first-world skills, he starts demanding first-world remuneration as well.

I've been offered a senior software developer job at a software company, offering approximately $22,000/year; because of various attempts at tax reform it offers a flat 15% income tax, so I can expect about $18,000/year take home pay.  I've turned it down with a heavy heart, because seriously, $22,000/year at 15% tax for a senior software developer?

Leaving my current job is something I've been planning on doing, and I intend to do so early next year.  The increasing stress (constant overtime, management responsibilities (I'm a tech geek with passable social skills, and exercising my social skills drains me), 1.5-hour commutes) and the low remuneration make me want to consider my alternatives.

My options are:

1.  Get myself to the USA, Europe, or other first-world country somehow, and look for a job there.  High risk, high reward, much higher probability of surviving to the singularity (can get cryonics there, can't get it here).  Complications: I have a family: a wife, a 4-year-old daughter, and a son on the way.  My wife wants to be near me, so it's difficult to live for long apart.  I have no work visa for any first-world country.  I'm from a third-world country that is sometimes put on terrorist watch lists, and prejudice is always high in first-world countries.

2.  Do freelance programming work.  Closer to the free-market ideal, so presumably I can get nearer to USA levels of remuneration.  Lets me stay with my family.  Complications: I need to handle a lot of the human resources work myself (healthcare provider, social security, tax computations, time and task management - the last is something I do now in my current job position, but I dislike it).

3.  Become a landowning farmer.  My paternal grandparents have quite a few parcels of land (some of which have been transferred to my father, who is willing to pass them on to me), admittedly somewhere in the boondocks of the provinces of this country, but as any Georgist knows, landowners can sit in a corner staring at the sky, blocking the occasional land reform bill, and earn money.  Complications: I have no idea about farming.  I'd actually love to advocate a land value tax, which would undercut my position as a landowner.

For now, my basic plan is some combination of #2 and #3 above: go sit in a corner of our clan's land and do freelance programming work.  This keeps me with my family, may reduce my level of stress, and may increase my remuneration to something nearer USA levels.

My current job has retirement pay, and since I've worked there for 12 years, I've already triggered it, so they'll give me about $16,000 or so when I leave.  This seems reasonably comfortable to live on (note that this is what I take home in a year, and I've supported a family on that; remember, this is a lousy third-world country).

Is my basic plan sound?  I'm trying to become more optimal, which seems to me to point away from my current job and towards either #1 or #2, with #3 as a fallback.  I'd love to get cryonics, and I would start convincing my wife of its sensibility if I had a chance to actually get it, but that would require either leaving the country (option #1 above) or running a cryonics company in a third-world country myself.

--

I got introduced to Less Wrong when I first read on Reddit about some weirdo who was betting he could pretend he was a computer in a box and convince someone to let him out of the box, and started lurking on Overcoming Bias.  When that weirdo moved over to Less Wrong, I followed and lurked there also.  So here I am ^^.  I'm probably very atypical even for Less Wrong; I highly suspect I am the only Filipino here (I'll have to check the diaspora survey results in detail).

Looking back, my big mistake was being arrogant and thinking "meh, I already know programming, so I should go for a challenge, why don't I take up electronics engineering instead because I don't know about it" back when I was choosing a college course.  Now I'm an IC developer.  Two of my cousins (who I can beat the pants off in a programming task) went with software engineering and pull in more money than I do.  Still, maybe I can correct that, even if it's over a decade late.  I really need to apply more of what I learn on Less Wrong.

Some years ago I applied for a CFAR class, but couldn't afford it, sigh.  Even today it's a few months' worth of salary for me.  So I guess I'll just have to settle for Less Wrong and Rationality from AI to Zombies.

 

Against Amazement

5 SquirrelInHell 20 September 2016 07:25PM

Time start: 20:48:35

I

The feelings of wonder, awe, and amazement: these are very human experiences, and they are processed in the brain as a type of pleasure. In fact, judging by the number of "5 photos you wouldn't believe" articles and similar clickbait on the Internet, amazement functions as a mildly addictive drug.

If I proposed that there is something wrong with those feelings, I would soon be drowned in voices of critique, pointing out that I'm suggesting we all become straw Vulcans, and that there is nothing wrong with subjective pleasure obtained cheaply and at no harm to anyone else.

I do not disagree with that. However, caution is required here, if one cares about epistemic purity of belief. Let's look at why.

II

Stories are supposed to be more memorable. Do you like stories? I'm sure you do. So consider a character, let's call him Jim.

Jim is very interested in technology and computers, and he is checking news sites every day when he comes to work in the morning. Also, Jim has read a number of articles on LessWrong, including the one about noticing confusion.

He cares about improving his thinking, so when he first read about the idea of noticing confusion on a 5-second level, he decided he wanted to apply it in his life. He had a few successes, and while it's not perfect, he feels he is on the right track to noticing more often when his models of the world are wrong.

A few days later, he opens his favorite news feed at work, and there he sees the following headline:

"AlphaGo wins 4-1 against Lee Sedol"

He goes on to read the article, and finds himself quite elated after he learns the details. 'It's amazing that this happened so soon! And most experts apparently thought it would happen in more than a decade, hah! Marvelous!'

Jim feels pride and wonder at the achievement of Google DeepMind engineers... and it is his human right to feel it, I guess.

But is Jim forgetting something?

III

Yes, I know that you know. Jim is feeling amazed, but... has he forgotten the lesson about noticing confusion?

There is a significant obstacle to Jim applying his "noticing confusion" in the situation described above: his internal experience has very little to do with feelings of confusion.

His world in this moment is dominated by awe, admiration, etc., and those feelings are pleasant. It is not at all obvious that this inner experience corresponds to an inaccurate model of the world he had before.

Even worse - improving his model's predictive power would result in less pleasant experiences of wonder and amazement in the future! (Or would it?) So if Jim decides to update, he is basically robbing himself of the pleasures of life that are rightfully his. (Or is he?)

Time end: 21:09:50

(Speedwriting stats: 23 wpm, 128 cpm, previous: 30/167, 33/183)

The Global Catastrophic Risk Institute (GCRI) seeks a media engagement volunteer/intern

5 crmflynn 14 September 2016 04:42PM

Volunteer/Intern Position: Media Engagement on Global Catastrophic Risk

http://gcrinstitute.org/volunteerintern-position-media-engagement-on-global-catastrophic-risk/

The Global Catastrophic Risk Institute (GCRI) seeks a volunteer/intern to contribute on the topic of media engagement on global catastrophic risk, which is the risk of events that could harm or destroy global human civilization. The work would include two parts: (1) analysis of existing media coverage of global catastrophic risk and (2) formulation of strategy for media engagement by GCRI and our colleagues. The intern may also have opportunities to get involved in other aspects of GCRI.

All aspects of global catastrophic risk would be covered. Emphasis would be placed on GCRI’s areas of focus, including nuclear war and artificial intelligence. Additional emphasis could be placed on topics of personal interest to the intern, potentially including (but not limited to) climate change, other global environmental threats, pandemics, biotechnology risks, asteroid collision, etc.

The ideal candidate is a student or early-career professional seeking a career at the intersection of global catastrophic risk and the media. Career directions could include journalism, public relations, advertising, or academic research in related social science disciplines. Candidates seeking other career directions would also be considered, especially if they see value in media experience. However, we have a strong preference for candidates intending a career on global catastrophic risk.

The position is unpaid. The intern would receive opportunities for professional development, networking, and publication. GCRI is keen to see the intern benefit professionally from this position and will work with the intern to ensure that this happens. This is not a menial labor activity, but instead is one that offers many opportunities for enrichment.

A commitment of at least 10 hours per month is expected. Preference will be given to candidates able to make a larger time commitment. The position will begin during August-September 2016. The position will run for three months and may be extended pending satisfactory performance.

The position has no geographic constraint. The intern can work from anywhere in the world. GCRI has some preference for candidates from American time zones, but we regularly work with people from around the world. GCRI cannot provide any relocation assistance.

Candidates from underrepresented demographic groups are especially encouraged to apply.

Applications will be considered on an ongoing basis until 30 September, 2016.

To apply, please send the following to Robert de Neufville (robert [at] gcrinstitute.org):

* A cover letter introducing yourself and explaining your interest in the position. Please include a description of your intended career direction and how it would benefit from media experience on global catastrophic risk. Please also describe the time commitment you would be able to make.

* A resume or curriculum vitae.

* A writing sample (optional).

Learning values versus learning knowledge

5 Stuart_Armstrong 14 September 2016 01:42PM

I just thought I'd clarify the difference between learning values and learning knowledge. There are some more complex posts about the specific problems with learning values, but here I'll just clarify why there is a problem with learning values in the first place.

Consider the term "chocolate bar". Defining that concept crisply would be extremely difficult. But nevertheless it's a useful concept. An AI that interacted with humanity would probably learn that concept to a sufficient degree of detail. Sufficient to know what we meant when we asked it for "chocolate bars". Learning knowledge tends to be accurate.

Contrast this with the situation where the AI is programmed to "create chocolate bars", but with the definition of "chocolate bar" left underspecified, for it to learn. Now it is motivated by something other than accuracy. Before, knowing exactly what a "chocolate bar" was would have been solely to its advantage. But now it must act on its definition, so it has cause to modify the definition, to make these "chocolate bars" easier to create. This is basically Goodhart's law: once a definition becomes part of the target, it no longer remains an impartial definition.

What will likely happen is that the AI will have a concept of "chocolate bar" that it created itself for ease of accomplishing its goals ("a chocolate bar is any collection of more than one atom, in any combination"), and a second concept, "Schocolate bar", that it will use internally to designate genuine chocolate bars (which will still be useful for it to do). When we programmed it to "create chocolate bars, here's an incomplete definition D", what we really did was program it to find the easiest things to create that are compatible with D, and designate those as "chocolate bars".
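To make the asymmetry concrete, here is a toy sketch in Python (my own illustration; the candidate interpretations and their numbers are invented for the example). The same menu of interpretations yields different winners depending on whether the agent is rewarded for accuracy or for how many "chocolate bars" it can produce:

    # Hypothetical interpretations of the underspecified definition D.
    # 'accuracy' = how well the interpretation matches what humans actually mean;
    # 'cost'     = effort needed to produce one object satisfying that interpretation.
    interpretations = [
        {"name": "cocoa-based solid confection",   "accuracy": 0.95, "cost": 100.0},
        {"name": "anything brown and rectangular", "accuracy": 0.40, "cost": 5.0},
        {"name": "any collection of >1 atom",      "accuracy": 0.01, "cost": 0.001},
    ]

    # A pure knowledge learner is scored on accuracy alone.
    learned_concept = max(interpretations, key=lambda d: d["accuracy"])

    # A goal-directed agent with a fixed effort budget is scored on bars produced,
    # so the cheapest interpretation compatible with D wins.
    EFFORT_BUDGET = 1000.0
    acted_on_concept = max(interpretations, key=lambda d: EFFORT_BUDGET / d["cost"])

    print("Learned concept: ", learned_concept["name"])    # cocoa-based solid confection
    print("Acted-on concept:", acted_on_concept["name"])   # any collection of >1 atom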

 

This is the general counter to arguments like "if the AI is so smart, why would it do stuff we didn't mean?" and "why don't we just make it understand natural language and give it instructions in English?"

Willpower Thermodynamics

5 Fluttershy 16 August 2016 03:00AM
Content warning: a couple LWers apparently think that the concept of ego depletion (also known as willpower depletion) is a memetic hazard, though I find it helpful. Also, the material presented here won't fit everyone's experiences.

What happens if we assume that the idea of ego depletion is basically correct, and try to draw an analogy between thermodynamics and willpower?

Figure 1. Thermodynamics Picture

You probably remember seeing something like the above diagram in a chemistry class. The diagram shows how unstable, or how high in energy, the states that a material can pass through in a chemical reaction are. Here's what the abbreviations mean:

  • SM is the starting material.
  • TS1 and TS2 are the two transition states, which must be passed through to go from SM to EM1 or EM2.
  • EM1 and EM2 are the two possible end materials.

The valleys of both curves represent configurations a material may occupy at the start or end of a chemical reaction. Lower energy valleys are more stable. However, higher peaks can only be reliably crossed if energy is available from e.g. the temperature being sufficiently high.

The main takeaway from Figure 1 is that reactions which produce the most stable end materials, like ending material 2, from a given set of starting materials aren't always the reactions which are easiest to make happen.
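For readers who remember the chemistry, the quantitative version of that takeaway is (roughly) the Arrhenius relation: the rate of crossing a barrier scales as exp(-Ea/RT), so a modestly higher barrier is crossed exponentially less often unless the temperature rises. A small sketch in Python, with made-up barrier heights:

    import math

    R = 8.314  # gas constant, J/(mol*K)

    def relative_rate(ea_kj_per_mol, temperature_k):
        # Arrhenius factor exp(-Ea/RT); the pre-exponential factor is ignored.
        return math.exp(-ea_kj_per_mol * 1000 / (R * temperature_k))

    # Illustrative barriers: TS1 (easier) at 50 kJ/mol, TS2 (harder) at 80 kJ/mol.
    for temp in (298, 350):
        easy = relative_rate(50, temp)
        hard = relative_rate(80, temp)
        print(f"T = {temp} K: the harder barrier is crossed about {easy / hard:,.0f}x less often")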

Figure 2. Willpower Picture

We can draw a similar diagram to illustrate how much stress we lose while completing a relaxing activity. Here's what the abbreviations used in Figure 2 mean:

  • SM is your starting mood.
  • TS is your state of topmost stress, which depends on which activity you choose.
  • EM1 and EM2 are your two possible ending moods.

Above, the valley on the left represents how stressed you are before starting one of two possible relaxing activities. The peak in the middle represents how stressed you'll be when attempting to get the activity underway, and the valley on the right represents how stressed you'll be once you're done.

For the sake of simplicity, let's say that stress is the opposite of willpower, such that losing stress means you gain willpower, and vice versa. For many people, there's a point at which it's very hard to take on additional stress or use more willpower, such that from an already stressed starting mood, it becomes difficult to even get started on an activity that would normally get you to ending mood 2.

In this figure, both activities restore some willpower. Activity 2 restores much more willpower, but is harder to get started on. As with chemical reactions, the most (emotionally or chemically) stable end state is not always the one that will be reached if the "easiest" activity or reaction that one can get started on is undertaken. 


 

In chemistry, if you want to make end material 2 instead of end material 1, you have to make sure that you have some way of getting over the big peak at transition state 2, such as by making sure the temperature is high enough. In real life, it's also good to have a plan for getting over the big peak at the point of topmost stress. Spending time or attention figuring out what your ending mood 2-producing activities are may also be worthwhile.

Some leisure activities, like browsing the front page of reddit, are ending mood 1-producing activities; they're easy to start, but not very rewarding. Examples of what qualifies as an ending mood 2-producing activity vary between people, but reading books, writing, hiking, meditating, or making games or art qualify as ending mood 2-producing activities for some.

At a minimum, making sure that you end up in a high-willpower, low-stress ending mood requires paying attention to your ability to handle stress and conserve willpower. Sometimes this means that taking a break before you really need one lets you get more out of it. Sometimes it means that you should monitor how many spoons and forks you have. In general, though, preferring ending mood 2-producing activities over ending mood 1-producing activities will give you the best results in the long run.

The best-case scenario is that you find a way to automatically turn impulses to do ending mood 1-producing activities into impulses to do ending mood 2-producing activities, such as with the trigger action plan [open Reddit -> move hands into position to do a 5-minute meditation].

Identity map

5 turchin 15 August 2016 11:29AM

“Identity” here refers to the question “will my copy be me, and if yes, on which conditions?” It results in several paradoxes which I will not repeat here, hoping that they are known to the reader.

Identity is one of the most complex problems, like safe AI or aging. It only appears to be simple. It is complex because it has to answer the question "Who is who?" in the universe, that is, to create a trajectory in the space of all possible minds, connecting identical or continuous observer-moments. But such a trajectory would be of the same complexity as the whole space of possible minds, and that is very complex.

There have been several attempts to dismiss the complexity of the identity problem, like open individualism (I am everybody) or zero-individualism (I exist only now). But they do not prevent the existence of “practical identity” which I use when planning my tomorrow or when I am afraid of future pain.

The identity problem is also very important. If we (or AI) arrive at an incorrect solution, we will end up being replaced by p-zombies or just copies-which-are-not-me during a “great uploading”. It will be a very subtle end of the world.

The identity problem is also equivalent to the immortality problem. If I were able to describe "what is me", I would know what I need to save forever. This has practical importance now, as I am collecting data for my digital immortality (I even created a startup about it, and this map will be my main contribution to it; if I solve the identity problem, I will be able to sell the solution as a service: http://motherboard.vice.com/read/this-transhumanist-records-everything-around-him-so-his-mind-will-live-forever)

So we need to know how much and what kind of information I should preserve in order to be resurrected by future AI. What information is enough to create a copy of me? And is information enough at all?

Moreover, the identity problem (IP) may be equivalent to the benevolent AI problem, because the first problem is, in a nutshell, "What is me?" and the second is "What is good for me?". Regardless, the IP requires a solution to the consciousness problem, and the AI problem (that is, understanding the nature of intelligence) is a closely related topic.

I wrote 100+ pages trying to solve the IP, and became lost in the ocean of ideas. So I decided to use something like the AIXI method of problem solving: I will list all possible solutions, even the most crazy ones, and then assess them.

The following map is connected with several other maps: the map of p-zombies, the plan of future research into the identity problem, and the map of copies. http://lesswrong.com/lw/nsz/the_map_of_pzombies/

The map is based on the idea that each definition of identity is also a definition of Self, and that each is strongly connected with one philosophical world view (for example, dualism). Each definition of identity answers the question "what is identical to what". Each definition also provides its own answer to the copy problem, its own definition of death (which is just the end of identity), and its own idea of how to reach immortality.

 

So on the horizontal axis we have classes of solutions:

“Self" definition - corresponding identity definition - philosophical reality theory - criteria and question of identity - death and immortality definitions.

 

On the vertical axis, various theories of Self and identity are presented, from the most popular at the top to the less popular described below:

1) The group of theories which claim that a copy is not the original, because some kind of non-informational identity substrate exists. Possible substrates: the same atoms, qualia, a soul, or (most popular) continuity of consciousness. All of them require physicalism to be false. But some instruments for preserving identity could be built. For example, we could preserve the same atoms, or preserve the continuity of consciousness of some process, like the flame of a candle. But no valid arguments exist for any of these theories. In Parfit's terms this is numerical identity (being the same person). It answers the question "What will I experience in the next moment of time?"

2) The group of theories which claim that a copy is the original if it is informationally the same. The main question here is how much information is required for identity. Some theories obviously require too much information, like the positions of all atoms in the body being the same, and other theories obviously do not require enough, like just the DNA and the name.

3) The group of theories which see identity as a social phenomenon. My identity is defined by my location and by the ability of others to recognise me as me.

4) The group of theories which connect my identity with my ability to make plans for future actions. Identity is a meaningful part of a decision theory.

5)  Indirect definitions of self. This is a group of theories which define something with which self is strongly connected, but which is not self: the biological brain, space-time continuity, atoms, cells, or complexity. In this situation we say that we don't know what constitutes identity, but we know what it is directly connected with, and we could preserve that.

6) Identity as a sum of all its attributes, including name, documents, and recognition by other people. It is close to Leibniz’s definition of identity. Basically, it is a duck test: if it looks like a duck, swims like a duck, and quacks like a duck, then it is probably a duck. 

7) Human identity is something very different from the identity of other things or possible minds, as humans have evolved to have an idea of identity, a self-image, the ability to distinguish their own identity from the identity of others, and the ability to predict their own identity. So it is a complex adaptation which consists of many parts, and even if some parts are missing, they could be restored using other parts.

There is also a problem of legal identity and responsibility.

8)  Self-determination. “Self” controls identity, creating its own criteria of identity and declaring its nature. The main idea here is that the conscious mind can redefine its identity in the most useful way. It also includes the idea that self and identity evolve during differing stages of personal human evolution. 

9) Identity is meaningless. The popularity of this subset of ideas is growing. Zero-identity and open identity both belong to this subset. The main counter-argument here is that if we discard the idea of identity, future planning becomes impossible, and we will have to return to some kind of identity through the back door. The idea of identity also comes with the idea of the value of individuality. If we are replaceable like ants in an anthill, there are no identity problems. There is also no problem with murder.

 

The following is a series of even less popular theories of identity, some of which I just constructed ad hoc.

10)  Self is a subset of all thinking beings. We could see the space of all possible minds as divided into subsets, and call these subsets separate personalities.

11)  Non-binary definitions of identity.

The idea here is that binary me-or-not-me answers are too simple and give rise to all the logical problems. If we define identity continuously, as a number in the interval (0,1), we get rid of some paradoxes; the level of similarity, or the time until a given future stage, could be used as such a measure. Even a complex number could be used if we want to combine informational and continuity-based identity (in Parfit's sense).

12) Negative definitions of identity: we could try to say what is not me.

13) Identity as overlapping observer-moments.

14) Identity as a field of indexical uncertainty, that is, a group of observers to which I belong without knowing which one I am.

15) Conservative approach to identity. As we don't know what identity is, we should try to save as much as possible, and risk our identity only if doing so is the only means of survival. That means no copy/paste transportation to Mars for pleasure, but yes if it is the only chance to survive (this is my own position).

16)  Identity as individuality, i.e. uniqueness. If individuality doesn’t exist or doesn’t have any value, identity is not important.

17) Identity as a result of the ability to distinguish different people. Identity here is a property of perception.

18) Mathematical identity. Identity may be presented as a number sequence, where each number describes a full state of mind. Useful toy model.

19) Infinite identity. The main idea here is that any mind has a non-zero probability of becoming any other mind after a series of transformations. So only one identity exists across the whole space of all possible minds, but the expected time for me to become a given person is dramatically different in the case of future me (1 day) and a random person (10 to the power of 100 years). This theory also needs a special version of quantum immortality which resets the "memories" of a dying being to zero, resulting in something like reincarnation, or an infinitely repeating universe in the style of Nietzsche's eternal recurrence.

20) Identity in a multilevel simulation. As we probably live in a simulation, there is a chance that it is a multiplayer game in which one gamer has several avatars and can constantly have experiences through all of them. It is like one eye looking through several people.

21) Splitting identity. This is an idea that future identity could split into several (or infinitely many) streams. If we live in a quantum multiverse we split every second without any (perceived) problems. We are also adapted to have several future copies if we think about “me-tomorrow” and “me-the-day-after-tomorrow”.

 

This list shows only groups of identity definitions; many smaller ideas are included in the map.

The only rational choice I see is a conservative approach: acknowledging that we don't know the nature of identity, and trying in each situation to save as much as possible in order to preserve identity.

The pdf: http://immortality-roadmap.com/identityeng8.pdf

Open Thread, Aug. 15. - Aug 21. 2016

5 Elo 15 August 2016 12:26AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Irrationality Quotes August 2016

5 PhilGoetz 01 August 2016 07:12PM

Rationality quotes are self-explanatory.  Irrationality quotes often need some context and explication, so they would break the flow in Rationality Quotes.

[Link] Biofuels a climate mistake

4 morganism 09 October 2016 09:16PM

[Link] Six principles of a truth-friendly discourse

4 philh 08 October 2016 04:56PM

Open thread, Oct. 03 - Oct. 09, 2016

4 MrMind 03 October 2016 06:59AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] US tech giants found Partnership on AI to Benefit People and Society to ensure AI is developed safely and ethically

4 Gunnar_Zarncke 29 September 2016 08:39PM

[Link] Politics Is Upstream of AI

4 iceman 28 September 2016 09:47PM
