
[link] Bayesian inference with probabilistic population codes

8 Gunnar_Zarncke 13 May 2015 09:11PM

Bayesian inference with probabilistic population codes by Wei Ji Ma et al 2006

Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes’ rule. At first sight, it would seem that the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poisson-like variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and for realistic neuronal variability.

Note that "humans perform near-optimal Bayesian inference" refers to the integration of information - not conscious symbolic reasoning. Nonetheless, I think this is of interest here.

Astronomy, space exploration and the Great Filter

23 JoshuaZ 19 April 2015 07:26PM

Astronomical research has what may be an under-appreciated role in helping us understand and possibly avoid the Great Filter. This post will examine how astronomy may be helpful for identifying potential future filters. The primary upshot is that we may have an advantage due to our somewhat late arrival: if we can observe what other civilizations have done wrong, we can get a leg up.

This post is not arguing that colonization is a route to remove some existential risks. There is no question that colonization would reduce the risk of many forms of Filters, but the vast majority of astronomical work has no substantial connection to colonization. Moreover, the case for colonization has already been made strongly by many others, such as Robert Zubrin's book "The Case for Mars" or this essay by Nick Bostrom.

Note: those already familiar with the Great Filter and proposed explanations may wish to skip to the section "How can we substantially improve astronomy in the short to medium term?"


What is the Great Filter?

There is a worrying lack of signs of intelligent life in the universe. The only intelligent life we have detected has been that on Earth. While planets are apparently numerous, there have been no signs of other life. There are three lines of evidence we would expect to see if civilizations were common in the universe: radio signals, direct contact, and large-scale constructions. The first two are well-known, but the most serious problem arises from the lack of large-scale constructions: as far as we can tell, the universe looks natural. The vast majority of matter and energy in the universe appears to be unused. The Great Filter is one possible explanation for this lack of life, namely that some phenomenon prevents intelligent life from passing into the interstellar, large-scale phase. Variants of the idea have been floating around for a long time; the term was coined by Robin Hanson in this essay. There are two fundamental versions of the Filter: filtration that has occurred in our past, and filtration that will occur in our future. For obvious reasons the second of the two is more of a concern. Moreover, as our technological level increases, the chance that we are approaching the last point of serious filtration increases, since once a civilization has spread out to multiple stars, filtration becomes much more difficult.

Evidence for the Great Filter and alternative explanations:

Since Hanson's essay, there have been only two major updates to the evidence bearing on the Filter, both from the last few years:

First, we have confirmed that planets are very common, so a lack of Earth-sized planets, or of planets in the habitable zone, is not likely to be a major filter.

Second, we have found that planet formation occurred early in the universe. (For example, see this article about this paper.) Early planet formation weakens the common explanation of the Fermi paradox that some species had to be the first intelligent species and that we are simply the lucky ones. Early planet formation, along with the apparent speed at which life arose on Earth after the heavy bombardment ended, as well as the apparent speed with which complex life developed from simple life, strongly refutes this explanation. The response has been made that early filtration may be so common that if life does not arise early in its star's lifespan, it will have no chance to reach civilization. However, if this were the case, we'd expect to have found ourselves orbiting a longer-lived star such as a red dwarf. Red dwarfs are more common than Sun-like stars and have lifespans that are longer by multiple orders of magnitude. While attempts to understand the habitable zone of red dwarfs are still ongoing, the current consensus is that many red dwarfs have habitable planets.

These two observations, together with further evidence that the universe looks natural, make future filtration seem more likely. If advanced civilizations existed, we would expect them to make use of the large amounts of matter and energy available, and we see no signs of such use. We've seen no indication of ring-worlds, Dyson spheres, or other megascale engineering projects. While such searches have so far been confined to within around 300 parsecs, and some candidates were hard to rule out, if a substantial fraction of stars in a galaxy had Dyson spheres or swarms we would notice the unusually strong infrared emission. Note that this sort of evidence is distinct from arguments about contact or about detecting radio signals. There is a very recent proposal for mini-Dyson spheres around white dwarfs, which would be much easier to engineer and harder to detect; but they would not reduce the desirability of other large-scale structures, and they would likely be detectable if a large number of them were present in a small region. One recent study looked for large-scale modifications to the radiation profiles of galaxies of the sort that large-scale civilizations should produce. It examined 100,000 galaxies and found no major sign of technologically advanced civilizations (for more detail see here).

We will not discuss all possible rebuttals to the case for a Great Filter, but we will note some of the more interesting ones:

There have been attempts to argue that the universe only became habitable recently. There are two primary avenues for this argument. First, there is the point that early stars had very low metallicity (that is, low concentrations of elements other than hydrogen and helium), and thus the early universe would have had too few metals for complex life. The presence of old rocky planets makes this argument less viable, and in any case it only covers the first few billion years of the universe's history. Second, there is an argument that until recently galaxies were more likely to have frequent gamma-ray bursts, in which case life would have been wiped out too frequently to evolve in a complex fashion. However, even the strongest version of this argument still leaves billions of years of time unexplained.

There have been attempts to argue that space travel may be very difficult. For example, Geoffrey Landis proposed that a percolation model, together with the idea that interstellar travel is very difficult, may explain the apparent rarity of large-scale civilizations. However, at this point there is no strong reason to think that interstellar travel is so difficult as to limit colonization to that extent. Moreover, the discoveries of the last 20 years that brown dwarfs are very common and that most stars have planets are evidence in the opposite direction: brown dwarfs and common planets would make travel easier because there are more potential refueling and resupply locations, even if they are not used for full colonization. Others have argued that even without such considerations, colonization should not be that difficult. Moreover, if colonization is difficult and civilizations end up restricted to small numbers of nearby stars, then it becomes more, not less, likely that civilizations will attempt the large-scale engineering projects that we would notice.

Another possibility is that we are overestimating the long-run growth rate of the resources used by civilizations. Extrapolating current growth makes it plausible that large-scale projects and endeavors will occur, but if growth instead involves long periods of slowdown or stagnation rather than continual exponential or near-exponential growth, very energy-intensive projects like colonization become substantially more difficult. This cannot be ruled out, but even if growth continues at only a slightly higher than linear rate, the energy expenditures available in a few thousand years will still be very large.

Another proposed possibility is a variant of the simulation hypothesis: the idea that we exist in a simulated reality. The most common variant of this in a Great Filter context suggests that we are in an ancestor simulation, that is, a simulation run by the future descendants of humanity of what early humans would have been like.

The simulation hypothesis runs into serious problems, both in general and as an explanation of the Great Filter in particular. First, if our understanding of the laws of physics is approximately correct, then there are strong restrictions on what computations can be done with a given amount of resources. For example, BQP, the set of problems which can be solved efficiently by quantum computers, is contained in PSPACE, the set of problems which can be solved with a polynomial amount of space and no time limit. Thus the resources needed for a detailed simulation would likely be large: even a close-to-classical simulation would still need roughly as many resources as the system being simulated. There are other results, such as Holevo's theorem, which place similar restrictions. The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself. There may be ways of getting around this: for example, consider a simulator interested primarily in what life on Earth is doing. The simulation would not need to model in detail the inside of planet Earth and other large bodies in the solar system. However, even then, the resources involved would be very large.

The primary problem with the simulation hypothesis as an explanation is that it requires humanity's descendants to have actually passed through the Great Filter and to have found their own success sufficiently unlikely that they devoted large amounts of resources to finding out how they managed to survive. Moreover, there are strong limits on how accurately one can reconstruct any given quantum state, which means an ancestor simulation will be at best a rough approximation. In this context, while there are interesting anthropic considerations, it is more likely that the simulation hypothesis is wishful thinking.

Variants of the "Prime Directive" have also been proposed. The essential idea is that advanced civilizations would deliberately avoid interacting with less advanced civilizations. This hypothesis runs into two serious problems: first, it does not explain the apparent naturalness, only the lack of direct contact by alien life. Second, it assumes a solution to a massive coordination problem between multiple species with potentially radically different ethical systems. In a similar vein, Hanson in his original essay on the Great Filter raised the possibility of a single very early species with some form of faster-than-light travel and a commitment to keeping the universe close to natural-looking. Since all proposed forms of faster-than-light travel are highly speculative and would involve causality violations, this hypothesis cannot be assigned a substantial probability.

People have also suggested that civilizations move outside galaxies to the cold of space, where they can do efficient reversible computing using cold dark matter. Jacob Cannell has been one of the most vocal proponents of this idea. This hypothesis suffers from at least three problems. First, it fails to explain why those entities have not also used conventional matter to any substantial extent. Second, it would require either dark matter composed of cold conventional matter (which at this point seems to be only a small fraction of all dark matter), or dark matter which interacts with itself through some force other than gravity; while there is some evidence for such interaction, it is, at this point, slim. Third, even if some species had taken over a large fraction of dark matter for its own computations, one would then expect later species to use the conventional matter, since they would not have the option of using the now-monopolized dark matter.

Other exotic non-Filter explanations have been proposed but they suffer from similar or even more severe flaws.

It is possible that future information will change this situation. One of the more plausible explanations of the Great Filter is that there is no single Great Filter in the past but rather a large number of small filters which together drastically filter out civilizations. The evidence for this viewpoint is currently slim, but there is some possibility that astronomy can help answer the question.

For example, one commonly cited aspect of past filtration is the origin of life. There are at least three locations, other than Earth, where life could have formed: Europa, Titan, and Mars. Finding life on one, or all, of them would be a strong indication that the origin of life is not the filter. Similarly, while it is highly unlikely that Mars has multicellular life, finding such life would indicate that the development of multicellular life is not the filter. However, none of these bodies is nearly as hospitable as Earth, so determining whether there is life will require substantial use of probes. We might also look for signs of life in the atmospheres of extrasolar planets, which would require substantially more advanced telescopes.

Another possible early filter is that planets like Earth frequently get locked into a "snowball" state which they have difficulty exiting. This is an unlikely filter, since Earth has likely been in near-snowball conditions multiple times: once very early on, during the Huronian glaciation, and again about 650 million years ago. This is an example of an early partial filter where astronomical observation may help in finding evidence. The snowball Earth filter does have one strong virtue: if many planets never escape a snowball state, that would help explain why we are not around a red dwarf, since planets do not escape their snowball state unless their home star is somewhat variable, and red dwarfs are too stable.

It should be clear that none of these explanations is satisfactory, and thus we must take seriously the possibility of future Filtration.

How can we substantially improve astronomy in the short to medium term?

Before we examine the potential for further astronomical research to help us understand a future filter, we should note that there are many avenues by which we can improve our astronomical instruments. The most basic is simply to make better conventional optical, near-optical, and radio telescopes. That work is ongoing; examples include the European Extremely Large Telescope and the Thirty Meter Telescope. Unfortunately, increasing the size of ground-based telescopes, especially the size of the aperture, is running into substantial engineering challenges. However, in the last 30 years the advent of adaptive optics, speckle imaging, and other techniques has substantially increased the resolution of ground-based optical and near-optical telescopes. At the same time, improved data processing and related methods have improved radio telescopes. Already, optical and near-optical telescopes have advanced to the point where we can gain information about the atmospheres of extrasolar planets, although we cannot yet do so for rocky planets.

Increasingly, the highest resolution comes from space-based telescopes. Space-based telescopes also allow one to gather information from types of radiation which are blocked by the Earth's atmosphere or magnetosphere; two important examples are x-ray telescopes and gamma-ray telescopes. Space-based telescopes also avoid many of the issues the atmosphere creates for optical telescopes. Hubble is the most striking example, but from the standpoint of the Great Filter, the most relevant space telescope (and the most relevant instrument in general for all Great Filter-related astronomy) is the planet-detecting Kepler spacecraft, which is responsible for most of the identified planets.

Another type of instrument is the neutrino detector. Neutrino detectors are generally very large bodies of a transparent material (usually water) kept deep underground so that minimal amounts of light and cosmic rays hit the device. Neutrinos are then detected when they interact with a particle in the detector, producing a flash of light. In the last few years, improvements in optics, increases in the scale of the detectors, and the development of detectors like IceCube, which use naturally occurring water and ice, have drastically increased the sensitivity of neutrino detectors.

There are proposals for larger-scale, more innovative telescope designs, but they are all highly speculative. For example, on the ground-based optical front, there has been a suggestion to make liquid mirror telescopes with ferrofluid mirrors, which would give the advantages of liquid mirror telescopes while allowing adaptive optics, which can normally only be applied to solid mirror telescopes. An example of a potential space-based telescope is the Aragoscope, which would take advantage of diffraction to make a space-based optical telescope with a resolution at least an order of magnitude greater than Hubble's. Other examples include placing telescopes very far apart in the solar system to create what is effectively a telescope with a very large aperture. The most ambitious and speculative of such proposals involve projects so advanced and large-scale that one might as well presume they will only happen if we have already passed through the Great Filter.

 

What are the major identified potential future contributions to the Filter, and what can astronomy tell us?

Natural threats: 

One threat type where more astronomical observation can help is natural threats, such as asteroid collisions, supernovas, gamma-ray bursts, rogue high-gravity bodies, and as yet unidentified astronomical threats. Careful mapping of asteroids and comets is ongoing and requires continued funding more than any intrinsic improvements in technology. Right now, most of our mapping looks at objects at or near the plane of the ecliptic, so some focus off the plane may be helpful. Unfortunately, there is very little money to actually deal with such problems if they arise. It might be possible to have a few wealthy individuals agree to set up accounts in escrow which would be used if an asteroid or similar threat arose.

Supernovas are unlikely to be a serious threat at this time. There are some stars close to our solar system which are large enough that they will go supernova. Betelgeuse is the most famous of these, with a supernova projected to occur sometime in the next 100,000 years. However, at its current distance, Betelgeuse is unlikely to pose much of a problem unless our models of supernovas are very far off. More conventional observations of supernovas are needed to understand this better, and improved neutrino observations will also help, but right now supernovas do not seem to be a large risk. Gamma-ray bursts are in a situation similar to supernovas. Note also that if an imminent gamma-ray burst or supernova were likely to occur, there is very little we could do about it at present. In general, back-of-the-envelope calculations establish that supernovas are highly unlikely to be a substantial part of the Great Filter.
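To give a sense of what such a back-of-the-envelope calculation looks like, here is a rough sketch. Every input is an assumption chosen only for order-of-magnitude purposes (a galactic supernova rate of about two per century, a uniform stellar disk, and an assumed "danger radius"); none of it should be read as a precise astronomical estimate.

```python
import math

# Rough, assumed inputs (order-of-magnitude only).
supernovae_per_year = 0.02       # ~2 galactic supernovae per century
galaxy_radius_ly = 50_000        # rough disk radius, light years
galaxy_thickness_ly = 1_000      # rough disk thickness, light years
danger_radius_ly = 30            # assumed distance inside which a supernova is dangerous

disk_volume = math.pi * galaxy_radius_ly**2 * galaxy_thickness_ly
danger_volume = (4 / 3) * math.pi * danger_radius_ly**3

# Assume supernovae are spread uniformly through the disk.
rate_nearby = supernovae_per_year * danger_volume / disk_volume
print(f"Expected dangerous supernovae per year: {rate_nearby:.1e}")
print(f"Mean wait between dangerous events: {1 / rate_nearby:.1e} years")
```

With these assumptions the mean wait between dangerous events comes out in the billions of years, which is why supernovas look like a poor candidate for a substantial part of the Filter.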

Rogue planets, brown dwarfs, and other compact high-gravity bodies such as wandering black holes can be detected, and further improvements will allow faster detection. However, the scale of havoc created by such events is such that it is not at all clear that detection would help: the entire planetary nuclear arsenal would not even begin to move such an object's orbit to any substantial extent.

Note also that it is unlikely that natural events are a large fraction of the Great Filter. Unlike most of the other threat types, this is one where radio astronomy and neutrino information may be more likely to identify problems.

Biological threats: 

Biological threats take two primary forms: pandemics and deliberately engineered diseases. The first is more likely than one might naively expect to be a serious contribution to the filter, since modern transport allows infected individuals to move quickly and come into contact with a large number of people. For example, trucking has been a major cause of the spread of HIV in Africa, and it is likely that the recent Ebola epidemic had similar contributing factors. Moreover, keeping chickens and other animals in very large quantities in dense areas near human populations makes it easier for novel variants of viruses to jump species. Astronomy does not seem to provide any relevant assistance here; the only plausible way of getting such information would be to see other species that were destroyed by disease, and even with telescope resolution improved by many orders of magnitude, this is not doable.

Nuclear exchange:

For reasons similar to those in the biological threats category, astronomy is unlikely to help us detect whether nuclear war is a substantial part of the Filter. It is possible that more advanced telescopes could detect an extremely large nuclear detonation if it occurred in a very nearby star system. Next-generation telescopes may be able to detect a nearby planet's advanced civilization purely from the light it gives off, and a sufficiently large detonation would be of a comparable light level. However, such a device would be multiple orders of magnitude larger than the largest current nuclear devices. Moreover, if a telescope were not looking at exactly the right moment, it would not see anything at all, and the probability that another civilization wipes itself out at just the instant we are looking is vanishingly small.

Unexpected physics: 

This category is one of the most difficult to discuss because it is so open-ended. The most common examples people point to involve high-energy physics. Aside from theoretical considerations, cosmic rays of very high energy are continually hitting the upper atmosphere, frequently with energies multiple orders of magnitude higher than those of the particles in our accelerators. Thus high-energy events seem unlikely to be a cause of any serious filtration unless or until humans develop particle accelerators whose energy is orders of magnitude higher than that of most cosmic rays. Cosmic rays with energies beyond what is known as the GZK limit are rare. We have observed occasional particles beyond the GZK limit, but they are rare enough that we cannot rule out a risk from many collisions involving such high-energy particles in a small region. Since our best accelerators are nowhere near the GZK limit, this is not an immediate problem.

There is an argument that if we should worry about unexpected physics at all, it is on the very low-energy end. In particular, humans have managed to make objects substantially colder than the roughly 3 K cosmic background temperature, with temperatures on the order of 10⁻⁹ K. There is an argument that, because nature offers no prior examples of this, the chance that something can go badly wrong should be estimated as higher than one might otherwise think (see here). While this particular class of scenario seems unlikely, it does illustrate that it may not be obvious which situations could bring unexpected, novel physics into play. Moreover, while the flashy, expensive particle accelerators get attention, they may not be a serious source of danger compared to other physics experiments.

Three of the more plausible catastrophic unexpected-physics scenarios involving high-energy events are false vacuum collapse, black hole formation, and the formation of strange matter that is more stable than regular matter.

False vacuum collapse would occur if our universe is not in its true lowest energy state and some event causes it to transition to the true lowest state (or just to a lower state). Such an event would almost certainly be fatal for all life. False vacuum collapses cannot be guarded against by astronomical observation, since once initiated they would expand at the speed of light. Note that the indiscriminately destructive nature of false vacuum collapses makes them an unlikely filter: if false vacuum collapse were easy to trigger, we would not expect to see any life this late in the universe's lifespan, since there would have been a large number of prior opportunities for a collapse. Essentially, we would not expect to find ourselves this late in a universe's history if this universe could easily undergo a false vacuum collapse. While false vacuum collapses and similar problems raise issues of observer selection effects, careful work has been done to estimate their probability.

People have mentioned the idea of an event similar to a false vacuum collapse but which propagates at a speed slower than light. Greg Egan used it as a major premise in his novel Schild's Ladder. I am not aware of any reason to believe such events are at all plausible; the primary motivation seems to be the interesting literary scenarios which arise, rather than any scientific considerations. If such an event can occur, then it is possible that we could detect it using astronomical methods. In particular, if the wave-front of the event is fast enough to reach the nearest star or nearby stars, we might notice odd behavior by that star or group of stars. We can be confident that no such event has a speed much beyond a few hundredths of the speed of light, or we would already notice galaxies behaving abnormally. There is only a very narrow range where such expansions could be quick enough to devastate the planet they arise on but too slow to reach the parent star in a reasonable amount of time. For example, the distance from the Earth to the Sun is on the order of 10,000 times the diameter of the Earth, so any event which expanded to destroy the Earth would reach the Sun in about 10,000 times as long as it took to engulf the Earth. Thus, to destroy its home planet but not reach the parent star, the expansion would need to be extremely slow.
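Here is that arithmetic with round numbers (the figures below are rounded assumptions, not precise values):

```python
earth_diameter_km = 1.3e4        # Earth's diameter, roughly 12,700 km
earth_sun_distance_km = 1.5e8    # roughly 1 AU

ratio = earth_sun_distance_km / earth_diameter_km
print(f"The Sun is roughly {ratio:,.0f} Earth-diameters away")

# A hypothetical sub-light "collapse front" that took one year to engulf the
# Earth would therefore take roughly that many years to reach the Sun.
print(f"Years to reach the Sun per year spent crossing the Earth: ~{ratio:,.0f}")
```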

The creation of artificial black holes is unlikely to be a substantial part of the filter: we expect small black holes to quickly pop out of existence due to Hawking radiation. Even if a black hole does form and persist, it is likely to fall quickly to the center of the planet and eat matter very slowly, over a timescale that does not make it a serious threat. However, it is possible that such black holes would not evaporate; the fact that we have not detected the evaporation of any primordial black holes is weak evidence that the behavior of small black holes is not well understood. It is also possible that such a hole would eat much faster than we expect, but this does not seem likely. If this is a major part of the filter, then better telescopes should be able to detect it by finding very dark objects with the approximate mass and orbit of habitable planets. We may also be able to detect such black holes through other observations, such as their gamma or radio signatures.
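For a rough sense of the timescales, the standard semiclassical formula for the Hawking evaporation time of a black hole of mass M is t = 5120πG²M³/(ħc⁴). The sketch below evaluates it at a few arbitrary illustrative masses; note that the semiclassical formula is exactly what is in question if small black holes do not behave as expected, so this is a consistency check, not a guarantee.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J*s
c = 2.998e8       # speed of light, m/s

def evaporation_time_seconds(mass_kg):
    # Semiclassical Hawking evaporation time for a black hole of this mass.
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

for mass in (1.0, 1e6, 1e12):  # 1 kg, a thousand tonnes, a small mountain
    print(f"{mass:.0e} kg -> {evaporation_time_seconds(mass):.2e} s")
```

On these numbers, a one-kilogram hole vanishes in a tiny fraction of a second, while a mountain-mass hole would outlast the current age of the universe.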

The conversion of regular matter into strange matter, unlike a false vacuum collapse or similar event, might be naturally limited to the planet where the conversion started. In that case, the only hope for observation would be to notice planets formed of strange matter and changes in the behavior of their light. Without actual samples of strange matter, this may be very difficult to do unless we simply treat abnormal-looking planets as suggestive evidence; without substantially better telescopes and a good idea of the normal range for rocky planets, this would be tough. On the other hand, neutron stars which have been converted into strange matter may be more easily detectable.

Global warming and related damage to biosphere: 

Astronomy is unlikely to help here. It is possible that climates are more sensitive than we realize and that comparatively small changes can result in Venus-like situations. This seems unlikely given the general level of variation over human history and the fact that current geological models strongly suggest that any substantial problem would eventually correct itself. But if we saw many Venus-like planets in the middle of their stars' habitable zones, this would be a reason to worry; note that this would require an ability to analyze planetary atmospheres well beyond current capability. Even if it is possible to Venus-ify a planet, it is not clear that the Venusification would last long, so there may be very few planets in this state at any given time. Stars become brighter as they age, so high greenhouse gas levels have more of an impact on climate when the parent star is old. If civilizations are more likely to arise late in their home star's lifespan, global warming becomes a more plausible filter, but even given such considerations, global warming does not seem to be sufficient as a filter. It is also possible that the filter is not global warming by itself but rather general disruption of the biosphere, including, possibly for some species, global warming, reduction in species diversity, and other problems. There is some evidence that human behavior is collectively causing enough damage to leave an unstable biosphere.

A change in overall planetary temperature of 10 °C would likely be enough to collapse civilization without leaving any signal observable to a telescope. Similarly, substantial disruption to a biosphere may be very unlikely to be detected.

Artificial intelligence

AI is a complicated existential risk from the standpoint of the Great Filter. AI is not likely to be the Great Filter if one considers simply the Fermi paradox. The essential problem has been brought up independently by a few people. (See for example Katja Grace's remark here and my blog here.) The central issue is that if an AI takes over, it is likely to attempt to control all resources in its future light-cone; and if the AI spreads out at a substantial fraction of the speed of light, we would notice the result. The argument has been made that we would not see such an AI if it expanded its radius of control at very close to the speed of light, but this requires expansion at 99% of the speed of light or greater. It is highly questionable that velocities above 99% of the speed of light are practically possible, due to collisions with the interstellar medium and the need to slow down if one is going to use the resources in a given star system. Another objection is that an AI may expand at a large fraction of light speed but do so stealthily. It is not likely that all AIs would favor stealth over speed. Moreover, one must then ask what happens when multiple slowly expanding, stealthy AIs run into each other; it is likely that such events would have results catastrophic enough to be visible even with comparatively primitive telescopes.

While these astronomical considerations make AI unlikely to be the Great Filter, it is important to note that if the Great Filter is largely in our past, then these considerations do not apply. Thus, any discovery which pushes more of the filter into the past makes AI a larger fraction of total expected existential risk, since the absence of observable AI becomes much weaker evidence against strong AI if there are no major civilizations out there to hatch such intelligence explosions.

Note also that AI as a risk cannot be discounted if one assigns a high probability to existential risk based on non-Fermi concerns, such as the Doomsday Argument.

Resource depletion:

Astronomy is unlikely to provide direct help here, for reasons similar to the problems with nuclear exchange, biological threats, and global warming. This connects to the problem of civilization bootstrapping: to get to our current technology level, we used a large number of non-renewable resources, especially energy sources. On the other hand, large amounts of difficult-to-mine-and-refine resources (especially aluminum and titanium) will be much more accessible to a future civilization. While large amounts of accessible fossil fuels remain, the technology required to obtain the deeper sources is substantially more advanced than that needed for the relatively easy-to-access oil and coal, and the energy return rate (how much energy one gets out for the energy one puts in) is lower. Nick Bostrom has raised the possibility that the depletion of easy-to-access resources may contribute to civilization-collapsing problems that, while not full-scale existential risks by themselves, prevent civilization from recovering. Others have begun to investigate the problem of rebuilding without fossil fuels, such as here.

Resource depletion is unlikely to be the Great Filter, because small changes to human behavior in the 1970s would have drastically reduced the current resource problems. Resource depletion may contribute to existential threats to humans if it leads to societal collapse or global nuclear exchange, or motivates riskier experimentation. Resource depletion may also combine with other risks, such as global warming, where the combined problems may be much greater than either individually. However, there is a risk that large-scale use of resources for astronomy research would itself contribute to the resource depletion problem.

Nanotechnology: 

Nanotechnology disasters are one of the situations where astronomical considerations could plausibly be useful. In particular, planets which are in the habitable zone but have highly artificial and inhospitable atmospheres and surfaces could plausibly be visible. For example, if a planet's surface were transformed into diamond, telescopes not much more advanced than our current ones could detect that surface. It should also be noted that, at this point, many nanotechnologists consider the classic "grey goo" scenario to be highly unlikely; see, for example, Chris Phoenix's comment here. However, catastrophic replicator events that cause enough damage to the biosphere without grey-gooing everything are a possibility, and it is unclear whether we would detect such events.

Aliens:

Hostile aliens are a common explanation offered for the Great Filter when people first find out about it. However, this idea comes more from science fiction than from any plausible argument. In particular, if a single hostile alien civilization were wiping out or drastically curtailing other civilizations, one would still expect that civilization to make use of the available resources after a long enough time. One could posit aliens who also have a religious or ideological ideal of leaving the universe looking natural, but this is an unlikely, speculative hypothesis that also requires them to dominate a massive region: not just a handful of galaxies but many.

Note also that astronomical observations might be able to detect the results of extremely powerful weapons, but any conclusions would be highly speculative. Moreover, it is not clear that knowing about such a threat would allow us to substantially mitigate it.

Other/Unknown:

Unknown risks are by nature very difficult to estimate. However, there is an argument that we should expect that the Great Filter is an unknown risk, and is something so unexpected that no civilization gets sufficient warning.  This is one of the easiest ways for the filter to be truly difficult to prevent. In that context, any information we can possibly get about other civilizations and what happened to them would be a major leg-up.
 

Conclusions 


Astronomical observations have the potential to give us data about the Great Filter, but many potential filters will leave no observable astronomical evidence unless one's astronomical capability is so advanced that one has likely already passed all major filters. Therefore, one potential strategy to pass the Great Filter is to drastically increase our astronomical capability, to a level that a pre-Filter civilization would be highly unlikely to reach. Together with our comparatively late arrival, this might allow us to actually detect failed civilizations that did not survive the Great Filter and see what they did wrong.

Unfortunately, it is not clear how cost-effective this sort of increase in astronomy would be compared with other uses aimed at mitigating existential risk. It may be more useful to focus on moving resources within astronomy into those areas most relevant to understanding the Great Filter.

Musings on the LSAT: "Reasoning Training" and Neuroplasticity

4 Natha 22 November 2014 07:14PM

The purpose of this post is to provide basic information about the LSAT, including the format of the test and a few sample questions. I also wanted to bring to light some research that has found LSAT preparation to alter brain structure in ways that strengthen hypothesized "reasoning pathways". These studies have not been discussed here before; I thought they were interesting and really just wanted to call your collective attention to them.

I really like taking tests; I get energized by intense race-against-the-clock problem solving and, for better or worse, I relish getting to see my standing relative to others when the dust settles. I like the purity of the testing situation -- how conditions are standardized in theory and more or less the same for all comers. This guilty pleasure has played no small part in the course my life has taken: I worked as a test prep tutor for 3 years and loved every minute of it, I met my wife through academic competitions in high school, and I am currently a graduate student doing lots of coursework in psychometrics.

Well, my brother-in-law is a lawyer, and when we chat, the LSAT has served as some conversational common ground. Since I like taking tests for fun, he suggested I give it a whirl, because he thought it was interesting and felt it was a fair assessment of one's logical reasoning ability. So I did: I took a practice test cold a couple of Saturdays ago, and I was very impressed. Here's the one I took. (This is a full practice exam provided by the test-makers; it's also about the top Google result for "LSAT practice test".) I wanted to post here about it because the LSAT hasn't been discussed very much on this site, and I thought that some of you might find it useful to know about.

A brief run-down of the LSAT:

The test has four parts: two Logical Reasoning sections, a Critical Reading section (akin to the SAT and similar tests), and an Analytical Reasoning, or "logic games", section. Usually when people talk about the LSAT, the logic games get emphasized because they are unusual and can be pretty challenging (the only questions I missed were of this type; I missed a few and I ran out of time). Essentially, you get a premise and a bunch of conditions from which you are required to draw conclusions. Here's an example:

A cruise line is scheduling seven week-long voyages for the ship Freedom. Each voyage will occur in exactly one of the first seven weeks of the season: weeks 1 through 7. Each voyage will be to exactly one of four destinations: Guadeloupe, Jamaica, Martinique, or Trinidad. Each destination will be scheduled for at least one of the weeks. The following conditions apply:

Jamaica will not be its destination in week 4.
Trinidad will be its destination in week 7.
Freedom will make exactly two voyages to Martinique, and at least one voyage to Guadeloupe will occur in some week between those two voyages.
Guadeloupe will be its destination in the week preceding any voyage it makes to Jamaica.
No destination will be scheduled for consecutive weeks.

11. Which of the following is an acceptable schedule of destinations, in order from week 1 through week 7?

(A) Guadeloupe, Jamaica, Martinique, Trinidad, Guadeloupe, Martinique, Trinidad
(B) Guadeloupe, Martinique, Trinidad, Martinique, Guadeloupe, Jamaica, Trinidad
(C) Jamaica, Martinique, Guadeloupe, Martinique, Guadeloupe, Jamaica, Trinidad
(D) Martinique, Trinidad, Guadeloupe, Jamaica, Martinique, Guadeloupe, Trinidad
(E) Martinique, Trinidad, Guadeloupe, Trinidad, Guadeloupe, Jamaica, Martinique
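
If you want to check an answer mechanically rather than by hand, the conditions are simple enough to brute-force. Here is a minimal sketch in Python (the data structures and names are mine, not part of the test); running it should show that exactly one of the five choices satisfies every condition.

```python
# Sketch: test each answer choice against the stated conditions.
CHOICES = {
    "A": ["G", "J", "M", "T", "G", "M", "T"],
    "B": ["G", "M", "T", "M", "G", "J", "T"],
    "C": ["J", "M", "G", "M", "G", "J", "T"],
    "D": ["M", "T", "G", "J", "M", "G", "T"],
    "E": ["M", "T", "G", "T", "G", "J", "M"],
}

def acceptable(schedule):
    weeks = {i + 1: dest for i, dest in enumerate(schedule)}
    # Each destination is scheduled at least once.
    if set(schedule) != {"G", "J", "M", "T"}:
        return False
    # Jamaica is not the destination in week 4.
    if weeks[4] == "J":
        return False
    # Trinidad is the destination in week 7.
    if weeks[7] != "T":
        return False
    # Exactly two Martinique voyages, with a Guadeloupe voyage between them.
    m_weeks = [w for w, d in weeks.items() if d == "M"]
    if len(m_weeks) != 2:
        return False
    if not any(weeks[w] == "G" for w in range(min(m_weeks) + 1, max(m_weeks))):
        return False
    # Guadeloupe precedes any voyage to Jamaica.
    if any(d == "J" and weeks.get(w - 1) != "G" for w, d in weeks.items()):
        return False
    # No destination is scheduled for consecutive weeks.
    if any(schedule[i] == schedule[i + 1] for i in range(6)):
        return False
    return True

for label, schedule in CHOICES.items():
    print(label, acceptable(schedule))
```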


Clearly, this section places a huge burden on working memory and is probably the most g-loaded of the four. I'd guess that most LSAT test prep is about strategies for dumping this burden into some kind of written scheme that makes it all more manageable. But I just wanted to show you the logic games for completeness; what I was really excited by were the Logical Reasoning questions (sections II and III). You are presented with some scenario containing a claim, an argument, or a set of facts, and then asked to analyze, critique, or draw correct conclusions. Here are most of the question stems used in these sections:

Which one of the following most accurately expresses the main conclusion of the economist’s argument?
Which one of the following uses flawed reasoning that most closely resembles the flawed reasoning in the argument?
Which one of the following most logically completes the argument?
The reasoning in the consumer’s argument is most vulnerable to criticism on the grounds that the argument...
The argument’s conclusion follows logically if which one of the following is assumed?
Which one of the following is an assumption required by the argument?


Heyo! This is exactly the kind of stuff I would like to become better at! Most of the questions were pretty straightforward, but the LSAT is known to be a tough test (score range: 120-180, 95th %ile: ~167, 99th %ile: ~172) and these practice questions probably aren't representative. What a cool test though! Here's a whole question from this section, superficially about utilitarianism:

3. Philosopher: An action is morally right if it would be reasonably expected to increase the aggregate well-being of the people affected by it. An action is morally wrong if and only if it would be reasonably expected to reduce the aggregate well-being of the people affected by it. Thus, actions that would be reasonably expected to leave unchanged the aggregate well-being of the people affected by them are also right.

The philosopher’s conclusion follows logically if which one of the following is assumed?

(A) Only wrong actions would be reasonably expected to reduce the aggregate well-being of the people affected by them.
(B) No action is both right and wrong.
(C) Any action that is not morally wrong is morally right.
(D) There are actions that would be reasonably expected to leave unchanged the aggregate well-being of the people affected by them.
(E) Only right actions have good consequences.
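
For questions like this, it can help to treat the premises as constraints and check by brute force whether adding a candidate assumption forces the conclusion. Here is a minimal Python sketch (the function names and encoding are mine, purely illustrative); it considers an action whose expected effect on aggregate well-being is zero and tests whether "the action is right" follows from the premises alone and from the premises plus choice (C).

```python
from itertools import product

def consistent_with_premises(right, wrong, delta=0):
    # Premise 1: if the action is expected to increase well-being, it is right.
    if delta > 0 and not right:
        return False
    # Premise 2: the action is wrong iff it is expected to reduce well-being.
    if wrong != (delta < 0):
        return False
    return True

def conclusion_follows(extra_assumption):
    # The conclusion "the action is right" follows only if every assignment
    # consistent with the premises and the extra assumption makes it right.
    for right, wrong in product([True, False], repeat=2):
        if consistent_with_premises(right, wrong) and extra_assumption(right, wrong):
            if not right:
                return False
    return True

no_assumption = lambda right, wrong: True
choice_c = lambda right, wrong: right or wrong   # "not wrong implies right"

print(conclusion_follows(no_assumption))  # False: the premises alone leave it open
print(conclusion_follows(choice_c))       # True: adding (C) forces the conclusion
```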


Also, the LSAT is a good test, in that it measures well one's ability to succeed in law school. Validity studies boast that “LSAT score alone continues to be a better predictor of law school performance than UGPA [undergraduate GPA] alone.” Of course, the outcome variable can be regressed on both predictors together, accounting for more of the variance than either one taken singly, but it is uncommon for a standardized test to beat prior GPA in predicting a student's future GPA.
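
The point about combining predictors is easy to see with a toy example. The sketch below uses entirely synthetic data (the coefficients, means, and noise levels are invented for illustration and have nothing to do with LSAC's actual validity data); it just demonstrates that two correlated predictors can jointly account for more variance than either alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
lsat = rng.normal(155, 8, n)                          # hypothetical LSAT scores
ugpa = 0.02 * (lsat - 155) + rng.normal(3.3, 0.3, n)  # correlated with LSAT
fygpa = 0.03 * (lsat - 155) + 0.4 * (ugpa - 3.3) + rng.normal(3.0, 0.3, n)

def r_squared(predictors, y):
    # Ordinary least squares with an intercept; returns the R^2 of the fit.
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("LSAT alone:", round(r_squared([lsat], fygpa), 3))
print("UGPA alone:", round(r_squared([ugpa], fygpa), 3))
print("Both:      ", round(r_squared([lsat, ugpa], fygpa), 3))
```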

 

Intensive LSAT preparation and neuroplasticity:

In two recent studies (same research team), learning to reason in the logically formal way required by the LSAT was found to alter brain structure in ways consistent with literature reviews of the neural correlates of logical reasoning. Note: my reading of these articles was pretty surface-level; I do not intend to provide a thorough review, only to bring them to your attention.

These researchers recruited pre-law students enrolled in an LSAT course and imaged their brains at rest using fMRI both before and after 3 months of this "reasoning training". As controls, they included age- and IQ-matched pre-law students intending to take the LSAT in the future but not actively preparing for it.

The LSAT-prep group was found to have significantly increased connectivity between parietal and prefrontal cortices and the striatum, both within the left hemisphere and across hemispheres. In the first study, the authors note that

 

These experience-dependent changes fall into tracts that would be predicted by prior work showing that reasoning relies on an interhemispheric frontoparietal network (for review, see Prado et al., 2011). Our findings are also consistent with the view that reasoning is largely left-hemisphere dominant (e.g., Krawczyk, 2012), but that homologous cortex in the right hemisphere can be recruited as needed to support complex reasoning. Perhaps learning to reason more efficiently involves recruiting compensatory neural circuitry more consistently.


And in the second study, they conclude

 

An analysis of pairwise correlations between brain regions implicated in reasoning showed that fronto-parietal connections were strengthened, along with parietal-striatal connections. These findings provide strong evidence for neural plasticity at the level of large-scale networks supporting high-level cognition.

 

I think this hypothesized fronto-parietal reasoning network is supposed to go something like this:

The LSAT requires a lot of relational reasoning, the ability to compare and combine mental representations. The parietal cortex holds individual relationships between these mental representations (A->B, B->C), and the prefrontal cortex integrates this information to draw conclusions (A->B->C, therefore A->C). The striatum's role in this network would be to monitor the success/failure of reward predictions and encourage flexible problem solving. Unfortunately, my understanding here is very limited. Here are several reviews of this reasoning network stuff (I have not read any; just wanted to share them): Hampshire et al. (2011), Prado et al. (2011), Krawczyk (2012).

I hope this was useful information! According to the 2013 survey, only 2.2% of you are in law-related professions, but I was wondering (1) if anyone has personal experience studying for this exam, (2) if they felt like it improved their logical reasoning skills, and (3) if they felt that these effects were long-lasting. Studying for this test seems to have the potential to inculcate rationalist habits-of-mind; I know it's just self-report, but for those who went on to law school, did you feel like you benefited from the experience of studying for the LSAT? I only ask because the Law School Admission Council, a non-profit organization made up of 200+ law schools, seems to actively encourage preparation for the exam, member schools say it is a major factor in admissions, preparation tends to increase performance, and LSAT performance is correlated moderately-to-strongly with first-year law school GPA (r ≈ 0.4).

Talking to yourself: A useful thinking tool that seems understudied and underdiscussed

33 chaosmage 09 September 2014 04:56PM

I have returned from a particularly fruitful Google search, with unexpected results.

My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.

This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies; intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology; and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, and the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.

Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.

So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?

Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.

  • It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
  • Auditory information is retained more easily, so making thoughts auditory helps remember them later.
  • It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
  • System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
  • It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.

All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.

Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.

I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway so that hasn't been an issue really.

So, what do you think? Useful?

If we live in a simulation, what does that imply?

18 JoshuaFox 25 October 2012 09:27PM

If we live in a simulation, what does that imply about the world of our simulators and our relationship to them? [1]

Here are some proposals, often mutually contradictory, none stated with anything near certainty.

1. The simulators are much like us, or at least are our post-human descendants.

Drawing on some of the key points in Bostrom's Simulation Argument:

Today, we often simulate our human ancestors' lives, e.g., in Civilization. Our descendants will likely want to simulate their own ancestors, namely us, and they may have much-improved simulation technology which supports sentience. So, our simulators are likely to be our (post-)human descendants.

2. Our world is smaller than we think.

Robin Hanson has said that computational power will be dedicated to running only a small part of the simulation in detail, namely the part which we are in. Other parts of the simulation will be run at a lower resolution. Everything outside our vicinity, e.g., outside our solar system, will be calculated planetarium-style, and not from the level of particle physics.

(I wonder what it would be like if we are in the low-res part of the simulation.)

3. The world is likely to end soon.

There is no a priori reason for a base-level (unsimulated) universe to flicker out of existence. In fact, it would merely add complexity to the laws of physics for time to suddenly end with no particular cause.

But a simulator may decide that they have learned all they wanted to from their simulation; or that acausal trade has been completed; or that they are bored with the game; and that continuing the simulation is not  worth the computational cost.

The previous point was that the world is spatially smaller than we think. This point is that the world is temporally smaller than we hope.

4. We are living in a particularly interesting part of our universe.

The small part of the universe which the simulators would choose to focus on is the part which is interesting or entertaining to  them. Today's video games are mostly about war, fighting, or various other challenges to be overcome. Some, like the Sims, are about everyday life, but even in those, the players want to see something interesting. 

So, you are likely to be playing a pivotal role in our (simulated) world. Moreover, if you want to continue to be simulated, do what you can to make a difference in the world, or at least to do something entertaining.

5. Our simulators want to trade with us.

One reason to simulate another agent is to trade acausally with it.

Alexander Kruel's blog entry and this LW Wiki entry summarize the concept. In brief, agent P simulates or otherwise analyzes agent Q and learns that Q does  something that P wants, and also learns that the symmetrical statement is true: Q can simulate or analyze P well enough to know that P likewise does something that Q wants. 

This process may involve simulating the other agent for the purpose of learning its expected behavior. Moreover, for P to "pay" Q, it may well run Q -- i.e., simulate it.

So, if we live in a simulation, maybe our simulators are going to get some benefit from us humans, and we from them. (The latter will occur when we simulate these other intelligences).

In Jaan Tallinn's talk at Singularity Summit 2012, he gave an anthropic argument for our apparently unusual position at the cusp of the Singularity. If post-Singularity superintelligences across causally disconnected parts of the multiverse are trying to communicate with each other by mutual simulation, perhaps for the purpose of acausal trade, then they might simulate the entire history of the universe from the Big Bang to find the other superintelligences in mindspace. A depth-first search across all histories would spend most of the time where we are, right before the point at which superintelligences emerge.

6. We are part of a multiverse.

Today, we run many simulations in our world. Similarly, says Bostrom, our descendants are likely to be running many simulations of our universe: a multiverse.

Max Tegmark's Level IV multiverse theory is motivated partly by the idea that, following Occam's Razor, simpler universes are more likely. If the multiverse is treated as a computation, then among the most likely computations is one that generates all possible strings/programs/universes.
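One standard way to picture "a computation that generates all possible programs" is dovetailing: enumerate programs and interleave their execution so that every program is eventually started and then given arbitrarily many steps. The Python sketch below is only a toy illustration of that scheduling pattern, not anything Tegmark himself specifies; the "programs" here are placeholders.

    from itertools import count

    def dovetail(run_step, max_rounds=None):
        # In round k, advance each of programs 0..k-1 by one step, so every
        # program eventually gets started and then run for unboundedly many steps.
        steps_done = {}
        rounds = count(1) if max_rounds is None else range(1, max_rounds + 1)
        for k in rounds:
            for program in range(k):
                steps_done[program] = steps_done.get(program, 0) + 1
                run_step(program, steps_done[program])

    # Example: just print which (program, step) pairs get scheduled.
    dovetail(lambda p, s: print("program", p, "step", s), max_rounds=4)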

The idea of the universe/multiverse as computation is still philosophically controversial. But if we live in a simulation, then our universe is indeed a computation, and Tegmark's Level IV argument applies.

However, this is very different from the ancestor simulation described in points 1-3 above. That argument relies on the lower conditional complexity of the scenario -- we and our descendants are similar enough that if one exists, the other is not too improbable.

A brute-force universal simulation is an abstract possibility that specifies no role for simulators. In addition, if the simulators are anything like us, they would not have enough computational power to run it, nor would it be the most interesting thing for them to simulate.

But we don't know what computational power is available to our simulators, what their goals are, nor even if their universe is constrained by laws of physics remotely similar to ours.

7. [Added] The simulations are stacked.

If we are in a simulation, then (a) at least one universe, ours, is a simulation; and (b) at least one world includes a simulation with sentience. This gives some evidence that being simulated or being a simulator is not too unusual. The stack may lead all the way down to the basement world, the ultimate unsimulated simulator; or else the stack may go down forever; or [H/T Pentashagon] all universes may be considered to be simulating all others.

Are there any other conclusions about our world that we can reach from the idea that we live in a simulation?

[1] If there is a stack of simulators, with one world simulating another, the "basement level" is the world in which the stack bottoms out, the one which is simulating and not simulated. This uses a metaphor in which the simulators are below the simulated. An alternative metaphor, in which the simulators "look down" on the simulated, is also used.

How to Draw Conclusions Like Sherlock Holmes

-5 abcd_z 27 December 2011 01:29PM

 

Eliezer Yudkowsky once wrote that

[...] when you look at what Sherlock Holmes does - you can't go out and do it at home.  Sherlock Holmes is not really operating by any sort of reproducible method.  He is operating by magically finding the right clues and carrying out magically correct complicated chains of deduction.  Maybe it's just me, but it seems to me that reading Sherlock Holmes does not inspire you to go and do likewise.  Holmes is a mutant superhero.  And even if you did try to imitate him, it would never work in real life.

 

A few days ago I was at an acquaintance's house after watching the Sherlock miniseries on Netflix. My mind was whirling with the abilities displayed by the titular character, and I wandered around the house while others made small talk. I stopped by a large oil painting on one wall that was decent but had obvious problems with perspective. Additionally, it was missing a signature in the lower-right corner.

 

ANALYSIS:

Sub-par paintings don't generally get put on the market.

If the hostess thought it was worth putting on the wall, it was most likely because she had an emotional attachment to the piece.

Painters place their signatures in the corner of the painting to identify themselves as the creator. If the painter didn't bother leaving their mark, it was because they were confident that they didn't need to.

The conclusion I drew from this was that the painter was either the hostess herself or somebody very close to her. As it turns out, it was the hostess.
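For readers who want the inference spelled out, here is one way the chain of reasoning above could be cast as a small Bayesian update (a Python sketch; the prior and likelihood numbers are invented purely for illustration):

    priors = {"hostess_or_someone_close": 0.10, "unrelated_artist": 0.90}

    # P(observation | hypothesis) -- guessed values, just to show the update pattern.
    likelihoods = {
        "no_signature":          {"hostess_or_someone_close": 0.7, "unrelated_artist": 0.2},
        "flawed_but_on_display": {"hostess_or_someone_close": 0.8, "unrelated_artist": 0.3},
    }

    posterior = dict(priors)
    for observation, lk in likelihoods.items():
        unnormalized = {h: posterior[h] * lk[h] for h in posterior}
        total = sum(unnormalized.values())
        posterior = {h: p / total for h, p in unnormalized.items()}

    print(posterior)  # "hostess or someone close" climbs from 0.10 to roughly 0.5

Even starting from an unfavorable prior, two weak clues pointing the same way can shift the balance substantially, which is the pattern the analysis above relies on.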

 

Now, this anecdote hardly proves anything.  Still, I think it's a fun little thing and the ability to show off like that, even a small percentage of the time, is too good to pass up.  So I present my analysis of How to Become a Regular Sherlock Holmes.

 

1) Pay attention to details. Look around you at your environment.  A scratch on a wall, a limp in somebody's walk, a smudge on somebody's cheek.  At this point it's probably hard to tell what details are important, so pay attention to everything.

 

2) Answer these two questions:

"What am I looking at?" and

"What could it mean (if anything)?"

 

3) Check your guesses.

This is an important step. It's easy to make all sorts of judgments about the details and what they mean, but if you accept your own conclusions without checking the facts, you're likely to create false assumptions and associations that you then take as fact. That's the opposite of what we're trying to do here.

Fortunately, checking your guesses is very easy to do in most situations with another person. Just state what you've noticed and ask for information on the context.  For example, "I've noticed a large scratch on your end-table. Do you know how it happened?"

A follow-up question might be "why haven't you changed it out for another one?", but only if you think getting the information is more important than the possibility of being seen as rude and the potential consequences thereof.

 

In Summary:

 

Pay attention to details

"What am I looking at?"

"What could it mean?"

Check your guesses

 

Oh, and the painting I mentioned at the beginning? I actually didn't figure it out until she told me. I just about kicked myself when I realized I could have figured it out myself and pulled off a really cool Sherlock Summation if I hadn't asked first. C'est la vie.

 

Tidbit: “Semantic over-achievers”

6 kpreid 01 December 2011 03:49PM

[I'd put this in an open thread, but those don’t seem to happen these days, and while this is a quote it isn't a Rationality Quote.]

You know, one of the really weird things about us human beings […] is that we have somehow created for ourselves languages that are just a bit too flexible and expressive for our brains to handle. We have managed to build languages in which arbitrarily deep nesting of negation and quantification is possible, when we ourselves have major difficulties handling the semantics of anything beyond about depth 1 or 2. That is so weird. But that's how we are: semantic over-achievers, trying to use languages that are quite a bit beyond our intellectual powers.

Geoffrey K. Pullum, Language Log, “Never fails: semantic over-achievers”, December 1, 2011

This seems like it might lead to something interesting to say about the design of minds and the usefulness of generalization/abstraction, or perhaps just a good sound bite.

The Phobia or the Trauma: The Problem of the Chicken or the Egg in Moral Reasoning.

1 analyticsophy 15 June 2011 04:16AM

Introduction:

Today there is an almost universal prejudice against individuals with a certain sexual orientation. I am not talking about common homophobia; the prejudice I would like to bring to your attention is so rarely considered a prejudice that it has no particular name. Though the following words will most likely be met with harsh criticism, the prejudice referenced above is the prejudice that almost all of us have against pedophiles. At first thought, it may seem that having a phobia of pedophiles is no more a prejudice for a mother than having a fear of lions is for a mother chimpanzee, but I hope at least to show that the issue is not so clear.

This text does not at any point argue that pedophiles are regular people like you and me; they may well not be. If the hypothesis to be presented is true, however, it follows that the trauma children experience when molested would not happen if we didn't hold the moral judgements towards pedophiles that we do. If this is true, then the best thing for us to do as a species for our children is, paradoxically, to stop making the moral judgements we make towards pedophiles. Of course, intuition would have us believe that we hold those moral judgements towards pedophiles precisely because of how traumatic a molestation is for children; this is an attempt to show that that causal interaction goes both ways and forms a loop.

This isn't a defense of pedophilia, nor is it a suggestion that we should stop morally judging pedophiles as a culture; it's an analysis of how circularity can enter the domain of social morality undetected and spread rapidly. We will take a memetic approach to figuring this out, and always ask "how is it useful for the meme to have such and such property?" rather than "how is it useful for us to have a belief with such and such property?".

I will apologize here and now for the graphic nature of this text's subject. But know that part of what I claim is that the reason the following considerations are so rarely even heard is precisely because of their graphic nature. Nowhere in this text is there an argument that can even be loosely interpreted as a defense of individual acts of pedophilia, but the reader may well conclude that, in the end, fewer children would have been seriously hurt if we had refrained from involving our moral attitudes in our dealings with pedophiles.

Inherently Traumatic?:


Let's ask a simple question: "Would a feral child be traumatized if molested at a young age?" Notice there was no mention of sodomy in that question. Sodomy is clearly as traumatic to a child as any intense pain caused by another would be. But what about molestation? How can an infant tell the difference between being cleaned and being molested? These two actions could be made to appear behaviorally identical to the child. How does the brain know to get traumatized by one and not by the other? Clearly, children are more frequently traumatized by molestation than by being cleaned. They must somehow make the distinction, either during the act, soon after the event, or retroactively upon remembering the event in adulthood.

In any case, that distinction must either be learned or inherited. Though we are genetically designed to avoid certain stimuli, e.g., fire, sharp things, bitter chemicals, etc., it is unlikely that getting your genitals touched is one of those stimuli. There might be genes which give you a predisposition to being traumatized when molested as a child, but it is unlikely that we have a sense built into our bodies that distinguishes between acceptable and unacceptable genital touching before puberty. Again, any molestation that causes pain does not apply; we are considering only those cases of molestation which don't cause any physical pain.

If we somehow conclude that any given human does indeed react in a neurologically distinct way when touched on the genitals before puberty by an adult that isn't one of that human's parents, then certainly that sort of molestation would be out of the question. But at the risk of being far too graphic, the fact is that an infant or even a very young child would be largely incapable of distinguishing between grabbing a finger and grabbing an adult male genital. There is clearly nothing inherently evil about the foreskin of a male compared to the skin on his finger. The only difference is the adult's intention, which children, or at least infants, are largely insensitive to. What then is the justification for not allowing pedophiles to come to our houses and have our infants reach out and grab their genitals as our infants' instincts would have them do?

It could be argued that children might be traumatized simply by being forced to do something that they do not want to do, and that is certainly likely. But does that mean that we should allow our children to be involved in sexual acts with adults if they are consenting? If we were to argue that children cannot consent, then we would have to ask "can they be non-consenting?" What we generally mean by saying that "children cannot consent" is that they can't consent responsibly because they lack the information to do so. This is granted, but they can simply consent. Children can be made to be the main actors in cases of molestation and even consensual sex. Again, at the risk of being far too graphic: it is not uncommon for one child to molest another, nor is it uncommon for young friends of the same gender to naively engage in games of a sexual nature. Even in the case of molestation from an adult to an infant: if the adult presents his/her genitals, the infant will naturally grab. How this grabbing is to be distinguished by the infant from the thousands of other skin-covered objects that he/she will grab throughout his/her life remains a mystery to me.

Hypotheses:

Infants and children are not designed by evolution to avoid being involved in non-painful forms of sexual encounters which they are willing participants in. By "willing participant" all that is meant is not being forced to engage in the sexual act. The trauma that often follows sexual encounters with adults for children is caused by the reactions of the children's parents. There would be no trauma in the children if the parents and other role-models of said children saw sex with children as a routine part of growing up.

Experiments to Falsify:

(1): Take two appropriately large and randomized samples of infants and children. Have the control group monitored by a brain imaging device while being cleaned by their parents. Have the experimental group do the same, except that researchers dressed in normal clothes do the cleaning instead of the parents. If the difference observed in the neurological behavior of these two groups is larger than the difference between children simply looking at their parents and looking at strangers, then there is likely a mechanism from birth which identifies sexual acts. All subjects must be sufficiently young so as to have no learned association between their genitals and sex.

(2): Find a closed population which has no concept of sex as a demonized act or of children as being too young to have sex with. Determine this by extensive interviews with the adult population, designed to elicit contradictions. After finding this population, if it exists, show that the stability of those children who were involved in non-painful sexual acts with adults is lower than that of those children who were not involved. If this is accomplished, it will suggest that the behavior of the parents of victims of molestation is not the source of the trauma caused in children after being molested.

Experiments to Verify:

(1): Set up the same control and experimental groups as in (1) above. If we get the result that there is no significant difference between the neurological behavior of the two groups, then it becomes less likely that there is anything in children which allows them to tell the difference between non-painful acts of molestation and cleaning of the genitals.

(2): Find a population as described in (2). Show that those individuals who engaged in sexual acts at a young age have no lower stability than those who did not.

A Meme not a Gene:

If molestation is not inherently traumatic, why do we feel the need to protect our children from it? There are many possible reasons, but one of the most biological might be our jealousy. We are built to not let others have sex with loved ones, yes. But are we really biologically built to not let others have sex with our children? It would be a strange adaptation, to say the least. Why have children, and prevent them from reproducing? It might well be a side-effect of our evolved jealousy.

But more seems to be at play here than a confusion of jealousy. As my evidence for this, I propose that you recall how salacious and downright offensive you found it when I mentioned that an infant would instinctively grab a genital if presented with one. It doesn't have to be your own infant in your mind for you to be repulsed by imagining the situation. It is a repulsive situation to imagine for almost anyone I have met who is not a pedophile, and even for most pedophiles. If it is not our child we're imagining, just some random token child, and it is just some token child molester we are imagining, the image still repulses us greatly, which suggests that it does not come from biological design, since our genetic fitness is not at all increased by worrying about the children of others.

We likely started demonizing pedophiles well after the development of language, if the hypothesis stated above is correct. If trauma isn't caused in children by sexual acts with adults before they learn about the taboo nature of sex, then it is likely the taboo nature of sex that causes such events to be traumatic. But sex is not taboo because of our genetic history; sex is taboo because of our memetic history.

Why the Meme is such a Success (Imagining Patient Zero):

Let's imagine a hypothetical culture which has demonized sex but doesn't really have an accepted attitude towards pedophiles. Suppose one parent catches another adult engaged in sexual behavior with his/her children. The parent, confused by and scared of sexual action, quickly pulls the child away while attacking the other adult, and tells the child that he/she is not to do that anymore or go near that person. The child reacts negatively to this, now knowing that sex is demonic. We have all seen this sort of behavior before: if a child bumps his head and his/her parents say "Oh, that's ok, come on, we gotta get going" in a lovely mommy voice, the child is more likely to get up and keep on trucking. But if the parents react with "Oh God! Grab the ice pack, grab the ice pack!", yelling urgently, the child cries and may well act as if he/she is much more hurt than he/she really is.

When this hypothetical parent next sees his/her fellow parent friends he/she tells them of the event and how horrific it was for him/her, and how traumatic it was for his/her child. The other parents then warn their children of the strange man/woman that lured the first child and tell their own children never to go near that man/woman's house. The children of course need to find out why for themselves and go there anyway. Another child gets involved in acts of a sexual nature with the town pedophile. This catches the attention of a passerby, who by now knows of what goes on in that house, and how evil it is. This passerby alerts the others that it is happening again. At this point the town decides to do something about it. They lynch the pedophile. This becomes the talk of the town and of the local ruling government body.

Now all of the adults in the town know how to react to pedophilia: as if it would be a demonizing, traumatic event for their children. Acting this way when one of their children is inevitably molested causes that child to find the event traumatic. News of the trauma it caused the child spreads, and the whole process is repeated, strengthening the belief that children become traumatized when molested.

This thought experiment is likely not very much like what really happened to produce this meme in the first place. To actually understand how that happened, we would have to trace the memetic evolution of our ancestors much further back than we currently have the ability to do. But this hypothetical does at least give us a way of imagining how a belief like "sexual acts between children and adults cause trauma in the children involved" might start off false and become truer as it becomes more widely accepted, and more widely accepted as it becomes truer. In the end, holding that belief will cause more suffering in our children than not holding it, provided the hypothesis above is correct. But we believe it anyway, and our moral judgements stray that way anyway, regardless of whether or not we gain any benefit from the belief.

The true beneficiary here is the meme itself. The meme of fearing and hating pedophiles need not be useful for us as a species; it needs only to be good at getting itself spread. Luckily for the meme, as it gets itself spread, the belief associated with it becomes truer. This meme has a built-in belief that is a self-fulfilling prophecy, so that the more widespread the meme becomes, the better its chances of replicating. It's a feedback loop: the meme predisposes us to act a certain way towards molested children, acting towards molested children this way makes them find the event traumatic, and the observed trauma of the molested children reinforces the meme.
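As an abstract illustration of this kind of feedback loop (nothing topic-specific, and with entirely made-up parameters), here is a minimal Python sketch of a "self-proving meme": the prevalence of a belief raises the rate of the outcome the belief predicts, and observed outcomes in turn recruit more believers.

    def simulate(belief=0.05, base_rate=0.05, coupling=0.9, adoption=0.5, steps=30):
        # belief:    fraction of the population holding the meme
        # base_rate: how often the predicted outcome occurs with no believers
        # coupling:  how strongly belief prevalence raises the outcome rate
        # adoption:  how readily observed outcomes convert non-believers
        history = [belief]
        for _ in range(steps):
            outcome_rate = base_rate + coupling * belief * (1 - base_rate)
            belief = belief + adoption * outcome_rate * (1 - belief)
            history.append(belief)
        return history

    print([round(b, 2) for b in simulate()])  # climbs from 0.05 toward 1.0

With these arbitrary parameters the belief saturates even though it starts out barely supported, which is the "start off false and become truer as it spreads" dynamic described above.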

Conclusion:

We can and do hold very basic moral attitudes as a culture which are completely unexamined. Even the most basic moral judgements that we make, like "pedophilia is wrong", are not on as firm a footing as we would like to believe them to be. But when we sharpen the issue and are faced with the bluntness of the situation, things can become even more difficult. Our biases are very firmly rooted in us. Even I, who will tell you that I'm on the fence about the utility of demonizing pedophilia, am absolutely repulsed and ethically offended at the thought of such an act. But I consider it important that we think sharply about the utility involved in such basic and unquestioned moral judgements and report our progress.

If we find that those most basic moral judgements haven't been beneficial to us as a whole, we should start to wonder about whether or not ensuring utility really is the point of our moral system. Alternatively, our moral system might have little benefit to us and evolve only because it benefits the memes of which it is made. Our whole theory of ethics might be the result of nothing more than the continued warfare of memes for our brains. Sometimes the memes convince us to adopt them by being beneficial, sometimes they just trick us into thinking they are right, and other times they make themselves true by the mere virtue of spreading themselves. This last class of memes we can call "self-proving memes", and it is this class that the hypothesis above suggests the fearing-and-hating-pedophiles meme belongs to. If that hypothesis is falsified by any of the suggested experiments or any other applicable experiment, we should still consider that the hypothesis has never even been suggested outside this text. Is this more likely because the hypothesis is so stupid, or because it is so deeply rooted in us not to question such simple facts?