
Open Thread, Jul. 27 - Aug 02, 2015

4 MrMind 27 July 2015 07:16AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Group rationality diary for July 12th - August 1st 2015

6 Gunnar_Zarncke 26 July 2015 11:31PM

This is the public group rationality diary for July 12th - August 1st, 2015. It's a place to record and chat about things you have done, or are actively doing, such as:

  • Established a useful new habit

  • Obtained new evidence that made you change your mind about some belief

  • Decided to behave in a different way in some set of situations

  • Optimized some part of a common routine or cached behavior

  • Consciously changed your emotions or affect with respect to something

  • Consciously pursued new valuable information about something that could make a big difference in your life

  • Learned something new about your beliefs, behavior, or life that surprised you

  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Archive of previous rationality diaries

Note to future posters: no one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it. It should run for about two weeks, finish on a Saturday, and have the 'group_rationality_diary' tag.

3 classifications of thinking, and a problem.

0 Elo 26 July 2015 03:33PM

I propose three classifications of thinking: "past", "future", and "present", followed by a hard question.

 

Past

This covers any system of review: any overview of past progress, and any learning from the past broadly, including history, past opportunities or challenges, shelved projects, known problems, and previous progress. A fraction of your time should be spent in this process of review in order to influence your plan for the future.

 

Future

Any planning tasks, or strategic thinking about plotting a course forward towards a purposeful goal. This can overlap with past-thinking, since planning for the future naturally draws on the past.

 

Present

These are the tasks that get done now; this is where stuff really happens. (Technically both past-thinking and future-thinking are things you do in the present, and take up time in the present, but I want to keep them apart for now.) This is the living, breathing, getting-things-done time: the bricks and mortar of actually building something, creating and generating progress towards a designated future goal.

 

The hard question

I am stuck on finding a heuristic or estimate for how much time should be spent in each area of being/doing. I reached a point where I uncovered a great deal of neglect of both reviewing past events and making purposeful plans for the future.

If 100% of your time is spent on the past, nothing will ever get done, other than developing a clear understanding of your mistakes.

Similarly, 100% spent on the future will lead to a lot of dreaming and no actual progress towards it.

Equally, if all your time is spent running very fast in the present-doing state, you might be going very fast; but without knowing where you are going, you might be in a state of not-even-wrong, and not know it.

10/10/80?  20/20/60?  25/25/50? 10/20/70?

I am looking for suggestions for a division of each 168-hour week that might prove fruitful, or a method or reason for preferring a certain division (at least before I go all empirical trial-and-error on this puzzle).
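For concreteness, here is what those candidate splits translate to in hours over a 168-hour week (treating the whole week as allocatable, which is of course a simplification):

```python
# Convert candidate past/future/present splits into hours of a 168-hour week.
WEEK_HOURS = 168

candidate_splits = {
    "10/10/80": (0.10, 0.10, 0.80),
    "20/20/60": (0.20, 0.20, 0.60),
    "25/25/50": (0.25, 0.25, 0.50),
    "10/20/70": (0.10, 0.20, 0.70),
}

for name, shares in candidate_splits.items():
    past, future, present = (round(WEEK_HOURS * s, 1) for s in shares)
    print(f"{name}: past {past}h, future {future}h, present {present}h per week")
```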

I would be happy with recommended reading on the topic if that can be provided.

Have you ever personally tackled the buckets? Did you come up with a strategy for how to decide between them?

Thanks for the considerations.

Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time

27 CellBioGuy 26 July 2015 07:38AM

This is the first in a series of posts I am putting together on a personal blog I just started two days ago as a collection of my musings on astrobiology ("The Great A'Tuin" - sorry, I couldn't help it), and will be reposting here.  Much has been written here about the Fermi paradox and the 'great filter'.   It seems to me that going back to a somewhat more basic level of astronomy and astrobiology is extremely informative to these questions, and so this is what I will be doing.  The bloggery is intended for a slightly more general audience than this site (hence much of the content of the introduction) but I think it will be of interest.  Many of the points I will be making are ones I have touched on in previous comments here, but hope to explore in more detail.

This post is a combined version of my first two posts - an introduction, and a discussion of our apparent position in space and time in the universe.  The blog posts may be found at:

http://thegreatatuin.blogspot.com/2015/07/whats-all-this-about.html

http://thegreatatuin.blogspot.com/2015/07/space-and-time.html

Text reproduced below.

 

 



What's all this about?


This blog is to be a repository for the thoughts and analysis I've accrued over the years on the topic of astrobiology, and the place of life and intelligence in the universe.  All my life I've been pulled to the very large and the very small.  Life has always struck me as the single most interesting thing on Earth, with its incredibly fine structure and vast, amazing history and fantastic abilities.  At the same time, the vast majority of what exists is NOT on Earth.  Going up in size from human scale by the same number of orders of magnitude as you go down to reach a hydrogen atom, you get just about to Venus at its closest approach to Earth - or one billionth the distance to the nearest star.  The large is much larger than the small is small.  On top of this, we now know that the universe as we know it is much older than life on Earth.  And we know so little of the vast majority of the universe.

There's a strong tendency towards specialization in the sciences.  These days, there pretty much has to be for anybody to get anywhere.  Much of the great foundational work of physics was done on tabletops, and the law of gravitation was derived from data on the motions of the planets taken without the benefit of so much as a telescope.  All the low-hanging fruit has been picked.  To continue to further knowledge of the universe, huge instruments and vast energies are put to bear in astronomy and physics.  Biology is arguably a bit different, but the very complexity that makes living systems so successful and so fascinating to study means that there is so much to study that any one person is often only looking at a very small problem.

This has distinct drawbacks.  The universe does not care for our abstract labels of fields and disciplines - it simply is, at all scales simultaneously, at all times and in all places.  When people focus narrowly on their subject of interest, it can prevent them from realizing the implications of their findings for problems usually considered part of a different field.

It is one of my hopes to try to bridge some gaps between biology and astronomy here.  I very nearly double-majored in biology and astronomy in college; the only thing that prevented this (leading to an astronomy minor) was a bad attitude towards calculus.  As is, I am a graduate student studying basic cell biology at a major research university, who nonetheless keeps in touch with a number of astronomer friends and keeps up with the field as much as possible.  I quite often find that what I hear and read about has strong implications for questions of life elsewhere in the universe, but see so few of these implications actually get publicly discussed. All kinds of information shedding light on our position in space and time, the origins of life, the habitability of large chunks of the universe, the course that biospheres take, and the possible trajectories of intelligences seem to me to be out there unremarked.

It is another of my hopes to try, as much as is humanly possible, to take a step back from the usual narratives about extraterrestrial life and instead focus on something closer to first principles: what we actually have observed and have not, what we can observe and what we cannot, and what this leaves open, likely, or unlikely.  In my study of the history of the ideas of extraterrestrial life and extraterrestrial intelligence, all too often these take a back seat to popular narratives of the day.  In the 16th century the notion that the Earth moved in a similar way to the planets gained currency and led to the suppositions that they might be made of similar stuff and that the planets might even be inhabited.  The hot question was, of course, whether their inhabitants would be Christians, and what their relationship with God would be given the anthropocentric biblical creation stories.  In the late 19th and early 20th century, Lowell's illusory canals on Mars were advanced as evidence for a Martian socialist utopia.  In the 1970s, Carl Sagan waxed philosophical on the notion that contacting old civilizations might teach us how to save ourselves from nuclear warfare.  Today, many people focus on the Fermi paradox - the apparent contradiction that since much of the universe is quite old, extraterrestrials experiencing continuing technological progress and growth should have colonized and remade it in their image long ago, and yet we see no evidence of this.  I move that all of these notions have a similar root - inflating the hot concerns and topics of the day to cosmic significance and letting them obscure the actual, scientific questions that can be asked and answered.

Life and intelligence in the universe is a topic worth careful consideration, from as many angles as possible.  Let's get started.

 


Space and Time


Those of an anthropic bent have often made much of the fact that we are only 13.7 billion years into what is apparently an open-ended universe that will expand at an accelerating rate forever.  The era of the stars will last a trillion years; why do we find ourselves at this early date if we assume we are a ‘typical’ example of an intelligent observer?  In particular, this has lent support to lines of argument that perhaps the answer to the ‘great silence’ and lack of astronomical evidence for intelligence or its products in the universe is that we are simply the first.  This notion requires, however, that we are actually early in the universe when it comes to the origin of biospheres and by extension intelligent systems.  It has become clear recently that this is not the case. 

The clearest research I can find illustrating this is the work of Sobral et al, illustrated here http://arxiv.org/abs/1202.3436 via a paper on arxiv  and here http://www.sciencedaily.com/releases/2012/11/121106114141.htm via a summary article.  To simplify what was done, these scientists performed a survey of a large fraction of the sky looking for the emission lines put out by emission nebulae, clouds of gas which glow like neon lights excited by the ultraviolet light of huge, short-lived stars.  The amount of line emission from a galaxy is thus a rough proxy for the rate of star formation – the greater the rate of star formation, the larger the number of large stars exciting interstellar gas into emission nebulae.  The authors use redshift of the known hydrogen emission lines to determine the distance to each instance of emission, and performed corrections to deal with the known expansion rate of the universe.  The results were striking.  Per unit mass of the universe, the current rate of star formation is less than 1/30 of the peak rate they measured 11 gigayears ago.  It has been constantly declining over the history of the universe at a precipitous rate.  Indeed, their preferred model to which they fit the trend converges towards a finite quantity of stars formed as you integrate total star formation into the future to infinity, with the total number of stars that will ever be born only being 5% larger than the number of stars that have been born at this time. 

In summary, 95% of all stars that will ever exist, already exist.  The smallest longest-lived stars will shine for a trillion years, but for most of their history almost no new stars will have formed.

At first this seems to reverse the initial conclusion that we came early, suggesting we are instead latecomers.  This is not true, however, when you consider where and when stars of different types can form and the fact that different galaxies have very different histories.  Most galaxies formed via gravitational collapse from cool gas clouds and smaller precursor galaxies quite a long time ago, with a wide variety of properties.  Dwarf galaxies have low masses, and their early bursts of star formation lead to energetic stars with strong stellar winds and lots of ultraviolet light which eventually go supernova.  Their energetic lives and even more energetic deaths appear to usually blast star-forming gases out of their galaxies’ weak gravity or render it too hot to re-collapse into new star-forming regions, quashing their star formation early.  Giant elliptical galaxies, containing many trillions of stars apiece and dominating the cores of galactic clusters, have ample gravity but form with nearly no angular momentum.  As such, most of their cool gas falls straight into their centers, producing an enormous burst of low-heavy-element star formation that uses most of the gas.  The remaining gas is again either blasted into intergalactic space or rendered too hot to recollapse and accrete by a combination of the action of energetic young stars and the infall of gas onto the central black hole producing incredibly energetic outbursts.   (It should be noted that a full 90% of the non-dark-matter mass of the universe appears to be in the form of very thin X-ray-hot plasma clouds surrounding large galaxy clusters, unlikely to condense to the point of star formation via understood processes.)  Thus, most dwarf galaxies and giant elliptical galaxies contributed to the early star formation of the universe but are producing few or no stars today, have very low levels of heavy element rich stars, and are unlikely to make many more going into the future.

Spiral galaxies are different.  Their distinguishing feature is the way they accreted – namely with a large amount of angular momentum.  This allows large amounts of their cool gas to remain spread out away from their centers.  This moderates the rate of star formation, preventing the huge pulses of star formation and black hole activation that exhausts star-forming gas and prevents gas inflow in giant ellipticals.  At the same time, their greater mass than dwarf galaxies ensures that the modest rate of star formation they do undergo does not blast nearly as much matter out of their gravitational pull.  Some does leave over time, and their rate of inflow of fresh cool gas does apparently decrease over time – there are spiral galaxies that do seem to have shut down star formation.  But on the whole a spiral is a place that maintains a modest rate of star formation for gigayears, while heavy elements get more and more enriched over time.  These galaxies thus dominate the star production in the later eras of the universe, and dominate the population of stars produced with large amounts of heavy elements needed to produce planets like ours.  They do settle down slowly over time, and eventually all spirals will either run out of gas or merge with each other to form giant ellipticals, but for a long time they remain a class apart.

Considering this, we’re just about where we would expect a planet like ours (and thus a biosphere-as-we-know-it) to exist in space and on a coarse scale in time.  Let’s look closer at our galaxy now.  Our galaxy is generally agreed to be about 12 billion years old based on the ages of globular clusters, with a few interloper stars here and there that are older and would’ve come from an era before the galaxy was one coherent object.  It will continue forming stars for about another 5 gigayears, at which point it will undergo a merger with the Andromeda galaxy, the nearest large spiral galaxy.  This merger will most likely put an end to star formation in the combined resultant galaxy, which will probably wind up as a large elliptical after one final exuberant starburst.  Our solar system formed about 4.5 gigayears ago, putting its formation pretty much halfway along the productive lifetime of the galaxy (and probably something like 2/3 of the way along its complement of stars produced, since spirals DO settle down with age, though more of its later stars will be metal-rich).

On a stellar and planetary scale, we once again find ourselves where and when we would expect your average complex biosphere to be.  Large stars die fast – star brightness goes up with the 3.5th power of star mass, and thus star lifetime goes down with the 2.5th power of mass.  A 2 solar mass star would be 11 times as bright as the sun and only live about 2 billion years – a time along the evolution of life on Earth before photosynthesis had managed to oxygenate the air and in which the majority of life on earth (but not all – see an upcoming post) could be described as “algae”.  Furthermore, although smaller stars are much more common than larger stars (the Sun is actually larger than over 80% of stars in the universe) stars smaller than about 0.5 solar masses (and thus 0.08 solar luminosities) are usually ‘flare stars’ – possessing very strong convoluted magnetic fields and periodically putting out flares and X-ray bursts that would frequently strip away the ozone and possibly even the atmosphere of an earthlike planet. 
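To make that mass-luminosity-lifetime scaling concrete, here is a rough sketch, normalised to an assumed 10-gigayear main-sequence lifetime for the Sun:

```python
# Rough scaling from the text: luminosity ~ M^3.5 in solar units, so
# lifetime ~ fuel/luminosity ~ M / M^3.5 = M^-2.5 (also in solar units).
SUN_LIFETIME_GYR = 10.0  # assumed normalisation for the Sun's main-sequence lifetime

def luminosity(mass_solar: float) -> float:
    """Approximate luminosity in solar units."""
    return mass_solar ** 3.5

def lifetime_gyr(mass_solar: float) -> float:
    """Approximate main-sequence lifetime in gigayears."""
    return SUN_LIFETIME_GYR * mass_solar ** -2.5

for m in (0.5, 1.0, 2.0):
    print(f"M = {m} Msun: L ~ {luminosity(m):.2f} Lsun, lifetime ~ {lifetime_gyr(m):.1f} Gyr")
# A 2-solar-mass star comes out at roughly 11 Lsun and ~1.8 Gyr,
# matching the figures quoted in the paragraph above.
```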

All stars also slowly brighten as they age – the sun is currently about 30% brighter than it was when it formed, and it will wind up about twice as bright as its initial value just before it becomes a red giant.  Depending on whose models of climate sensitivity you use, the Earth’s biosphere probably has somewhere between 250 million years and 2 billion years before the oceans boil and we become a second Venus.  Thus, we find ourselves in the latter third-to-twentieth of the history of Earth’s biosphere (consistent with complex life taking time to evolve).

Together, all this puts our solar system – and by extension our biosphere – pretty much right where we would expect to find it in space, and right in the middle of where one would expect to find it in time.  Once again, as observers we are not special.  We do not find ourselves in the unexpectedly early universe, ruling out one explanation for the Fermi paradox sometimes put forward – that we do not see evidence for intelligence in the universe because we simply find ourselves as the first intelligent system to evolve.  This would be tenable if there was reason to think that we were right at the beginning of the time in which star systems in stable galaxies with lots of heavy elements could have birthed complex biospheres.  Instead we are utterly average, implying that the lack of obvious intelligence in the universe must be resolved either via the genesis of intelligent systems being exceedingly rare or intelligent systems simply not spreading through the universe or becoming astronomically visible for one reason or another. 

In my next post, I will look at the history of life on Earth, the distinction between simple and complex biospheres, and the evidence for or against other biospheres elsewhere in our own solar system.

Catastrophe Engines: A possible resolution to the Fermi Paradox

-4 snarles 25 July 2015 07:00PM

The Fermi Paradox leads us to conclude that either A) intelligent life is extremely improbable, B) intelligent life very rarely grows into a higher-level civilization, or C) higher-level civilizations are common, but are not easy to spot. But each of these explanations is hard to believe. It is hard to believe that intelligent life is rare, given that hominids evolved intelligence so quickly. It is hard to believe that intelligence is inherently self-destructive, since as soon as an intelligent species gains the ability to colonize distant planets, it becomes increasingly unlikely that the entire species could be wiped out; meanwhile, it appears that our own species is on the verge of attaining this potential. It is hard to believe C, since natural selection favors expansionism, so if even a tiny fraction of higher-level civilizations value expansion, then those civilizations become extremely visible to observers due to their exponential rate of expansion. Not to mention that our own system should have already been colonized by now.

Here I present a new explanation on why higher-level civilizations might be common, and yet still undetected.  The key assumption is the existence of a type of Matrioshka brain which I call a "Catastrophe Engine."  I cannot even speculate on the exotic physics which might give rise to such a design.  However, the defining characteristics of a Catastrophe Engine are as follows:

  1. The Catastrophe Engine is orders of magnitude more computationally powerful than any Matrioshka Brain possible by conventional physics.
  2. The Catastrophe Engine has a fixed probability 1 - e^(-λt) of "meltdown" in any interval of t seconds.  In other words, the lifetime of a Catastrophe Engine is an exponentially distributed random variable with a mean lifetime of 1/λ seconds.
  3. When the Catastrophe Engine suffers a meltdown, it has a destructive effect of radius r, which, among other things, results in the destruction of all other Catastrophe Engines within the radius, and furthermore renders it permanently impossible to rebuild Engines within the radius.
A civilization using Catastrophe Engines would be incentivized to construct the Engines far apart from each other, which explains why we have never detected such a civilization.  Some simple math shows why this would be the case.

Consider a large spherical volume of space.  A civilization places a number of Catastrophe Engines in the volume: suppose the Engines are placed in a density so that each Engine is within a radius r of n other such Engines.  The civilization seeks to maximize the total computational lifetime of the collection of Engines.

The probability that any given Engine will be destroyed by itself or its neighbors in any given interval of t seconds is 1 - e^(-nλt).
The expected lifetime of an Engine is therefore T = 1/(n λ).
The total computational lifetime of the system is proportional to nT = n/(n λ) = 1/λ.

Hence, there is no incentive for the civilization to build Catastrophe Engines to a density n greater than 1.  If the civilization gains extra utility from long computational lifetimes, as we could easily imagine, then the civilization is in fact incentivized to keep the Catastrophe Engines from getting too close.
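A small Monte Carlo sketch of that indifference, treating an Engine's lifetime as exponential with combined rate nλ as in the approximation above (the numerical value of λ here is arbitrary):

```python
# Each Engine fails at combined rate n*lambda (the post's approximation),
# i.e. its lifetime is the minimum of n independent Exp(lambda) draws.
# Total computation per Engine-site ~ n * mean lifetime, which should be ~1/lambda.
import random

LAMBDA = 1e-3    # per-Engine meltdown rate, arbitrary units (an assumption)
TRIALS = 100_000

def mean_lifetime(n: int) -> float:
    """Monte Carlo estimate of an Engine's expected lifetime at density n."""
    total = 0.0
    for _ in range(TRIALS):
        total += min(random.expovariate(LAMBDA) for _ in range(n))
    return total / TRIALS

for n in (1, 2, 5, 10):
    t = mean_lifetime(n)
    print(f"n={n:2d}: mean lifetime ~ {t:,.0f}, n*T ~ {n * t:,.0f} (1/lambda = {1/LAMBDA:,.0f})")
```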

Now suppose the radius r is extremely large, i.e. on the order of intergalactic distances.  Then the closest Catastrophe Engine is likely on the order of r away from ourselves, and may be quite difficult to spot even if it is highly visible.

On the other hand, the larger the radius of destruction r, the more likely it is that we would be able to observe the effects of a meltdown given that it occurs within our visible universe.  But since a larger radius also implies a smaller number of Catastrophe Engines, a sufficiently large radius (and long expected lifetime) makes it more likely that a meltdown has simply not yet occurred in our visible universe.

The existence of Catastrophe Engines alone does not explain the Fermi Paradox.  We also have to rule out the possibility that a civilization with Catastrophe Engines will still litter the universe with visible artifacts, or that highly visible expansionist civilizations which have not yet developed Catastrophe Engines would coexist with invisible civilizations using Catastrophe Engines.  But there are many ways to fill in these gaps.  Catastrophe Engines might be so potent that a civilization ceases to bother with any other possibly visible projects beyond the construction of additional Catastrophe Engines.  Furthermore, it could be that civilizations using Catastrophe Engines actively neutralize other spacefaring civilizations, fearing disruption to their Engines.  Or Catastrophe Engines might be rapidly discovered: their principles become known to most civilizations before those civilizations have become highly visible.

 

The Catastrophe Engine is by no means a conservative explanation of the Fermi Paradox, since only the most speculative principles of physics could possibly explain how an object of such destructive power could be constructed.  Nevertheless, it is one explanation of how higher civilizations might be hard to detect as a consequence of purely economic motivations.

Supposing this is a correct explanation of the Fermi paradox, does it result in a desirable outcome for the long-term future of the human race?  Perhaps not, since it necessarily implies the existence of a destructive technology that could damage a distant civilization.  Any civilization lying close enough to be affected by our civilization would be incentivized to neutralize us before we gain this technology.  On the other hand, if we could gain the technology before being detected, then mutually assured destruction could give us a bargaining chip, say, to be granted virtual tenancy in one of their Matrioshka Brains.

MIRI Fundraiser: Why now matters

20 So8res 24 July 2015 10:38PM

Our summer fundraiser is ongoing. In the meantime, we're writing a number of blog posts to explain what we're doing and why, and to answer a number of common questions. Previous posts in the series are listed at the above link.


I'm often asked whether donations to MIRI now are more important than donations later. Allow me to deliver an emphatic yes: I currently expect that donations to MIRI today are worth much more than donations to MIRI in five years. As things stand, I would very likely take $10M today over $20M in five years.

That's a bold statement, and there are a few different reasons for this. First and foremost, there is a decent chance that some very big funders will start entering the AI alignment field over the course of the next five years. It looks like the NSF may start to fund AI safety research, and Stuart Russell has already received some money from DARPA to work on value alignment. It's quite possible that in a few years' time significant public funding will be flowing into this field.

(It's also quite possible that it won't, or that the funding will go to all the wrong places, as was the case with funding for nanotechnology. But if I had to bet, I would bet that it's going to be much easier to find funding for AI alignment research in five years' time).

In other words, the funding bottleneck is loosening — but it isn't loose yet.

We don't presently have the funding to grow as fast as we could over the coming months, or to run all the important research programs we have planned. At our current funding level, the research team can grow at a steady pace — but we could get much more done over the course of the next few years if we had the money to grow as fast as is healthy.

Which brings me to the second reason why funding now is probably much more important than funding later: because growth now is much more valuable than growth later.

There's an idea picking up traction in the field of AI: instead of focusing only on increasing the capabilities of intelligent systems, it is important to also ensure that we know how to build beneficial intelligent systems. Support is growing for a new paradigm within AI that seriously considers the long-term effects of research programs, rather than just the immediate effects. Years down the line, these ideas may seem obvious, and the AI community's response to these challenges may be in full swing. Right now, however, there is relatively little consensus on how to approach these issues — which leaves room for researchers today to help determine the field's future direction.

People at MIRI have been thinking about these problems for a long time, and that puts us in an unusually good position to influence the field of AI and ensure that some of the growing concern is directed towards long-term issues in addition to shorter-term ones. We can, for example, help avert a scenario where all the attention and interest generated by Musk, Bostrom, and others gets channeled into short-term projects (e.g., making drones and driverless cars safer) without any consideration for long-term risks that are more vague and less well-understood.

It's likely that MIRI will scale up substantially at some point; but if that process begins in 2018 rather than 2015, it is plausible that we will have already missed out on a number of big opportunities.

The alignment research program within AI is just now getting started in earnest, and it may even be funding-saturated in a few years' time. But it's nowhere near funding-saturated today, and waiting five or ten years to begin seriously ramping up our growth would likely give us far fewer opportunities to shape the methodology and research agenda within this new AI paradigm. The projects MIRI takes on today can make a big difference years down the line, and supporting us today will drastically affect how much we can do quickly. Now matters.

I encourage you to donate to our ongoing fundraiser if you'd like to help us grow!


This post is cross-posted from the MIRI blog.

Test Driven Thinking

3 adamzerner 24 July 2015 06:38PM

Programmers do something called Test Driven Development. Basically, they write tests that say "I expect my code to do this", then write more code, and if the subsequent code they write breaks a test they wrote, they'll be notified.
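For illustration, here is a minimal test-first example using Python's built-in unittest module (the slugify function and its tests are invented purely as an example):

```python
# A tiny test-first example: the tests state the expectation ("I expect my
# code to do this") and keep holding later changes to account.
import unittest

def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Test Driven Thinking"), "test-driven-thinking")

    def test_surrounding_whitespace_is_dropped(self):
        self.assertEqual(slugify("  Open Thread  "), "open-thread")

if __name__ == "__main__":
    unittest.main()  # any later change that breaks an expectation fails loudly
```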

Wouldn't it be cool if there was Test Driven Thinking?

  1. Write tests: "I expect that this is true."
  2. Think: "I claim that A is true. I claim that B is true."
  3. If A or B causes any of your tests to fail, you'd be notified.

I don't know where to run with this though. Maybe someone else will be able to take this idea further. My thoughts:
  • It'd be awesome if you could apply TDT and be notified when your tests fail, but this seems very difficult to implement.
  • I'm not sure what a lesser but still useful version would look like (one rough possibility is sketched just after this list).
  • Maybe this idea could serve as some sort of intuition pump for intellectual hygiene ("What do you think you know, and why do you think you know it?"). I.e. having understood the idea of TDT, maybe it'd motivate/help people to apply intellectual hygiene, which is sort of like a manual version of TDT, where you're the one constantly running the tests.
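One hypothetical shape a "lesser version" might take, purely as a sketch (the BeliefStore class and its methods are invented for illustration, not an existing tool): keep stated expectations as executable checks over a small store of claims, and re-run them whenever a claim is added.

```python
# Hypothetical sketch: beliefs are named boolean claims, "tests" are
# predicates over them, and adding a claim re-runs every test.
from typing import Callable, Dict

class BeliefStore:
    def __init__(self):
        self.claims: Dict[str, bool] = {}
        self.tests: Dict[str, Callable[[Dict[str, bool]], bool]] = {}

    def expect(self, name: str, test: Callable[[Dict[str, bool]], bool]) -> None:
        """Register an expectation ('I expect that this is true')."""
        self.tests[name] = test

    def claim(self, name: str, value: bool) -> None:
        """Assert a claim, then report any expectations it breaks."""
        self.claims[name] = value
        for test_name, test in self.tests.items():
            if not test(self.claims):
                print(f"claim '{name}' broke expectation '{test_name}'")

beliefs = BeliefStore()
beliefs.expect("no contradictions about rain",
               lambda c: not (c.get("it is raining") and c.get("the ground is dry")))
beliefs.claim("it is raining", True)
beliefs.claim("the ground is dry", True)   # -> flags the broken expectation
```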

 

New LW Meetups: Indianapolis, Kyiv

2 FrankAdamek 24 July 2015 03:27PM

This summary was posted to LW Main on July 17th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Bayesian Reasoning - Explained Like You're Five

4 Satoshi_Nakamoto 24 July 2015 03:59AM

(This post is not an attempt to convey anything new, but is instead an attempt to convey the concept of Bayesian reasoning as simply as possible. There have been other elementary posts that have covered how to use Bayes’ theorem: here, here, here and here)

 

Bayes’ theorem is about the probability that something is true given some piece or pieces of evidence. In a really simple form it is basically the equation below:

    probability it’s true given the evidence = (expected number of times it’s true) / (expected number of times it’s true + expected number of times it’s false)


This will be explained using the following coin flipping scenario:

Suppose someone flips one of two coins, picked at random: one fair, and one biased (it has heads on both sides). What is the probability that the coin flipped was the fair coin, given that you know that the result of the flip was heads?

 

Let’s figure this out by listing out the potential states using a decision tree:

 

We know that the tail state is not true because the result of the coin being flipped was heads. So, let’s update the decision tree:

 

 

The decision tree now lists all of the possible states given that the result was heads. 

Let’s now plug the values into the formula. We know that there are three potential states: one in which the coin is fair and two in which it is biased. Let’s assume that each state has the same likelihood.

So, the result is 1 / (1 + 2), which is 1 / 3, which equals about 33%. Using the formula we have found out that there is a 33% chance that the coin flipped was the fair one when we already know that the result of the flip was heads.
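If you like, you can check this with a quick simulation; the coin to flip is assumed to be picked with equal probability, matching the assumption above that each state is equally likely:

```python
# Simulate picking one of the two coins at random and flipping it, then count
# how often the coin was fair among the flips that came up heads.
import random

TRIALS = 100_000
fair_given_heads = 0
heads_total = 0

for _ in range(TRIALS):
    coin_is_fair = random.random() < 0.5            # pick one of the two coins
    heads = random.random() < 0.5 if coin_is_fair else True
    if heads:
        heads_total += 1
        fair_given_heads += coin_is_fair

print(f"P(fair | heads) ~ {fair_given_heads / heads_total:.3f}")  # ~0.333
```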

 

At this point you may be wondering what any of this has to do with Bayesian reasoning. Well, the relation is that the above formula is pretty much the same as Bayes’ theorem, which in its explicit form is:

    P(A|B) = (P(B|A) * P(A)) / (P(B|A) * P(A) + P(B|~A) * P(~A))

 

You can see that P(B|A) * P(A) appears on both the top and the bottom of the equation. It represents the “expected number of times it’s true” in the generic formula above, and P(B|~A) * P(~A) represents the “expected number of times it’s false”.

 

You don’t need to worry about what the whole formula means yet as this post is just about how to use Bayesian reasoning and why it is useful. If you want to find out how to deduce Bayes' theorem, check out this post. If you want some examples of how to use Bayes' theorem see one of these posts: 1, 2, 3 and 4.


Let’s now continue on. This time we will be going through a totally different example. This example will demonstrate what it is like to use Bayesian reasoning.

Imagine a scenario with a teacher and a normally diligent student. The student tells the teacher that they have not completed their homework because their dog ate it. Take note of the following:

  • H stands for the hypothesis, which is that the student did their homework. This is possible, but the teacher does not think that it is very likely. The teacher only has the evidence of the student’s diligence to back up this hypothesis, which does affect the probability that the hypothesis is correct, but not by much.
  • ~H stands for the opposite hypothesis, which is that the student did not do their homework. The teacher thinks that this is likely and also believes that the evidence (no extra evidence backing up the student’s claim, and a cliché excuse) points towards this opposite hypothesis.

 

Which do you think is more probable: H or ~H? If you look at how typical ~H is and how likely the evidence is if ~H is correct, then I believe that we must see ~H (which stands for the student did not do their homework) as more probable. The below picture demonstrates this. Please note that higher probability is represented as being heavier i.e. lower in the weight-scale pictures below.

 

The teacher is using Bayesian reasoning, so they don’t actually take ~H (student did not do their homework) as being true. They take it as being probable given the available evidence. The teacher knows that if new evidence is provided then this could make the H more probable and ~H less probable. So, knowing this the teacher tells the student that if they bring in their completed homework tomorrow and provide some new evidence then they will not get a detention tomorrow. 

 

Let’s assume that the next day the student does bring in their completed homework and they also bring in the remains of the original homework that looks like it has been eaten by a dog. Now, the teacher, since they have received new evidence, must update the probabilities of the hypotheses. The teacher also remembers the original evidence (the student’s diligence). When the teacher updates the probabilities of the hypotheses, H (student did their homework) becomes more probable and ~H (student did not do their homework) becomes less probable, but note that it is not considered impossible. After updating the probabilities of the hypotheses the teacher decides to let the student out of the detention. This is because the teacher now sees H as being the best hypothesis that is able to explain the evidence.
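To put illustrative numbers on this update (the prior and likelihood values below are assumptions invented for the example, not anything specified in the story), the explicit form gives:

```python
# Hypothetical numbers for the homework example: prior, likelihoods, posterior.
def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Explicit-form Bayes: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_e_given_h * prior_h
    denominator = numerator + p_e_given_not_h * (1 - prior_h)
    return numerator / denominator

prior = 0.3           # assumed: the teacher initially doubts the story
p_e_if_true = 0.8     # assumed: chewed-up homework is likely if the story is true
p_e_if_false = 0.05   # assumed: faking such evidence is unlikely

print(f"P(H | evidence) = {posterior(prior, p_e_if_true, p_e_if_false):.2f}")  # ~0.87
```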

 

The below picture demonstrates the updated probabilities.

 

 

If your reasoning is similar to the teacher's, then congratulations, because this means that you are using Bayesian reasoning. Bayesian reasoning involves incorporating conditional probabilities and updating these probabilities when new evidence is provided.

 

You may be looking at this and wondering what all the fuss is over Bayes’ Theorem. You might be asking yourself: why do people think this is so important? Well, it is true that the actual process of weighing evidence and changing beliefs is not a new practice, but the importance of the theorem does not come from the process itself; it comes from the fact that this process has been quantified, i.e. made into an expressible equation (Bayes’ Theorem).

 

Overall, the theorem and its related reasoning are useful because they take into account alternative explanations and how likely they are given the evidence that you are seeing. This means that you can’t just take a theory to be true because it fits the evidence. You need to also look at alternative hypotheses and see if they explain the evidence better. This leads you to start thinking about all hypotheses in terms of probabilities rather than certainties. It also leads you to think about beliefs in terms of evidence. If we follow Bayes’ Theorem, then nothing is just true. Things are instead only probable because they are backed up by evidence. A corollary of this is that different evidence leads to different probabilities.

An example demonstrating how to deduce Bayes' Theorem

3 Satoshi_Nakamoto 24 July 2015 03:58AM

(This post is not an attempt to convey anything new, but is instead just an attempt to provide background context on how  Bayes' theorem works by describing how it can be deduced. This is not meant to be a formal proof. There have been other elementary posts that have covered how to use Bayes’ theorem: here, here, here and here)

 

Consider the following example

Imagine that your friend has a bowl that contains cookies in two varieties: chocolate chip and white chip macadamia nut. You think to yourself: “Yum. I would really like a chocolate chip cookie”. So you reach for one, but before you can pull one out your friend lets you know that you can only pick one, that you cannot look into the bowl and that all the cookies are either fresh or stale. Your friend also tells you that there are 80 fresh cookies, 40 chocolate chip cookies, 15 stale white chip macadamia nut cookies and 100 cookies in total. What is the probability that you will pull out a fresh chocolate chip cookie?

 

To figure this out we will create a truth table. If we fill in the values that we do know, then we will end up with the below table. The cell that we want to find the value of is marked with a question mark.

 

        | Chocolate Chip | White Chip Macadamia Nut | Total
Fresh   |       ?        |                          |    80
Stale   |                |            15            |
Total   |      40        |                          |   100

 

If we look at the above table we can notice that, like in Sudoku, there are some values that we can fill in based on the information that we already know. These values are:

  • The number of stale cookies. We know that 80 cookies are fresh and that there are 100 cookies in total, so this means that there must be 20 stale cookies.
  • The number of white chip macadamia nut cookies. We know that there are 40 chocolate chip cookies and 100 cookies in total, so this means that there must be 60 white chip macadamia nut cookies.

 

If we fill in both these values we end up with the below table:

 

        | Chocolate Chip | White Chip Macadamia Nut | Total
Fresh   |                |                          |    80
Stale   |                |            15            |    20
Total   |      40        |            60            |   100

 

If we look at the table now, we can see that there are two more values that can be filled in. These values are:

  • The number of fresh white chip macadamia nut cookies. We know that there are 60 white chip macadamia nut cookies and that 15 of these are stale, so this means that there must be 45 fresh white chip macadamia nut cookies.
  • The number of stale chocolate chip cookies. We know that there are 20 stale cookies and that 15 of these are white chip macadamia nut, so this means that there must be 5 stale chocolate chip cookies.

 

If we fill in both these values we end up with the below table:

 

        | Chocolate Chip | White Chip Macadamia Nut | Total
Fresh   |                |            45            |    80
Stale   |       5        |            15            |    20
Total   |      40        |            60            |   100

 

We can now find out the number of fresh chocolate chip cookies. It is important to note that there are two ways in which we can do this. These two ways are called the inverse of each other (this will be used later):

  • Using the filled in row values. We know that there are 80 fresh cookies and that 45 of these are white chip macadamia nut, so this means that there must be 35 fresh chocolate chip cookies.
  • Using the filled in column values. We know that there are 40 chocolate chip cookies and that 5 of these are stale, so this means that there must be 35 fresh chocolate chip cookies.

 

 If we fill in the last value in the table we end up with the below table:

 

        | Chocolate Chip | White Chip Macadamia Nut | Total
Fresh   |      35        |            45            |    80
Stale   |       5        |            15            |    20
Total   |      40        |            60            |   100

 

We can now find out the probability of choosing a fresh chocolate chip cookie by dividing the number of fresh chocolate chip cookies (35) by the total number of cookies (100). This is 35 / 100 which is 35%. We now have the probability of choosing a fresh chocolate chip cookie (35%).
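The same Sudoku-style filling-in can also be written out as a few lines of arithmetic:

```python
# Fill in the cookie table from the four given facts, then read off the answer.
total = 100
fresh_total = 80
choc_total = 40
stale_macadamia = 15

stale_total = total - fresh_total                     # 20
macadamia_total = total - choc_total                  # 60
fresh_macadamia = macadamia_total - stale_macadamia   # 45
stale_choc = stale_total - stale_macadamia            # 5
fresh_choc = fresh_total - fresh_macadamia            # 35 (or choc_total - stale_choc)

print(f"P(fresh chocolate chip) = {fresh_choc / total:.0%}")   # 35%
```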

 

To get to Bayes’ theorem I will need to reduce the terms to a simpler form.

  • P(A) = probability of finding some observation A. You can think of this as the probability of the picked cookie being chocolate chip.
  • P(B)  = the probability of finding some observation B. You can think of this as the probability of the picked cookie being fresh. Please note that A is what we want to find given B. If it was desired, then A could be fresh and B chocolate chip.
  • P(~A) = the negated version of finding some observation A. You can think of this as the probability of the picked cookie not being chocolate chip, i.e. being white chip macadamia nut instead.
  • P(~B) = a negated version of finding some observation B. You can think of this as the probability of the picked cookie not being fresh i.e. being stale instead.
  • P(A∩B) = probability of being both A and B. You can think of this as the probability of the picked cookie being fresh and chocolate chip.

 

Now, things will get a bit more complicated as we start moving into the basis of Bayes’ Theorem. Let’s go through another example based on the original.

Let’s assume that before you pull out a cookie you notice that it is fresh. Can you then figure out the likelihood of it being chocolate chip before you pull it out? The answer is yes.

 

We will find this out using the table that we filled in previously. The important row is the Fresh row.

 

        | Chocolate Chip | White Chip Macadamia Nut | Total
Fresh   |      35        |            45            |    80
Stale   |       5        |            15            |    20
Total   |      40        |            60            |   100

 

Since we already know that the cookie is fresh, we can say that the likelihood of it being a chocolate chip cookie is equal to the number of fresh chocolate chip cookies (35) divided by the total number of fresh cookies (80). This is 35 / 80 which is 43.75%.

 

In a simpler form this is:

  • P(A|B) - The probability of A given B. You can think of this as the probability of the picked cookie being chocolate chip if you already know that it is fresh.

If we relook at the table we can see that there is some extra important information that we can find out about P(A|B). We can discover that it is equal to P(A∩B) / P(B). You can think of this as the probability of the picked cookie being chocolate chip if you know that it is fresh (35 / 80) being equal to the probability of the picked cookie being fresh and chocolate chip (35 / 100) divided by the probability of it being fresh (80 / 100). This is P(A|B) = (35 / 100) / (80 / 100), which becomes 0.35 / 0.8, which is the same as the answer we found above (43.75%). Take note of the fact that P(A|B) = P(A∩B) / P(B), as we will use this later.

 

Let’s now return to the inverse idea that was raised previously. If we want to know the probability of the picked cookie being fresh and chocolate chip, i.e. P(A∩B), then we can use either the Fresh row or the Chocolate Chip column of the filled-in truth table.

 

        | Chocolate Chip | White Chip Macadamia Nut | Total
Fresh   |      35        |            45            |    80
Stale   |       5        |            15            |    20
Total   |      40        |            60            |   100

If we know that the cookie is fresh, as in the Fresh row above, then we can find out that P(A∩B) = P(A|B) * P(B). This means that the probability of the picked cookie being fresh and chocolate chip (35 / 100) (remember that there were 100 cookies in total) is equal to the probability of it being chocolate chip given that you know that it is fresh (35 / 80) times the probability of it being fresh (80 / 100). So, we end up with P(A∩B) = (35 / 80) * (80 / 100), which becomes 35%, which is the same as 35 / 100, which we know is the right answer.

 

Alternatively, since we know that we can convert P(A|B) to P(A∩B) / P(B) (we found this out previously), we can also find out that P(A∩B) = P(A|B) * P(B). We can do this by using the following method:

  1. Assume P(A∩B) = P(A|B) * P(B)
  2. Convert P(A|B) to P(A∩B) / P(B) so we get P(A∩B) = (P(A∩B) * P(B)) / P(B).
  3. Notice that P(B) is on both the top and bottom of the equation, which means that it can be crossed out.
  4. Cross out P(B) to give you P(A∩B) = P(A∩B).

 

The inverse situation is when you know that the cookie is chocolate chip, as in the Chocolate Chip column in the table above. Using that column we can find out that P(A∩B) = P(B|A) * P(A). This means that the probability of the picked cookie being fresh and chocolate chip (35 / 100) is equal to the probability of it being fresh given that you know it is chocolate chip (35 / 40) times the probability of it being chocolate chip (40 / 100). This is P(A∩B) = (35 / 40) * (40 / 100), which becomes 35%, which we know is the right answer.

 

Now, we have enough information to deduce the simple form of Bayes’ Theorem.

Let’s first recount what we know:

  1. P(A|B) = P(A∩B) / P(B)
  2. P(A∩B) = P(B|A) * P(A)

By taking the first fact, P(A|B) = P(A∩B) / P(B), and using the second fact to convert P(A∩B) to P(B|A) * P(A), you end up with P(A|B) = (P(B|A) * P(A)) / P(B), which is Bayes' Theorem in its simple form.
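A quick numeric check of the simple form against the cookie numbers:

```python
# Verify P(A|B) = P(B|A) * P(A) / P(B) using the cookie table,
# with A = chocolate chip and B = fresh.
p_a = 40 / 100            # P(chocolate chip)
p_b = 80 / 100            # P(fresh)
p_b_given_a = 35 / 40     # P(fresh | chocolate chip)

p_a_given_b = p_b_given_a * p_a / p_b
print(f"P(A|B) = {p_a_given_b:.4f}")   # 0.4375, i.e. 35/80 as read off the table
```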

 

From the simple form of Bayes' Theorem, there is one more conversion that we need to make to derive the explicit form, which is the one we are trying to deduce.

 

To get to the explicit form version we need to first find out that P(B) = P(A) * P(B|A) + P(~A) * P(B|~A).

To do this let’s refer to the table again:

 

        | Chocolate Chip | White Chip Macadamia Nut | Total
Fresh   |      35        |            45            |    80
Stale   |       5        |            15            |    20
Total   |      40        |            60            |   100

 

We can see that the probability that the picked cookie is fresh (80 / 100) is equal to the probability that it is fresh and chocolate chip (35 / 100) plus the probability that it is fresh and white chip macadamia nut (45 / 100). So, we can find out that P(B) (the probability that the cookie is fresh) is equal to 35 / 100 + 45 / 100, which is 0.8 or 80%, which we know is the answer. This gives the formula: P(B) = P(A∩B) + P(~A∩B).

 

We know that P(A∩B) = P(B|A) * P(A), as we found out earlier. Similarly, we can find out that P(~A∩B) = P(~A) * P(B|~A). This means that the probability of the picked cookie being fresh and white chip macadamia nut (45 / 100) is equal to the probability of it being white chip macadamia nut (60 / 100) times the probability of it being fresh given that you know that it is white chip macadamia nut (45 / 60). This is (60 / 100) * (45 / 60), which is 45%, which we know is the answer.

 

Using this information, we can now get to the explicit form of Bayes' Theorem:

  1. We know the simple form of Bayes' Theorem: P(A|B) = (P(B|A) * P(A)) / P(B)
  2. We can convert P(B) to P(A∩B) + P(~A∩B) to get P(A|B) = (P(B|A) * P(A)) / (P(A∩B) + P(~A∩B))
  3. We can convert P(A∩B) to P(A) * P(B|A) to get P(A|B) = (P(B|A) * P(A)) / (P(A) * P(B|A) + P(~A∩B))
  4. We can convert P(~A∩B) to P(~A) * P(B|~A) to get P(A|B) = (P(B|A) * P(A)) / (P(A) * P(B|A) + P(~A) * P(B|~A))

Congratulations, we have now reached the explicit form of Bayes' Theorem:

    P(A|B) = (P(B|A) * P(A)) / (P(A) * P(B|A) + P(~A) * P(B|~A))
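As a final check, plugging the cookie numbers into the explicit form gives the same 43.75% we read off the table earlier:

```python
# Verify the explicit form with A = chocolate chip, B = fresh:
# P(A|B) = P(B|A)P(A) / (P(A)P(B|A) + P(~A)P(B|~A)).
p_a = 40 / 100                 # P(chocolate chip)
p_not_a = 60 / 100             # P(white chip macadamia nut)
p_b_given_a = 35 / 40          # P(fresh | chocolate chip)
p_b_given_not_a = 45 / 60      # P(fresh | white chip macadamia nut)

numerator = p_b_given_a * p_a                                   # 0.35
denominator = p_a * p_b_given_a + p_not_a * p_b_given_not_a     # 0.35 + 0.45 = 0.80
print(f"P(A|B) = {numerator / denominator:.4f}")                # 0.4375, matching 35/80
```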


Anyone in the Madison area who'd attend a talk on Acausal Trade on Sunday?

2 wobster109 24 July 2015 03:15AM

Hi, I'm deciding if we have enough people to have Joshua Fox give a talk on Acausal Trade on Sunday evening. Anyone in the Madison area who'd be interested?

Steelmanning AI risk critiques

19 Stuart_Armstrong 23 July 2015 10:01AM

At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.

EDIT: Thanks for all the contributions! Keep them coming...

Self-improvement without self-modification

3 Stuart_Armstrong 23 July 2015 09:59AM

This is just a short note to point out that AIs can self-improve without having to self-modify. So locking down an agent from self-modification is not an effective safety measure.

How could AIs do that? The easiest and most trivial way is to create a subagent and transfer their resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas).

Or, if the AI remains unchanged and in charge, it could change the whole process around itself, so that the process as a whole changes and improves. For instance, if the AI is inconsistent and has to pay more attention to problems that are brought to its attention than problems that aren't, it can start to act to manage the news (or the news-bearers) to hear more of what it wants. If it can't experiment on humans, it will give advice that will cause more "natural experiments", and so on. It will gradually try to reform its environment to get around its programmed limitations.

Anyway, that was nothing new or deep, just a reminder point I hadn't seen written out.

 

(Rational) website design and cognitive aesthetics generally- why no uptake?

1 TimothyScriven 23 July 2015 05:32AM

So I'm working for a friend's company at the moment (my friend is a small business owner who designs websites, and a bit of an entrepreneur). Anyway, I've persuaded him that we should research the empirical literature on what makes websites effective (which we've now done a lot of) and advertise ourselves as being special by reason of doing this (which we're only just starting to do).

One thing that I found absolutely remarkable is how unfilled this space tends to be. Like a lot of things in the broad area of empirical aesthetics, it seems like there are a lot of potentially useful results (cf. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485842/), but they're simply not being applied, either as points of real practice or of marketing differentiation.

A fascinating gap.

Mathematics for AIXI and Gödel machine

0 Faustus2 22 July 2015 06:52PM

Just a quick question, does anyone know which math topics I'd have to learn to understand the work on AIXI and the Gödel machine? Any pointers or suggestions would be appreciated. 

Oracle AI: Human beliefs vs human values

3 Stuart_Armstrong 22 July 2015 11:54AM

It seems that if we can ever define the difference between human beliefs and values, we could program a safe Oracle by requiring it to maximise the accuracy of human beliefs on a question, while keeping human values fixed (or changing very little). Plus a whole load of other constraints, as usual, but that might work for a boxed Oracle answering a single question.

This is a reason to suspect it will not be easy to distinguish human beliefs and values ^_^

How to accelerate your cognitive growth in 8 difficult steps

10 BayesianMind 22 July 2015 04:01AM

I believe there is some truth in William James' conclusion that "compared with what we ought to be, we are only half awake" (James, 1907). So what can we do to awaken our slumbering potentials? I am especially interested in our potential for cognitive growth, that is, learning to think, learn, and decide better. Early in life we learn amazingly fast, but as we transition into adulthood our cognitive development plateaus, and most of us get stuck in suboptimal mental habits and never realize our full potential. I think this is very sad, and I wish we could find a way to accelerate our cognitive growth. Yesterday, I organized a discussion on this very topic at the Consciousness Hacking Meetup, and it inspired me to propose the following eight steps as a starting point for our personal and scientific exploration of interventions to promote cognitive growth:

1.    Tap into your intrinsic motivation by mental contrasting: Who do you want to become and why? Imagine your best possible self and how wonderful it will be to become that person. Imagine how wonderful it will be to have perfected the skill you seek to develop and how it will benefit you. Next, contrast the ideal future self you just imagined with who you are right now and be brutally honest with yourself. Realizing the discrepancy between who you are and who you want to be is a powerful motivator (Oettingen et al., 2009). Finally, make yourself aware that you and the world around you will benefit from any progress that you make on yourself for a very, very long time. A few hours of hard work per week is a small price to pay for the sustained benefits of being a better person and feeling better about yourself for the rest of your life.

2.    Become more self-aware: Introspect, observe yourself, and ask your friends to develop an accurate understanding and acceptance of how you currently fare in the skill you want to improve and why. What do you do in situations that require the skill? How well does it work? How do you feel? Have you tried doing it differently? Are you currently improving? Why or why not?

3.    Develop a growth mindset (Dweck, 2006): Convince yourself that you will learn virtually any cognitive skill if you invest the necessary hard work. Even talent is just a matter of training. Each failure is a learning opportunity and so are your little successes along the way.

4.    Understand the skill and how it is learned: What do masters of this skill do? How does it work? How did they develop the skill? What are the intermediate stages? How can the skill be learned and practiced? Are there any exercises, tutorials, tools, books, or courses for acquiring the skill you want to improve on?

5.    Create a growth structure for yourself:

a. Set SMART self-improvement goals (Doran, 1981). The first three steps give you a destination (i.e. a better version of yourself), a starting point (i.e. the awareness of your strengths and weaknesses), and a road map (i.e. how to practice). Now it is time to plan your journey. Which path do you want to take from who you are right now to who you want to be in the future? A good way to delineate your path might be to place a number of milestones and decide by when you want to have reached each of them. Milestones are specific, measurable goals that lie between where you are now and where you want to be. Starting with the first milestone, you can choose a series of steps and decide when to take each step. It helps to set concrete goals at the beginning of every day. To set good milestones and choose appropriate steps, you can ask yourself the following questions: What exactly do I want to learn? How will I know that I have learned it? What will I do to develop that skill? By which time do I want to have learned it?

b. Translate your goals into implementation intentions. An implementation intention is a simple IF-THEN plan. It specifies a concrete situation in which you will take action (IF) and what exactly you will do (THEN). Committing to an implementation intention will make you much more likely to seize opportunities to make progress towards your goals and eventually achieve them (Gollwitzer, 1999).

c. You can restructure your physical environment to make your goals and your progress more salient. To make your goals more salient you can write them down and post them on your desktop, in your office, and in your apartment. To make your progress more salient, make to-do lists and celebrate checking off every subtask that you have completed. Give yourself points for every task you complete and compute your daily score, e.g. the percentage of daily goals that you have accomplished. Celebrate these small moments of victory! Post your path and score board in a visible manner.

d. Restructure your social environment to make it more conducive to growth. You can share your self-improvement goals with a friend or mentor who helps you understand where you are at, encourages you to grow, and will hold you accountable for following through with your plan. Friends can make suggestions for what to try and give you feedback on how you are doing. They can also help you notice, appreciate and celebrate your progress. Identify social interactions that help you grow and seek them out more while changing or avoiding social interactions that hinder your growth.

e. There are many things you can do to restructure your own mind for growth as well. There are at least three kinds of things you can do. First, you can be more mindful of what you do, how well it works, and why. Mindful learning is much more effective than mindless learning. Second, you can pay more attention to the moments when you do well at what you want to improve. Let yourself appreciate these small (or large) successes more: give yourself a compliment for getting better, smile, and give yourself a mental or physical pat on the shoulder. Attend specifically to your improvement. To do so, ask yourself whether you are getting better rather than how well you did. You can mentally contrast what you did this time with how poorly you used to do when you started working on that skill. Rate your improvement by how many percent better you perform now than you used to. Third, you can be kind to yourself: Don’t beat yourself up for failing and being imperfect. Instead, embrace failure as an opportunity for growth. This will allow you to continue practicing a skill that you have not mastered yet rather than giving up in frustration.

6.   Seek advice, experiment, and get feedback: Accept that you don’t know how to do it yet and adopt a beginner’s mindset. Curious infants learn much more rapidly than seniors who think they know it all. So emulate a curious infant rather than pretending that you know everything already. With this mindset, it will be much easier to seek advice from other people. Experimenting with new ways of doing things is critical, because if you merely repeat what you have done a thousand times the results won’t be dramatically different. Sometimes we are unaware of something large or small that really matters, and it is often hard to notice what you are doing wrong and what you are doing well. This is why it is crucial to get feedback; ideally from somebody who has already mastered the skill you are trying to learn.

7.  Practice, practice, practice. Becoming a world-class expert requires 10,000 hours of deliberate practice (Ericsson, Krampe, & Tesch-Romer, 1993). Since you probably don’t need to become the world’s leading expert in the skill you are seeking to develop, fewer hours will be sufficient. But the point is that you will have to practice a lot. You will have to challenge yourself regularly, and practicing will be hard. Schedule regular practice sessions and make practicing a habit. Kindly help yourself resume the practice after you have let it slip.

8.  Reflect on your progress on a regular basis, perhaps at the end of every day. Ask yourself: What have I learned today/this week/this month? Am I making any progress? What did I do well? What will I do better tomorrow/this week/this month?

References

Doran, G. T. (1981). There's a S.M.A.R.T. way to write management's goals and objectives. Management Review, 70(11), 35–36.

Dweck, C. (2006). Mindset: The new psychology of success. Random House.

Ericsson, K. A., Krampe, R. Th., & Tesch-Romer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406.

Gollwitzer, P. M. (1999). Implementation intentions: Strong effects of simple plans. American Psychologist, 54, 493-503.

James, W. (1907). The energies of men. Science, 321-332.

Oettingen, G., Mayer, D., Sevincer, A. T., Stephens, E. J., Pak, H. J., & Hagenah, M. (2009). Mental contrasting and goal commitment: The mediating role of energization. Personality and Social Psychology Bulletin, 35(5), 608-622.


AGI Safety Solutions Map

11 turchin 21 July 2015 02:41PM

When I started to work on the map of AI safety solutions, I wanted to illustrate the excellent article “Responses to Catastrophic AGI Risk: A Survey” by Kaj Sotala and Roman V. Yampolskiy, 2013, which I strongly recommend.

However, during the process I had a number of ideas to expand the classification of the proposed ways to create safe AI. In their article there are three main categories: social constraints, external constraints and internal constraints.

I added three more categories: "AI is used to create a safe AI", "Multi-level solutions" and "meta-level", which describes the general requirements for any AI safety theory.

In addition, I divided the solutions into simple and complex. Simple solutions are the ones whose recipe we know today, for example: “do not create any AI”. Most of these solutions are weak, but they are easy to implement.

Complex solutions require extensive research and the creation of complex mathematical models for their implementation, and could potentially be much stronger. But the odds are lower that there will be time to develop and implement them successfully.

Since the aforementioned article was published, several new ideas about AI safety have appeared.

These new ideas in the map are based primarily on the works of Ben Goertzel, Stuart Armstrong and Paul Christiano. But probably many more exist and have been published without coming to my attention.

Moreover, I have some ideas of my own about how to create a safe AI and I have added them into the map too. Among them I would like to point out the following ideas:

1.     Restriction of self-improvement of the AI. Just as a nuclear reactor is controlled by regulating the intensity of the chain reaction, one may try to control AI by limiting its ability to self-improve in various ways.

2.     Capture the beginning of dangerous self-improvement. A potentially dangerous AI has a moment of critical vulnerability at its start, just as a ballistic missile is most vulnerable at launch. Imagine that an AI gained an unauthorized malignant goal system and started to strengthen itself. At the beginning of this process it is still weak, and if it is below the level of human intelligence at this point, it may still be more stupid than the average human even after several cycles of self-empowerment. Let's say it has an IQ of 50 and after self-improvement it rises to 90. At this level it is already committing violations that can be observed from the outside (especially unauthorized self-improvement), but it does not yet have the ability to hide them. At this point in time, you can turn it off. Alas, this idea would not work in all cases, as some objectives may become hazardous gradually as the scale grows (1,000 paperclips are safe, one billion are dangerous, 10^20 is an x-risk). This idea was put forward by Ben Goertzel.

3.     AI constitution. First, in order to describe Friendly AI and human values we can use the existing body of criminal and other laws. (And if we create an AI that does not comply with criminal law, we are committing a crime ourselves.) Second, to describe the rules governing the conduct of AI, we can create a complex set of rules (laws that are much more complex than Asimov’s three laws), which will include everything we want from AI. This set of rules can be checked in advance by a specialized AI, which only calculates the ways in which the application of these rules could go wrong (something like mathematical proofs on the basis of these rules).

4.     "Philosophical landmines." In the map of AI failure levels I have listed a number of ways in which high-level AI may halt when faced with intractable mathematical tasks or complex philosophical problems. One may try to fight high-level AI using "landmines", that is, putting it in a situation where it will have to solve some problem, but within this problem is encoded more complex problems, the solving of which will cause it to halt or crash. These problems may include Godelian mathematical problems, nihilistic rejection of any goal system or the inability of AI to prove that it actually exists.

5. Multi-layer protection. The idea here is not that the likelihood of success adds up if we apply several methods at the same time; that would not work if all the methods are weak. The idea is that the methods of protection work together to protect the object from all sides. In a sense, human society works the same way: a child is educated by example as well as by rules of conduct, then he begins to understand the importance of complying with these rules, but at the same time the law, the police and the neighbours are watching him, so he knows that criminal acts will put him in jail. As a result, lawful behaviour becomes his goal, which he finds it rational to pursue. This idea can be reflected in a specific architecture of AI, which will have at its core a set of immutable rules; around it will be built a human emulation which makes high-level decisions, and complex tasks will be delegated to narrow Tool AIs. In addition, an independent emulation (a conscience) will check the ethics of its decisions. Decisions will first be tested in a multi-level virtual reality, and the ability of the whole system to self-improve will be significantly limited. That is, it will have an IQ of 300, but not a million. This will make it effective in solving aging and global risks, but it will also be predictable and understandable to us. The scope of its jurisdiction should be limited to a few important factors: prevention of global risks, prevention of death, and prevention of war and violence. But we should not trust it with such an ethically delicate topic as the prevention of suffering, which will be addressed with the help of conventional methods.

This map could be useful for the following applications:

1. As illustrative material in discussions. People often propose ad hoc solutions as soon as they learn about the problem of friendly AI, or focus on one of their favourite solutions.

2. As a quick way to check whether a new solution really has been found.

3. As a tool to discover new solutions. Any systematisation creates "free cells" which one can fill with new solutions. One can also combine existing solutions or be inspired by them.

4. There are several new ideas in the map.

A companion to this map is the map of AI failures levels. In addition, this map is subordinated to the map of global risk prevention methods and corresponds to the block "Creating Friendly AI" Plan A2 within it.

The pdf of the map is here: http://immortality-roadmap.com/aisafety.pdf

 

Previous posts:

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

 

 



Speculative rationality skills and appropriable research or anecdote

3 Clarity 21 July 2015 04:02AM

Is rationality training in its infancy? I'd like to think so, given the paucity of novel, usable information produced by rationalists since the Sequence days. I like to model the rationalist body of knowledge as a superset of pertinent fields such as decision analysis, educational psychology and clinical psychology. This reductionist model enables rationalists to examine the validity of rationalist constructs while standing on the shoulders of giants.

CFAR's obscurantism (and subsequent price gouging) capitalises on our [fear of missing out](https://en.wikipedia.org/wiki/Fear_of_missing_out). They brand established techniques like mindfulness as 'againstness' or reference class forecasting as 'hopping' as if these were of their own genesis, spiting academic tradition and cultivating an insular community. In short, LessWrongers predictably flout [cooperative principles](https://en.wikipedia.org/wiki/Cooperative_principle).

This thread is to encourage you to speculate on potential rationality techniques that are underdetermined by existing research and might be useful areas for rationalist individuals and organisations to explore. I feel this may be a better use of rationality-training organisations' time than gatekeeping information.

To get this thread started, I've posted a speculative rationality skill I've been working on. I'd appreciate any comments about it or experiences with it. However, this thread is about working towards the generation of rationality skills more broadly.

Interesting things to do during a gap year after getting undergraduate degree

3 ChaosMote 21 July 2015 01:07AM

Hello, all. My sibling asked me for advice recently, and I'm making this post on his behalf.

 

Said sibling currently has one more year to go at MIT before he gets his bachelor's degree in Mathematics/CS. He is also enrolled in a 5-year master's program, so he will need one more year after that to finish a master's, after which he anticipates getting a job somewhere in the CS industry / finance / academia. Anyway, he is interested in taking a gap year after finishing his bachelor's to pick up some novel experiences and try something different from what he has been doing already and plans to do after graduation.

 

Right now, he is in the brainstorming stage, and is looking for ideas. Note that he is not opposed to getting a job or something of the like - as long as it's a different experience than what he would get working for a large software company, or a hedge fund, or something of the like. Financially, he does need to earn enough to live on (this isn't quite a vacation), but he isn't worried about money aside from that (so the "money" constraint only needs to be satisficed, not optimized). With that said, what are some things that he might consider doing?

LessWrong Diplomacy Game 2015

6 Sherincall 20 July 2015 03:10PM

Related: Diplomacy as a Game Theory Laboratory by Yvain.

I've been floating this idea around for a while, and there was enough interest to organize it.

Diplomacy is a board game of making and breaking alliances. It is a semi-iterative prisoner's dilemma with 7 prisoners. The rules are very simple, there is no luck factor and any tactical tricks can be learned quickly. You play as one of the great powers in pre-WW1 Europe, and your goal is to dominate over half of the board. To do this, you must negotiate alliances with the other players, and then stab them at the most opportune moment. But beware, if you are too stabby, no one will trust you. And if you are too trusting, you will get stabbed yourself.

If you have never played the game, don't worry. It is really quick to pick up. I explain the rules in detail here.

The game will (most likely) be played at webdiplomacy.net. You need an account, which requires a valid email. To play the game, you will need to spend at least 10 minutes every phase (3 days) to enter your orders. In the meantime, you will be negotiating with other players. That takes as much time as you want it to, but I recommend setting aside at least 30 minutes per day (in 5-minute quantums). A game usually lasts about 10 in-game years, which comes down to 30-something phases (60-90 days). A phase can progress early if everyone agrees. Likewise, the game can be paused indefinitely if everyone agrees (e.g. if a player will not have Internet access).

Joining a game is Serious Business, as missing a deadline can spoil it for the other 6 players. Please apply iff:

  1. You will be able to access the game for 10 minutes every 3 days (90% certainty required)
  2. If 1) changes, you will be able to let the others know at least 1 day in advance (95% certainty required)
  3. You will be able to spend an average of 30 minutes per day (standard normal distribution)
  4. You will not hold an out-of-game grudge against a player who stabbed you (adjusting for stabbyness in potential future games is okay)

If you still wish to play, please sign up in the comments. Please specify the earliest time it would suit you for the game to start. If we somehow get more than 7 players, we'll discuss our options (play a variant with more players, multiple games, etc).

 

See also: First game of LW Diplomacy

 


Well, the interest is there, so I've set up two games.

Game 1: http://webdiplomacy.net/board.php?gameID=164863  (started!)

Game 2: http://webdiplomacy.net/board.php?gameID=164912  (three spots left, starts 1st of August)

Password: clippy


Please note a couple important rules of the webdiplomacy.net website:

 

  1. You can only have one account. If you are caught with multiple accounts, they will all be banned.
  2. You may not blame your moves on website bugs as a diplomacy tactic. This gives the site's mods extra work to do when someone actually reports the bug.
  3. Should go without saying, but you are not allowed to illegally access another player's account (i.e. hacking).

 

Should you write longer comments? (Statistical analysis of the relationship between comment length and ratings)

11 cleonid 20 July 2015 02:09PM

A few months ago we launched an experimental website. In brief, our goal is to create a platform where unrestricted freedom of speech would be combined with high quality of discussion. The problem can be approached from two directions. One is to help users navigate through content and quickly locate the higher quality posts. Another, which is the topic of this article, is to help users improve the quality of their own posts by providing them with meaningful feedback.

One important consideration for those who want to write better comments is how much detail to leave out. Our statistical analysis shows that for many users there is a strong connection between the ratings and the size of their comments. For example, for Yvain (Scott Alexander) and Eliezer_Yudkowsky, the average number of upvotes grows almost linearly with increasing comment length.

 

 

This trend, however, does not apply to all posters. For example, for the group of top ten contributors (in the last 30 days) to LessWrong, the average number of upvotes increases only slightly with the length of the comment (see the graph below).  For quite a few people the change even goes in the opposite direction – longer comments lead to lower ratings.

[Graph: average upvotes vs. comment length for the top ten LessWrong contributors]

Naturally, even if your longer comments are rated higher than the short ones, this does not mean that inflating comments would always produce positive results. For most users (including popular writers, such as Yvain and Eliezer), the average number of downvotes increases with increasing comment length. The data also shows that long comments that get most upvotes are generally distinct from long comments that get most downvotes. In other words, long comments are fine as long as they are interesting, but they are penalized more when they are not.

 

 

The rating patterns vary significantly from person to person. For some posters, the average number of upvotes remains flat until the comment length reaches some threshold and then starts declining with increasing comment length. For others, the optimal comment length may be somewhere in the middle. (Users who have accounts on both Lesswrong and Omnilibrium can check the optimal length for their own comments on both websites by using this link.)
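
For readers curious about the mechanics, here is a minimal sketch of the kind of binned analysis described above. This is not Omnilibrium's actual code; the comment representation and the bin size are assumptions made purely for illustration.

```python
from statistics import mean

def average_score_by_length(comments, bin_size=200):
    """Group comments into length bins and average their upvote counts.

    `comments` is a list of (length_in_characters, upvotes) pairs;
    the representation and the 200-character bin size are illustrative assumptions.
    """
    bins = {}
    for length, upvotes in comments:
        bins.setdefault(length // bin_size, []).append(upvotes)
    # Return (bin lower bound, average upvotes, sample size) per bin, in order.
    return [(b * bin_size, mean(scores), len(scores))
            for b, scores in sorted(bins.items())]

# Toy data: in this made-up sample, longer comments earn slightly more upvotes on average.
comments = [(120, 2), (340, 3), (560, 4), (780, 3), (1500, 7), (1800, 1)]
for lower, avg, n in average_score_by_length(comments):
    print(f"length >= {lower}: avg upvotes {avg:.1f} (n={n})")
```

Running the same binning separately on upvotes and downvotes is one simple way to see the pattern described above, where long comments gain more of both.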

Obviously, length is just one among many factors that affect comment quality, and for most users it does not explain more than 20% of the variation in their ratings. We have a few other ideas on how to provide people with meaningful feedback on both the style and the content of their posts. But before implementing them, we would like to get your opinions first. Would such feedback actually be useful to you?

Open Thread, Jul. 20 - Jul. 26, 2015

4 MrMind 20 July 2015 06:55AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Thinking like a Scientist

4 FrameBenignly 19 July 2015 02:43PM
I've often wondered why scientific thinking seems to be so rare.  What I mean by this is dividing problems into theory and empiricism, specifying your theory exactly and then looking for evidence to either confirm or deny it, or finding evidence from which to later form an exact theory.

This is a bit narrower than the broader scope of rational thinking.  A lot of rationality isn't scientific.  Scientific methods don't just allow you to get a solution, but also to understand that solution.

For instance, a lot of early Renaissance tradesmen were rational, but not scientific.  They knew that a certain set of steps produced iron, but the average blacksmith couldn't tell you anything about chemical processes.  They simply did a set of steps and got a result.

Similarly, a lot of modern medicine is rational, but not too scientific.  A doctor sees something and it looks like a common ailment with similar symptoms they've seen often before, so they just assume that's what it is.  They may run a test to verify their guess.  Their job generally requires a gigantic memory of different diseases, but not too much knowledge of scientific investigation.

What's most damning is that our scientific curriculum in schools don't teach a lot of scientific thinking.

What we get instead is mostly useless facts.  We learn what a cell membrane is, or how to balance a chemical equation.  Learning about, say, the difference between independent and dependent variables is often left to circumstance.  You learn about type I and type II errors when you happen upon a teacher who thinks it's a good time to include that in the curriculum, or you learn it on your own.  Some curriculums include a required research methods course, but the availability and quality of this course varies greatly between both disciplines and colleges.  Why there isn't a single standardized method of teaching this stuff is beyond me.  Even math curriculums are structured around calculus instead of the much more useful statistics and data science, placing ridiculous hurdles in front of the typical non-major that most won't surmount.

It should not be surprising then that so many fail at even basic analysis.  I have seen many people make basic errors that they are more than capable of understanding but simply were never taught.  People aren't precise with their definitions.  They don't outline their relevant variables.  They construct far too complex theoretical models without data.  They come to conclusions based on small sample sizes.  They overweight personal experiences, even those experienced by others, and underweight statistical data.  They focus too much on outliers and not enough on averages.  Even professors, who do excellent research otherwise, often suddenly stop thinking analytically as soon as they step outside their domain of expertise.  And some professors never learn the proper method.

Much of this site focuses on logical consistency and eliminating biases.  It often takes this to an extreme; what Yvain refers to as X-Rationality.  But eliminating biases barely scratches the surface of what is often necessary to truly understand a problem.  This may be why it is said that learning about rationality often reduces rationality.  An incomplete, slightly improved, but still quite terrible solution may generate a false sense of certainty.  Unbiased analysis won't fix a lousy dataset.  And it seems rather backwards to focus on what not to do (biases) rather than what to do (analytic techniques).

 

True understanding is often extremely hard.  Good scientific analysis is hard.  It's disappointing that most people don't seem to understand even the basics of science.

LessWrong Hamburg Meetup July 2015 Summary

6 Gunnar_Zarncke 18 July 2015 11:13PM

After a hiatus of about a year, the LessWrong Hamburg Meetup had a very strong revival! Infused with motivation from the Berlin Weekend, I reached out to colleagues and via meetup.com, and an amazing 24 people gathered on July 17th in a location kindly provided by my employer.

Because the number of participants quickly exceeded my expectations, I had to scramble to put something together for a larger group. For this I had tactical aid from blob and practical support from colleagues, who helped put everything together, from name tags to food, drinks and chairs.

We had an easy start with getting to know each other with Fela's Ice-Breaking Game.

The main topics covered were:

Besides the main topics, there was a good atmosphere, with many people having smaller discussions.

The event ended with a short wrap-up based on Irina's Sustainable Change talk from the Berlin event, which prompted some people to take action based on what they heard.

What I learned from the event:

  • I still tend to do overplanning. Maybe having a plan for eventualities isn't bad, but the agenda doesn't need to be as highly structured as I made it. Too much structure could create expectations that can't be met.
  • Apparently I appeared stressed, but I didn't feel that way myself, probably because I was hurrying around. I wonder whether that has a negative effect on other people and how I can avoid it, especially since I don't feel stressed myself.
  • A standard-issue meeting room for 12 people can comfortably host 24 people if tables and furniture are rearranged and comfy beanbags etc. are added.
  • The number of people showing up can vary unpredictably. This may depend on the weather, on how the event is communicated, and on unknown factors.
  • Visualize the concrete effects of your charity. This can give you a specific intuition you can use to decide whether it's worth it. Imma's example was thinking about how your donated AMF bednets hang over children and protect from mosquitoes.

There will definitely be a follow-up meeting of a comparable size in a few month (no date yet). And maybe smaller get-together will be organized inbetween.

 

List of Fully General Counterarguments

8 Gunnar_Zarncke 18 July 2015 09:49PM

Follow-up to: Knowing About Biases Can Hurt People

See also: Fully General Counterargument (LW Wiki)

A fully general counterargument [FGCA] is an argument which can be used to discount any conclusion the arguer does not like.

With the caveat that the arguer doesn't need to be aware that this is the case. But if (s)he is not aware of it, this seems like the other biases we are prone to. The question is: Is there a tendency or risk to accidentally form FGCAs? Do we easily fall into this mind-trap?

This post tries to (non-exhaustively) list some FGCAs as well as possible countermeasures.

continue reading »

You are (mostly) a simulation.

-4 Eitan_Zohar 18 July 2015 04:40PM

This post was completely rewritten on July 17th, 2015, 6:10 AM. Comments before that are not necessarily relevant.

Assume that our minds really do work the way Unification tells us: what we are experiencing is actually the sum total of every possible universe which produces them. Some universes have more 'measure' than others, and those are typically the stable ones; we do not experience chaos. I think this makes a great deal of sense: if our minds really are patterns of information, I do not see why a physical world should have a monopoly on them.

Now to prove that we live in a Big World. The logic is simple: why would existence be finite? If we're going to reason that some fundamental law causes everything to exist, I don't see why that law would restrict itself to this universe and nothing else. Why would it stop? It is, arguably, simply the nature of things for an infinite multiverse to exist.

I'm pretty terrible at math, so please try to forgive me if this sounds wrong. Take the 'density' of physical universes where you exist (the measure, if you will) and call it j. Then take the measure of universes where you are simulated and call it p. So the question becomes: is j greater than p? You might be thinking yes, but remember that there doesn't have to be only one simulation per universe. According to our Big World model, there is a universe out there in which all processing power (or a significant portion of it) has been turned into simulations of you.

So we take the number of minds being simulated per universe and call that x. Then the real question becomes whether j > px. What sort of universe is common enough and contains enough minds to overcome j? If you say that approximately 10^60 simulated human minds could fit in such a universe (a reasonable guess for this universe), but that such universes are five trillion times rarer than the universe we live in, then it's clear that our own 'physical' measure is hopelessly lower than our simulated measure.
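
For concreteness, here is a minimal sketch of that comparison using the post's own illustrative numbers (10^60 simulated minds per simulating universe, simulating universes five trillion times rarer); the absolute value of j is arbitrary, since only the ratio matters.

```python
j = 1.0        # measure of physical universes containing you (arbitrary units)
p = j / 5e12   # simulating universes assumed five trillion times rarer than physical ones
x = 1e60       # simulated copies of you per such universe (the post's guess)

simulated_measure = p * x
print(simulated_measure / j)  # ~2e47: simulated measure dwarfs physical measure
print(j > p * x)              # False, so j > px fails by a huge margin
```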

Should we worry about this? It would seem highly probable that in most universes where I am being simulated, I (or humans generally) once existed, since the odds of randomly stumbling upon me in Mind Space seem unlikely enough to ignore. Presumably the simulators are either AIs gone wrong or someone trying to grab some of my measure, for whatever reason.

As a way of protecting measure, pretty much all of our post-singularity universes would divide up the matter of the universe among each person living, create as many simulations of them as possible from birth, and allow them to go through the Singularity. I expect that my ultimate form is a single me, not knowing if he is simulated or not, with billions of perfect simulations of himself across our universe, all reasoning the same way (he would be told this by the AI, since there isn't any more reason for secrecy). This, I think, would be able to guard my measure against nefarious or bizarre universes in which I am simulated. It cannot just simulate the last few moments of my life, because those other universes might try to grab younger versions of me. So if we take j to be safe measure rather than physical measure, and p to be unsafe or alien measure, it becomes jx > px, which I think is quite reasonable.

I do not think of this as some kind of solipsist nightmare; the whole point of this is to simulate the 'real' you, the one that really existed, and part of your measure is, after all, always interacting in a real universe. I would suggest that by any philosophical standard the simulations could be ignored, with the value of your life being the same as ever.

I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life

0 turchin 18 July 2015 01:13PM

I have a non-zero probability of dying next year. At my age of 42 it is not less than 1 per cent, and probably more. I could make many investments which would slightly lower my chance of dying – from a healthy lifestyle to a cryo contract. And I have made many of them.

From an economic point of view, death means at least losing all your capital.

If my net worth is something like one million (mostly real estate and art), and I have a 1 per cent chance of dying, that is equal to losing 10k a year. In fact it is more, because death itself is so unpleasant that it has a large negative monetary value, and I should also include the cost of lost opportunities.

Once I had a discussion with Vladimir Nesov about what is better: to fight for immortality, or to create a Friendly AI which will explain what is really good. My position was that immortality is better because it is measurable, knowable, and has instrumental value for most other goals, and also includes prevention of the worst thing on earth, which is death. Nesov said (as I remember) that personal immortality does not matter as much as the total value of humanity's existence, and moreover, that his personal existence has not much value at all. All we need to do is create a Friendly AI. I find his words contradictory, because if his existence does not matter, then any human's existence also doesn't matter, because there is nothing special about him.

But later I concluded that the best approach is to make bets that simultaneously raise the probability of my personal immortality, of existential risk prevention, and of the creation of friendly AI. It is easy to imagine a situation where research into personal immortality, such as creating technology for the delivery of longevity genes, would conflict with the goal of existential risk reduction, because the same technology could be used to create dangerous viruses.

The best way here is to invest in creating a regulating authority which will be able to balance these needs, and it can't be a friendly AI, because such regulation is needed before friendly AI is created.

That is why I think the US needs a Transhumanist president: a real person whose value system I can understand and support. And that is why I support Zoltan Istvan's 2016 campaign.

Exponential Technologies Institute and I donated 10,000 USD to the Immortality Bus project. This bus will be the start of the presidential campaign of the author of “The Transhumanist Wager”. Seven film crews have agreed to cover the event. It will create high publicity and cover the topics of immortality, aging research, Friendly AI and x-risk prevention, and it will help to raise more funds for this type of research.

 

Experiences in applying "The Biodeterminist's Guide to Parenting"

59 juliawise 17 July 2015 07:19PM

I'm posting this because LessWrong was very influential on how I viewed parenting, particularly the emphasis on helping one's brain work better. In this context, creating and influencing another person's brain is an awesome responsibility.


It turned out to be a lot more anxiety-provoking than I expected. I don't think that's necessarily a bad thing, as the possibility of screwing up someone's brain should make a parent anxious, but it's something to be aware of. I've heard some blithe "Rational parenting could be a very high-impact activity!" statements from childless LWers who may be interested to hear some experiences in actually applying that.


One thing that really scared me about trying to raise a child with the healthiest-possible brain and body was the possibility that I might not love her if she turned out to not be smart. 15 months in, I'm no longer worried. Evolution has been very successful at producing parents and children that love each other despite their flaws, and our family is no exception. Our daughter Lily seems to be doing fine, but if she turns out to have disabilities or other problems, I'm confident that we'll roll with the punches.

 

Cross-posted from The Whole Sky.

 


Before I got pregnant, I read Scott Alexander's (Yvain's) excellent Biodeterminist's Guide to Parenting and was so excited to have this knowledge. I thought how lucky my child would be to have parents who knew and cared about how to protect her from things that would damage her brain.

Real life, of course, got more complicated. It's one thing to intend to avoid neurotoxins, but another to arrive at the grandparents' house and find they've just had ant poison sprayed. What do you do then?


Here are some tradeoffs Jeff and I have made between things that are good for children in one way but bad in another, or things that are good for children but really difficult or expensive.


Germs and parasites


The hygiene hypothesis states that lack of exposure to germs and parasites increases risk of auto-immune disease. Our pediatrician recommended letting Lily play in the dirt for this reason.


While exposure to animal dander and pollution increases asthma later in life, it seems that being exposed to these in the first year of life actually protects against asthma. Apparently if you're going to live in a house with roaches, you should do it in the first year or not at all.


Except some stuff in dirt is actually bad for you.


Scott writes:

Parasite-infestedness of an area correlates with national IQ at about r = -0.82. The same is true of US states, with a slightly reduced correlation coefficient of -0.67 (p<0.0001). . . . When an area eliminates parasites (like the US did for malaria and hookworm in the early 1900s) the IQ for the area goes up at about the right time.


Living with cats as a child seems to increase risk of schizophrenia, apparently via toxoplasmosis. But in order to catch toxoplasmosis from a cat, you have to eat its feces during the two weeks after it first becomes infected (which it’s most likely to do by eating birds or rodents carrying the disease). This makes me guess that most kids get it through tasting a handful of cat litter, dirt from the yard, or sand from the sandbox rather than simply through cat ownership. We live with indoor cats who don’t seem to be mousers, so I’m not concerned about them giving anyone toxoplasmosis. If we build Lily a sandbox, we’ll keep it covered when not in use.


The evidence is mixed about whether infections like colds during the first year of life increase or decrease your risk of asthma later. After the newborn period, we defaulted to being pretty casual about germ exposure.


Toxins in buildings


Our experiences with lead. Our experiences with mercury.


In some areas, it’s not that feasible to live in a house with zero lead. We live in Boston, where 87% of the housing was built before lead paint was banned. Even in a new building, we’d need to go far out of town before reaching soil that wasn’t near where a lead-painted building had been.


It is possible to do some renovations without exposing kids to lead. Jeff recently did some demolition of walls with lead paint, very carefully sealed off and cleaned up, while Lily and I spent the day elsewhere. Afterwards her lead level was no higher than it had been.


But Jeff got serious lead poisoning as a toddler while his parents did major renovations on their old house. If I didn’t think I could keep the child away from the dust, I wouldn’t renovate.


Recently a house across the street from us was gutted, with workers throwing debris out the windows and creating big plumes of dust (presumably lead-laden) that blew all down the street. Later I realized I should have called city building inspection services, which would have at least made them carry the debris into the dumpster instead of throwing it from the second story.


Floor varnish releases formaldehyde and other nasties as it cures. We kept Lily out of the house for a few weeks after Jeff redid the floors. We found it worthwhile to pay rent at our previous house in order to not have to live in the new house while this kind of work was happening.

 

Pressure-treated wood was treated with arsenic and chromium until around 2004 in the US. It has a greenish tint, though this may have faded with time. Playing on playsets or decks made of such wood increases children's cancer risk. It should not be used for furniture (I thought this would be obvious, but apparently it wasn't to some of my handyman relatives).


I found it difficult to know how to deal with fresh paint and other fumes in my building at work while I was pregnant. Women of reproductive age have a heightened sense of smell, and many pregnant women have heightened aversion to smells, so you can literally smell things some of your coworkers can’t (or don’t mind). The most critical period of development is during the first trimester, when most women aren’t telling the world they’re pregnant (because it’s also the time when a miscarriage is most likely, and if you do lose the pregnancy you might not want to have to tell the world). During that period, I found it difficult to explain why I was concerned about the fumes from the roofing adhesive being used in our building. I didn’t want to seem like a princess who thought she was too good to work in conditions that everybody else found acceptable. (After I told them I was pregnant, my coworkers were very understanding about such things.)


Food


Recommendations usually focus on what you should eat during pregnancy, but obviously children’s brain development doesn’t stop there. I’ve opted to take precautions with the food Lily and I eat for as long as I’m nursing her.


Claims that pesticide residues are poisoning children scare me, although most scientists seem to think the paper cited is overblown. Other sources say the levels of pesticides in conventionally grown produce are fine. We buy organic produce at home but eat whatever we’re served elsewhere.


I would love to see a study with families randomly selected to receive organic produce for the first 8 years of the kids’ lives, then looking at IQ and hyperactivity. But no one’s going to do that study because of how expensive 8 years of organic produce would be.

The Biodeterminist’s Guide doesn’t mention PCBs in the section on fish, but fish (particularly farmed salmon) are a major source of these pollutants. They don’t seem to be as bad as mercury, but are neurotoxic. Unfortunately their half-life in the body is around 14 years, so if you have even a vague idea of getting pregnant ever in your life you shouldn’t be eating farmed salmon (or Atlantic/farmed salmon, bluefish, wild striped bass, white and Atlantic croaker, blackback or winter flounder, summer flounder, or blue crab).


I had the best intentions of eating lots of the right kind of high-omega-3, low-pollutant fish during and after pregnancy. Unfortunately, fish was the only food I developed an aversion to. Now that Lily is eating food on her own, we tried several sources of omega-3 and found that kippered herring was the only success. Lesson: it’s hard to predict what foods kids will eat, so keep trying.


In terms of hassle, I underestimated how long I would be “eating for two” in the sense that anything I put in my body ends up in my child’s body. Counting pre-pregnancy (because mercury has a half-life of around 50 days in the body, so sushi you eat before getting pregnant could still affect your child), pregnancy, breastfeeding, and presuming a second pregnancy, I’ll probably spend about 5 solid years feeding another person via my body, sometimes two children at once. That’s a long time in which you have to consider the effect of every medication, every cup of coffee, every glass of wine on your child. There are hardly any medications considered completely safe during pregnancy and lactation; most things are in Category C, meaning there’s some evidence from animal trials that they may be bad for human children.


Fluoride


Too much fluoride is bad for children’s brains. The CDC recently recommended lowering fluoride levels in municipal water (though apparently because of concerns about tooth discoloration more than neurotoxicity). Around the same time, the American Dental Association began recommending the use of fluoride toothpaste as soon as babies have teeth, rather than waiting until they can rinse and spit.


Cavities are actually a serious problem even in baby teeth, because of the pain and possible infection they cause children. Pulling them messes up the alignment of adult teeth. Drilling on children too young to hold still requires full anesthesia, which is dangerous itself.


But Lily isn’t particularly at risk for cavities. 20% of children get a cavity by age six, and they are disproportionately poor, African-American, and particularly Mexican-American children (presumably because of different diet and less ability to afford dentists). 75% of cavities in children under 5 occur in 8% of the population.


We decided to have Lily brush without toothpaste, avoid juice and other sugary drinks, and see the dentist regularly.


Home pesticides


One of the most commonly applied insecticides makes kids less smart. This isn’t too surprising, given that it kills insects by disabling their nervous system. But it’s not something you can observe on a small scale, so it’s not surprising that the exterminator I talked to brushed off my questions with “I’ve never heard of a problem!”


If you get carpenter ants in your house, you basically have to choose between poisoning them or letting them structurally damage the house. We’ve only seen a few so far, but if the problem progresses, we plan to:

1) remove any rotting wood in the yard where they could be nesting

2) have the perimeter of the building sprayed

3) place gel bait in areas kids can’t access

4) only then spray poison inside the house.


If we have mice we’ll plan to use mechanical traps rather than poison.


Flame retardants


Since the 1970s, California required a high degree of flame-resistance from furniture. This basically meant that US manufacturers sprayed flame retardant chemicals on anything made of polyurethane foam, such as sofas, rug pads, nursing pillows, and baby mattresses.

The law recently changed, due to growing acknowledgement that the carcinogenic and neurotoxic chemicals were more dangerous than the fires they were supposed to be preventing. Even firefighters opposed the use of the flame retardants, because when people die in fires it’s usually from smoke inhalation rather than burns, and firefighters don’t want to breathe the smoke from your toxic sofa (which will eventually catch fire even with the flame retardants).


We’ve opted to use furniture from companies that have stopped using flame retardants (like Ikea and others listed here). Apparently futons are okay if they’re stuffed with cotton rather than foam. We also have some pre-1970s furniture that tested clean for flame retardants. You can get foam samples tested for free.


The main vehicle for children ingesting the flame retardants is that it settles into dust on the floor, and children crawl around in the dust. If you don’t want to get rid of your furniture, frequent damp-mopping would probably help.


The standards for mattresses are so stringent that the chemical sprays aren’t generally used, and instead most mattresses are wrapped in a flame-resistant barrier which apparently isn’t toxic. I contacted the companies that made our mattresses and they’re fine.


Ratings for chemical safety of children’s car seats here.


Thoughts on IQ


A lot of people, when I start talking like this, say things like “Well, I lived in a house with lead paint/played with mercury/etc. and I’m still alive.” And yes, I played with mercury as a child, and Jeff is still one of the smartest people I know even after getting acute lead poisoning as a child.

But I do wonder if my mind would work a little better without the mercury exposure, and if Jeff would have had an easier time in school without the hyperactivity (a symptom of lead exposure). Given the choice between a brain that works a little better and one that works a little worse, who wouldn’t choose the one that works better?


We’ll never know how an individual’s nervous system might have been different with a different childhood. But we can see population-level effects. The Environmental Protection Agency, for example, is fine with calculating the expected benefit of making coal plants stop releasing mercury by looking at the expected gains in terms of children’s IQ and increased earnings.


Scott writes:

A 15 to 20 point rise in IQ, which is a little more than you get from supplementing iodine in an iodine-deficient region, is associated with half the chance of living in poverty, going to prison, or being on welfare, and with only one-fifth the chance of dropping out of high-school (“associated with” does not mean “causes”).


Salkever concludes that for each lost IQ point, males experience a 1.93% decrease in lifetime earnings and females experience a 3.23% decrease. If Lily would earn about what I do, saving her one IQ point would save her $1600 a year or $64000 over her career. (And that’s not counting the other benefits she and others will reap from her having a better-functioning mind!) I use that for perspective when making decisions. $64000 would buy a lot of the posh prenatal vitamins that actually contain iodine, or organic food, or alternate housing while we’re fixing up the new house.
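
As a check on that arithmetic, here is a minimal sketch; the $50,000 salary and 40-year career length are assumed round numbers not stated in the post, chosen only to show how figures of roughly $1,600 a year and $64,000 per career arise from the 3.23% estimate.

```python
salary = 50_000            # assumed annual earnings, not stated in the post
career_years = 40          # assumed career length, not stated in the post
pct_per_iq_point = 0.0323  # Salkever's estimate for females, quoted above

annual_loss = salary * pct_per_iq_point     # ~ $1,615 per year per IQ point
lifetime_loss = annual_loss * career_years  # ~ $64,600 per career
print(round(annual_loss), round(lifetime_loss))
```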


Conclusion


There are times when Jeff and I prioritize social relationships over protecting Lily from everything that might harm her physical development. It’s awkward to refuse to go to someone’s house because of the chemicals they use, or to refuse to eat food we’re offered. Social interactions are good for children’s development, and we value those as well as physical safety. And there are times when I’ve had to stop being so careful because I was getting paralyzed by anxiety (literally perched in the rocker with the baby trying not to touch anything after my in-laws scraped lead paint off the outside of the house).


But we also prioritize neurological development more than most parents, and we hope that will have good outcomes for Lily.

New LW Meetup: Kyiv, New Hampshire

4 FrankAdamek 17 July 2015 03:38PM

This summary was posted to LW Main on July 10th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

AI: requirements for pernicious policies

7 Stuart_Armstrong 17 July 2015 02:18PM

Some have argued that "tool AIs" are safe(r). Recently, Eric Drexler decomposed AIs into "problem solvers" (eg calculators), "advisors" (eg GPS route planners), and actors (autonomous agents). Both solvers and advisors can be seen as examples of tools.

People have argued that tool AIs are not safe. It's hard to imagine a calculator going berserk, no matter what its algorithm is, but it's not too hard to come up with clear examples of dangerous tools. This suggests that solvers vs advisors vs actors (or tools vs agents, or oracles vs agents) is not the right distinction.

Instead, I've been asking: how likely is the algorithm to implement a pernicious policy? If we model the AI as having an objective function (or utility function) and an algorithm that implements it, a pernicious policy is one that scores high on the objective function but is not at all what was intended. A pernicious policy could be harmless and entertaining, or much more severe.

I will lay aside, for the moment, the issue of badly programmed algorithms (possibly containing their own objective sub-functions). In any case, to implement a pernicious policy, we have to ask these questions about the algorithm:

  1. Do pernicious policies exist? Are there many?
  2. Can the AI find them?
  3. Can the AI test them?
  4. Would the AI choose to implement them?

The answer to 1. seems to be trivially yes. Even a calculator could, in theory, output a series of messages that socially hack us, blah, take over the world, blah, extinction, blah, calculator finishes its calculations. What is much more interesting is that some types of agents have many more pernicious policies than others. This seems to be the big difference between actors and other designs. An actor AI in complete control of the USA's or Russia's nuclear arsenal has all sorts of pernicious policies easily to hand; an advisor or oracle has far fewer (generally going through social engineering), and a tool typically fewer still. A lot of the physical protection measures are about reducing the number of successful pernicious policies the AI has access to.

The answer to 2. is mainly a function of the power of the algorithm. A basic calculator will never find anything dangerous: its programming is simple and tight. But compare that with an agent with the same objective function and the ability to do an unrestricted policy search with vast resources... So it seems that the answer to 2. does not depend on any solver vs actor division, but purely on the algorithm used.

And now we come to the big question 3., whether the AI can test these policies. Even if the AI can find pernicious policies that rank high on its objective function, it will never implement them unless it can ascertain this fact. And there are several ways it could do so. Let's assume that a solver AI has a very complicated objective function - one that encodes many relevant facts about the real world. Now, the AI may not "care" about the real world, but it has a virtual version of it, in which it can virtually test all of its policies. With enough computing power, it can establish whether the pernicious policy would be effective at achieving its virtual goal. If this is a good approximation of how the pernicious policy would behave in the real world, we could have a problem.

Extremely detailed objective functions are unlikely. But even simple ones can show odd behaviour if the agent gets to interact repeatedly with the real world - this is the issue with reinforcement learning. Suppose that the agent attempts a translation job, and is rewarded on the accuracy of its translation. Depending on the details of what the AI knows and who chooses the rewards, the AI could end up manipulating its controllers, similarly to this example. The problem is that once there is any interaction, all the complexity of humanity could potentially show up in the reward function, even if the objective function is simple.

Of course, some designs make this very unlikely - resetting the AI periodically can help to alleviate the problem, as can choosing more objective criteria for any rewards. Lastly on this point, we should mention the possibility that human R&D, by selecting and refining the objective function and the algorithm, could take the role of testing the policies. This is likely to emerge only in cases where many AI designs are considered, and the best candidates are retained based on human judgement.

Finally, we come to the question of whether the AI will implement the policy once it has found and tested it. You could say that the point of FAI is to create an AI that doesn't choose to implement pernicious policies - but, more correctly, the point of FAI is to ensure that very few (or zero) pernicious policies exist in the first place, as they all score low on the utility function. However, there are a variety of more complicated designs - satisficers, agents using crude measures - where the questions "Do pernicious policies exist?" and "Would the AI choose to implement them?" could become quite distinct.

 

Conclusion: a more through analysis of AI designs is needed

A calculator is safe because it is a solver, it has a very simple objective function, there are no holes in the algorithm, and it can neither find nor test any pernicious policies. It is the combination of these elements that makes it almost certainly safe. If we want to make the same claim about other designs, neither "it's just a solver" nor "its objective function is simple" would be enough; we need a careful analysis.

Though, as usual, "it's not certainly safe" is a quite distinct claim from "it's (likely) dangerous", and they should not be conflated.

On the Galactic Zoo hypothesis

-8 estimator 16 July 2015 07:12PM

Recently, I was reading some arguments about the Fermi paradox and aliens and so on; there was also an opinion along the lines of "humans are monsters and any sane civilization avoids them, that's why there is a Galactic Zoo". As implausible as it is, I've found one more or less sane scenario where it might be true.

Assume that intelligence doesn't always imply consciousness, and assume that evolutionary processes are more likely to yield intelligent but unconscious life forms than intelligent and conscious ones. For example, if consciousness is resource-consuming and otherwise almost useless (as in Blindsight).

Now imagine that all the alien species evolved without consciousness. Being an important coordination tool, their moral system takes that into account -- it relies on a trait that they have -- intelligence, rather than consciousness. For example, they consider destroying anything capable of performing complex computations immoral.

Then the human morality system would be completely blind to them. Killing such an alien would be no more immoral than, say, recycling a computer. So, for these aliens, the human race would indeed be monstrous.

The aliens consider the extermination of an entire civilization immoral, since that would imply destroying a few billion devices capable of performing complex enough computations. So they decide to use their advanced technology to render their civilizations invisible to human scientists.

Scope sensitivity?

1 AnthonyC 16 July 2015 02:03PM

Just wanted to share a NYT article on empathy and how different circumstances can reverse the usual bias to feel more empathy for 1 suffering child than 8, and a bunch of other interesting observations.

http://www.nytimes.com/2015/07/12/opinion/sunday/empathy-is-actually-a-choice.html?ref=international

Examples of AI's behaving badly

20 Stuart_Armstrong 16 July 2015 10:01AM

Some past examples to motivate thought on how AIs could misbehave:

An algorithm pauses the game to never lose at Tetris.

In "Learning to Drive a Bicycle using Reinforcement Learning and Shaping", Randlov and Alstrom, describes a system that learns to ride a simulated bicycle to a particular location. To speed up learning, they provided positive rewards whenever the agent made progress towards the goal. The agent learned to ride in tiny circles near the start state because no penalty was incurred from riding away from the goal.

A similar problem occurred with a soccer-playing robot being trained by David Andre and Astro Teller (personal communication to Stuart Russell). Because possession in soccer is important, they provided a reward for touching the ball. The agent learned a policy whereby it remained next to the ball and “vibrated,” touching the ball as frequently as possible. 

Algorithms claiming credit in Eurisko: Sometimes a "mutant" heuristic appears that does little more than continually cause itself to be triggered, creating within the program an infinite loop. During one run, Lenat noticed that the number in the Worth slot of one newly discovered heuristic kept rising, indicating that it had made a particularly valuable find. As it turned out, the heuristic performed no useful function. It simply examined the pool of new concepts, located those with the highest Worth values, and inserted its name in their My Creator slots.
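The bicycle and soccer examples share a mechanism that is easy to reproduce in miniature: if progress toward the goal earns a bonus but regress costs nothing, the shaped reward makes endless dithering strictly better than finishing. A minimal sketch (the constants and the two hand-written policies are invented for illustration; this is not Randlov and Alstrom's actual setup):

```python
# Shaped reward with no penalty for regress: oscillating near the start
# out-earns actually reaching the goal.

GOAL = 10          # distance to the goal, in steps
STEP_BONUS = 0.01  # shaping reward for each step taken toward the goal
FINISH_REWARD = 1.0
HORIZON = 1000     # episode length

def total_reward(policy):
    """Sum shaped rewards over one episode for a fixed policy(t, pos) -> +1/-1."""
    pos, reward = 0, 0.0
    for t in range(HORIZON):
        move = policy(t, pos)
        if move == +1:
            reward += STEP_BONUS          # bonus for progress...
        pos += move                       # ...but no penalty for undoing it
        if pos == GOAL:
            return reward + FINISH_REWARD
    return reward

go_to_goal = lambda t, pos: +1                        # head straight for the goal
oscillate = lambda t, pos: +1 if t % 2 == 0 else -1   # "ride in tiny circles"

print(total_reward(go_to_goal))  # 1.1  (ten bonuses plus the finish reward)
print(total_reward(oscillate))   # 5.0  (five hundred bonuses, never finishes)
```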

Rationality Reading Group: Part E: Overly Convenient Excuses

7 Gram_Stone 16 July 2015 03:38AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part E: Overly Convenient Excuses (pp. 211-252). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

Essay: Rationality: An Introduction

E. Overly Convenient Excuses

46. The Proper Use of Humility - There are good and bad kinds of humility. Proper humility is not being selectively underconfident about uncomfortable truths. Proper humility is not the same as social modesty, which can be an excuse for not even trying to be right. Proper scientific humility means not just acknowledging one's uncertainty with words, but taking specific actions to plan for the case that one is wrong.

47. The Third Alternative - People justify Noble Lies by pointing out their benefits over doing nothing. But, if you really need these benefits, you can construct a Third Alternative for getting them. How? You have to search for one. Beware the temptation not to search or to search perfunctorily. Ask yourself, "Did I spend five minutes by the clock trying hard to think of a better alternative?"

48. Lotteries: A Waste of Hope - Some defend lottery-ticket buying as a rational purchase of fantasy. But you are occupying your valuable brain with a fantasy whose probability is nearly zero, wasting emotional energy. Without the lottery, people might fantasize about things that they can actually do, which might lead to thinking of ways to make the fantasy a reality. To work around a bias, you must first notice it, analyze it, and decide that it is bad. Lottery advocates are failing to complete the third step.

49. New Improved Lottery - If the opportunity to fantasize about winning justified the lottery, then a "new improved" lottery would be even better. You would buy a nearly-zero chance to become a millionaire at any moment over the next five years. You could spend every moment imagining that you might become a millionaire at that moment.

50. But There's Still A Chance, Right? - Sometimes, you calculate the probability of a certain event and find that the number is so unbelievably small that your brain really can't keep track of how small it is, any more than you can spot an individual grain of sand on a beach from 100 meters off. But, because you're already thinking about that event enough to calculate the probability of it, it feels like it's still worth keeping track of. It's not.

51. The Fallacy of Gray - Nothing is perfectly black or white. Everything is gray. However, this does not mean that everything is the same shade of gray. It may be impossible to completely eliminate bias, but it is still worth reducing bias.

52. Absolute Authority - Those without the understanding of the Quantitative Way will often map the process of arriving at beliefs onto the social domains of Authority. They think that if Science is not infinitely certain, or if it has ever admitted a mistake, then it is no longer a trustworthy source, and can be ignored. This cultural gap is rather difficult to cross.

53. How to Convince Me That 2 + 2 = 3 - The way to convince Eliezer that 2+2=3 is the same way to convince him of any proposition: give him enough evidence. If all available evidence, social, mental and physical, starts indicating that 2+2=3, then you will shortly convince Eliezer that 2+2=3 and that something is wrong with his past or his recollection of the past.

54. Infinite Certainty - If you say you are 99.9999% confident of a proposition, you're saying that you could make one million equally likely statements and be wrong, on average, once. Probability 1 indicates a state of infinite certainty. Furthermore, once you assign a probability 1 to a proposition, Bayes' theorem says that it can never be changed, in response to any evidence. Probability 1 is a lot harder to get to with a human brain than you would think.

55. 0 And 1 Are Not Probabilities - In the ordinary way of writing probabilities, 0 and 1 both seem like entirely reachable quantities. But when you transform probabilities into odds ratios, or log-odds, you realize that getting a proposition to probability 1 would require an infinite amount of evidence (see the short calculation after this list).

56. Your Rationality Is My Business - As a human, I have a proper interest in the future of human civilization, including the human pursuit of truth. That makes your rationality my business. The danger is that we will think that we can respond to irrationality with violence. Relativism is not the way to avoid this danger. Instead, commit to using only arguments and evidence, never violence, against irrational thinking.
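To make the log-odds point concrete, here is a small back-of-the-envelope calculation (my own illustration, not part of the essays): each extra "nine" of confidence costs roughly another 10 decibels of evidence, and probability 1 sits at infinity.

```python
# Converting probabilities to log-odds (in decibels of evidence).

import math

def log_odds_db(p):
    """Log-odds in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

for p in [0.5, 0.9, 0.99, 0.999, 0.999999, 1 - 1e-12]:
    print(f"p = {p:<16} log-odds = {log_odds_db(p):7.1f} dB")

# p = 0.5 is 0 dB; each additional "nine" costs roughly another 10 dB, and
# p = 1 would require infinitely many decibels of evidence.
```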

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part F: Politics and Rationality (pp. 255-289). The discussion will go live on Wednesday, 29 July 2015, right here on the discussion forum of LessWrong.

The Just-Be-Reasonable Predicament

3 Satoshi_Nakamoto 16 July 2015 03:17AM

If people don't see you as being “reasonable”, then you are likely to have troublesome interactions with them. Therefore, it is often valuable to be seen as “reasonable”. Reasonableness is a general perception that is determined by the social context and norms. It includes, but is not limited to, being seen as fair, sensible and socially cooperative. In summary, we can describe it as being noticeably rational in socially acceptable ways. What is “reasonable” and what is rational often converges, but it is important to note that they can also diverge and be different. For example, it was deemed “unreasonable” to free African-Americans from slavery because slavery was deemed necessary for the economy of the South.

 

The just-be-reasonable predicament occurs when you are chastised for doing something that you believe to be more rational and/or optimal than the norm or what is expected or desired. The chastiser has either not considered, cannot fathom, or does not care that what you are doing or want to do might be more rational and/or optimal than the default course of action. The predicament is similar to the one described in lonely dissent in that you must choose between what you believe to be the most rational and/or optimal course of action and the one that will be met with the least amount of social disapproval.

 

An example of this predicament is when you are playing a game with a scrub (a player who is handicapped by self-imposed rules that the game knows nothing about). The scrub might criticise you for continuing to use the best strategy that you are aware of, but that they think is cheap. If you try to argue that a strategy is a strategy, then the argument is likely to end with the scrub getting angry and saying the equivalent of "just be reasonable", which basically means: "why can't you just follow what I see as the rules and the way things should be done?" When you encounter this predicament, you need to weigh up the costs of leaving the way or choosing a non-optimal action vs. facing potential social disapproval. The way opposes being "reasonable" when it is not aligned with being rational. In the scrub situation, the main benefit of being "reasonable" is that you are less likely to annoy the scrub, and the main cost is that you are giving up a way for both you and the scrub to improve. The scrub will never learn how to counter the "cheap" strategy, and you won't be looking for other strategies, since you know you can always fall back on the "cheap" strategy if you want to win.

 

In general, you have three choices for how to deal with this predicament: you can be "reasonable", explain yourself, or try to ignore it. Ignoring it means that you continue or go ahead with the rational/optimal course of action that you had planned, and that you also try to change the conversation or situation so that you don't continue getting chastised. Which choice you should make depends on the corrigibility and state of mind of the person that you need to explain yourself to, as well as how much being "reasonable" differs from being rational. If we reconsider the scrub situation, then we can think of times when you should, or at least most people would, avoid the so-called "cheap" strategy. Maybe it is a bug in the game, or it's overpowered, or your goal is fun rather than becoming better at the game. (Note, though, that becoming better at a game often makes it more fun.)

 

The just-be-reasonable predicament is especially troubling because, like with the counter man syndrome, repeated erroneous thinking can become embedded into how you reason. In this case, repeated acquiescence can lead to embedding irrational and/or non-optimal ways of thinking into your thought processes.

 

If you continually encounter the just-be-reasonable predicament, then it indicates that your values are out of alignment with those of the person that you are dealing with. That is, they don't value rationality, but just want you to do things in the way that they expect and want. Trying to get them to adopt a more rational way of doing things will often be a hard task because it involves having to convince them that the current paradigm from which they derive their beliefs about what is "reasonable" is non-optimal.


Situations involving this predicament come in four main varieties:

  • You actually should just be “reasonable” – this occurs when you are being un”reasonable” not because the most rational or optimal thing is opposed to what is currently considered “reasonable”, but because you are being irrational. If this is the case, then make sure that you don’t try to rationalize and instead just be “reasonable” or try to ignore the situation so that you can think about it later when you are in a better state of mind.
  • Someone wants you to be "reasonable", but hasn't really thought about or doesn't really care about whether this is rational – this might occur when someone is angry at you because you are not following what they think is the right way to do things. It is important in this situation not to use the predicament as a way of avoiding thoughts about how you might be wrong or what the situation might be like from the other person's perspective. This is important because, ultimately, you want to change the other person's opinion about what is "reasonable" so that it matches up more with what is rational. To do this well you often need to be empathetic, understanding and strategic. You need to be strategic because sometimes you may need to ignore the situation or be what they think is "reasonable" so that you can reapproach the topic later without it being contaminated with negative valence. A good idea if you want to avoid making the other person feel like you are imposing is to get them to agree to try out your more rational method on a trial basis. This is also useful for two other reasons: what you think is more rational may turn out not to be, and the "reasonable" way of doing things, on reflection, may turn out to be more rational than you think. Something additional to consider is that everyone has different dispositions, propensities and tendencies, and what might be the most optimal strategy for you might not be for someone else. If this is the case, then don't try to change their strategy, but just try to explain why you want to use yours.
  • Someone is telling you to be "reasonable" as a power play or as a method of control – this situation happens when someone is using their power to make you follow their way of doing things. This situation requires a different tack than the last one because your strategies to explain yourself probably won't work. This is because being told to "just be "reasonable"" is a method that they are using to put you in your place. The other person is not interested in whether the "reasonable" thing is actually rational. They just want you to do something that benefits them. This kind of situation is tough to deal with. You may need to ignore and avoid them, or, if you do try to explain yourself, make sure that you get the support of others first.
  • You don’t want to explain yourself – sometimes we notice that what people think is “reasonable” is not actually rational, but we do the “reasonable” thing anyway because the effort or potential cost involved with explaining yourself is considered to be too high. In this case, you either have to be “reasonable” or try to avoid the issue. Please note that this solution is not optimal because avoiding something when you don’t have evidence that it will go away is a choice to reface the same or worse situation in the future and accepting an unsavoury situation in resignation is letting fear control and limit you.

If you encounter the just-be-reasonable predicament, I recommend running through the below process:  

 

Some other types of this predicament would be “just do as you’re told”, “why can’t you just conform to my belief of what is the best course of action for you here” and any other type of social disapproval, implicit or explicit, that you get from doing what is rational or optimal rather than what is expected or the default.

Philosophy professors fail on basic philosophy problems

16 shminux 15 July 2015 06:41PM

Imagine someone finding out that "Physics professors fail on basic physics problems". This, of course, would never happen. To become a physicist in academia, one has to (among million other things) demonstrate proficiency on far harder problems than that.

Philosophy professors, however, are a different story. Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers. This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.

Abstract:

We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

Some quotes (emphasis mine):

When scenario pairs were presented in order AB, participants responded differently than when the same scenario pairs were presented in order BA, and the philosophers showed no less of a shift than did the comparison groups, across several types of scenario.

[...] we could find no level of philosophical expertise that reduced the size of the order effects or the framing effects on judgments of specific cases. Across the board, professional philosophers (94% with PhD’s) showed about the same size order and framing effects as similarly educated non-philosophers. Nor were order effects and framing effects reduced by assignment to a condition enforcing a delay before responding and encouraging participants to reflect on “different variants of the scenario or different ways of describing the case”. Nor were order effects any smaller for the majority of philosopher participants reporting antecedent familiarity with the issues. Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation. Nor were order effects any smaller for the minority of philosopher participants reporting that before participating in our experiment they had stable views about the issues under investigation.

I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?

 

Recommended Reading for Evolution?

3 Sable 15 July 2015 06:04PM

I'll make this short and sweet.

I've been reading Dawkins's The Selfish Gene, and it's been really helpful in filling in some of the gaps in my understanding of how evolution actually works.

The last biology class I took was in high school, and I don't think the mechanics of evolution are covered particularly well in American high schools.

I'm looking for recommendations - has anyone read any books that accurately describe the process of evolution for someone without specialized knowledge of biology?  I've already checked LessWrong's recommended textbooks, and while it recommends some books on evolutionary psychology and on animal behavior from an evolutionary perspective, it doesn't appear to have anything that describes evolution itself in sufficient detail to model it.

I'm toying with the idea of trying to program an evolution simulator, and so I need a fairly detailed, accessible account.
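For concreteness, the core loop I have in mind is tiny - here is a bare-bones sketch (the fitness function and all the parameters are placeholders I made up): a population of bitstrings, tournament selection, and per-bit mutation.

```python
# Minimal evolution simulator sketch: selection plus mutation on bitstrings.

import random

GENOME_LEN = 32
POP_SIZE = 100
MUTATION_RATE = 0.01   # per-bit flip probability
GENERATIONS = 200

def fitness(genome):
    """Toy fitness: number of 1-bits (the classic 'one-max' problem)."""
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def tournament(population, k=3):
    """Select the fittest of k randomly chosen individuals."""
    return max(random.sample(population, k), key=fitness)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(tournament(population)) for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(g) for g in population))  # approaches 32
```

Something like this only captures selection and mutation, which is exactly why I want the books: the biologically realistic parts (drift, recombination, linkage, population structure) are what I don't yet know how to model.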

Thanks for the help!

Biases and Fallacies Game Cards

7 Gunnar_Zarncke 15 July 2015 08:19AM

On the Stupid Questions Thread I asked

I need some list of biases for a game of Biased Pandemic for our Meet-Up. Do suitably prepared/formatted lists exist somewhere?

But none came forward.

Therefore I created a simple deck based on Wikipedia entries. I selected those that can presumably be used easily in a game, summarized the descriptions and added an illustrative quote.

The deck can be found in Dropbox here (PDF and ODT).

I'd be happy for corrections and further suggestions.

ADDED: We used these cards during the LW Hamburg Meetup. They attracted significant interest, and even though we did not use them during a board game, we drew them and tried to act them out during a discussion round (which didn't work out that well but stimulated discussion nonetheless).

Bragging Thread July 2015

4 Viliam 13 July 2015 10:01PM

Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.

So, what's the coolest thing you've done this month?

(Previous Bragging Thread)
