What degree of cousins are you and I? Estimates of Consanguinity to promote feelings of kinship and empathy

1 chaosmage 20 May 2015 05:10PM

Epistemic status: Wild guesswork based on half-understood studies from way outside my field. More food for thought than trustworthy information.

tl;dr: Estimates of familial relatedness between people should help promote empathy, so here's how to make them - and might this be useful for Effective Altruism?

The why

I don't know how it is for you, but for me, knowing I'm related to someone makes a specific emotional difference. Scenario: I'm at a big family-and-friends get-together, I meet a guy, we get along. (For clarity, let's assume no sexual tension.) And then we're told we're third cousins via some weird aunt. From the moment I'm told, I feel different towards him. Firm, forthcoming, obliging. Some kind of basic kinship emotion, I guess, noticeable when it shifts on these rare occasions but basically going on, deep down in System 1, every time that emailing a remote uncle feels different from emailing a similarly remote associate.

Meanwhile, my System 2 has heard that all humans are at least 50th degree cousins and likes to point out everyone I've ever had sex with was a cousin of some degree. That similarly remote associate where I don't have that kinship feeling - he's a relative too, just a more distant one. And when I notice that, I get a bit of that kinship feeling too...

With me so far? Here's my thesis: the two human feelings of kinship and empathy are closely connected, and to make one of them more salient is to increase the salience of the other.

I don't think this has been tested properly. A. J. Jacobs, who is running a huge family reunion event in New York this summer, said "some ambitious psychology professor needs to conduct a study about whether we deliver lower electrical shocks to people if we know we’re related" and I think he's exactly right.

Has anybody here not heard of circles of empathy? They're a concept invented by the very cool 19th century rationalist William Edward Hartpole Lecky in his "History of European Morals From Augustus to Charlemagne". Peter Singer summarizes it as follows:

Lecky wrote of human concern as an expanding circle which begins with the individual, then embraces the family and ‘soon the circle... includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man [sic] with the animal world’.

There's more to read about this in Peter Singer's "The Expanding Circle" or Steven Pinker's "The Better Angels of Our Nature", but what strikes me about it is contained in that single sentence: The expansion that is described tracks actual genetic relatedness, or Consanguinity. The list goes down a gradient of (expected) genetic relatedness. This makes the size of the circle of empathy seem to depend on a threshold of how related you need to be to someone in order to care about them.

(Note that Lecky published his "History of European Morals" - with its inclusion of concern for animals - in 1869, i.e. only ten years after the publication of "On the Origin of Species". There was some animal rights legislation before Darwin, but animal rights as a movement only arose after we knew animals to be our relatives.)

On the other hand, those who would promote empathy have always relied on familial vocabulary, chiefly "brother" and "sister", to refer to people who evidently weren't actual brothers or sisters. Martin Luther King, Jesus, the Buddha, Mandela, Gandhi, they all do this. So maybe it works a bit. Maybe it helps trigger that emotional kinship response and that somehow helps people get along.

Now to see how these emotional responses would arise, we could discuss reciprocal altruism and gene-centered Darwinism and whatnot, but "The Selfish Gene" is required reading anyway and I assume you've done your homework. I'd like to instead go to the second part of my thesis, the one about increasing salience.

Recognizing you're related to somebody does something. (Especially if you have an incest fetish, of course.) I propose that whatever it does increases empathy. And empathy might not be a categorically good thing, but it comes pretty close, at least until you extend it to all food groups. So maybe we could increase empathy among people by pointing out their relatedness. And maybe we can do this more vividly, more strikingly than by simply saying "we're all descended from apes, so we're all related, duh" or by boring the non-nerd majority to death with talk of human genetic clustering and fixation indexes.

So I'd like to revisit that "brothers and sisters" thing from MLK and those other guys. Maybe they shouldn't have used figurative language. Maybe a more lasting feeling of kinship can be created by literal language: By telling people how related they are. Detailed ancestry information is being collected at various Wiki-like sites, but even assuming they'll grow and become less US-centric, they don't go back very far (except around very famous people) and what came before remains guesswork. So let's do some Fermi-ish estimates.

The how

The drop-dead amazing Nature article Modelling the recent common ancestry of all living humans is way too careful and scientific to put an exact number on how long ago the last common ancestor lived, unfortunately. But the mean date their simulations come up with is 1415 BC, which is approximately 120 generations ago, so let's say really remote people like the Karitiana tribe are, at most, something like 125th-degree cousins of all of us. So that's a useful upper bound for the degree of cousinhood between any two arbitrary humans, such as you and me.

The lower bound could be something like 3 - if you and I were that closely related, we'd share a great-great-grandparent and could probably ascertain that rather than guess it. With fairly extensive genealogy, the lower bound might go up to around 5 - but at that level, each of us would need to trace 64 ancestors who lived in the middle of the 19th century and failed to use Facebook, and we'd find it hard to ascertain whether your great-great-great-great-grandmother Mary was identical to mine.

There are a lot of special cases where the lower bound can be higher. If both people involved know that their families, more than 3 generations back, were deep-rooted peasant folk from two distinct populations, the history books might tell them how many centuries further back are very unlikely to contain a common ancestor. (This will of course be much rarer among descendants of immigrants, like Americans, than among citizens of older or more rural countries.) If they're of different ethnicities, castes or classes that wouldn't normally have dated each other 80 years ago, the lower bound should probably go up a few more generations. If both people involved are Icelanders, they can simply look up their last common ancestor in the comprehensive Icelandic family tree. But let's assume you and I don't fall into any of these special cases, so we're stuck with a lower bound of 3. Now, between that and 125, how do we narrow it down?

Turns out the authors of that gorgeous Nature paper don't hand out access to their simulations to random dudes who just email them. So let's see how far we get the hard way.

In a completely random mating model (where people do not tend to mate with people who happen to live near them, i.e. with descendants of the same people), your number of ancestors doubles with every generation you go back, in a sort of ancestor tree that grows backwards. We're looking for the point where the two ancestor trees first meet. If we assume generations have homogeneous lengths (which implies further simplifying assumptions, like moms and dads being the same age) and further assume that only people from within the same generation have kids with each other, cousins of the Nth degree have a common ancestor N+1 generations ago, and each has 2^(N+1) ancestors belonging to that generation.

This means that for you and me to be, say, 15th-degree cousins, our two sets of 2^(15+1) = 65536 ancestors have to have one person in common, some 480 years ago, assuming 30 years as mean parenthood age. Of course we each probably have fewer than 65536 unique ancestors due to... um... "reticulations".
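For concreteness, here's that toy model in a few lines of Python - a sketch under the same simplifying assumptions (homogeneous 30-year generations, fully random mating, no pedigree collapse), with all function names my own:

```python
GENERATION_YEARS = 30  # assumed mean parenthood age, as in the text

def ancestors_in_common_generation(degree):
    """Nth-degree cousins share an ancestor N+1 generations back;
    each has 2**(N+1) ancestor slots in that generation."""
    return 2 ** (degree + 1)

def years_to_common_ancestor(degree):
    """Years back to the shared ancestor of Nth-degree cousins."""
    return (degree + 1) * GENERATION_YEARS

# 15th-degree cousins: 2**16 = 65536 ancestor slots each, ~480 years ago
print(ancestors_in_common_generation(15))  # 65536
print(years_to_common_ancestor(15))        # 480
```

Note that these are ancestor *slots*, not unique people - pedigree collapse ("reticulations") means the real number of distinct ancestors is smaller.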

But empirically, it seems that "a pair of modern Europeans living in neighboring populations share around 2–12 genetic common ancestors from the last 1,500 years", and even individuals from opposite ends of Europe will normally have common ancestors if you search back 3000 years (source). That isn't what you get from the simplistic model above - the numbers of ancestors it calculates exceed the world population fewer than 32 generations (about 800 years) ago. The empirical genetic data from this paper suggest that the median first common ancestor between me and anybody in central Europe lived something like 1200 years (or 40 generations) ago, and that any two people anywhere in Europe are probably at most 100th-degree cousins.

Around 600 years ago is a good time to look at, because that's shortly before intercontinental travel started to intricately connect all regions of the world, including genetically. If most of your 600-years-ago ancestors lived outside Europe, you and I might still be 25th-degree cousins or closer - maybe you have some ancestor who left for Europe 300 years ago, leaving siblings behind (your ancestors) and having kids in Europe (my ancestors). Or vice versa. But that kind of thing is unlikely, and since we're doing rough estimates I suggest we round that probability down to zero.

In genetic studies, no other continent is anywhere near as well-studied as Europe, so I guess we'll just have to roll with it, assuming that other places look about like what this paper found and that the nice exponential drop-off with geographic distance seen in Europe also holds elsewhere. America and Australia, as continents of immigrants, continue to be special cases. But for two people with families from, say, West Africa, I'd be comfortable assuming that if they're from roughly the same large region (say, around the Bight of Benin) they're probably something like 40th-degree cousins, and if not, they're still something like 100th-degree cousins.

It gets only slightly more complicated if the ancestors you know - say your four grandparents - are a mix of descendants from different regions or continents. Just add the number of generations between you and them to your expected degree of cousinhood with everybody from that region or continent.
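That adjustment is simple enough to sketch in code. The regional baselines below are just the rough guesses from this post, and the function name and dictionary keys are my own invention:

```python
REGIONAL_DEGREE = {  # rough cousin-degree guesses from this post
    "same_region": 40,          # e.g. both families from around the Bight of Benin
    "same_continent": 100,
    "different_continent": 125,  # the global upper bound
}

def adjusted_degree(region_key, generations_to_known_ancestor=0):
    """Expected degree of cousinhood, shifted by how far back your tie
    to the region is (0 = you, 1 = a parent, 2 = a grandparent, ...)."""
    return REGIONAL_DEGREE[region_key] + generations_to_known_ancestor

# A grandparent (2 generations up) from the same large region:
print(adjusted_degree("same_region", 2))  # 42
```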

Needless to say these are all wild guesses. I'm basically hoping someone more qualified than me will see this and be horrified enough to go do the job properly.

Now, I'm not an American, but statistically you probably are, and you might be more interested in knowing how closely you're related to other Americans - your boss, your sexual partners, or Mel Gibson. The bad news is that as a member of a nation of relatively recent immigrants, and particularly if your ancestors didn't all come from different continents, you have a harder time estimating most recent common ancestors with people than most other people on Earth do. The good news, however, is that the data collected at the large ancestry sites - Ancestry.com, FamilySearch.org, Geni.com and WikiTree.com - are all growing fastest in the US-centric parts of their "world trees".

For cousinhood between people whose ancestors seem to have lived on entirely separate continents as far as anyone knows, I think we can only fall back on our upper bound of 125 degrees of cousinhood. Things get fuzzy that far back; the world population was much smaller, and the population of those who have descendants living today is smaller still. Shared ancestry within any particular generation remains unlikely, but over the centuries and millennia, between trade (particularly in slaves), the various empires and the mass rapes of warfare, genes did get mixed around. Again, see that spectacular Nature paper if you still haven't.

Side note: The most recent common ancestor of two arbitrarily chosen people on different continents is likely to be someone who had kids on different continents. So it is probably a very rich person, a sailor or a soldier, i.e. a male. In general, the number of unique males in anybody's ancestor tree will likely be much smaller than the number of unique females. I expect the difference will be sharper in most recent common ancestors of humans from different continents, because women have shorter fertility windows inside which to travel intercontinentally and don't seem to have moved nearly as much as men except as slaves.

The point of all this is simple. Now you can look at somebody and figure she's not only your cousin - you even have a guess as to what degree of cousin she is. I like to do that when I'm angry with people, because for me, it makes a distinct emotional difference. Maybe try whether it works for you too.

Relation to the care allocation problem

I suspect this cousinhood thing could be a fairly principled solution to the problem of how to allocate caring between humans and animals, which Yvain/Scott laid out in a recent SSC post. Why not go by actual (known or estimated) blood relations, and privilege closer relatives over more distant ones?

Our last common ancestor with chimps lived something like 5 to 6 million years ago, so our ancestor trees merge about 250000 (human) generations ago, making chimps something like quarter-million-degree cousins of all of us. Generations get a lot shorter further back, so our last common ancestor with cattle and dogs, about 92 million years ago, may be 30 million generations ago. Birds would be much more distant - our last common ancestor with them lived around 310 million years ago - and so forth. (Richard Dawkins' The Ancestor's Tale has much more on this.) For me, this maps rather nicely onto my intuitive prejudices about how much I should care about which creatures. It fails to capture that I care for plants far more than I care for bacteria, but EA has nothing better to offer in that department.
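As a sanity check on the arithmetic, here's the same back-of-envelope calculation in Python. The divergence times are the rough figures from the text; the generation lengths are crude guesses of my own (generations shorten as you go further back in evolutionary time):

```python
SPECIES = {
    # name: (years since our last common ancestor, assumed mean generation length)
    "chimpanzee": (5_500_000, 22),   # ~5-6 Myr ago, human-ish generations
    "dog_or_cow": (92_000_000, 3),   # much shorter generations further back
    "bird":       (310_000_000, 2),
}

def cousin_degree(name):
    """Very rough cross-species cousin degree: divergence time divided
    by an assumed mean generation length along that lineage."""
    years, gen_len = SPECIES[name]
    return years // gen_len

print(cousin_degree("chimpanzee"))  # 250000
```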

If EA has to have impartiality in the sense that your neighbor can't be more important to you than a tribesman in Mongolia, then this proposal isn't EA. Quoth Yvain:

allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it.

So anybody trying to grow EA might want to make that step easier. Maybe a "closeness multiplier" on units of caring works better than a series of unprincipled exceptions, and still gets across the idea that units of caring are to be distributed between everybody (or everybody's QALYs), if unevenly. Becoming more impartial would then mean having that multiplier approach 1.
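To make the idea concrete, here is one possible shape such a multiplier could take - purely an illustration, with the decay form and the partiality parameter invented by me, not anything argued for in this post:

```python
def closeness_multiplier(degree, partiality=0.02):
    """Weight on someone's units of caring (or QALYs) given their
    degree of cousinhood to you. partiality=0 gives a multiplier of 1
    for everyone, i.e. full impartiality."""
    return (1 - partiality) ** degree

# A sibling-level relative (degree ~0) vs. a 125th-degree cousin:
print(closeness_multiplier(0))    # 1.0
print(closeness_multiplier(125))  # ~0.08

# "Becoming more impartial" = annealing the partiality toward 0:
print(closeness_multiplier(125, partiality=0.0))  # 1.0
```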

And if that were the case, my personal preference for how to design that multiplier would be that it shouldn't rely on arbitrary constructs like citizenships. Maybe if EAs want to find a principled solution to the care allocation problem, consanguinity should be one of the options.

Nick Bostrom's TED talk on Superintelligence is now online

23 chaosmage 27 April 2015 03:15PM

http://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

I realize this might go into a post in a media thread, rather than its own topic, but it seems big enough, and likely-to-prompt-discussion enough, to have its own thread.

I liked the talk, although it was less polished than TED talks often are. What was missing, I think, was any indication of how the problem might be solved. He could be mistaken for just an ivory-tower philosopher speculating on something that might become a problem one day, because apart from mentioning at the beginning that he works with mathematicians and IT guys, he really does not give the impression that this problem is already being actively worked on.

3-day Solstice in Leipzig, Germany: small, nice, very low cost, includes accommodation, 19th-21st Dec

9 chaosmage 09 October 2014 04:38PM

Hi everyone,

Like last year, we'll have a Secular Solstice in Leipzig, Germany. You're invited - message me if you'd like to attend.

We have space for about 25 people. So this isn't a huge event like you'd have in NYC - but it is special in a different way, because it goes Friday to Sunday and involves lots of things to do. We have a big and very nice apartment in the center of Leipzig where lots of people can sleep, so spreading this over several days is easy, and an obvious way to kick it up a notch from last year's event.

We'll do some of the beautiful ceremonial pieces and songs from Raymond's Hymnal and ride the same general vibe. And on top of that, we'll do freestyle, participatory work in groups where we design ways to celebrate the Solstice, using a method inspired by Open Space Technology. After all, we're only getting things started, and surely there are many kinds of celebration to explore. Let's find some of them, try them out together and, by comparing effects, help optimize Secular Solstices!

We'll cook together and share the cost for ingredients and drinks - apart from that, the event is free. Up to 18 guests can sleep right on the premises - half of them on comfortable beds and mattresses, the rest need to bring sleeping bags and camping mats. If you'd really prefer a single or double room, there are fairly cheap hotels and hostels nearby - message me for assistance if necessary.

The outline

Arrivals are Friday 6pm-7:30. We'll have a welcome round and a few things to get us in the mood, then discuss ideas for Solstice activities to explore together. We'll find the most popular ones and get into groups that design them into something they want to share with everyone. Groups should self-organize fairly fluidly, i.e. you can switch groups, steal ideas from each other etc. and get to know each other in the process. So this will basically be a very social evening of preparation for the next day. Also, cooking.

On Saturday we will meet in the morning to plan the day, spend some time decorating and cooking, shopping for stuff groups have found they need to do their things, and probably rehearsing. Groups who are done preparing their thing will in some cases probably prepare another, because that is just what happens. We'll have time to chat and get to know each other better. The ceremonial part starts at sunset and is expected to take several hours. After that we'll party - some people will probably want to stay up all night and welcome the sunrise just like last year.

On Sunday we'll have less cohesion probably, because of high variance in how much people have slept. Still we should be able to come together for feedback discussion, have a nice closing, clean up a bit, and say farewell. If you need more sleep before you get on the road, you're welcome to have it.

Any questions?

In order to greatly reduce X-risk, design self-replicating spacecraft without AGI

1 chaosmage 20 September 2014 08:25PM

tl;dr: If we built a working self-replicating spacecraft, that would prove we're past the Great Filter. Therefore, certainty that we can do so would eliminate much existential risk. It is also a potentially highly visible project that would give publicity to reasons not to include AGI. Therefore, serious design work on a self-replicating spacecraft should have a high priority.

I'm assuming you've read Stuart_Armstrong's excellent recent article on the Great Filter. In the discussion thread for that, RussellThor observed:

if we make a simple replicator and have it successfully reach another solar system (with possibly habitable planets) then that would seem to demonstrate that the filter is behind us.

If that is obvious to you, skip to the next subheading.

The evolution from intelligent spacefaring species to producer of self-replicating spacecraft (henceforth SRS, used in the plural) is inevitable, if SRS are possible. This is simply because the matter and negentropy available in the wider universe are a staggeringly vast resource of staggering value. Even species who are unlikely to ever visit and colonize other stars in the form that evolution gave them (this includes us) can make use of these resources. For example, if we could build on (or out of) empty planets supercomputers that receive computation tasks by laser beam and output results the same way, we would be economically compelled to do so, simply because those supercomputers could handle computational tasks that no computer on Earth could complete in less than the time it takes that laser beam to travel there and back. Such a supercomputer would not need to run even a weak AI to be worth more than the cost of sending the probe that builds it.

Without a doubt there are countless more possible uses for these, shall we say, exoresources. If Dyson bubbles or mind uploads or multistellar hypertelescopes or terraforming are possible, each of these alone create another huge incentive to build SRS. Even mere self-replicating refineries that break up planets into more readily accessible resources for future generations to draw from would be an excellent investment. But the obvious existence of this supercomputer incentive is already reason enough to do it.

All the Great Filter debate boils down to the question of how improbable our existence really is. If we're probable, many intelligent species capable of very basic space travel should exist. If we're not, they shouldn't. We know there doesn't appear to be any species inside a large fraction of our light cone so capable of space travel it has sent out SRS. So the only way we could be probable is if there's a Great Filter ahead of us, stopping us (and everyone else capable of basic space travel) from becoming the kind of species that sends out SRS. If we became such a species, we'd know we're past the Filter and while we still wouldn't know how improbable which of the conditions that allowed for our existence was, we'd know that when putting them all together, they multiply into some very small probability of our existence, and a very small probability of any comparable species existing in a large section of our light cone.

LW users generally seem to think SRS are doable and that means we're quite improbable, i.e. the Filter is behind us. But lots of people are less sure, and even more people haven't thought about it. The original formulation of the Drake equation included a lifespan of civilizations partly to account for the intuition that a Great Filter type event could be coming in the future. We could be more sure than we are now, and make a lot of people much more sure than they are now, about our position in reference to that Filter. And that'd have some interesting consequences.

How knowing we're past the Great Filter reduces X-risk

The single largest X-risk we've successfully eliminated is the impact of an asteroid large enough to destroy us entirely. And we didn't do that by moving any asteroids; we simply mapped all of the big ones. We now know there's no asteroid that is both large enough to kill us off and coming soon enough that we can't do anything about it. Hindsight bias tells us this was never a big threat - but look ten years back and you'll find The Big Asteroid on every list of global catastrophic risks, usually near the top. We eliminated that risk simply by observation and deduction, by finding out it did not exist rather than removing it.

Obviously a working SRS that gives humanity outposts in other solar systems would reduce most types of X-risk. But even just knowing we could build one should decrease confidence in the ability of X-risks to take us out entirely. After all, if, as Bostrom argues, the possibility that the Filter is ahead of us increases the probability of any X-risk, then the knowledge that it is not ahead of us has to be evidence against all of them, except those that could kill a Type 3 civilization. And if, as Bostrom says in that same paper, finding life elsewhere that is closer to our stage of development is worse news than finding life further from it, then increasing the distance between us and either type of life decreases the badness of the existence of either.

Of course we'd only be certain if we had actually built and sent such a spacecraft. But in order to gain confidence we're past the filter, and to gain a greater lead to life possibly discovered elsewhere, a design that is agreed to be workable would go most of the way. If it is clear enough that someone with enough capital could claim incredible gains by doing that, we can be sure enough someone eventually (e.g. Elon Musk after SpaceX's IPO around 2035) will do that, giving high confidence we've passed the filter.

I'm not sure what would happen if we could say (with more confidence than currently) that we're probably the species that's furthest ahead at least in this galaxy. But if that's true, I don't just want to believe it, I want everyone else to believe it too, because it seems like a fairly important fact. And an SRS design would help do that.

We'd be more sure we're becoming a Type 3 civilization, so we should then begin to think about what type of risk could kill that, and UFAI would probably be more pronounced on that list than it is on the current geocentric ones.

What if we find out SRS are impossible at our pre-AGI level of technology? We still wouldn't know if an AI could do it. But even knowing our own inability would be very useful information, especially about the dangerousness of various types of X-risk.

How easily this X-risk reducing knowledge can be attained

Armstrong and Sandberg claim the feasibility of self-replicating spacecraft has been a settled matter since the Freitas design of 1980. But that paper, while impressively detailed and a great read, glosses over the exact computing abilities such a system would need, does not mention hardening against interstellar radiation, assumes fusion drives, and probably has a bunch of other problems that I'm not qualified to discover. I haven't looked at all the papers that cite it (yet), but the ones I've seen seem to agree self-replicating spacecraft are plausible. Sandberg has some good research questions that I agree need to be answered, but never seems to waver from his assumption that SRS are basically possible, although he's aware of the gaps in knowledge that preclude such an assumption from being safe.

There are certainly some questions that I'm not sure we can answer. For example:

  1. Can we build fission-powered spacecraft (let alone more speculative designs) that will survive the interstellar environment for decades or centuries?
  2. How can we be certain to avoid mutations that grow outside of our control, and eventually devour Earth?
  3. Can communication between SRS and colonies, especially software updates, be made secure enough?
  4. Can a finite number of probe designs (to be included on any of them) provide a vehicle for every type of journey we'd want the SRS network to make?
  5. Can a finite number of colony designs provide a blueprint for every source of matter and negentropy we'd want to develop?
  6. What is the ethical way to treat any life the SRS network might encounter?

But all of these except for the last one, and Sandberg's questions, are engineering questions and those tend to be answerable. If not, remember, we don't need to have a functioning SRS to manage X-risk, any reduction of uncertainty around their feasibility already helps. And again, the only design I could find that gives any detail at all is from a single guy writing in 1980. If we merely do better than he did (find or rule out a few of the remaining obstacles), we already help ascertain our level of X-risk. Compare the asteroid detection analogy: We couldn't be certain that we wouldn't be hit by an asteroid until we looked at all of them, but getting started with part of the search space was a very valuable thing to do anyway.

Freitas and others used to assume SRS should be run by some type of AGI. Sandberg says SRS without AGI, with what he calls "lower order intelligence", "might be adequate". I disagree with both assessments, and with Sandberg giving this question less priority than, say, the study of mass drivers. Given the issues of AGI safety, a probe that works without AGI should be distinctly preferable. And (unlike an intelligent one) its computational components can be designed right now, down to the decision tree it should follow. While at it, and in order to use the publicity such a project might generate, give an argument for this design choice that highlights the AGI safety issues. A scenario where a self-replicating computer planet out there decides for itself should serve to highlight the dangers of AGI far more viscerally than conventional "self-aware desktop box" scenarios.

If we're not looking for an optimal design, but for the bare minimum necessary to know we're past the Filter, that gives us somewhat relaxed design constraints. This probe wouldn't necessarily need to travel at a significant fraction of light speed, and its first generation wouldn't need to be capable of journeys beyond, say, five parsecs. It does have to be capable of interstellar travel, and of progressing to intergalactic travel at some point, say when it finds that all nearby star systems contain copies of itself. A non-interstellar probe fit to begin the self-replication process on a planet like Jupiter, refining resources and building launch facilities there, would be a necessary first step.

Talking to yourself: A useful thinking tool that seems understudied and underdiscussed

33 chaosmage 09 September 2014 04:56PM

I have returned from a particularly fruitful Google search, with unexpected results.

My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.

This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies; intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology; and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, and the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.

Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.

So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?

Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.

  • It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
  • Auditory information is retained more easily, so making thoughts auditory helps you remember them later.
  • It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
  • System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
  • It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.

All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.

Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.

I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory-enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway, so that hasn't really been an issue.

So, what do you think? Useful?

[link] The ethics of genetically enhanced monkey slaves

1 chaosmage 20 February 2014 09:40AM

Interesting interview with Julian Savulescu (professor of practical ethics at the University of Oxford) on "the ethics of the biological enhancement of the human race".

A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk

34 chaosmage 07 January 2014 05:48PM

There's a big Hollywood movie coming out with an apocalyptic Singularity-like story, called Transcendence. (IMDB, Wiki, official site) With an A-list cast and big budget, I contend this movie is the front-runner to be 2014's most significant influence on discussions of superintelligence outside specialist circles. Anyone hoping to influence those discussions should start preparing some talking points.

I don't see anybody here agreeing with me on this. The movie was briefly discussed on LW when it was first announced in March 2013, but since then, only the trailer (out since December) has been mentioned. MIRI hasn't published a word about it. This amazes me. We have three months till millions of people who never considered superintelligence are going to start thinking about it - is nobody bothering to craft a response to the movie yet? Shouldn't there be something that lazy journalists, given the job to write about this movie, can find?

Because if there isn't, they'll dismiss the danger of AI like Erik Sofge already did in an early piece about the movie for Popular Science, and nudge their readers to do so too. And that'd be a shame, wouldn't it?

Measuring lethality in reduced expected heartbeats

5 chaosmage 03 January 2014 02:14PM

Some of us here are already familiar with Micromorts - a unit that stands for a 1 in a million chance of dying. The Wikipedia page lists a number of sample values. One obvious example is that smoking 1.4 cigarettes is one micromort. This is a good tool for comparing the relative dangerousness of activities - for example, if you fly in a jet in the US, your micromorts per mile from increased background radiation are twice the micromorts per mile from terrorism. And you can compare activities to baseline average risks of death, given as about 39 per day (averaged over all age groups and sexes).

However, people suck at imagining small probabilities. So a different unit, which we used in a group exercise at the Secular Solstice in Leipzig, is the number of expected future heartbeats. While micromorts are a step of abstraction away from empirical reality, expected heartbeats are a step back toward it. The concept got good feedback - it makes people think about life in a new way - so I thought I'd just share it here.

The average human heart gets to beat about 2.5 billion times - about 100,000 times per day. So a micromort is around 2,500 expected heartbeats, and you can translate, say, smoking a cigarette into a cost of about 1,800 expected heartbeats (or roughly 25 minutes of life expectancy). And maybe that'll help people optimize their behavior in ways that micromorts, due to their microprobability nature, aren't very good at - especially for those who aren't habitual Bayesians.
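For the curious, the conversion above can be sketched in a few lines of Python. The constants (2.5 billion lifetime heartbeats, 100,000 beats per day, 1.4 cigarettes per micromort) are the rough estimates used in this post, not precise data:

```python
# Rough micromort-to-heartbeat conversion, using this post's estimates.
LIFETIME_HEARTBEATS = 2.5e9        # ~2.5 billion beats in an average life
BEATS_PER_DAY = 100_000            # ~100,000 beats per day
CIGARETTES_PER_MICROMORT = 1.4     # smoking 1.4 cigarettes = 1 micromort

def micromorts_to_heartbeats(micromorts):
    """Expected heartbeats lost: a micromort is a 1-in-a-million chance
    of losing all remaining heartbeats, approximated by the lifetime total."""
    return micromorts * 1e-6 * LIFETIME_HEARTBEATS

def heartbeats_to_minutes(beats):
    """Convert a heartbeat count into minutes at the average beat rate."""
    return beats / BEATS_PER_DAY * 24 * 60

cigarette_cost = micromorts_to_heartbeats(1 / CIGARETTES_PER_MICROMORT)
print(round(cigarette_cost))                         # ~1786 heartbeats
print(round(heartbeats_to_minutes(cigarette_cost)))  # ~26 minutes
```

Note that one micromort comes out to exactly 2,500 expected heartbeats under these assumptions, which makes mental arithmetic with the unit fairly painless.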

Meetup : Secular Solstice Celebration! (And the Inauguration of the LW Leipzig Community)

4 chaosmage 30 November 2013 12:42PM

Discussion article for the meetup : Secular Solstice Celebration! (And the Inauguration of the LW Leipzig Community)

WHEN: 21 December 2013 05:05:05PM (+0100)

WHERE: Scherlstraße 2, Leipzig, Germany

Dear everyone,

Germany needs more LW communities! And Secular Solstice celebrations are fun! So let's have a secular solstice and get together a bunch of people who really want to start something! :)

The plan is simple. We meet around sunset, casually get to know each other and chat. We prepare and perform together the Ceremonial Part, which involves some things from Raymond Arnold's ritual book (https://dl.dropboxusercontent.com/u/2000477/SolsticeEve_2012.pdf), the First Secular Sermon (http://www.youtube.com/watch?v=_vIFloLATxo) and a few other artful pieces of atheist ritual performance. Then we party till sunrise and welcome the sun re-emerging after the longest night. If you fall asleep (some crash space is available), we'll wake you up for that last bit. We're expecting 20 to 30 people (mostly not current LW users, but people who liked HPMOR and other potential new faces), but the flat we're using can easily accommodate 60, so bring friends if you want. If there are too many of us to fit into the Ritual Space, we can simply do (parts of) the Ceremonial Part twice.

The event is free (including drinks and some food) but you can bring food if you want to contribute.

Languages at the event will be a mix of German and English. The Ceremonial Part will involve at least one German-language and one English-language performance.

Any questions?

Discussion article for the meetup : Secular Solstice Celebration! (And the Inauguration of the LW Leipzig Community)

[Link] Cognitive biases about violence as a negotiating tactic

3 chaosmage 25 October 2013 11:43AM

Max Abrahms, "The Credibility Paradox: Violence as a Double-Edged Sword in International Politics," International Studies Quarterly 2013.

Abstract: Implicit in the rationalist literature on bargaining over the last half-century is the political utility of violence. Given our anarchical international system populated with egoistic actors, violence is thought to promote concessions by lending credibility to their threats. From the vantage of bargaining theory, then, empirical research on terrorism poses a puzzle. For non-state actors, terrorism signals a credible threat in comparison to less extreme tactical alternatives. In recent years, however, a spate of studies across disciplines and methodologies has nonetheless found that neither escalating to terrorism nor with terrorism encourages government concessions. In fact, perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise. The apparent tendency for this extreme form of violence to impede concessions challenges the external validity of bargaining theory, as traditionally understood. In this study, I propose and test an important psychological refinement to the standard rationalist narrative. Via an experiment on a national sample of adults, I find evidence of a newfound cognitive heuristic undermining the coercive logic of escalation enshrined in bargaining theory. Due to this oversight, mainstream bargaining theory overestimates the political utility of violence, particularly as an instrument of coercion.

I found this via Bruce Schneier's blog, which frequently features very valuable analysis clustered around societal and computer security.

View more: Next