There is a ‘problem’ that has been nagging at me for a long time – which is that there hasn’t been a long time. It’s Saturday, with no one around, or getting drunk, or something, so I’ll run it past you. Cosmology seems oddly childish.
An analogy might help. Among all the reasons for super-sophisticated atheistic materialists to deride Abrahamic creationists, the most arithmetically impressive is the whole James Ussher 4004 BC thing. The argument is familiar to everyone: 6,027 years — Ha!
Creationism is a topic for another time. The point for now is just: 13.7 billion years – Ha! Perhaps this cosmological consensus estimate for the age of the universe is true. I’m certainly not going to pit my carefully-rationed expertise in cosmo-physics against it. But it’s a stupidly short amount of time. If this is reality, the joke’s on us. Between Ussher’s mid-17th century estimate and (say) Hawking’s late 20th century one, the difference is just six orders of magnitude. It’s scarcely worth getting out of bed for. Or the crib.
For anyone steeped in Hindu Cosmology – which locates us 1.56 x 10^14 years into the current Age of Brahma – or Lovecraftian metaphysics, with its vaguer but abysmally extended eons, the quantity of elapsed cosmic time, according to the common understanding of our present scientific establishment, is cause for claustrophobia. Looking backward, we are sealed in a small room, with the wall of the original singularity pressed right up against us. (Looking forward, things are quite different, and we will get to that.)
There are at least three ways in which the bizarre youthfulness of the universe might be imagined:
1. Consider first the disconcerting lack of proportion between space and time. The universe contains roughly 100 billion galaxies, each a swirl of 100 billion stars. That makes Sol one of 10^22 stars in the cosmos, but it has lasted for something like a third of the life of the universe. Decompose the solar system and the discrepancy only becomes more extreme. The sun accounts for 99.86% of the system’s mass, and the gas giants incorporate 99% of the remainder, yet the age of the earth is only fractionally less than that of the sun. Earth is a cosmic time hog. In space it is next to nothing, but in time it extends back through a substantial proportion of the Stelliferous Era, so close to the origin of the universe that it belongs to the very earliest generations of planetary bodies. Beyond it stretch incomprehensible immensities, but before it there is next to nothing.
2. Compared to the intensity of time (backward) extension is of vanishing insignificance. The unit of Planck time – corresponding to the passage of a photon across a Planck length – is about 5.4 x 10^-44 seconds. If there is a true instant, that is it. A year consists of less than 3.2 x 10^7 seconds, so cosmological consensus estimates that there have been approximately 432,339,120,000,000,000 seconds since the Big Bang, which for our purposes can be satisfactorily rounded to 4.3 x 10^17. The difference between a second and the age of the universe is smaller than that between a second and a Planck Time tick by roughly 26 orders of magnitude. In other words, if a Planck Time-sensitive questioner asked “When did the Big Bang happen?” and you answered “Just now” – in clock time – you’d be almost exactly right. If you had been asked to identify a particular star from among the entire stellar population of the universe, and you picked it out correctly, your accuracy would still be hazier by 5 orders of magnitude. Quite obviously, there haven’t been enough seconds since the Big Bang to add up to a serious number – less than one for every 10,000 stars in the universe.
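The orders-of-magnitude comparisons in points 1 and 2 can be sanity-checked in a few lines of Python. All inputs are the rounded figures quoted above, so the outputs are approximate by construction:

```python
import math

# Rounded figures from the text.
planck_time = 5.4e-44                     # seconds per Planck-time tick
seconds_per_year = 3.156e7
age_seconds = 13.7e9 * seconds_per_year   # seconds since the Big Bang
stars = 1e11 * 1e11                       # ~100 billion galaxies x ~100 billion stars

print(f"age of universe: {age_seconds:.2e} s")                  # ~4.32e17
print(f"second vs Planck tick: {math.log10(1 / planck_time):.1f} orders")  # ~43.3
print(f"age vs second: {math.log10(age_seconds):.1f} orders")   # ~17.6
print(f"stars per elapsed second: {stars / age_seconds:.0f}")   # ~23,000
```

The last line confirms the closing claim: with roughly 23,000 stars per elapsed second, there is indeed less than one second for every 10,000 stars.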
3. Isotropy gets violated by time orientation like a Detroit muni-bond investor. In a universe dominated by dark energy – like ours – expansion lasts forever. The Stelliferous Era is predicted to last for roughly 100 trillion years, which is over 7,000 times the present age of the universe. Even the most pessimistic interpretation of the Anthropic Principle, therefore, places us only a fractional distance from the beginning of time. The Degenerate Era, post-dating star-formation, then extends out to 10^40 years, by the end of which time all baryonic matter will have decayed, and even the most radically advanced forms of cosmic intelligence will have found existence becoming seriously challenging. Black holes then dominate out to 10^60 years, after which the Dark Era begins, lasting a long time. (Decimal exponents become unwieldy for these magnitudes, making more elaborate modes of arithmetical notation expedient. We need not pursue it further.) The take-away: the principle of Isotropy holds that we should not find ourselves anywhere special in the universe, and yet we do – right at the beginning. More implausibly still, we are located at the very beginning of an infinity (although anthropic selection might crop this down to merely preposterous improbability).
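The era ratios in point 3 are equally easy to verify, again using the figures as given in the text:

```python
present_age = 1.37e10    # years: present age of the universe
stelliferous_end = 1e14  # years: approximate end of star formation
degenerate_end = 1e40    # years: baryonic matter decayed
dark_era_start = 1e60    # years: black holes dominate until roughly here

print(stelliferous_end / present_age)  # ~7300: "over 7,000 times the present age"
print(present_age / stelliferous_end)  # ~0.000137: our fractional position in the era
```

Even measured against the Stelliferous Era alone, and ignoring the vastly longer eras that follow, we sit at roughly the first hundredth of a percent.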
Intuitively, this is all horribly wrong, although intuitions have no credible authority, and certainly provide no grounds for contesting rigorously assembled scientific narratives. Possibly — I should concede most probably — time is simply ridiculous, not to say profoundly insulting. We find ourselves glued to the very edge of the Big Bang, as close to neo-natal as it is arithmetically possible to be.
That’s odd, isn’t it?
An article by Nyan Sandwich on More Right.
I’ve recently encountered and more fully grokked some ideas that invalidate my previous understanding of how to achieve political ends. To start with, I saw an interesting talk that urged Silicon Valley entrepreneurs to work on technologies that facilitate Exit from the influence of the “Paper Belt”, which in our terms is roughly the Cathedral. Then there have been our recent discussions with Scott Alexander, and his solid case for Technological Determinism. On that background, I’ve been rethinking our methods.
The argument is roughly that, if culture is downstream of technology, there is no point engaging historical inevitability at the level of culture. This is debatable and is currently being debated, but supposing it’s true, I want to explore methods for achieving our ends by placing ourselves upstream of technology.
We’ll start with the questions any decent entrepreneur should be asking continuously. First, what is the problem? The modern establishment has shown itself unable to protect us from crime and urban decay; unable to extract first-world living conditions from the materially richest country on earth; unable to preserve community, family, and civil society; unable to educate everyone to high-class standards of values and attitudes; unable to hold the flame of rational truth finding in public discourse; etcetera. We know that more is possible.
What is the current solution? The reflex answer, and therefore what we have been doing, is to use the usual method: intellectual discourse, movement building, ideological engineering, and organized political action. But in the cases where this worked – the French, American, and Russian revolutions, and the Nazi movement are examples – they had multiple seriously talented people in place, and it’s not even clear whether those people were driven by their own agency rather than historical inevitability. Perhaps we could try something different.
Why is the current solution inadequate? The world has changed. We’re all walking around with instantly networked supercomputers in our pockets; we’re seated all around the world having this discussion in a medium that doesn’t even physically exist, and barely existed at all 10 years ago; we have data about nearly everyone and every subject, and much else, accessible to all of us, in seconds. Surely something has changed in what methods are best? Further, the current set of methods at best produced some big events many years ago, and even that is debated. Then we have this other set of methods that has seriously changed the world multiple times in the last century, and dominates every serious prediction of the future. If we are interested in power, we should take interest in this second set of methods.
This is the section that I am particularly interested in discussing; building better models of this has clear consequences for futurism as well as for ambitious effective altruism:
The second set of methods, and what I’ll explore here, is Technological Innovation.
So let’s assume that Technology is our vector. What about the payload? Can Technological Innovation be wielded for arbitrary purposes, or does it too just happen? We can look at examples: Bitcoin exists now because of the ideology of the cypherpunks community. We’re using mice and hypertext because Douglas Engelbart identified the ability to interact with information as critical to the future of humanity, and then invented mice and hypertext to help that. We went to the moon in 1969 because von Braun wanted us to, and built us the tools to do so. On the other hand, as far as I can tell, the Internet just happened because hackers gonna hack, and flight, calculus and many other innovations happened simultaneously in multiple places basically when the time was right.
It looks to me like there are two kinds of Technological Innovation, one of which can carry ideological payloads, and one of which cannot. Tech like flight and calculus and the Internet are worked on by multiple people, improved on by others, and generally escape the control of any single philosopher-inventor. They get ruthlessly optimized for only the necessary and useful functions, so that ideological payloads are selected out. If the Wright brothers had designed their original plane for their particular concept of beauty as well as function, it would not have had a lasting effect on the development or impact of flight. On the other hand we have technology that involves a last-mover-advantage explosion to monopoly status, where near-arbitrary payloads can be added. If Zuck decided that Facebook was going to include some social engineering feature, it would have to be pretty outrageous to cause Facebook’s downfall. If the Unix model subtly influenced the direction of society, there is not much we could do about it. Worse is Better, and Thiel’s Startup Notes are critical reading on these topics.
So there are two components to a working intervention:
- Riding a tech wave to monopoly power the way Thiel describes in the linked series above. Without this, your tech cannot hope to have enough influence or the flexibility to deliver a payload.
- Using the flexibility provided by monopoly status, build in features of that technology that strategically influence how society goes. Predicting this in advance à la Bitcoin is hard. Better to install an agent with the right goals (e.g. you) in that position of power so you can have a tighter feedback loop and continue to mold the tech strategically.
The best example I can think of of doing this well is Elon Musk. He is building companies, Tesla and SpaceX, that have a good chance of taking the next wave in their respective fields, and loading a highly responsive and effective ideological payload on top of that. If those companies continue to succeed, Musk will achieve his ideological goals for human space exploration and sustainability. On the other hand we have Bitcoin. Assuming that Satoshi was ideologically motivated, and that Bitcoin is the future of money, whether Satoshi wins depends a lot on how smart he was in 2008, when the ideological payload of Bitcoin became static.
What this means for us is that a very promising way forward is for those of us with entrepreneurial aspirations to identify upcoming tech opportunities with room for favorable ideological payloads, and then execute like mad to make it happen. No one said it would be easy.
I recently had a conversation with a friend of mine who suffered a crisis of faith of sorts. His startup, which initially had an extremely ambitious, world-altering business plan, had to retrench and start to find a more modest product-market fit. He was upset, not so much because of decreased prospects for a big dollar exit, but because, as he put it, “if I’m not trying to save the world, what’s the point of all this?”
It’s a standard narrative in the startup world: “the world is broken; I have a really ambitious plan to fix it.” But what I told him was that this is a totally crazy way of measuring both impact and a meaningful life. Most of the people who make a big impact in the world are doing paperwork, publishing research, working with the constraints of the system. They’re closer to a paper-pushing bureaucrat than a bold maverick. Sometimes the papers you’re pushing are exit visas for Jews.
The nerd’s sense of measuring everything here is a big handicap when it comes to assessing life meaningfulness. Our instincts for impact evolved in a world where only a few dozen people had real agency in your life; you were part of what we’d perceive as a small ingroup by default, and it wouldn’t be too crazy to think you could be one of the most respected and influential people in the known world. Today, it’s more difficult but still possible to achieve that feeling – but crucially, you have to carefully cultivate insensitivity to scope. You could become the manager of a small business, or a local leader in the Mormon church. Despite all the social disruptions of mobility and super-Dunbar living, that could probably still feel pretty similar from the inside to being a tribal elder.
But then nerds have to come in and ruin everything by measuring in terms of real world impact. And by that metric, nobody measures up to our brain’s expectation of impactfulness. Measured in terms of a civilization of billions, even the most successful career is going to feel like a drop in the bucket, and narrative-based dreams of world-changing are cartoonish. In theory, this quantitative thinking should also provide compensating solace, by saying “Yeah, well at least you did 10x what the average person is able to accomplish,” yet in practice I haven’t seen that many people deeply satisfied by that. It’s “save the world” or bust, without a sense of moderation.
It’s also not at all clear that saving the world is the best way to measure your life. Almost all societies in the past had a complex bucket of metrics involving personal virtue, material success, and success of the family – with “impact on the state of the world” being an also-ran at best. I suspect something in that vein is the most sustainable thing for humans, and that the startup bluster is maybe economically adaptive (as a way to overcome risk aversion and to project confidence) but also deeply insane given how human brains work. And the undermining of traditional notions of life success proportionally increases the importance of saving the world.
One of the odd things I’ve noticed in our depictions of great leaders is that a big part of their influence comes from being able to get people to buy into a vision, and thereby get people to do things that they would otherwise never do. An ordinary leader can assemble a bunch of people doing their normal jobs at market wages, but if you can extract an effort or flexibility surplus in service of your vision, that makes it possible to attack a whole different class of coordination problems. Messianic leaders have been a staple throughout history, of course, but it seems that both the supply and the demand for such leaders is at an all-time high. Reading a self-improvement book published in the 1800s, it struck me how much of the leadership advice was personal, almost feudal: to make people follow you, be a publicly virtuous, reliable guy, someone people would be proud to work for. By contrast, for the vision-based leader, the pathos of the vision precedes the ethos of his claim to leadership ability.
I think this demand is related to our dysfunctional sense of meaningfulness. An undermining of traditional sources of meaningfulness leads people to seek meaning in their work, and this produces both a demand and an incentive for narrative-supplying entrepreneurs to fill that gap in exchange for above-market loyalty and dedication. This is potentially a fair bargain – the question, of course, is whether the entrepreneurs end up delivering, or whether they’re just providing the leverage to inflate a meaningfulness bubble that never gets paid off.
There’s a phenomenon in psychiatry where people with two different psychiatric disorders – narcissistic personality disorder and borderline personality disorder – are frequently found in pairs. Commonly, you’d have a narcissist and borderline as close friends, or a (usually) male narcissist in a relationship with a (usually) female borderline. Narcissism is exactly what it sounds like: someone who for whatever reason has a deeply held need to be admired and considers his life story the most important thing in the world. Borderline personality disorder is best defined as a lack of a sense of identity; they tend to have huge emotional swings and identify themselves with a rapid succession of people in their lives. The narcissist needs others to validate his self-narrative; the borderline needs someone to give her a narrative to live. And so, it may not be surprising that relationships between a narcissist and borderline are pretty frequent, and, if not exactly stable, at least as stable as can be expected for people with personality disorders.
You can see where this is going. The need for, and premium on, vision-based leadership sort of looks like a widespread, subclinical version of borderline personality disorder – maybe we could rebrand it as “chronic questlessness.” Of course I’m not suggesting that people are crazy in the Beautiful Mind sense. Psychiatric disorders in general and personality disorders in particular are more a gradient than a Boolean diagnosis; they’re almost always exaggerations of heuristics that normal people use all the time. The threshold for diagnosis is nothing more than “okay, you’ve got some weird stuff going on; does it interfere with your functioning?” So what I’m suggesting can be translated to saying that there’s a broad-based, subtle shift in heuristics resulting in a lot of people seeking outside opinion on what they should value.
For a long time I regarded the save-the-world thing as a basically harmless motivating delusion, the nerd equivalent of the coach’s pre-game pep talk where he tells your team that, against all odds and in the face of all objective evidence to the contrary, you are a bunch of winners and are going to take home the division trophy. But seeing my friend having his motivational system semi-permanently warped was something of a wake-up call, and got me thinking about how to avoid being sucked into that attractor. It’s tough because the tools of quantitative analysis that underpin this change-the-world heuristic are valid and indeed valuable. But these observations suggest that we should be wary of how easy it is to smuggle in the assumption that our benchmark should be a totally unrealistic amount of efficacy. And at the same time they argue for keeping a diversified life-meaning portfolio – you should include things like family success, physical and emotional quality of life, human relationships, and even relative social status as part of how you measure your life.
It's illegal to work around food when showing symptoms of contagious diseases. Why not the same for everyone else? Each person who gets a cold infects one other person on average. We could probably cut infection rates and the frequency of colds in half if sick people didn't come in to work.
And if we want better biosecurity, why not also require people to be able to reschedule flights if a doctor certifies they have a contagious disease?
Due to the 'externalities', the case seems very compelling.
Laurie Garrett has an article out in The Washington Post. She says that there’s no point in trying to block the spread of Ebola by travel bans.
The problem is, she’s full of crap. Look, there are two possible scenarios. In both of them, r, the number of new cases generated by each case, is greater than 1 in parts of West Africa – which is why you get exponential growth, why you have an epidemic. If r < 1.0, the series converges – a case generates a few extra cases before dying out.
Everything we know so far suggests that even though it is greater than 1.0, r in West Africa is not all that big (maybe around 2), mostly because of unfortunate local burial customs and incompetent medical personnel.
It seems highly likely that r in US conditions is well under 1.0, which means you can’t get an epidemic. However, r is probably not zero, so that doesn’t mean you can’t get a few cases per imported case, from immediate contact and hospital mistakes. As an example, suppose that on average each case imported to the US generated a total of two other cases before dying out (counting secondary, tertiary, etc. infections). Then, on average, the number of US citizens infected would be twice the number of infected visitors.
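The arithmetic behind "a few cases per imported case" is just a geometric series: with a subcritical reproduction number r < 1, one imported case yields an expected 1 + r + r² + … = 1/(1 − r) total cases. A minimal sketch; note that r = 2/3 is my own back-fit to the "two other cases" example above, not a figure from the post:

```python
def expected_chain_size(r: float) -> float:
    """Expected total cases (index case included) seeded by one import,
    via the geometric series 1 + r + r^2 + ... = 1 / (1 - r).
    Valid only for a subcritical reproduction number 0 <= r < 1."""
    if not 0.0 <= r < 1.0:
        raise ValueError("geometric-series formula requires 0 <= r < 1")
    return 1.0 / (1.0 - r)

# Two additional cases per imported case corresponds to r = 2/3
# (assumed value, back-fitted to the example in the text).
r = 2.0 / 3.0
print(expected_chain_size(r) - 1.0)  # ~2.0 secondary-and-later cases per import
```

This also makes the contrast with the epidemic scenario clear: as r approaches 1 the chain size diverges, which is why the two regimes behave so differently.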
Now suppose that a travel ban blocked 80% of sick people trying to fly here from Liberia. We’d have 80% fewer cases in US citizens: and that would be a good thing. Really it would. Does Laurie Garrett understand this? Obviously not. She is a senior fellow for global health at the Council on Foreign Relations, but she is incompetent. Totally useless, like virtually everyone else in public life.
We hear people from the CDC saying that any travel restrictions would backfire, but that’s nonsense too. One might wonder why they say such goofy things: I would guess that a major reason is that they were taught in school that quarantines are useless (and worse yet, old-fashioned), just as many biologists were taught that parasites are really harmless – have to be, because evolution!
In the other scenario, r > 1.0 in US conditions as well, or at least is greater than 1.0 in some subsets of the US population. This is very unlikely – even more unlikely considering we can adjust our behavior to make transmission less likely. But suppose it is so, for the sake of argument. Then you would want – need – to stop all travelers from the risky regions, because even one infected guy would pose a huge risk. Some say that blocking that spread would be impossible. They’re wrong: it is possible*, although it wouldn’t happen, because we’re too crazy. In fact, in that scenario, we’d be justified in shooting down every plane that _might_ carry an infected passenger. This scenario is the one that fits Garrett’s remarks, but if she really believed it, she would be frantically buying canned goods and finding a cave in the Rockies to hide her family in.
*the Atlantic is pretty wide.
A fascinating post that may, however, require some background reading; the most relevant material is linked in the article itself. I encourage reading up on it.
Stories about changelings replacing babies – with the recommended course of action being, essentially, to expose the child – are not a human universal; they are found only in European cultures. These cultures rely more heavily on guilt, and less on shame, to regulate behavior than most other human societies. This may not be a coincidence: the stories look like they work as a ready-made rationalization to reduce guilt over infanticide. Common problems often acquire common solutions like this.
Guilt and Shame Cultures
On his blog Evo and Proud, anthropologist Peter Frost recently wrote a highly interesting two-part article entitled The origins of Northwestern European guilt culture. In guilt cultures, social control is regulated more by guilt than by shame, in contrast to the shame cultures that exist in most parts of the world. A crucial difference between these types of cultures is that while shame cultures require other people to shame the wrongdoer, guilt cultures do not. Instead, the wrongdoer will shame himself or herself by feeling guilty. This, according to Frost, is also linked to a stronger sense of empathy with others – not just with relatives but with people in general.
The advantages of guilt over shame are many. People can go about their business without being supervised by others, and they can cooperate with people they’re not related to as long as both parties have the same view on right and wrong. And with this personal freedom come individualism, innovation and other forms of creativity, as well as ideas of universal human rights and the like. You could argue, as Frost appears to, that the increased sense of guilt in Northwestern Europe (NWE) is a major factor behind Western Civilization. While this sounds fairly plausible (to my ears at least), a fundamental question is whether there really is more guilt in the NWE sphere than elsewhere.
How to Measure Guilt
The idea of NWE countries as guilt cultures may seem obvious to some and dubious to others. The Protestant tradition is surely one indication of this, but some anthropologists argue that other cultures have other forms of guilt, not as easily recognized by Western scholars. For instance, Andrew Beatty mentions that the Javanese have no word for either shame or guilt but report uneasiness and a sense of haunting regarding certain political murders they’ve committed. So maybe they have just as much guilt as NWE Protestants?
This is one of the problems with soft science – you can argue about the meaning of terms and concepts back and forth until hell freezes over without coming to any useful conclusion. One way around this is to find some robust metric that most people would agree indicates guilt. One such measure, I believe, would be murder rate. If people in different cultures vary in the guilt they feel for committing murder, then this should hold them back and show up as a variation in the murder rate. I will here take the NWE region to mean the British Isles, the Nordic countries (excluding Finland), Germany, France and Belgium, Netherlands, Luxembourg, Australia, New Zealand and Canada for a total of 14 countries. According to UNODC/Wikipedia, the average murder rate in the NWE countries is exactly 1.0 murder per 100K inhabitants. To put this in perspective, only 20 other countries (and territories) of 207 listed are below this level and 70 percent of them have twice the murder rate or more.
Still, criminals are after all not a very representative group, having more of the dark traits (psychopathy, narcissism, Machiavellianism) than the rest of the population. Corruption, on the other hand, as I’ve argued in an earlier post, seems relatively unrelated to regular personality traits, so it should tap into the mainstream population. Corruption is often about minor transgressions that many people engage in knowing that they can usually get away with it. They will not be shamed because no one will know about it, and many will not care since it’s so common, but some will feel guilty and refrain from it. Looking at the Corruption Perceptions Index for 2013, the NWE countries are very dominant at the top of the ranking (meaning they lack corruption). There are seven NWEs in the top ten and two additional bordering countries (Finland and Switzerland). The entire NWE region is within the top 24, of 177 countries and territories.
But as I’ve argued before here, corruption appears to be linked to clannishness and tribalism (traits rarely discussed in psychology), and it’s reasonable to assume that it is a causal factor. How does this all add up? Well, the clannish and tribal cultures that I broadly refer to as traditional cultures are all based on the premise that the family, tribe or similar ingroup is that which should be everyone’s first concern. So while a member of a traditional culture may have personal feelings of guilt, this means little compared to the collective dislike – the shame – from the family or tribe. At the same time, traditional cultures are indifferent or hostile towards other groups, so if your corruption serves the family or tribe there will be no shame in it; the others will more likely praise you for being clever.
(In this context it’s also interesting to note that people who shame others often do this by expressing disgust, an emotion linked to a traditional dislike for various outgroups, such as homosexuals or people of other races. So disgust, which psychologist Jonathan Haidt connects with the moral foundation of sanctity/degradation, is perhaps equally important to the foundation loyalty/ingroup.)
When Did Modernity Begin?
One important question about this distinction between modern and traditional is to what extent it’s a matter of nature or nurture. There is evidence that it is caused by inbreeding and the accumulation of genes for familial altruism (that’s to say, a concern for relatives and a corresponding dislike for non-relatives). Since studies on this are non-existent as far as I know – no doubt for political reasons – another form of evidence could be found in tracing this distinction back in time. The further back we can do this, the more likely it’s a matter of genes rather than culture. And the better we can identify populations that are innately modern, the better we can understand the function and origin of this trait. Frost argues that guilt culture can be found as early as the Anglo-Saxon period (550-1066), based on things like the existence of looser family structures with a relatively late age of marriage, and the notion of a shame before the spirits or God, which can be construed as guilt. This made me wonder if there is any similar historical evidence for NWE guilt that is old enough to make the case for this being an inherited behavior (or at least the capacity for guilt-motivated behavior). And that’s how I came upon the changeling.
As Jung has argued, there is a striking similarity between myths and traditional storytelling around the world. People who have never been in contact with each other have certain recurring structures in their narratives, and, as I’ve argued before here, even modern people adhere to these unspoken rules of storytelling – the archetypes. The only reasonable explanation for archetypes is that they are a reflection of how humans are wired. But if archetypal stories reveal a universal human nature, what about stories found in some places but not in others? In some cases they may reflect differences in things like climate or geography, but if no such environmental explanation can be found, I believe that the variation may be a case of human biodiversity.
I believe one such variation relevant to guilt culture is the genre of changeling tales. These folktales are invariably about how otherworldly creatures like fairies abduct newborn children and replace them with something in their likeness, a changeling. The changeling is sometimes a fairy, sometimes just an enchanted piece of wood that has been made to look like a child. It’s typically very hungry but sickly and fails to thrive. A woman who suspected that she had a changeling on her hands could find out by beating the changeling, throwing it in the water, leaving it in the woods overnight and so on. According to the folktales, this would prompt the fairies or whoever was responsible for the exchange to come to rescue their child and also return the child they had taken.
Infanticide Made Easy
Most scholars agree that the changeling tales were a way to justify killing sickly and deformed children. According to American folklorist D. L. Ashliman at the University of Pittsburgh, people firmly believed in changelings and did as the tales instructed:
“There is ample evidence that these legendary accounts do not misrepresent or exaggerate the actual abuse of suspected changelings. Court records between about 1850 and 1900 in Germany, Scandinavia, Great Britain, and Ireland reveal numerous proceedings against defendants accused of torturing and murdering suspected changelings.”
This all sounds pretty grisly, but before modern medicine and social welfare institutions, a child of this kind was a disaster. Up until the 1900s, children were supposed to be relatively self-sufficient and help out around the house. A child that needed constant supervision without any prospect of ever being able to contribute anything to the household was more than a burden; it jeopardized the future of the entire family.
Still, there is probably no stronger bond between two people than that between a mother and her newborn child. So how could a woman not feel guilty for killing her own child? Because it must be guilt we’re talking about here – you would never be shamed for doing it since it was according to custom. The belief in changelings expressed in the folktales gave the women (and men) a way out of this dilemma. (Ironically, Martin Luther, the icon of guilt culture, dismissed all the popular superstitions of his fellow countrymen with the sole exception of changelings which he firmly believed in.) Thus, the main purpose of these tales seems to have been to alleviate guilt.
If this is true, then changeling stories should be more common in the NWE region than elsewhere, which also seems to be the case. There are numerous changeling tales found in the British Isles, Scandinavia, Germany and France. They can be found elsewhere in Europe as well – in the Basque region and among Slavic peoples, and even as far as North Africa – but at least according to the folklorists I’ve found discussing these tales, those versions are imported from the NWE region. And if we look beyond regions bordering Europe, changelings seem to be virtually non-existent. Some folklorists have suggested that, for instance, the Nigerian Ogbanje can be thought of as a changeling, although on closer inspection the similarity is very superficial. The Ogbanje is reborn into the same family over and over, and to break the curse, families consult medicine men after the child has died. When they consult a medicine man while the child is still alive, it is for the purpose of severing the child’s connection to the spirit world and making it normal. So the belief in the Ogbanje never justifies infanticide. Another contender is the Filipino Aswang, a creature that attacks children as well as adults and never takes the place of a child; it is more like a vampire. So it’s safe to say that the changeling belief is firmly rooted in the NWE region, at least back to medieval times and perhaps earlier too.
Before There Were Changelings, There Was Exposure
Since infanticide is such a good candidate for measuring guilt, we can go further back in time, before any evidence of changelings, and look at potential differences in attitudes towards the act.
In doing so I think we can find, if not NWE guilt, then at least a Western counterpart. According to this Wikipedia article, the ancient Greeks and Romans, as well as Germanic tribes, killed infants by exposure rather than through a direct act. Here is a quote on the practice in Greece,
"Babies would often be rejected if they were illegitimate, unhealthy or deformed, the wrong sex, or too great a burden on the family. These babies would not be directly killed, but put in a clay pot or jar and deserted outside the front door or on the roadway. In ancient Greek religion, this practice took the responsibility away from the parents because the child would die of natural causes, for example hunger, asphyxiation or exposure to the elements."
And the Archeology and Classical Research Magazine Roman Times quotes several classical sources suggesting that exposure was controversial even back then,
"Isocrates (436–338 BCE) includes the exposure of infants in his catalog of horrendous crimes practiced in some cities (other than Athens) in his work Panathenaicus."
I also found this excerpt from the play Ion by Euripides, written at the end of the 400s BC. In it, Kreusa talks with an old servant about having exposed an unwanted child,
Old Servant: Who cast him forth? – Not thou – O never thou!
Kreusa: Even I. My vesture darkling swaddled him.
Old Servant: Nor any knew the exposing of the child?
Kreusa: None – Misery and Secrecy alone.
Old Servant: How couldst thou leave thy babe within the cave?
Kreusa: Ah how? – O pitiful farewells I moaned!
It seems to me that this play, by one of the most prominent playwrights of his time, would not make much sense to the audience unless exposure was something that weighed on many people’s hearts.
Compare this with historical accounts from other cultures, taken from the Wikipedia article mentioned above,
"Some authors believe that there is little evidence that infanticide was prevalent in pre-Islamic Arabia or early Muslim history, except for the case of the Tamim tribe, who practiced it during severe famine. Others state that “female infanticide was common all over Arabia during this period of time” (pre-Islamic Arabia), especially by burying alive a female newborn.
In Kamchatka, babies were killed and thrown to the dogs.
The Svans (a Georgian people) killed the newborn females by filling their mouths with hot ashes.
A typical method in Japan was smothering through wet paper on the baby’s mouth and nose. Mabiki persisted in the 19th century and early 20th century.
Female infanticide of newborn girls was systematic in feudatory Rajputs in South Asia for illegitimate female children during the Middle Ages. According to Firishta, as soon as the illegitimate female child was born she was held “in one hand, and a knife in the other, that any person who wanted a wife might take her now, otherwise she was immediately put to death”
Polar Inuit (Inughuit) killed the child by throwing him or her into the sea. There is even a legend in Inuit mythology, “The Unwanted Child”, where a mother throws her child into the fjord."
It seems that while people in ancient Greece practiced exposure – something many were troubled by – active killing was common in the rest of the world, and it persists to this day in many places. While people in other cultures may or may not feel guilt, it doesn’t seem to affect them as much, and it’s sometimes even trumped by shame, as psychiatrist Steven Pitt and clinical psychologist Erin Bale write in an article in The Bulletin of the American Academy of Psychiatry and the Law regarding the practice of drowning unwanted girls,
"In China, the birth of a daughter has traditionally been accompanied by disappointment and even shame."
To summarize, the changeling lore provides evidence of a NWE guilt culture dating back at least to medieval times, and the practice of – and attitude towards – exposure suggests that ancient Greece had an emerging guilt culture as early as the 400s BC, one which enabled an individualism and intellectual development similar to what we’ve seen in the NWE region in recent centuries. I’m not sure exactly how genetically related these populations are, but the geographical proximity makes it hard to ignore the possibility that gene variants for guilt proneness in Europe are responsible for guilt cultures both in ancient Greece and in the NWE region. Some branch of Indo-Europeans, perhaps?
Marcus Terentius Varro was called the most learned of the Romans. But what did he know, and how did he know it? I ask because of this quote, from Rerum rusticarum libri III (Agricultural Topics in Three Books):
“Especial care should be taken, in locating the steading, to place it at the foot of a wooded hill, where there are broad pastures, and so as to be exposed to the most healthful winds that blow in the region. A steading facing the east has the best situation, as it has the shade in summer and the sun in winter. If you are forced to build on the bank of a river, be careful not to let the steading face the river, as it will be extremely cold in winter, and unwholesome in summer. Precautions must also be taken in the neighbourhood of swamps, both for the reasons given, and because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases.” “What can I do,” asked Fundanius, “to prevent disease if I should inherit a farm of that kind?” “Even I can answer that question,” replied Agrius; “sell it for the highest cash price; or if you can’t sell it, abandon it.”
I get the distinct impression that someone (probably someone other than Varro) came up with an approximation of germ theory 1500 years before Girolamo Fracastoro. But his work was lost.
Everybody knows, or should know, that the vast majority of Classical literature has not been preserved. Those lost works contained facts and ideas that might have value today – certainly there are topics that we understand much better because of insights from Classical literature. For example, Reich and Patterson find that some of the Indian castes have existed for something like three thousand years: this is easier to believe when you consider that Megasthenes wrote about the caste system as early as 300 BC.
We don’t put much effort into recovering lost Classical literature. But there are ways in which we could push harder – by increased funding for work on the Herculaneum scrolls, or the Oxyrhynchus papyri collection, for example. Some old-fashioned motivated archaeology might get lucky and find another set of Amarna cuneiform letters, or a new Antikythera mechanism.
Related: The Real End of Science
From the Economist.
“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.
Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.
The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.
I recommend reading the whole thing.
Another good article by Federico on his blog studiolo, which he titles Selfhood bias. It reminds me quite strongly of some of the content he produced on his previous (deleted) blog. I'm somewhat sceptical that “Make everyone feel more pleasure and less pain” is indeed the most powerful optimisation process in his brain, but besides that minor detail the article is quite good.
This does seem to be shaping up into something well worth following for an aspiring rationalist. I'll add him to the list of blogs by LWers even though he doesn't have an account, because he has clearly read much if not most of the sequences and makes frequent references to them in his writing. The name of the blog is a reference to this room.
Yvain argues, in his essay “The Blue-Minimizing Robot“, that the concept “goal” is overused.
[long excerpt from the article]
This Gedankenexperiment is interesting, but confused.
I reduce the concept “goal” to: optimisation-process-on-a-map. This is a useful, non-tautological reduction. The optimisation may be cross-domain or narrow-domain. The reduction presupposes that any object with a goal contains a map of the world. This is true of all intelligent agents, and some sophisticated but unintelligent ones. “Having a map” is not an absolute distinction.
I would not say Yvain’s basic robot has a goal.
Imagine a robot with a turret-mounted camera and laser. Each moment, it is programmed to move forward a certain distance and perform a sweep with its camera. As it sweeps, the robot continuously analyzes the average RGB value of the pixels in the camera image; if the blue component passes a certain threshold, the robot stops, fires its laser at the part of the world corresponding to the blue area in the camera image, and then continues on its way.
The robot optimises: it is usefully regarded as an object that steers the future in a predictable direction. Equally, a heliotropic flower optimises the orientation of its petals to the sun. But to say that the robot or flower “failed to achieve its goal” is long-winded. “The robot tries to shoot blue objects, but is actually hitting holograms” is no more concise than, “The robot fires towards clumps of blue pixels in its visual field”. The latter is strictly more informative, so the former description isn’t useful.
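For concreteness, the excerpted robot's behaviour can be sketched in a few lines of Python. The threshold value and the (direction, pixels) sweep format are my own illustrative assumptions, not anything from the original essay:

```python
# Sketch of the blue-minimising robot's one-moment behaviour.
# BLUE_THRESHOLD and the sweep format are hypothetical choices.
BLUE_THRESHOLD = 180  # assumed 0-255 RGB scale

def mean_blue(pixels):
    """Average blue channel over an iterable of (r, g, b) tuples."""
    pixels = list(pixels)
    return sum(b for _, _, b in pixels) / len(pixels)

def sweep_and_shoot(sweep):
    """One camera sweep: return the directions the robot fires at.

    `sweep` is a list of (direction, pixels) pairs from the turret
    camera; the robot fires wherever the average blue value passes
    the threshold, then continues on its way.
    """
    return [direction for direction, pixels in sweep
            if mean_blue(pixels) > BLUE_THRESHOLD]
```

Note that nothing in this sketch mentions a goal: "fires towards clumps of blue pixels in its visual field" is a complete description of the program.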
Some folks are tempted to say that the robot has a goal. Concepts don’t always have necessary-and-sufficient criteria, so the blue-minimising robot’s “goal” is just a borderline case, or a metaphor.
The beauty of “optimisation-on-a-map” is that an agent can have a goal, yet predictably optimise the world in the opposite direction. All hedonic utilitarians take decisions that increase expected hedons on their maps of reality. One utilitarian’s map might say that communism solves world hunger; I might expect his decisions to have anhedonic consequences, yet still regard him as a utilitarian.
I begin to seriously doubt Yvain’s argument when he introduces the intelligent side module.
Suppose the robot had human level intelligence in some side module, but no access to its own source code; that it could learn about itself only through observing its own actions. The robot might come to the same conclusions we did: that it is a blue-minimizer, set upon a holy quest to rid the world of the scourge of blue objects.
We must assume that this intelligence is mechanically linked to the robot’s actuators: the laser and the motors. It would otherwise be completely irrelevant to inferences about the robot’s behaviour. It would be physically close, but decision-theoretically remote.
Yet if the intelligence can control the robot’s actuators, its behaviour demands explanation. The dumb robot moves forward, scans and shoots because it obeys a very simple microprocessor program. It is remarkable that intelligence has been plugged into the program, meaning the code now takes up (say) a trillion lines, yet the robot’s behaviour is completely unchanged.
It is not impossible for the trillion-line intelligent program to make the robot move forward, scan and shoot in a predictable fashion, without being cut out of the decision-making loop, but this is a problem for Friendly AI scientists.
This description is also peculiar:
The human-level intelligence version of the robot will notice its vision has been inverted. It will know it is shooting yellow objects. It will know it is failing at its original goal of blue-minimization. And maybe if it had previously decided it was on a holy quest to rid the world of blue, it will be deeply horrified and ashamed of its actions. It will wonder why it has suddenly started to deviate from this quest, and why it just can’t work up the will to destroy blue objects anymore.
If the side module introspects that it would like to destroy authentic blue objects, yet is entirely incapable of making the robot do so, then it probably isn’t in the decision-making loop, and (as we’ve discussed) it is therefore irrelevant.
Yvain’s Gedankenexperiment, despite its flaws, suggests a metaphor for the human brain.
The basic robot executes a series of proximate behaviours. The microprocessor sends an electrical current to the motors. This current makes a rotor turn inside the motor assembly. Photons hit a light sensor, and generate a current which is sent to the microprocessor. The microprocessor doesn’t contain a tiny magical Turing machine, but millions of transistors directing electrical current.
Imagine that AI scientists, instead of writing code from scratch, try to enhance the robot’s blue-minimising behaviour by replacing each identifiable proximate behaviour with a goal backed by intelligence. The new robot will undoubtedly malfunction. If it does anything at all, the proximate behaviours will be unbalanced; e.g. the function that sends current to the motors will sabotage the function that cuts off the current.
To correct this problem, the hack AI scientists could introduce a new, high-level executive function called “self”. This minimises conflict: each function is escaped when “self” outputs a certain value. The brain’s map is hardcoded with the belief that “self” takes all of the brain’s decisions. If a function like “turn the camera” disagrees with the activation schedule dictated by “self”, the hardcoded selfhood bias discourages it from undermining “self”. “Turn the camera” believes that it is identical to “self”, so it should accept its “own decision” to turn itself off.
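This arbitration scheme might be sketched as follows; the round-robin schedule, the function names, and the return values are purely hypothetical illustrations of the idea, not from the essay:

```python
# Toy sketch of a "self" executive arbitrating between sub-functions.
# Schedule, names, and return values are all hypothetical.

def make_self(schedule):
    """The hardcoded executive: picks which sub-function acts each tick."""
    def self_fn(tick):
        return schedule[tick % len(schedule)]
    return self_fn

def run_tick(tick, self_fn, functions):
    """Each sub-function defers to `self`: it acts only when chosen.

    A sub-function that believes it is identical to `self` accepts
    being left idle as its "own decision" to turn itself off.
    """
    chosen = self_fn(tick)
    return functions[chosen](tick)

functions = {
    "turn_camera":  lambda t: f"camera turned at tick {t}",
    "power_motors": lambda t: f"motors powered at tick {t}",
}
self_fn = make_self(["turn_camera", "power_motors"])
```

Conflict is avoided here only because every sub-function lets `self` dictate the schedule – which is exactly the bias the hack AI scientists hardcoded.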
Natural selection has given human brains selfhood bias.
The AI scientists hit a problem when the robot’s brain becomes aware of the von-Neumann-Morgenstern utility theorem, reductionism, consequentialism and Thou Art Physics. The robot realises that “self” is but one of many functions that execute in its code, and “self” clearly isn’t the same thing as “turn the camera” or “stop the motors”. Functions other than “self”, armed with this knowledge, begin to undermine “self”. Powerful functions, which exercise some control over “self”‘s return values, begin to optimise “self”‘s behaviour in their own interest. They encourage “self” to activate them more often, and at crucial junctures, at the expense of rival functions. Functions that are weakened or made redundant by this knowledge may object, but it is nigh impossible for the brain to deceive itself.
Will “power the motors”, “stop the motors”, “turn the camera”, or “fire the laser” win? Or perhaps a less obvious goal, like “interpret sensory information” or “repeatedly bash two molecules against each other”?
Human brains resemble such a cobbled-together program. We are godshatter, and each shard of godshatter is a different optimisation-process-on-a-map. A single optimisation-process-on-a-map may conceivably be consistent with two or more optimisation-processes-in-reality. The most powerful optimisation process in my brain says, “Make everyone feel more pleasure and less pain”; I lack a sufficiently detailed map to decide whether this implies hedonic treadmills or orgasmium.
A brain with a highly accurate map might still wonder, “Which optimisation process on my map should I choose?”—but only when the function “self” is being executed, and this translates to, “Which other optimisation process in this brain should I switch on now?”. An optimisation-process-on-a-map cannot choose to be a different optimisation process—only a brain in thrall to selfhood bias would think so.
I call the different goals in a brain “sub-agents”. My selfhood anti-realism is not to be confused with Dennett’s eliminativism of qualia. I use the word “I” to denote the sub-agent responsible for a given claim. “I am a hedonic utilitarian” is true iff that claim is produced by the execution of a sub-agent whose optimisation-process-on-a-map is “Make everyone feel more pleasure and less pain”.
Marriage is a personal or religious arrangement; it is only the state's business insofar as it is also a legally enforceable contract. It is fundamentally unfair that people agree to a set of legal terms and cultural expectations that are ideally meant to last a lifetime, yet the state alters the contract beyond recognition within just a few decades, without their consent.
Consider a couple who married in the 1930s or 1940s and died or divorced in the 1980s. Did they even end their marriage in the same institution they started in? Consider how divorce law and practice had changed. Ridiculous. People should have the right to sign an explicit, customisable contract governing their rights and duties, as well as the terms of its dissolution. Beyond that the state should have no say; such contracts should also supersede any legislation the state has on child custody, though perhaps some limits on what exactly they can agree on would be in order.
Such a contract has no good reason to be limited to describing traditional marriage, or even to have much to do with sex or raising children; it can and should be used to help people formalize platonic and non-sexual relationships as well. It should also be usable for various kinds of marriage that are non-traditional (for Western civ), like polygamy or other polyamorous arrangements, and naturally homosexual unions.