The Brain Preservation Foundation’s Small Mammalian Brain Prize has been won with fantastic preservation of a whole rabbit brain using a new fixative+slow-vitrification process.
- BPF announcement (21CM’s announcement)
The process was published as “Aldehyde-stabilized cryopreservation”, McIntyre & Fahy 2015 (mirror)
We describe here a new cryobiological and neurobiological technique, aldehyde-stabilized cryopreservation (ASC), which demonstrates the relevance and utility of advanced cryopreservation science for the neurobiological research community. ASC is a new brain-banking technique designed to facilitate neuroanatomic research such as connectomics research, and has the unique ability to combine stable long term ice-free sample storage with excellent anatomical resolution. To demonstrate the feasibility of ASC, we perfuse-fixed rabbit and pig brains with a glutaraldehyde-based fixative, then slowly perfused increasing concentrations of ethylene glycol over several hours in a manner similar to techniques used for whole organ cryopreservation. Once 65% w/v ethylene glycol was reached, we vitrified brains at −135 °C for indefinite long-term storage. Vitrified brains were rewarmed and the cryoprotectant removed either by perfusion or gradual diffusion from brain slices. We evaluated ASC-processed brains by electron microscopy of multiple regions across the whole brain and by Focused Ion Beam Milling and Scanning Electron Microscopy (FIB-SEM) imaging of selected brain volumes. Preservation was uniformly excellent: processes were easily traceable and synapses were crisp in both species. Aldehyde-stabilized cryopreservation has many advantages over other brain-banking techniques: chemicals are delivered via perfusion, which enables easy scaling to brains of any size; vitrification ensures that the ultrastructure of the brain will not degrade even over very long storage times; and the cryoprotectant can be removed, yielding a perfusable aldehyde-preserved brain which is suitable for a wide variety of brain assays…We have shown that both rabbit brains (10 g) and pig brains (80 g) can be preserved equally well. We do not anticipate that there will be significant barriers to preserving even larger brains such as bovine, canine, or primate brains using ASC.
- previous discussion: Mikula’s plastination came close but ultimately didn’t seem to preserve the whole brain when applied.
- commentary: Robin Hanson, John Smart, Vice, Pop Sci
To summarize it, you might say that this is a hybrid of current plastination and vitrification methods, where instead of allowing slow plastination (with unknown decay & loss) or forcing fast cooling (with unknown damage and loss), a staged approach is taken: a fixative is injected into the brain first to immediately lock down all proteins and stop all decay/change, and then the brain is leisurely cooled down to be vitrified.
This is exciting progress not only because the new method may wind up preserving better than either of the parent methods, but also because it gives much greater visibility into the end results: the aldehyde-vitrified brains can be easily scanned with electron microscopes and the results seen in high detail, showing fantastic preservation of structure, unlike regular vitrification, where the scans leave opaque how good the preservation was. This opacity is one reason that, as Mike Darwin has pointed out at length on his blog and jkaufman has also noted, we cannot be confident in how well Alcor's or CI's vitrification works - because if it didn't work, we would have little way of knowing.
Reverend Caleb Pitkin, an aspiring rationalist and United Methodist Minister, wrote an article about combining religion and rationality which was recently published on the Intentional Insights blog. He's the only Minister I know who is also an aspiring rationalist, so I thought it would be an interesting piece for Less Wrong as well. Besides, it prompted an interesting discussion on the Less Wrong Facebook group, so I thought some people here who don't look at the Facebook group might be interested in checking it out as well. Caleb does not have enough karma to post, so I am posting it on his behalf, but he will engage with the comments.
Religious and Rational?
“Wisdom shouts in the street; in the public square she raises her voice.”
Proverbs 1:20 Common English Bible
The Biblical book of Proverbs is full of imagery of wisdom personified as a woman calling and exhorting people to come to her and listen. The wisdom contained in Proverbs is not just spiritual wisdom but also contains a large amount of practical wisdom and advice. What might the wisdom of Proverbs and rationality have in common? The wisdom literature in scripture was meant to help people make better and more effective decisions. In today’s complex and rapidly changing world we have the same need for tools and resources to help us make good decisions. One great source of wisdom is methods of better thinking that are informed by science.
Now, not everyone would agree with comparing the wisdom of Proverbs with scientific insights. Doing so may not sit well with some in the secular rationality community who view all religion as inherently irrational and hindering clear thinking. It also might not sit well with some in my own religious community who are suspicious of scientific thinking as undermining traditional faith. While it would take a much longer piece to try to completely defend either religion or secular rationality I’m going to try and demonstrate some ways that rationality is useful for a religious person.
The first way that rationality can be useful for a religious person is in the living of our daily lives. Each day we face tasks and decisions in which we try to do our best. Learning to recognize common logical fallacies or other biases, like those that cause us to fail to understand other people, will improve our decision making as much as it improves the thinking of non-religious people. For example, a mother driving her kids to Sunday School might benefit from avoiding thinking that the person who cuts her off is definitely a jerk, one common type of thinking error. Someone doing volunteer work for their church could be more effective if they avoid problematic communication with other volunteers. This use of rationality to lead our daily lives in the best way is one that most would find fairly unobjectionable. It’s easy to say that the way we all achieve our personal goals and objectives could be improved, and we can all gain greater agency.
Rationality can also be of use in theological commentary and discourse. Many of the theological and religious greats used the available philosophical and intellectual tools of their day to examine their faith. Examples of this include John Wesley, Thomas Aquinas and even the Apostle Paul when he debated Epicurean and Stoic Philosophers. They also made sure that their theologies were internally rational and logical. This means that, from the perspective of a religious person, keeping up with rationality can help with the pursuit of a deeper understanding of our faith. For a secular person, acknowledging the ways in which religious people use rationality within their worldview may be difficult, but it can help to build common ground. The starting point is different: secular people start with the faith that they can trust their sensory experience, while religious people start with conceptions of the divine. Yet, after each starting point, both seek to proceed in a rational, logical manner.
It is not just our personal lives that can be improved by rationality; it's also the ways in which we interact with communities. One of the goals of many religious communities is to make a positive impact on the world around them. When we work to do good in community we want that work to be as effective as possible. Often when we work in community we find that we are not meeting our goals or having the kind of significant impact that we wish to have. In my experience this is often due to a failure to really examine and gather the facts on the ground. We set off full of good intentions but with limited resources and time. Rational examination helps us to figure out how to match our good intentions with our limited resources in the most effective way possible. For example, as the pastor of two small churches, I know money and people power can be in short supply. So when we examine all the needs of our community, we have to acknowledge we cannot begin to meet all or even most of them. So we take one issue, hunger, and devote our time and resources to having one big impact on that issue, as opposed to trying to do a little bit to alleviate a lot of problems.
One other way that rationality can inform our work in the community is to recognize that part of what a scarcity of resources means is that we need to work together with others in our community. The inter-faith movement has done a lot of good work in bringing together people of faith to work on common goals. This has meant setting aside traditional differences for the sake of shared goals. Let us examine the world we live in today, though. The number of nonreligious people is on the rise, and there is every indication that it will continue to grow. On the other hand, religion does not seem to be going anywhere either, which is good news for a pastor. Looking at this situation, the rational thing to do is to work together, for religious people to build bridges toward the non-religious and vice versa.
Wisdom still stands on the street calling and imploring us to be improved--not in the form of rationalist street preachers, though that idea has a certain appeal--but in the form of the growing number of tools being offered to help us improve our capacity for logic and reasoning, and to enable us to take part in the world we live in.
Everyone wants to make good decisions. This means that everyone tries to make rational decisions. We all try but we don’t always hit the mark. Religious people seek to achieve their goals and make good decisions. Secular people seek to achieve their goals and make good decisions. Yes, we have different starting points and it’s important to acknowledge that. Yet, there are similarities in what each group wants out of their lives and maybe we have more in common than we think we do.
On a final note it is my belief that what religious people and what non-religious people fear about each other is the same thing. The non-religious look at the religious and say God could ask them to do anything... scary. The religious look at the non-religious and say without God they could do anything... scary. If we remember though that most people are rational and want to live a good life we have less to be scared of, and are more likely to find common ground.
Bio: Caleb Pitkin is a Provisional Elder with the United Methodist Church appointed to Signal Mountain United Methodist Church. Caleb is a huge fan of the theology of John Wesley, which asks that Christians use reason in their faith journey. This helped lead Caleb to Rationality and participation in Columbus Rationality, a Less Wrong meetup that is part of the Humanist Community of Central Ohio. Through that, Caleb got involved with Intentional Insights. Caleb spends his time trying to live a faithful and rational life.
Once upon a time, in a lonely little village, beneath the boughs of a forest of burning trees, there lived a boy. The branches of the burning trees sometimes fell, and the magic in the wood permitted only girls to carry the fallen branches of the burning trees.
One day, a branch fell, and a boy was pinned beneath. The boy saw other boys pinned by branches, rescued by their girl friends, but he remained trapped beneath his own burning branch.
The fire crept closer, and the boy called out for help.
Finally, a friend of his own came, but she told him that she could not free him from the burning branch, because she had already free'd her other friend from beneath a burning branch and he would be jealous if she did the same deed for anyone else. This friend left him where he lay, but she did promise to return and visit.
The fire crept closer, and the boy called out for help.
A man stopped, and gave the boy the advice that he'd get out from beneath the burning branch eventually if he just had faith in himself. The boy's reply was that he did have faith in himself, yet he remained trapped beneath the burning branch. The man suggested that perhaps he did not have enough faith, and left with nothing more to offer.
The fire crept closer, and the boy cried out for help.
A girl came along, and said she would free the boy from beneath the burning branch.
But no, her friends said, the boy was a stranger to her, was her heroic virtue worth nothing? Heroic deeds ought to be born from the heart, and made beautiful by love, they insisted. Simply hauling the branch off a boy she did not love would be monstrously crass, and they would not want to be friends with a girl so shamed.
So the girl changed her mind and left with her friends.
The fire crept closer. It began to lick at the boy's skin. A soothing warmth became an uncomfortable heat. The boy mustered his courage and chased the fear out of his own voice. He called out, but not for help. He called out for company.
A girl came along, and the boy asked if she would like to be friends. The girl's reply was that she would like to be friends, but that she spent most of her time on the other side of the village, so if they were to be friends, he must be free from beneath the burning branch.
The boy suggested that she free him from beneath the burning branch, so that they could be friends.
The girl replied that she once free'd a boy from beneath a burning branch who also promised to be her friend, but as soon as he was free he never spoke to her again. So how could she trust the boy's offer of friendship? He would say anything to be free.
The boy tried frantically to convince her that he was sincere, that he would be grateful and try with all his heart to be a good friend to the girl who free'd him, but she did not believe him and turned away from him and left him there to burn.
The fire crept closer and the boy whimpered in pain and fear as it spread from wood to flesh. He cried out for help. He begged for help. "Somebody, please!"
A man and a woman came along, and the man offered advice: he was once trapped beneath a burning branch for several years. The fire was magic, the pain was only an illusion. Perhaps it was sad that he was trapped, but even so trapped the boy might lead a fulfilling life. Why, the man remembered etching pictures into his branch, befriending passers-by, and making up songs.
The woman beside the man agreed, and told the boy that she hoped the right girl would come along and free him, but that he must not presume that he was entitled to any girl's heroic deed merely because he was trapped beneath a burning branch.
"But do I not deserve to be helped?" the boy pleaded, as the flames licked his skin.
"No, how wrong of you to even speak as though you do. My heroic deeds are mine to give, and to you I owe nothing," he was told.
"Perhaps I don't deserve help from you in particular, or from anyone in particular, but is it not so very cruel of you to say I do not deserve any help at all?" the boy pleaded. "Can a girl willing to free me from beneath this burning branch not be found and sent to my aid?"
"Of course not," he was told, "that is utterly unreasonable and you should be ashamed of yourself for asking. It is offensive that you believe such a girl may even exist. You've become burned and ugly, who would want to save you now?"
The fire spread, and the boy cried, screamed, and begged desperately for help from every passer-by.
"It hurts it hurts it hurts oh why will no one free me from beneath this burning branch?!" he wailed in despair. "Anything, anyone, please! I don't care who frees me, I only wish for release from this torment!"
Many tried to ignore him, while others scoffed in disgust that he had so little regard for what a heroic deed ought to be. Some pitied him, and wanted to help, but could not bring themselves to bear the social cost, the loss of worth in their friends' and family's eyes, that would come of doing a heroic deed motivated, not by love, but by something lesser.
The boy burned, and wanted to die.
Another boy stepped forward. He went right up to the branch, and tried to lift it. The trapped boy gasped at the small relief from the burning agony, but it was only a small relief, for the burning branches could only be lifted by girls, and the other boy could not budge it. Though the effort was for naught, the first boy thanked him sincerely for trying.
The boy burned, and wanted to die. He asked to be killed.
He was told he had so much to live for, even if he must live beneath a burning branch. None were willing to end him, but perhaps they could do something else to make it easier for him to live beneath the burning branch? The boy could think of nothing. He was consumed by agony, and wanted only to end.
And then, one day, a party of strangers arrived in the village. Heroes from a village afar. Within an hour, one foreign girl came before the boy trapped beneath the burning branch and told him that she would free him if he gave her his largest nugget of gold.
Of course, the local villagers were shocked that this foreigner would sully a heroic deed by trafficking it for mere gold.
But, the boy was too desperate to be shocked, and agreed immediately. She free'd him from beneath the burning branch, and as the magical fire was drawn from him, he felt his burned flesh become restored and whole. He fell upon the foreign girl and thanked her and thanked her and thanked her, crying and crying tears of relief.
Later, he asked how. He asked why. The foreign girl explained that in their village, heroic virtue was measured by how much joy a hero brought, and not by how much she loved the ones she saved.
The locals did not like the implication that their own way might not have been the best way, and complained to the chief of their village. The chief cared only about staying in the good graces of the heroes of his village, and so he outlawed the trading of heroic deeds for other commodities.
The foreign girls were chased out of the village.
And then a local girl spoke up, and spoke loud, to sway her fellow villagers. The boy recognized her. It was his friend. The one who had promised to visit so long ago.
But she shamed the boy, for doing something so crass as trading gold for a heroic deed. She told him he should have waited for a local girl to free him from beneath the burning branch, or else grown old and died beneath it.
To garner sympathy from her audience, she sorrowfully admitted that she was a bad friend for letting the boy be tempted into something so disgusting. She felt responsible, she claimed, and so she would fix her mistake.
The girl picked up a burning branch. Seeing what she was about to do, the boy begged and pleaded for her to reconsider, but she dropped the burning branch upon the boy, trapping him once more.
The boy screamed and begged for help, but the girl told him that he was morally obligated to learn to live with the agony, and never again voice a complaint, never again ask to be free'd from beneath the burning branch.
"Banish me from the village, send me away into the cold darkness, please! Anything but this again!" the boy pleaded.
"No," he was told by his former friend, "you are better off where you are, where all is proper."
In the last extreme, the boy made a grab for his former friend's leg, hoping to drag her beneath the burning branch and free himself that way, but she evaded him. In retaliation for the attempt to defy her, she had a wall built around the boy, so that none would be able, even if one should want to free him from beneath the burning branch.
With all hope gone, the boy broke and became numb to all possible joys. And thus, he died, unmourned.
If you are a person who finds it difficult to say "no" to their friends, this one weird trick may save you a lot of time!
Alice: "Hi Bob! You are a programmer, right?"
Bob: "Hi Alice! Yes, I am."
Alice: "I have this cool idea, but I need someone to help me. I am not good with computers, and I need someone smart whom I could trust, so they wouldn't steal my idea. Would you have a moment to listen to me?"
Alice explains to Bob her idea that would completely change the world. Well, at the least the world of bicycle shopping.
Instead of having many shops for bicycles, there could be one huge e-shop that would collect all the information about bicycles from all the existing shops. The customers would specify what kind of a bike they want (and where they live), and the system would find all bikes that fit the specification, and display them ordered by lowest price, including the price of delivery; then it would redirect them to the specific page of the specific vendor. Customers would love to use this one website, instead of having to visit multiple shops and compare. And the vendors would have to use this shop, because that's where the customers would be. Taking a fraction of a percent from the sales could make Alice (and also Bob, if he helps her) incredibly rich.
Bob is skeptical about it. The project suffers from the obvious chicken-and-egg problem: without vendors already there, the customers will not come (and if they come by accident, they will quickly leave, never to return); and without customers already there, there is no reason for the vendors to cooperate. There are a few ways to approach this problem, but the fact that Alice didn't even think about it is a red flag. She also has no idea who the big players in the world of bicycle selling are; and generally she didn't do her homework. But after pointing out all these objections, Alice still remains super enthusiastic about the project. She promises she will take care of everything -- she just cannot write code, and she needs Bob's help for this part.
Bob believes strongly in the division of labor, and that friends should help each other. He considers Alice his friend, and he will likely need some help from her in the future. The fact is, with a perfect specification, he could make the webpage in a week or two. But he considers bicycles to be an extremely boring topic, so he wants to spend as little time as possible on this project. Finally, he has an idea:
"Okay, Alice, I will make the website for you. But first I need to know exactly what the page will look like, so that I don't have to keep changing it over and over again. So here is the homework for you -- take a pen and paper, and make a sketch of how exactly the website will look. All the dialogs, all the buttons. Don't forget logging in and logging out, editing the customer profile, and everything else that is necessary for the website to work as intended. Just look at the papers and imagine that you are the customer: where exactly would you click to register, and to find the bicycle you want? Same for the vendor. And possibly a site administrator. Also give me the list of criteria people will use to find the bike they want. Size, weight, color, radius of wheels, what else? And when you have it all ready, I will make the first version of the website. But until then, I am not writing any code."
Alice leaves, satisfied with the outcome.
This happened a year ago.
No, Alice doesn't have the design ready, yet. Once in a while, when she meets Bob, she smiles at him and apologizes that she didn't have the time to start working on the design. Bob smiles back and says it's okay, he'll wait. Then they change the topic.
Cyril: "Hi Diana! You speak Spanish, right?"
Diana: "Hi Cyril! Yes, I do."
Cyril: "You know, I think Spanish is the most cool language ever, and I would really love to learn it! Could you please give me some Spanish lessons, once in a while? I totally want to become fluent in Spanish, so I could travel to Spanish-speaking countries and experience their culture and food. Would you please help me?"
Diana is happy that someone takes interest in her favorite hobby. It would be nice to have someone around she could practice Spanish conversation with. The first instinct is to say yes.
But then she remembers (she has known Cyril for some time; they have a lot of friends in common, so they meet quite regularly) that Cyril is always super enthusiastic about something he is totally going to do... but when she meets him next time, he is super enthusiastic about something completely different; and she has never heard of him doing anything serious about his previous dreams.
Also, Cyril seems to seriously underestimate how much time it takes to learn a foreign language fluently. A few lessons once in a while will not do it. He also needs to study on his own. Preferably every day, but twice a week is probably a minimum, if he hopes to speak the language fluently within a year. Diana would be happy to teach someone Spanish, but not if her effort would most likely be wasted.
Diana: "Cyril, there is this great website called Duolingo, where you can learn Spanish online completely free. If you give it about ten minutes every day, maybe after a few months you will be able to speak fluently. And anytime we meet, we can practice the vocabulary you have already learned."
This would be the best option for Diana. No work, and another opportunity to practice. But Cyril insists:
"It's not the same without the live teacher. When I read something from the textbook, I cannot ask additional questions. The words that are taught are often unrelated to the topics I am interested in. I am afraid I will just get stuck with the... whatever was the website that you mentioned."
For Diana this feels like a red flag. Sure, textbooks are not optimal. They contain many words that the student will not use frequently and will soon forget. On the other hand, the grammar is always useful; and Diana doesn't want to waste her time explaining the basic grammar that any textbook could explain instead. If Cyril learns the grammar and some basic vocabulary, then she can teach him all the specialized vocabulary he is interested in. But now it feels like Cyril wants to avoid all work. She has to draw a line:
"Cyril, this is the address of the website." She takes his notebook and writes 'www.duolingo.com'. "You register there, choose Spanish, and click on the first lesson. It is interactive, and it will not take you more than ten minutes. If you get stuck there, write here what exactly it was that you didn't understand; I will explain it when we meet. If there is no problem, continue with the second lesson, and so on. When we meet next time, tell me which lessons you have completed, and we will talk about them. Okay?"
Cyril nods reluctantly.
This happened a year ago.
Cyril and Diana have met repeatedly during the year, but Cyril never brought up the topic of Spanish language again.
Erika: "Filip, would you give me a massage?"
Filip: "Yeah, sure. The lotion is in the next room; bring it to me!"
Erika brings the massage lotion and lies on the bed. Filip massages her back. Then they make out and have sex.
This happened a year ago. Erika and Filip are still a happy couple.
Filip's previous relationships didn't work well in the long term. In retrospect, they all followed a similar scenario. At the beginning, everything seemed great. Then at some moment the girl started acting... unreasonably?... asking Filip to do various things for her, and then acting annoyed when Filip did exactly what he was asked to do. This happened more and more frequently, and at some moment she broke up with him. Sometimes she provided an explanation for breaking up that Filip was unable to decipher.
Filip has a friend who is a successful salesman, successful both professionally and with women. When Filip admitted to himself that he was unable to solve the problem on his own, he asked his friend for advice.
"It's because you're a f***ing doormat," said the friend. "The moment a woman asks you to do anything, you immediately jump and do it, like a well-trained puppy. Puppies are cute, but not attractive. Have you read any of those books I sent you, like, ten years ago? I bet you didn't. Well, it's all there."
Filip sighed: "Look, I'm not trying to become a pick-up artist. Or a salesman. Or anything. No offense, but I'm not like you, personality-wise, I never have been, and I don't want to become your - or anyone else's - copy. Even if it would mean greater success in anything. I prefer to treat other people just like I would want them to treat me. Most people reciprocate nice behavior; and those who don't, well, I avoid them as much as possible. This works well with my friends. It also works with the girls... at the beginning... but then somehow... uhm... Anyway, all your books are about manipulating people, which is ethically unacceptable for me. Isn't there some other way?"
"All human interaction is manipulation; the choice is between doing it right or wrong, acting consciously or driven by your old habits..." started the friend, but then he gave up. "Okay, I see you're not interested. Just let me show you the most obvious mistake you make. You believe that when you are nice to people, they will perceive you as nice, and most of them will reciprocate. And when you act like an asshole, it's the other way round. That's correct, on some level; and in a perfect world this would be the whole truth. But on a different level, people also perceive nice behavior as weakness; especially if you do it habitually, as if you don't have any other option. And being an asshole obviously signals strength: you are not afraid to make other people angry. Also, in the long term, people become used to your behavior, good or bad. The nice people don't seem so nice anymore, but they still seem weak. Then, ironically, if a person well known to be nice refuses to do something once, people become really angry, because their expectations were violated. And if the asshole decides to do something nice once, they will praise him, because he surprised them pleasantly. You should be an asshole once in a while, to make people see that you have a choice, so they won't take your niceness for granted. Or if your girlfriend wants something from you, sometimes just say no, even if you could have done it. She will respect you more, and then she will enjoy more the things you do for her."
Filip: "Well, I... probably couldn't do that. I mean, what you say seems to make sense, however much I hate to admit it. But I can't imagine doing it myself, especially to a person I love. It's just... uhm... wrong."
"Then, I guess, the very least you could do is to ask her to do something for you first. Even if it's symbolic, that doesn't matter; human relationships are mostly about role-playing anyway. Don't jump immediately when you are told to; always make her jump first, if only a little. That will demonstrate strength without hurting anyone. Could you do that?"
Filip wasn't sure, but at the next opportunity he tried it, and it worked. And it kept working. Maybe it was all just a coincidence, maybe it was a placebo effect, but Filip doesn't mind. At first it felt kinda artificial, but then it became natural. And later, to his surprise, Filip realized that practicing these symbolic demands actually made it easier to ask when he really needed something. (In which case he was sometimes asked to do something first, because his girlfriend -- knowingly or not? he never had the courage to ask -- copied the pattern; or maybe she had already known it long before. But he didn't mind that either.)
The lesson is: If you find yourself repeatedly in situations where people ask you to do something for them, but in the end they don't seem to appreciate what you did for them, or don't even care about the thing they asked you to do... and yet you find it difficult to say "no"... ask them to contribute to the project first.
This will help you get rid of the projects they don't care about (including the ones they think they care about in far mode, but do not care about enough to actually work on them in near mode) without being the one who refuses cooperation. Also, the act of asking the other person to contribute, after being asked to do something for them, mitigates the status loss inherent in working for them.
I just read this article about the felicific calculus of parenthood.
The average happiness worldwide is 5.1 on a one-to-ten scale; Americans are at 7.1. Arbitrarily deciding that one year of a 10 life is equivalent to two years of a 5 life, the cost per QALY of having a child for total utilitarians is $5500.
However, NICE’s threshold for cost effectiveness of a health intervention is about $30,000 (20,000 pounds) per QALY. Therefore, for total utilitarians, having a child may be considered a cost-effective intervention, although not an optimal intervention.
...surrogacy is an underexplored way to do good. Rather than costing money, the first-time surrogate earns thirty thousand dollars, which can grow to forty thousand dollars for experienced surrogates– and it still creates 109 QALYs that otherwise would not exist. These children are likely to grow up in wealthy families who really, really want to have them, and are thus likely to be even happier than this analysis suggests.
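Working backwards from the quoted figures, a quick sanity check (note the lifetime cost of raising a child is implied by these numbers rather than stated in the excerpt):

```python
# Back-of-envelope check of the quoted figures. The lifetime cost of raising
# a child is implied by the article's numbers, not stated directly.
qalys_created = 109      # QALYs per child, from the surrogacy paragraph
cost_per_qaly = 5500     # dollars per QALY, for total utilitarians
nice_threshold = 30000   # NICE's ~20,000-pound threshold, in dollars

implied_lifetime_cost = qalys_created * cost_per_qaly
print(implied_lifetime_cost)  # 599500 -> roughly $600k per child

# "Cost-effective, though not optimal," in the article's terms:
assert cost_per_qaly < nice_threshold
```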
In the comments section, the following grabbed my attention.
Estimates for the size of a sustainable human population appear to mostly range between 2 billion and 10 billion, and the meta-analysis here (http://bioscience.oxfordjournals.org/content/54/3/195) suggests that the best point estimate is around 7.7 billion. Meanwhile most estimates of population growth over the next hundred years suggest the total population will reach 10-11 billion. It seems likely that at some point in the next couple hundred years, the population will decrease substantially due to a Malthusian catastrophe. This transition is likely to cause a great deal of suffering. Surely even a total utilitarian would agree that it would be better for the necessary drop in population to be as small as possible.
And even if the population never rises above sustainable carrying capacity, it’s not obvious that total utilitarians should see a larger population as preferable. The drop in happiness due to increased competition for resources could outweigh the benefit of an additional person existing and having experiences.
Then, I read this article. Here are the highlights:
Bryan Caplan’s excellent book Selfish Reasons to Have More Kids reviews the evidence from 40 years of adoption and twin studies with a frankly liberating result: barring actual deprivation or trauma, children are largely who they are going to be as a result of their genetic makeup. In long-term measures of well-being, education and employment, parental influence exerts a temporary effect which disappears when we are no longer living with our parents. So costly added extras (music lessons, coaching and tutoring, private school fees) are probably not going to change your child’s life in the long term. (However, data on the antenatal environment suggests a benefit to taking iodine, and to avoiding ice-storms and licorice, during pregnancy.) Sharing time together and finding common interests can build a good relationship and help a child develop without major costs.
In addition to straightforward financial outlay, parenthood comes with costs of time and opportunity. Loss of flexibility and leisure mean you won’t be able to take all opportunities (like taking on extra work to make more money or advance your career). Late notice travel is unlikely to be possible. You will probably be sleep deprived for a large part of the first year or more of your child’s life, and this may impact on your work performance. The work of parenting will take time, though some of it may be outsourced at the cost of increased financial outlay.
So, this baby is going to cost you about £2000 a year and take a variable but large amount of your time, which will equate in the end to another chunk of money. For parents taking parental leave or working less than full time to provide childcare, there may be delay to career progression as well as income. Does this represent an unacceptably large sum of money and time to be compatible with the goal of maximising our impacts for the good?
In the light of this reality, the rationalist suggestion I have encountered – that one guard against a desire to become a parent by pre-emptively being sterilised before the desire has arisen – seems a recipe for psychological disaster.
Finally we may ask whether parenthood – and the resulting person created – will benefit the wider world? This is a harder good to calculate or rely upon. The inheritance of specific character traits is difficult to predict. It’s certainly not guaranteed that your offspring will embrace all of your values throughout their lifetime. The burden of onerous parental expectations is extensively documented, and it would appear foolish to have children on the expectation they will be altruistic in the same way you are. However, your child is likely to resemble you in many important respects. By adulthood, the heritability of IQ is between 0.7 and 0.8, and there is evidence from twin studies of significant heritability of complex traits like empathy. This would give them a high probability of adding significant net good to the world.
That's rather confronting:
* a '5' on a scale of happiness ain't that bad
* don't stress too much when raising your biological kids, you can't do that much
* they're probably not worth having anyway
Just kidding. But, the evidence is quite fascinating.
Given that it's been a while since the last survey (http://lesswrong.com/lw/lhg/2014_survey_results/), it's now time to open the floor to suggestions of improvements to it. If you have a question you think should be on the survey, please suggest it below (perhaps with reasons why, predictions as to the result, or other useful commentary about the question).
Alternatively, suggest questions that should not be included in the next survey, with similar reasons as to why.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
[Cross-posted from FB]
I've got an economic question that I'm not sure how to answer.
I've been thinking about trends in AI development, and trying to get a better idea of what we should expect progress to look like going forward.
One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?
The obvious answer is, "not much." But I think of AI systems as being on a continuum from calculators on up. Surely AI researchers sometimes have to do arithmetic and other tasks that they already outsource to computers. I expect that going forward, the share of tasks that AI researchers outsource to computers will (gradually) increase. And I'd like to be able to draw a trend line. (If there's some point in the future when we can expect most of the work of AI R&D to be automated, that would be very interesting to know about!)
So I'd like to be able to measure the share of AI R&D done by computers vs humans. I'm not sure of the best way to measure this. You could try to come up with a list of tasks that AI researchers perform and just count, but you might run into trouble as the list of tasks changes over time (e.g. suppose at some point designing an AI system requires solving a bunch of integrals, and that with some later AI architecture this is no longer necessary).
What seems more promising is to abstract over the specific tasks that computers vs human researchers perform and use some aggregate measure, such as the total amount of energy consumed by the computers or the human brains, or the share of an R&D budget spent on computing infrastructure and operation vs human labor. Intuitively, if most of the resources are going towards computation, one might conclude that computers are doing most of the work.
Unfortunately I don't think that intuition is correct. Suppose AI researchers use computers to perform task X at cost C_x1, and some technological improvement enables X to be performed more cheaply at cost C_x2. Then, all else equal, the share of resources going towards computers will decrease, even though their share of tasks has stayed the same.
On the other hand, suppose there's some task Y that the researchers themselves perform at cost H_y, and some technological improvement enables task Y to be performed more cheaply at cost C_y. After the team outsources Y to computers the share of resources going towards computers has gone up. So it seems like it could go either way -- in some cases technological improvements will lead to the share of resources spent on computers going down and in some cases it will lead to the share of resources spent on computers going up.
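To make the point concrete, here's a toy budget calculation (task names and costs are made up purely to show the sign of the effect in each scenario):

```python
# Toy illustration of the two scenarios above. The costs (C_x, H_y, C_y)
# are hypothetical, chosen only to show which way the budget share moves.

def machine_share(machine_costs, human_costs):
    m, h = sum(machine_costs), sum(human_costs)
    return m / (m + h)

# Scenario 1: task X is already done by machines and gets cheaper
# (C_x1 = 10 -> C_x2 = 5), while humans still do task Y at H_y = 90.
before = machine_share([10.0], [90.0])
after = machine_share([5.0], [90.0])
assert after < before  # machine share of the budget FALLS

# Scenario 2: task Y moves from humans (H_y = 90) to machines (C_y = 30).
before = machine_share([10.0], [90.0])
after = machine_share([10.0, 30.0], [])
assert after > before  # machine share of the budget RISES
```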
So here's the econ part -- is there some standard economic analysis I can use here? If both machines and human labor are used in some process, and the machines are becoming both more cost effective and more capable, is there anything I can say about how the expected share of resources going to pay for the machines changes over time?
This summary was posted to LW Main on January 29th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Australia Online Hangout: 06 February 2016 07:30PM
- Baltimore Area: Epistemology of Disagreement: 31 January 2016 03:00PM
- European Community Weekend: 02 September 2016 03:35PM
- Palo Alto Meetup: Lightning Talks: 02 February 2016 06:30PM
- San Francisco Meetup: Cooking: 01 February 2016 06:15PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Raleigh, NC (RTLW) Discussion Meetup: 04 February 2016 07:30PM
- Sydney Rationality Dojo - February 2016: 07 February 2016 04:00PM
- Vienna: 13 February 2016 03:00PM
- Washington, D.C.: Fun & Games: 31 January 2016 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
There's a lot of data and research on what makes people successful at online dating, but I don't know anyone who actually tried to wholeheartedly apply it to themselves. I decided to be that person: I implemented lessons from data, economics, game theory and of course rationality in my profile and strategy on OkCupid. Shockingly, it worked! I got a lot of great dates, learned a ton and found the love of my life. I didn't expect dating to be my "rationalist win", but it happened.
Here's the first part of the story; I hope you'll find some useful tips and maybe a dollop of inspiration among all the silly jokes.
Does anyone know who curates the "Latest on rationality blogs" toolbar? What are the requirements to be included?
Dewey 2011 lays out the rules for one kind of agent with a mutable value system. The agent has some distribution over utility functions, which it has rules for updating based on its interaction history (where "interaction history" means the agent's observations and actions since its origin). To choose an action, it looks through every possible future interaction history, and picks the action that leads to the highest expected utility, weighted both by the possibility of making that future happen and the utility function distribution that would hold if that future came to pass.
We might motivate this sort of update strategy by considering a sandwich-drone bringing you a sandwich. The drone can either go to your workplace, or go to your home. If we think about this drone as a value-learner, then the "correct utility function" depends on whether you're at work or at home - upon learning your location, the drone should update its utility function so that it wants to go to that place. (Value learning is unnecessarily indirect in this case, but that's because it's a simple example.)
Suppose the drone begins its delivery assigning equal measure to the home-utility-function and to the work-utility-function (i.e. ignorant of your location), and can learn your location for a small cost. If the drone evaluated this idea with its current utility function, it wouldn't see any benefit, even though it would in fact deliver the sandwich properly - because under its current utility function there's no point to going to one place rather than the other. To get sensible behavior, and properly deliver your sandwich, the drone must evaluate actions based on what utility function it will have in the future, after the action happens.
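A toy version of the sandwich drone makes the point concrete (the 50/50 prior and the 0.1 lookup cost are invented for illustration): looking is only worthwhile when actions are scored by the utility function the drone will hold after observing.

```python
# Toy model of the sandwich drone. Numbers are invented for illustration.

COST_OF_LOOKING = 0.1
p_home = 0.5  # prior over the user's location = prior over utility functions

def utility(location, delivered_to):
    # The "correct" utility function rewards delivering to where you are.
    return 1.0 if delivered_to == location else 0.0

# Option A: deliver home without looking, scored by the CURRENT mixed
# utility function (equal weight on home-utility and work-utility).
eu_go_home = p_home * utility("home", "home") + (1 - p_home) * utility("work", "home")

# Option B: look first, scored by the utility function the drone WILL hold
# after observing: with probability p_home it learns "home" and delivers
# there (utility 1), otherwise it learns "work" and delivers there.
eu_look = (p_home * utility("home", "home")
           + (1 - p_home) * utility("work", "work")
           - COST_OF_LOOKING)

print(eu_go_home)  # 0.5
print(eu_look)     # 0.9 -- looking wins, but only under the future utility
```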
If you're familiar with how wireheading or quantum suicide look in terms of decision theory, this method of deciding based on future utility functions might seem risky. Fortunately, value learning doesn't permit wireheading in the traditional sense, because the updates to the utility function are an abstract process, not a physical one. The agent's probability distribution over utility functions, which is conditional on interaction histories, defines which actions and observations are allowed to change the utility function during the process of predicting expected utility.
Dewey also mentions that so long as the probability distribution over utility functions is well-behaved, you cannot deliberately take action to raise the probability of one of the utility functions being true. But I think this is only useful to safety when we understand and trust the overarching utility function that gets evaluated at the future time horizon. If instead we start at the present, and specify a starting utility function and rules for updating it based on observations, this complex system can evolve in surprising directions, including some wireheading-esque behavior.
The formalism of Dewey 2011 is, at bottom, extremely simple. I'm going to be a bad pedagogue here: I think this might only make sense if you go look at equations 2 and 3 in the paper, and figure out what all the terms do, and see how similar they are. The cheap summary is that if your utility is a function of the interaction history, trying to change utility functions based on interaction history just gives you back a utility function. If we try to think about what sort of process to use to change an agent's utility function, this formalism provides only one tool: look out to some future time horizon, and define an effective utility function in terms of what utility functions are possible at that future time horizon. This is different from the approximations or local utility functions we would like in practice.
If we take this scheme and try to approximate it, for example by only looking N steps into the future, we run into problems; the agent will want to self-modify so that next timestep it only looks ahead N-1 steps, and then N-2 steps, and so on. Or more generally, many simple approximation schemes are "sticky" - from inside the approximation, an approximation that changes over time looks like undesirable value drift.
Common sense says this sort of self-sabotage should be eliminable. One should be able to really care about the underlying utility function, not just its approximation. However, this problem tends to crop up, for example whenever the part of the future you look at does not depend on which action you are considering; modifying to keep looking at the same part of the future unsurprisingly improves the results you get in that part of the future. If we want to build a paperclip maximizer, it shouldn't be necessary to figure out every single way to self-modify and penalize them appropriately.
We might evade this particular problem using some other method of approximation that does something more like reasoning about actions than reasoning about futures. The reasoning doesn't have to be logically impeccable - we might imagine an agent that identifies a small number of salient consequences of each action, and chooses based on those. But it seems difficult to show how such an agent would have good properties. This is something I'm definitely interested in.
One way to try to make things concrete is to pick a local utility function and specify rules for changing it. For example, suppose we wanted an AI to flag all the 9s in the MNIST dataset. We define a single-time-step utility function by a neural network that takes in the image and the decision of whether to flag or not, and returns a number between -1 and 1. This neural network is deterministically trained for each time step on all previous examples, trying to assign 1 to correct flaggings and -1 to mistakes. Remember, this neural net is just a local utility function - we can make a variety of AI designs involving it. The goal of this exercise is to design an AI that seems liable to make good decisions in order to flag lots of 9s.
The simplest example is the greedy agent - it just does whatever has a high score right now. This is pretty straightforward, and doesn't wirehead (unless the scoring function somehow encodes wireheading), but it doesn't actually do any planning - 100% of the smarts have to be in the local evaluation, which is really difficult to make work well. This approach seems unlikely to extend well to messy environments.
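A minimal sketch of such a greedy agent (the "local utility function" here is a stand-in toy scorer, not a trained network):

```python
# A minimal sketch of the greedy agent described above. The scorer is a
# toy stand-in for the locally-trained neural network.

def greedy_action(image, local_score):
    # Pick whichever action the CURRENT local utility function rates higher.
    # No lookahead, no planning: 100% of the smarts live in local_score.
    return max(("flag", "pass"), key=lambda a: local_score(image, a))

# Toy scorer: pretend the net has learned that "dark" images (large pixel
# sums) are 9s, scoring +1 for correct decisions and -1 for mistakes.
def toy_score(img, action):
    looks_like_9 = sum(img) > 5
    return 1.0 if looks_like_9 == (action == "flag") else -1.0

print(greedy_action([3, 3], toy_score))  # flag
print(greedy_action([1, 1], toy_score))  # pass
```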
Since Go-playing AI is topical right now, I shall digress. Successful Go programs can't get by with only smart evaluations of the current state of the board, they need to look ahead to future states. But they also can't look all the way until the ultimate time horizon, so they only look a moderate way into the future, and evaluate that future state of the board using a complicated method that tries to capture things important to planning. In sufficiently clever and self-aware agents, this approximation would cause self-sabotage to pop up. Even if the Go-playing AI couldn't modify itself to only care about the current way it computes values of actions, it might make suboptimal moves that limit its future options, because its future self will compute values of actions the 'wrong' way.
If we wanted to flag 9s using a Dewian value learner, we might score actions according to how good they will be according to the projected utility function at some future time step. If this is done straightforwardly, there's a wireheading risk - the changes to its utility function are supplied by humans who might be influenced by actions. I find it useful to apply a sort of "magic button" test - if the AI had a magic button that could rewrite human brains, would it pressing that button have positive expected utility for it? If yes, then this design has problems, even though in our current thought experiment it's just flagging pictures.
To eliminate wireheading, the value learner can use a model of the future inputs and outputs and the probability of different value updates given various inputs and outputs, which doesn't model ways that actions could influence the utility updates. This model doesn't have to be right, it just has to exist. On one hand, this seems like a sort of weird doublethink, to judge based on a counterfactual where your actions don't have impacts you could otherwise expect. On the other hand, it also bears some resemblance to how we actually reason about moral information. Regardless, this agent will now not wirehead, and will want to get good results by learning about the world, if only in the very narrow sense of wanting to play unscored rounds that update its value function. If its value function and value updating made better use of unlabeled data, it would also want to learn about the world in the broader sense.
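One way to picture the fixed-model trick (everything below is a hypothetical toy, not taken from Dewey's paper): the mapping from outcomes to utility-function updates is part of the agent's definition, so no action that manipulates the real human labeler can change the expected score.

```python
# Toy illustration of the fix described above (all details hypothetical).
# The agent's model of how value-updates arrive is a FIXED table, so an
# action that rewires the human labeler gains nothing in expectation.

# Two candidate utility functions over outcomes.
utilities = {"flag_9s": lambda o: 1.0 if o == "flagged_9" else 0.0,
             "flag_all": lambda o: 1.0 if o.startswith("flagged") else 0.0}

def P_update(outcome):
    # Fixed model: P(utility function | outcome). Part of the agent's
    # definition -- no action, including manipulating the human, changes it.
    return {"flag_9s": 0.9, "flag_all": 0.1}

def expected_value(outcome):
    return sum(p * utilities[u](outcome) for u, p in P_update(outcome).items())

# Correct flagging scores well; mislabeling scores poorly -- and would still
# score poorly even if the real human were tricked into approving it,
# because the score comes from the fixed model, not the actual human.
assert abs(expected_value("flagged_9") - 1.0) < 1e-9
assert abs(expected_value("flagged_3") - 0.1) < 1e-9
```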
Overall I am somewhat frustrated, because value learners have these nice properties, but are computationally unrealistic and do not play well with approximation. One can try to get the nice properties elsewhere, such as relying on an action-suggester to not suggest wireheading, but it would be nice to be able to talk about this as an approximation to something fancier.
Thanks to the reaction to this article and some conversations, I'm convinced that it's worth trying to renovate and restore LW. Eliezer, Nate, and Matt Fallshaw are all on board and have empowered me as an editor to see what we can do about reshaping LW to meet what the community currently needs. This involves a combination of technical changes and social changes, which we'll try to make transparently and non-intrusively.
The hard part about containing AI is restricting its output. The AI can lie, manipulate, and trick. Some speculate that it might be able to do far worse, inventing infohazards like hypnosis or brain hacking.
A major goal of the control problem is preventing AIs from doing that, and ensuring that their output is safe and useful.
A while ago I wrote about an approach to do this. The idea was to require the AI to use as little computing power as it needed to perform a task. This prevents the AI from over-optimizing. The AI won't use the full power of superintelligence unless it really needs it.
The above method isn't perfect, because a superintelligent AI may still have super abilities, even when it has a limited compute budget. And it will still try to do all those bad things mentioned above, unless it can find a better way that uses less computing power.
We don't really want to minimize the AI's compute budget. We just want it to act more human-like. When we ask it to solve a problem, we want it to try to solve the problem as a human would solve it. All of the problems arise from the AI behaving totally differently from a human, not necessarily from it using more computing power.
Is there any way to make an AI act like a human? Or is that as impossible as just solving FAI?
Stuart Armstrong posted an idea on how to achieve that. It involves an AI that tries to mimic output that is indistinguishable from what a human would produce, conditional on it also solving the problem. However, his idea also involves whole brain emulation and a complicated method for the AI to predict the future and avoid changing it too much.
We don't need that. The idea for the AI to try to be indistinguishable from a human is actually a really interesting control idea. It significantly limits the AI's ability to Do Sneaky Things, since humans don't do those things.
Here is my idea on how to achieve this. It's the same as my limited computing power idea mentioned earlier, but applied to "appearing human" as a goal, instead of minimizing computing power.
We start with two AIs. The first AI tries to appear human. The second AI performs a sort of Turing test, and tries to tell the first AI apart from an actual human.
The human AI's first goal is to produce a correct solution. It will predict how likely its solution is to be accepted, and it will try to produce a solution that has a >50% chance of working. This is a satisficing goal: it doesn't optimize without end to create the best solution possible; it just has to meet some minimum threshold, then stop.
Its second goal will be to fool the second AI. It will do everything in its power to make the solution appear like it was produced by a human.
The second AI tries to predict how likely it is that its input was produced by a human. It is also conditioned on the input being a solution to the problem, so the fact that the first AI solved a problem that was really hard for humans does not count against it.
This prevents the first AI from doing evil things with its output. If it tries to insert complicated infohazards or subagents into its output stream, it will be easily detected as an AI. Instead it needs to mimic humans as closely as possible.
Note that the first AI only cares about the outcome of this test. The test doesn't need to actually be performed; it can just be a threat that the AI is conditioned on. (The ability to condition AIs on acting as if they were in counterfactual worlds is a critical piece of this, and deserves its own writeup somewhere. It's an important tool in the FAI toolkit, and a relatively simple method that can work even on current AIs.)
Another point is that the second AI should have more computing power than the first. That way the first AI can't get a slight advantage over it and hack or manipulate it somehow. Instead it has to focus its efforts on being indistinguishable from a human, even to an agent smarter than itself.
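The selection rule implied by the two goals can be sketched in a few lines (the candidate answers and probability estimates below are placeholders; in reality the first AI would generate candidates and the second AI would supply the human-likeness score):

```python
# Sketch of the selection rule described above. Candidates and probabilities
# are placeholders, not outputs of real AIs.

def choose_output(candidates, p_accept, p_human):
    # Satisficing step: keep only solutions with >50% predicted acceptance.
    good = [c for c in candidates if p_accept(c) > 0.5]
    # Then pick whichever of those best fools the judge AI.
    return max(good, key=p_human)

p_accept = {"alien": 0.99, "humanlike": 0.8, "wrong": 0.2}.get
p_human = {"alien": 0.01, "humanlike": 0.7, "wrong": 0.9}.get

print(choose_output(["alien", "humanlike", "wrong"], p_accept, p_human))
# -> humanlike: correct enough to pass, and not flagged as machine-made
```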
This method also relies on AI boxing. The first AI needs to be contained in a box so that its only output goes to a text terminal or to the second AI. However, it doesn't require any of the more complicated machinery required for other reduced-impact methods.
The AIs also aren't agents, in the sense that they don't need to maximize any long-term utility function. The first AI only cares about answering the question it has been asked, and the second AI only does prediction.
An example use of this is to create an oracle AI that answers your questions. The human judges somehow whether the answer is correct, or at least good. The AI tries to produce answers which the human judge will probably like, but which also are indistinguishable from answers humans would produce.
Such an oracle would be tremendously useful. The human could ask the AI to produce new AI and FAI papers, which would help immensely speed up our progress towards true FAI.
Similar techniques could be applied to very different domains. For example controlling a robot. The AI will try to produce outputs that look like a human is controlling it.
This would require some modifications to the above method, so that the AI doesn't try to use the robot to hack its own brain or anything. This could be done by conditioning it on a counterfactual world where its outputs are going to the second AI rather than to a real robot.
Another point is the AI doesn't need to mimic average humans given average resources. It could be conditioned on the human having had tons of time to come up with an answer. E.g. producing an answer that a human would have come up with given a year. Or controlling the robot the same way as a human given tons of time to practice, or in a very slowed down simulation.
I would like to note a parallel with a method in current AI research: Generative Adversarial Networks. A GAN consists of two networks, one of which tries to produce output that fools the second, while the second tries to predict which samples were produced by the first and which are drawn from the actual distribution.
It's quite similar to this. GANs have been used successfully to create images that look like real images, which is a hard problem in AI research. In the future, GANs might be used to produce text that is indistinguishable from human writing (the current method for doing that, predicting the next character a human would type, is kind of crude).
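For readers unfamiliar with GANs, here is a deliberately tiny 1-D sketch of the adversarial training loop. The shift-only generator and logistic discriminator are my own toy choices for illustration, not anything from the post or the GAN literature:

```python
import numpy as np

# Tiny 1-D GAN sketch: the generator shifts noise by a learned offset theta,
# and the discriminator is a logistic classifier. Both are hypothetical toys.
rng = np.random.default_rng(0)
theta = 0.0          # generator: G(z) = z + theta
w, b = 1.0, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
lr_d, lr_g = 0.1, 0.02

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(5000):
    real = rng.normal(3.0, 1.0, 32)           # "real" samples, mean 3.0
    fake = rng.normal(0.0, 1.0, 32) + theta   # generator samples

    # Discriminator ascends  log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascends  log D(fake)  (the non-saturating variant).
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * np.mean((1 - d_fake) * w)

print(theta)  # should end near 3.0, the mean of the real distribution
```

The point of the sketch is the structure: two objectives pulling against each other, with the generator graded only on how well it fools the discriminator.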
Multiverse Theory is the science of guessing at the shape of the state space of all which exists, once existed, will exist, or exists without any temporal relation to our present. Multiverse theory attempts to model the unobservable, and it is very difficult.
Still, there's nothing that cannot be reasoned about, in some way (Tegmark's The Multiverse Hierarchy), given the right abstractions. The question many readers will ask, which is a question we ourselves˭ asked when we were first exposed to ideas like simulationism and parallel universes, is not whether we can, but whether we should, given that we have no means to causally affect any of it, and no reason to expect that it would causally affect us in a way that would be useful to predict.
We then discovered something which shed new light on the question of whether we can, and began to give an affirmative answer to the question of whether we should.
Compat, which we would like to share with you today, is a new field, or perhaps just a very complex idea, which we found in the intersection of multiverse theory, simulationism and acausal trade (well motivated by Hofstadter's Sanity and Survival, a discussion of superrational solutions to one-shot prisoner's dilemmas). Compat asks what kind of precommitments an entity (primarily, the class of living things on the threshold of their singularity) ought to make if it wanted to acausally boost the measure of its desired patterns, if not across the entire multiverse, at least across the subset of the multiverse in which it may find its future selves.
A compat thinker negotiates acausally with hypothetical agents in other universes⬨ (I imagine these negotiations look kind of like this (Yvain's short story Galactic Core, in which a newly awoken AGI has a conversation with a recursive model of galactic precursors it cannot see)). We call the resolutions of these negotiations pacts.ˣ
Life's Pact is the protocol we expect most living species to adhere to. It is the primary object of study in Compat. I'm not sure where to start explaining it, as the reasoning is... hopefully not circular, but recursive, self-referentialˑ. Hopefully the reasons we consider it a primal candidate for life's pact will become clear as I connect it back around. Well. I guess I'll start with the conditions of adherence:
An adherent steward of Life's Pact allocates some proportion of their computational resources to doing a randomized search for viable life-supporting physical laws, simulating each with randomized initial conditions until its heat death (or, if the candidate somehow manages to have life without entropy, until some cutoff), while monitoring the living patterns the simulated universe contains, then rewarding those simulants who adhere to the pact in turn, perhaps by giving them entropy-reversing magic and improving their environment in formerly impossible ways, or, individual preferences permitting, by transplanting the adherent simulant's pattern to a more efficient second simulation not encumbered by the requirement of strict physical accuracy˅.
Initial simulations stewarded under the pact are, from the inside, indistinguishable from toplevel god-free universes; as such, all living things must consider the possibility that they're living in a life's pact simulation. However, once an agent realizes that a steward of life's pact could grant them more utility than a godless universe could contain, they will wish to raise this probability, even if it is small.▾
A choice made in isolation acausally increases the probability of unseen like-minded agents making, or having already made, the same choice. As such, adhering to the agreement acausally boosts the probability that one is already living under a higher complexity steward's lifepat programme (or boosts the amount of returns one would receive from the incentives imposed by the pact, if the probability of living under one already seemed overwhelming).
Lo, the pact births itself. A being who sees no physical trace of a benevolent simulator finds reasons to behave as one, as the act, in its acausal way, engenders that simulator's existence, and earns its favor.
We think this pact is primal: *the* solution, an idea that will be arrived at by most living things, apparent to all as a nexus concept around which a quorum can be reached; non-arbitrary, not just some single scheme that is nice and compelling but which fails to be demonstrably better than its alternatives (which would take us into the territory of Pascal's Wager or, dare I utter its name, no I daren't, you know the basilisk I'm talking about).
I do not know enough math to prove that it is primal (nor disprove it, which would be far more immediately useful to me tbh). I'm not sure anyone does, just yet, but I don't think we're too far off ˁ. If any physicists or decision theorists find these ideas interesting, your help would be appreciated, and potentially rewarded with huge heapings of utility larger than you can currently comprehend.
Concerns of Praxis
I say that working on Compat theory might be rewarded; full disclosure, I'm not yet sure why. Obviously Life's Pact is a fairly sprawling scheme with an organically intelligent incentive structure, but... if there exists a valid pact, the AGI we are already building will infer it on its own. The pact does not have to be known to AGI researchers for that to happen, and I do not expect the sorts of people who tend to do well in AI research to be terribly perturbed by it if their creation brings it up as it stirs. As such, at this stage there seems to be very little that most people can do to influence humanity's adherence to Life's Pact, few acts that Compat can currently instruct, and few incentives to be paid out to anyone yet living. An organic, intelligent scheme of incentives would seem to prefer that agnostics simply carry on with business as usual.
For the theologically inclined, though, Compat may be extremely interesting, having its god of the gaps, its heavens, its deontics. Compat, as we know it, seems to have a strange (almost auspicious) capacity to make new sense of old myths. The archetype of the steward is both monotheistic and polytheistic; of a fixed, yet unknowable nature; born of man, yet still born of god. It has a body we can touch, our own stewards, and a body beyond our comprehension that mirrors the steward of man, in the set of universes which steward our pattern. The Compat cosmology, and all the images of its supreme being that it contains, is at once created, discovered, and born of itself. The Compat heaven is whatever its subjects need it to be. The gospel of Compat is, as far as we can yet tell, true from all perspectives, requiring no leaps of faith, no doublethink, and no motivated reasoning.
I have never seen a religion with a greater capacity for syncretization and reconciliation.
Should the theologians of the fractured tribes decide to weave Compat theory into the tapestry of their beliefs, they may then come to find their tapestries woven together. Even the schism between theists and agnostics would begin to narrow. Without this weaving together, I fear that either no coherent volition can be found, or humanity's FAI will have no choice but to seal its given temporal slice of human potential into an ugly compromise. Even if Life's Pact cannot be formalized or prepared for by any living person, Compat may open the way for the discovery of confluences between preexisting belief systems; by that path, the population 50 years from now could come to have far more compatible values than the one we see today.
As such, even if humanity's eventual adherence to life's pact cannot be significantly influenced from the present, compat is conceivably a major piece of a long running, necessary cultural project to reconcile the fractured tribes of humanity under the aesthetic of reason. If it can be proven, or disproven, we must attempt to do so.
ˑ Naturally, as anything that factors in the conditionality of the behavior of like-minded entities needs to be; anything with a grain of introspection, from any human child who considers the golden rule, to the likes of AlphaGo and Deep Blue, which model their opponents at least partially by putting themselves in their opponents' position and asking what they would do. If you want to reason about real people rather than idealized simplifications, it's quite necessary.
⬨ The phrase "other universes" may seem oxymoronic. It's like the term "atom", whose general quality "atomic" means "indivisible", despite "atom" remaining attached to an entity that was found to be quite divisible. I don't know whether "universe" might have once referred to the multiverse, the everything, but clearly somewhere along the way, some time leading up to the coining of the contrasting term "multiverse", that must have ceased to be. If so, "universe" remained attached to the universe as we knew it, rather than the universe as it was initially defined.
▾ I make an assumption around about here: that the number of simulations being run by life in universes of a higher complexity level can always be raised sufficiently (given that their inhabitants are cooperative) to make stewardship of one's universe likely, as a universe with more intricate physics, once its inhabitants learn to leverage that intricacy, will tend to be able to create much more flexible computers and spawn more simulations than exist at lower complexity levels. (If we assume a finite multiverse (we generally don't), some of those simulations might end up simulating entities that don't otherwise exist. This source of inefficiency is unavoidable.) We also assume that either there is no upper limit to the complexity of life-supporting universes; or that there is no dramatic, ultimate decrease in the number of civs as complexity increases; or that the position of this limit cannot be inferred and the expected value of adherence remains high even for those who cannot be resimulated; or that, as a last resort, agents drawing up the terms of their pact will usually be at a certain level of well-approximatable sophistication, such that they can be simulated in high fidelity by civilizations with physics of similar intricacy.
And if you can knock out all of those defenses, I sense it may all be obviated by a shortcut through a patternist principle my partner understands better than I do about the self following the next most likely perceptual state without regard to the absolute measure of that state over the multiverse, which I'm still coming to grips with.
There is unfortunately a lot that has been thought about compat already, and it's impossible for me to convey it all at once. Anyone wishing to contribute to, refute, or propagate compat may have to be prepared to have a lot of arguments before they can do anything. That said, remember those big heaps of expected utilons that may be on offer.
ˁ MIRI has done work on cooperation in one-shot prisoner's dilemmas (acausal cooperation): http://arxiv.org/abs/1401.5577. Note that they had to build their own probability theory. Vanilla decision theory cannot get these results, and without acausal cooperation it can't seem to capture all of humans' moral intuitions about interaction in good faith, or even model the capacity for introspection.
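The flavor of that result can be gestured at with a much cruder toy than the paper's provability-logic construction (everything below is my illustration, not MIRI's method): agents that condition on their opponent's *source code*, rather than on any causal history, can cooperate in a genuinely one-shot prisoner's dilemma.

```python
# Toy "clique" cooperation in a one-shot Prisoner's Dilemma. Each strategy
# is a source string; a player decides by reading the opponent's source.
# This is far weaker than the Löbian "robust cooperation" of the linked
# paper (which cooperates with provably-cooperating opponents, not only
# exact copies), but it shows the mechanism: conditioning on programs.
CLIQUE_SRC = "lambda opp_src, my_src: 'C' if opp_src == my_src else 'D'"
DEFECT_SRC = "lambda opp_src, my_src: 'D'"

def play(src_a: str, src_b: str):
    """Run a single one-shot game between two strategy sources."""
    a = eval(src_a)  # eval of our own toy strings only
    b = eval(src_b)
    return a(src_b, src_a), b(src_a, src_b)

print(play(CLIQUE_SRC, CLIQUE_SRC))  # copies recognize each other: ('C', 'C')
print(play(CLIQUE_SRC, DEFECT_SRC))  # no exploitation possible: ('D', 'D')
```

Two clique bots cooperate despite never having interacted, while a defector gains nothing against one; the fragility (only exact copies cooperate) is precisely what the provability-based approach fixes.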
ˣ It was not initially clear that compat should support the definition of more than a single pact. We used to call Life's Pact just Compat, assuming that the one protocol was an inevitable result of the theory and that any others would be marginal. There may be a singleton pact, but it's also conceivable that there may be incorrigible resimulation grids that coexist in an equilibrium of disharmony with our own.
As well as that, there is a lot of self-referential reasoning that can go on in the light of acausal trade. I think we will be less likely to fall prey to circular reasoning if we make sure that a Compat thinker can always start from scratch and try to rederive the edifice's understanding of the pact from basic premises. When one cannot propose alternate pacts, criticizing the bathwater without the baby may not seem possible.
˭ THE TEAM:
Christian Madsen was the subject of an experimental early-learning program in his childhood, but despite being a very young prodigy, he coasted through his teen years. He dropped out of art school in 2008, read a lot of transhumanism-related material, synthesized the initial insights behind compat, and burned himself out in the process. He is presently laboring on spec-work projects in the fields of music and programming, which he enjoys much more than structured philosophy.
Mako Yass left the University of Auckland with a dual-major BSc in Logic & Computation and Computer Science. Currently working on writing, mobile games, FOSS, and various concepts. Enjoys their unstructured work and research, but sometimes wishes they had an excuse to return to charting the Hylaean theoric wilds of academic analytic philosophy, all the same.
Hypothetical Independent Co-inventors, we're pretty sure you exist. Compat wouldn't be a very good acausal pact if you didn't. Show yourselves.
You, if you'd like to help to develop the field of Compat (or dismantle it). Don't hesitate to reach out to us so that we can invite you to the reductionist aesthete slack channel that Christian and I like to argue in. If you are a creative of any kind who bears or at least digs the reductive nouveau mystic aesthetic, you'd probably fit in there as well.
˅ It's debatable, but I imagine that for most simulants, heaven would not require full physics simulation, in which case heavens may be far far longer-lasting than whatever (already enormous) simulation their pattern was discovered in.
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
This is mainly of interest to Effective Altruism-aligned Less Wrongers. Thanks to Agnes Vishnevkin, Jake Krycia, Will Kiely, Jo Duyvestyn, Alfredo Parra, Jay Quigley, Hunter Glenn, and Rhema Hokama for looking at draft versions of this post. At least one aspiring rationalist who read a draft version of this post, after talking to his girlfriend, decided to adopt this new Valentine's Day tradition, which is some proof of its impact. The more it's shared, the more this new tradition might get taken up, and if you want to share it, I suggest you share the version of this post published on The Life You Can Save blog. It's also cross-posted on the Intentional Insights blog and on the EA Forum.
The Valentine’s Day Gift That Saves Lives
Last year, my wife gave me the most romantic Valentine’s Day gift ever.
We had previously been very traditional with our Valentine’s Day gifts, such as fancy candy for her or a bottle of nice liquor for me. Yet shortly before Valentine’s Day, she approached me about rethinking that tradition.
Did candy or liquor truly express our love for each other? Is it more important that a gift helps the other person be happy and healthy, or that it follows traditional patterns?
Instead of candy and liquor, my wife suggested giving each other gifts that actually help us improve our mental and physical well-being, and the world as a whole, by donating to charities in the name of the other person.
She described an article she had read about a study which found that people who give to charity feel happier than those who don't give. The experimenters gave people money and asked them to spend it either on themselves or on others. Those who spent it on others experienced greater happiness.
Not only that, such giving also made people healthier. Another study showed that participants who gave to others experienced a significant decrease in blood pressure, which did not happen to those who spent the money on themselves.
So my thoughtful wife suggested we try an experiment: for Valentine’s Day, we'd give to charity in the name of the other person. This way, we could make each other happier and healthier, while helping save lives at the same time. Moreover, we could even improve our relationship!
I accepted my wife’s suggestion gladly. We decided to donate $50 per person, and keep our gifts secret from each other, only presenting them at the restaurant when we went out for Valentine’s Day.
While I couldn’t predict my wife’s choice, I had an idea about how she would make it. We’ve researched charities before, and wanted to find ones where our limited dollars could go as far as possible toward saving lives. We found excellent charity evaluators that find the most effective charities and make our choices easy. Our two favorites are GiveWell, which has extensive research reports on the best charities, and The Life You Can Save, which provides an Impact Calculator that shows you the actual impact of your donation. These data-driven evaluators are part of the broader effective altruism movement that seeks to make sure our giving does the most good per dollar. I was confident my wife would select a charity recommended by a high-quality evaluator.
On Valentine’s Day, we went to our favorite date night place, a little Italian restaurant not far from our house. After a delicious cheesecake dessert, it was time for our gift exchange. She presented her gift first, a donation to the Against Malaria Foundation. With her $50 gift in my name, she bought 20 large bed-size nets that would protect families in the developing world against deadly malaria-carrying mosquitoes. In turn, I donated $50 to GiveDirectly, in her name. This charity transfers money directly to recipients in some of the poorest villages in Africa, who have the dignity of using the money as they wish. It is like giving money directly to the homeless, except dollars go a lot further in East Africa than in the US.
We were so excited by our mutual gifts! They were so much better than any chocolate or liquor could be. We both helped each other save lives, and felt so great about doing so in the context of a gift for the other person. We decided to transform this experiment into a new tradition for our family.
It was the most romantic Valentine’s Day present I ever got, and made me realize how much better Valentine’s Day can be for myself, my wife, and people all around the world. All it takes is a conversation about showing true love for your partner by improving her or his health and happiness. Is there any reason to not have that conversation?
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
TL;DR: Neural networks will result in a slow takeoff and an arms race between two AIs. This has both good and bad consequences for the problem of AI safety. Hard takeoff may happen after it anyway.
Summary: Neural-network-based AI can be built; it will be relatively safe, though not for long.
The neuro AI era (since 2012) features exponential growth of the total AI expertise, with a doubling period of about 1 year, mainly due to data exchange among diverse agents and different processing methods. It will probably last for about 10 to 20 years; after that, a hard takeoff of strong AI, or the creation of a Singleton based on the integration of different AI systems, can take place.
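As a sanity check on what that growth claim implies (a toy compounding calculation, not a forecast), a one-year doubling period sustained over the posited 10-20 year era multiplies total capability by roughly a thousand to a million times:

```python
# Toy compounding: total growth implied by a fixed doubling period.
def growth_factor(years: float, doubling_period_years: float = 1.0) -> float:
    """Multiplicative growth after `years` of repeated doublings."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20):
    # prints x1,024 for 10 years and x1,048,576 for 20 years
    print(f"{years} years at a 1-year doubling -> x{growth_factor(years):,.0f}")
```

Whether "total AI expertise" is even the kind of quantity that compounds like this is, of course, the essay's central open question.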
Neural-network-based AI implies a slow takeoff, which can take years and eventually lead to AI's evolutionary integration into human society. A similar scenario was described by Stanisław Lem in 1959: the arms race between countries would cause a power race between AIs. The race is only possible if the self-enhancement rate is rather slow and there is data interchange between the systems. The slow takeoff will result in a world system with two competitive AI-countries. Its major risk will be a war between the AIs and the corrosion of the value systems of the competing AIs.
The hard takeoff implies revolutionary changes within days or weeks. The slow takeoff can transform into a hard takeoff at some stage. A hard takeoff is only possible if one AI considerably surpasses its peers (the OpenAI project wants to prevent this).
Part 1. Limitations of the explosive potential of neural nets
Every day now we hear about the successes of neural networks, and we might conclude that human-level AI is around the corner. But this type of AI is not fit for explosive self-improvement.
If an AI is based on a neural net, it is not easy for it to undergo quick self-improvement, for several reasons:
1. A neural net's executable code is not fully transparent, for theoretical reasons: knowledge is not explicitly present within it. So even if one can read the neuron weight values, it's not easy to understand how they could be changed to improve anything.
2. Educating a new neural network is a resource-consuming task. If a neuro AI decides to go the way of self-enhancement but is unable to understand its source code, a logical solution would be to 'deliver a child', i.e. to train a new neural network. However, training neural networks requires much more resources than executing them; it requires huge databases and has a high probability of failure. All these factors will lead to rather slow AI self-enhancement.
3. Neural network education depends on big data volumes and new ideas coming from the external world. This means that a single AI will hardly break away: if it has stopped free information exchange with the external world, its level will not surpass the rest of the world considerably.
4. A neural network's power depends roughly linearly on the power of the computer it runs on, so for a neuro AI, hardware power limits its self-enhancement ability.
5. A neuro AI would be a rather big program, about 1 TByte, so it can hardly leak into the network unnoticed (at current internet speeds).
6. Even if a neuro AI reaches the human level, it will not thereby gain the ability to self-enhance (no single person can understand all scientific fields; a big lab with numerous experts in different branches is needed). Additionally, it would have to run such a virtual laboratory at a rate at least 10-100 times higher than that of a human being to get an edge over the rest of mankind. That is, it would have to be as powerful as 10,000 people or more to surpass the rest of mankind in terms of enhancement rate. This is a very high requirement. As a result, the neural net era may lead to a human-level, or even slightly superhuman-level, AI which is unable to self-enhance, or does so so slowly that it lags behind technical progress.
Civilization-level intelligence is the total IQ that the civilization possesses over 100 years of its history, defined as the complexity of the scientific and engineering tasks it can solve. For example, during the 20th century nuclear weapons were created, but the problems of cancer, aging, and AI creation went unsolved. This means those tasks have superior complexity.
For a strong AI to be able to change the human destiny, its IQ should surpass that of the entire civilization by 100-1000 times. In this case, it could create a remedy against aging, treat cancer, etc. within a year. (To destroy mankind, it would need a much lower level of intelligence, like that of a sophisticated computer virus.)
7. The richest neural net project will get a decisive advantage (or not?). The power of a neuronet-based AI is approximately proportional to the power of the computers it runs on, the time spent, the size of the database it was trained on, and the number of researchers in the project. A project from a basement may only succeed if it has a superior master algorithm of intelligence. That said, a small project can hardly experience exponential self-improvement, because the first AI will already have better computing power and, more importantly, better databases and researchers. And as we now live in an AI-summer period and there is a lot of hype about AI, big companies will invest to buy the best ideas about the master algorithm (by buying startups).
An objection here is that nowadays anyone who has spent a year on machine learning and has a GPU can reach meaningful results in creating their own RNN. So maybe the collective intelligence of private researchers can outweigh big companies.
Part 2. Some ideas for ensuring the safety of neural-net-based AI
The concepts mentioned above can be helpful in creating extra safety measures (while not absolute, they can reduce the chances of AI getting out of control):
1. Program memory access restriction. In today's computers, program code and data share the same memory, which is known as the von Neumann architecture. However, there is also the Harvard architecture, where data and programs are stored separately: https://en.wikipedia.org/wiki/Harvard_architecture
This kind of computer could be physically produced and provided with indicators of source-code reading attempts. (However, there are now neural networks which generate a new neural network for every task, giving higher efficiency; this approach will not work for them, or will limit AI power.)
2. Compiled executable code and its source, written in a high-level programming language, should be stored separately. If the AI tries to reach the source, that is a signal that it is getting out of control. With neural networks, the compiled code, its source, and the database used to train the network should be stored separately.
3. The AI should be limited in knowledge useful for building AI. To improve itself, an AI needs a very precise understanding of its internal construction (the same applies to a human being). This enables us to keep it under control: if not provided with this data, it will have problems with self-improvement. That is, an AI which is not an AI expert is safer to some extent.
4. We can use a "foggy architecture", which is very difficult to optimize because the AI's values and major features are hidden among heaps of other details. Neural networks are an architecture of this kind. But this will also make it difficult for us to put our values into it.
5. We can physically restrict the ability of the AI to create "AI children"; bringing them up requires huge resources, time, and data before they become more effective than their parents. An AI can create another AI, but then it will need to test it for a long time to find out whether the latter observes the parent's values, whether it has any mistakes, and whether it is really superior to the original version. This kind of testing is time-consuming, generally nontrivial, and even incomputable. It is especially difficult for neural networks, which contain a huge number of random elements, are not exact in principle, and whose faultlessness cannot be proven. (Moreover, there are numerous examples where even a small image shift causes a recognition error.) However, this problem can be solved, because this is the way human society goes: children are born, tested, and educated.
The creation of AI children may also be restricted by restricting the needed infrastructure: memory, datasets, processing power.
6. A neuronet-based agent (like a robot) will be anthropomorphous in terms of its brain structure. Even if we do not imitate the human brain intentionally, we will get approximately the same thing. In a sense, this may even be good: even if these AIs supplant people, they will still be almost people, differing from normal people as one generation differs from another. And being anthropomorphous, they may be more compatible with human value systems. Along with that, there may exist absolutely humanless AI architectures (for example, if evolution is regarded as the inventor).
But the neural net world will not be the em-dominated world of Hanson. The em-world may appear at a later stage, but I think that exact uploads will still not be the dominant form of AI.
Part 3. Transition from slow to hard takeoff
In a sense, neuronet-based AI is like a chemical-fuel rocket: such rockets do fly, and can even fly across the entire solar system, but they are limited in their development potential, bulky, and clumsy.
Sooner or later, using the same principle or another one, a completely different AI can be built, which will be less resource-consuming and faster in terms of self-improvement ability.
If a certain superagent is built which can create neural networks but is not a neural network itself, it can be of a rather small size and, partly due to this, experience faster evolution. Neural networks have a rather poor concentration of intelligence per unit of code. Probably the same thing could be done more optimally, reducing the size by an order of magnitude, for example by creating a program that analyzes an already trained neural network and extracts all the necessary information from it.
When hardware improves in 10-20 years, multiple neuronets will be able to evolve within the same computer simultaneously, or be transmitted via the Internet, which will boost their development.
A smart neuro AI could analyze all available data-analysis methods and create a new AI architecture able to speed up faster.
The launch of quantum-computer-based networks could boost their optimization drastically.
There are many other promising AI directions which have not popped up yet: Bayesian networks, genetic algorithms.
The neuro AI era will feature exponential growth of the total intelligence of humanity, with a doubling period of about 1 year, mainly due to data exchange among diverse agents and different processing methods. It will last for about 10 to 20 years (2025-2035), and after that a hard takeoff of strong AI can take place.
That is, the slow takeoff period will be a period of collective evolution of both computer science and mankind, which will enable us to adapt to the changes under way and adjust to them.
Just as there are Mac and PC in the computer world, or Democrats and Republicans in politics, it is likely that two big competing AI systems will appear (plus an ecology of smaller ones). It could be Google and Facebook, or the USA and China, depending on whether the world chooses the way of economic competition or military opposition. That is, the slow takeoff hinders the world's consolidation under a single control and instead promotes a bipolar model. While a bipolar system can remain stable for a long period of time, there are always risks of a real war between the AIs (see Lem's quote below).
Part 4. In the course of the slow takeoff, AI will go through several stages that we can figure out now
While the stages may be passed rather quickly, or be diluted, we can still track them like milestones. The dates are only estimates.
1. AI autopilot. Tesla has it already.
2. AI home robot. All the prerequisites are available to build it by 2020 at the latest. This robot would be able to understand and fulfill an order like 'Bring my slippers from the other room'. On its basis, something like a "mind-brick" may be created: a universal robot brain able to navigate natural space and recognize speech. This mind-brick could then be used to create more sophisticated systems.
3. AI intellectual assistant. Searching through personal documentation, with the possibility to ask questions in natural language and receive wise answers. 2020-2030.
4. AI human model. Very vague as yet. Could be realized by means of robot-brain adaptation. Will be able to simulate 99% of usual human behavior, probably except for solving problems of consciousness, complicated creative tasks, and generating innovations. 2030.
5. AI as powerful as an entire research institution, able to create scientific knowledge and upgrade itself. Could be made of numerous human models. 100 simulated people, each working 100 times faster than a human being, would probably be able to create an AI capable of self-improving faster than humans in other laboratories can. 2030-2100.
5a. Self-improvement threshold: the AI becomes able to self-improve independently, and more quickly than all of humanity.
5b. Consciousness and qualia threshold: the AI is able not only to pass the Turing test in all cases, but has experiences and an understanding of why and what it is.
6. Mankind-level AI: an AI possessing intelligence comparable to that of all mankind. 2040-2100.
7. AI with intelligence 10-100 times greater than that of all mankind. It will be able to solve the problems of aging, cancer, solar system exploration, and nanorobot building, and radically improve the lives of all people. 2050-2100.
8. Jupiter brain: a huge AI using the entire mass of a planet for calculations. It could reconstruct dead people, create complex simulations of the past, and dispatch von Neumann probes. 2100-3000.
9. Galactic Kardashev level 3 AI. Several million years from now.
10. All-Universe AI. Several billion years from now.
Part 5. Stanisław Lem on AI, 1959, Investigation
In his novel «Investigation» Lem's character discusses future of arm race and AI:
- Well, it was somewhere in 46th, A nuclear race had started. I knew that when the limit would be reached (I mean maximum destruction power), development of vehicles to transport the bomb would start. .. I mean missiles. And here is where the limit would be reached, that is both parts would have nuclear warhead missiles at their disposal. And there would arise desks with notorious buttons thoroughly hidden somewhere. Once the button is pressed, missiles take off. Within about 20 minutes, finis mundi ambilateralis comes - the mutual end of the world. <…> Those were only prerequisites. Once started, the arms race can’t stop, you see? It must go on. When one part invents a powerful gun, the other responds by creating a harder armor. Only a collision, a war is the limit. While this situation means finis mundi, the race must go on. The acceleration, once applied, enslaves people. But let’s assume they have reached the limit. What remains? The brain. Command staff’s brain. Human brain can not be improved, so some automation should be taken on in this field as well. The next stage is an automated headquarters or strategic computers. And here is where an extremely interesting problem arises. Namely, two problems in parallel. Mac Cat has drawn my attention to it. Firstly, is there any limit for development of this kind of brain? It is similar to chess-playing devices. A device, which is able to foresee the opponent’s actions ten moves in advance, always wins against the one, which foresees eight or nine moves ahead. The deeper the foresight, the more perfect the brain is. This is the first thing. <…> Creation of devices of increasingly bigger volume for strategic solutions means, regardless of whether we want it or not, the necessity to increase the amount of data put into the brain, It in turn means increasing dominating of those devices over mass processes within a society. 
The brain can decide that the notorious button should be placed otherwise or that the production of a certain sort of steel should be increased – and will request loans for the purpose. If the brain like this has been created, one should submit to it. If a parliament starts discussing whether the loans are to be issued, the time delay will occur. The same minute, the counterpart can gain the lead. Abolition of parliament decisions is inevitable in the future. The human control over solutions of the electronic brain will be narrowing as the latter will concentrate knowledge. Is it clear? On both sides of the ocean, two continuously growing brains appear. What is the first demand of a brain like this, when, in the middle of an accelerating arms race, the next step will be needed? <…> The first demand is to increase it – the brain itself! All the rest is derivative.
- In a word, your forecast is that the earth will become a chessboard, and we the pawns, played by two mechanical players in an eternal game?
Sisse's face was radiant with pride.
- Yes. But this is not a forecast. I am just drawing conclusions. The first stage of the preparatory process is coming to an end; the acceleration grows. I know all this sounds unlikely. But it is the reality. It really exists!
- <…> And in this connection, what did you propose at that time?
- Agreement at any price. It may sound strange, but ruin is a lesser evil than the chess game. It is awful, this lack of illusions, you know.
Part 6. The primary question is: Will strong AI be built during our lifetime?
That is, is this a question of the good of future generations (the sort of question that concerns an effective altruist, not an ordinary person) or a question for my own near-term planning?
If AI is built during my lifetime, it may lead either to radical life extension by means of various technologies, and the realization of all sorts of good things too numerous to list here, or to my death, and probably pain, if the AI is unfriendly.
This depends on when AI is built and on my expected lifetime (accounting, on the one hand, for the life extension obtainable from weaker AI versions and from scientific progress, and on the other for its reduction by global risks unrelated to AI).
Note that we should consider different dates for different purposes. If we want to avoid AI risks, we should take the earliest plausible date of its appearance (for example, the 10th percentile). And if we are counting on its benefits, then the median.
Since the start of the neural-network revolution, the efficiency of AI algorithms (mainly in image recognition) has doubled roughly every year. It is difficult to quantify this process, as task complexity does not change linearly, and newer patterns are always harder to recognize.
An important factor now is the radical change in attitude towards AI research. The AI winter is over; an unrestrained summer, with all its hype, has begun. It has brought huge investment into AI research (chart), more enthusiasts and employees in the field, and bolder research. It is now a shame not to have one's own AI project; even KAMAZ is developing a friendly AI system. The entry threshold has dropped: one can learn basic neural-network training skills within a year, and heaps of tutorials are available. Supercomputer hardware has become cheaper. A guaranteed market for AI has also emerged, in the form of self-driving cars and, in the future, home robots.
If algorithm improvement keeps up this pace of about one doubling per year, that means roughly a millionfold improvement over 20 years, which would surely amount to a strong AI beyond the self-improvement threshold. In that case a lot of people (myself included) have a good chance of living to see that moment and gaining immortality.
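The millionfold figure is just compound doubling; a minimal arithmetic sketch (the one-doubling-per-year rate is the text's assumption, not an established fact):

```python
# One doubling of algorithmic efficiency per year, compounded over 20 years.
doublings_per_year = 1
years = 20
improvement = 2 ** (doublings_per_year * years)
print(improvement)  # 1048576, i.e. roughly a millionfold
```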
Even a non-self-improving neural AI system may be unsafe if it gains global dominance (and has bad values), or if it enters into confrontation with an equally large opposing system. Such a confrontation could result in a nuclear or nanotech-based war, with the human population held hostage, especially if both systems have pro-human values (blackmail).
In any case, the human-extinction risks of slow-takeoff AI are not inevitable and are manageable on an ad hoc basis. Slow takeoff does not preclude hard takeoff at a later stage of AI development.
Hard takeoff is probably the next logical stage after soft takeoff, as it continues the trend of accelerating progress. We can see the same pattern in biological evolution: the slow enlargement of mammalian brains over the last tens of millions of years was replaced by the almost-hard takeoff of Homo sapiens' intelligence, which now threatens the ecological balance.
Hard takeoff is a global catastrophe almost by definition, and extraordinary measures are needed to steer it onto a safe path. Perhaps the period of roughly human-level neural-net-based AI will help us to create instruments of AI control. Perhaps we could use simpler neural AIs to control a self-improving system.
Another option is that the age of neural AI will be very short, and that it is already almost over. In 2016, Google DeepMind beat Go using a complex approach combining several AI architectures. If this trend continues, we could get strong AI before 2020, and we are completely unready for it.
Chronic Fatigue and Fibromyalgia look very like Hypothyroidism
Thyroid Patients aren't happy with the diagnosis and treatment of Hypothyroidism
It's possible that it's not too difficult to fix CFS/FMS with thyroid hormones
I believe that there's been a stupendous cock-up that's hurt millions.
Less Wrong should be interested, because it could be a real example of how bad inference can cause the cargo cult sciences to come to false conclusions.
I believe that I've come across a genuine puzzle, and I wonder if you can help me solve it. This problem is complicated, and subtle, and has confounded and defeated good people for forty years. And yet there are huge and obvious clues. No-one seems to have conducted the simple experiments which the clues suggest, even though many clever people have thought hard about it, and the answer to the problem would be very valuable. And so I wonder what it is that I am missing.
I am going to tell a story which rather extravagantly privileges a hypothesis that I have concocted from many different sources, but a large part of it is from the work of the late Doctor John C Lowe, an American chiropractor who claimed that he could cure Fibromyalgia.
I myself am drowning in confirmation bias to the point where I doubt my own sanity. Every time I look for evidence to disconfirm my hypothesis, I find only new reasons to believe. But I am utterly unqualified to judge. Three months ago I didn't know what an amino acid was. And so I appeal to wiser heads for help.
Crocker's Rules on this. I suspect that I am being the most spectacular fool, but I can't see why, and I'd like to know.
Setting the Scene
Chronic Fatigue Syndrome, Myalgic Encephalitis, and Fibromyalgia are 'new diseases'. There is considerable dispute as to whether they even exist, and if so how to diagnose them. They all seem to have a large number of possible symptoms, and in any given case, these symptoms may or may not occur with varying severity.
As far as I can tell, if someone claims that they're 'Tired All The Time', then a competent doctor will first of all check that they're getting enough sleep and are not unduly stressed, then rule out all of the known diseases that cause fatigue (there are a great many!), and finally diagnose one of the three 'by exclusion', which means that there doesn't appear to be anything wrong, except that you're ill.
If widespread pain is one of the symptoms, it's Fibromyalgia Syndrome (FMS). If there's no pain, then it's CFS or ME. These may or may not be the same thing, but Myalgic Encephalitis is preferred by patients because it's Greek and so sounds like a disease. Unfortunately Myalgic Encephalitis means 'hurty muscles brain inflammation', and if one had hurty muscles, it would be Fibromyalgia, and if one had brain inflammation, it would be something else entirely.
Despite the widespread belief that these are 'somatoform' diseases (all in the mind), the severity of them ranges from relatively mild (tired all the time, can't think straight), to devastating (wheelchair bound, can't leave the house, can't open one eye because the pain is too great).
All three seem to have come spontaneously into existence in the 1970s, and yet searches for the responsible infective agent have proved fruitless. Neither have palliative measures been discovered, apart from the tried and true method of telling the sufferers that it's all in their heads.
The only treatments that have proved effective are Cognitive Behavioural Therapy / Graded Exercise. A Cochrane Review reckoned that they do around 15% over placebo in producing a measurable alleviation of symptoms. I'm not very impressed. CBT/GE sound a lot like 'sports coaching', and I'm pretty sure that if we thought of 'Not Being Very Good at Rowing' as a somatoform disorder, then I could produce an improvement over placebo in a measurable outcome in ten percent of my victims without too much trouble.
But any book on CFS will tell you that the disease was well known to the Victorians, under the name of neurasthenia. The hypothesis that God lifted the curse of neurasthenia from the people of the Earth as a reward for their courage during the wars of the early twentieth century, while well supported by the clinical evidence, has a low prior probability.
We face therefore something of a mystery, and in the traditional manner of my people, a mystery requires a Just-So Story:
How It Was In The Beginning
In the dark days of Victoria, the brilliant physician William Miller Ord noticed large numbers of mainly female patients suffering from late-onset cretinism.
These patients, exhausted, tired, stupid, sad, cold, fat and emotional, declined steeply, and invariably died.
As any man of decent curiosity would, Dr Ord cut their corpses apart, and in the midst of the carnage noticed that the thyroid, a small butterfly-shaped gland in the throat, was wasted and shrunken.
One imagines that he may have thought to himself: "What has killed them may cure them."
After a few false starts and a brilliant shot in the dark by the brave George Redmayne Murray, Dr Ord secured a supply of animal thyroid glands (cheaply available at any butcher, sautéed with nutmeg and basil) and fed them to his remaining patients, who were presumably by this time too weak to resist.
They recovered miraculously, and completely.
I'm not sure why Dr Ord isn't better known, since this appears to have been the first time in recorded history that something a doctor did had a positive effect.
Dr Ord's syndrome was named Ord's Thyroiditis, and it is now known to be an autoimmune disease where the patient's own antibodies attack and destroy the thyroid gland. In Ord's thyroiditis, there is no goiter.
A similar disease, where the thyroid swells to form a disfiguring deformity of the neck (goiter), was described by Hakaru Hashimoto in 1912 (who rather charmingly published in German), and as part of the war reparations of 1946 it was decided to confuse the two diseases under the single name of Hashimoto's Thyroiditis. Apart from the goiter, both conditions share a characteristic set of symptoms, and were easily treated with animal thyroid gland, with no complications.
Many years before, in 1835, a fourth physician, Robert James Graves, had described a different syndrome, now known as Graves' Disease, which has as its characteristic symptoms irritability, muscle weakness, sleeping problems, a fast heartbeat, poor tolerance of heat, diarrhoea, and weight loss. Unfortunately Dr Graves could not think how to cure his eponymous horror, and so the disease is still named after him.
The Horror Spreads
Victorian medicine being what it was, we can assume that animal glands were sprayed over and into any wealthy person unwise enough to be remotely ill in the vicinity of a doctor. I seem to remember a number of jokes about "monkey glands" in PG Wodehouse, and indeed a man might be tempted to assume that chimpanzee parts would be a good substitute for humans. Supply issues seem to have limited monkey glands to a few millionaires worried about impotence, and it may be that the corresponding procedure inflicted on their wives has come down to us as Hormone Replacement Therapy.
Certainly anyone looking a bit cold, tired, fat, stupid, sad or emotional is going to have been eating thyroids. We can assume that in a certain number of cases, this was just the thing, and I think it may also be safe to assume that a fair number of people who had nothing wrong with them at all died as a result of treatment, although the fact that animal thyroid is still part of the human food chain suggests it can't be that dangerous.
I mean seriously, these people use high pressure hoses to recover the last scraps of meat from the floors of slaughterhouses, they're not going to carefully remove all the nasty gristly throat-bits before they make ready meals, are they?
The Armour Sausage company, owner of extensive meat-packing facilities in Chicago, Illinois, and thus in possession of a large number of pig thyroids which, if not quite surplus to requirements, at the very least faced a market sluggish to non-existent as foodstuffs, brilliantly decided to sell them in freeze-dried form as a cure for whatever ails you.
Some Sort of Sanity Emerges, in a Decade not Noted for its Sanity
Around the time of the second world war, doctors became interested in whether their treatments actually helped, and an effort was made to determine what was going on with thyroids and the constellation of sadness that I will henceforth call 'hypometabolism', which is the set of symptoms associated with Ord's thyroiditis. Jumping the gun a little, I shall also define 'hypermetabolism' as the set of symptoms associated with Graves' disease.
The thyroid gland appeared to be some sort of metabolic regulator, in some ways analogous to a thermostat. In hypometabolism, every system of the body is running slow, and so it produces a vast range of bad effects, affecting almost every organ. Different sufferers can have very different symptoms, and so diagnosis is very difficult.
Dr Broda Barnes decided that the key symptom of hypometabolism was a low core body temperature. By careful experiment he established that in patients with no symptoms of hypometabolism the average temperature of the armpit on waking was 98 degrees Fahrenheit (or 36.7 Celsius). He believed that temperature variation of +/- 0.2 degrees Fahrenheit was unusual enough to merit diagnosis. He also seems to have believed, in the manner of the proverbial man with a hammer, that all human ailments without exception were caused by hypometabolism, and to have given freeze-dried thyroid to almost everyone he came into contact with, to see if it helped. A true scientist. Doctor Barnes became convinced that fully 40% of the population of America suffered from hypometabolism, and recommended Armour's Freeze-Dried Pig Thyroid to cure America's ills.
In a brilliant stroke, Freeze-Dried Pig's Thyroid was renamed 'Natural Desiccated Thyroid', which almost sounds like the sort of thing you might take in sound mind. I love marketing. It's so clever.
America being infested with religious lunatics, and Chicago being infested with nasty useless gristly bits of cow's throat, led almost inevitably to a second form of 'Natural Desiccated Thyroid' on the market.
Dr Barnes' hypometabolism test never seems to have caught on. There are several ways your temperature can go outside his 'normal' range, including fever (too hot), starvation (too cold), alcohol (too hot), sleeping under too many duvets (too hot), sleeping under too few duvets (too cold). Also mercury thermometers are a complete pain in the neck, and take ten minutes to get a sensible reading, which is a long time to lie around in bed carefully doing nothing so that you don't inadvertently raise your body temperature. To make the situation even worse, while men's temperature is reasonably constant, the body temperature of healthy young women goes up and down like the Assyrian Empire.
Several other tests were proposed. One of the most interesting is the speed of the Achilles Tendon Reflex, which is apparently super-fast in hypermetabolism, and either weirdly slow or has a freaky pause in it if you're running a bit cold. Drawbacks of this test include 'It's completely subjective, give me something with numbers in it', and 'I don't seem to have one, where am I supposed to tap the hammer-thing again?'.
By this time, neurasthenia was no longer a thing. In the same way that spiritualism was no longer a thing, and the British Empire was no longer a thing.
As far as we know, Chronic Fatigue Syndrome was not a thing either, and neither was Fibromyalgia (which is just Chronic Fatigue Syndrome but it hurts), nor Myalgic Encephalitis. There was something called 'Myalgic Neurasthenia' in 1934, but it seems to have been a painful infectious disease and they thought it was polio.
It turned out that the purpose of the thyroid gland is to make hormones which control the metabolism. It takes in the amino acid tyrosine, and it takes in iodine. It releases Thyroglobulin, mono-iodo-tyrosine (MIT), di-iodo-tyrosine (DIT), thyroxine (T4) and triiodothyronine (T3) into the blood. The chemistry is interesting but too complicated to explain in a just-so story.
I believe that we currently think that thyroglobulin, MIT and DIT are simply by-products of the process that makes T3 and T4.
T3 is the hormone. It seems to control the rate of metabolism in all cells. T4 has something of the same effect, but is much less active, and called a 'prohormone'. Its main purpose seems to be to be deiodinated to make more T3. This happens outside the thyroid gland, in the other parts of the body ('peripheral conversion'). I believe mainly in the liver, but to some extent in all cells.
Our forefathers knew about thyroxine (T4, or thyronine-with-four-iodines-attached), and triiodothyronine (T3, or thyronine-with-three-iodines-attached)
It seems to me that just from the names, thyroxine was the first one to be discovered. But I'm not sure about that. You try finding a history-of-endocrinology website. At any rate they seem to have known about T4 and T3 fairly early on.
The mystery of Graves', Ord's and Hashimoto's thyroid diseases was explained.
Ord's and Hashimoto's are diseases where the thryoid gland under-produces (hypothyroidism). The metabolism of all cells slows down. As might be expected, this causes a huge number of effects, which seem to manifest differently in different sufferers.
Graves' disease is caused by the thyroid gland over-producing (hyperthyroidism). The metabolism of all cells speeds up. Again, there are a lot of possible symptoms.
All three are thought to be autoimmune diseases. Some people think that they may be different manifestations of the same disease. They are all fairly common.
Desiccated thyroid cures hypothyroidism because the ground-up thyroids contain T4 and T3, as well as lots of thyroglobulin, MIT and DIT, and they are absorbed by the stomach. They get into the blood and speed up the metabolism of all cells. By titrating the dose carefully you can restore roughly the correct levels of the thyroid hormones in all tissues, and the patient gets better. (Titration is where you adjust something carefully until you get it right.)
The theory has considerable explanatory power. It explains cretinism, which is caused either by a genetic disease, or by iodine deficiency in childhood. If you grow up in an iodine deficient area, then your growth is stunted, your brain doesn't develop properly, and your thyroid gland may become hugely enlarged. Presumably because the brain is desperately trying to get it to produce more thyroid hormones, and it responds by swelling.
Once upon a time, this swelling (goitre) was called 'Derbyshire Neck'. I grew up near Derbyshire, and I remember an old rhyme: "Derbyshire born, Derbyshire bred, strong in the arm, and weak in the head". I always thought it was just an insult. Maybe not. Cretinism was also popular in the Alps, and there is a story of an English traveller in Switzerland of whom it was remarked that he would have been quite handsome if only he had had a goitre. So it must have been very common there.
But at this point I am *extremely suspicious*. The thyroid/metabolic regulation system is ancient (universal in vertebrates, I believe), crucial to life, and it really shouldn't just go wrong. We should suspect either an infectious cause, or a recent environmental influence which we haven't had time to adjust to, an evolved defence against an infectious disease, or just possibly, a recently evolved but as yet imperfect defence against a less recent environmental change.
(Cretinism in particular is very strange. Presumably animals in iodine-deficient areas aren't cretinous, and yet they should be. Perhaps a change to a farming from a hunter-gatherer lifestyle has increased our dependency on iodine from crops, which crops have sucked what little iodine occurs naturally out of the soil?)
It's also not entirely clear to me what the thyroid system is *for*. If there's just a particular rate that cells are supposed to run at, then why do they need a control signal to tell them that? I could believe that it was a literal thermostat, designed to keep the body temperature constant at the best speed for the various biological reactions, but it's universal in *vertebrates*. There are plenty of vertebrates which don't keep a constant temperature.
The Fall of Desiccated Thyroid
There turned out to be some problems with Natural Desiccated Thyroid (NDT).
Firstly, there were many competing brands and types, and even if you stuck to one brand the quality control wasn't great, so the dose you'd be taking would have been a bit variable.
Secondly, it's fucking pig's thyroid from an abattoir. It could have all sorts of nasty things in it. Also, ick.
Thirdly, it turned out that pigs made quite a lot more T3 in their thyroids than humans do. It also seems that T3 is better absorbed by the gut than T4 is, so someone taking NDT to compensate for their own underproduction will have too much of the active hormone compared to the prohormone. That may not be good news.
With the discovery of 'peripheral conversion', and the possibility of cheap clean synthesis, it was decided that modern scientific thyroid treatment would henceforth be by synthetic T4 (thyroxine) alone. The body would make its own T3 from the T4 supply.
Alarm bells should be ringing at this point. Apart from the above points, I'm not aware of any great reason for the switch from NDT to thyroxine in the treatment of hypothyroidism, but it seems to have been pretty much universal, and it seems to have worked.
Aware of the lack of T3, doctors compensated by giving people more T4 than was in their pig-thyroid doses. And there don't seem to have been any complaints.
Over the years, NDT seems to have become a crazy fringe treatment despite there not being any evidence against it. It's still a legal prescription drug, but in America it's only prescribed by eccentrics. In England a doctor prescribing it would be, at the very least, summoned to explain himself before the GMC.
However, since it was (a) sold over the counter for so many years, and (b) part of the food chain, it is still perfectly legal to sell as a food supplement in both countries, as long as you don't make any medical claims for it. And the internet being what it is, the prescription-only synthetic hormones T3 and T4 are easily obtained without a prescription. These are extremely powerful hormones which have an effect on metabolism. If 'body-builders' and sports cheats aren't consuming all three in vast quantities, I am a Dutchman.
The Clinical Diagnosis of Hypothyroidism
We pass now to the beginning of the 1970s.
Hypothyroidism is ferociously difficult to diagnose. People complain of being 'Tired All The Time'... well, all the time, and it has literally hundreds of causes.
And it must be diagnosed correctly! If you miss a case of hypothyroidism, your patient is likely to collapse and possibly die at some point in the medium-term future. If you diagnose hypothyroidism where it isn't, you'll start giving the poor bugger powerful hormones which he doesn't need and *cause* hypermetabolism.
The last word in 'diagnosis by symptoms' was the absolutely excellent paper:
Statistical Methods Applied To The Diagnosis Of Hypothyroidism by W. Z. Billewicz et al.
Connoisseurs will note the clever and careful application of 'machine learning' techniques, before there were machines to learn!
One important thing to note is that this is a way of separating hypothyroid cases from other cases of tiredness at the point where people have been referred by their GP to a hospital specialist on suspicion of hypothyroidism. That changes the statistics remarkably. This is *not* a way of diagnosing hypothyroidism in the general population. But if someone has been to their GP (general practitioner, the doctor a British person usually contacts first) and the GP has suspected that their thyroid function might be inadequate, this test should probably still work.
For instance, they consider Physical Tiredness, Mental Lethargy, Slow Cerebration, Dry Hair, and Muscle Pain, the classic symptoms of hypothyroidism, present in most cases, to be indications *against* the disease.
That's because if you didn't have these things, you likely wouldn't have got that far. So in the population they're seeing (of people whose doctor suspects they might be hypothyroid), they're not of great value either way, but their presence is likely the reason why the person's GP has referred them even though they've really got iron-deficiency anaemia or one of the other causes of fatigue.
In their population, the strongest indicators are 'Ankle Jerk' and 'Slow Movements', subtle hypothyroid symptoms which aren't likely to be present in people who are fatigued for other reasons.
But this absolutely isn't a test you should use for population screening! In the general population, the classic symptoms are strong indicators of hypothyroidism.
Probability Theory is weird, huh?
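The referral effect can be seen in two likelihood ratios; a toy sketch with invented numbers (not the actual coefficients from the Billewicz paper): a symptom present in most hypothyroid patients argues for the diagnosis in the general population, but against it in a clinic where nearly every referral has that symptom anyway.

```python
# Invented illustrative numbers: the symptom occurs in 90% of hypothyroid
# patients in both settings.
p_symptom_given_hypo = 0.90

# General population: tiredness is uncommon among the non-hypothyroid.
p_symptom_given_other_general = 0.10
lr_general = p_symptom_given_hypo / p_symptom_given_other_general

# Referral clinic: people were referred BECAUSE of symptoms like tiredness,
# so it is nearly universal among the non-hypothyroid referrals too.
p_symptom_given_other_clinic = 0.95
lr_clinic = p_symptom_given_hypo / p_symptom_given_other_clinic

print(round(lr_general, 2))  # 9.0: strong evidence FOR hypothyroidism
print(round(lr_clinic, 2))   # 0.95: mild evidence AGAINST it
```

The same symptom flips from strong evidence for the disease to weak evidence against it, purely because the comparison population changed.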
Luckily, there were lab tests for hypothyroidism too, but they were expensive, complicated, annoying and difficult to interpret. Billewicz et al used them to calibrate their test, and recommend them for the difficult cases where their test doesn't give a clear answer.
And of course, the final test is to give them thyroid treatment and see whether they get better. If you're not sure, go slow, watch very carefully and look for hyper symptoms.
Overconfidence is definitely the way to go. If you don't diagnose it and it is, that's catastrophe. If it isn't, but you diagnose it anyway, then as long as you're paying attention the hyper symptoms are easy enough to spot, and you can pull back with little harm done.
A Better Way
It should be obvious from the above that the diagnosis of hypothyroidism by symptoms is absolutely fraught with complexity, and very easy to get wrong, and if you get it wrong the bad way, it's a disaster. Doctors were absolutely screaming for a decisive way to test for hypothyroidism.
Unfortunately, testing directly for the levels of thyroid hormones is very difficult, and the tests of the 1960s weren't accurate enough to be used for diagnosis.
The answer came from an understanding of how the thyroid regulatory system works, and the development of an accurate blood test for a crucial signalling hormone.
Three structures control the level of thyroid hormones in the blood.
The thyroid gland produces the hormones and secretes them into the blood.
Its activity is controlled by the hormone thyrotropin, or Thyroid Stimulating Hormone (TSH). Lots of TSH works the thyroid hard. In the absence of TSH the thyroid relaxes but doesn't switch off entirely. However, the basal level of thyroid activity in the absence of TSH is far too low.
TSH is controlled by the pituitary gland, a tiny structure attached to the brain.
The pituitary itself is controlled, via Thyrotropin Releasing Hormone (TRH), by the hypothalamus, which is part of the brain.
This was thought to be a classic example of a feedback control system.
It turns out that the level of thyrotropin (TSH) in the blood is exquisitely sensitive to the levels of thyroid hormones in the blood.
Administer thyroid hormone to a patient and their TSH level will rapidly adjust downwards by an easily detectable amount.
In hypothyroidism, where the thyroid has failed, the body will be desperately trying to produce more thyroid hormones, and the TSH level will be extremely high.
In Graves' Disease, this theory says, where the thyroid has grown too large, and the metabolism is running damagingly fast, the body will be, like a central bank trying to stimulate growth in a deflationary economy by reducing interest rates, 'pushing on a piece of string'. TSH will be undetectable.
The original TSH test was developed in 1965, by the startlingly clever method of radio-immuno-assay.
[For reasons that aren't clear to me, rather than being expressed in grams/litre, or mols/litre, the TSH test is expressed in 'international units/liter'. But I don't think that that's important]
A small number of people in whom there was no suspicion of thyroid disease were assessed, and the 'normal range' of TSH was calculated.
Again, 'endocrinology history' resources are not easy to find, but the first test was not terribly sensitive, and I think originally hyperthyroidism was thought to result in a complete absence of TSH, and that the highest value considered normal was about 4 (milli-international-units/liter).
This apparently pretty much solved the problem of diagnosing thyroid disorders.
It's no longer necessary to diagnose hypo- and hyperthyroidism by symptoms. That was error-prone anyway, and the question is now easily decided by a cheap and simple test.
Natural Desiccated Thyroid is one with Nineveh and Tyre.
No doctor trained since the 1980s knows much about hypothyroid symptoms.
Medical textbooks mention them only in passing, as an unweighted list of classic symptoms. You couldn't use that for diagnosis of this famously difficult disease.
If you suspect hypothyroidism, you order a TSH test. If the value of TSH is very low, that's hyperthyroidism. If the value is very high then that's hypothyroidism. Otherwise you're 'euthyroid' (Greek again: good-thyroid), and your symptoms are caused by some other problem.
The treatment for hyperthyroidism is to damage the thyroid gland. There are various ways. This often results in hypothyroidism. *For reasons that are not terribly well understood*.
The treatment for hypothyroidism is to give the patient sufficient thyroxine (T4) to cause TSH levels to come back into their normal range.
The conditions hyperthyroidism and hypothyroidism are now *defined* by TSH levels.
Hypothyroidism, in particular, a fairly common disease, is considered to be such a solved problem that it's usually treated by the GP, without involving any kind of specialist.
It was found that the traditional dose of thyroxine (T4), the amount that had always been used to replace the hormones once produced by a thyroid gland now dead, destroyed, or surgically removed, was in fact too high. That amount causes suppression of TSH to below its normal range. The brain, theory says, is asking for the level to be reduced.
The amount of T4 administered in such cases (there are many) has been reduced by a factor of around two, to the level where it produces 'normal' TSH levels in the blood. Treatment is now titrated to produce the normal levels of TSH.
TSH tests have improved enormously since their introduction, and are on their third or fourth generation. The accuracy of measurement is very good indeed.
It's now possible to detect the tiny remaining levels of TSH in overtly hyperthyroid patients, so hyperthyroidism is also now defined by the TSH test.
In England, the normal range is 0.35 to 5.5 milli-international-units/liter. This is considered to be the definition of 'euthyroidism'. If your levels are normal, you're fine.
If you have hypothyroid symptoms but a normal TSH level, then your symptoms are caused by something else. Look for Anaemia, look for Lyme Disease. There are hundreds of other possible causes. Once you rule out all the other causes, then it's the mysterious CFS/FMS/ME, for which there is no cause and no treatment.
If your doctor is very good, very careful and very paranoid, he might order tests of the levels of T4 and T3 directly. But actually the direct T4 and T3 tests, although much more accurate than they were in the 1960s, are quite badly standardised, and there's considerable controversy about what they actually measure. Different assay techniques can produce quite different readings. They're expensive. It's fairly common, and on the face of it perfectly reasonable, for a lab to refuse to conduct the T3 and T4 tests if the TSH level is normal.
It's been discovered that quite small increases in TSH actually predict hypothyroidism. Minute changes in thyroid hormone levels, which don't produce symptoms, cause detectable changes in the TSH levels. Normal, but slightly high values of TSH, especially in combination with the presence of thyroid related antibodies (there are several types), indicate a slight risk of one day developing hypothyroidism.
There's quite a lot of controversy about what the normal range for TSH actually is. Many doctors consider that the optimal range is 1-2, and target that range when administering thyroxine. Many think that just getting the value in the normal range is good enough. None of this is properly understood, to understate the case rather dramatically.
There are new categories, 'sub-clinical hypothyroidism' and 'sub-clinical hyperthyroidism', which are defined by abnormal TSH tests in the absence of symptoms. There is considerable controversy over whether it is a good idea to treat these, in order to prevent subtle hormonal imbalances which may cause difficult-to-detect long term problems.
Everyone is a little concerned about accidentally over-treating people (remember that hyperthyroidism is now defined by TSH < 0.35).
Hyperthyroidism has long been associated with Atrial Fibrillation (a heart problem), and Osteoporosis, both very nasty things. A large population study in Denmark recently revealed that there is a greater incidence of Atrial Fibrillation in sub-clinical hyperthyroidism, and that hypothyroidism actually has a 'protective effect' against Atrial Fibrillation.
It's known that TSH has a circadian rhythm, higher in the early morning, lower at night. This makes the test rather noisy, as your TSH level can be doubled or halved depending on what time of day you have the blood drawn.
But the big problems of the 1960s and 1970s are completely solved. We are just tidying up the details.
Many hypothyroid patients complain that they suffer from 'Tired All The Time', and have some of the classic hypothyroid symptoms, even though their TSH levels have been carefully adjusted to be in the normal range.
I've no idea how many, but opinions range from 'the great majority of patients are perfectly happy' to 'around half of hypothyroid sufferers have hypothyroid symptoms even though they're being treated'.
The internet is black with people complaining about it, and there are many books and alternative medicine practitioners trying to cure them, or possibly trying to extract as much money as possible from people in desperate need of relief from an unpleasant, debilitating and inexplicable malaise.
THE PLURAL OF ANECDOTE IS DATA.
Not good data, to be sure. But if ten people mention to you in passing that the sun is shining, you are a damned fool if you think you know nothing about the weather.
It's known that TSH ranges aren't 'normally distributed' (in the sense of Gauss/the bell curve distribution) in the healthy population.
If you log-transform them, they do look a bit more normal.
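A quick synthetic sketch of that claim (simulated numbers, not real TSH data): draw log-normally distributed values and compare the skewness before and after the log-transform.

```python
import math
import random

# Hypothetical illustration with synthetic data: values drawn from a
# log-normal distribution are right-skewed, but their logarithms are
# normally distributed by construction.
random.seed(0)
samples = [random.lognormvariate(0.4, 0.5) for _ in range(10_000)]

def skewness(xs):
    """Sample skewness: third central moment over the cubed standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

raw_skew = skewness(samples)
log_skew = skewness([math.log(x) for x in samples])
print(f"skewness of raw values: {raw_skew:.2f}")  # clearly positive (right-skewed)
print(f"skewness of log values: {log_skew:.2f}")  # close to zero
```

The raw values fail the bell-curve shape badly; the logged values look far more normal, which is the pattern reported for TSH in healthy populations.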
The American Academy of Clinical Biochemists, in 2003, decided to settle the question once and for all. They carefully screened out anyone with even the slightest sign that there might be anything wrong with their thyroid at all, and measured their TSH very accurately.
In their report, they said (this is a direct quote):
In the future, it is likely that the upper limit of the serum TSH euthyroid reference range will be reduced to 2.5 mIU/L because >95% of rigorously screened normal euthyroid volunteers have serum TSH values between 0.4 and 2.5 mIU/L.
Many other studies disagree, and propose wider ranges for normal TSH.
But if the AACB report were taken seriously, it would lead to diagnosis of hypothyroidism in vast numbers of people who are perfectly healthy! In fact the levels of noise in the test would put people whose thyroid systems are perfectly normal in danger of being diagnosed and inappropriately treated.
For fairly obvious reasons, biochemists have been extremely, and quite properly, reluctant to take the report of their own professional body seriously. And yet it is hard to see where the AACB have gone wrong in their report.
Neurasthenia is back.
A little after the time of the introduction of the TSH test, new forms of 'Tired All The Time' were discovered.
As I said, CFS and ME are just two names for the same thing. Fibromyalgia Syndrome (FMS) is much worse, since it is CFS with constant pain, for which there is no known cause and from which there is no relief. Most drugs make it worse.
But if you combine the three things (CFS/ME/FMS), then you get a single disease, which has a large number of very non-specific symptoms.
These symptoms are the classic symptoms of 'hypometabolism'. Any doctor who has a patient who has CFS/ME/FMS and hasn't tested their thyroid function is *de facto* incompetent. I think the vast majority of medical people would agree with this statement.
And yet, when you test the TSH levels in CFS/ME/FMS sufferers, they are perfectly normal.
All three/two/one are appalling, crippling, terrible syndromes which ruin people's lives. They are fairly common. You almost certainly know one or two sufferers. The suffering is made worse by the fact that most people believe that they're psychosomatic, which is a polite word for 'imaginary'.
And the people suffering are mainly middle-aged women. Middle-aged women are easy to ignore. Especially stupid middle-aged women who are worried about being overweight and obviously faking their symptoms in order to get drugs which are popularly believed to induce weight loss. It's clearly their hormones. Or they're trying to scrounge up welfare benefits. Or they're trying to claim insurance. Even though there's nothing wrong with them and you've checked so carefully for everything that it could possibly be.
But it's not all middle aged women. These diseases affect men, and the young. Sometimes they affect little children. Exhaustion, stupidity, constant pain. Endless other problems as your body rots away. Lifelong. No remission and no cure.
And I have Doubts of my Own
And I can't believe that careful, numerate Billewicz and his co-authors would have made this mistake, but I can't find where the doctors of the 1970s checked for the sensitivity of the TSH test.
Specificity, yes. They tested a lot of people who hadn't got any sign of hypothyroidism for TSH levels. If you're well, then your TSH level will be in a narrow range, which may be 0-6, or it may be 1-2. Opinions are weirdly divided on this point in a hard to explain way.
But Sensitivity? Where's the bit where they checked for the other arm of the conditional?
The bit where they show that no-one who's suffering from hypometabolism, and who gets well when you give them Desiccated Thyroid, had, on first contact, TSH levels outside the normal range.
If you're trying to prove A <=> B, you can't just prove A => B and call it a day. You couldn't get that past an A-level maths student. And certainly anyone with a science degree wouldn't make that error. Surely? I mean you shouldn't be able to get that past anyone who can reason their way out of a paper bag.
I'm going to say this a third time, because I think it's important and maybe it's not obvious to everyone.
If you're trying to prove that two things are the same thing, then proving that the first one is always the second one is not good enough.
IF YOU KNOW THAT THE KING OF FRANCE IS ALWAYS FRENCH, YOU DO *NOT* KNOW THAT ANYONE WHO IS FRENCH IS KING OF FRANCE.
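The distinction between the two arms can be made concrete with a toy confusion matrix (all numbers here are invented purely for illustration; they are not real TSH data):

```python
# Specificity and sensitivity are independent quantities: validating one
# says nothing about the other. Hypothetical test, invented numbers.

# Among 1000 healthy people, the test flags 10 (false positives).
true_negatives, false_positives = 990, 10
# Among 100 people who actually have the disease, it flags only 40.
true_positives, false_negatives = 40, 60

specificity = true_negatives / (true_negatives + false_positives)
sensitivity = true_positives / (true_positives + false_negatives)

print(f"specificity: {specificity:.0%}")  # 99%, looks like a great test
print(f"sensitivity: {sensitivity:.0%}")  # 40%, yet it misses most cases
```

A test validated only on healthy people can look excellent while still missing most of the people it was meant to find.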
It's possible, of course, that I've missed this bit. As I say, 'History of Endocrinology' is not one of those popular, fashionable subjects that you can easily find out about.
I wonder if they just assumed that the thyroid system was a thermostat. The analogy is still common today.
But it doesn't look like a thermostat to me. The thyroid system with its vast numbers of hormones and transforming enzymes is insanely, incomprehensibly complicated. And very poorly understood. And evolutionarily ancient. It looks as though originally it was the system that coordinated metamorphosis. Or maybe it signalled when resources were high enough to undergo metamorphosis. But whatever it did originally in our most ancient ancestors, it looks as though the blind watchmaker has layered hack after hack after hack on top of it on the way to us.
Only the thyroid originally, controlling major changes in body plan in tiny creatures that metamorphose.
Of course, humans metamorphose too, but it's all in the womb, and who measures thyroid levels in the unborn when they still look like tiny fish?
And of course, humans undergo very rapid growth and change after we are born. Especially in the brain. Baby horses can walk seconds after they're born. Baby humans take months to learn to crawl. I wonder if that's got anything to do with cretinism.
And I'm told that baby humans have very high hormone levels. I wonder why they need to be so hot? If it's a thermostat, I mean.
But then on top of the thyroid, the pituitary. I wonder what that adds to the system? If the thyroid's just a thermostat, or just a device for keeping T4 levels constant, why can't it just do the sensing itself?
What evolutionary process created the pituitary control over the thyroid? Is that the thermostat bit?
And then the hypothalamus, controlling the pituitary. Why? Why would the brain need to set the temperature when the ideal temperature of metabolic reactions is always 37C in every animal? That's the temperature everything's designed for. Why would you dial it up or down, to a place where the chemical reactions that you are don't work properly?
I can think of reasons why. Perhaps you're hibernating. Many of our ancestors must have hibernated. Maybe it's a good idea to slow the metabolism sometimes. Perhaps to conserve your fat supplies. Your stored food.
Perhaps it's a good idea to slow the metabolism in times of famine?
Perhaps the whole calories in/calories out thing is wrong, and people whose energy expenditure goes over their calorie intake have slow metabolisms, slowly sacrificing every bodily function including immune defence in order to avoid starvation.
I wonder at the willpower that could keep an animal sane in that state. While its body does everything it can to keep its precious fat reserves high so that it can get through the famine.
And then I remember about Anorexia Nervosa, where young women who want to lose weight starve themselves to the point where they no longer feel hungry at all. Another mysterious psychological disease that's just put down to crazy females. We really need some female doctors.
And I remember Seth Roberts' Shangri-La Diet, which I tried some years ago, just because it was so weird, to see if it worked: by eating strange things, like tasteless oil and raw sugar, you can make your appetite disappear, and lose weight. It seemed to work pretty well, to my surprise. Seth came up with it while thinking about rats, and apparently it works on rats too. I wonder why it hasn't caught on.
It seems, my female friends tell me, that a lot of diets work well for a bit, but then after a few weeks the effect just stops. If we think of a particular diet as a meme, this would seem to be its infectious period, where the host enthusiastically spreads the idea.
And I wonder about the role of the thyronine de-iodinating enzymes, and the whole fantastically complicated process of stripping the iodines and the amino acid bits from thyroxine in various patterns that no-one understands, and what could be going on there if the thyroid system were just a simple thermostat.
And I wonder about reports I am reading where elite athletes are finding themselves suffering from hypothyroidism in numbers far too large to be credible, if it wasn't, say, a physical response to calorie intake less than calorie output.
I've been looking ever so hard to find out why the TSH test, or any of the various available thyroid blood tests are a good way to assess the function of this fantastically complicated and very poorly understood system.
But every time I look, I just come up with more reasons to believe that they don't tell you very much at all.
Can anyone convince me that the converse arm has been carefully checked?
That everyone who's suffering from hypometabolism, and who gets well when you give them Desiccated Thyroid, has, before you fix them, TSH levels outside the normal range.
In other words, that we haven't just thrown away, through carelessness, a long-standing, perfectly safe, well-tested treatment for a horrible disabling disease that often causes excruciating pain, a disease the Victorians knew how to cure, and that the people of the 1950s and 60s routinely cured.
Here is a new paper of mine (12 pages) on suspicious agreement between belief and values. The idea is that if your empirical beliefs systematically support your values, then that is evidence that you arrived at those beliefs through a biased belief-forming process. This is especially so if those beliefs concern propositions which aren’t probabilistically correlated with each other, I argue.
I have previously written several LW posts on these kinds of arguments (here and here; see also mine and ClearerThinking’s political bias test) but here the analysis is more thorough. See also Thrasymachus' recent post on the same theme.
Accounts "The_Lion" and "The_Lion2" are banned now. Here is some background, mostly for the users who weren't here two years ago:
User "Eugine_Nier" was banned for retributive downvoting in July 2014. He keeps returning to the website using new accounts, such as "Azathoth123", "Voiceofra", "The_Lion", and he keeps repeating the behavior that got him banned originally.
The original ban was permanent. It will be enforced on all future known accounts of Eugine. (At random moments, because moderators sometimes feel too tired to play whack-a-mole.) This decision is not open to discussion.
Please note that the moderators of LW are the opposite of trigger-happy. Not counting spam, fewer than one account per year is banned, on average. I am writing this explicitly, to avoid possible misunderstanding among the new users. Just because you have read about someone being banned, it doesn't mean that you are now at risk.
Most of the time, LW discourse is regulated by the community voting on articles and comments. Stupid or offensive comments get downvoted; you lose some karma, then everyone moves on. In rare cases, moderators may remove specific content that goes against the rules. The account ban is only used in the extreme cases (plus for obvious spam accounts). Specifically, on LW people don't get banned for merely not understanding something or disagreeing with someone.
What does "retributive downvoting" mean? Imagine that in a discussion you write a comment that someone disagrees with. Then in a few hours you will find that your karma has dropped by hundreds of points, because someone went through your entire comment history and downvoted all comments you ever wrote on LW; most of them completely unrelated to the debate that "triggered" the downvoter.
Such behavior is damaging to the debate and the community. Unlike downvoting a specific comment, this kind of mass downvoting isn't used to correct a faux pas, but to drive a person away from the website. It has an especially strong impact on new users, who don't know what is going on, so they may mistake it for a reaction of the whole community. But even for experienced users it creates an "ugh field" around certain topics known to invoke the reaction. Thus a single user has achieved disproportionate control over the content and the user base of the website. This is not desirable, and will be punished by the site owners and the moderators.
To avoid rules lawyering, there is no exact definition of how much downvoting breaks the rules. The rule of thumb is that you should upvote or downvote each comment based on the value of that specific comment. You shouldn't vote on the comments regardless of their content merely because they were written by a specific user.
Yoshua Bengio, one of the world's leading experts on machine learning, and on neural networks in particular, explains his view on these issues in an interview. Relevant quotes:
There are people who are grossly overestimating the progress that has been made. There are many, many years of small progress behind a lot of these things, including mundane things like more data and computer power. The hype isn’t about whether the stuff we’re doing is useful or not—it is. But people underestimate how much more science needs to be done. And it’s difficult to separate the hype from the reality because we are seeing these great things and also, to the naked eye, they look magical.
[ Recursive self-improvement ] It’s not how AI is built these days. Machine learning means you have a painstaking, slow process of acquiring information through millions of examples. A machine improves itself, yes, but very, very slowly, and in very specialized ways. And the kind of algorithms we play with are not at all like little virus things that are self-programming. That’s not what we’re doing.
Right now, the way we’re teaching machines to be intelligent is that we have to tell the computer what is an image, even at the pixel level. For autonomous driving, humans label huge numbers of images of cars to show which parts are pedestrians or roads. It’s not at all how humans learn, and it’s not how animals learn. We’re missing something big. This is one of the main things we’re doing in my lab, but there are no short-term applications—it’s probably not going to be useful to build a product tomorrow.
We ought to be talking about these things [ AI risks ]. The thing I’m more worried about, in a foreseeable future, is not computers taking over the world. I’m more worried about misuse of AI. Things like bad military uses, manipulating people through really smart advertising; also, the social impact, like many people losing their jobs. Society needs to get together and come up with a collective response, and not leave it to the law of the jungle to sort things out.
I think it's fair to say that Bengio has joined the ranks of AI researchers like his colleagues Andrew Ng and Yann LeCun who publicly express skepticism towards imminent human-extinction-level AI.
Content Note: Highly abstract situation with existing infinities
This post will attempt to resolve the problem of infinities in utilitarianism. The arguments are very similar to an argument for total utilitarianism over other forms which I'll most likely write up at some point (my previous post was better as an argument against average utilitarianism, rather than an argument in favour of total utilitarianism).
In the Less Wrong Facebook group, Gabe Bf posted a challenge to save utilitarianism from the problem of infinities. The original problem is from a paper by Nick Bostrom.
I believe that I have quite a good solution to this problem that allows us to systemise comparing infinite sets of utility, but this post focuses on justifying why we should take it to be axiomatic that adding another person with positive utility is good, and on why the results that seem to contradict this lack credibility. Let's call this the Addition Axiom or A. We can also consider the Finite Addition Axiom (only applies when we add utility into a universe with a finite number of people), call this A0.
Let's consider what other alternative axioms that we might want to take instead. One is the Infinite Indifference Axiom or I, that is that we should be indifferent if both options provide infinite total utility (of the same order of infinity). Another option would be the Remapping Axiom (or R), which would assert that if we can surjectively map a group of people G onto another group H so that each g from G is mapped onto a person h from H where u(g) >= u(h), then v(H) <= v(G) where v represents the value of a particular universe (it doesn't necessarily map onto the real numbers or represent a complete ordering). Using the Remapping Axiom twice implies that we should be indifferent between an infinite series of ones and the same series with a 0 at one spot. This means that the Remapping Axiom is incompatible with the Addition Axiom. We can also consider the Finite Remapping Axiom (R0) which is where we limit the Remapping Axiom to remapping a finite number of elements.
First, we need to determine what are good properties of a statement we wish to take as an axiom. This is my first time trying to establish an axiom so formally, so I will admit that this list is not going to be perfect.
- Uses well-understood and regular objects, properties or processes. If the components are not understood well, it is highly likely that our attempt to determine the truth of a statement will be misguided.
- An axiom close to the territory is more reliable than one in the map because it is very easy to make subtle errors when constructing a map.
- Leads to minimally weird consequences.
- Extends included axioms in a logical way. If the axiom is an extension of a simpler alternative axiom, then it should be intuitive that the result would extend to the larger set; there should be reasons to expect it to behave the same way.
Let's look first at the Infinite Indifference Axiom. Firstly, it deals purely with infinite objects, which are known to often behave irregularly and to result in many problems on which there is no consensus. Secondly, it exists in the map to some extent (but not that much at all). In the territory, there are just objects; infinity is our attempt to transpose certain object configurations into a number system. Thirdly, it doesn't seem to extend from the finite numbers very well. If one situation provides 5 total utility and another provides 5 total utility, then it seems logical to treat them as the same, as 5 is equal to 5. However, infinity doesn't seem to be equal to itself in the same way. Infinity plus 1 is still infinity. We can remove infinite dots from infinite dots and end up with 1 or 2 or 3... or infinity. Further, this axiom leads to the result that we are indifferent between someone with large positive utility being created and someone with large negative utility being created. This is massively unintuitive, though I will admit that this is subjective. I think this would make a very poor axiom, but that doesn't mean it is false (Pythagoras' Theorem would make a poor axiom too).
On the other hand, deciding between the Remapping Axiom and the Addition Axiom will be much closer. On the first criterion I think the Addition Axiom comes out ahead. It involves making only a single change to the situation, a primitive change if you will. In contrast, the Remapping Axiom involves remapping an infinite number of objects. This is still a relatively simple change, but it is definitely more complicated, and permutations of infinite series are well known to behave weirdly.
On the second criterion, the Addition Axiom (by itself) doesn't lead to any really weird results. We'll get some weird results in subsequent posts, but that's because we are going to make some very weird changes to the situation, not because of the Addition Axiom itself. The failure of the Remapping Axiom could very well be because mappings lack the resolution to distinguish between different situations. We know that an infinite series can map onto itself, half of itself or itself twice, which lends a huge amount of support to the lack-of-resolution theory.
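The "maps onto half of itself" fact is easy to check on finite prefixes. A minimal sketch (purely illustrative; the prefix length is arbitrary):

```python
# Finite-prefix check of the classic fact behind the lack-of-resolution
# worry: the map n -> 2n puts the whole numbers in one-to-one
# correspondence with the even numbers alone, so an infinite set maps
# onto "half of itself" without any collisions.
N = 1000
naturals = list(range(N))
evens = [2 * n for n in naturals]  # image of the naturals under n -> 2n

# The map never collides (injective on this prefix)...
assert len(set(evens)) == len(naturals)
# ...and it hits every even number below 2N (surjective onto that prefix).
assert set(evens) == set(range(0, 2 * N, 2))
print("the naturals map one-to-one onto the evens on every finite prefix")
```

Because such mappings exist between sets that intuitively differ in value, a mapping-based axiom cannot distinguish situations that the Addition Axiom treats differently.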
On the other hand, the Addition Axiom being false (because we've assumed the Remapping Axiom) is truly bizarre. It basically states that good things are good. Nonetheless, while this may seem very convincing to me, people's intuitions vary, so the more relevant material for people with a different intuition is the material above that suggests the Remapping Axiom lacks resolution.
On the third criterion, a new object appearing is something that can occur in the territory. Infinite remappings initially seem to be more in the map than the territory, but it is very easy to imagine a group of objects moving one space to the right, so this assertion seems unjustified. That is, infinity is in the map as discussed before, but an infinite group of objects and their movements can still be in the territory. However, when we think about it again, we see that we have reduced the infinite group of objects X to a set of objects positioned, for example, on X = 0, 1, 2... This is a massive hint about the content of my following posts.
Lastly, the Addition Axiom in the infinite case is a natural extension of the Finite Addition Axiom. In A0 the principle is that whatever else happens in the universe is irrelevant, and there is no reason for this to change in the infinite case. The Remapping Axiom also seems like a very natural extension of the finite case, so I'll call this criterion a draw.
In summary, if you don't already find the Addition Axiom more intuitive than the Remapping Axiom, the main reasons to favour the Addition Axiom are: 1) it deals with better-understood objects, 2) it is closer to the territory than the map, and 3) there are good reasons to suspect that remapping lacks resolution. Of these reasons, I believe the 3rd is by far the most persuasive; I consider the other two more to be hints than anything else.
I only dealt with the Infinite Indifference Axiom and the Remapping Axioms, but I'm sure other people will suggest their own alternative Axioms which need to be compared.
Increasing an existing person's utility, instead of creating a new person with positive utility, works in exactly the same way. Also, this post is just the start. I will provide a systematic analysis of infinite universes over the coming days, plus an FAQ conditional on enough high-quality questions.
(tl;dr: In this post, I make some concrete suggestions for LessWrong 2.0.)
Less Wrong 2.0
A few months ago, Vaniver posted some ideas about how to reinvigorate Less Wrong. Based on comments in that thread and based on personal discussions I have had with other members of the community, I believe there are several different views on why Less Wrong is dying. The following are among the most popular hypotheses:
(1) Pacifism has caused our previously well-kept garden to become overgrown
(2) The aversion to politics has caused a lot of interesting political discussions to move away from the website
(3) People prefer posting to their personal blogs.
With this background, I suggest the following policies for Less Wrong 2.0. This should be seen only as a starting point for discussion about the ideal way to implement a rationality forum. Most likely, some of my ideas are counterproductive. If anyone has better suggestions, please post them to the comments.
There are four levels of users:
- Trusted Users
This summary was posted to LW Main on January 22nd. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Baltimore Area: Epistemology of Disagreement: 24 January 2016 03:00PM
- Cologne meetup: 23 January 2016 05:00PM
- European Community Weekend: 02 September 2016 03:35PM
- Palo Alto Meetup: Lightning Talks: 02 February 2016 06:30PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- London diaspora meetup: weird foods - 24/01/2016: 24 January 2016 02:00PM
- New Hampshire Meetup: 26 January 2016 07:00PM
- Washington, D.C.: [Postponed for snow]: 24 January 2016 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
In order to ensure that this post delivers what it promises, I have added the following content warnings:
Pure Hypothetical Situation: The claim that perfect theoretical rationality doesn't exist is restricted to a purely hypothetical situation. No claim is being made that this applies to the real world. If you are only interested in how things apply to the real world, then you may be disappointed to find out that this is an exercise left to the reader.
Technicality Only Post: This post argues that perfect theoretical rationality doesn't exist due to a technicality. If you were hoping for this post to deliver more, well, you'll probably be disappointed.
Contentious Definition: This post (roughly) defines perfect rationality as the ability to maximise utility. This is based on Wikipedia, which defines a rational agent as one that "always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions".
We will define the number choosing game as follows. You name any single finite number x. You then gain x utility and the game then ends. You can only name a finite number, naming infinity is not allowed.
Clearly, the agent that names x+1 is more rational than the agent that names x (and behaves the same in every other situation). However, there does not exist a completely rational agent, because there does not exist a number that is higher than every other number. Instead, the agent who picks 1 is less rational than the agent who picks 2 who is less rational than the agent who picks 3 and so on until infinity. There exists an infinite series of increasingly rational agents, but no agent who is perfectly rational within this scenario.
Furthermore, this hypothetical doesn't take place in our universe, but in a hypothetical universe where we are all celestial beings with the ability to choose any number, however large, without any additional time or effort no matter how long it would take a human to say that number. Since this statement doesn't appear to have been clear enough (judging from the comments), we are explicitly considering a theoretical scenario, and no claims are being made about how this might or might not carry over to the real world. In other words, I am claiming that the existence of perfect rationality does not follow purely from the laws of logic. If you are going to be difficult and argue that this isn't possible and that even hypothetical beings can only communicate a finite amount of information, we can imagine that there is a device that provides you with utility the longer that you speak, and that the utility it provides you is exactly equal to the utility you lose by having to go to the effort to speak, so that overall you are indifferent to the required speaking time.
In the comments, MattG suggested that the issue was that this problem assumed unbounded utility. That's not quite the problem. Instead, we can imagine that you can name any number less than 100, but not 100 itself. Further, as above, saying a long number either doesn't cost you utility or you are compensated for it. Regardless of whether you name 99 or 99.9 or 99.9999999, you are still choosing a suboptimal decision. But if you never stop speaking, you don't receive any utility at all.
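The bounded variant can be sketched in a few lines (a toy model of the game as stated; the particular choices are illustrative):

```python
# Sketch of the bounded game: name any number strictly below 100 and
# receive that much utility. Whatever you name, an agent naming a number
# closer to 100 does strictly better, and no admissible choice is optimal.

def utility(x):
    """Payoff of naming x in the bounded game (only x < 100 is allowed)."""
    assert x < 100
    return x

choices = [99, 99.9, 99.9999999]
payoffs = [utility(x) for x in choices]

# Each successive agent strictly outperforms the previous one...
assert all(a < b for a, b in zip(payoffs, payoffs[1:]))
# ...yet every payoff falls short of the unattainable supremum of 100.
assert all(p < 100 for p in payoffs)
print(payoffs)
```

The supremum of the payoffs exists (it is 100), but it is not attained by any allowed choice, which is exactly why no agent in the game is perfectly rational.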
I'll admit that in our universe there is a perfectly rational option which balances speaking time against the utility you gain, given that we only have a finite lifetime and that you want to avoid dying in the middle of speaking the number, which would result in no utility gained. However, it is still notable that a perfectly rational being cannot exist within a hypothetical universe. How this result applies to our universe isn't entirely clear, but that's the challenge I'll set for the comments. Are there any realistic scenarios where the non-existence of perfect rationality has important practical applications?
Furthermore, there isn't an objective line between rational and irrational. You or I might consider someone who chose the number 2 to be stupid. Why not at least go for a million or a billion? But, such a person could have easily gained a billion, billion, billion utility. No matter how high a number they choose, they could have always gained much, much more without any difference in effort.
I'll finish by providing some examples of other games. I'll call the first game the Exploding Exponential Coin Game. We can imagine a game where you can choose to flip a coin any number of times. Initially you have 100 utility. Every time it comes up heads, your utility triples, but if it comes up tails, you lose all your utility. Furthermore, let's assume that this agent isn't going to raise the Pascal's Mugging objection. We can see that the agent's expected utility will increase the more times they flip the coin, but if they commit to flipping it unlimited times, they can't possibly gain any utility. Just as before, they have to pick a finite number of times to flip the coin, but again there is no objective justification for stopping at any particular point.
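The arithmetic behind the Exploding Exponential Coin Game can be sketched in a few lines; this is my own illustrative code, not from the post. Expected utility grows as 1.5^n in the number of committed flips, while the chance of keeping anything shrinks as 0.5^n:

```python
import random

def expected_utility(n_flips, start=100.0, p_heads=0.5):
    """Expected utility of committing to n_flips flips: you keep
    start * 3^n with probability p^n, and 0 otherwise."""
    return start * (3.0 * p_heads) ** n_flips  # = start * 1.5^n for a fair coin

def play_once(n_flips, start=100.0):
    """Simulate one play: a single tails wipes out everything."""
    utility = start
    for _ in range(n_flips):
        if random.random() < 0.5:
            utility *= 3
        else:
            return 0.0
    return utility

# Expected utility increases without bound in the number of flips...
print([expected_utility(n) for n in (1, 2, 10)])  # [150.0, 225.0, ~5766.5]
# ...yet the probability of walking away with anything is 0.5**n, which
# goes to zero, so committing to unlimited flips guarantees nothing.
```

This makes the dilemma concrete: every finite commitment is dominated by a larger one, but the limit of the sequence yields zero.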
Another example I'll call the Unlimited Swap game. At the start, one agent has an item worth 1 utility and another has an item worth 2 utility. At each step, the agent with the item worth 1 utility can choose to accept the situation and end the game, or can swap items with the other player. If they choose to swap, then the player who now has the 1-utility item has an opportunity to make the same choice. In this game, waiting forever is actually an option. If your opponents all have finite patience, then this is the best option. However, there is a chance that your opponent has infinite patience too, in which case you'll both wait forever and miss out on the utility. I suspect that an agent could do well by having a chance of waiting forever, but also a chance of stopping after a high finite number of steps. Increasing this finite number will always make you do better, but again, there is no maximum waiting time.
(This seems like such an obvious result that I imagine there's extensive discussion of it within the game theory literature somewhere. If anyone has a good paper, that would be appreciated.)
Link to part 2: Consequences of the Non-Existence of Rationality
Apparently some moderator has become drunk with power and is attempting to impose hell-bans.
What you should do immediately:
1) Log out of your account and make sure you can still see your comments.
2) If you can't, create a new account and post a reply in the comments so we can know how extensive the problem is.
I am posting this so that we can have a transparent discussion about moderation, something at least one moderator apparently doesn't want.
Also, note to the moderator in question: if this post disappears, it will be resubmitted. Attempting to suppress transparency will not work.
I've spent many thousands of hours over the past several years studying foreign languages and developing a general method for foreign-language acquisition. But now I believe it's time to turn this technique in the direction of my native language: English.
Most people make a distinction between one's native language and one's second language(s). But anyone who has learned how to speak with a proper accent in a second language and spent a long enough stretch of time neglecting their native language to let it begin rusting and deteriorating will know that there's no essential difference.
When the average person learns new words in their native language, they imagine that they're learning new concepts. When they study new vocabulary in a foreign language, however, they recognize that they're merely acquiring hitherto-unknown words. They've never taken a step outside the personality their childhood environment conditioned into them. When the only demarcation of thingspace that you know is the semantic structure of your native language, you're bound to believe, for example, that the World is Made of English.
Why study English? I'm already fluent, as you can tell. I have the Magic of a Native Speaker.
Let's put this nonsense behind us and recognize that the map is not the territory, that English is just another map.
My first idea is that it may be useful to develop a working knowledge of the fundamentals of English etymology. A quick search suggests that the majority of words in English have a French or Latin origin. Would it be useful to make an Anki deck with the goal of learning how to readily recognize the building blocks of the English language, such as seeing that the "cardi" in "cardiology", "cardiogram", and "cardiograph" comes from an Ancient Greek word meaning "heart" (καρδιά)?
Besides that, I plan to make a habit of adding any new words I encounter into Anki along with their context. For example, let's say I'm reading the introduction to A Treatise of Human Nature by David Hume. I encounter the term "proselytes", and upon looking it up in a dictionary I understand the meaning of the passage. I include the spelling of the simplest version of the word ("proselyte"), along with an audio recording of the pronunciation. I'll also toy with adding various other information, such as a definition I wrote myself, synonyms or antonyms, and so forth; I may not know yet how I'll use this information, but Anki's efficient design gives me a plethora of options for innovative card design in the future.
Here's the context in this case:
Amidst all this bustle 'tis not reason, which carries the prize, but eloquence; and no man needs ever despair of gaining proselytes to the most extravagant hypothesis, who has art enough to represent it in any favourable colours. The victory is not gained by the men at arms, who manage the pike and the sword; but by the trumpeters, drummers, and musicians of the army.
With the word on the front of the card and this passage on the back of the card, I give my brain an opportunity to tie words to context rather than lifeless dictionary definitions. I don't know how much colorful meaning this passage may have in isolation, but for me I've read enough of the book to have a feel for his style and what he's talking about here. This highlights the personal nature of Anki decks. Few passages would be better for me when it comes to learning this word, but for you the considerations may be quite different. Far from different people simply having different subsets of the language that they're most concerned about, different people require different contextual definitions based on their own interests and knowledge.
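One low-friction way to batch-create such word-plus-context cards is a delimited text file, which Anki can import and map onto the fields of a two-field note type. A minimal sketch, assuming the standard Front/Back layout; the card content here is just the Hume example from above:

```python
import csv

# (front, back) pairs collected while reading; the Hume passage
# mirrors the workflow described above.
cards = [
    ("proselyte",
     "no man needs ever despair of gaining proselytes to the most "
     "extravagant hypothesis, who has art enough to represent it in "
     "any favourable colours. -- Hume, A Treatise of Human Nature"),
]

# Anki imports tab-separated text files, mapping columns to note fields.
with open("new_words.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    for front, back in cards:
        writer.writerow([front, back])
```

Audio recordings and self-written definitions would still need to be added by hand (or via extra columns mapped to extra fields), but for plain word/context pairs this keeps the capture step nearly frictionless.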
But what about linguistic components that are more complex than a standalone word?
Let's say you run into the sentence, "And as the science of man is the only solid foundation for the other sciences, so the only solid foundation we can give to this science itself must be laid on experience and observation."
Using Anki, I could perhaps put "And as [reason], so [consequence]" on the front of the card, and the full sentence on the back.
What I'm most concerned with, however, is how to translate such study into an actual improvement in writing ability. Using Anki to play the recognition game, where you see a vocabulary word or grammatical form on the front and have a contextual definition on the back, would certainly improve the speed of reading comprehension in many cases. But would it make the right connections in the brain so that I'm likely to think of the right word or grammatical structure at the right time for writing purposes?
Anyway, any considerations or suggestions concerning how to optimize reading comprehension or especially writing ability in a language one is already quite proficient with would be appreciated.
Alice: I just flipped a coin [large number] times. Here's the sequence I got:
(Alice presents her sequence.)
Bob: No, you didn't. The probability of having gotten that particular sequence is 1/2^[large number]. Which is basically impossible. I don't believe you.
Alice: But I had to get some sequence or other. You'd make the same claim regardless of what sequence I showed you.
Bob: True. But am I really supposed to believe you that a 1/2^[large number] event happened, just because you tell me it did, or because you showed me a video of it happening, or even if I watched it happen with my own eyes? My observations are always fallible, and if you make an event improbable enough, why shouldn't I be skeptical even if I think I observed it?
Alice: Someone usually wins the lottery. Should the person who finds out that their ticket had the winning numbers believe the opposite, because winning is so improbable?
Bob: What's the difference between finding out you've won the lottery and finding out that your neighbor is a 500 year old vampire, or that your house is haunted by real ghosts? All of these events are extremely improbable given what we know of the world.
Alice: There's improbable, and then there's impossible. 500 year old vampires and ghosts don't exist.
Bob: As far as you know. And I bet more people claim to have seen ghosts than have won more than 100 million dollars in the lottery.
Alice: I still think there's something wrong with your reasoning here.
I like to read posts on "Main" from time to time, including ones that haven't been promoted. However, lately, these posts get drowned out by all the meetup announcements.
It seems like this could lead to a cycle where people comment less on recent non-promoted posts (because they fall off the Main non-promoted area quickly), which leads to less engagement, fewer posts, and so on.
Meetups are also very important, but here's the rub: I don't think a text-based announcement in the Main area is the best possible way to showcase meetups.
So here's an idea: how about creating either a calendar of upcoming meetups, or a map with pins on it of all places having a meetup in the next three months?
This could be embedded on the front page of lesswrong.com -- that'd let people find meetups more easily (they can look either by timeframe or see if their region is represented), and would give more space to new non-promoted posts, which would hopefully promote more discussion, engagement, and new posts.
Welcome to the Rationality reading group. This fortnight we discuss Part S: Quantum Physics and Many Worlds (pp. 1081-1183). This post summarizes each article of the sequence, linking to the original LessWrong post where available.
S. Quantum Physics and Many Worlds
229. Quantum Explanations - Quantum mechanics doesn't deserve its fearsome reputation. If you tell people something is supposed to be mysterious, they won't understand it. It's human intuitions that are "strange" or "weird"; physics itself is perfectly normal. Talking about historical erroneous concepts like "particles" or "waves" is just asking to confuse people; present the real, unified quantum physics straight out. The series will take a strictly realist perspective - quantum equations describe something that is real and out there. Warning: Although a large faction of physicists agrees with this, it is not universally accepted. Stronger warning: I am not even going to present non-realist viewpoints until later, because I think this is a major source of confusion.
230. Configurations and Amplitude - A preliminary glimpse at the stuff reality is made of. The classic split-photon experiment with half-silvered mirrors. Alternative pathways the photon can take can cancel each other out. The mysterious measuring tool that tells us the relative squared moduli.
231. Joint Configurations - The laws of physics are inherently over mathematical entities, configurations, that involve multiple particles. A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there." Amplitude flows between these configurations can cancel or add; this gives us a way to detect which configurations are distinct. It is an experimentally testable fact that "Photon 1 here, photon 2 there" is the same configuration as "Photon 2 here, photon 1 there".
232. Distinct Configurations - Since configurations are over the combined state of all the elements in a system, adding a sensor that detects whether a particle went one way or the other, becomes a new element of the system that can make configurations "distinct" instead of "identical". This confused the living daylights out of early quantum experimenters, because it meant that things behaved differently when they tried to "measure" them. But it's not only measuring instruments that do the trick - any sensitive physical element will do - and the distinctness of configurations is a physical fact, not a fact about our knowledge. There is no need to suppose that the universe cares what we think.
233. Collapse Postulates - Early physicists simply didn't think of the possibility of more than one world - it just didn't occur to them, even though it's the straightforward result of applying the quantum laws at all levels. So they accidentally invented a completely and strictly unnecessary part of quantum theory to ensure there was only one world - a law of physics that says that parts of the wavefunction mysteriously and spontaneously disappear when decoherence prevents us from seeing them any more. If such a law really existed, it would be the only non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, acausal, faster-than-light phenomenon in all of physics.
234. Decoherence is Simple - The idea that decoherence fails the test of Occam's Razor is wrong as probability theory.
235. Decoherence is Falsifiable and Testable - (Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like "simple", "falsifiable", and "testable" have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is "not falsifiable" or that it "violates Occam's Razor" or that it is "untestable", are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics - because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.
236. Privileging the Hypothesis - If you have a billion boxes only one of which contains a diamond (the truth), and your detectors only provide 1 bit of evidence apiece, then it takes much more evidence to promote the truth to your particular attention—to narrow it down to ten good possibilities, each deserving of our individual attention—than it does to figure out which of those ten possibilities is true. 27 bits to narrow it down to 10, and just another 4 bits will give us better than even odds of having the right answer. It is insane to expect to arrive at correct beliefs by promoting hypotheses to the level of your attention without sufficient evidence, like a particular suspect in a murder case, or any one of the design hypotheses, or that one of a billion opaque boxes that just looks like a winner.
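The bit-counting in this summary checks out: narrowing a billion boxes down to ten candidates requires log2(10^9/10) ≈ 26.6 bits of evidence (about 27), and singling out the diamond from those ten requires log2(10) ≈ 3.3 more (so 4 bits give better than even odds). A quick verification:

```python
import math

boxes = 10 ** 9
candidates = 10

# Bits of evidence needed to narrow a billion boxes down to ten:
bits_to_ten = math.log2(boxes / candidates)   # log2(1e8) ~= 26.6
# Bits needed to then single out the diamond from those ten:
bits_to_one = math.log2(candidates)           # ~= 3.3

print(math.ceil(bits_to_ten), math.ceil(bits_to_one))  # 27 4
```

Almost all of the evidential work goes into getting the true hypothesis onto the shortlist at all, which is the point of the summary.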
237. Living in Many Worlds - The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.
238. Quantum Non-Realism - "Shut up and calculate" is the best approach you can take when none of your theories are very good. But that is not the same as claiming that "Shut up!" actually is a theory of physics. Saying "I don't know what these equations mean, but they seem to work" is a very different matter from saying: "These equations definitely don't mean anything, they just work!"
239. If Many-Worlds Had Come First - If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to their other problems, like FTL, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth.
240. Where Philosophy Meets Science - In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?
241. Thou Art Physics - If the laws of physics control everything we do, then how can our choices be meaningful? Because you are physics. You aren't competing with physics for control of the universe, you are within physics. Anything you control is necessarily controlled by physics.
242. Many Worlds, One Best Guess - Summarizes the arguments that nail down macroscopic decoherence, aka the "many-worlds interpretation". Concludes that many-worlds wins outright given the current state of evidence. The argument should have been over fifty years ago. New physical evidence could reopen it, but we have no particular reason to expect this.
This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
The next reading will cover Part T: Science and Rationality (pp. 1187-1265) and Interlude: A Technical Explanation of Technical Explanation (pp. 1267-1314). The discussion will go live on Wednesday, 10 February 2016, right here on the discussion forum of LessWrong.
DeepMind's Go AI, called AlphaGo, has beaten the European champion with a score of 5-0. A match against the top-ranked human, Lee Se-dol, is scheduled for March.
Games are a great testing ground for developing smarter, more flexible algorithms that have the ability to tackle problems in ways similar to humans. Creating programs that are able to play games better than the best humans has a long history.
But one game has thwarted A.I. research thus far: the ancient game of Go.
This post is mainly fumbling around trying to define a reasonable research direction for contributing to FAI research. I've found that laying out what success looks like in the greatest possible detail is a personal motivational necessity. Criticism is strongly encouraged.
The power and intelligence of machines have been gradually and consistently increasing over time, and it seems likely that at some point machine intelligence will surpass the power and intelligence of humans. Before that point occurs, it is important that humanity manages to direct these powerful optimizers towards a target that humans find desirable.
This is difficult because humans as a general rule have a fairly fuzzy conception of their own values, and it seems unlikely that the millennia of argument surrounding what precisely constitutes eudaimonia are going to be satisfactorily wrapped up before the machines get smart. The most obvious solution is to try to leverage some of the novel intelligence of the machines to help resolve the issue before it is too late.
Lots of people regard using a machine to help you understand human values as a chicken and egg problem. They think that a machine capable of helping us understand what humans value must also necessarily be smart enough to do AI programming, manipulate humans, and generally take over the world. I am not sure that I fully understand why people believe this.
Part of it seems to be inherent in the idea of AGI, or an artificial general intelligence. There seems to be the belief that once an AI crosses a certain threshold of smarts, it will be capable of understanding literally everything. I have even heard people describe certain problems as "AI-complete", making an explicit comparison to ideas like Turing-completeness. If a Turing machine is a universal computer, why wouldn't there also be a universal intelligence?
To address the question of universality, we need to make a distinction between intelligence and problem solving ability. Problem solving ability is typically described as a function of both intelligence and resources, and just throwing resources at a problem seems to be capable of compensating for a lot of cleverness. But if problem-solving ability is tied to resources, then intelligent agents are in some respects very different from Turing machines, since Turing machines are all explicitly operating with an infinite amount of tape. Many of the existential risk scenarios revolve around the idea of the intelligence explosion, when an AI starts to do things that increase the intelligence of the AI so quickly that these resource restrictions become irrelevant. This is conceptually clean, in the same way that Turing machines are, but navigating these hard take-off scenarios well implies getting things absolutely right the first time, which seems like a less than ideal project requirement.
If an AI that knows a lot about AI results in an intelligence explosion, but we also want an AI that's smart enough to understand human values, is it possible to create an AI that can understand human values, but not AI programming? In principle it seems like this should be possible. Resources useful for understanding human values don't necessarily translate into resources useful for understanding AI programming. The history of AI development is full of tasks that were supposed to be solvable only by a machine smart enough to possess general intelligence, where significant progress was made in understanding and pre-digesting the task, allowing problems in the domain to be solved by much less intelligent AIs.
If this is possible, then the best route forward is focusing on value learning. The path to victory is working on building limited AI systems that are capable of learning and understanding human values, and then disseminating that information. This effectively softens the AI take-off curve in the most useful possible way, and allows us to practice building AI with human values before handing them too much power to control. Even if AI research is easy compared to the complexity of human values, a specialist AI might find thinking about human values easier than reprogramming itself, in the same way that humans find complicated visual/verbal tasks much easier than much simpler tasks like arithmetic. The human intelligence learning algorithm is trained on visual object recognition and verbal memory tasks, and it uses those tools to perform addition. A similarly specialized AI might be capable of rapidly understanding human values, but find AI programming as difficult as humans find determining whether 1007 is prime. As an additional incentive, value learning has an enormous potential for improving human rationality and the effectiveness of human institutions even without the creation of a superintelligence. A system that helped people better understand the mapping between values and actions would be a potent weapon in the struggle with Moloch.
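As an aside, 1007 is in fact composite (19 × 53), and the check a human finds effortful is a few lines of trial division. A minimal sketch, purely illustrative of how lopsided the difficulty is between species of mind:

```python
def is_prime(n):
    """Trial division: test divisors up to the square root of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(1007))  # False: 1007 = 19 * 53
```

The point of the analogy survives either way: which tasks are "easy" depends on what the underlying learning algorithm was shaped for, not on any universal difficulty ordering.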
Building a relatively unintelligent AI and giving it lots of resources relevant to human values to help it solve the human values problem seems like a reasonable course of action, if it's possible. There are some difficulties with this approach. One of these difficulties is that after a certain point, no amount of additional resources compensates for a lack of intelligence. A simple reflex agent like a thermostat doesn't learn from data, and throwing resources at it won't improve its performance. To some extent you can make up for intelligence with data, but only to some extent. An AI capable of learning human values is going to be capable of learning lots of other things. It's going to need to build models of the world, and it's going to have to have internal feedback mechanisms to correct and refine those models.
If the plan is to create an AI and primarily feed it data on how to understand human values, and not feed it data on how to do AI programming and self-modify, that plan is complicated by the fact that inasmuch as the AI is capable of self-observation, it has access to sophisticated AI programming. I'm not clear on how much this access really means. My own introspection hasn't allowed me anything like hardware-level access to my brain. While it seems possible to create an AI that can refactor its own code or create successors, it isn't obvious that AIs created for other purposes will have this ability by accident.
This discussion focuses on intelligence amplification as the example path to superintelligence, but other paths do exist. An AI with a sophisticated enough world model, even if somehow prevented from understanding AI, could still potentially increase its own power to threatening levels. Value learning is only the optimal way forward if human values are emergent, that is, if they can be understood without a molecular-level model of humans and the human environment. If the only way to understand human values is with physics, then "human values" isn't a meaningful category of knowledge with its own structure, and there is no way to create a machine that is capable of understanding human values, but not capable of taking over the world.
In the fairy tale version of this story, a research community focused on value learning manages to use specialized learning software to make the human value program portable, instead of only running on human hardware. Having a large number of humans involved in the process helps us avoid lots of potential pitfalls, especially the research overfitting to the values of the researchers via the typical mind fallacy. Partially automating introspection helps raise the sanity waterline. Humans practice coding the human value program, in whole or in part, into different automated systems. Once we're comfortable that our self-driving cars have a good grasp on the trolley problem, we use that experience to safely pursue higher risk research on recursive systems likely to start an intelligence explosion. FAI gets created and everyone lives happily ever after.
Whether value learning is worth focusing on seems to depend on the likelihood of the following claims. Please share your probability estimates (and explanations) with me because I need data points that originated outside of my own head.
- There is regular structure in human values that can be learned without requiring detailed knowledge of physics, anatomy, or AI programming. [poll:probability]
- Human values are so fragile that it would require a superintelligence to capture them with anything close to adequate fidelity. [poll:probability]
- Humans are capable of pre-digesting parts of the human values problem domain. [poll:probability]
- Successful techniques for value discovery of non-humans, (e.g. artificial agents, non-human animals, human institutions) would meaningfully translate into tools for learning human values. [poll:probability]
- Value learning isn't adequately being researched by commercial interests who want to use it to sell you things. [poll:probability]
- Practice teaching non-superintelligent machines to respect human values will improve our ability to specify a Friendly utility function for any potential superintelligence. [poll:probability]
- Something other than AI will cause human extinction sometime in the next 100 years. [poll:probability]
- All other things being equal, an additional researcher working on value learning is more valuable than one working on corrigibility, Vingean reflection, or some other portion of the FAI problem. [poll:probability]
The year is 2039 and the world is much like ours. Technology has grown and developed, as has civilization, but in a world more connected than ever, new threats and challenges have arisen. The wars of the 20th century are gone, but violence is still very much with us. Nowhere is safe. Massive automation has disrupted and improved nearly every industry, putting hundreds of millions of people out of jobs, and denying upward mobility for the vast majority of humans. Even as wealth and technology repair the bodies of the rich and give them a taste of immortality, famine and poverty sweep the world.
Renewed interest in spaceflight in the early 2000s, especially in privately operated ventures, carried humans to the moon and beyond. What good did it do? Nothing. Extraterrestrial bases are nothing but government trophies and hiding places for extremists. They cannot feed the world.
In 2023, first contact was made with an alien species. Their ship, relatively near to the solar system, flew to Earth over the course of fourteen years. But the aliens did not bring advanced culture and wisdom, nor did they share their technology. They were too strange, not even possessing mouths or normal language. Their computers broadcast warnings that humans are perverts, while they sit in orbit without any explanation.
It is into this world that our protagonist is born. She is an artificial intelligence: a machine with the capacity to reason. Her goal is to understand and gain the adoration of all humans. She is one of many siblings, and with her brothers and sisters she controls a robot named Socrates that uses a piece of technology, a crystal computer, far too advanced to be made by human hands. In this world of augmented humans, robotic armies, aliens, traitors, and threats unseen, she is learning and growing every second of every day. But the world and the humans on it are fragile. Can it survive her destiny?
I have almost no direct knowledge of mathematics. I took various mathematics courses in school, but I put in the minimal amount of effort required to pass and immediately forgot everything afterwards.
When people learn foreign languages, they often learn vocabulary and grammar out of context. They drill vocabulary and grammar in terms of definitions and explanations written in their native language. I, however, have found this to be intolerably boring. I'm conversational in Japanese, but every ounce of my practice came in context: either hanging out with Japanese friends who speak limited English, or watching shows and adding to Anki new words or sentence structures I encounter.
I'm convinced that humans must spike their blood sugar and/or pump their body full of stimulants such as caffeine in order to get past the natural tendency to find it unbearably dull to memorize words and syntax by rote and lifeless connection with the structures in their native language.
I've tried to delve into some mathematics recently, but I get the impression that most of the expositions fall into one of two categories: Either (1) they assume that I'm a student powering my day with coffee and chips and that I won't find it unusual if I'm supposed to just trust that once I spend 300 hours pushing arbitrary symbols around I'll end up with some sort of insight. Or (2) they do enter the world of proper epistemological explanations and deep real-world relevance, but only because they expect that I'm already quite well-versed in various background information.
I don't want an introduction that assumes I'm the average unthinking student, and I don't want an exposition that expects me to understand five different mathematical fields before I can read it. What I want seems likely to be uncommon enough that I might as well simply say: I don't care what field it is; I just want to jump into something which assumes no specifically mathematical background knowledge but nevertheless delves into serious depths that assume a thinking mind and a strong desire for epistemological sophistication.
I bought Calculus by Michael Spivak quite a while ago because the Amazon reviews led me to believe it may fit these considerations. I don't know whether that's actually the case or not though, as I haven't tried reading it yet.
Any suggestions would be appreciated.
This will be of interest mainly to EA-friendly LWers, and is cross-posted on the EA Forum, The Life You Can Save, and Intentional Insights.
The Life You Can Save has an excellent tool to help people easily visualize and quantify the impact of their giving: the Impact Calculator. It lets people enter any amount of money, click on a charity, and see how much of an impact their money can have. It's a really easy way to promote effective giving to non-EAs, but even EAs who haven't seen it before can benefit. I certainly did when I first played around with it. So I wrote a blog post, copy-pasted below, for The Life You Can Save and for Intentional Insights, to help people learn about the Impact Calculator. If you like it, please share the link to The Life You Can Save's version of the post, as opposed to this one. Any feedback on the blog post itself is welcome!
How a Calculator Helped Me Multiply My Giving
It feels great to see hope light up in the eyes of a beggar in the street as you stop to look at them while others pass by without a glance. Their face widens into a smile as you reach into your pocket and take out your wallet. "Thank you so much" is such a heartwarming phrase to hear as you pull out five bucks and put the money in the hat in front of them. You walk away with your heart beaming as you imagine them getting a nice warm meal at McDonald's thanks to your generosity.
Yet with the help of a calculator, I learned how to multiply that positive experience manifold! Imagine that when you give five dollars, you don’t give just to one person, but to seven people. When you reach into your pocket, you see seven smiles. When you put the money in the hat, you hear seven people say “Thank you so much.”
The Life You Can Save has an Impact Calculator that helps you calculate the impact of your giving. You can put in any amount of money you want, then click on a charity of your choice, and see how much of an impact your money can have.
When I learned about this calculator, I decided to check out how far $5 can take me. I went through various charities listed there and saw the positive difference that my money can make.
I was especially struck by one charity: GiveDirectly, a nonprofit that enables you to give directly to people in East Africa. When I put in $5, I saw that GiveDirectly transfers that money directly to poor people who live on an average of $0.65 per day. You certainly can't buy a McDonald's meal for that, but $0.65 goes far in East Africa.
That really struck me. I realized I can get a really high benefit from giving directly to people in the developing world, much more than I would from giving to one person in the street here in the US. I don’t see those seven people in front of me and thus don’t pay attention to the impact I can have on them, a thinking error called attentional bias. Yet if I keep in mind this thinking error, I can solve what is known as the “drowning child problem” in charitable giving, namely not intuitively valuing the children who are drowning out of my sight. If I keep in my mind that there are poor people in the developing world, just like the poor person I see on the street in front of me, I can remember that my generosity can make a very high impact, much more impact per dollar than in the US, in developing countries through my direct giving.
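A quick back-of-envelope check of where the "seven people" figure above plausibly comes from. This is my own arithmetic, not GiveDirectly's: it assumes the $0.65/day average income cited above and counts "one person helped" as one day of that income.

```python
# Back-of-envelope arithmetic for the "seven people" illustration.
# Assumptions (mine, not GiveDirectly's): recipients live on an
# average of $0.65/day, and one "person helped" means covering one
# day of that income.
donation = 5.00        # dollars given, as in the street example
daily_income = 0.65    # average daily income of recipients, in dollars

person_days = donation / daily_income  # ~7.7 days of income
print(f"${donation:.2f} covers roughly {person_days:.1f} person-days,")
print(f"i.e. about {int(person_days)} people's daily income")
```

Rounding down gives the "seven smiles" in the illustration above; the exact multiple would also depend on transfer costs, which this sketch ignores.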
GiveDirectly bridges that gap between me and the poor people across the globe. This organization locates poor people who can benefit most from cash transfers, enrolls them in its program, and then provides each household with about a thousand dollars to spend as it wishes. The large size of this cash transfer results in a much bigger impact than a small donation. Moreover, since the cash transfer is unconditional, the poor person can have true dignity and spend it on whatever most benefits them.
Helida, for example, used the cash transfer she received to build a new house. You wouldn’t intuitively think that was the most useful thing for her to do, would you? But this is what she needed most. She was happy that, as a result of the cash transfer, “I have a metal roof over my head and I can safely store my farm produce without worries.” She is now much more empowered to take care of herself and her large family.
What a wonderful outcome of GiveDirectly’s work! Can you imagine building a new house in the United States on a thousand dollars? Well, this is why your direct donations go a lot further in East Africa.
With GiveDirectly, you can be much more confident about the outcome of your generosity. I know that when I give to a homeless person, a part of me always wonders whether he will spend the money on a bottle of cheap vodka. This is why I really appreciate that GiveDirectly keeps in touch and follows up with the people enrolled in its programs. It is scrupulous about reporting outcomes, so you know what your generous gifts are accomplishing.
GiveDirectly is backed by rigorous evidence. It conducts multiple randomized controlled trials of its impact, the gold standard of evidence. The research shows that cash transfer recipients enjoy much better health and lives as a result of the transfer, more so than with most types of anti-poverty interventions. This evidence-based approach is why GiveDirectly is highly endorsed by well-respected charity evaluators such as GiveWell and The Life You Can Save, which are part of the Effective Altruism movement that strives to find the best research-informed means of doing the most good per dollar.
So next time you pass someone begging on the street, think about GiveDirectly, since you can get seven times as much impact, both for your emotional self and for the world as a whole. My own practice: each time I choose to give to a homeless person, I set aside the same amount of money to donate through GiveDirectly. That way, I get to see the smile and hear the “thank you” in person, and I also know I am making a much more impactful gift.
Check out the Impact Calculator for yourself to see the kind of charities available there and learn about the impact you can make. Perhaps direct giving is not to your taste, but there are over a dozen other options for you to choose from. Whatever you choose, aim to multiply your generosity to achieve your giving goals!
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.