The Fable of the Burning Branch

-19 EphemeralNight 08 February 2016 03:20PM

 

Once upon a time, in a lonely little village, beneath the boughs of a forest of burning trees, there lived a boy. The branches of the burning trees sometimes fell, and the magic in the wood permitted only girls to carry the fallen branches of the burning trees.

One day, a branch fell, and a boy was pinned beneath. The boy saw other boys pinned by branches, rescued by their girl friends, but he remained trapped beneath his own burning branch.

The fire crept closer, and the boy called out for help.

Finally, a friend of his own came, but she told him that she could not free him from the burning branch, because she had already freed her other friend from beneath a burning branch, and he would be jealous if she did the same deed for anyone else. This friend left him where he lay, but she did promise to return and visit.

The fire crept closer, and the boy called out for help.

A man stopped, and gave the boy the advice that he'd get out from beneath the burning branch eventually if he just had faith in himself. The boy's reply was that he did have faith in himself, yet he remained trapped beneath the burning branch. The man suggested that perhaps he did not have enough faith, and left with nothing more to offer.

The fire crept closer, and the boy cried out for help.

A girl came along, and said she would free the boy from beneath the burning branch.

But no, her friends said, the boy was a stranger to her, was her heroic virtue worth nothing? Heroic deeds ought to be born from the heart, and made beautiful by love, they insisted. Simply hauling the branch off a boy she did not love would be monstrously crass, and they would not want to be friends with a girl so shamed.

So the girl changed her mind and left with her friends.

The fire crept closer. It began to lick at the boy's skin. A soothing warmth became an uncomfortable heat. The boy mustered his courage and chased the fear out of his own voice. He called out, but not for help. He called out for company.

A girl came along, and the boy asked if she would like to be friends. The girl's reply was that she would like to be friends, but that she spent most of her time on the other side of the village, so if they were to be friends, he must be free from beneath the burning branch.

The boy suggested that she free him from beneath the burning branch, so that they could be friends.

The girl replied that she had once freed a boy from beneath a burning branch who also promised to be her friend, but as soon as he was free he never spoke to her again. So how could she trust the boy's offer of friendship? He would say anything to be free.

The boy tried frantically to convince her that he was sincere, that he would be grateful and try with all his heart to be a good friend to the girl who freed him, but she did not believe him, and she turned away and left him there to burn.

The fire crept closer and the boy whimpered in pain and fear as it spread from wood to flesh. He cried out for help. He begged for help. "Somebody, please!"

A man and a woman came along, and the man offered advice: he had once been trapped beneath a burning branch for several years. The fire was magic; the pain was only an illusion. Perhaps it was sad that the boy was trapped, but even so trapped he might lead a fulfilling life. Why, the man remembered etching pictures into his branch, befriending passers-by, and making up songs.

The woman beside the man agreed, and told the boy that she hoped the right girl would come along and free him, but that he must not presume that he was entitled to any girl's heroic deed merely because he was trapped beneath a burning branch.

"But do I not deserve to be helped?" the boy pleaded, as the flames licked his skin.

"No, how wrong of you to even speak as though you do. My heroic deeds are mine to give, and to you I owe nothing," he was told.

"Perhaps I don't deserve help from you in particular, or from anyone in particular, but is it not so very cruel of you to say I do not deserve any help at all?" the boy pleaded. "Can a girl willing to free me from beneath this burning branch not be found and sent to my aide?"

"Of course not," he was told, "that is utterly unreasonable and you should be ashamed of yourself for asking. It is offensive that you believe such a girl may even exist. You've become burned and ugly, who would want to save you now?"

The fire spread, and the boy cried, screamed, and begged desperately for help from every passer-by.

"It hurts it hurts it hurts oh why will no one free me from beneath this burning branch?!" he wailed in despair. "Anything, anyone, please! I don't care who frees me, I only wish for release from this torment!"

Many tried to ignore him, while others scoffed in disgust that he had so little regard for what a heroic deed ought to be. Some pitied him, and wanted to help, but could not bring themselves to bear the social cost, the loss of worth in their friends' and family's eyes, that would come of doing a heroic deed motivated, not by love, but by something lesser.

The boy burned, and wanted to die.

Another boy stepped forward. He went right up to the branch, and tried to lift it. The trapped boy gasped at the small relief from the burning agony, but it was only a small relief, for the burning branches could only be lifted by girls, and the other boy could not budge it. Though the effort was for naught, the first boy thanked him sincerely for trying.

The boy burned, and wanted to die. He asked to be killed.

He was told he had so much to live for, even if he must live beneath a burning branch. None were willing to end him, but perhaps they could do something else to make it easier for him to live beneath the burning branch? The boy could think of nothing. He was consumed by agony, and wanted only to end.

And then, one day, a party of strangers arrived in the village. Heroes from a village afar. Within an hour, one foreign girl came before the boy trapped beneath the burning branch and told him that she would free him if he gave her his largest nugget of gold.

Of course, the local villagers were shocked that this foreigner would sully a heroic deed by trafficking it for mere gold.

But the boy was too desperate to be shocked, and agreed immediately. She freed him from beneath the burning branch, and as the magical fire was drawn from him, he felt his burned flesh become restored and whole. He fell upon the foreign girl and thanked her and thanked her and thanked her, crying and crying tears of relief.

Later, he asked how. He asked why. The foreign girl explained that in their village, heroic virtue was measured by how much joy a hero brought, not by how much she loved the ones she saved.

The locals did not like the implication that their own way might not have been the best way, and complained to the chief of their village. The chief cared only about staying in the good graces of the heroes of his village, and so he outlawed the trading of heroic deeds for other commodities.

The foreign girls were chased out of the village.

And then a local girl spoke up, and spoke loud, to sway her fellow villagers. The boy recognized her. It was his friend. The one who had promised to visit so long ago.

But she shamed the boy, for doing something so crass as trading gold for a heroic deed. She told him he should have waited for a local girl to free him from beneath the burning branch, or else grown old and died beneath it.

To garner sympathy from her audience, she sorrowfully admitted that she was a bad friend for letting the boy be tempted into something so disgusting. She felt responsible, she claimed, and so she would fix her mistake.

The girl picked up a burning branch. Seeing what she was about to do, the boy begged and pleaded for her to reconsider, but she dropped the burning branch upon the boy, trapping him once more.

The boy screamed and begged for help, but the girl told him that he was morally obligated to learn to live with the agony, and never again voice a complaint, never again ask to be freed from beneath the burning branch.

"Banish me from the village, send me away into the cold darkness, please! Anything but this again!" the boy pleaded.

"No," he was told by his former friend, "you are better off where you are, where all is proper."

In the last extreme, the boy made a grab for his former friend's leg, hoping to drag her beneath the burning branch and free himself that way, but she evaded him. In retaliation for this attempt to defy her, she had a wall built around the boy, so that none could free him from beneath the burning branch, even should anyone want to.

With all hope gone, the boy broke and became numb to all possible joys. And thus, he died, unmourned.

[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim

7 ESRogs 19 August 2015 06:37AM

This seems significant:

An almost fully-formed human brain has been grown in a lab for the first time, claim scientists from Ohio State University. The team behind the feat hope the brain could transform our understanding of neurological disease.

Though not conscious, the miniature brain, which resembles that of a five-week-old foetus, could potentially be useful for scientists who want to study the progression of developmental diseases.

...

The brain, which is about the size of a pencil eraser, is engineered from adult human skin cells and is the most complete human brain model yet developed

...

Previous attempts at growing whole brains have at best achieved mini-organs that resemble those of nine-week-old foetuses, although these “cerebral organoids” were not complete and only contained certain aspects of the brain. “We have grown the entire brain from the get-go,” said Anand.

...

The ethical concerns were non-existent, said Anand. “We don’t have any sensory stimuli entering the brain. This brain is not thinking in any way.”

...

If the team’s claims prove true, the technique could revolutionise personalised medicine. “If you have an inherited disease, for example, you could give us a sample of skin cells, we could make a brain and then ask what’s going on,” said Anand.

...

For now, the team say they are focusing on using the brain for military research, to understand the effect of post traumatic stress disorder and traumatic brain injuries.

http://www.theguardian.com/science/2015/aug/18/first-almost-fully-formed-human-brain-grown-in-lab-researchers-claim

 

 

Optimizing the Twelve Virtues of Rationality

24 Gleb_Tsipursky 09 June 2015 03:08AM

At the Less Wrong Meetup in Columbus, OH over the last couple of months, we discussed optimizing the Twelve Virtues of Rationality. In doing so, we were inspired by what Eliezer himself said in the essay:

  • Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.

So we first decided on the purpose of optimizing, and settled on yielding virtues that would be most impactful and effective for motivating people to become more rational - in other words, optimizations that would produce the most utilons and hedons for the purpose of winning. There were a bunch of different suggestions. I tried to apply them to myself over the last few weeks and want to share my findings.

 

First Suggestion

Replace Perfectionism with Improvement

 

Motivation for Replacement

Perfectionism, both in how it pattern-matches and in its actual description in the essay, orients toward focusing on defects and errors in oneself. By depicting the self as always flawed, and portraying the aspiring rationalist's job as seeking to find the flaws, the virtue of Perfectionism is framed negatively, and is bound to result in negative reinforcement. Finding a flaw feels bad, and in many people that creates ugh fields around actually doing that search, as reported by participants at the Meetup. Instead, a positive framing of this virtue would be Improvement. Then, the aspiring rationalist can feel okay about where s/he is right now, but orient toward improving and growing mentally stronger - Tsuyoku Naritai! All improvement would be about gaining more hedons, and would thus use the power of positive reinforcement. Generally, research suggests that positive reinforcement is effective in motivating the repetition of behavior, whereas negative reinforcement works best to stop people from doing a certain behavior. No wonder Meetup participants reported that Perfectionism was not very effective in motivating them to grow more rational. So to get both more hedons, and thereby more utilons in the sense of the utility of seeking to grow more rational, Improvement might be a better term and virtue than Perfectionism.

 

Self-Report

I've been orienting myself toward improvement instead of perfectionism for the last few weeks, and it's been a really noticeable difference. I've become much more motivated to seek ways that I can improve my ability to find the truth. I've been more excited and enthused about finding flaws and errors in myself, because they are now an opportunity to improve and grow stronger, not become less weak and imperfect. It's the same outcome as the virtue of Perfectionism, but deploying the power of positive reinforcement.

 

Second Suggestion

Replace Argument with Community

 

Motivation for Replacement

Argument is an important virtue, and a vital way of getting ourselves to see the truth is to rely on others to help us see the truth through debates, highlight mistaken beliefs, and help update on them, as the virtue describes. Yet orienting toward a rationalist Community has additional benefits besides the benefits of argument, which is only one part of a rationalist Community. Such a community would help provide an external perspective that research suggests would be especially beneficial for pointing out flaws and biases within one's ability to evaluate reality rationally, even without an argument. A community can help provide wise advice on making decisions, and it's especially beneficial to have a community of diverse and intelligent people of all sorts in order to get the benefits of a wide variety of private information that one can aggregate to help make the best decisions. Moreover, a community can provide systematic ways to improve, through giving each other systematic feedback, through compensating for each others' weaknesses in rationality, through learning difficult things together, and other ways of supporting each others' pursuit of ever-greater rationality. Likewise, a community can collaborate, with different people fulfilling different functions in supporting all others in growing mentally stronger - not everybody has to be the "hero," after all, and different people can specialize in various tasks related to supporting others growing mentally stronger, gaining comparative advantage as a result. Studies show that social relationships impact us powerfully in numerous ways, contribute to our mental and physical wellbeing, and that we become more like our social network over time (1, 2, 3). This highlights further the benefits of focusing on developing a rationalist-oriented community of diverse people around ourselves to help us grow mentally stronger and get to the correct answer, and gain hedons and utilons alike for the purpose of winning.

 

Self-Report

After I updated my beliefs toward Community from Argument, I've been working more intentionally to create a systematic way for other aspiring rationalists in my LW meetup, and even non-rationalists, to point out my flaws and biases to me. I've noticed that by taking advantage of outside perspectives, I've been able to make quite a bit more headway on uncovering my own false beliefs and biases. I asked friends, both fellow aspiring rationalists and other wise friends not currently in the rationalist movement, to help me by pointing out when my biases might be at play, and they were happy to do so. For example, I tend to have an optimism bias, and I have told people around me to watch for me exhibiting this bias. They pointed out a number of times when this occurred, and I was able to gradually improve my ability to notice and deal with this bias.

 

Third Suggestion

Expand Empiricism to include Experimentation

 

Motivation for Expansion

This would not be a replacement of a virtue, but an expansion of the definition of Empiricism. As currently stated, Empiricism focuses on observation and prediction, and implicitly on making beliefs pay rent in anticipated experience. This is a very important virtue, and fundamental to rationality. It can be improved, however, by adding experimentation to the description of empiricism. By experimentation I mean expanding beyond the simple observation described in the essay to actually running experiments and testing things out in order to update our maps, both of ourselves and of the world around us. This would help us take the initiative in gathering data about the world, not simply rely passively on observing it. My perspective on this topic was further strengthened by this recent discussion post, which caused me to further update my beliefs toward experimentation as a really valuable part of empiricism. Thus, including experimentation as part of empiricism would get us more utilons for getting at the correct answer and winning.

 

Self-Report

I had been running experiments on myself and the world around me long before this discussion took place. The discussion itself helped me connect the benefits of experimentation to the virtue of Empiricism, and also see the gap currently present in that virtue. I strengthened my commitment to experimentation, and have been running more concrete experiments, where I predict the results in advance in order to make my beliefs pay rent, and then run the experiment to test whether my beliefs actually matched the outcomes. I have been humbled several times and got some great opportunities to update my beliefs by combining prediction of anticipated experience with active experimentation.

 

Conclusion

The Twelve Virtues of Rationality can be optimized to be more effective and impactful for getting at the correct answer and thus winning. There are many ways of doing so, but we need to be careful to choose the optimizations that would work best for the most people, based on research on how our minds actually work. The suggestions I shared above are just some ways of doing so. What do you think of these suggestions? What are your ideas for optimizing the Twelve Virtues of Rationality?

 

Taking Effective Altruism Seriously

2 Salemicus 07 June 2015 06:59AM

Epistemic status: 90% confident.

Inspiration: Arjun Narayan, Tyler Cowen.

The noblest charity is to prevent a man from accepting charity, and the best alms are to show and enable a man to dispense with alms.

Moses Maimonides.

Background

Effective Altruism (EA) is "a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world." Along with the related organisation GiveWell, it often focuses on getting the most "bang for your buck" in charitable donations. Unfortunately, despite their stated aims, their actual charitable recommendations are generally wasteful, such as cash transfers to poor Africans. This leads to the obvious question - how can we do better?

Doing better

One of the positive aspects of EA theory is its attempt to widen the scope of altruism beyond the traditional - for instance, to take into account catastrophic risks and the far future. However, altruism often produces a far-mode bias in which intentions matter more than results. This can be a particular problem for EA - for example, it is very hard to get evidence about how we are affecting the far future. An effective method needs to rely on a tight feedback loop between action and results, so that continual updates are possible. At the extreme, far mode operates in a manner where no updating on results takes place at all. It is also important, however, that those results be of sufficient magnitude to justify the effort. EA has mostly fallen into this latter trap - achieving measurable results, but ones of little consequence.

The population of sub-Saharan Africa is around 950 million people, and growing. The region has been a prime target of aid for generations, but it remains the poorest in the world. Providing cash transfers mostly just raises consumption, rather than substantially raising productivity. A truly altruistic program would enable the people in these countries to generate their own wealth so that they no longer needed charity - unconditional transfers, by contrast, are an idea so lazy even Bob Geldof could stumble on it. The only novel thing about the GiveWell program is that the transfers are in cash.

Unfortunately, no-one knows how to turn poor African countries into productive Western ones, short of colonization. The problem is emphatically not a shortage of capital, but rather low productivity, and the absence of effective institutions in which that capital can be deployed. Sadly, these conditions and institutions cannot simply be transplanted into those countries.

A greater charity

However, there do exist countries with high productivity, and effective institutions in which that capital can be deployed. That capital then raises world productivity. As F.A. Harper wrote:

Savings invested in privately owned economic tools of production amount to... the greatest economic charity of all.

That is because those tools increase the productivity of labour, and so raise output. The pie has grown. Moreover, the person who invests their portion of the pie into new capital is particularly altruistic, both because they are not taking a share themselves, and because they are making a particularly large contribution to future pies.

In the same way that using steel to build tanks means (on the margin) fewer cars and vice-versa, using craftsmen to build a new home means (on the margin) fewer factories and vice-versa. Investment in capital is foregone consumption. Moreover, you do not need to personally build those economic tools; rather, you can part-finance a range of those tools by investing in the stock market, or other financial mechanisms.

Now, it's true that little of that capital will be deployed in sub-Saharan Africa at present, due to the institutional problems already mentioned. Investing in these countries will likely lead to your capital being stolen or becoming unproductive - the same trap that prevents locals from advancing equally prevents foreign investors from doing so. However, if sub-Saharan Africa ever does fix its culture and institutions, then the availability of that capital will then serve to rapidly raise productivity and then living standards, much as is taking place in China. Moreover, by making the rest of the world richer, this increases the level of aid other countries could provide to sub-Saharan Africa in future, should this ever be judged desirable. It also serves to improve the emigration prospects of individuals within these countries.

Feedback

Another great benefit of capital investment is the sharp feedback mechanism. The market economy in general, and financial markets in particular, serve to redistribute capital from ineffective to effective ventures, and from ineffective to effective investors. As a result, it is no longer necessary to make direct (and expensive) measurements of standards of living in sub-Saharan Africa; as long as your investment fund is gaining in value, you can rest safe in the knowledge that its growth is contributing, in a small way, to future prosperity.

Commitment mechanisms

However, if investment in capital is foregone consumption, then consumption is foregone investment. If I invest in the stock market today (altruistic), then in ten years' time spend my profits on a bigger house (selfish), then some of the good is undone. So the true altruist will not merely create capital, he will make sure that capital will never get spent down. One good way of doing that would be to donate to an institution likely to hold onto its capital in perpetuity, and likely to grow that capital over time. Perhaps the best example of such an institution would be a richly-endowed private university, such as Harvard, which has existed for almost 400 years and is said to have an endowment of $32 billion.

John Paulson recently gave Harvard $400 million. Unfortunately, this meant he came in for a torrent of criticism from people claiming he should have given the money to poor Africans, etc. I hope to see Effective Altruists defending him, as he has clearly followed through on their concepts in the finest way.
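The arithmetic behind "capital held in perpetuity" is worth making concrete. Here is a minimal sketch; the 5% real return and 50-year horizon are illustrative assumptions of mine, not figures from the essay (only the $400 million gift is):

```python
# Illustrative only: the rate and horizon are assumptions, not from the essay.
# Shows how capital that is held and reinvested, rather than spent down, compounds.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Compound a principal at a fixed annual rate for a number of years."""
    return principal * (1 + annual_rate) ** years

gift = 400e6   # John Paulson's $400 million gift
rate = 0.05    # assumed real annual return
fv = future_value(gift, rate, 50)  # roughly $4.6 billion after 50 years
print(f"After 50 years at {rate:.0%}: ${fv:,.0f}")
```

On these (assumed) numbers the gift grows more than elevenfold in fifty years, which is the quantitative sense in which an unspent endowment makes "a particularly large contribution to future pies."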

Further thoughts and alternatives

 

  • Some people say that we are currently going through a "savings glut" in which capital is less productive than previously thought. In this case, it may be that Effective Altruists should focus on funding (and becoming!) successful entrepreneurs in different spaces.
  • I am sympathetic to the Thielian critique that innovation is being steadily stifled by hostile forces. I view the past 50 years, and the foreseeable future, as a race between technology and regulation, which technology is by no means certain to win. It may be that Effective Altruists should focus on political activity, to defend and expand economic liberty where it exists - this is currently the focus of my altruism.
  • However, government is not the enemy; rather, the enemy is the cultural beliefs and conditions that create a demand for the destruction of economic liberty. To the extent this critique holds, it may be that Effective Altruists should focus on promoting a pro-innovation and pro-liberty mindset; for example, through movies and novels.

Conclusion


Effective altruists should be applauded for trying to bring evidence and reason to a subject that is plagued by far-mode thinking. But taking their ideas seriously quickly leads to a much more radical approach.

 

What do rationalists think about the afterlife?

-16 adamzerner 13 May 2014 09:46PM

I've read a fair amount on Less Wrong and can't recall much said about the plausibility of some sort of afterlife. What do you guys think about it? Is there some sort of consensus?

Here's my take:

  • Rationality is all about using the past to make predictions about the future.
  • "What happens to our consciousness when we die?" (may not be worded precisely, but hopefully you know what I mean).
  • We have some data on what preconditions seem to produce consciousness (i.e. neuronal firing). However, this is just data on the preconditions that seem to produce consciousness in beings that can and do communicate or demonstrate their consciousness to us.
  • Can we say that a different set of preconditions doesn't produce consciousness? I personally don't see reason to believe this. I see 3 possibilities that we don't have reason to reject, because we have no data on them. I'm still confused and not too confident in this belief though.
  • Possibility 1) Maybe the 'other' conscious beings don't want to communicate their consciousness to us.
  • Possibility 2) Maybe the 'other' conscious beings can't communicate their consciousness to us ever.
  • Possibility 3) Maybe the 'other' conscious beings can't communicate their consciousness to us given our level of technology.
  • And finally, since we have no data, what can we say about the likelihood of our consciousness returning or remaining after we die? I would say the chances are 50/50: for something you have no data on, any outcome is equally likely. (This feels like something that must have been discussed before, so side-question: is this logic sound?)
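One thing worth noting about the "no data implies 50/50" step: the principle of indifference depends on how the outcome space is partitioned. A minimal sketch (the hypothesis labels below are illustrative, not from the post):

```python
# Sketch: the principle of indifference gives different probabilities to the
# same underlying question depending on which partition of outcomes you pick.

from fractions import Fraction

def indifference_prior(outcomes):
    """Assign equal probability to each outcome in a chosen partition."""
    p = Fraction(1, len(outcomes))
    return {o: p for o in outcomes}

# Partition A: two outcomes -> the post's 50/50.
prior_a = indifference_prior(["consciousness persists", "does not persist"])

# Partition B: split "persists" into finer hypotheses -> "does not persist"
# now gets 1/3 instead of 1/2, with no new data involved.
prior_b = indifference_prior([
    "persists and can communicate",
    "persists but cannot communicate",
    "does not persist",
])

print(prior_a["consciousness persists"])  # 1/2
print(prior_b["does not persist"])        # 1/3
```

So the 50/50 figure follows from choosing a two-outcome partition, not from the absence of data alone, which bears directly on the side-question of whether the logic is sound.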

Edit: People in the comments have just taken it as a given that consciousness resides solely in the brain without explaining why they think this. My point in this post is that I don't see why we have reason to reject the 3 possibilities above. If you reject the idea that consciousness could reside outside of the brain, please explain why.

The Ten Commandments of Rationality

-5 Sophronius 30 March 2014 04:36PM

(Disclaimer/TL;DR: This article, much like Camelot, is a silly place/post. Nonetheless I think it presents a pretty solid list of 10 rationality lessons to take away from Less Wrong which must not be forgotten upon pain of eternal damnation/irrationality.)


In a realm not far from here, somewhere within a bustling metropolis, there lies an old and dusty book. It is placed in a most conspicuous location: in the middle of a busy street where countless citizens walk by it every day. Yet none pick it up, for it is placed on a pedestal just high enough that it cannot be reached or seen easily, and the slight inconvenience of standing on one's toes to reach for it is sufficient to deter most. Yet if a traveller were sufficiently aware to look up and see the book, and curious enough to reach for it, and willing to suffer the slight discomfort of having to touch its muddy cover to open and read its ancient pages, he would find within it a wealth of wisdom and rationality that would transform his life forever. For this is the most holy Book of Bayes, and its first and last pages both read thusly:

 

The Ten Commandments of Rationality

 

1)  Thou shalt never conflate the truth or falsehood of a proposition with any other characteristic, be it the consequences of the proposition if it be true, or the consequences of believing it for thyself personally, or the pleasing or unpleasant aesthetics of the belief itself. Furthermore, thou shalt never let thy feelings regarding the matter overrule what thy critical faculties tells thee, or in any other way act as if reality might adjust itself in accordance with thine own wishes.

2)     Thou shalt not accept any imperfect situation if it may be optimized, nor shalt thou abstain from improving upon a situation by imagining ever better options without acting on any of them, nor must thee allow thyself to be paralyzed with fear or apathy or indecision when any action is still superior to doing nothing at all. Thus let it be said: Thou shalt not allow thyself to be beaten by a random number generator.

3)     Thou shalt not declare any matter to be unscientific, or inherently irrational, or a false question, or with any other excuse wilfully close thine own eyes and expel all curiosity regarding the matter before thou hast even asked thyself whether the question is worth answering. To transgress thusly is to forfeit any chance to update thy own beliefs on a matter that is truly unusual to thee.  

4)    Thou shalt not hold goals or beliefs which conflict with each other, in such a manner as to violate most divine transitivity, and thereby set thyself up for most ignominious defeat, and rest easy in knowing this fact. Rather shalt thou engage in mindfulness and self-reflection, and in doing so find thy own true priorities, and solve any inconsistencies in a utility maximising manner so that thou may not fall prey to the wrath of the most holy Dutch Book, which is merciless but just.

5)     Thou shalt never engage in defeatism, nor wallow in ennui or existential angst, or in any other way declare that thy efforts are pointless and that exerting thyself is entirely without merit. For just as it is true that matters may never get to the point where they cannot possibly get any worse, so is it true that no situation is impossible to improve upon. 

6)    Thou shalt never judge a real or proposed action by any metric other than this: The expected consequences of the action, both direct and indirect, be they subtle or blatant, taking into account all relevant information available at the time of deciding and no more or less than this.

7)  Thou shalt never sit back on thy lazy laurels and wait for rationality to come to thee, nor shalt thou declare that thy beliefs must be correct as all others have failed to convince thee of the contrary: The cultivation of thy rationality and the falsification of thy beliefs is thine own most sacred task, which is eternal and never finished, and to leave it to others is to invite doom upon the validity of thine own beliefs and actions, for in this case others will never serve thee as well as thou might serve thyself.

8)  Thou shalt never let argumentation stand in the way of knowledge, nor let knowledge stand in the way of wisdom, nor let wisdom stand in the way of victory, no matter how wise or clever it makes thee feel. Also shalt thou never mistake exceptions for rules or rules for exceptions when arguing any issue, nor bring up minutiae as if they were crucial issues, nor allow thyself to be swept away in arguing for the sake of argumentation, nor act to score cheap and yea also easy points, nor present thy learnings in a needlessly ambiguous manner such as this if it can be helped, or in any other way allow thyself to lose sight of thy most sacred goal, which is victory.

9)  Thou shalt never assign a probability exactly equal to 0 or 1 to any proposition, nor declare to the skies that thy certainty regarding any matter is absolute, nor any derivation of such, for to do so is to declare thyself infallible and to place thyself above thy most holy lord, Bayes.

10)  Thou shalt never curse thy rationality, nor wish for thy immediate satisfaction over thy eventual victory, all for the sake of base emotion, which is transient whereas victory is transcendent. Let it be known that it is an unspoken truth amongst rationalists, indeed the first and most elementary rule of rationality and yet oft forgotten by those practiced in the art, that base impulse and most holy reason are as a general rule incompatible, as there cannot be two skies.

 

Such are the Ten Commandments of Rationality. And Lo! If one abides by these rules, then let it be said that they act virtuously, and the heavens shall reward them with the splendour of higher expected utility relative to the counterfactual wherein they did not act virtuously. But to those who do not act virtuously, but rather act with irrationality in their minds and biases in their thinking, and who in doing so break any of the Commandments of Rationality, to them let it be said that they have transgressed against their lord Bayes, and they shall be smitten by the twin gods of Cause and yea also Effect as surely as if they had smitten themselves. For let it be said: The gods of causality may be blind, but their aim is excellent regardless.

 

(All silliness aside, what do you all think? Is this a good list of 10 things to take away from Less Wrong? Do you have a better list? Are posts like these a waste of time? Or, Bayes forbid, did I get my thees and thous wrong somewhere? Let me know in the comments.)

Less Wrong’s political bias

-6 Sophronius 25 October 2013 04:38PM

(Disclaimer: This post refers to a certain political party as being somewhat crazy, which got some people upset, so sorry about that. That is not what this post is *about*, however. The article is instead about Less Wrong's social norms against pointing certain things out. I have edited it a bit to try and make it less provocative.)

 

A well-known post around these parts is Yudkowsky’s “Politics is the Mind-Killer”. This article proffers an important point: People tend to go funny in the head when discussing politics, as politics is largely about signalling tribal affiliation. The conclusion drawn from this by the Less Wrong crowd seems simple: Don’t discuss political issues, or at least keep it as fair and balanced as possible when you do. However, I feel that there is a very real downside to treating political issues in this way, which I shall try to explain here. Since this post is (indirectly) about politics, I will try to bring this as gently as possible so as to avoid mind-kill. As a result this post is a bit lengthier than I would like it to be, so I apologize for that in advance.

I find that a good way to examine the value of a policy is to ask in which of all possible worlds this policy would work, and in which worlds it would not. So let’s start by imagining a perfectly convenient world: In a universe whose politics are entirely reasonable and fair, people start political parties to represent certain interests and preferences. For example, you might have the kitten party for people who like kittens, and the puppy party for people who favour puppies. In this world Less Wrong’s unofficial policy is entirely reasonable: There is no sense in discussing politics, since politics is only about personal preferences, and any discussion of this can only lead to a “Yay kittens, boo puppies!” emotivism contest. At best you can do a poll now and again to see what people currently favour.

Now let’s imagine a less reasonable world, where things don’t have to happen for good reasons and the universe doesn’t give a crap about what’s fair. In this unreasonable world, you can get a “Thrives through Bribes” party or an “Appeal to emotions” party or a “Do stupid things for stupid reasons” party as well as more reasonable parties that actually try to be about something. In this world it makes no sense to pretend that all parties are equal, because there is really no reason to believe that they are.

As you might have guessed, I believe that we live in the second world. As a result, I do not believe that all parties are equally valid/crazy/corrupt, and as such I like to be able to identify which are the most crazy/corrupt/stupid. Now I happen to be fairly happy with the political system where I live. We have a good number of more-or-less reasonable parties here, and only one major crazy party that gives me the creeps. The advantage of this is that whenever I am in a room with intelligent people, I can safely say something like “That crazy racist party sure is crazy and racist”, and everyone will go “Yup, they sure are, now do you want to talk about something of substance?” This seems to me the only reasonable reply.

The problem is that Less Wrong seems primarily US-based, and in the US… things do not go like this. In the US, it seems to me that there are only two significant parties, one of which is flawed and which I do not agree with on many points, while the other is, well… can I just say that some of the things they profess do not so much sound wrong as they sound crazy? And yet, it seems to me that everyone here is being very careful to not point this out, because doing so would necessarily be favouring one party over the other, and why, that’s politics! That’s not what we do here on Less Wrong!

And from what I can tell, based on the discussion I have seen so far and participated in on Less Wrong, this introduces a major bias. Pick any major issue of contention, and chances are that the two major parties will tend to have opposing views on the subject. And naturally, the saner party of the two tends to hold a more reasonable view, because they are less crazy. But you can’t defend the more reasonable point of view now, because then you’re defending the less-crazy party, and that’s politics. Instead, you can get free karma just by saying something trite like “well, both sides have important points on the matter” or “both parties have their own flaws” or “politics in general are messed up”, because that just sounds so reasonable and fair, and who doesn’t like things to be reasonable and fair? But I don’t think we live in a reasonable and fair world.

It’s hard to prove the existence of such a bias, so this is mostly just an impression I have. But I can give a couple of points in support of this impression. Firstly, there are the frequent accusations of groupthink levelled at Less Wrong, which I am increasingly, though reluctantly, prone to agree with. I can’t help but notice that posts which remark on, for example, *retracted* being a thing tend to get quite a few downvotes, while posts that take care to express the nuance of the issue get massive upvotes regardless of whether there really are two sides to the issue. Then there are the community poll results, which show that, for example, 30% of Less Wrongers favour a particular political allegiance even though only 1% of voters vote for the most closely corresponding party. I sincerely doubt that this skewed representation is the result of honest and reasonable discussion on Less Wrong that has convinced members to follow what is otherwise a minority view, since I have never seen any such discussion. So without necessarily criticizing the position itself, I have to wonder what causes this skewed representation. I fear that this “let’s not criticize political views” stance is causing Less Wrong to shift towards holding more and more eccentric views, since a lack of criticism can be taken as tacit approval. What especially worries me is that giving the impression that all sides are equal automatically lends credibility to the craziest viewpoint, as its proponents can now say that sceptics take their views seriously, which benefits them the most. This seems to me literally the worst possible outcome of any politics debate.

I find that the same rule holds for politics as for life in general: You can try to win or you can give up and lose by default, but you can’t choose not to play.

What was your biggest recent surprise?

11 DataPacRat 09 June 2012 11:57PM

I recently flipped through the "Cartoon Guide to Physics", expecting an easy-to-understand rehash of ideas I was long familiar with; and that's what I got - right up to the last few pages, where I was presented with a fairly fundamental concept that's been absent from the popular science media I've enjoyed over the years. (Specifically, that the uncertainty principle, when expressed as linking energy and time, explains what electromagnetic fields actually /are/, as the propensity for virtual photons of various strengths to happen.) I find myself happy to try to integrate this new understanding - and at least mildly disturbed that I'd been missing it for so long, and with an increased curiosity about how I might find any other such gaps in my understanding of how the universe works.

 

So: what's the biggest, or most surprising, or most interesting concept /you/ have learned of, after you'd already gotten a handle on the basics?

Terminal Bias

18 [deleted] 30 January 2012 09:03PM

I've seen people on Less Wrong taking cognitive structures that I consider to be biases as terminal values. Take risk aversion for example:

Risk Aversion

For a rational agent with goals that don't include "being averse to risk", risk aversion is a bias. The correct decision theory acts on expected utility, with utility of outcomes and probability of outcomes factored apart and calculated separately. Risk aversion does not keep them factored apart.

EDIT: There is some contention on this. Just substitute "that thing minimax algorithms do" for "risk aversion" in my writing. /EDIT

A while ago, I was working through the derivation of A* and minimax planning algorithms from a Bayesian and decision-theoretic base. When I was trying to understand the relationship between them, I realized that strong risk aversion, aka minimax, saves huge amounts of computation compared to the correct decision theory, and actually gets closer to optimal as the environment becomes more influenced by rational opponents. The best way to win is to deny the opponents any opportunity to weaken you. That's why minimax is a good algorithm for chess.
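The contrast between the two decision rules can be made concrete. Below is a minimal sketch (not from any derivation in this post; the actions, probabilities, and utilities are invented purely for illustration) of expected-utility choice versus minimax-style, worst-case choice:

```python
# Two hypothetical actions, each a list of (probability, utility) outcomes.
# All numbers are made up for illustration.
actions = {
    "risky": [(0.5, 100), (0.5, -50)],  # high upside, real downside
    "safe":  [(1.0, 10)],               # modest guaranteed payoff
}

def expected_utility(outcomes):
    """The correct decision theory: weight each utility by its probability."""
    return sum(p * u for p, u in outcomes)

def worst_case(outcomes):
    """Minimax-style risk aversion: ignore probabilities, fear the worst."""
    return min(u for _, u in outcomes)

eu_choice = max(actions, key=lambda a: expected_utility(actions[a]))
mm_choice = max(actions, key=lambda a: worst_case(actions[a]))

print(eu_choice)  # "risky": expected utility 25 beats 10
print(mm_choice)  # "safe": worst case 10 beats -50
```

Note that `worst_case` never reads the probabilities at all, which is one way to see why minimax is computationally cheap: it needs no probabilistic model of the opponent, only the set of possible outcomes.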

Current theories about the origin of our intelligence say that we became smart to outsmart our opponents in complex social games. If our intelligence was built for adversarial games, I am not surprised at risk aversion.

A better theoretical replacement, and a plausible causal history for why we have the bias instead of the correct algorithm are convincing to me as an argument against risk aversion as a value the way a rectangular 13x7 pebble heap is convincing to a pebble sorter as an argument against the correctness of a heap of 91 pebbles; it seems undeniable, but I don't have access to the hidden values that would say for sure.

And yet I've seen people on LW state that their "utility function" includes risk aversion. Because I don't understand the values involved, all I can do is state the argument above and see if other people are as convinced as me.

It may seem silly to take a bias as terminal, but there are examples with similar arguments that are less clear-cut, and some that we take as uncontroversially terminal:

Responsibility and Identity

The feeling that you are responsible for some things and not others, like say, the safety of your family, but not people being tortured in Syria, seems noble and practical. But I take it to be a bias.

I'm no evolutionary psychologist, but it seems to me that feelings of responsibility are a quick hack to kick you into motion where you can affect the outcome and the utility at stake is large. For the most part, this aligns well with utilitarianism; you usually don't feel responsible for things you can't really affect, like people being tortured in Syria, or the color of the sky. You do feel responsible to pull a passed out kid off the train tracks, but maybe you don't feel responsible to give them some fashion advice.

Responsibility seems to be built on identity, so it starts to go weird when you identify or don't identify in ways that didn't happen in the ancestral environment. Maybe you identify as a citizen of the USA, but not of Syria, so you feel shame and responsibility about the US torturing people, but the people being tortured in Syria are not your responsibility, even though both cases are terrible, and there is very little you can do about either. A proper utilitarian would feel approximately the same desire to do something about each, but our responsibility hack emphasizes responsibility for the actions of the tribe you identify with.

You might feel great responsibility to defend your past actions but not those of other people, even though neither is worth "defending". A rational agent would learn from both the actions of their own past selves and those of other people without seeking to justify or condemn; they would update and move on. There is no tribal council that will exile you if you change your tune or don't defend yourself.

You might be appalled that someone wishes to stop feeling responsibility for their past selves; "but if they don't feel responsibility for their actions, what will prevent them from murdering people, or encourage them to do good?". A rational utilitarian would do good and not do evil because they wish good and non-evil to be done, instead of because of feelings of responsibility that they don't understand.

This argument is a little harder to see and possibly a little less convincing, but again I am convinced that identity and responsibility are inferior to utilitarianism, though they may have seemed almost terminal.

Justice

Surely justice is a terminal value; it feels so noble to desire it. Again I consider the desire for justice to be a biased heuristic.

In game theory, a famously successful strategy for the iterated prisoner's dilemma is tit-for-tat: cooperate and be nice, but punish defectors. Tit-for-tat looks a lot like our instincts for justice, and I've heard that the prisoner's dilemma is a simplified analog of many of the situations that came up in the ancestral environment, so I am not surprised that we have an instinct for it.
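To see the "be nice, but punish defectors" shape of tit-for-tat, it helps to run it. A small sketch (not from the post; the payoffs are the standard textbook values for the prisoner's dilemma, and the strategy names are mine):

```python
# Iterated prisoner's dilemma with standard payoffs:
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
PAYOFF = {  # (my move, their move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_last):
    # Cooperate on the first round, then mirror the opponent's last move.
    return "C" if opponent_last is None else opponent_last

def always_defect(opponent_last):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Play two strategies against each other and return their total scores."""
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): niceness rewarded
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then it punishes
```

Against a fellow cooperator, tit-for-tat reaps mutual cooperation every round; against a defector it loses only the first round, then retaliates forever, which is exactly the justice-like behavior described above.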

It's nice that we have a hardware implementation of tit-for-tat, but to the extent that we take it as terminal instead of instrumental-in-some-cases, it will make mistakes. It will work well when individuals might choose to defect from the group for greater personal gain, but what if we discover, for example, that some murders are not calculated defections, but failures of self control caused by a bad upbringing and lack of education? What if we then further discover that there is a two-month training course that has a high success rate of turning murderers into productive members of society? When Dan the Deadbeat kills his girlfriend, and the psychologists tell us he is a candidate for the rehab program, we can demand justice like we feel we ought to, at a cost of hundreds of thousands of dollars and a good chunk of Dan's life, or we can run Dan through the two-month training course for a few thousand dollars, transforming him into a good, normal person. People who take punishment of criminals as a terminal value will choose prison for Dan, but people with other interests would say rehab.

One problem with this story is that the two-month murder rehab seems wildly impossible, but so do all of Omega's tricks. I think it's good to stress our theories at the limits; they seem to come out stronger, even for normal cases.

I was feeling skeptical about some people's approach to justice theory when I came up with this one, so I was open to changing my understanding of justice. I am now convinced that justice and punishment instincts are instrumental, and only approximations of the correct game theory and utilitarianism. The problem is, while I was convinced, someone who takes justice as terminal, and is not open to the idea that it might be wrong, is absolutely not convinced. They will say "I don't care if it is more expensive, or that you have come up with something that 'works better'; it is our responsibility to the criminal to punish them for their misdeeds." Part of the reason for this post is that I don't know what to say to this. All I can do is state the argument that convinced me, ask if they have something to protect, and feel like I'm arguing with a rock.

Before anyone who is still with me gets enthusiastic about the idea that knowing a causal history and an instrumentally better way is enough to turn a value into a bias, consider the following:

Love, Friendship, and Flowers

See The Gift We Give To Tomorrow. That post contains plausible histories for why we ended up with nice things like love, friendship, and beauty; and hints that could lead you to 'better' replacements made out of game theory and decision theory.

Unlike the other examples, where I felt a great "Aha!" and decided to use the superior replacements when appropriate, this time I feel scared. I thought I had it all locked out, but I've found some existential angst lurking in the basement.

Love and such seem like something to protect, like I don't care if there are better solutions to the problem they were built to solve; I don't care if game theory and decision theory leads to more optimal replication. If I'm worried that love will go away, then there's no reason I ought to let it, but these are the same arguments as the people who think justice is terminal. What is the difference that makes it right this time?

Worrying and Conclusion

One answer to this riddle is that everyone is right with respect to themselves, and there's nothing we can do about disagreements. There's nothing someone who has one interpretation can say to another to justify their values against some objective standard. By the full power of my current understanding, I'm right, but so is someone who disagrees.

On the other hand, maybe we can do some big million-variable optimization on the contradictory values and heuristics that make up ourselves and come to a reflectively coherent understanding of which are values and which are biases. Maybe none of them have to be biases; it makes sense and seems acceptable that sometimes we will have to go against one of our values for greater gain in another. Maybe I'm asking the wrong question.

I'm confused, what does LW think?

Solution

I was confused about this for a while; is it just something that we have to (Gasp!) agree to disagree about? Do we have to do a big analysis to decide once and for all which are "biases" and which are "values"? My favored solution is to dissolve the distinction between biases and values:

All our neat little mechanisms and heuristics make up our values, but they come on a continuum of importance, and some of them sabotage the rest more than others.

For example, all those nice things like love and beauty seem very important, and usually don't conflict, so they are closer to values.

Things like risk aversion and hindsight bias and such aren't terribly important, but because they prescribe otherwise stupid behavior in the decision theory/epistemological realm, they sabotage the achievement of other bias/values, and are therefore a net negative.

This can work for the high-value things like love and beauty and freedom as well: Say you are designing a machine that will achieve many of your values, being biased towards making it beautiful over functional could sabotage achievement of other values. Being biased against having powerful agents interfering with freedom can prevent you from accepting law or safety.

So debiasing is knowing how and when to override less important "values" for the sake of more important ones, like overriding your aversion to cold calculation to maximize lives saved in a shut up and multiply situation.

Simpson's Paradox

68 bentarm 12 January 2011 11:01PM

This is my first attempt at an elementary statistics post, which I hope is suitable for Less Wrong. I am going to present a discussion of a statistical phenomenon known as Simpson's Paradox. This isn't a paradox, and it wasn't actually discovered by Simpson, but that's the name everybody uses for it, so it's the name I'm going to stick with. Along the way, we'll get some very basic practice at calculating conditional probabilities.

A worked example

The example I've chosen is an exercise from a university statistics course that I have taught on for the past few years. It is by far the most interesting exercise in the entire course, and it goes as follows:

You are a doctor in charge of a large hospital, and you have to decide which treatment should be used for a particular disease. You have the following data from last month: there were 390 patients with the disease. Treatment A was given to 160 patients of whom 100 were men and 60 were women; 20 of the men and 40 of the women recovered. Treatment B was given to 230 patients of whom 210 were men and 20 were women; 50 of the men and 15 of the women recovered. Which treatment would you recommend we use for people with the disease in future?

The simplest way to represent this sort of data is to draw a table; we can then pick the relevant numbers out of the table to calculate the required conditional probabilities.

Overall

          A     B
lived    60    65
died    100   165

The probability that a randomly chosen person survived if they were given treatment A is 60/160 = 0.375

The probability that a randomly chosen person survived if they were given treatment B is 65/230 ≈ 0.283

So a randomly chosen person given treatment A was more likely to survive than a randomly chosen person given treatment B. Looks like we'd better give people treatment A.

However, since we were given a breakdown of the data by gender, let's look and see if treatment A is better for both genders, or if it gets all of its advantage from one or the other.
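The per-gender check can be done mechanically. A short script (a sketch, not part of the original exercise, using only the counts from the problem statement) that computes the recovery rate overall and within each gender:

```python
# Recovery data from the worked example above: counts are (recovered, total).
data = {
    "A": {"men": (20, 100), "women": (40, 60)},
    "B": {"men": (50, 210), "women": (15, 20)},
}

def rate(recovered, total):
    """Conditional probability of recovery given this treatment (and group)."""
    return recovered / total

for treatment, groups in data.items():
    recovered = sum(r for r, _ in groups.values())
    total = sum(t for _, t in groups.values())
    print(treatment, "overall:", round(rate(recovered, total), 3),
          {g: round(rate(*counts), 3) for g, counts in groups.items()})

# A overall: 0.375  {'men': 0.2,   'women': 0.667}
# B overall: 0.283  {'men': 0.238, 'women': 0.75}
# A wins overall, yet B wins within *both* genders: Simpson's paradox.
```

The reversal happens because the groups are unevenly mixed: treatment B was given mostly to men, who recover at a much lower rate than women under either treatment, dragging B's overall figure down.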

